Unified transform lab

Applied mathematics research lab welcoming students from undergraduate to PhD

Projects

IBVP with complicated boundary conditions

Initial boundary value problems (IBVP) are typically specified as a partial differential equation (PDE) describing the evolution of a physical system, an initial condition (IC) describing the initial state of the system, and some collection of boundary conditions (BC) constraining the behaviour of the system at the spatial boundaries. Such IBVP are extremely successful at describing the real world, representing heat flow through a solid, water waves in a canal, deformation of a flexible beam, and countless other examples. But there are situations where classical two point or one point boundary conditions are not sufficient to model the physics.
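As a minimal illustrative example (not tied to any particular project below), the classical two point IBVP for the heat equation on an interval reads

    u_t = u_xx,             0 < x < 1,  t > 0,    (PDE)
    u(x, 0) = u_0(x),       0 < x < 1,            (IC)
    u(0, t) = u(1, t) = 0,  t > 0,                (BC)

and the projects in this group replace the boundary conditions in the last line with more complicated constraints, such as interface or multipoint conditions.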

These projects describe physical and mathematical extensions of IBVP and extensions of solution methods using the unified transform method.

Interface problems for wave equations

Open project.

Supervisor: Dave Smith.

Dispersive wave equations on complex domains can be used to model information flow through social networks, pressure waves in vascular systems, and the flow of light packets through branching fibre topologies. Typically, due to their complexity, these problems are studied numerically. Classical analytic Fourier methods are not applicable to such problems in general, particularly for dispersive wave equations of high spatial order. The unified transform method is a modern Fourier method applicable to equations of arbitrary spatial order, but existing work on the unified transform method on network domains is restricted to the heat equation.

The purpose of this project is to extend the unified transform method to analysis of dispersive wave equations, such as the Schrödinger and linearized Korteweg-de Vries equations, on network domains. Particular applications may also be explored.

Required modules: Linear algebra, Proof, Real analysis.

Useful modules: Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

The Schrödinger equation with piecewise constant potential

Open project.

Supervisor: Dave Smith.

The time dependent linear Schrödinger equation is a model for the wave function of a single nonrelativistic particle, in which the potential is a function that describes the particle's environment. Partial differential equations such as this one can be solved numerically to any desired accuracy, but analytic (exact) solutions are relatively difficult to obtain. Spectral methods provide the best hope, because they exploit the tools of linear algebra to diagonalize the problem, reducing it from a partial differential equation to an ordinary differential equation. Unfortunately, the eigenfunctions are usually difficult to calculate, obstructing the diagonalization.

With a zero or constant potential, the diagonalization is possible, but a more complicated potential complicates the eigenfunction calculation. The next logical step is to try a potential which is piecewise constant (a step function). Such potentials also have the advantage that they can be used to approximate any other potential one might desire.
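For orientation (in illustrative units in which the physical constants are scaled away), the equation and the eigenvalue problem obtained by separating variables are

    i u_t = -u_xx + V(x) u,         u(x, 0) = u_0(x),
    -phi'' + V(x) phi = lambda phi,

and the simplest piecewise constant potential has a single step at some point x_0, say V(x) = V_1 for x < x_0 and V(x) = V_2 for x > x_0; on each side the eigenvalue problem has constant coefficients, and the two pieces are glued together by matching conditions at x_0.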

A couple of modern techniques have been developed, each of which produces analytic solutions of the linear Schrödinger equation with piecewise constant potential. The purpose of this project is to compare and contrast these results, and to implement the methods using the mathematical software package julia.

Required modules: Linear algebra, Multivariable calculus or Foundations of applied mathematics.

Useful modules: Proof, Ordinary & partial differential equations, Real analysis, Complex analysis.

Interface linearizations to model nonlinear effects

Nonlinear partial differential equations are typically extremely difficult to solve analytically. They usually require complicated ad hoc arguments, such as inverse scattering, or are impossible to solve using known methods. Therefore, nonlinear partial differential equations are often "linearized", meaning that their nonlinear terms are replaced with linear approximations, typically about a known solution such as the zero solution. If a nonconstant solution, such as a famous soliton or dispersive shock wave solution, is selected to linearize around, then the linearized system is necessarily a variable coefficient partial differential equation. Variable coefficient linear partial differential equations are usually easier to solve than nonlinear equations, but may still be very difficult. In contrast, the unified transform method is an efficient tool for constant coefficient linear partial differential equations.
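As a minimal worked example, in one common normalization of the Korteweg-de Vries equation, writing u = U + v for a known solution U and discarding the quadratic term in v gives

    u_t + 6 u u_x + u_xxx = 0              (nonlinear equation)
    v_t + 6 U v_x + 6 U_x v + v_xxx = 0    (linearization about U)

so linearizing about the zero solution U = 0 yields the constant coefficient equation v_t + v_xxx = 0, whereas a nonconstant U leaves the variable coefficients 6U and 6U_x in the equation.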

In a recent work of Smith, Trogdon & Vasan, the Korteweg-de Vries equation was linearized about a piecewise constant approximation of its dispersive shock wave solution. This reduced the nonlinear partial differential equation to an interface problem, with an interface translating at constant velocity, for a linear third order dispersive equation. They refer to the reduction as an "interface linearization". The unified transform method, as extended by Sheils to interface problems, was then applied to solve the approximate problem, and similarities to the full dispersive shock wave problem were noted.

The aim of these projects is to implement an interface linearization of classical nonlinear equations, then solve the resulting system using the unified transform method. The purpose is to cheaply model the original more complicated systems while preserving the qualitative features of the nonlinear effects.

Interface linearization of KdV solitons

Open project.

Supervisor: Dave Smith.

We model solutions of the Korteweg-de Vries equation with a constant coefficient linear equation by linearizing about a translating step function approximation of the soliton. The interface linearization and stage 1 of the unified transform method were already carried out in the project "UTM stage 1 for interface linearization of KdV solitons"; however, that work only covers the crudest soliton approximation: the box.
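For orientation, in one common normalization the Korteweg-de Vries equation u_t + 6 u u_x + u_xxx = 0 has the soliton solution

    u(x, t) = (c/2) sech^2( (sqrt(c)/2) (x - c t) ),    c > 0,

and the box approximation replaces this profile by a constant value on a finite interval travelling at speed c, and zero elsewhere; this is the approximation treated in the existing stage 1 work, and a finer approximation would use several constant levels, producing more interfaces in the linearized problem.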

The main aim of this project is to complete the solution of the reduced interface system. This project can be extended by using a finer soliton approximation for the interface linearization and solving the resulting more complicated linear problem.

Required modules: Linear algebra, Proof, Real analysis.

Useful modules: Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Spectral theory of ordinary differential operators

The spectral theory of ordinary differential operators is crucial to the analysis of partial differential equations representing evolution with spatial constraints. The classical theory of two point differential operators with constant coefficients has been well understood for more than a century, although advances such as [FS2016a] have been made more recently. In particular, the extent of the failure of self-adjointness caused by particular classes of boundary conditions, and the effects of such failure on the spectral theory, are not yet fully understood.

These projects extend classical results, such as the construction of adjoint operators and spectral analysis, from two point differential operators to differential operators subject to more complicated classes of boundary conditions.

Boundary wellposedness of initial multipoint value problems

Open project.

Supervisor: Dave Smith.

A linear evolution partial differential equation, together with initial and boundary conditions, is called an initial boundary value problem. Such problems are often used to model physical systems, such as water waves or the distribution of heat in a solid. Hadamard gave specific criteria for such a problem to be "wellposed": existence and uniqueness of a solution, and that solution changing only slightly with slight changes in the data of the problem. The data of the problem are typically the initial state of the system and information about how the system evolves at the spatial boundaries. Therefore, Hadamard's third criterion is important, because it ensures that a small measurement error will not have a ruinous effect on the output of the model.

If the differential equation is high order in space, then it is necessary to specify more than one condition at the spatial boundary. Typically, the value of the quantity under study and a spatial derivative are both prescribed at the boundary. But what if there were a small error in the measurement of where one of these was specified?

The unified transform method, as recently extended by Pelloni & Smith, provides a powerful tool for the analysis of high order initial boundary value problems, and of their generalizations relevant to the above question, initial multipoint value problems. The method exploits complex analysis to provide novel solutions to problems for which classical Fourier analysis is not applicable. Preliminary work on this project is the investigation of Hadamard's third criterion using solution representations obtained via the unified transform method.

The main aim of this project is to investigate how solutions of high order initial boundary value problems and initial multipoint value problems depend upon the points at which their conditions are specified.

Required modules: Linear algebra, Proof, Real analysis.

Useful modules: Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Adjoints of ordinary difference operators

Open project.

Supervisor: Dave Smith.

In spectral methods for partial differential equations, one typically finds a way to simplify a problem by separating it into a temporal part and a spatial part. The spatial part is often a Sturm-Liouville eigenvalue problem for a "two point" spatial ordinary differential operator. The words "two point" are used because there are boundary conditions at two points: the left and right edges of the spatial domain of the original partial differential equation. Methods for the solution of such problems are well known and, with more complicated boundary conditions, rely upon explicit construction of the adjoint of the spatial ordinary differential operator.
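A minimal example of this construction in the simplest case: for L = -d^2/dx^2 on [0, 1], two integrations by parts give the Lagrange identity

    int_0^1 [ (Lu) v - u (Lv) ] dx = [ u v' - u' v ]_0^1,

and the adjoint boundary conditions are exactly those conditions on v that make the right hand side vanish for every u satisfying the original boundary conditions. For Dirichlet conditions u(0) = u(1) = 0 this forces v(0) = v(1) = 0, so that problem is self-adjoint; more complicated boundary conditions can produce a genuinely different adjoint.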

A parallel theory to that of differential equations is its discrete analogue: difference equations. Difference equations find particular application in finance, the modelling of infectious diseases, and other fields in which variables are naturally discrete rather than continuous. Moreover, when a computer is used to solve a differential equation numerically, the first step is usually some kind of discretization that implicitly replaces the differential equation with a difference equation expected to approximate it well.

It is known how to construct the adjoint of any two point differential operator, although the formulae are somewhat involved. The purpose of this project is to construct the adjoint of an ordinary difference operator, and to explore applications of this adjoint operator.
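A sketch of the discrete analogue of integration by parts (an illustrative calculation, not taken from the literature): for the forward difference operator (Delta u)_n = u_{n+1} - u_n,

    sum_{n=0}^{N-1} (Delta u)_n v_n = u_N v_{N-1} - u_0 v_0 - sum_{n=1}^{N-1} u_n (Delta v)_{n-1},

so, just as in the continuous setting, the boundary terms single out candidate adjoint conditions at the two ends of the discrete domain, and iterating the identity handles higher order difference operators.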

Required modules: Linear algebra, Calculus.

Useful modules: Proof, Real analysis, Multivariable calculus or Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Adjoints of matrix differential operators with two point boundary conditions

Open project.

Supervisor: Dave Smith.

In spectral methods for partial differential equations, one typically finds a way to simplify a problem by separating it into a temporal part and a spatial part. The spatial part is often a Sturm-Liouville eigenvalue problem for a "two point" spatial ordinary differential operator. The words "two point" are used because there are boundary conditions at two points: the left and right edges of the spatial domain of the original partial differential equation. Methods for the solution of such problems are well known and, with more complicated boundary conditions, rely upon explicit construction of the Lagrange adjoint of the spatial ordinary differential operator. The Lagrange adjoint is defined in terms of a formal adjoint and adjoint two point boundary conditions. Coddington and Levinson (Theory of Ordinary Differential Equations, McGraw-Hill, 1955) showed how to construct both parts.

If one works in the more complicated setting of a linear system of partial differential equations, then one may derive a matrix system of spatial ordinary differential operators, for which one requires the corresponding matrix adjoint. Schwarz constructed the formal adjoint (Adjoint Matrix Differential Systems, JMAA 129, 103-117, 1988) but made no attempt to construct the adjoint boundary conditions. In this project, we will build upon the works of Schwarz and of Coddington and Levinson to construct the Lagrange adjoint of a two point matrix differential system.
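As a small illustration of the formal part of the construction, sketched here only for the lowest order case: for the first order matrix operator (L u)(x) = u'(x) + A(x) u(x) acting on vector-valued functions, integration by parts in the vector inner product gives

    int_0^1 <Lu, v> dx - int_0^1 <u, L*v> dx = <u(1), v(1)> - <u(0), v(0)>,    where (L*v)(x) = -v'(x) + A(x)* v(x),

with A(x)* the conjugate transpose. The harder task, which this project addresses in the general higher order setting, is to characterize the adjoint boundary conditions that make the boundary term vanish.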

Potential applications include the solution of initial boundary value problems for general systems of linear partial differential equations.

Required modules: Linear algebra, Calculus, Proof, Real analysis.

Useful modules: Multivariable calculus or Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Dispersive revival phenomena

Revivals and fractalization are visually striking phenomena occurring for dispersive partial differential equations. The phenomena were first discovered by William Henry Fox Talbot (and are often known as "the Talbot effect") but have been rediscovered many times since. If a dispersive system is supplied with a step function as the initial datum, then the solution at a future time is a fractal, continuous but nowhere differentiable. That is, unless one observes the solution at a rational time, in which case the solution collapses to another step function, piecewise smooth but discontinuous at finitely many points! Although this is the most visually striking manifestation, one can prove that similar results hold for general initial data, with the rational time solution always a finite sum of shifts of the initial datum. The effect is known to occur for the linear Schrödinger and linearized Korteweg-de Vries equations, among many other dispersive systems. See a recent DSWeb survey article for an introduction and overview of recent results.
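The mechanism is already visible in the simplest setting. For the free linear Schrödinger equation i u_t + u_xx = 0 with 2 pi periodic boundary conditions, the solution is the Fourier series

    u(x, t) = sum_n c_n exp(i n x - i n^2 t),    c_n the Fourier coefficients of the initial datum,

and at a rational time t = 2 pi p/q the phase exp(-i n^2 t) depends only on n modulo q, so the series regroups into a finite linear combination of at most q translates of the initial datum; at irrational times no such regrouping occurs and the fractal profile emerges.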

For the simplest dispersive systems, it is clear how to define the concepts of revivals and fractalization, but in more complicated settings the definitions become more blurred, and the results are less clear. As this is a relatively unexplored field, there is still much to investigate, both numerically and analytically. Successful studies tend to combine the two approaches, with numerical experiments to identify an effect before careful analytic argument to investigate.
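As a minimal numerical experiment of the kind described (written in julia; the step function initial datum and the truncation level are illustrative choices), one can evaluate a truncated Fourier series solution of the periodic free Schrödinger equation at a rational and at an irrational time and compare the two profiles:

    # Truncated Fourier solution of i u_t + u_xx = 0 with 2pi-periodic conditions,
    # starting from the indicator function of [0, pi) as initial datum.
    N = 200                                                          # truncation level (illustrative)
    c(n) = n == 0 ? 0.5 : (1 - exp(-im * n * pi)) / (2im * n * pi)   # Fourier coefficients of the step
    u(x, t) = sum(c(n) * exp(im * n * x - im * n^2 * t) for n in -N:N)

    xs = range(0, 2pi; length = 2001)
    profile_rational   = [real(u(x, 2pi / 3)) for x in xs]   # expected: piecewise constant (revived) profile
    profile_irrational = [real(u(x, sqrt(2))) for x in xs]   # expected: rough, fractal-like profile

Plotting the two profiles against xs exhibits the contrast; the analytical task is then to explain and generalize what the experiment shows.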

These projects use computational and analytical tools to discover and explain revival phenomena in dispersive systems with complicated boundary conditions.

Revivals and fractalization on network domains

Open project.

Supervisor: Dave Smith.

Most revival and fractalization results concern dispersive systems on the simplest domains with periodic boundary conditions. But it has been proved that pseudoperiodic boundary conditions can yield these effects, and numerical evidence suggests that a still wider class of boundary conditions preserves the phenomena. More complicated domains, such as domains with interfaces, have not yet been studied. As dispersive equations are used to model the flow of information in a channel, it makes sense to pose such equations on network domains, where a single channel is replaced with a network of channels connected at certain interfaces.

The purpose of this project is to investigate, both numerically and analytically, whether revivals and fractalization occur on network domains.

Required modules: Linear algebra, Calculus, Experience with the fundamentals of computer programming, preferably in a language of scientific computation such as julia or python.

Useful modules: Proof, Real analysis, Multivariable calculus or Foundations of applied mathematics, Introduction to computer science, Data structures & algorithms, Ordinary & partial differential equations, Complex analysis.

Boundary revivals

Open project.

Supervisor: Dave Smith.

Revivals and fractalization are visually striking phenomena occurring for dispersive partial differential equations. The phenomena were first discovered by William Henry Fox Talbot (and are often known as "the Talbot effect") but have been rediscovered many times since. If a periodic dispersive system on an interval of length L is supplied with a step function as the initial datum, then the solution at a future time is a fractal, continuous but nowhere differentiable. That is, unless one observes the solution at a rational time, in which case the solution collapses to another step function, piecewise smooth but discontinuous at finitely many points! Although this is the most visually striking manifestation, one can prove that similar results hold for general initial data, with the rational time solution always a finite sum of shifts of the initial datum. Here, a "rational time" is a time that is a rational multiple of L^2/pi. See the recent review article "Revivals and Fractalization" on DSWeb for an introduction and overview of recent results.

Dujardin studied the same dispersive system, but supplied with a homogeneous initial condition and a time periodic boundary condition (Asymptotics of linear initial boundary value problems with periodic boundary data on the half-line and finite intervals, Proc Roy Soc A 465, 3341-3360, 2009). He wanted to show that the solution of that system was asymptotically periodic. He found that the system is asymptotically periodic only if the period of the driving boundary condition is a rational multiple of L^2/pi.

The appearance of the criterion "rational multiple of L^2/pi" in two very different questions for the same system is remarkable. In this project, we will investigate whether it is purely coincidental, or whether there is some deeper property of the dispersive system from which both of these effects arise.

Required modules: Linear algebra, Calculus, Proof, Real analysis.

Useful modules: Ordinary & partial differential equations, Complex analysis.

D to N maps for evolution equations

An initial boundary value problem for an evolution partial differential equation is the problem of finding the full spatial and temporal dependence of a function based on information at the edges of the spacetime domain. In a D to N map, one seeks to map only from certain known boundary data to other information at the spatial boundaries. For example, one might seek to reconstruct a function at the boundary if one knows only the value of its derivative at the boundary and its initial value.

D to N maps are useful because, in certain situations, one does not really care about the value of a function in the body, only at the edges of its domain. In other settings, the construction of a D to N map is a crucial step in solving the full initial boundary value problem.

Many of these projects extend the applicability of the "Q equation method", which was introduced for linear evolution equations by Fokas and van der Weele in 2021, and adapted to finite interval problems by Fokas, Pelloni and Smith in 2022.

Interface D to N maps for the classical second order linear equations of mathematical physics

Open project.

Supervisor: Dave Smith.

We apply the Q equation method to interface problems for the classical second order evolution equations: heat, Schrödinger and wave. We aim to obtain representations of D to N maps for temporally periodic problems, which are valid in the long time limit. Possible extensions include problems in cylindrical coordinates and evolution equations of higher spatial order. This would be the first study of D to N maps for interface problems via the Q equation method.

Long time and semiclassical asymptotics

Asymptotics is the study of how a mathematical object behaves in some kind of limit. This field studies questions such as "using the laws of thermodynamics, how does a physical system relax given enough time?" and "how can a small high order derivative prevent wave breaking?" Asymptotic analysis synthesizes the tools of real and complex analysis to quantify the dominant effects of parameters.

Semiclassical asymptotics for high order boundary value problems

Open project.

Supervisor: Dave Smith.

Initial boundary value problems for a linear evolution equation are the mathematical formulations of questions like "if I know the [initial surface elevation of water / distribution of heat in a metal rod] and the [elevation of the water at the ends of the tank / temperature at the ends of the rod] at all times, what is the [surface elevation of the water / temperature] at all positions and times?" Spectral methods for such problems exploit linear algebraic techniques to turn partial differential equations into eigenvalue problems for ordinary differential operators, which are much easier to solve.

For example, the time dependent linear Schrödinger equation can be reduced to an eigenvalue problem for the Schrödinger operator. But the leading order derivative in that differential operator is scaled by a small parameter, the reduced Planck constant. It is therefore physically important to understand how the eigenvalues of the differential operator are affected by this parameter becoming very small. Such calculations are called "semiclassical asymptotics".
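The simplest instance, included only for orientation: for the operator -h^2 phi'' = E phi on [0, 1] with Dirichlet boundary conditions, the eigenvalues and eigenfunctions are

    E_n(h) = (n pi h)^2,    phi_n(x) = sin(n pi x),    n = 1, 2, 3, ...,

so as the small parameter h decreases, the eigenvalues crowd towards zero and the number of eigenvalues below any fixed energy grows like 1/h. The higher order operators arising from shallow water models display analogous but more intricate behaviour, and quantifying it is the subject of this project.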

Such small parameters appear in other initial boundary value problems derived from physical systems, particularly in water wave problems in very shallow water. In such problems, due to the higher order spatial derivative, the spectral properties of the differential operator can be more complex. We will study the semiclassical asymptotics of the eigenvalue problems associated with boundary value problems for such high order linear evolution equations.

Required modules: Linear algebra, Calculus, Proof, Real analysis.

Useful modules: Multivariable calculus or Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Long time asymptotics for the Stokes equation on a finite interval

Open project.

Supervisor: Dave Smith.

The Stokes equation u_t = u_xxx is the prototypical third order dispersive equation. It also forms the linear part of the celebrated Korteweg-de Vries equation for unidirectional water waves in a shallow channel. The long time asymptotics of solutions of this equation are well understood on a semi-infinite domain and in the spatially periodic regime. The aim of this project is to provide the long time asymptotics of solutions of this equation on the finite interval, for a broad class of homogeneous linear boundary conditions.
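For orientation, substituting a pure exponential u = exp(i k x + w t) into u_t = u_xxx gives the dispersion relation

    w(k) = (i k)^3 = -i k^3,

so each mode exp(i k x - i k^3 t) travels with a k-dependent speed and wave packets spread out over time; on the finite interval with general boundary conditions, the admissible k become the (generally complex) eigenvalues of the associated spatial two point differential operator, whose asymptotic locus controls the long time behaviour.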

In the simpler cases, the solution of such initial boundary value problems may be expressed as a kind of generalized Fourier series, thereby reducing the analysis to the problem of finding the asymptotic locus of eigenvalues. However, in cases where the eigenfunctions fail to form a complete system, one must instead analyse certain Fourier integrals, which appear in the solution representation given by the unified transform method. This project will provide a solid grounding in the asymptotic analysis of integrals and exponential sums.

Required modules: Linear algebra, Calculus, Proof, Real analysis.

Useful modules: Multivariable calculus or Foundations of applied mathematics, Ordinary & partial differential equations, Complex analysis.

Computational analysis of exponential polynomials

Exponential polynomials are a special class of holomorphic (complex analytic) functions: those which can be expressed as a finite linear combination of pure exponential functions, in which each coefficient is a polynomial function. Examples include the trigonometric and hyperbolic sine and cosine functions, but there are others that are rather more complicated. These functions, especially the locus of their zeros and their efficient evaluation, are crucial for computational and numerical implementation of the unified transform method. As soon as an exponential polynomial has more than two terms, it is rarely possible to find its zeros exactly in closed form. Instead one can use tools of asymptotic analysis and geometry to construct approximate zeros, and complex analysis to find zeros to arbitrary precision.
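As a small illustration of the numerical side (the particular function and seed values below are illustrative, not taken from the literature), approximate zeros obtained from asymptotic reasoning can be refined to arbitrary precision by Newton iteration, here sketched in julia:

    # Example exponential polynomial f(z) = 2cos(z) + exp(2z) and its derivative.
    f(z)  = 2cos(z) + exp(2z)
    fp(z) = -2sin(z) + 2exp(2z)

    # For Re(z) large and negative the term exp(2z) is negligible, so zeros of f
    # approach the zeros of 2cos(z); this supplies asymptotic initial guesses.
    guesses = [-(k + 0.5) * pi for k in 1:5]

    # Newton iteration refining an approximate zero to (near) machine precision.
    function refine(z0; tol = 1e-14, maxit = 50)
        z = complex(z0)
        for _ in 1:maxit
            dz = f(z) / fp(z)
            z -= dz
            abs(dz) < tol && break
        end
        return z
    end

    refined_zeros = refine.(guesses)

The asymptotic step decides where to look and Newton's method supplies the precision; doing this systematically for the exponential polynomials arising in the unified transform method is the kind of task these projects address.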

These projects are related to the computational and numerical treatment of zero locus problems for exponential polynomials, and efficient evaluation of related oscillatory integrals.

Analytic-geometric asymptotic analysis of exponential polynomials

Open project.

Supervisor: Dave Smith.

Langer (1931) used geometric and asymptotic analysis to find asymptotic expressions for the zero locus of exponential polynomials. Langer's arguments rely on asymptotic analysis of the relative dominance of each term in the exponential polynomial in different regions of the complex plane, informed by the geometry of the convex hull of the exponents. Here we apply similar relative dominance arguments to the numerical evaluation of exponential polynomials, disregarding terms of lesser importance to improve efficiency.
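In outline (stated informally), write the exponential polynomial as f(z) = sum_j p_j(z) exp(a_j z) and z = r exp(i theta). Then

    | p_j(z) exp(a_j z) | = |p_j(z)| exp( r Re(a_j exp(i theta)) ),

so in each direction theta the term whose exponent maximizes Re(a_j exp(i theta)) dominates as r grows. Zeros can accumulate only along the finitely many critical directions in which two or more terms tie for dominance, and these directions are determined by the edges of the convex hull of the exponents a_j; away from them, the dominated terms can be discarded with only exponentially small error, which is the saving exploited here.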

Hybrid analytic numerical evaluation of oscillatory complex contour integrals

Open project.

Supervisor: Dave Smith.

An integral in which the integrand rapidly switches between positive and negative, almost but not exactly cancelling, is called oscillatory. Oscillatory integrals often arise in the solution of differential equations, making their evaluation an important but challenging problem. Some powerful tools for numerical evaluation of oscillatory integrals have recently been developed, including the julia package ApproxFun, which uses orthogonal polynomial approximation to efficiently evaluate integrals to high accuracy, but there is necessarily a time cost in the initial polynomial approximation step. Combining the tools of geometry with asymptotic analysis, it is possible to simplify certain integrands by discarding extremely small terms. Complex analysis also provides tools such as Cauchy's theorem, Jordan's lemma and the method of steepest descent to make integrals of analytic functions, including some oscillatory integrals, much easier to evaluate numerically.
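A minimal julia sketch of the contour idea, using the standard QuadGK quadrature package (the particular integrand is an illustrative example, not one arising from a specific boundary value problem): for k > 0 the integrand exp(i k x)/(1 + x)^2 is analytic and decays in the closed first quadrant, so Cauchy's theorem together with a Jordan-type estimate on large arcs allows the path to be rotated onto the positive imaginary axis, where the oscillation becomes exponential decay.

    using QuadGK

    # I(k) = integral over [0, Inf) of exp(i k x) / (1 + x)^2 dx, for k > 0.
    naive(k)    = quadgk(x -> exp(im * k * x) / (1 + x)^2, 0, Inf)[1]

    # Substituting x = i y (justified by Cauchy's theorem and the decay of the
    # integrand in the first quadrant) gives a smooth, non-oscillatory integrand.
    deformed(k) = quadgk(y -> im * exp(-k * y) / (1 + im * y)^2, 0, Inf)[1]

    # For large k the deformed integral is far cheaper and more reliable to evaluate.
    naive(50.0), deformed(50.0)

Extending this kind of deformation to the contours and integrands that actually appear in unified transform solution representations is the substance of the project.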

The purpose of this project is to develop, and to implement as a julia package, a method for evaluating oscillatory contour integrals. Applications include the analysis of integrals appearing in the solution representations of initial boundary value problems for linear evolution equations of high spatial order.

Required modules: Proof, Foundations of applied mathematics, Introduction to computer science.

Useful modules: Real analysis, Python, Data structures & algorithms, Linear algebra, Ordinary & partial differential equations, Complex analysis.