Partial Differential Equations

This page is a sub-page of our page on Calculus of Several Real Variables.

///////

Related KMR-pages:

Differential Equations = Ordinary Differential Equations

///////

Books:

Introduction to Partial Differential Equations – from Fourier Series to Boundary-value Problems, by Arne Broman, Dover Publications Inc., 1989 (1970)
Partial Differential Equations – An Introduction, by David Colton, Dover Publications Inc., 1988

///////

Other relevant sources of information:

The superposition principle
Orthogonal coordinates
Curvilinear coordinates
Coordinate system

///////

List of anchors into the text below:

What is a Partial Differential Equation?
But what is a Partial Differential Equation?
Brief History of PDEs
Wave equation
Elliptic PDEs
Hyperbolic PDEs
Parabolic PDEs
Separation of Variables

///////

What is a Partial Differential Equation?

Partial differential equation at Scholarpedia
Partial differential equation at Wikipedia
Partial Differential Equation at Wolfram MathWorld
Partial Differential Equation at Britannica.com

/////// Quoting Wikipedia (Partial differential equation):

In mathematics, a partial differential equation (PDE) is an equation which imposes relations between the various partial derivatives of a multivariable function.

The function is often thought of as an “unknown” to be solved for, similarly to how \, x \, is thought of as an unknown number, to be solved for, in an algebraic equation like \, x^2 - 3x + 2 = 0 . However, it is usually impossible to write down explicit formulas for solutions of partial differential equations. There is, correspondingly, a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers.

Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations.

Partial differential equations are ubiquitous in mathematically-oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics. They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.

Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no “general theory” of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.[1]

Ordinary differential equations form a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the “PDE” notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.

/////// End of Quote from Wikipedia

But what is a Partial Differential Equation? (Steven Strogatz on YouTube):

Brief History of Partial Differential Equations

/////// Quoting Colton, p. 49:

Wave equation

Mathematicians did not spontaneously decide to create the theory of partial differential equations, but rather were initially led to study certain particular equations arising in the mathematical formulation of specific physical phenomena. The first significant progress in solving partial differential equations occurred in the middle of the eighteenth century when Euler (1707 – 1783) and d’Alembert (1717 – 1783) investigated the wave equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} = \dfrac{1}{c^2} \dfrac{ {\partial}^2 u }{ \partial t^2} .

Both were led to the solution

\, u(x,t) = f(x+ct) + g(x-ct) \, ,

where \, f \, and \, g \, are “arbitrary” functions, and a debate continued to rage until the 1770s on what “arbitrary” meant. Euler also took up the problem of the vibrations of a rectangular and a circular drum governed by the two-dimensional wave equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} = \dfrac{1}{c^2} \dfrac{ {\partial}^2 u }{ \partial t^2} ,

and obtained various special solutions by what is now known as the method of separation of variables. Finally, in a series of definitive papers on the propagation of sound, Euler obtained cylindrical and spherical wave solutions of the wave equation in two and three variables. Progress, however, was limited by the lack of knowledge of Fourier series and of the behavior of the special functions arising from the application of the method of separation of variables.

Research into the theory of gravitational attraction led to the formulation of Laplace’s equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + \dfrac{ {\partial}^2 u }{ \partial z^2} = 0 ,

for the potential function \, u(x,y,z) . The first significant work on potential theory was done by Legendre (1752 – 1833) in his 1782 study of the gravitational attraction of spheroids, in which he introduced what are now known as Legendre polynomials. This work was continued by Laplace (1749 – 1827) in 1785 (although Laplace never mentioned Legendre!). In a series of papers continuing through the 1780s, Legendre and Laplace continued their investigations of potential theory and the use of Legendre polynomials, associated Legendre polynomials, [cylindrical harmonics] and spherical harmonics, laying the foundation for the vast work in the nineteenth century on the theory of harmonic functions. However, no general method for solving Laplace’s equation was developed in the eighteenth century, nor were the full potentialities of the use of special functions appreciated.

The study of partial differential equations experienced a phenomenal growth in the nineteenth century. This growth not only illuminated new areas of physics, but created the need for mathematical developments in such diverse areas as analytic function theory, the calculus of variations, ordinary differential equations, and differential geometry. In this brief history, we can only highlight a few of the developments that are relevant to the material covered in this book.

The first major step was taken by Fourier (1768 – 1830) in 1807 when he submitted a paper on heat conduction and trigonometric series to the Academy of Sciences of Paris. His paper was rejected; however, when the Academy made the subject of heat conduction the topic of a grand prize in 1812, Fourier submitted a revised copy and this time he won the prize. He continued to work in the area and in 1822 published his classic Théorie analytique de la chaleur in which, following his paper of 1807, he derived the equation of heat conduction

\, \dfrac{ {\partial}^2 u }{ \partial x^2} = \dfrac{1}{{\alpha}^2} \dfrac{ \partial u }{ \partial t} ,

and solved specific heat conduction problems by what is now known as the method of separation of variables and Fourier series. All of Fourier’s work was purely formal, and the convergence properties of Fourier series were left unexamined until later in the century. Later, Poisson (1781 – 1840) made use of Legendre polynomials and spherical harmonics in addition to trigonometric series to study multi-dimensional problems.

At the same time as Fourier series were being developed, Fourier, Cauchy (1789 – 1857), and Poisson discovered what is now called the Fourier integral and applied it to various problems in heat conduction and water waves. Because all three presented papers orally to the Academy of Sciences and published their results only later, it is not possible to assign priority to the discovery of Fourier integrals and transforms.

Mathematicians of the nineteenth century vigorously investigated problems associated with Laplace’s equation, continuing the research initiated by Legendre and Laplace. In a paper written in 1813, Poisson showed that the gravitational attraction of a body with density \, \rho(x,y,z) \, is given by

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + \dfrac{ {\partial}^2 u }{ \partial z^2} = -4 \pi \rho(x,y,z) ,

for points inside the body. Poisson’s derivation of this result was not rigorous, even by the standards of his time, and the first rigorous derivation of Poisson’s equation was given by Gauss (1777 – 1855) in 1839.

However, despite the work of Legendre, Laplace, Poisson, and Gauss, almost nothing was known about the general properties of solutions to Laplace’s equation. In 1828, Green (1793 – 1841), a self-taught English mathematician, published a privately printed booklet entitled An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. In this small masterwork Green derived what are now known as Green’s formulas and introduced the concept of the Green’s function. Unfortunately, his work was neglected for over twenty years until Sir William Thomson (later Lord Kelvin, 1824 – 1907) discovered it and, recognizing its great value, had it published in the Journal für Mathematik.

Until the middle part of the nineteenth century, mathematicians simply assumed that a solution to the Laplace or Poisson equations existed, usually arguing from physical considerations. In particular, Green’s proof of the existence of a Green’s function was based entirely on a physical argument. However, in the second half of the century extensive work was undertaken on the problem of existence of solutions to partial differential equations, not only for Laplace’s equation and Poisson’s equation, but for partial differential equations with variable coefficients. In particular, Riemann (1826 – 1866) and Hadamard (1865 – 1963) investigated initial value problems for hyperbolic equations, Picard (1856 – 1941) and others for elliptic equations, while Cauchy and Kowalewsky (1850 – 1891) studied the initial value, or Cauchy problem, for general systems of partial differential equations with analytic coefficients.

Gradually, mathematicians became aware that different types of equations required different types of boundary and initial conditions, leading to the now-standard classification of partial differential equations into elliptic, hyperbolic, and parabolic types. This classification was introduced by DuBois-Reymond (1831 – 1889).

In addition to investigations on existence theorems for general partial differential equations, research continued on the Dirichlet problem and the Neumann problem for Laplace’s equation through methods involving analytic function theory, the calculus of variations, and the method of integral equations (using successive approximation techniques). Considerable effort was also made to prove the existence of eigenvalues for

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + k^2 u = 0 ,

particularly by Schwarz (1843 – 1921) and Poincaré (1854 – 1912). The systematic treatment of the eigenvalue problems for partial differential equations was delayed until the development of the theory of integral equations in the twentieth century by Fredholm (1866 – 1927) and Hilbert (1862 – 1943). We shall return to this theme shortly.

Throughout the nineteenth century, mathematicians and mathematical physicists were concerned with the theory of wave motion, continuing the tradition established by Euler and d’Alembert in the eighteenth century. In particular, numerous papers were written applying the method of separation of variables in curvilinear coordinates to solve initial-boundary value problems for the wave equation and boundary value problems for the reduced wave equation or Helmholtz equation. Of paramount importance was the theory of Bessel functions, which were first systematically studied by Bessel (1784 – 1846), a mathematician and director of the astronomical observatory in Königsberg. Although these functions are of central importance in the study of wave propagation, Bessel was in fact led to his study while working on the motion of the planets.

In addition to the method of separation of variables for solving initial-boundary problems in wave propagation, integral representations of solutions to the wave equation and reduced wave equation were established by Poisson, Helmholtz (1821 – 1894), and Kirchhoff (1824 – 1887). The most spectacular triumph of these investigations into the theory of wave propagation was Maxwell’s derivation in 1864 of the laws of electromagnetism. From his equations, Maxwell (1831 – 1879) predicted that electromagnetic waves travel through space at the speed of light and that light itself was an electromagnetic phenomenon. Maxwell’s research was the highlight of nineteenth century mathematical physics and his monograph A Treatise on Electricity and Magnetism, published in 1873, is one of the classics of scientific thought.

Early in the twentieth century, a major new era in the theory of partial differential equations began with the development of the theory of integral equations to solve boundary value problems for partial differential equations. Integral equations had already been used by Neumann (1832 – 1925) in 1870 to solve the Dirichlet problem for Laplace’s equation in a convex domain by the method of successive approximations. However, due to the fact that no systematic theory of integral equations was available, Neumann was not able to remove the restrictive condition of convexity from his analysis.

The first step toward a general theory of integral equations was taken by Volterra (1860 – 1940) in 1896 and 1897 when he used the method of successive approximations to solve what is now called the Volterra integral equation of the second kind:

\, \phi(s) - \int_{a}^{s} K(s, t) \phi(t) dt = f(s) .

Volterra’s ideas were taken up by Fredholm, a professor at Stockholm, who established what is now known as the Fredholm alternative for Fredholm integral equations of the second kind:

\, \phi(s) - \lambda \int_{a}^{b} K(s, t) \phi(t) dt = f(s) .

Fredholm then proceeded to use his theory to solve the Dirichlet problem for Laplace’s equation in domains that were not necessarily convex, his first results appearing in a seminal paper published in 1900. Fredholm’s ideas were brought to fruition by Hilbert, a professor at Göttingen and the leading mathematician of the early part of the twentieth century. In a series of six papers published between 1904 and 1910, Hilbert more simply formulated Fredholm’s ideas, established the fact that an “arbitrary” function can be expanded in a series of eigenfunctions of the integral equation (now called the Hilbert-Schmidt theorem), and applied his results to problems in mathematical physics. The method of integral equations has been applied to an increasing number of problems in mathematical physics, most notably the scattering of acoustic, electromagnetic, and elastic waves by inhomogeneities in the medium.

The work by Volterra, Fredholm, and Hilbert has reverberated through the twentieth century, leading first to Hilbert space theory and functional analysis with applications to distributional solutions of initial value and boundary value problems for partial differential equations, and, in a somewhat different direction, to singular integral operators and the “general” theory of linear partial differential operators. However, these topics are beyond the scope of this brief survey; indeed, as the twentieth century reached middle age the era of partial differential equations became so broad and deep that a short survey of the directions taken and the results discovered would require a small monograph! Thus we conclude this section by indicating only three of these directions that are relevant to this book: numerical methods, nonlinear problems, and improperly posed problems.

We recall that nineteenth century research in the theory of partial differential equations was concerned primarily with well posed linear problems – by well posed we mean that a solution exists, is unique, and depends continuously on the boundary or initial data. For such problems, interest was focused on obtaining series or integral representations for the solution. However, as the demands of science increased, it became evident that such representations were often not suitable for numerical computation. For example, in using the series representation of the solution of Maxwell’s equations describing the propagation of radio waves around the earth it was discovered that over a thousand terms of the series were needed in order to assure the needed accuracy – a formidable task even for a modern computer! Hence mathematicians were led to derive new methods for the approximate solution of boundary and initial value problems of mathematical physics, leading to a fruitful interplay between the art of computer science and the methods of numerical analysis.

At the same time, it has become clear that the real world is in fact nonlinear and that although linear models are useful and valid in certain contexts, many phenomena can be understood only by a nonlinear model. Motivated by an increasing number of apparently intractable problems in fluid and gas dynamics, elasticity, and chemical reactions, mathematicians in the twentieth century have systematically studied nonlinear partial differential equations. This subject has by now reached full maturity and forms one of the major areas of the theory of partial differential equations. Finally, by mid-century, mathematicians realized (after some resistance!) that well posed problems were not the only ones of physical interest. In particular, such problems as the design of shock-free airfoils and the inverse scattering problems associated with radar, sonar, and medical imaging have led mathematicians seriously to consider improperly posed problems and to derive methods for their “solution.” Although the subject areas of study in partial differential equations in the twentieth century are significantly different from those of the last, the words of Fourier still provide the appropriate guidelines: “The profound study of nature is the most fertile source of mathematical discoveries.”

/////// End of Quote from Colton

Elliptic Partial Differential Equations

Elliptic Partial Differential Equation at Wikipedia

Laplace’s Equation at Wolfram MathWorld
Laplace’s Equation at Wikipedia
Poisson’s Equation at Wikipedia

///////

Hyperbolic Partial Differential Equations

Hyperbolic Partial Differential Equation at Wikipedia

The Wave Equation at Wikipedia
The Electromagnetic Wave Equation at Wikipedia
The Wave Equation at ScienceDirect

Hearing the shape of a drum
Vibrations of a circular membrane

///////

Parabolic Partial Differential Equations

Parabolic Partial Differential Equation at Wikipedia

The Heat Equation at Wikipedia
The Heat Equation at Wolfram MathWorld
The Schrödinger Equation at Wikipedia
The Schrödinger Wave Equation at Eric Weissenstein’s world of physics
Fisher’s equation

///////

Separation of Variables

/////// Quoting Wikipedia on “Orthogonal coordinates”:

While vector operations and physical laws are normally easiest to derive in Cartesian coordinates, non-Cartesian orthogonal coordinates are often used instead for the solution of various problems, especially boundary value problems, such as those arising in field theories of quantum mechanics, fluid flow, electrodynamics, plasma physics and the diffusion of chemical species or heat.

The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem. For example, the pressure wave due to an explosion far from the ground (or other barriers) depends on 3D space in Cartesian coordinates, however the pressure predominantly moves away from the center, so that in spherical coordinates the problem becomes very nearly one-dimensional (since the pressure wave dominantly depends only on time and the distance from the center). Another example is (slow) fluid in a straight circular pipe:

in Cartesian coordinates, one has to solve a (difficult) two dimensional boundary value problem involving a partial differential equation, but in cylindrical coordinates the problem becomes one-dimensional with an ordinary differential equation instead of a partial differential equation.

The reason to prefer orthogonal coordinates instead of general curvilinear coordinates is simplicity: many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables.

Separation of variables is a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace’s equation or the Helmholtz equation. Laplace’s equation is separable in 13 orthogonal coordinate systems [listed in a table in the Wikipedia article, with the exception of toroidal coordinates], and the Helmholtz equation is separable in 11 orthogonal coordinate systems.

Orthogonal coordinates never have off-diagonal terms in their metric tensor. In other words, the infinitesimal squared distance \, {ds}^2 \, can always be written as a scaled sum of the squared infinitesimal coordinate displacements. […] These scaling functions are used to calculate differential operators in the new coordinates, e.g., the gradient, the Laplacian, the divergence and the curl.

A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates (x, y) . A complex number z = x + iy can be formed from the real coordinates x and y , where i represents the imaginary unit. Any holomorphic function w = f(z) with non-zero complex derivative will produce a conformal mapping; if the resulting complex number is written w = u + iv , then the curves of constant u and v intersect at right angles, just as the original lines of constant x and y did.

Orthogonal coordinates in three and higher dimensions can be generated from an orthogonal two-dimensional coordinate system, either by projecting it into a new dimension (cylindrical coordinates) or by rotating the two-dimensional system about one of its symmetry axes. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates [ which are based on confocal quadrics]. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories.

/////// End of Quote from Wikipedia
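
As a small numerical illustration of the conformal-mapping remark in the quote above (an illustrative sketch only, not part of the quoted article; it assumes NumPy and uses the particular map \, w = z^2 , i.e. \, u = x^2 - y^2 \, and \, v = 2xy ): the gradients of \, u \, and \, v \, are orthogonal wherever \, dw/dz = 2z \neq 0 , so the coordinate curves u = const and v = const really do cross at right angles.

  # Orthogonality of the coordinate curves produced by the conformal map w = z**2:
  # u = Re(w) = x**2 - y**2,  v = Im(w) = 2*x*y.
  import numpy as np

  rng = np.random.default_rng(0)
  x, y = rng.uniform(0.1, 2.0, size=(2, 1000))   # sample points away from z = 0

  grad_u = np.stack([2.0 * x, -2.0 * y])         # gradient of u = x^2 - y^2
  grad_v = np.stack([2.0 * y, 2.0 * x])          # gradient of v = 2*x*y

  # maximum |grad u . grad v| over the sample: zero up to rounding
  print(np.max(np.abs(np.sum(grad_u * grad_v, axis=0))))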

• The Helmholtz Equation at Wikipedia
Bessel’s differential equation
Cylindrical harmonics
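
To make the quoted description concrete, here is the standard worked example of separation of variables for the one-dimensional heat equation (a textbook calculation, not taken from the sources quoted on this page). Consider

\, \dfrac{ {\partial}^2 u }{ \partial x^2} = \dfrac{1}{{\alpha}^2} \dfrac{ \partial u }{ \partial t} \, on \, 0 \le x \le \ell \, with \, u(0, t) = u(\ell, t) = 0 .

Seeking a product solution \, u(x,t) = X(x) \, T(t) \, and dividing the equation by \, X(x) \, T(t) \, gives

\, \dfrac{X''(x)}{X(x)} = \dfrac{1}{{\alpha}^2} \dfrac{T'(t)}{T(t)} = -\lambda \, ,

since a function of \, x \, alone can equal a function of \, t \, alone only if both are constant. The boundary conditions force \, \lambda = (n \pi / \ell)^2 \, and \, X_n(x) = \sin(n \pi x / \ell) \, for \, n = 1, 2, 3, \ldots , while the time factor becomes \, T_n(t) = e^{-(n \pi \alpha / \ell)^2 t} . By the superposition principle,

\, u(x,t) = \sum_{n=1}^{\infty} b_n \, e^{-(n \pi \alpha / \ell)^2 t} \sin \dfrac{n \pi x}{\ell} \, ,

where the coefficients \, b_n \, are the Fourier sine coefficients of the initial temperature \, u(x, 0) . This is exactly the sense in which the two-variable problem splits into one-dimensional problems: an ordinary differential equation in \, x \, and another in \, t .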


/////////////////////////////////////////////////////////////////////////////////////////

An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism[1][2] is a fundamental publication by George Green in 1828, where he extends previous work of Siméon Denis Poisson on electricity and magnetism. The work in mathematical analysis, notably including what is now universally known as Green’s theorem, is of the greatest importance in all branches of mathematical physics. It contains the first exposition of the theory of potential. In physics, Green’s theorem is mostly used to solve two-dimensional flow integrals, stating that the sum of fluid outflows at any point inside a volume is equal to the total outflow summed about an enclosing area. In plane geometry, and in particular, area surveying, Green’s theorem can be used to determine the area and centroid of plane figures solely by integrating over the perimeter.

It is in this essay that the term ‘potential function‘ first occurs. Herein also his remarkable theorem in pure mathematics, since universally known as Green’s theorem, and probably the most important instrument of investigation in the whole range of mathematical physics, made its appearance. We are all now able to understand, in a general way at least, the importance of Green’s work, and the progress made since the publication of his essay in 1828. But to fully appreciate his work and subsequent progress one needs to know the outlook for the mathematico-physical sciences as it appeared to Green at this time and to realize his refined sensitiveness in promulgating his discoveries.[3]

/////////////////////////////////

Green’s theorem – Wikipedia

Let C be a positively oriented, piecewise smooth, simple closed curve in a plane, and let D be the region bounded by C. If L and M are functions of (x, y) defined on an open region containing D and have continuous partial derivatives there, then

\, \oint_C (L \, dx + M \, dy) = \iint_D \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) dx \, dy ,

where the path of integration along C is anticlockwise.[1][2]

In physics, Green’s theorem finds many applications. One is solving two-dimensional flow integrals, stating that the sum of fluid outflowing from a volume is equal to the total outflow summed about an enclosing area. In plane geometry, and in particular, area surveying, Green’s theorem can be used to determine the area and centroid of plane figures solely by integrating over the perimeter.
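
A quick numerical sanity check of the statement (an illustrative sketch, not part of the Wikipedia text; it assumes NumPy and the particular choice \, L = -y^3/3 , \, M = x^3/3 \, on the unit disk, for which both sides equal \, \pi/2 ):

  # Numerical check of Green's theorem on the unit disk.
  # With L = -y**3/3 and M = x**3/3, dM/dx - dL/dy = x**2 + y**2.
  import numpy as np

  # Boundary integral over C: x = cos(t), y = sin(t), 0 <= t < 2*pi (anticlockwise).
  t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
  dt = t[1] - t[0]
  x, y = np.cos(t), np.sin(t)
  dxdt, dydt = -np.sin(t), np.cos(t)
  L, M = -y**3 / 3.0, x**3 / 3.0
  boundary_integral = np.sum(L * dxdt + M * dydt) * dt

  # Double integral over D: a fine grid on [-1, 1]^2, masked to the unit disk.
  xs = np.linspace(-1.0, 1.0, 2001)
  h = xs[1] - xs[0]
  X, Y = np.meshgrid(xs, xs)
  inside = X**2 + Y**2 <= 1.0
  double_integral = np.sum((X**2 + Y**2)[inside]) * h * h

  print(boundary_integral, double_integral, np.pi / 2)   # all close to 1.5708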

Relationship to Stokes’ theorem

Green’s theorem is a special case of the Kelvin–Stokes theorem, when applied to a region in the xy-plane.

We can augment the two-dimensional field into a three-dimensional field with a z component that is always 0. Write \, \mathbf{F} \, for the vector-valued function \, \mathbf{F} = (L, M, 0) . Start with the left side of Green’s theorem:

\, \oint_C (L \, dx + M \, dy) = \oint_C (L, M, 0) \cdot (dx, dy, dz) = \oint_C \mathbf{F} \cdot d\mathbf{r} .

The Kelvin–Stokes theorem:

\, \oint_C \mathbf{F} \cdot d\mathbf{r} = \iint_S \nabla \times \mathbf{F} \cdot \mathbf{\hat{n}} \, dS .

The surface S is just the region in the plane D, with the unit normal \, \mathbf{\hat{n}} \, defined (by convention) to have a positive z component in order to match the “positive orientation” definitions for both theorems.

The expression inside the integral becomes

\, \nabla \times \mathbf{F} \cdot \mathbf{\hat{n}} = \left[ \left( \dfrac{\partial 0}{\partial y} - \dfrac{\partial M}{\partial z} \right) \mathbf{i} + \left( \dfrac{\partial L}{\partial z} - \dfrac{\partial 0}{\partial x} \right) \mathbf{j} + \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) \mathbf{k} \right] \cdot \mathbf{k} = \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) .

Thus we get the right side of Green’s theorem:

\, \iint_S \nabla \times \mathbf{F} \cdot \mathbf{\hat{n}} \, dS = \iint_D \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) dA .

Green’s theorem is also a straightforward result of the general Stokes’ theorem using differential forms and exterior derivatives:

\, \oint_C L \, dx + M \, dy = \oint_{\partial D} \omega = \int_D d\omega = \int_D \dfrac{\partial L}{\partial y} \, dy \wedge dx + \dfrac{\partial M}{\partial x} \, dx \wedge dy = \iint_D \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) dx \, dy .

Relationship to the divergence theorem

Considering only two-dimensional vector fields, Green’s theorem is equivalent to the two-dimensional version of the divergence theorem:

\, \iint_D \left( \nabla \cdot \mathbf{F} \right) dA = \oint_C \mathbf{F} \cdot \mathbf{\hat{n}} \, ds ,

where \, \nabla \cdot \mathbf{F} \, is the divergence of the two-dimensional vector field \, \mathbf{F} , and \, \mathbf{\hat{n}} \, is the outward-pointing unit normal vector on the boundary.

To see this, consider the unit normal \, \mathbf{\hat{n}} \, in the right side of the equation. Since in Green’s theorem \, d\mathbf{r} = (dx, dy) \, is a vector pointing tangentially along the curve, and the curve C is the positively oriented (i.e. anticlockwise) curve along the boundary, an outward normal would be a vector which points 90° to the right of this; one choice would be \, (dy, -dx) . The length of this vector is \, \sqrt{dx^2 + dy^2} = ds , so \, (dy, -dx) = \mathbf{\hat{n}} \, ds .

Start with the left side of Green’s theorem:

\, \oint_C (L \, dx + M \, dy) = \oint_C (M, -L) \cdot (dy, -dx) = \oint_C (M, -L) \cdot \mathbf{\hat{n}} \, ds .

Applying the two-dimensional divergence theorem with \, \mathbf{F} = (M, -L) , we get the right side of Green’s theorem:

\, \oint_C (M, -L) \cdot \mathbf{\hat{n}} \, ds = \iint_D \left( \nabla \cdot (M, -L) \right) dA = \iint_D \left( \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} \right) dA .

Area calculation

Green’s theorem can be used to compute area by line integral.[4] The area of a planar region D is given by

\, A = \iint_D dA .

Choose L and M such that \, \dfrac{\partial M}{\partial x} - \dfrac{\partial L}{\partial y} = 1 . Then the area is given by

\, A = \oint_C (L \, dx + M \, dy) .

Possible formulas for the area of D include[4]

\, A = \oint_C x \, dy = -\oint_C y \, dx = \tfrac{1}{2} \oint_C (-y \, dx + x \, dy) .
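
The last of these formulas, \, A = \tfrac{1}{2} \oint_C (-y \, dx + x \, dy) , is the line-integral form of the familiar shoelace formula for polygon areas. A small sketch (assuming NumPy; the helper name polygon_area is just for illustration):

  # Area of a simple polygon from Green's theorem:
  # A = (1/2) * closed line integral of (-y dx + x dy),
  # which for straight edges reduces to the shoelace formula.
  import numpy as np

  def polygon_area(xs, ys):
      """Vertices listed counter-clockwise; edges traversed in that order."""
      xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
      x_next, y_next = np.roll(xs, -1), np.roll(ys, -1)
      return 0.5 * np.sum(xs * y_next - ys * x_next)

  # Unit square (area 1) and a 3-4-5 right triangle (area 6):
  print(polygon_area([0, 1, 1, 0], [0, 0, 1, 1]))   # -> 1.0
  print(polygon_area([0, 4, 0], [0, 0, 3]))         # -> 6.0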

History

Green’s theorem is named after George Green, who stated a similar result in an 1828 paper titled An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. In 1846, Augustin-Louis Cauchy published a paper stating Green’s theorem as the penultimate sentence. This is in fact the first printed version of Green’s theorem in the form appearing in modern textbooks. Bernhard Riemann gave the first proof of Green’s theorem in his doctoral dissertation on the theory of functions of a complex variable.[5][6]

//////////////////////////////////////////////////////////////////////////////////////

Green’s function – Wikipedia

In mathematics, a Green’s function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.

[The Wikipedia article shows an animation of how Green’s functions can be superposed to solve a differential equation subject to an arbitrary source.]
If one knows the solution \, G(x, x') \, of a differential equation subject to a point source, \, \hat{L}(x) \, G(x, x') = \delta(x - x') , and the differential operator \, \hat{L}(x) \, is linear, then one can superpose these solutions to build the solution \, u(x) = \int f(x') \, G(x, x') \, dx' \, for a general source \, \hat{L}(x) \, u(x) = f(x) .

This means that if \, \operatorname{L} \, is the linear differential operator, then the Green’s function \, G \, is the solution of the equation \, \operatorname{L} G = \delta , where \, \delta \, is Dirac’s delta function, and the solution of \, \operatorname{L} y = f \, is the convolution \, G * f .

Through the superposition principle, given a linear ordinary differential equation \, \operatorname{L} y = f , one can first solve \, \operatorname{L} G = \delta_s \, for each \, s \, and, since the source is a sum of delta functions, obtain the solution as a sum of Green’s functions by the linearity of \, \operatorname{L} .

Green’s functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green’s functions are studied largely from the point of view of fundamental solutions instead.

Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green’s functions take the roles of propagators.

Definition and uses

A Green’s function, \, G(x, s) , of a linear differential operator \, \operatorname{L} = \operatorname{L}(x) \, acting on distributions over a subset of the Euclidean space \, \mathbb{R}^n , at a point \, s , is any solution of

\, \operatorname{L} \, G(x, s) = \delta(s - x) ,       (1)

where \, \delta \, is the Dirac delta function. This property of a Green’s function can be exploited to solve differential equations of the form

\, \operatorname{L} \, u(x) = f(x) .       (2)

If the kernel of L is non-trivial, then the Green’s function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green’s function. Green’s functions may be categorized, by the type of boundary conditions satisfied, by a Green’s function number. Also, Green’s functions in general are distributions, not necessarily functions of a real variable.

Green’s functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, Green’s function of the Hamiltonian is a key concept with important links to the concept of density of states.

The Green’s function as used in physics is usually defined with the opposite sign, instead.

That is,

\, \operatorname{L} \, G(x, s) = \delta(x - s) .

This definition does not significantly change any of the properties of the Green’s function, due to the evenness of the Dirac delta function.

If the operator is translation invariant, that is, when \, \operatorname{L} \, has constant coefficients with respect to \, x , then the Green’s function can be taken to be a convolution kernel, that is,

\, G(x, s) = G(x - s) .

In this case, Green’s function is the same as the impulse response of linear time-invariant system theory.
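
A small numerical sketch of this impulse-response picture (illustrative only, assuming NumPy): for the constant-coefficient operator \, \operatorname{L} = d/dt + \gamma \, the causal Green’s function is \, G(t) = \Theta(t) \, e^{-\gamma t} \, (it also appears in the table further down this page), and convolving it with a source term reproduces the solution of \, u' + \gamma u = f \, that vanishes in the distant past.

  # Green's function of L = d/dt + gamma is G(t) = Theta(t) * exp(-gamma t).
  # Check that u(t) = integral G(t - s) f(s) ds solves u' + gamma*u = f for the
  # step source f(t) = Theta(t), whose exact solution is (1 - exp(-gamma t)) / gamma.
  import numpy as np

  gamma = 1.5
  t = np.linspace(0.0, 10.0, 20_001)
  dt = t[1] - t[0]

  G = np.exp(-gamma * t)                 # Theta(t) e^{-gamma t}, sampled for t >= 0
  f = np.ones_like(t)                    # step source switched on at t = 0

  u = np.convolve(G, f)[: t.size] * dt   # discrete version of the convolution (G * f)(t)
  u_exact = (1.0 - np.exp(-gamma * t)) / gamma

  print(np.max(np.abs(u - u_exact)))     # small (of order dt), confirming the convolution picture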

Fundamental solution – Wikipedia

In mathematics, a fundamental solution for a linear partial differential operator L is a formulation in the language of distribution theory of the older idea of a Green’s function (although unlike Green’s functions, fundamental solutions do not address boundary conditions).

In terms of the Dirac delta “function” δ(x), a fundamental solution F is a solution of the inhomogeneous equation

LF = δ(x).

Here F is a priori only assumed to be a distribution.

This concept has long been utilized for the Laplacian in two and three dimensions. It was investigated for all dimensions for the Laplacian by Marcel Riesz.

The existence of a fundamental solution for any operator with constant coefficients — the most important case, directly linked to the possibility of using convolution to solve an arbitrary right hand side — was shown by Bernard Malgrange and Leon Ehrenpreis. In the context of functional analysis, fundamental solutions are usually developed via the Fredholm alternative and explored in Fredholm theory.

Example

Consider the differential equation \, \operatorname{L} f = \sin(x) \, with

\, \operatorname{L} = \dfrac{d^2}{dx^2} .

The fundamental solutions can be obtained by solving \, \operatorname{L} F = \delta(x) , explicitly

\, \dfrac{d^2}{dx^2} F(x) = \delta(x) .

Since for the Heaviside function \, H \, we have

\, \dfrac{d}{dx} H(x) = \delta(x) ,

there is a solution

\, \dfrac{d}{dx} F(x) = H(x) + C .

Here \, C \, is an arbitrary constant introduced by the integration. For convenience, set \, C = -1/2 .

After integrating \, \dfrac{dF}{dx} \, and choosing the new integration constant as zero, one has

\, F(x) = x H(x) - \dfrac{1}{2} x = \dfrac{1}{2} |x| .
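
One can check this numerically (a sketch assuming NumPy): applying a centered second difference, the discrete version of \, d^2/dx^2 , to \, F(x) = \tfrac{1}{2}|x| \, gives zero away from the origin and a single spike of height \, 1/h \, at \, x = 0 , i.e. a grid approximation of \, \delta(x) .

  # F(x) = |x| / 2 is a fundamental solution of d^2/dx^2:
  # its discrete second derivative is (approximately) a Dirac delta at x = 0.
  import numpy as np

  x = np.linspace(-1.0, 1.0, 2001)   # grid containing x = 0
  h = x[1] - x[0]
  F = 0.5 * np.abs(x)

  d2F = (F[:-2] - 2.0 * F[1:-1] + F[2:]) / h**2   # centered second difference

  print(np.max(np.abs(d2F)))    # ~ 1/h = 1000: the spike at x = 0
  print(np.sum(d2F) * h)        # ~ 1: the total "mass" of the approximate delta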

////////////////////////////////////////

Motivation

See also: Spectral theory

Loosely speaking, if such a function \, G \, can be found for the operator \, \operatorname{L} , then, if we multiply equation (1) for the Green’s function by \, f(s) \, and then integrate with respect to \, s , we obtain

\, \int \operatorname{L} \, G(x, s) \, f(s) \, ds = \int \delta(x - s) \, f(s) \, ds = f(x) .

Because the operator \, \operatorname{L} = \operatorname{L}(x) \, is linear and acts only on the variable \, x \, (and not on the variable of integration \, s ), one may take the operator \, \operatorname{L} \, outside of the integration, yielding

\, \operatorname{L} \left( \int G(x, s) \, f(s) \, ds \right) = f(x) .

This means that

\, u(x) = \int G(x, s) \, f(s) \, ds       (3)

is a solution to the equation \, \operatorname{L} \, u(x) = f(x) .

Thus, one may obtain the function \, u(x) \, through knowledge of the Green’s function in equation (1) and the source term on the right-hand side in equation (2). This process relies upon the linearity of the operator \, \operatorname{L} .

In other words, the solution of equation (2), \, u(x) , can be determined by the integration given in equation (3). Although \, f(x) \, is known, this integration cannot be performed unless \, G \, is also known. The problem now lies in finding the Green’s function \, G \, that satisfies equation (1). For this reason, the Green’s function is also sometimes called the fundamental solution associated to the operator \, \operatorname{L} .

Not every operator \, \operatorname{L} \, admits a Green’s function. A Green’s function can also be thought of as a right inverse of \, \operatorname{L} . Aside from the difficulties of finding a Green’s function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However, the method gives a theoretically exact result.

This can be thought of as an expansion of \, f \, according to a Dirac delta function basis (projecting \, f \, over \, \delta(x - s) ) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.

See also: Volterra integral equation
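
A discrete analogue of equations (1)–(3) may make this less abstract (an illustrative sketch, not from the quoted article; it assumes NumPy): replace \, \operatorname{L} \, by a matrix, the point sources by unit vectors, and the Green’s function by the matrix inverse, so that \, u = G f \, solves \, \operatorname{L} u = f .

  # Discrete picture of equations (1)-(3): L -> matrix, delta -> unit vector,
  # Green's function -> matrix inverse, u(x) = integral G(x, s) f(s) ds -> u = G @ f.
  import numpy as np

  n = 200
  h = 1.0 / (n + 1)
  x = np.linspace(h, 1.0 - h, n)                 # interior points of [0, 1]

  # L = d^2/dx^2 with u(0) = u(1) = 0, as a second-difference matrix.
  L = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2

  G = np.linalg.inv(L)                           # column j ~ response to a point source at x[j]

  f = np.sin(np.pi * x)                          # a smooth right-hand side
  u = G @ f                                      # discrete analogue of equation (3)

  u_exact = -np.sin(np.pi * x) / np.pi**2        # solves u'' = sin(pi x), u(0) = u(1) = 0
  print(np.max(np.abs(u - u_exact)))             # small discretization error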

//////////////////////////

Green’s functions for solving inhomogeneous boundary value problems

The primary use of Green’s functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green’s functions are also usually used as propagators in Feynman diagrams; the term Green’s function is often further used for any correlation function.

Framework

Let \, \operatorname{L} \, be the Sturm–Liouville operator, a linear differential operator of the form

\, \operatorname{L} = \dfrac{d}{dx} \left[ p(x) \dfrac{d}{dx} \right] + q(x) ,

and let \, \vec{\operatorname{D}} \, be the vector-valued boundary conditions operator

\, \vec{\operatorname{D}} \, u = \begin{bmatrix} \alpha_1 u'(0) + \beta_1 u(0) \\ \alpha_2 u'(\ell) + \beta_2 u(\ell) \end{bmatrix} .

Let \, f(x) \, be a continuous function in \, [0, \ell] . Further suppose that the problem

\, \operatorname{L} \, u = f , \quad \vec{\operatorname{D}} \, u = \vec{0}

is “regular”, i.e., the only solution for f(x) = 0 for all x is u(x) = 0.[a]

Theorem

There is one and only one solution \, u(x) \, that satisfies

\, \operatorname{L} \, u = f , \quad \vec{\operatorname{D}} \, u = \vec{0} ,

and it is given by

\, u(x) = \int_0^{\ell} f(s) \, G(x, s) \, ds ,

where \, G(x, s) \, is a Green’s function satisfying the following conditions:

  1. \, G(x, s) \, is continuous in \, x \, and \, s .
  2. For \, x \neq s : \quad \operatorname{L} \, G(x, s) = 0 .
  3. For \, s \neq 0 : \quad \vec{\operatorname{D}} \, G(x, s) = \vec{0} .
  4. Derivative “jump”: \quad G'(s_{0+}, s) - G'(s_{0-}, s) = 1 / p(s) .
  5. Symmetry: \quad G(x, s) = G(s, x) .
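
For the simplest regular case \, p(x) \equiv 1 , \, q(x) \equiv 0 \, (so \, \operatorname{L} u = u'' ) with boundary conditions \, u(0) = u(\ell) = 0 , the function satisfying conditions 1–5 is \, G(x, s) = x(s - \ell)/\ell \, for \, x \le s \, and \, G(x, s) = s(x - \ell)/\ell \, for \, x \ge s \, (a standard textbook example, not taken from the quoted article). A small numerical check of the theorem, assuming NumPy:

  # Green's function for L u = u'' with u(0) = u(l) = 0:
  #   G(x, s) = x (s - l) / l   for x <= s,
  #   G(x, s) = s (x - l) / l   for x >= s.
  # The theorem says u(x) = integral_0^l f(s) G(x, s) ds solves L u = f.
  import numpy as np

  l = 1.0

  def green(x, s):
      return np.where(x <= s, x * (s - l) / l, s * (x - l) / l)

  x = np.linspace(0.0, l, 401)
  s = np.linspace(0.0, l, 4001)
  ds = s[1] - s[0]

  f = np.ones_like(s)                                   # constant source f(s) = 1
  u = np.trapz(f * green(x[:, None], s[None, :]), dx=ds, axis=1)

  u_exact = 0.5 * x * (x - l)                           # solves u'' = 1, u(0) = u(l) = 0
  print(np.max(np.abs(u - u_exact)))                    # close to zero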

Advanced and retarded Green’s functions

See also: Green’s function (many-body theory) and propagator

Green’s function is not necessarily unique, since the addition of any solution of the homogeneous equation to one Green’s function results in another Green’s function. Therefore, if the homogeneous equation has nontrivial solutions, multiple Green’s functions exist. In some cases, it is possible to find one Green’s function that is non-vanishing only for \, s \le x , which is called a retarded Green’s function, and another Green’s function that is non-vanishing only for \, s \ge x , which is called an advanced Green’s function.

In such cases, any linear combination of the two Green’s functions is also a valid Green’s function. The terminology advanced and retarded is especially useful when the variable x corresponds to time. In such cases, the solution provided by the use of the retarded Green’s function depends only on the past sources and is causal whereas the solution provided by the use of the advanced Green’s function depends only on the future sources and is non-causal. In these problems, it is often the case that the causal solution is the physically important one. The use of advanced and retarded Green’s function is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation.

[ … ]

Table of Green’s functions

The following table gives an overview of Green’s functions of frequently appearing differential operators, where \, r = \sqrt{x^2 + y^2 + z^2} , \, \rho = \sqrt{x^2 + y^2} , \, \Theta(t) \, is the Heaviside step function, \, J_{\nu}(z) \, is a Bessel function, \, I_{\nu}(z) \, is a modified Bessel function of the first kind, and \, K_{\nu}(z) \, is a modified Bessel function of the second kind.[2] Where time ( t ) appears in the first column, the retarded (causal) Green’s function is listed. Each entry below gives the differential operator \, \operatorname{L} , its Green’s function \, G , and (in parentheses) an example of application.

\, \partial_t^{n+1} : \quad G = \dfrac{t^n}{n!} \, \Theta(t)

\, \partial_t + \gamma : \quad G = \Theta(t) \, e^{-\gamma t}

\, \left( \partial_t + \gamma \right)^2 : \quad G = \Theta(t) \, t \, e^{-\gamma t}

\, \partial_t^2 + 2\gamma \partial_t + \omega_0^2 \, where \, \gamma < \omega_0 : \quad G = \Theta(t) \, e^{-\gamma t} \, \dfrac{\sin(\omega t)}{\omega} \, with \, \omega = \sqrt{\omega_0^2 - \gamma^2} \quad (1D under-damped harmonic oscillator)

\, \partial_t^2 + 2\gamma \partial_t + \omega_0^2 \, where \, \gamma > \omega_0 : \quad G = \Theta(t) \, e^{-\gamma t} \, \dfrac{\sinh(\omega t)}{\omega} \, with \, \omega = \sqrt{\gamma^2 - \omega_0^2} \quad (1D over-damped harmonic oscillator)

\, \partial_t^2 + 2\gamma \partial_t + \omega_0^2 \, where \, \gamma = \omega_0 : \quad G = \Theta(t) \, e^{-\gamma t} \, t \quad (1D critically damped harmonic oscillator)

2D Laplace operator \, \nabla_{\text{2D}}^2 = \partial_x^2 + \partial_y^2 : \quad G = \dfrac{1}{2\pi} \ln \rho \, with \, \rho = \sqrt{x^2 + y^2} \quad (2D Poisson equation)

3D Laplace operator \, \nabla_{\text{3D}}^2 = \partial_x^2 + \partial_y^2 + \partial_z^2 : \quad G = \dfrac{-1}{4\pi r} \, with \, r = \sqrt{x^2 + y^2 + z^2} \quad (Poisson equation)

Helmholtz operator \, \nabla_{\text{3D}}^2 + k^2 : \quad G = \dfrac{-e^{-ikr}}{4\pi r} = i \sqrt{\dfrac{k}{32\pi r}} \, H_{1/2}^{(2)}(kr) = i \, \dfrac{k}{4\pi} \, h_0^{(2)}(kr) \quad (stationary 3D Schrödinger equation for a free particle)

Divergence operator \, \nabla \cdot v : \quad G = \dfrac{1}{4\pi} \, \dfrac{\mathbf{x} - \mathbf{x}_0}{\| \mathbf{x} - \mathbf{x}_0 \|^3}

Curl operator \, \nabla \times v : \quad G = \dfrac{1}{4\pi} \, \dfrac{(\mathbf{x} - \mathbf{x}_0) \times (\mathbf{x} - \mathbf{x}_0)}{\| \mathbf{x} - \mathbf{x}_0 \|^3}

\, \nabla^2 - k^2 \, in n dimensions: \quad G = -(2\pi)^{-n/2} \left( \dfrac{k}{r} \right)^{n/2 - 1} K_{n/2 - 1}(kr) \quad (Yukawa potential, Feynman propagator, screened Poisson equation)

\, \partial_t^2 - c^2 \partial_x^2 : \quad G = \dfrac{1}{2c} \, \Theta(t - |x/c|) \quad (1D wave equation)

\, \partial_t^2 - c^2 \, \nabla_{\text{2D}}^2 : \quad G = \dfrac{1}{2\pi c \sqrt{c^2 t^2 - \rho^2}} \, \Theta(t - \rho/c) \quad (2D wave equation)

D’Alembert operator \, \square = \dfrac{1}{c^2} \partial_t^2 - \nabla_{\text{3D}}^2 : \quad G = \dfrac{\delta\left( t - \frac{r}{c} \right)}{4\pi r} \quad (3D wave equation)

\, \partial_t - k \partial_x^2 : \quad G = \Theta(t) \left( \dfrac{1}{4\pi k t} \right)^{1/2} e^{-x^2 / 4kt} \quad (1D diffusion)

\, \partial_t - k \, \nabla_{\text{2D}}^2 : \quad G = \Theta(t) \left( \dfrac{1}{4\pi k t} \right) e^{-\rho^2 / 4kt} \quad (2D diffusion)

\, \partial_t - k \, \nabla_{\text{3D}}^2 : \quad G = \Theta(t) \left( \dfrac{1}{4\pi k t} \right)^{3/2} e^{-r^2 / 4kt} \quad (3D diffusion)

\, \dfrac{1}{c^2} \partial_t^2 - \partial_x^2 + \mu^2 : \quad G = \dfrac{1}{2} \left[ \left( 1 - \sin(\mu c t) \right) \left( \delta(ct - x) + \delta(ct + x) \right) + \mu \, \Theta(ct - |x|) \, J_0(\mu u) \right] \, with \, u = \sqrt{c^2 t^2 - x^2} \quad (1D Klein–Gordon equation)

\, \dfrac{1}{c^2} \partial_t^2 - \nabla_{\text{2D}}^2 + \mu^2 : \quad G = \dfrac{1}{4\pi} \left[ \left( 1 + \cos(\mu c t) \right) \dfrac{\delta(ct - \rho)}{\rho} + \mu^2 \, \Theta(ct - \rho) \, \operatorname{sinc}(\mu u) \right] \, with \, u = \sqrt{c^2 t^2 - \rho^2} \quad (2D Klein–Gordon equation)

\, \square + \mu^2 : \quad G = \dfrac{1}{4\pi} \left[ \dfrac{\delta\left( t - \frac{r}{c} \right)}{r} + \mu c \, \Theta(ct - r) \, \dfrac{J_1(\mu u)}{u} \right] \, with \, u = \sqrt{c^2 t^2 - r^2} \quad (3D Klein–Gordon equation)

\, \partial_t^2 + 2\gamma \partial_t - c^2 \partial_x^2 : \quad G = \dfrac{1}{2} e^{-\gamma t} \left[ \delta(ct - x) + \delta(ct + x) + \Theta(ct - |x|) \left( \dfrac{\gamma}{c} I_0\left( \dfrac{\gamma u}{c} \right) + \dfrac{\gamma t}{u} I_1\left( \dfrac{\gamma u}{c} \right) \right) \right] \, with \, u = \sqrt{c^2 t^2 - x^2} \quad (telegrapher’s equation)

\, \partial_t^2 + 2\gamma \partial_t - c^2 \, \nabla_{\text{2D}}^2 : \quad G = \dfrac{e^{-\gamma t}}{4\pi} \left[ \left( 1 + e^{-\gamma t} + 3\gamma t \right) \dfrac{\delta(ct - \rho)}{\rho} + \Theta(ct - \rho) \left( \dfrac{\gamma \sinh\left( \frac{\gamma u}{c} \right)}{c u} + \dfrac{3\gamma t \cosh\left( \frac{\gamma u}{c} \right)}{u^2} - \dfrac{3 c t \sinh\left( \frac{\gamma u}{c} \right)}{u^3} \right) \right] \, with \, u = \sqrt{c^2 t^2 - \rho^2} \quad (2D relativistic heat conduction)

\, \partial_t^2 + 2\gamma \partial_t - c^2 \, \nabla_{\text{3D}}^2 : \quad G = \dfrac{e^{-\gamma t}}{20\pi} \left[ \left( 8 - 3 e^{-\gamma t} + 2\gamma t + 4\gamma^2 t^2 \right) \dfrac{\delta(ct - r)}{r^2} + \dfrac{\gamma^2}{c} \, \Theta(ct - r) \left( \dfrac{1}{c u} I_1\left( \dfrac{\gamma u}{c} \right) + \dfrac{4t}{u^2} I_2\left( \dfrac{\gamma u}{c} \right) \right) \right] \, with \, u = \sqrt{c^2 t^2 - r^2} \quad (3D relativistic heat conduction)
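
As a quick sanity check of one entry in the table (an illustrative sketch, assuming NumPy): the 1D diffusion kernel should carry unit total mass for every \, t > 0 \, and should satisfy \, \partial_t G - k \, \partial_x^2 G = 0 \, away from the source at \, (x, t) = (0, 0) .

  # Check of the 1D diffusion entry: G(x, t) = (4 pi k t)^(-1/2) exp(-x^2 / (4 k t)) for t > 0.
  import numpy as np

  k = 0.7

  def G(x, t):
      return np.exp(-x**2 / (4.0 * k * t)) / np.sqrt(4.0 * np.pi * k * t)

  x = np.linspace(-10.0, 10.0, 4001)
  h = x[1] - x[0]
  t, dt = 1.0, 1e-4

  print(np.trapz(G(x, t), dx=h))                # ~ 1: the delta-function initial mass is conserved

  residual = ((G(x, t + dt) - G(x, t - dt)) / (2.0 * dt)
              - k * (G(x + h, t) - 2.0 * G(x, t) + G(x - h, t)) / h**2)
  print(np.max(np.abs(residual)))               # ~ 0: G solves the diffusion equation for t > 0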
