Vector Analysis

This page is a sub-page of our page on Calculus of Several Real Variables.

///////

The interactive simulations on this page can be navigated with the Free Viewer
of the Graphing Calculator.

///////

Related KMR-pages:

Gradients
Linear Algebra

Exact Differential Forms
Partial Differential Equations

Complex Derivative
Complex trigonometry
Conformal Mapping
Inversion
Möbius transformations
Steiner Circles
Stereographic Projection
The Riemann Zeta function
Einstein for Flatlanders

In Swedish:

Vektoranalys

///////

Books:

In Swedish:

• Ramgard, A., Vektoranalys (2nd edition),
Teknisk Högskolelitteratur i Stockholm AB (THS AB), 1992.

/////// Translating from Ramgard (1992, page 1):

1.1 Vector-valued functions

Definition 1: A vector-valued function \, \textbf{A} \, is a function whose codomain \, B \, consists of vectors. Let us assume that the domain \, D \, of \, \textbf{A} \, consists of \, n -tuples \, (u, v, \cdots) \, of real numbers. In that case the function \, \textbf{A} \, uniquely associates a vector \, \textbf{A} (u, v, \cdots) \, with every set of values of the independent variables \, u, v, \cdots \, that corresponds to a point in \, D .

In general, we will consider vectors that belong to a three-dimensional vector space and therefore can be represented by arrows in the “usual” three-dimensional space \, \mathbb{E}^3 . We often use cartesian coordinates \, x, y, z \, in order to label the points in \, \mathbb{E}^3 . By a cartesian coordinate system we always mean an orthogonal and right-handed system.

An arbitrary vector \, \textbf{A} \, can be referred to the basis-vectors \, {\textbf{e}}_x, {\textbf{e}}_y, {\textbf{e}}_z \, in the cartesian coordinate system:

\textbf{A} \, = \, (A_x, A_y, A_z) \, \equiv \, A_x {\textbf{e}}_x + A_y {\textbf{e}}_y + A_z {\textbf{e}}_z. \qquad \qquad \qquad \qquad \qquad (1.1)

According to (1.1) a vector-valued function is equivalent to three real-valued functions:

\, \textbf{A} (u, v, \cdots) \, \equiv \,  (A_x(u, v, \cdots), A_y(u, v, \cdots), A_z(u, v, \cdots)) .
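
For readers who want to compute along, here is a minimal Python sketch of this identity; the three component functions below are my own illustrative choices, not Ramgard's:

```python
import numpy as np

def A(u, v):
    """A sample vector-valued function A(u, v) = (A_x, A_y, A_z).
    The three component functions are arbitrary illustrations."""
    return np.array([u * v,         # A_x(u, v)
                     np.sin(u),     # A_y(u, v)
                     u + v**2])     # A_z(u, v)

print(A(1.0, 2.0))  # -> [2.0, 0.8414..., 5.0]
```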

Notation: From now on we will often refer to a vector-valued function just as a function, and distinguish it by marking its symbol as a vector (i.e., in bold).

Definition 2: A vector-valued function \textbf{A} (u, v, \cdots) \, is said to be continuous at the point \, (u, v, \cdots) \, if, for each value of \, \epsilon > 0 , one can find a \, \delta(\epsilon) > 0 \, such that

\, 0 < | \, \Delta u \, | < \delta \, , \, 0 < | \, \Delta v \, | < \delta \, , \, \cdots \, \implies

| \, \textbf{A}(u + \Delta u, v + \Delta v, \cdots) \, - \, \textbf{A} (u, v, \cdots) \, | \, < \, \epsilon .

\textbf{A}(u, v, \cdots) \, is continuous if and only if
the component functions \, A_x(u, v, \cdots), \cdots \, are continuous functions.

Definition 3: A function \textbf{A} (t) \, has the limit \textbf{A} (t_0) \, when \, t \, tends to \, t_0 \, :

\lim\limits_{t \rightarrow t_0} \, \textbf{A} (t) \, = \, \textbf{A} (t_0) ,

if, for each value of \, \epsilon > 0 , there exists a \, \delta(\epsilon) > 0 \, such that:

0 < | \, t - t_0 \, | < \delta \, \implies \, | \, \textbf{A} (t) - \, \textbf{A} (t_0) \, | \, < \epsilon .

/////// End of the translation from Ramgard (1992).

/////// Translating from Ramgard (1992, page 7):

2.1 Differentiation and integration of vector-valued functions

Derivatives of vector-valued functions are formally defined in the same way as derivatives of scalar-valued functions:

Definition 4: Let \textbf{A} (u) \, be a vector-valued function, and let

\, \Delta \textbf{A} \, \equiv \, \textbf{A}(u + \Delta u) - \textbf{A}(u) .

If the limit \, \lim\limits_{\Delta u \rightarrow 0} \, \frac{\Delta \textbf{A}}{\Delta u} \, exists,
the function \textbf{A} (u) \, is said to have the derivative \, \frac{d \textbf{A}}{du} \stackrel {\mathrm{def}}{=} \lim\limits_{\Delta u \rightarrow 0} \, \frac{\Delta \textbf{A}}{\Delta u} .

\, \frac{d \textbf{A}}{du} \, , which is computed as the limit of the quotient between a vector and a scalar, is obviously also a vector-valued function. If \, \frac{d \textbf{A}}{du} \, is differentiated we get the second derivative \, \frac{d^2 \textbf{A}}{du^2} \, etc.

We get a geometric interpretation of the derivative if we lay out all the vectors \, \textbf{A} (u) \, from a common point \, O . Then the tips of these vectors trace out a curve in space,
the so-called hodograph, and \, \frac{d \textbf{A}}{du} \, is a tangent vector to this curve.
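
This is easy to check numerically. In the hedged sketch below (the helix is an assumed example, not Ramgard's), a central difference quotient reproduces the exact tangent vector of the hodograph:

```python
import numpy as np

def A(u):
    # The hodograph of this example function is a helix.
    return np.array([np.cos(u), np.sin(u), u])

def dA_du(u, h=1e-6):
    # Central-difference approximation of dA/du.
    return (A(u + h) - A(u - h)) / (2 * h)

u0 = 0.7
print(dA_du(u0))                                  # numerical tangent vector
print(np.array([-np.sin(u0), np.cos(u0), 1.0]))   # exact derivative, for comparison
```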

The cartesian components of the derivative of a vector
are computed by differentiation of the cartesian components of the vector.

Theorem 2.1 \,\, \frac{d \textbf{A}}{du} = \frac{d}{du} (A_x, A_y, A_z) = ( \frac{d A_x}{du}, \frac{d A_y}{du}, \frac{d A_z}{du} ) .

Proof: By component-wise differentiation (see page 8). \qquad \qquad \qquad \qquad \qquad \boxdot

One can also take theorem 2.1 as the definition of \, \frac{d \textbf{A}}{du} .

Theorem 2.2 Let \, \textbf{A} (u) \, and \, \textbf{B} (u) \, be vector-valued functions
and let \, \Phi (u) \, be a scalar-valued function.
Then the following differentiation rules apply:

\,\frac{d}{du}(\textbf{A} + \textbf{B}) = \frac{d \textbf{A}}{du} + \frac{d \textbf{B}}{du} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.3)

\,\frac{d}{du}(\textbf{A} \cdot \textbf{B}) = \frac{d \textbf{A}}{du} \cdot \textbf{B} + \textbf{A} \cdot \frac{d \textbf{B}}{du} \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \;\, (2.4)

\,\frac{d}{du}(\textbf{A} \times \textbf{B}) = \frac{d \textbf{A}}{du} \times \textbf{B} + \textbf{A} \times \frac{d \textbf{B}}{du} \qquad \qquad \qquad \qquad \qquad \quad \qquad \;\;\; (2.5)

\,\frac{d}{du}(\Phi \textbf{A}) = \frac{d \Phi}{du}\textbf{A} + \Phi \frac{d \textbf{A}}{du}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\; (2.6)

Proof: [The proofs of these differentiation rules] are formally identical to the proofs of the corresponding differentiation rules for real-valued functions. This is because the proofs only make use of arithmetical laws – the commutativity of addition and the distributivity of multiplication w.r.t. addition – which hold for vectors as well as for scalars. \qquad \quad \boxdot

Alternatively, one can prove (2.3 – 2.6) by making use of theorem 2.1 and the component representations of \, \textbf{A} + \textbf{B} \, , \, \textbf{A} \cdot \textbf{B} \, , \, \cdots
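
A quick numerical sanity check of (2.4) and (2.5) in Python; the particular functions \, \textbf{A} \, and \, \textbf{B} \, are illustrative assumptions:

```python
import numpy as np

A = lambda u: np.array([u**2, np.sin(u), 1.0])
B = lambda u: np.array([np.cos(u), u, u**3])

def d(f, u, h=1e-6):
    # Central-difference derivative of a scalar- or vector-valued f.
    return (f(u + h) - f(u - h)) / (2 * h)

u0 = 1.3
# Rule (2.4): d/du (A . B) = A' . B + A . B'
lhs_dot = d(lambda u: A(u) @ B(u), u0)
rhs_dot = d(A, u0) @ B(u0) + A(u0) @ d(B, u0)
# Rule (2.5): d/du (A x B) = A' x B + A x B'
lhs_cross = d(lambda u: np.cross(A(u), B(u)), u0)
rhs_cross = np.cross(d(A, u0), B(u0)) + np.cross(A(u0), d(B, u0))
print(np.allclose(lhs_dot, rhs_dot), np.allclose(lhs_cross, rhs_cross))  # True True
```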

[…]

Theorem 2.3 Assume that \, \textbf{A} = \textbf{A}(u) \, and \, u = u(v) \, are differentiable functions of \, u \, and \, v , respectively. Then we have

\, \frac{d}{dv} \textbf {A}(u(v)) \, = \, \frac{d \textbf{A}}{du} \frac{du}{dv}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\; (2.7)

Proof: Make use of theorem 2.1 and the chain rule for real-valued functions. \qquad \boxdot

2.2 Partial derivatives of vector-valued functions

Definition 5: By the partial derivative of \, \textbf{A}(u, v, \cdots) \, with respect to \, u \,
we mean the following limit (assuming that it exists):

\, \frac{\partial \textbf{A}}{\partial u} \, = \, \lim\limits_{\Delta u \rightarrow 0} \, \frac{ \textbf{A}(u + \Delta u, v, \cdots) \, - \, \textbf{A}(u, v, \cdots) }{\Delta u} .

Theorem 2.4 \,\, \frac{\partial \textbf{A}}{\partial u} = \frac{ \partial }{\partial u} (A_x, A_y, A_z) = ( \frac{\partial A_x}{\partial u}, \frac{\partial A_y}{\partial u}, \frac{\partial A_z}{\partial u} ) .

Theorem 2.5 If \, \textbf{A}(u, v, \cdots) \, and \, \textbf{B} (u, v, \cdots) \, are differentiable, vector-valued functions, and if \, \Phi (u, v, \cdots) \, is a differentiable, scalar-valued function, then the following partial differentiation rules apply:

\,\frac{\partial}{\partial u}(\textbf{A} + \textbf{B}) = \frac{\partial \textbf{A}}{\partial u} + \frac{\partial \textbf{B}}{\partial u} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.10)

\,\frac{\partial}{\partial u}(\textbf{A} \cdot \textbf{B}) = \frac{\partial \textbf{A}}{\partial u} \cdot \textbf{B} + \textbf{A} \cdot \frac{\partial \textbf{B}}{\partial u} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\, (2.11)

\,\frac{\partial}{\partial u}(\textbf{A} \times \textbf{B}) = \frac{\partial \textbf{A}}{\partial u} \times \textbf{B} + \textbf{A} \times \frac{\partial \textbf{B}}{\partial u} \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\;\;\, (2.12)

\,\frac{\partial}{\partial u}(\Phi \textbf{A}) = \frac{\partial \Phi}{\partial u}\textbf{A} + \Phi \frac{\partial \textbf{A}}{\partial u}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\;\, (2.13)

/////// End of the translation from Ramgard (1992).

/////// Translating from Ramgard (1992, page 11):

2.3 Differentials of vector-valued functions

Let \, \textbf{A}(u, v, \cdots) \, be a vector-valued function
whose partial derivatives \, \partial \textbf{A} / \partial u , \, \partial \textbf{A} / \partial v , \, \cdots \, are continuous functions.

We introduce

Definition 6: The change \, \Delta \textbf{A} \, in the value of the vector-valued function \, \textbf{A} :

\, \Delta \textbf{A} \equiv \textbf{A}(u + \Delta u, v + \Delta v, \, \cdots) \, - \textbf{A}(u , v, \, \cdots) \,  \,\qquad \qquad \qquad \quad \;\; (2.17)

Definition 7: The differential \, d \textbf{A} \, of the value of the vector-valued function \, \textbf{A} :

\, d \textbf{A} \, \equiv \, \frac{\partial \textbf{A}}{\partial u} du \, + \, \frac{\partial \textbf{A}}{\partial v} dv \, + \, \cdots \, . \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \; (2.18)

Theorem 2.7 The change \, \Delta \textbf{A} \, is approximated arbitrarily closely by the differential \, d \textbf{A} \, in the sense that:

\, \Delta \textbf{A} \, = \, d \textbf{A} + \textbf{h} du + \textbf{k} dv + \cdots \, , \qquad \qquad \qquad \qquad \qquad \qquad \qquad \; (2.19)

where \, \textbf{h} \, and \, \textbf{k} \, are vectors whose lengths go to zero when \, d u , \, d v , \, \cdots \, go to zero.
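
Theorem 2.7 can also be illustrated numerically. In the sketch below (the function is my own example) the error \, \Delta \textbf{A} - d \textbf{A} \, shrinks faster than the step size itself:

```python
import numpy as np

def A(u, v):
    # An example vector-valued function of two variables.
    return np.array([u * v, np.sin(u * v), u**2 + v])

def dA(u, v, du, dv, h=1e-7):
    # dA = (dA/du) du + (dA/dv) dv, with the partials from central differences.
    Au = (A(u + h, v) - A(u - h, v)) / (2 * h)
    Av = (A(u, v + h) - A(u, v - h)) / (2 * h)
    return Au * du + Av * dv

u, v = 0.5, 1.2
for step in (1e-1, 1e-2, 1e-3):
    delta = A(u + step, v + step) - A(u, v)          # the change, with du = dv = step
    err = np.linalg.norm(delta - dA(u, v, step, step))
    print(step, err / step)                          # the ratio tends to 0 with the step
```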

[…]

Theorem 2.8 If \, \textbf{A} \, , \, \textbf{B} \, and \, \Phi \, are differentiable functions,
the following rules for differentiation apply:

\, d (\textbf{A} + \textbf{B}) \, = \, d \textbf{A} + d \textbf{B} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.20)

\, d (\textbf{A} \cdot \textbf{B}) \, = \, d \textbf{A} \cdot \textbf{B} + \textbf{A} \cdot d \textbf{B} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\, (2.21)

\, d (\textbf{A} \times \textbf{B}) \, = \, d \textbf{A} \times \textbf{B} + \textbf{A} \times d \textbf{B} \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\;\;\, (2.22)

\, d (\Phi \textbf{A}) \, = \, d \Phi \textbf{A} + \Phi d \textbf{A}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\;\, (2.23)

[…]

2.4 The differential of the position vector

The position vector \, \textbf{r} \, from the origin \, O \, to a point \, P \,
can be viewed as a function of the cartesian coordinates \, x, y, z \, of \, P \, :

\, \textbf{r} \, = \, \textbf{r}(x, y, z) \, = \, x {\textbf{e}}_x + y {\textbf{e}}_y + z {\textbf{e}}_z \, = \, (x, y, z). \qquad \qquad \qquad \qquad \; (2.24)

The cartesian components of the differential of the position vector \, \textbf{r}
can be obtained as the differentials of the cartesian components of \, \textbf{r} :

\, d \textbf{r} \, = \, (d x, d y, d z). \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \, (2.25)

\, d \textbf{r} \, approximates the change \, \Delta \textbf{r} \, of the position vector
when moving from the point \, P : x, y, z \,
to a neighboring point \, P' : x + d x, y + d y, z + d z .

In this special case there is exact equality,
since the function (2.24) is linear in the independent variables.

/////// End of the translation from Ramgard (1992).

/////// Translating from Ramgard (1992, page 19):

3.1 The gradient and the directional derivative

Let \, \Phi \, be a continuously differentiable scalar field, by which we mean
(and will mean below) that the three partial (first) derivatives are continuous functions.

At the point \, \textbf{r}(x, y, z) \, the field assumes the value \, \Phi(x, y, z) \,
and at the neighboring point \, \textbf{r} + d \textbf{r} = (x + d x, y + d y, z + d z) \,
the field assumes the value \, \Phi(x, y, z) + \Delta \Phi , where

\, \Delta \Phi \, \approx \, d \Phi \, = \, \frac{\partial \Phi}{\partial x} d x + \frac{\partial \Phi}{\partial y} d y + \frac{\partial \Phi}{\partial z} d z. \qquad \qquad \qquad \qquad \qquad \qquad \;\; (3.1)

The partial derivatives in (3.1) are evaluated at the point \, \textbf{r} .

We now introduce a continuous vector field \, \text{grad} \, \Phi ,
which concisely describes the variation of \, \Phi \, in the immediate vicinity of each point:

Definition 8: The gradient of the scalar field \, \Phi \, is the vector field:

\, \text{grad} \, \Phi \, \stackrel {\mathrm{def}}{=} \, (\frac{\partial \Phi}{\partial x}, \frac{\partial \Phi}{\partial y}, \frac{\partial \Phi}{\partial z}). \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (3.2)

IMPORTANT: The expression (3.2) is only valid if the coordinate system is orthonormal.

Evidently, the differential of \, \Phi \, in (3.1) can be written as the scalar product of \, \text{grad} \, \Phi \, and the differential of the position vector:

\, d \Phi = \text{grad} \, \Phi \cdot d \textbf{r} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\; (3.3)

IMPORTANT: The expression (3.3) can be used as a coordinate-free definition of the gradient, since it does not refer to any specific coordinate system in space.

We now introduce into equation (3.3) the modulus \, d s \, and the direction unit-vector \, \textbf{e} \, of the position-vector differential \, d \textbf{r} \, :

\, d \textbf{r} = \textbf{e} \, d s \, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \;\; (3.4)

Next we insert (3.4) into (3.3) and divide by \, d s . In this way we arrive at:

Definition 9:  The directional derivative along the direction \, \textbf{e} \, away from the point \, \textbf{r} \, :

\, \frac{d \Phi}{d s} = \text{grad} \, \Phi \cdot \textbf{e}. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (3.5)

The rate of increase of \, \Phi \, along a given direction \, \textbf{e} \, is therefore equal to the component of the gradient vector \, \text{grad} \, \Phi \, along this direction.

Alternatively, one can define the directional derivative as:

\, \frac{d \Phi}{d s} = \lim\limits_{s \, \rightarrow \, 0} \frac{ \Phi ( \textbf{r} + s \textbf{e} ) - \Phi ( \textbf{r} ) }{ s }. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\; (3.6)
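
The two characterizations (3.5) and (3.6) are easy to compare numerically; in the following Python sketch the scalar field \, \Phi \, and the direction \, \textbf{e} \, are illustrative assumptions:

```python
import numpy as np

Phi = lambda r: r[0]**2 * r[1] + np.sin(r[2])   # sample scalar field

def grad(Phi, r, h=1e-6):
    # Numerical gradient, following (3.2), in cartesian coordinates.
    g = np.zeros(3)
    for i in range(3):
        e_i = np.zeros(3); e_i[i] = h
        g[i] = (Phi(r + e_i) - Phi(r - e_i)) / (2 * h)
    return g

r = np.array([1.0, 2.0, 0.5])
e = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)      # unit direction vector

s = 1e-6
dPhi_ds_limit = (Phi(r + s * e) - Phi(r)) / s   # definition (3.6)
dPhi_ds_grad = grad(Phi, r) @ e                 # formula (3.5)
print(dPhi_ds_limit, dPhi_ds_grad)              # agree to about 1e-6
```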

Theorem 3.1 The value of \, \text{grad} \, \Phi \, at the point \, P \, , the vector \, {(\text{grad} \, \Phi)}_P \, , points in the direction along which \, \Phi \, increases the fastest when moving away from \, P .
Moreover, the maximal increase of \, \Phi \, per unit of length is equal to \, | {(\text{grad} \, \Phi)}_P | .

Proof: The directional derivative along the direction \, \textbf{e} \, :

\, \frac{d \Phi}{d s} = \text{grad} \, \Phi \cdot \textbf{e} = | {(\text{grad} \Phi)}_P | \, \cos \alpha ,

has its maximum equal to \, | {(\text{grad} \, \Phi)}_P | \, when \, \alpha = 0 ,
that is, when \, \textbf{e} \, \shortparallel \, \text{grad} \, \Phi . \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\; \boxdot

Theorem 3.2 If \, \Phi \, has a maximum or a minimum at a point, then \, \text{grad} \, \Phi = 0 \, at this point.

Theorem 3.3 \, \text{grad} \, \Phi \, at the point \, P \, is orthogonal to the level surface \, \Phi = c \, that passes through the point \, P .

Proof: The value of the scalar field remains unchanged under a small displacement \, d \textbf{r} \, along a level surface: \, d \Phi = \text{grad} \, \Phi \cdot d \textbf{r} = 0 , which says that \, \text{grad} \, \Phi \, is orthogonal to each \, d \textbf{r} \, in the level surface, i.e., that \, \text{grad} \, \Phi \, is orthogonal to the level surface itself. \, \qquad \qquad \qquad \qquad \qquad \qquad \quad \boxdot

Theorem 3.4 The perpendicular distance at the point \, P \,
between the closely situated level surfaces \, \Phi = c \, and \, \Phi = c + h \,
is approximately equal to:

\, \Delta s \approx \frac{ h }{ | {(\text{grad} \, \Phi)}_P | } \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \; (3.7)

Proof: Let \, d \textbf{r} \, in (3.3) be orthogonal to \, \Phi = c \, i.e., parallel to \, {(\text{grad} \, \Phi)}_P .
Moreover, let \, d\Phi \approx \Delta \Phi = h \, and \, | d \textbf{r} | \approx \Delta s . \qquad \qquad \qquad \qquad \qquad \quad \boxdot

The density of surfaces in the family of level surfaces \, \Phi = c + n h \, , \, n \in \mathbb{Z} \,
is therefore directly proportional to the modulus of the gradient vector.

/////// End of the translation from Ramgard (1992).

/////// Translating from Ramgard (1992, page 22):

3.2 The potential

Definition 10: Consider a vector field \, \textbf{A} . If there exists a scalar field \, \Phi \, such that:

\, \textbf{A} = \text{grad} \, \Phi \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\; (3.8)

then the vector field \, \textbf{A} \, is said to have the (scalar) potential \, \Phi .

The potential for \, \textbf{A} \, is determined only up to an arbitrary additive constant: if \, \textbf{A} = \text{grad} \, {\Phi}_1 = \text{grad} \, {\Phi}_2 \, holds true, then we have \, \text{grad} \, ( {\Phi}_1 - {\Phi}_2 ) = 0 \, and this implies that \, {\Phi}_1 - {\Phi}_2 = c , that is, \, {\Phi}_1 = {\Phi}_2 + c .

Theorem 3.5 If the continuously differentiable vector field \, \textbf{A} \, has a potential, then we have

\, \frac{\partial A_y}{\partial x} = \frac{\partial A_x}{\partial y} \, , \quad \small \text {cycl.} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad  \;\;\;\;   (3.9)

Proof:

\, \frac{\partial A_y}{\partial x} = \frac{\partial}{\partial x} \frac{\partial \Phi}{\partial y} = \frac{\partial}{\partial y} \frac{\partial \Phi}{\partial x} = \frac{\partial A_x}{\partial y} \, , \quad \small \text {cycl.} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \boxdot

Conversely, we will see (in chapter 7) that, given certain prerequisites, one can conclude from (3.9) that the vector field \, \textbf{A} \, has a potential.
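
The necessary condition (3.9) is easy to test symbolically. The following sympy sketch (the field is my own example, constructed to have a potential) checks the mixed partials cyclically and then recovers a potential by integration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# A = grad(x*y*z + x**2), so a potential exists by construction.
A = sp.Matrix([y*z + 2*x, x*z, x*y])

checks = [sp.simplify(sp.diff(A[1], x) - sp.diff(A[0], y)),
          sp.simplify(sp.diff(A[2], y) - sp.diff(A[1], z)),
          sp.simplify(sp.diff(A[0], z) - sp.diff(A[2], x))]
print(checks)   # [0, 0, 0]: condition (3.9) holds cyclically

# Recover Phi by integrating A_x w.r.t. x; in general an integration
# "constant" g(y, z) must still be determined, but here it is zero.
Phi = sp.integrate(A[0], x)   # x**2 + x*y*z
print(sp.simplify(sp.Matrix([sp.diff(Phi, v) for v in (x, y, z)]) - A))  # zero vector
```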

Often a potential \, U(\textbf{r}) \, for \, \textbf{A} is defined by the equation:

\, \textbf{A} = - \text{grad} \, U. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (3.10)

The relationship between \, \Phi \, and \, U \, is given by

\, U(\textbf{r}) = - \Phi(\textbf{r}) + c. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \, (3.11)

/////// End of the translation from Ramgard (1992).

///////

The complex exponential function

///////

Electromagnetic radiation

A planar electro-magnetic wave, MathRehab on YouTube.
Some light quantum mechanics, 3Blue1Brown on YouTube.
Bell’s Theorem: The Quantum Venn Diagram Paradox, minutephysics on YouTube.

///////

The interactive simulation that created this movie.

The electric part of the wave: \, E(\mathbf{\hat{k}}, \mathbf{x}, \omega, t) \, = \, e^{ \, i \,(\mathbf{\hat{k}} \cdot \mathbf{x} \, - \, \omega \, t)} \,

The magnetic part of the wave: \, B(\mathbf{\hat{k}}, \mathbf{x}, \omega, t) \, = \, e^{ \, i \, (\mathbf{\hat{k}} \cdot \mathbf{x} \, - \, (\omega \, + \, \pi/2) \, t)} \,

The entire wave: \, E_m(\mathbf{\hat{k}}, \mathbf{x}, \omega, t) \, = \, E(\mathbf{\hat{k}}, \mathbf{x}, \omega, t) \, + \, B(\mathbf{\hat{k}}, \mathbf{x}, \omega, t) \,

Its Poynting vector: \, S \, = \, \frac{1}{{\mu}_0} \, E \, \times \, B
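
A rough numerical sketch of the Poynting-vector computation, under strong simplifying assumptions that are mine rather than the page's (real unit-amplitude fields, propagation along z, E along x, B along y):

```python
import numpy as np

mu0 = 4e-7 * np.pi                   # vacuum permeability (SI)
c = 3.0e8                            # speed of light (approximate, m/s)
k_hat = np.array([0.0, 0.0, 1.0])    # assumed propagation direction
omega = 2 * np.pi * 1.0e9            # assumed angular frequency (1 GHz)
k = (omega / c) * k_hat

def fields(x, t):
    """Real parts of unit-amplitude plane-wave phasors,
    with E along x and B along y (purely illustrative)."""
    phase = k @ x - omega * t
    E = np.array([np.cos(phase), 0.0, 0.0])
    B = np.array([0.0, np.cos(phase), 0.0]) / c   # |B| = |E| / c for a vacuum plane wave
    return E, B

E, B = fields(np.zeros(3), t=0.0)
S = np.cross(E, B) / mu0             # Poynting vector, points along k_hat
print(S)
```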

///////

Electromagnetism

Maxwell and Dirac theories as an already unified theory

Conceptual background:

Geometric Algebra

Clifford Algebra

Historical background:

The Evolution Of Geometric Arithmetic

///////

Divergence and curl: The language of Maxwell’s equations, fluid flow, and more
(3Blue1Brown on YouTube):

///////

/////// Translated from Folke Eriksson, Flerdimensionell analys, p.98

The gradient as a covariant vector:

/////// End of translation from Eriksson, Flerdimensionell analys.

////////////////////////////////////////////////////////////////////////////////////////////////

Green’s identities – Wikipedia

In mathematics, Green’s identities are a set of three identities in vector calculus relating the bulk with the boundary of a region on which differential operators act. They are named after the mathematician George Green, who discovered Green’s theorem.

Green’s first identity

This identity is derived from the divergence theorem applied to the vector field F = ψ∇φ, together with the product rule ∇ ⋅ (ψX) = ∇ψ ⋅ X + ψ ∇⋅X. Let φ and ψ be scalar functions defined on some region U ⊂ ℝ^d, and suppose that φ is twice continuously differentiable and ψ is once continuously differentiable. Using the product rule above with X = ∇φ, and integrating ∇⋅(ψ∇φ) over U, one obtains[1]

\int_U \left( \psi \, \Delta \varphi + \nabla \psi \cdot \nabla \varphi \right) dV \, = \, \oint_{\partial U} \psi \left( \nabla \varphi \cdot \mathbf{n} \right) dS \, = \, \oint_{\partial U} \psi \, \nabla \varphi \cdot d\mathbf{S}

where Δ ≡ ∇² is the Laplace operator, ∂U is the boundary of the region U, n is the outward-pointing unit normal to the surface element dS, and dS = n dS is the oriented surface element.

This theorem is a special case of the divergence theorem, and is essentially the higher dimensional equivalent of integration by parts with ψ and the gradient of φ replacing u and v.
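
As a concrete check, both sides of Green's first identity can be evaluated symbolically on the unit cube; the polynomial choices of ψ and φ below are my own illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
psi = x * y            # illustrative choices
phi = x**2 + y * z

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
lap = lambda f: sum(sp.diff(f, v, 2) for v in (x, y, z))

# Volume side: integral of psi*Laplace(phi) + grad(psi).grad(phi) over [0,1]^3.
volume = sp.integrate(psi * lap(phi) + grad(psi).dot(grad(phi)),
                      (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Boundary side: outward flux of F = psi*grad(phi) through the six faces.
F = psi * grad(phi)
flux = (sp.integrate(F[0].subs(x, 1) - F[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
      + sp.integrate(F[1].subs(y, 1) - F[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
      + sp.integrate(F[2].subs(z, 1) - F[2].subs(z, 0), (x, 0, 1), (y, 0, 1)))

print(volume, flux)    # both equal 5/4
```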

Note that Green’s first identity above is a special case of the more general identity derived from the divergence theorem by substituting F = ψΓ:

\int_U \left( \psi \, \nabla \cdot \mathbf{\Gamma} + \mathbf{\Gamma} \cdot \nabla \psi \right) dV \, = \, \oint_{\partial U} \psi \left( \mathbf{\Gamma} \cdot \mathbf{n} \right) dS \, = \, \oint_{\partial U} \psi \, \mathbf{\Gamma} \cdot d\mathbf{S} \, .

Green’s second identity

If φ and ψ are both twice continuously differentiable on U ⊂ ℝ³, and ε is once continuously differentiable, one may choose F = ψε∇φ − φε∇ψ to obtain:

\int_U \left[ \psi \, \nabla \cdot \left( \varepsilon \, \nabla \varphi \right) - \varphi \, \nabla \cdot \left( \varepsilon \, \nabla \psi \right) \right] dV \, = \, \oint_{\partial U} \varepsilon \left( \psi \frac{\partial \varphi}{\partial \mathbf{n}} - \varphi \frac{\partial \psi}{\partial \mathbf{n}} \right) dS .

For the special case of ε = 1 all across U ⊂ ℝ³, this reduces to

\int_U \left( \psi \, \nabla^2 \varphi - \varphi \, \nabla^2 \psi \right) dV \, = \, \oint_{\partial U} \left( \psi \frac{\partial \varphi}{\partial \mathbf{n}} - \varphi \frac{\partial \psi}{\partial \mathbf{n}} \right) dS .

In the equation above, ∂φ/∂n is the directional derivative of φ in the direction of the outward-pointing surface normal n of the surface element dS:

\frac{\partial \varphi}{\partial \mathbf{n}} \, = \, \nabla \varphi \cdot \mathbf{n} \, = \, \nabla_{\mathbf{n}} \varphi .

Explicitly incorporating this definition into Green’s second identity with ε = 1 results in

\int_U \left( \psi \, \nabla^2 \varphi - \varphi \, \nabla^2 \psi \right) dV \, = \, \oint_{\partial U} \left( \psi \nabla \varphi - \varphi \nabla \psi \right) \cdot d\mathbf{S} .

In particular, this demonstrates that the Laplacian is a self-adjoint operator in the L² inner product for functions vanishing on the boundary, so that the right-hand side of the above identity is zero.

Green’s third identity

Green’s third identity derives from the second identity by choosing φ = G, where the Green’s function G is taken to be a fundamental solution of the Laplace operator Δ. This means that:

\Delta G(\mathbf{x}, \boldsymbol{\eta}) \, = \, \delta(\mathbf{x} - \boldsymbol{\eta}) .

For example, in ℝ³, a solution has the form

G(\mathbf{x}, \boldsymbol{\eta}) \, = \, \frac{-1}{4 \pi \, \| \mathbf{x} - \boldsymbol{\eta} \|} .

Green’s third identity states that if ψ is a function that is twice continuously differentiable on U, then

\int_U \left[ G(\mathbf{y}, \boldsymbol{\eta}) \, \Delta \psi(\mathbf{y}) \right] dV_{\mathbf{y}} \, - \, \psi(\boldsymbol{\eta}) \, = \, \oint_{\partial U} \left[ G(\mathbf{y}, \boldsymbol{\eta}) \frac{\partial \psi}{\partial \mathbf{n}}(\mathbf{y}) - \psi(\mathbf{y}) \frac{\partial G(\mathbf{y}, \boldsymbol{\eta})}{\partial \mathbf{n}} \right] dS_{\mathbf{y}} .

A simplification arises if ψ is itself a harmonic function, i.e. a solution to the Laplace equation. Then ∇²ψ = 0 and the identity simplifies to

\psi(\boldsymbol{\eta}) \, = \, \oint_{\partial U} \left[ \psi(\mathbf{y}) \frac{\partial G(\mathbf{y}, \boldsymbol{\eta})}{\partial \mathbf{n}} - G(\mathbf{y}, \boldsymbol{\eta}) \frac{\partial \psi}{\partial \mathbf{n}}(\mathbf{y}) \right] dS_{\mathbf{y}} .

The second term in the integral above can be eliminated if G is chosen to be the Green’s function that vanishes on the boundary of U (Dirichlet boundary condition):

\psi(\boldsymbol{\eta}) \, = \, \oint_{\partial U} \psi(\mathbf{y}) \frac{\partial G(\mathbf{y}, \boldsymbol{\eta})}{\partial \mathbf{n}} \, dS_{\mathbf{y}} .

This form is used to construct solutions to Dirichlet boundary condition problems. Solutions for Neumann boundary condition problems may also be simplified, though the divergence theorem applied to the differential equation defining Green’s functions shows that the Green’s function cannot integrate to zero on the boundary, and hence cannot vanish on the boundary. See Green’s functions for the Laplacian or [2] for a detailed argument, with an alternative.

It can be further verified that the above identity also applies when ψ is a solution to the Helmholtz equation or wave equation and G is the appropriate Green’s function. In such a context, this identity is the mathematical expression of the Huygens principle, and leads to Kirchhoff’s diffraction formula and other approximations.

On manifolds

Green’s identities hold on a Riemannian manifold. In this setting, the first two are

\int_M u \, \Delta v \, dV \, + \, \int_M \langle \nabla u, \nabla v \rangle \, dV \, = \, \int_{\partial M} u \, N v \, d\widetilde{V}

\int_M \left( u \, \Delta v - v \, \Delta u \right) dV \, = \, \int_{\partial M} \left( u \, N v - v \, N u \right) d\widetilde{V}

where u and v are smooth real-valued functions on M, dV is the volume form compatible with the metric, d\widetilde{V} is the induced volume form on the boundary of M, N is the outward-oriented unit vector field normal to the boundary, and Δu = div(grad u) is the Laplacian.

Green’s vector identity

Green’s second identity establishes a relationship between second-order derivatives and (the divergence of) first-order derivatives of two scalar functions. In differential form:

p_m \, \Delta q_m \, - \, q_m \, \Delta p_m \, = \, \nabla \cdot \left( p_m \nabla q_m - q_m \nabla p_m \right) ,

where p_m and q_m are two arbitrary twice continuously differentiable scalar fields. This identity is of great importance in physics because continuity equations can thus be established for scalar fields such as mass or energy.[3]
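
This differential form can be verified symbolically for arbitrary smooth scalar fields; a short sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
p = sp.Function('p')(x, y, z)     # arbitrary smooth scalar fields
q = sp.Function('q')(x, y, z)

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
lap = lambda f: div(grad(f))

lhs = p * lap(q) - q * lap(p)
rhs = div(p * grad(q) - q * grad(p))
print(sp.simplify(lhs - rhs))     # -> 0
```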

In vector diffraction theory, two versions of Green’s second identity are introduced.

One variant invokes the divergence of a cross product[4][5][6] and states a relationship in terms of the curl-curl of the field:

\mathbf{P} \cdot \left( \nabla \times \nabla \times \mathbf{Q} \right) - \mathbf{Q} \cdot \left( \nabla \times \nabla \times \mathbf{P} \right) \, = \, \nabla \cdot \left( \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) - \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) \right) .

This equation can be written in terms of the Laplacians:

\mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} + \mathbf{Q} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{P} \right) \right] - \mathbf{P} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{Q} \right) \right] \, = \, \nabla \cdot \left( \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) - \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) \right) .

However, the terms \, \mathbf{Q} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{P} \right) \right] - \mathbf{P} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{Q} \right) \right] \, cannot readily be written in terms of a divergence.

The other approach introduces bi-vectors; this formulation requires a dyadic Green’s function.[7][8] The derivation presented here avoids these problems.[9]

Consider that the scalar fields in Green’s second identity are the Cartesian components of vector fields, i.e.,

\mathbf{P} \, = \, \sum_m p_m \hat{\mathbf{e}}_m , \qquad \mathbf{Q} \, = \, \sum_m q_m \hat{\mathbf{e}}_m .

Summing up the equation for each component, we obtain

\sum_m \left[ p_m \Delta q_m - q_m \Delta p_m \right] \, = \, \sum_m \nabla \cdot \left( p_m \nabla q_m - q_m \nabla p_m \right) .

The LHS, according to the definition of the dot product, may be written in vector form as

\sum_m \left[ p_m \, \Delta q_m - q_m \, \Delta p_m \right] \, = \, \mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} .

The RHS is a bit more awkward to express in terms of vector operators. By the distributivity of the divergence operator over addition, the sum of the divergences equals the divergence of the sum:

\sum_m \nabla \cdot \left( p_m \nabla q_m - q_m \nabla p_m \right) \, = \, \nabla \cdot \left( \sum_m p_m \nabla q_m - \sum_m q_m \nabla p_m \right) .

Recall the vector identity for the gradient of a dot product:

\nabla \left( \mathbf{P} \cdot \mathbf{Q} \right) \, = \, \left( \mathbf{P} \cdot \nabla \right) \mathbf{Q} + \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) + \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) ,

which, written out in vector components, is given by:

\nabla \left( \mathbf{P} \cdot \mathbf{Q} \right) \, = \, \nabla \sum_m p_m q_m \, = \, \sum_m p_m \nabla q_m + \sum_m q_m \nabla p_m .

This result is similar to what we wish to establish in vector terms, except for the minus sign. Since the differential operators in each term act either over one vector (say p_m) or the other (q_m), the contribution to each term must be

\sum_m p_m \nabla q_m \, = \, \left( \mathbf{P} \cdot \nabla \right) \mathbf{Q} + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) ,

\sum_m q_m \nabla p_m \, = \, \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} + \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) .

These results can be rigorously proven to be correct through evaluation of the vector components. Therefore, the RHS can be written in vector form as

\sum_m p_m \nabla q_m - \sum_m q_m \nabla p_m \, = \, \left( \mathbf{P} \cdot \nabla \right) \mathbf{Q} + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) - \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} - \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) .

Putting together these two results, a result analogous to Green’s theorem for scalar fields is obtained:

Theorem for vector fields:

\mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} \, = \, \nabla \cdot \left[ \left( \mathbf{P} \cdot \nabla \right) \mathbf{Q} + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) - \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} - \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) \right] .

The curl of a cross product can be written as

\nabla \times \left( \mathbf{P} \times \mathbf{Q} \right) \, = \, \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} - \left( \mathbf{P} \cdot \nabla \right) \mathbf{Q} + \mathbf{P} \left( \nabla \cdot \mathbf{Q} \right) - \mathbf{Q} \left( \nabla \cdot \mathbf{P} \right) ;

Green’s vector identity can then be rewritten as

\mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} \, = \, \nabla \cdot \left[ \mathbf{P} \left( \nabla \cdot \mathbf{Q} \right) - \mathbf{Q} \left( \nabla \cdot \mathbf{P} \right) - \nabla \times \left( \mathbf{P} \times \mathbf{Q} \right) + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) - \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) \right] .

Since the divergence of a curl is zero, the third term vanishes, yielding Green’s vector identity:

\mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} \, = \, \nabla \cdot \left[ \mathbf{P} \left( \nabla \cdot \mathbf{Q} \right) - \mathbf{Q} \left( \nabla \cdot \mathbf{P} \right) + \mathbf{P} \times \left( \nabla \times \mathbf{Q} \right) - \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) \right] .
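
Green's vector identity can likewise be checked symbolically. In the sketch below the polynomial fields P and Q are arbitrary illustrative choices, and the vector Laplacian is applied componentwise (valid in cartesian coordinates):

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

# Concrete (but arbitrary) smooth vector fields.
P = x*y*C.i + z**2*C.j + (x + y*z)*C.k
Q = x*z*C.i + y**2*C.j + sp.sin(x)*C.k

def vec_laplacian(F):
    """Componentwise Laplacian, valid in cartesian coordinates."""
    comps = F.to_matrix(C)
    lap = [sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in comps]
    return lap[0]*C.i + lap[1]*C.j + lap[2]*C.k

lhs = P.dot(vec_laplacian(Q)) - Q.dot(vec_laplacian(P))
rhs = divergence(P * divergence(Q) - Q * divergence(P)
                 + P.cross(curl(Q)) - Q.cross(curl(P)))
print(sp.simplify(lhs - rhs))   # -> 0
```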

With a similar procedure, the Laplacian of the dot product can be expressed in terms of the Laplacians of the factors:

\Delta \left( \mathbf{P} \cdot \mathbf{Q} \right) \, = \, \mathbf{P} \cdot \Delta \mathbf{Q} - \mathbf{Q} \cdot \Delta \mathbf{P} + 2 \nabla \cdot \left[ \left( \mathbf{Q} \cdot \nabla \right) \mathbf{P} + \mathbf{Q} \times \left( \nabla \times \mathbf{P} \right) \right] .

As a corollary, the awkward terms can now be written in terms of a divergence by comparison with the vector Green equation:

\mathbf{P} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{Q} \right) \right] - \mathbf{Q} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{P} \right) \right] \, = \, \nabla \cdot \left[ \mathbf{P} \left( \nabla \cdot \mathbf{Q} \right) - \mathbf{Q} \left( \nabla \cdot \mathbf{P} \right) \right] .

This result can be verified by expanding the divergence of a scalar times a vector on the RHS.

