
18.336—Numerical Methods for Partial Differential Equations, Spring 2005

Plamen Koev

May 3, 2005


Contents

1 Hyperbolic PDEs
1.1 Consistency, Stability, Well-posedness, and Convergence
1.2 Fourier Analysis
1.3 Von Neumann Analysis
1.4 The Leap-Frog Scheme
1.5 Dissipation
1.6 Dispersion
1.7 Group Velocity and Propagation of Wave Packets
1.8 Summary of Schemes for the Wave Equation

2 Parabolic Equations
2.1 The Heat Equation
2.2 The Du Fort–Frankel Scheme
2.3 The Convection-Diffusion Equation
2.4 Summary of Schemes for the Heat Equation

3 Systems of PDEs in Higher Dimensions
3.1 The Equation ut + Aux = 0
3.2 The Equation ut + Aux = Bu
3.3 The Equation ut + Aux + Buy = 0
3.4 The Equation ut = b1 uxx + b2 uyy
3.5 ADI Methods
3.6 Boundary Conditions for ADI Methods

4 Elliptic Equations
4.1 Steady-State Heat Equation
4.2 Numerical Methods for uxx + uyy = f
4.3 Jacobi, Gauss–Seidel, and SOR(ω)


Chapter 1

Hyperbolic PDEs

We consider the hyperbolic equation ut + aux = 0

for t ≥ 0 and initial condition u(0, x) = u0(x). The unique solution to this problem is given by

u(t, x) = u0(x− at),

i.e., the solution is a wave traveling right if a > 0 and left if a < 0. We will show (a) how to generate schemes for its numerical solution, (b) that these schemes are good approximations to the differential equation (i.e., are consistent), and (c) that the numerical solution converges to the solution of the differential equation.

The idea in using finite differences to solve a PDE is to select a grid in time and space (with meshlengths k and h, respectively) and to approximate the values u(nk, mh) for integers n, m. In all that follows, u will denote the exact solution to the PDE and

v_m^n ≈ u(nk, mh)   (n indexes time, m indexes space)

will denote the approximate finite difference solution. We will approximate derivatives of a function f as follows:

δ+f(x) = (f(x + h) − f(x))/h     (forward difference)
δ−f(x) = (f(x) − f(x − h))/h     (backward difference)
δ0f(x) = (f(x + h) − f(x − h))/(2h)     (centered difference)

For a grid function v = (. . . , v−2, v−1, v0, v1, v2, . . .) we have:

δ+v_m = (v_{m+1} − v_m)/h     (forward difference)
δ−v_m = (v_m − v_{m−1})/h     (backward difference)
δ0v_m = (v_{m+1} − v_{m−1})/(2h)     (centered difference)
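The three difference quotients are easy to compare numerically. A minimal sketch (the test function sin and the point x = 1 are arbitrary choices, not from the notes):

```python
import math

def forward(f, x, h):   # delta_plus:  (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def backward(f, x, h):  # delta_minus: (f(x) - f(x-h)) / h
    return (f(x) - f(x - h)) / h

def centered(f, x, h):  # delta_0:     (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

h, x = 1e-3, 1.0
exact = math.cos(x)                      # derivative of sin at x
err_fwd = abs(forward(math.sin, x, h) - exact)
err_bwd = abs(backward(math.sin, x, h) - exact)
err_ctr = abs(centered(math.sin, x, h) - exact)
```

The one-sided quotients are first-order accurate while the centered one is second-order, so err_ctr comes out several orders of magnitude smaller than err_fwd and err_bwd.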

1.1 Consistency, Stability, Well-posedness, and Convergence

Definition 1 (Consistency). We say that a finite difference scheme P_{k,h} v = f is consistent with the PDE Pu = f of order (r, s) if for any smooth function φ

Pφ − P_{k,h}φ = O(k^r) + O(h^s).   (1.1)


To verify consistency, expand φ in a Taylor series and check that (1.1) holds.

Definition 2 (L2 norm). For a grid function w = (. . . , w−2, w−1, w0, w1, w2, . . .) on a grid with step size h:

‖w‖ = ( h Σ_{m=−∞}^{∞} |w_m|² )^{1/2}.

For a function f on the real line:

‖f‖ = ( ∫_{−∞}^{∞} |f(x)|² dx )^{1/2}.

Definition 3 (Stability). A one-step finite difference scheme P_{k,h} v_m^n = 0 for a first-order PDE is stable if there exist numbers k0 > 0 and h0 > 0 such that for any T > 0 there exists a constant C_T such that

‖v^n‖ ≤ C_T ‖v^0‖

for 0 ≤ nk ≤ T, 0 < h ≤ h0, 0 < k ≤ k0.

Definition 4 (Well-posedness). The initial value problem for the first-order PDE Pu = 0 is well-posed if for any time T ≥ 0 there is a constant C_T such that any solution u(t, x) satisfies

‖u(t, x)‖ ≤ C_T ‖u(0, x)‖   for 0 ≤ t ≤ T.

Definition 5 (Convergence). A one-step finite difference scheme approximating a PDE is convergent if for any solution u(t, x) of the PDE and any solution v_m^n of the finite difference scheme such that v_m^0 → u(0, x) as mh → x, we have v_m^n → u(t, x) as (nk, mh) → (t, x) (as h, k → 0).

Theorem 1 (Lax). A consistent finite difference scheme for a PDE for which the initial value problem is well-posed is convergent if and only if it is stable.

Theorem 2 (The Courant–Friedrichs–Lewy Condition). A necessary condition for stability of the explicit scheme

v_m^{n+1} = α v_{m−1}^n + β v_m^n + γ v_{m+1}^n

for the hyperbolic equation ut + aux = 0, with λ = k/h held constant, is |aλ| ≤ 1.

Proof. The solution is u(t, x) = u0(x − at), so u(1, 0) = u0(−a). The finite difference value v_0^n depends only on v_m^0 for |m| ≤ n, i.e., on the initial data in [−hn, hn]. For the scheme to converge, the point x = −a must lie in this interval, so hn ≥ |−a|. Since kn = 1 we have n = 1/k, hence h/k ≥ |a|, i.e., |aλ| ≤ 1.

1.2 Fourier Analysis

Fourier Transform and Inversion formula

• For u defined on R:

û(ω) = (1/√2π) ∫_{−∞}^{∞} e^{−iωx} u(x) dx,   u(x) = (1/√2π) ∫_{−∞}^{∞} e^{iωx} û(ω) dω.

• For a grid function v = (. . . , v−2, v−1, v0, v1, v2, . . .) with grid spacing h (here ξ ∈ [−π/h, π/h]):

v̂(ξ) = (1/√2π) Σ_{m=−∞}^{∞} e^{−imhξ} v_m h,   v_m = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} v̂(ξ) dξ.

Theorem 3 (Parseval). ‖u(x)‖ = ‖û(ω)‖ and ‖v‖ = ‖v̂‖, where ‖v̂‖² = ∫_{−π/h}^{π/h} |v̂(ξ)|² dξ.


1.3 Von Neumann Analysis

Von Neumann analysis provides a uniform way of verifying whether a finite difference scheme is stable.

Example 1. Let's study the forward-time backward-space scheme

(v_m^{n+1} − v_m^n)/k + a (v_m^n − v_{m−1}^n)/h = 0.

Rewrite as

v_m^{n+1} = (1 − aλ) v_m^n + aλ v_{m−1}^n,

where λ = k/h. Use the Fourier inversion formula

v_m^n = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} v̂^n(ξ) dξ

and substitute to obtain

(1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} v̂^{n+1}(ξ) dξ = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} [(1 − aλ) + aλ e^{−ihξ}] v̂^n(ξ) dξ.

Call the left-hand integrand (*) and the right-hand integrand (**).

The Fourier transform is unique, so (*) must equal (**):

v̂^{n+1}(ξ) = [(1 − aλ) + aλ e^{−ihξ}] v̂^n(ξ).

Denote g(hξ) ≡ (1 − aλ) + aλ e^{−ihξ}, called the amplification factor. We have

v̂^n(ξ) = g(hξ) v̂^{n−1}(ξ) = . . . = (g(hξ))^n v̂^0(ξ).

Now

‖v^n‖² = ∫_{−π/h}^{π/h} |v̂^n(ξ)|² dξ = ∫_{−π/h}^{π/h} |g(hξ)|^{2n} |v̂^0(ξ)|² dξ.

Therefore ‖v^n‖ ≤ ‖v^0‖ (i.e., the scheme is stable) if |g(hξ)| ≤ 1. Write θ = hξ and evaluate |g(θ)|²:

|g(θ)|² = |(1 − aλ) + aλ e^{−iθ}|²
        = (1 − aλ + aλ cos θ)² + a²λ² sin²θ
        = (1 − 2aλ sin²(θ/2))² + 4a²λ² sin²(θ/2) cos²(θ/2)
        = 1 − 4aλ(1 − aλ) sin²(θ/2).

Thus |g(θ)| ≤ 1 if 0 ≤ aλ ≤ 1. Then ‖v^n‖ ≤ ‖v^0‖ and the scheme is stable if 0 ≤ aλ ≤ 1.
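The amplification factor just derived can be checked numerically. A small sketch (the sample values 0.5 and 1.5 for aλ are arbitrary):

```python
import cmath, math

def g_upwind(theta, al):
    # amplification factor of forward-time backward-space:
    # g = (1 - a*lam) + a*lam * e^{-i*theta}
    return (1 - al) + al * cmath.exp(-1j * theta)

thetas = [2 * math.pi * j / 100 for j in range(101)]
stable_max   = max(abs(g_upwind(t, 0.5)) for t in thetas)  # 0 <= a*lam <= 1
unstable_max = max(abs(g_upwind(t, 1.5)) for t in thetas)  # a*lam > 1
```

For aλ = 0.5 the maximum of |g| over all angles is 1, while for aλ = 1.5 it exceeds 1 (reaching 2 at θ = π), consistent with the stability condition 0 ≤ aλ ≤ 1.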

Theorem 4 (Stability condition). A one­step finite difference scheme is stable if and only if there exist positive constants K, h0, k0 such that

|g(θ, k, h)| ≤ 1 + Kk

for all θ, 0 < k ≤ k0, 0 < h ≤ h0. If g is independent of k, then the condition is

|g(θ, k, h)| ≤ 1.

Proof. Assume |g(θ, k, h)| ≤ 1 + Kk for some K. Then

‖v^n‖² = ∫_{−π/h}^{π/h} |g|^{2n} |v̂^0(ξ)|² dξ ≤ (1 + Kk)^{2n} ∫_{−π/h}^{π/h} |v̂^0(ξ)|² dξ = (1 + Kk)^{2n} ‖v^0‖².


Now nk ≤ T and (1 + Kk)^n ≤ (1 + Kk)^{T/k} ≤ e^{KT}, meaning ‖v^n‖ ≤ e^{KT} ‖v^0‖ and the scheme is stable.

Conversely, assume that for any C there exists an interval [θ1, θ2] such that |g| ≥ 1 + Ck for θ ∈ [θ1, θ2], h ∈ (0, h0], and k ∈ (0, k0]. Let

v̂^0(ξ) = 0 for hξ ∉ [θ1, θ2],   v̂^0(ξ) = (h (θ2 − θ1)^{−1})^{1/2} for hξ ∈ [θ1, θ2].

Then ‖v^0‖ = 1 and

‖v^n‖² = ∫_{−π/h}^{π/h} |g|^{2n} |v̂^0(ξ)|² dξ = ∫_{θ1/h}^{θ2/h} |g|^{2n} (h/(θ2 − θ1)) dξ ≥ (1 + Ck)^{2n} ≥ ½ e^{2TC} ‖v^0‖²

for n near T/k. Therefore the scheme is unstable if C can be arbitrarily large. If g is independent of h and k, then |g| ≤ 1 + Kk must hold for any 0 < k ≤ k0; therefore |g| ≤ 1.

In practice, to analyze a finite difference scheme we do not write integrals. Instead we substitute v_m^n = g^n e^{imθ} and solve for g.

Example 2. Forward-time centered-space:

(v_m^{n+1} − v_m^n)/k + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0.

Substituting v_m^n = g^n e^{imθ}:

0 = (g^{n+1} e^{imθ} − g^n e^{imθ})/k + a (g^n e^{i(m+1)θ} − g^n e^{i(m−1)θ})/(2h)
  = g^n e^{imθ} [ (g − 1)/k + a (e^{iθ} − e^{−iθ})/(2h) ].

So g(θ) = 1 − iaλ sin θ, with λ = k/h. If λ is held constant, then |g(θ)|² = 1 + a²λ² sin²θ > 1 for sin θ ≠ 0, and the scheme is unstable.

Example 3. The Lax–Wendroff scheme:

v_m^{n+1} = v_m^n − (aλ/2)(v_{m+1}^n − v_{m−1}^n) + (a²λ²/2)(v_{m+1}^n − 2v_m^n + v_{m−1}^n),

so the amplification factor is

g(θ) = 1 − (aλ/2)(e^{iθ} − e^{−iθ}) + (a²λ²/2)(e^{iθ} − 2 + e^{−iθ})
     = 1 − iaλ sin θ − a²λ²(1 − cos θ)
     = 1 − 2a²λ² sin²(θ/2) − iaλ sin θ.

Thus

|g(θ)|² = (1 − 2a²λ² sin²(θ/2))² + (2aλ sin(θ/2) cos(θ/2))²
        = 1 − 4a²λ²(1 − a²λ²) sin⁴(θ/2).

The scheme is stable precisely when |g(θ)| ≤ 1, i.e., when |aλ| ≤ 1.
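The algebra above is easy to double-check numerically; a sketch (aλ = 0.8 is an arbitrary choice inside the stability region):

```python
import math

def g_lw(theta, al):
    # Lax-Wendroff amplification factor:
    # g = 1 - i*a*lam*sin(theta) - a^2*lam^2*(1 - cos(theta))
    return 1 - 1j * al * math.sin(theta) - al**2 * (1 - math.cos(theta))

def g_lw_sq(theta, al):
    # the closed form 1 - 4 a^2 lam^2 (1 - a^2 lam^2) sin^4(theta/2)
    s = math.sin(theta / 2)
    return 1 - 4 * al**2 * (1 - al**2) * s**4

al = 0.8
thetas = [0.1 * j for j in range(63)]
max_diff = max(abs(abs(g_lw(t, al))**2 - g_lw_sq(t, al)) for t in thetas)
```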

Example 4. For the Crank–Nicolson scheme

v_m^{n+1} = v_m^n − (aλ/4)(v_{m+1}^{n+1} − v_{m−1}^{n+1} + v_{m+1}^n − v_{m−1}^n)

we obtain

g(θ) = (1 − ½ iaλ sin θ)/(1 + ½ iaλ sin θ),

thus

|g(θ)|² = (1 + (½ aλ sin θ)²)/(1 + (½ aλ sin θ)²) = 1,

so this scheme is unconditionally stable.


1.4 The Leap-Frog Scheme

In this section we prove that Leap-Frog is stable if and only if |aλ| < 1. The scheme is

(v_m^{n+1} − v_m^{n−1})/(2k) + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0,  i.e.,  v_m^{n+1} = v_m^{n−1} − aλ(v_{m+1}^n − v_{m−1}^n).

Write δ0 v_m = (v_{m+1} − v_{m−1})/(2h). Then, writing two-component vectors as [x; y] and matrices row by row as [a b; c d],

[v_m^{n+1}; v_m^n] = [−2ka δ0  1; 1  0] · [v_m^n; v_m^{n−1}].

The Fourier transform of a vector grid function is the Fourier transform of each component:

[v_m^{n+1}; v_m^n] = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} [v̂^{n+1}(ξ); v̂^n(ξ)] dξ,
[v̂^{n+1}(ξ); v̂^n(ξ)] = (1/√2π) Σ_{m=−∞}^{∞} e^{−imhξ} [v_m^{n+1}; v_m^n] h,

and Parseval holds for vectors as well (| · | means the 2-norm of a vector or matrix, so we can tell it apart from the L2 norm):

‖[v^{n+1}; v^n]‖² = h Σ_{m=−∞}^{∞} |[v_m^{n+1}; v_m^n]|² = ∫_{−π/h}^{π/h} |[v̂^{n+1}(ξ); v̂^n(ξ)]|² dξ = ‖[v̂^{n+1}; v̂^n]‖².

Now take the Fourier transform of Leap-Frog:

(1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} [v̂^{n+1}(ξ); v̂^n(ξ)] dξ
  = (1/√2π) ∫_{−π/h}^{π/h} [−2ka δ0  1; 1  0] e^{imhξ} [v̂^n(ξ); v̂^{n−1}(ξ)] dξ
  = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} [−2ka (e^{ihξ} − e^{−ihξ})/(2h)  1; 1  0] [v̂^n(ξ); v̂^{n−1}(ξ)] dξ
  = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} [−2iaλ sin(hξ)  1; 1  0] [v̂^n(ξ); v̂^{n−1}(ξ)] dξ.

Therefore, with G(hξ) ≡ [−2iaλ sin(hξ)  1; 1  0],

[v̂^{n+1}(ξ); v̂^n(ξ)] = G(hξ) [v̂^n(ξ); v̂^{n−1}(ξ)] = . . . = (G(hξ))^n [v̂^1(ξ); v̂^0(ξ)].


Parseval now gives

‖[v^{n+1}; v^n]‖² = ‖[v̂^{n+1}; v̂^n]‖²
  = ∫_{−π/h}^{π/h} |(G(hξ))^n [v̂^1(ξ); v̂^0(ξ)]|² dξ
  ≤ ∫_{−π/h}^{π/h} |(G(hξ))^n|² |[v̂^1(ξ); v̂^0(ξ)]|² dξ
  ≤ max_{|hξ|≤π} |(G(hξ))^n|² · ∫_{−π/h}^{π/h} |[v̂^1(ξ); v̂^0(ξ)]|² dξ
  = max_{|hξ|≤π} |(G(hξ))^n|² · ‖[v^1; v^0]‖².

It remains to see when the 2-norm of G^n is bounded. Use the Jordan form: G = TΛT^{−1}, so G^n = TΛ^nT^{−1}. The characteristic polynomial is

g² + 2iaλ sin(hξ) g − 1 = 0.

Λ^n is bounded only if the roots (not to be confused with λ) satisfy |λ1,2| ≤ 1; but λ1λ2 = −1, so we must have |λ1| = |λ2| = 1. The eigenvalues are (denote s ≡ sin(hξ) for short)

λ1,2 = −iaλs ± √(1 − (aλs)²).

If |aλ| > 1, then there exists ξ such that |aλs| > 1; then both λ1 and λ2 are purely imaginary and distinct, so one of them has magnitude > 1 and the other < 1. So we must have |aλ| ≤ 1.

When |aλ| ≤ 1 we have |λ1,2|² = (aλs)² + 1 − (aλs)² = 1, so both λ1 and λ2 lie on the unit circle.

If λ1 ≠ λ2, then the Jordan form of G is (exercise):

G = T Λ T^{−1} = [λ1  λ2; 1  1] · [λ1  0; 0  λ2] · (1/(λ2 − λ1)) [−1  λ2; 1  −λ1].

Exercise: |A| ≤ ‖A‖_F ≡ (Σ_{i,j} |a_{ij}|²)^{1/2} (the Frobenius norm). Now, since the entries of Λ lie on the unit circle, |Λ^n| = 1 and

|G^n| = |TΛ^nT^{−1}| ≤ |T| · 1 · |T^{−1}| ≤ ‖T‖_F ‖T^{−1}‖_F ≤ 2 · 2/|λ2 − λ1| = 4/(2√(1 − (aλs)²)) ≤ 2/√(1 − (aλ)²),

which is nicely bounded. Going back to Parseval,

‖[v^{n+1}; v^n]‖ ≤ (2/√(1 − (aλ)²)) ‖[v^1; v^0]‖

and Leap­frog is stable. Next case: λ1 = λ2. It occurs when sin(hξ) = ±1 and = 1. Assume aλs = 1, (the −1 case is |aλ|

analogous). � � � �

G = −2i 1

= 1 1 −i −i 0 −i

= T ΛT −1

1 0 i 0 −i 1 i

A little unusual to write a Jordan block with −i in position (1, 2) but legal and, in this case, convenient.

n1 1 1 n T −1 = (−i)nT T −1

0 1Gn = T (−i)n ,0 1


i.e., G(±π/2)^n will blow up. You might think that there could be cancellation, but no:

n ≤ |(−i)^n [1  n; 0  1]| = |T^{−1} G^n T| ≤ |T^{−1}| · |G^n| · |T|,

and both |T| and |T^{−1}| are nicely bounded by (say) 2, so |G^n| ≥ n/4 = T/(4k) → ∞ as k → 0 (for n near T/k).

The stability condition is therefore |aλ| < 1.
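The eigenvalue analysis above can be sketched numerically (the sample values 0.9 and 1.5 for aλs are arbitrary):

```python
import cmath

def leapfrog_roots(als):
    # roots of g^2 + 2i*als*g - 1 = 0, where als = a*lam*sin(h*xi)
    d = cmath.sqrt(1 - als * als)
    return (-1j * als + d, -1j * als - d)

r1, r2 = leapfrog_roots(0.9)   # |als| < 1: both roots on the unit circle
u1, u2 = leapfrog_roots(1.5)   # |als| > 1: one root leaves the unit disk
```

The product of the roots is always −1, so as soon as one root moves inside the unit circle the other moves outside, which is exactly the instability mechanism for |aλ| > 1.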

1.5 Dissipation

We would expect the wave equation to propagate the initial condition with a constant speed a, including all frequencies that make up that initial condition.

Unfortunately, the discrete nature of our data means that instead of the initial condition u0(x) we have a discrete version of it, v_m^0.

The initial condition u0(x) is a superposition (in theory) of an infinite number of frequencies (think Fourier expansion), whereas v_m^0 only inherits the frequencies ξ ∈ [−π/h, π/h]. All higher frequencies are ignored by our discrete initial condition. Recall that the Fourier transform v̂^n(ξ) of v^n is only defined for ξ ∈ [−π/h, π/h].

Obviously different frequencies are treated differently, and we would like a better understanding of that treatment. An example is the best way to see this. Consider Lax–Friedrichs:

(v_m^{n+1} − ½(v_{m+1}^n + v_{m−1}^n))/k + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0,

equivalently,

v_m^{n+1} = ((1 − aλ)/2) v_{m+1}^n + ((1 + aλ)/2) v_{m−1}^n.

Von Neumann analysis gives

g(hξ) = cos(hξ) − iaλ sin(hξ),   |g(hξ)|² = cos²(hξ) + (aλ)² sin²(hξ).

Let θ = hξ as usual. We see that θ = 0 and θ = π are not dampened, but all other θ are.

Let's observe closely. Pick aλ to be (say) 1/2. Then

v_m^{n+1} = ¼ v_{m+1}^n + ¾ v_{m−1}^n.

• θ = π/2. Then e^{imhξ} = e^{imθ} = e^{imπ/2}, whose real part is {. . . , 1, 0, −1, 0, 1, 0, −1, 0, 1, . . .}, and successive steps give

n = 0:   1    0   −1    0    1    0   −1    0    1
n = 1:      1/2    0  −1/2    0   1/2    0  −1/2
n = 2:          1/4    0  −1/4    0   1/4
n = 3:              1/8    0  −1/8
n = 4:                  1/16

so this mode is damped by a factor of 1/2 per step.

• θ = π. We have e^{imhξ} = e^{imθ} = e^{imπ} = {. . . , 1, −1, 1, −1, 1, . . .} = (−1)^m. We can verify that v_m^n = (−1)^{m+n} is a solution to Lax–Friedrichs, so θ = π is not dampened at all.

We don't really expect good results for wildly oscillating solutions, so we can expect that the higher frequencies will not be well represented in our calculation. However, it is unacceptable for the highest frequencies to be less dampened than the middle-range ones.
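The claim that (−1)^{m+n} solves Lax–Friedrichs can be verified directly. A sketch (aλ = 1/2 and a small periodic grid, both arbitrary choices):

```python
al = 0.5                       # a*lam
M = 8                          # even, so (-1)^m is periodic on the grid
v0 = [(-1) ** m for m in range(M)]
# one Lax-Friedrichs step: v_m^{n+1} = (1-al)/2 * v_{m+1}^n + (1+al)/2 * v_{m-1}^n
v1 = [0.5 * (1 - al) * v0[(m + 1) % M] + 0.5 * (1 + al) * v0[(m - 1) % M]
      for m in range(M)]
```

One step turns (−1)^m into (−1)^{m+1}: the mode keeps full amplitude, confirming that θ = π is not damped.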

Another example: Lax–Wendroff, where |g|² = 1 − 4a²λ²(1 − a²λ²) sin⁴(θ/2) ≤ 1 − const · sin⁴(θ/2). This is very important: it says that all frequencies except ξ = 0 (i.e., θ = 0) are decreasing, and the highest frequencies are suppressed the most. This is exactly what we want, and we will call schemes with this property dissipative.


Definition 6 (Dissipative Scheme). A scheme is dissipative of order 2r if

|g(θ)| ≤ 1 − c · sin^{2r}(θ/2).

The reason we like dissipative schemes is that if we are not doing a good job with the high frequencies anyway, why not kill them.

Remark 1. A dissipative scheme is always stable.

How can we make a non-dissipative scheme dissipative? This calls for another example: Crank–Nicolson, which is second order accurate:

(v_m^{n+1} − v_m^n)/k + a (v_{m+1}^{n+1} − v_{m−1}^{n+1} + v_{m+1}^n − v_{m−1}^n)/(4h) = 0.

Adding a fourth difference term will not affect the order of accuracy of the approximation, since fourth derivatives get ignored anyway. When we do the Fourier analysis, the fourth difference will bring in a sin⁴(θ/2):

(v_m^{n+1} − v_m^n)/k + a (v_{m+1}^{n+1} − v_{m−1}^{n+1} + v_{m+1}^n − v_{m−1}^n)/(4h) + Cε (v_{m−2}^n − 4v_{m−1}^n + 6v_m^n − 4v_{m+1}^n + v_{m+2}^n)/h⁴ = 0.

Now select C appropriately so that the fourth difference only brings sin⁴(θ/2) into the picture, without any weird powers of k and h: C = h⁴/(16k). Then, after some simplification,

g(θ) = (1 − ε sin⁴(θ/2) − i(aλ/2) sin θ) / (1 + i(aλ/2) sin θ),

implying

|g(θ)|² = [1 + ((aλ/2) sin θ)² − 2ε sin⁴(θ/2) + ε² sin⁸(θ/2)] / [1 + ((aλ/2) sin θ)²]
        = 1 − [ε sin⁴(θ/2) + ε sin⁴(θ/2)(1 − ε sin⁴(θ/2))] / [1 + ((aλ/2) sin θ)²]
        ≤ 1 − ε sin⁴(θ/2) / [1 + ((aλ/2) sin θ)²],

since ε sin⁴(θ/2)(1 − ε sin⁴(θ/2)) > 0 for 0 < ε < 1. If we now restrict |aλ| ≤ (say) 10, then 1 + ((aλ/2) sin θ)² ≤ 26, and

|g|² ≤ 1 − (ε/26) sin⁴(θ/2).

We want a bound on |g|, not |g|²:

|g|² ≤ 1 − (ε/26) sin⁴(θ/2) ≤ 1 − (ε/26) sin⁴(θ/2) + (ε/52)² sin⁸(θ/2) = (1 − (ε/52) sin⁴(θ/2))²,

so

|g| ≤ 1 − (ε/52) sin⁴(θ/2).

The scheme is all of a sudden dissipative of order 4 (since 2r = 4). Although Crank–Nicolson is stable for all aλ we cannot make it dissipative without restricting aλ.

The exact same trick works for Leap­frog.


1.6 Dispersion

In this section we investigate whether, in the numerical solution of ut + aux = 0, different frequencies travel with the same speed a, as they should. They, of course, do not, and we will see that they in fact travel with a speed α(hξ) ≈ a.

Look for a solution to

ut + aux = 0,   u(0, x) = f(x)

(which has the unique solution u(t, x) = f(x − at)) using separation of variables:

u(t, x) = g(t) e^{ixξ},

assuming u(0, x) = g(0) e^{ixξ} = e^{ixξ}, a periodic wave (here we assume g(0) = 1). Then

ut + aux = g′(t) e^{ixξ} + a g(t) iξ e^{ixξ} = (g′(t) + aiξ g(t)) e^{ixξ} = 0.

Since |e^{ixξ}| = 1 we have g′(t) + aiξ g(t) = 0, which implies g(t) = e^{−iatξ} g(0). Inserting back, we get

u(t, x) = g(0) e^{−iatξ} e^{ixξ} = e^{i(x−at)ξ},

since g(0) = 1. Therefore the initial condition is translated with speed a for all ξ.

Example 5. The same Fourier analysis can be used for other equations to study the speed of waves of different frequencies, e.g.,

ut + aux + uxxx = 0.

For u(t, x) = g(t) e^{ixξ} we get

ut + aux + uxxx = (g′ + iξ(a − ξ²) g) e^{ixξ} = 0,

thus

g′ + iξ(a − ξ²) g = 0  ⇒  g(t) = e^{−iξ(a−ξ²)t} g(0),

so the solution is

u(t, x) = e^{i(x−(a−ξ²)t)ξ} g(0).

Now the speed of the waves depends on ξ.

Definition 7 (Dispersion). The phenomenon of waves with different frequencies moving with different speeds is called dispersion.

Return now to the solution of the difference equation. Take Lax–Friedrichs:

v_m^{n+1} = ½(v_{m+1}^n + v_{m−1}^n) − (aλ/2)(v_{m+1}^n − v_{m−1}^n).

Separation of variables: substitute v_m^n = g^n e^{imhξ} above to get

g = cos(hξ) − iaλ sin(hξ),

so the solution is

v_m^n = (cos(hξ) − iaλ sin(hξ))^n e^{imhξ},

which looks nothing like e^{i(x−at)ξ}. Let

g(hξ) ≡ ρ e^{−iω} = ρ cos ω − ρ i sin ω = cos(hξ) − iaλ sin(hξ).


Therefore tan ω = aλ tan(hξ) and ρ² = cos²(hξ) + a²λ² sin²(hξ). For |hξ| ≤ π/2 we have

v_m^n = (ρ e^{−iω})^n e^{imhξ} = ρ^n e^{imhξ−iωn} = ρ^n e^{i(mh−ωn/ξ)ξ} = ρ^n e^{i(x−(ω/(kξ))t)ξ} = ρ^n e^{i(x−α(hξ)t)ξ},

where x = mh, t = nk, and

α(hξ) ≡ ω/(kξ) = arctan(aλ tan(hξ))/(λhξ).

(Recall tan ε ≈ ε and arctan ε ≈ ε for small ε, so α = arctan(aλ tan(hξ))/(λhξ) ≈ aλhξ/(λhξ) = a.) We have

v_m^n = |g(hξ)|^n · e^{i(x−α(hξ)t)ξ}.

Definition 8 (Phase speed). The quantity α(hξ) is called phase speed, and is the speed at which waves of frequency ξ are propagated by the difference scheme.

Once again, waves with different frequencies travel with different speeds. Thus we say that the scheme is dispersive. We want the scheme to be dispersive as little as possible (i.e., α(hξ) ≈ a), so that the numerical solution looks like the exact solution.

Time to study the Taylor series for α(hξ) to obtain a better estimate of its closeness to a:

tan z = z + z³/3 + O(z⁵),   arctan z = z − z³/3 + O(z⁵).

Let z = hξ:

α(z) = arctan(aλ tan z)/(λz)
     = (aλ tan z)/(λz) − (aλ tan z)³/(3λz) + . . .
     = a (z + z³/3 + . . .)/z − (a³λ²/3)(z³ + . . .)/z + . . .
     = a (1 + (1 − a²λ²)(hξ)²/3 + . . .).

So, if ξ is given and h is small, then the wave speed is slightly higher than a, and the high frequencies travel fastest. Let’s look at some special cases.

Take hξ = π/2. Then ω = π/2 and ρ = |aλ|, so

v_m^n = |aλ|^n e^{imhξ} e^{−inπ/2} = |aλ|^n e^{i(xξ−πn/2)} = |aλ|^n e^{i(x−t/λ)ξ}

(since nπ/2 = n hξ = (t/k)(hξ) = tξ/λ). So the speeds can be quite different: the exact speed is a, while the computed speed is 1/λ. So it is not a good idea to take aλ small; the closer aλ is to the stability limit (i.e., the closer to 1), the better.
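The phase speed formula and its Taylor expansion can be compared numerically; a sketch (a = 1, λ = 1/2, θ = 0.1 are arbitrary sample values):

```python
import math

a, lam = 1.0, 0.5

def alpha(theta):
    # Lax-Friedrichs phase speed: arctan(a*lam*tan(theta)) / (lam*theta)
    return math.atan(a * lam * math.tan(theta)) / (lam * theta)

theta = 0.1
taylor = a * (1 + (1 - a**2 * lam**2) * theta**2 / 3)
err = abs(alpha(theta) - taylor)
```

The two agree to O(θ⁴), and α(θ) > a here since 1 − a²λ² > 0.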


1.7 Group Velocity and Propagation of Wave Packets

Consider the numerical solution to ut + aux = 0 with initial data u(0, x) = eiξ0 xp(x), where p(x) decays rapidly at ∞. The function u(0, x) is called a wave packet—see Figure 1.1.

[Figure 1.1: Wave packet cos(5πx) cos²(πx/2) on [−1, 1]]

The exact solution is u(t, x) = e^{iξ0(x−at)} p(x − at).

Proposition 1. A finite difference scheme will have a solution that is approximately

v*(t, x) = e^{iξ0(x−α(hξ0)t)} p(x − γ(hξ0)t),

where α(hξ0) is the phase speed, and γ(hξ0) is the group velocity, given by

γ(θ) = α(θ) + θα�(θ).

The rest of this section is devoted to proving Proposition 1 and may be skipped on a first reading. Consider a class of one-step schemes with the property

v̂^n(ξ) = g(hξ) v̂^{n−1}(ξ) = . . . = (g(hξ))^n v̂^0(ξ).

In addition, for simplicity, assume |g(hξ)| = 1. The numerical method will give

v_m^n = (1/√2π) ∫_{−π/h}^{π/h} e^{iξx_m} (g(hξ))^n v̂^0(ξ) dξ.

On the other side,

u(0, mh) = (1/√2π) ∫_{−∞}^{∞} e^{imhξ} û(0, ξ) dξ.

Split up the interval (−∞, ∞) = ⋃_{l=−∞}^{∞} [l·2π/h − π/h, l·2π/h + π/h] to obtain

u(0, mh) = (1/√2π) Σ_l ∫_{l·2π/h − π/h}^{l·2π/h + π/h} e^{imhξ} û(0, ξ) dξ.

Set ξ = l·2π/h + ξ′, so dξ = dξ′ and

u(0, mh) = (1/√2π) Σ_l ∫_{−π/h}^{π/h} e^{imh(l·2π/h + ξ′)} û(0, l·2π/h + ξ′) dξ′
         = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ′} Σ_{l=−∞}^{∞} û(0, ξ′ + l·2π/h) dξ′,

since e^{imh·l·2π/h} = e^{2πiml} = 1.


This gives a formula for v̂^0:

v̂^0(ξ) = Σ_{l=−∞}^{∞} û(0, ξ + l·2π/h).

If u is smooth, then its Fourier transform decays rapidly, and only the middle l = 0 term really matters:

v̂^0(ξ) ∼ û(0, ξ),

with the error bounded by h to some high power depending on the smoothness of u(0, x). Consider

û(0, ξ) = (1/√2π) ∫_{−∞}^{∞} e^{−ixξ} u(0, x) dx
        = (1/√2π) ∫_{−∞}^{∞} e^{−ixξ} e^{iξ0x} p(x) dx
        = (1/√2π) ∫_{−∞}^{∞} e^{−ix(ξ−ξ0)} p(x) dx
        = p̂(ξ − ξ0).

Let's recall how we handled the phase speed:

g(hξ) = |g(hξ)| e^{−iω} = e^{−iω}  ⇒  (g(hξ))^n = e^{−iωn} = e^{−i(ω/(kξ))·nk·ξ} = e^{−iα(hξ)t_n ξ}.

We can return to

v_m^n = (1/√2π) ∫_{−π/h}^{π/h} e^{iξx_m} · e^{−iα(hξ)t_n ξ} · v̂^0(ξ) dξ.

Exercise 1. Verify that e^{iξx_m}, v̂^0(ξ), g(hξ), and ω are periodic functions of ξ with period 2π/h.

Since v̂^0(ξ) ∼ û(0, ξ) = p̂(ξ − ξ0), we change variables: φ = ξ − ξ0. Then ξ = φ + ξ0 and

v_m^n = (1/√2π) ∫_{−π/h+ξ0}^{π/h+ξ0} e^{i(φ+ξ0)x_m} · e^{−iα(h(φ+ξ0)) t_n (φ+ξ0)} · v̂^0(φ + ξ0) dφ.

Since all functions are periodic with period 2π/h, we can shift the interval of integration back and get

v_m^n = (1/√2π) ∫_{−π/h}^{π/h} e^{i(φ+ξ0)x_m} · e^{−iα(h(φ+ξ0)) t_n (φ+ξ0)} · v̂^0(φ + ξ0) dφ.

This begins to look right. Factor out the phase speed:

v_m^n = e^{iξ0(x_m − α(hξ0)t_n)} · (1/√2π) ∫_{−π/h}^{π/h} e^{iφx_m} · e^{−i[(φ+ξ0)α(hφ+hξ0) − ξ0α(hξ0)] t_n} · v̂^0(φ + ξ0) dφ.

This begins to look like a Fourier transform:

v_m^n ∼ e^{iξ0(x − α(hξ0)t)} · (1/√2π) ∫_{−π/h}^{π/h} e^{iφ (x − [((φ+ξ0)α(hφ+hξ0) − ξ0α(hξ0))/φ] t)} · v̂^0(φ + ξ0) dφ.

Since v̂^0(ξ) ∼ p̂(ξ − ξ0), we have v̂^0(φ + ξ0) ∼ p̂(φ + ξ0 − ξ0) = p̂(φ). The next step is to replace v̂^0(φ + ξ0) by p̂(φ). Also, since p̂(φ) goes to zero rapidly as φ → ∞, we may as well let the integral go to infinity. Hence

v_m^n ∼ e^{iξ0(x − α(hξ0)t)} · (1/√2π) ∫_{−∞}^{∞} e^{iφ(x − γ̃t)} p̂(φ) dφ = e^{iξ0(x − α(hξ0)t)} · p(x − γ̃t).


The last step is wrong because γ̃ depends on φ, but it does tell us where we are going. We have

γ̃ ≡ [(φ + ξ0) α(hφ + hξ0) − ξ0 α(hξ0)]/φ
   = [(hφ + hξ0) α(hφ + hξ0) − hξ0 α(hξ0)]/(hφ)
   = [(θ + θ0) α(θ + θ0) − θ0 α(θ0)]/θ
   = [G(θ + θ0) − G(θ0)]/θ
   = G′(θ0) + (θ/2) G″(θ*),

where θ ≡ hφ, θ0 ≡ hξ0, G(θ) ≡ θα(θ), and θ* is between θ0 and θ0 + θ. The beauty of the above expression is that G′(θ0) does not depend on φ, but only on hξ0 = θ0.

Let's go back to

v_m^n ∼ e^{iξ0(x_m − α(hξ0)t_n)} · (1/√2π) ∫_{−π/h}^{π/h} e^{iφ(x_m − G′(hξ0)t_n)} e^{−iφ (hφ/2) G″(θ*) t} p̂(φ) dφ.

The idea is to replace e^{−iφ(hφ/2)G″(θ*)t} by one. By doing so we are making an error bounded by

∫_{−∞}^{∞} |e^{−iφ(hφ/2)G″(θ*)t} − 1| · |p̂(φ)| dφ.

We will now show that this error is at most O(h). Let's first bound |p̂(φ)|:

p̂(φ) = (1/√2π) ∫_{−∞}^{∞} e^{−iφx} p(x) dx,

so

φ⁴ p̂(φ) = (1/√2π) ∫_{−∞}^{∞} (−iφ)⁴ e^{−iφx} p(x) dx = (1/√2π) ∫_{−∞}^{∞} (∂⁴/∂x⁴ e^{−iφx}) p(x) dx = (1/√2π) ∫_{−∞}^{∞} e^{−iφx} p⁗(x) dx,

after integrating by parts four times. Thus

|p̂(φ)| ≤ (1/√2π) ∫_{−∞}^{∞} |p⁗(x)| dx · (1/φ⁴) = C/φ⁴.

4 4φ φ|By using this estimate we have

| | |Also eiz − 1 2 = 4 sin2 z z 2, so eiz − 1| | ≤ | | | z| ≤ | |.2

· |∞ ∞ 1

e−iφ hφ G�� (θ∗ )t2 hφ2− 1 p(φ) G��(θ∗)t p(φ)dφ dφ· | | ≤ |2−∞ −∞ ∞

|φ2 p(φ)const ·h · dφ≤ |−∞ ∞ 1

const ·h · dφ≤ φ2

−∞

const.h ·≤

If we work in L2 we can bound the error by h² in norm, but not pointwise. Either way we have shown that

v_m^n = e^{iξ0(x − α(hξ0)t)} p(x − G′(hξ0)t) + small terms.


Definition 9. The quantity γ(θ) = G�(θ) = α(θ) + θ · α�(θ) is called group velocity.

We have α(θ) → a and hence γ(θ) → a as h → 0. So the phase speed is different from the group velocity, but both tend to the correct speed a as h → 0; otherwise the numerical method would not converge.
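Both speeds can be evaluated for a concrete scheme. A sketch for Lax–Friedrichs (a = 1, λ = 1/2 are arbitrary; α′ is approximated by a centered difference):

```python
import math

a, lam = 1.0, 0.5

def alpha(theta):
    # Lax-Friedrichs phase speed: alpha(theta) = arctan(a*lam*tan(theta)) / (lam*theta)
    return math.atan(a * lam * math.tan(theta)) / (lam * theta)

def gamma(theta, d=1e-6):
    # group velocity gamma(theta) = alpha(theta) + theta * alpha'(theta)
    return alpha(theta) + theta * (alpha(theta + d) - alpha(theta - d)) / (2 * d)

near_zero = gamma(1e-3)                # tends to a as theta -> 0
spread = abs(gamma(1.0) - alpha(1.0))  # but differs from alpha at finite theta
```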

1.8 Summary of Schemes for the Wave Equation ut + aux = 0

Notation: λ = k/h, θ = hξ.

• Forward time forward space:
  (v_m^{n+1} − v_m^n)/k + a (v_{m+1}^n − v_m^n)/h = 0.

• Forward time backward space:
  (v_m^{n+1} − v_m^n)/k + a (v_m^n − v_{m−1}^n)/h = 0;
  g(θ) = 1 − aλ + aλ e^{−iθ}; stable for 0 ≤ aλ ≤ 1; dissipative of order 2 if 0 < aλ < 1.

• Forward time centered space:
  (v_m^{n+1} − v_m^n)/k + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0;
  g(θ) = 1 − iaλ sin θ; not stable; not dissipative.

• Backward time centered space:
  (v_m^n − v_m^{n−1})/k + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0;
  α(θ)/a = 1 − (θ²/6)(1 + 2a²λ²).

• Lax–Wendroff:
  v_m^{n+1} = v_m^n − (aλ/2)(v_{m+1}^n − v_{m−1}^n) + (a²λ²/2)(v_{m+1}^n − 2v_m^n + v_{m−1}^n);
  g(θ) = 1 − 2a²λ² sin²(θ/2) − iaλ sin θ; stable for |aλ| ≤ 1; dissipative of order 4; α(θ)/a = 1 − (θ²/6)(1 − a²λ²).

• Lax–Friedrichs:
  (v_m^{n+1} − ½(v_{m+1}^n + v_{m−1}^n))/k + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0;
  g(θ) = cos θ − iaλ sin θ; stable for |aλ| ≤ 1; not dissipative; α(θ)/a = 1 + (1 − a²λ²)θ²/3.

• Leap-Frog:
  (v_m^{n+1} − v_m^{n−1})/(2k) + a (v_{m+1}^n − v_{m−1}^n)/(2h) = 0;
  g(θ) = −iaλ sin θ ± √(1 − (aλ sin θ)²); stable for |aλ| < 1; not dissipative; α(θ)/a = arctan(aλ sin θ/√(1 − (aλ sin θ)²))/(aλθ).

• Crank–Nicolson:
  (v_m^{n+1} − v_m^n)/k + a (v_{m+1}^{n+1} − v_{m−1}^{n+1} + v_{m+1}^n − v_{m−1}^n)/(4h) = 0;
  g(θ) = (1 − ½ iaλ sin θ)/(1 + ½ iaλ sin θ); always stable; not dissipative; α(θ)/a = 1 − (θ²/6)(1 + ½ a²λ²).


Chapter 2

Parabolic equations

2.1 The Heat Equation

ut = buxx

Schemes:

• Lax–Friedrichs:

(v_m^{n+1} − ½(v_{m+1}^n + v_{m−1}^n))/k = b (v_{m+1}^n − 2v_m^n + v_{m−1}^n)/h².

• Lax–Wendroff: from the Taylor expansion

u(t + k) = u + k ut + (k²/2) utt = u + kb uxx + (k²b²/2) uxxxx,

we get

v_m^{n+1} = v_m^n + kb (v_{m+1}^n − 2v_m^n + v_{m−1}^n)/h² + (k²b²/2) (v_{m+2}^n − 4v_{m+1}^n + 6v_m^n − 4v_{m−1}^n + v_{m−2}^n)/h⁴.

• Forward in time, centered in space:

(v_m^{n+1} − v_m^n)/k = b (v_{m+1}^n − 2v_m^n + v_{m−1}^n)/h².

• Backward in time, centered in space:

(v_m^{n+1} − v_m^n)/k = b (v_{m+1}^{n+1} − 2v_m^{n+1} + v_{m−1}^{n+1})/h².

• Leap-Frog:

(v_m^{n+1} − v_m^{n−1})/(2k) = b (v_{m+1}^n − 2v_m^n + v_{m−1}^n)/h².

• Du Fort–Frankel:

(v_m^{n+1} − v_m^{n−1})/(2k) = b (v_{m+1}^n − (v_m^{n+1} + v_m^{n−1}) + v_{m−1}^n)/h².

• Crank–Nicolson:

(v_m^{n+1} − v_m^n)/k = (b/2) [(v_{m+1}^{n+1} − 2v_m^{n+1} + v_{m−1}^{n+1})/h² + (v_{m+1}^n − 2v_m^n + v_{m−1}^n)/h²].



For parabolic equations it appears natural to have a “new” λ, which we will call µ.

Definition 10 (µ). µ ≡ k/h².

Von Neumann analysis works the same way: substitute v_m^n = g^n e^{imhξ}.

Example 6. Forward in time, centered in space:

g(hξ) = 1 − 4bµ sin²(hξ/2).

Stability requires |g| ≤ 1, i.e.,

0 ≤ 4bµ sin²(hξ/2) ≤ 2 for all |hξ| ≤ π, meaning bµ ≤ ½.

The scheme is dissipative of order 2 as long as bµ is strictly less than ½ (check!). For bµ = ½ we have g = 1 − 2 sin²(hξ/2), and the frequency hξ = π is not damped at all: the mode v_m^0 = e^{imhξ} = e^{imπ} = (−1)^m keeps its full magnitude under the scheme (it only flips sign at each step, since g(π) = −1).
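The stability bound bµ ≤ 1/2 can be scanned numerically (the sample values 1/4, 1/2, 3/4 for bµ are arbitrary):

```python
import math

def g_heat(theta, bmu):
    # forward-time centered-space for u_t = b*u_xx:
    # g = 1 - 4*b*mu*sin^2(theta/2)
    return 1 - 4 * bmu * math.sin(theta / 2) ** 2

thetas = [math.pi * j / 50 for j in range(51)]
ok = all(abs(g_heat(t, 0.25)) <= 1 for t in thetas)   # b*mu = 1/4: stable
marginal = g_heat(math.pi, 0.5)                       # b*mu = 1/2: g(pi) = -1
worst = min(g_heat(t, 0.75) for t in thetas)          # b*mu = 3/4: g(pi) = -2
```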

Definition 11. Let

‖u(t, x)‖_x ≡ ( ∫_{−∞}^{∞} |u(t, x)|² dx )^{1/2}

denote the L2 norm of u(t, x) with respect to x for a fixed t.

Remark 2. Let u be a solution to ut = buxx. Then the overall energy E(t) ≡ ‖u(t, x)‖_x² decreases with time,

‖u(t, x)‖_x ≤ ‖u(s, x)‖_x   for t ≥ s,

and the solution becomes smoother with time:

‖ux(t, x)‖_x² ≤ (1/(2bt)) ‖u(0, x)‖_x².

The dissipative schemes possess the same qualities:

‖v^{n+1}‖ = ‖v̂^{n+1}‖ = ‖g(hξ) v̂^n‖ ≤ ‖v̂^n‖ = ‖v^n‖

and

‖δ+v^n‖² ≤ (4µ/(Ct)) ‖v^0‖²,

where δ+v_m ≡ (v_{m+1} − v_m)/h and C is a constant.

Proof. We will show that E′(t) ≤ 0, meaning E(t) is decreasing:

E′(t) = ∫_{−∞}^{∞} 2u ut dx = ∫_{−∞}^{∞} 2bu uxx dx = 2bu ux |_{x=−∞}^{∞} − 2b ∫_{−∞}^{∞} (ux)² dx = −2b ‖ux‖_x² ≤ 0,

because u(t, x) → 0 as x → ±∞ (which is needed for ‖u(t, x)‖_x to exist). The above implies (after integrating from 0 to t):

E(t) − E(0) = −2b ∫_0^t ‖ux(τ, x)‖_x² dτ  ⇒  E(0) ≥ 2b ∫_0^t ‖ux(τ, x)‖_x² dτ.

The derivative ux = ∂u/∂x is also a solution to ut = buxx, because (ux)t = (ut)x = buxxx = b(ux)xx; therefore ‖ux(t, x)‖_x ≤ ‖ux(s, x)‖_x for t ≥ s. Now we get

E(0) ≥ 2b ∫_0^t ‖ux(τ, x)‖_x² dτ ≥ 2bt ‖ux(t, x)‖_x²,


meaning

‖ux(t, x)‖_x² ≤ (1/(2bt)) ‖u(0, x)‖_x²,

i.e., the solution gets smoother and smoother as t → ∞.

Now repeat the same analysis for a difference scheme that is dissipative of order (say) 2:

v̂^{n+1}(ξ) = g(hξ) v̂^n(ξ), where |g(hξ)| ≤ 1 − C sin²(hξ/2),

which implies

‖v̂^{n+1}‖² ≤ ‖(1 − C sin²(hξ/2)) v̂^n‖²

and (after some major reworking)

‖v̂^{n+1}‖² + C ‖sin(hξ/2) · v̂^n‖² ≤ ‖v̂^n‖²

(these are, of course, the discrete L2 norms). Now comes the big moment:

sin(hξ/2) · v̂^n(ξ) = ((e^{ihξ/2} − e^{−ihξ/2})/(2i)) v̂^n(ξ) = (e^{−ihξ/2}/(2i)) (e^{ihξ} − 1) v̂^n(ξ).

Next, observe that

δ+v_m^n ≡ (v_{m+1}^n − v_m^n)/h = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} ((e^{ihξ} − 1)/h) v̂^n(ξ) dξ.

On the other side, writing (δ+v^n)^ for the Fourier transform of δ+v^n,

δ+v_m^n = (1/√2π) ∫_{−π/h}^{π/h} e^{imhξ} (δ+v^n)^(ξ) dξ,

so

((e^{ihξ} − 1)/h) v̂^n(ξ) = (δ+v^n)^(ξ),

and inserting we get

‖sin(hξ/2) · v̂^n‖ = ½ ‖(e^{ihξ} − 1) v̂^n‖ = (h/2) ‖(δ+v^n)^‖.

We can therefore simplify our inequality:

‖v̂^{n+1}‖² + C (h²/4) ‖(δ+v^n)^‖² ≤ ‖v̂^n‖².

Parseval says "hats = no hats", so

‖v^{n+1}‖² + (Ck/(4µ)) ‖δ+v^n‖² ≤ ‖v^n‖²,   (2.1)

where, again, µ = k/h². In particular,

‖v^{n+1}‖ ≤ ‖v^n‖.

Next, we prove that \|\delta_+ v^{n+1}\| \le \|\delta_+ v^n\|. The only property we used was that v^n was a solution to

\frac{v_m^{n+1} - v_m^n}{k} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}.

Now try

\frac{v_{m+1}^{n+1} - v_{m+1}^n}{k} = b\,\frac{v_{m+2}^n - 2v_{m+1}^n + v_m^n}{h^2}.

Subtract the first from the second, divide by h, and get

\frac{1}{k}\Big(\frac{v_{m+1}^{n+1} - v_m^{n+1}}{h} - \frac{v_{m+1}^n - v_m^n}{h}\Big) = \frac{b}{h^2}\Big(\frac{v_{m+2}^n - v_{m+1}^n}{h} - 2\,\frac{v_{m+1}^n - v_m^n}{h} + \frac{v_m^n - v_{m-1}^n}{h}\Big).

Or, in simpler language,

\frac{(\delta_+ v)_m^{n+1} - (\delta_+ v)_m^n}{k} = b\,\frac{(\delta_+ v)_{m+1}^n - 2(\delta_+ v)_m^n + (\delta_+ v)_{m-1}^n}{h^2},

meaning \|\delta_+ v^{n+1}\| \le \|\delta_+ v^n\|.

The inequality (2.1) works for all time steps:

\|v^{n+1}\|^2 + \frac{Ck}{4\mu}\,\|\delta_+ v^n\|^2 \le \|v^n\|^2
\|v^n\|^2 + \frac{Ck}{4\mu}\,\|\delta_+ v^{n-1}\|^2 \le \|v^{n-1}\|^2
\vdots
\|v^1\|^2 + \frac{Ck}{4\mu}\,\|\delta_+ v^0\|^2 \le \|v^0\|^2,

which we sum up and cancel common terms to obtain

\|v^{n+1}\|^2 + \frac{Ck}{4\mu}\sum_{j=0}^{n}\|\delta_+ v^j\|^2 \le \|v^0\|^2 \;\Rightarrow\; \|v^{n+1}\|^2 + \frac{Ck(n+1)}{4\mu}\,\|\delta_+ v^{n+1}\|^2 \le \|v^0\|^2 \;\Rightarrow\; \|\delta_+ v^{n+1}\|^2 \le \frac{4\mu}{Ct}\,\|v^0\|^2

(using \|\delta_+ v^{n+1}\| \le \|\delta_+ v^j\| in the second step, and t = (n+1)k in the third).

This means that the numerical solution will smooth out—as long as the scheme is dissipative.
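This smoothing can be observed in a few lines of Python (the grid size, random seed, and the choice of forward time, centered space with bμ = 1/4 are illustrative assumptions, not part of the notes):

```python
import numpy as np

def ftcs_step(v, bmu):
    """One forward-time centered-space step for u_t = b u_xx on a periodic grid."""
    return v + bmu * (np.roll(v, -1) - 2 * v + np.roll(v, 1))

def delta_plus_norm(v, h):
    """Discrete L2 norm of the forward difference delta_+ v."""
    d = (np.roll(v, -1) - v) / h
    return np.sqrt(h * np.sum(d ** 2))

h = 1.0 / 100
bmu = 0.25                      # b*mu < 1/2, so the scheme is dissipative of order 2
rng = np.random.default_rng(0)
v = rng.standard_normal(100)    # rough (non-smooth) initial data

norms = [delta_plus_norm(v, h)]
for _ in range(200):
    v = ftcs_step(v, bmu)
    norms.append(delta_plus_norm(v, h))
# ||delta_+ v^n|| decays monotonically: the high frequencies are damped first
```

The monotone decay of `norms` is exactly the statement \|\delta_+ v^{n+1}\| \le \|\delta_+ v^n\| proved above.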


2.2 The Du Fort–Frankel Scheme

This is an example of an explicit and unconditionally stable scheme for u_t = bu_{xx}.

The problem with schemes like forward time, centered space is that they are stable only for b\mu = bk/h^2 \le \frac12, which puts a terrible restriction, k \le \frac{h^2}{2b}, on the timestep. The Du Fort–Frankel scheme,

v_m^{n+1} - v_m^{n-1} = 2b\mu\big(v_{m+1}^n - (v_m^{n+1} + v_m^{n-1}) + v_{m-1}^n\big),

is a slight modification of the unstable Leap-Frog scheme. We rewrite the Du Fort–Frankel scheme as

(1 + 2b\mu)v_m^{n+1} - (1 - 2b\mu)v_m^{n-1} = 2b\mu(v_{m+1}^n + v_{m-1}^n).

To study the stability, we substitute v_m^n = g^n e^{imh\xi} to get

(1 + 2b\mu)g^2 - (1 - 2b\mu) = 2b\mu(e^{ih\xi} + e^{-ih\xi})g,

which implies

g_\pm = \frac{2b\mu\cos(h\xi) \pm \sqrt{1 - 4b^2\mu^2\sin^2(h\xi)}}{1 + 2b\mu}.

The scheme is not dissipative, since g_-(\pi) = -1. To determine stability we consider two cases:

• If 1 - 4b^2\mu^2\sin^2(h\xi) \ge 0, then |g_\pm| \le \frac{2b\mu|\cos(h\xi)| + 1}{1 + 2b\mu} \le \frac{2b\mu + 1}{1 + 2b\mu} = 1.

• If 1 - 4b^2\mu^2\sin^2(h\xi) < 0, then |g_\pm|^2 = \frac{(2b\mu\cos(h\xi))^2 + 4b^2\mu^2\sin^2(h\xi) - 1}{(1 + 2b\mu)^2} = \frac{4b^2\mu^2 - 1}{(1 + 2b\mu)^2} = \frac{2b\mu - 1}{2b\mu + 1} \le 1.

In addition, we do not want double roots on the unit circle. A double root occurs when 1 - 4b^2\mu^2\sin^2(h\xi) = 0, but then |g_\pm| = \frac{2b\mu|\cos(h\xi)|}{1 + 2b\mu} < 1. So we have stability for any value of \mu.

But how is that possible? The catch is in the consistency. In order for the scheme to be consistent we must have k/h → 0, as we will now demonstrate. Rewrite Du Fort–Frankel as

\frac{v_m^{n+1} - v_m^{n-1}}{2k} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2} - b\,\frac{v_m^{n+1} + v_m^{n-1} - 2v_m^n}{h^2},

then expand in Taylor series to see that it approximates

u_t + \frac{k^2}{6}u_{ttt} = b\Big(u_{xx} + \frac{h^2}{12}u_{xxxx}\Big) - b\,\frac{k^2}{h^2}\Big(u_{tt} + \frac{k^2}{12}u_{tttt}\Big).

Now think numerically. For hyperbolic systems we could (at best) hope for k/h ≈ 1. However, if we use Du Fort–Frankel with such a timestep, k = h, the solution will not converge to the solution of u_t = bu_{xx}, but instead to the solution of bu_{tt} + u_t = bu_{xx} (i.e., the solution to a wave equation). This was not the purpose of the exercise. So the scheme will only converge to the solution of u_t = bu_{xx} if k/h → 0. Even so, the truncation error will be dominated by b\frac{k^2}{h^2}u_{tt}, which is not small unless k/h^2 is constant; but then we are back where we started, with the same restrictions as the ones for forward in time, centered in space.

We do, of course, have two implicit schemes: backward in time, centered in space (which is O(k + h^2) and dissipative) and Crank–Nicolson (which is O(k^2 + h^2), and not dissipative if k/h is constant).
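A short experiment with the Du Fort–Frankel recursion shows the unconditional stability in action (the periodic grid, equal starting levels, and all parameters below are illustrative choices):

```python
import numpy as np

def du_fort_frankel(v0, b, k, h, steps):
    """Du Fort-Frankel for u_t = b u_xx on a periodic grid:
    (1+2bmu) v^{n+1} = (1-2bmu) v^{n-1} + 2bmu (v^n_{m+1} + v^n_{m-1}).
    The three-level recursion is started with two equal levels."""
    bmu = b * k / h ** 2
    old, cur = v0.copy(), v0.copy()
    for _ in range(steps):
        new = ((1 - 2 * bmu) * old
               + 2 * bmu * (np.roll(cur, -1) + np.roll(cur, 1))) / (1 + 2 * bmu)
        old, cur = cur, new
    return cur

h = 1.0 / 50
x = np.arange(50) * h
v0 = np.sin(2 * np.pi * x)

# b*mu = 10: twenty times over the forward-time limit b*mu <= 1/2, yet no blowup
v = du_fort_frankel(v0, b=1.0, k=10 * h ** 2, h=h, steps=500)
```

With b\mu = 10 every Fourier mode of this initial data falls in the complex-root case, so each step damps by \sqrt{(2b\mu-1)/(2b\mu+1)}, and the solution decays instead of exploding.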


2.3 The Convection-Diffusion Equation: u_t + au_x = bu_{xx}

If a = 0 we have the heat equation, and if b = 0 we have the wave equation. Define a new function w such that w(t, x − at) = u(t, x), i.e., w(t, x) = u(t, x + at). Then

bw_{xx} = bu_{xx} = u_t + au_x = w_t.

So u is simply the solution of the heat equation, translated with speed a. The problem occurs when the viscosity coefficient b is very small compared to a. Then the obvious numerical methods trick you. We have a choice of

\frac{v_m^{n+1} - v_m^n}{k} + a\,\frac{v_{m+1}^n - v_{m-1}^n}{2h} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}, \qquad (2.2)

\frac{v_m^{n+1} - v_m^n}{k} + a\,\frac{v_{m+1}^n - v_m^n}{h} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2},

\frac{v_m^{n+1} - v_m^n}{k} + a\,\frac{v_m^n - v_{m-1}^n}{h} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}. \qquad (2.3)

We expand in Taylor series to see that the orders of accuracy are

O(k + h^2), \quad O(k + h), \quad\text{and}\quad O(k + h),

respectively, which speaks strongly in favor of (2.2).

The heat equation has one nice property: the maximum at a later time is no larger than the maximum at an earlier time. We will copy that. Rewrite (2.2) as

v_m^{n+1} = \Big(b\mu - \frac{a\lambda}{2}\Big)v_{m+1}^n + (1 - 2b\mu)v_m^n + \Big(b\mu + \frac{a\lambda}{2}\Big)v_{m-1}^n
         = b\mu\Big(1 - \frac{a\lambda}{2b\mu}\Big)v_{m+1}^n + (1 - 2b\mu)v_m^n + b\mu\Big(1 + \frac{a\lambda}{2b\mu}\Big)v_{m-1}^n.

Let \alpha \equiv \frac{a\lambda}{2b\mu} = \frac{ah}{2b}. Now if all the coefficients are nonnegative (i.e., |\alpha| \le 1 and 2b\mu \le 1), then

|v_m^{n+1}| \le b\mu(1 - \alpha)\max_m|v_m^n| + (1 - 2b\mu)\max_m|v_m^n| + b\mu(1 + \alpha)\max_m|v_m^n|
           = [b\mu(1 - \alpha) + (1 - 2b\mu) + b\mu(1 + \alpha)]\max_m|v_m^n|
           = \max_m|v_m^n|.

The maximum is a decreasing function of time if |\alpha| \le 1, i.e., if \frac{|a|}{b}\cdot\frac{h}{2} \le 1, which is the same as

h \le \frac{2b}{|a|}. \qquad (2.4)

This will, of course, be satisfied eventually as h → 0, but who can wait that long? Say a = 10 and b = 10^{-2}; then h ≈ 10^{-3} is needed. And remember, for stability we must have \frac12 \ge b\mu = 10^{-2}\cdot k/(10^{-3})^2. This implies k \approx \frac12\cdot10^{-4}, which is terribly small.

Now look at (2.3) instead:

v_m^{n+1} - v_m^n + a\lambda(v_m^n - v_{m-1}^n) = b\mu(v_{m+1}^n - 2v_m^n + v_{m-1}^n),

which we rewrite as

v_m^{n+1} = (b\mu + a\lambda)v_{m-1}^n + (1 - 2b\mu - a\lambda)v_m^n + b\mu\,v_{m+1}^n
         = b\mu\Big(1 + \frac{a\lambda}{b\mu}\Big)v_{m-1}^n + \Big(1 - 2b\mu\Big(1 + \frac{a\lambda}{2b\mu}\Big)\Big)v_m^n + b\mu\,v_{m+1}^n
         = b\mu(1 + 2\alpha)v_{m-1}^n + (1 - 2b\mu(1 + \alpha))v_m^n + b\mu\,v_{m+1}^n.


Say a > 0, so \alpha > 0. Now the requirement for max-norm stability becomes

2b\mu(1 + \alpha) \le 1, \quad\text{or}\quad 2b\mu + a\lambda \le 1,

which is a lot less restrictive than (2.4). We can pick h to be 10^{-2} instead of 10^{-3}, i.e., 10 times larger. We try k = 10^{-3}/5, h = 10^{-2}, a = 10, b = 10^{-2}. Then

\frac{2bk}{h^2} + \frac{ak}{h} = \frac{2\cdot10^{-2}\cdot10^{-3}/5}{(10^{-2})^2} + \frac{10\cdot10^{-3}/5}{10^{-2}} = \frac{2}{50} + \frac{1}{5} < 1.

So we increased the timestep (by a factor of four here). But at what price? We can rewrite (2.3) as

\frac{v_m^{n+1} - v_m^n}{k} + a\,\frac{v_{m+1}^n - v_{m-1}^n}{2h} = \Big(b + \frac{ah}{2}\Big)\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}.

We introduced an artificial viscosity \frac{ah}{2} = b\alpha, which comes from our numerical method. Therefore solving u_t + au_x = b(1 + \alpha)u_{xx} using (2.2) is equivalent to solving u_t + au_x = bu_{xx} using (2.3).
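This equivalence is exact at the discrete level and easy to verify (the grid, parameters, and helper names below are illustrative choices):

```python
import numpy as np

def centered_step(v, a, b, k, h):
    """One step of scheme (2.2): centered convection, centered diffusion (periodic)."""
    lam, mu = k / h, k / h ** 2
    return (v - 0.5 * a * lam * (np.roll(v, -1) - np.roll(v, 1))
            + b * mu * (np.roll(v, -1) - 2 * v + np.roll(v, 1)))

def upwind_step(v, a, b, k, h):
    """One step of scheme (2.3): one-sided convection (a > 0), centered diffusion."""
    lam, mu = k / h, k / h ** 2
    return (v - a * lam * (v - np.roll(v, 1))
            + b * mu * (np.roll(v, -1) - 2 * v + np.roll(v, 1)))

a, b, h, k = 10.0, 1e-2, 1e-2, 2e-4   # 2*b*mu + a*lam = 0.24 <= 1
rng = np.random.default_rng(1)
v = rng.standard_normal(64)

lhs = upwind_step(v, a, b, k, h)               # (2.3) with viscosity b
rhs = centered_step(v, a, b + a * h / 2, k, h)  # (2.2) with viscosity b + ah/2
```

The two updates agree to rounding, and since 2b\mu + a\lambda \le 1 here, the upwind step also obeys the max principle.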

2.4 Summary of Schemes for the Heat Equation u_t = bu_{xx}

Notation: \mu \equiv k/h^2, \theta = h\xi.

• Lax–Friedrichs: \frac{v_m^{n+1} - \frac12(v_{m+1}^n + v_{m-1}^n)}{k} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}.

• Lax–Wendroff: v_m^{n+1} = v_m^n + kb\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2} + \frac{k^2b^2}{2}\cdot\frac{v_{m+2}^n - 4v_{m+1}^n + 6v_m^n - 4v_{m-1}^n + v_{m-2}^n}{h^4}.

• Forward time, centered space: \frac{v_m^{n+1} - v_m^n}{k} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}; here g(h\xi) = 1 - 4b\mu\sin^2\frac\theta2; stable for b\mu \le \frac12; dissipative of order 2 if b\mu < \frac12.

• Backward time, centered space: \frac{v_m^{n+1} - v_m^n}{k} = b\,\frac{v_{m+1}^{n+1} - 2v_m^{n+1} + v_{m-1}^{n+1}}{h^2}; always stable; dissipative if \mu \ge c > 0.

• Leap-Frog: \frac{v_m^{n+1} - v_m^{n-1}}{2k} = b\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}; unstable.

• Du Fort–Frankel: \frac{v_m^{n+1} - v_m^{n-1}}{2k} = b\,\frac{v_{m+1}^n - (v_m^{n+1} + v_m^{n-1}) + v_{m-1}^n}{h^2}; here g_\pm = \frac{2b\mu\cos\theta \pm \sqrt{1 - 4b^2\mu^2\sin^2\theta}}{1 + 2b\mu}; always stable; not dissipative.

• Crank–Nicolson: \frac{v_m^{n+1} - v_m^n}{k} = \frac b2\Big(\frac{v_{m+1}^{n+1} - 2v_m^{n+1} + v_{m-1}^{n+1}}{h^2} + \frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}\Big); always stable; dissipative of order 2.


Chapter 3

Systems of PDEs in Higher Dimensions

3.1 The Equation ut + Aux = 0

We consider u_t + Au_x = 0, where u is a d-vector of functions and A is a d-by-d matrix, with initial condition u(0, x) = f(x) (also a d-vector). If A is symmetric, then A = Q\Lambda Q^T with Q^TQ = I, and

u_t + Q\Lambda Q^Tu_x = 0 \;\Rightarrow\; Q^Tu_t + \Lambda Q^Tu_x = 0 \;\Rightarrow\; w_t + \Lambda w_x = 0,

where w = Q^Tu. The matrix \Lambda is diagonal, so the problem falls apart into d independent equations,

(w_i)_t + \lambda_i(w_i)_x = 0.

The solution is w_i(t, x) = g_i(x - \lambda_it), where g(x) = Q^Tf(x). In general,

u(t, x) = Qw(t, x) = Q\cdot(g_1(x - \lambda_1t), \ldots, g_d(x - \lambda_dt))^T.

How do we generate and analyze a numerical scheme? Take Lax–Wendroff:

u(t + k) = u + ku_t + \frac{k^2}{2}u_{tt} = u - kAu_x + \frac{k^2}{2}A^2u_{xx}.

Therefore

v_m^{n+1} = v_m^n - kA\,\frac{v_{m+1}^n - v_{m-1}^n}{2h} + \frac{k^2}{2}A^2\,\frac{v_{m+1}^n - 2v_m^n + v_{m-1}^n}{h^2}
         = v_m^n - \frac\lambda2 A(v_{m+1}^n - v_{m-1}^n) + \frac{\lambda^2}{2}A^2(v_{m+1}^n - 2v_m^n + v_{m-1}^n).

Here v_m^n is a d-vector. We can use the same Fourier analysis as before; the Fourier transform of a vector means the Fourier transform of each component. Therefore, as before,

\hat v^{n+1}(\xi) = \Big(I - i\lambda A\sin(h\xi) - 2\lambda^2A^2\sin^2\frac{h\xi}{2}\Big)\hat v^n(\xi) = G(h\xi)\,\hat v^n(\xi) = \cdots = (G(h\xi))^{n+1}\,\hat v^0(\xi).

We call G the amplification matrix. Stability will mean

\|v^{n+1}\| \le (1 + Ck)\|v^n\|, \quad\text{or}\quad \|v^{n+1}\| \le \text{Const}\cdot\|v^0\|,


where C does not depend on h or k. Let's analyze (G(h\xi))^n. Assume A = T\Lambda T^{-1}. Then (after some very simple math)

G^n(h\xi) = T\Big(I - i\lambda\Lambda\sin(h\xi) - 2\lambda^2\Lambda^2\sin^2\frac{h\xi}{2}\Big)^nT^{-1}.

Therefore G^n(h\xi) is bounded if [I - i\lambda\Lambda\sin(h\xi) - 2\lambda^2\Lambda^2\sin^2\frac{h\xi}{2}]^n is. The latter is a diagonal matrix, meaning each element on the diagonal needs to be bounded. We have reduced the problem to the one-dimensional case (which requires \lambda|\lambda_j(A)| \le 1 for each j), therefore the stability condition is

\lambda\cdot\max_{1\le j\le d}|\lambda_j(A)| \le 1.
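This condition can be checked numerically by sampling the spectral radius of G(h\xi) over frequencies (the particular A, the frequency sampling, and the two CFL numbers below are illustrative choices):

```python
import numpy as np

def amplification_matrix(A, lam, theta):
    """G(theta) = I - i*lam*A*sin(theta) - 2*lam^2*A^2*sin^2(theta/2) for Lax-Wendroff."""
    d = A.shape[0]
    return (np.eye(d)
            - 1j * lam * A * np.sin(theta)
            - 2 * lam ** 2 * (A @ A) * np.sin(theta / 2) ** 2)

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # symmetric, eigenvalues +1 and -1
thetas = np.linspace(-np.pi, np.pi, 201)

def max_spectral_radius(lam):
    return max(np.max(np.abs(np.linalg.eigvals(amplification_matrix(A, lam, t))))
               for t in thetas)

rho_ok = max_spectral_radius(0.9)       # lam * max|lambda_j(A)| = 0.9 <= 1
rho_bad = max_spectral_radius(1.2)      # CFL condition violated
```

With the CFL condition satisfied the spectral radius never exceeds 1; with it violated, modes near \theta = \pi are amplified.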

3.2 The Equation ut + Aux = Bu

Take Lax–Wendroff for u_t + Au_x = 0 and just add the lower-order term:

v_m^{n+1} = v_m^n - \frac\lambda2 A(v_{m+1}^n - v_{m-1}^n) + \frac{\lambda^2}{2}A^2(v_{m+1}^n - 2v_m^n + v_{m-1}^n) + kBv_m^n.

The amplification matrix for the scheme becomes G + kB:

\hat v^{n+1}(\xi) = (G + kB)\,\hat v^n(\xi),

so we end up with the question of when |(G + kB)^n| is bounded by a constant, given that |G^n(h\xi)| \le C. We use Strang's idea:

(G + kB)^n = G^n + k\,(G^{n-1}B + G^{n-2}BG + \cdots + BG^{n-1})
           + k^2\,(G^{n-2}B^2 + G^{n-3}BGB + G^{n-4}BG^2B + \cdots + B^2G^{n-2})
           + \cdots
           + k^n\,(G^0BG^0B\cdots G^0BG^0).

Now estimate upward using |G^n| \le C to obtain

|(G + kB)^n| \le C + \binom n1 kC^2|B| + \binom n2 k^2C^3|B|^2 + \cdots
             \le C\Big(1 + \binom n1 kC|B| + \binom n2 k^2C^2|B|^2 + \cdots\Big)
             = C(1 + k|B|C)^n \le Ce^{nk|B|C} \le Ce^{T|B|C}

for all nk \le T. Therefore the condition for stability is once again |G^n| \le C.

3.3 The Equation ut + Aux + Buy = 0

The equation u_t + Au_x + Bu_y = 0, where A and B are d-by-d matrices, is a baby problem for an underlying three-dimensional problem which, linearized and reduced, leads to the above equation, namely Euler's equations of fluid mechanics. For a numerical scheme pick

\frac{v_{ml}^{n+1} - v_{ml}^{n-1}}{2k} + A\,\frac{v_{m+1,l}^n - v_{m-1,l}^n}{2h} + B\,\frac{v_{m,l+1}^n - v_{m,l-1}^n}{2h} = 0.

Using Fourier transforms in two dimensions we get

\hat v^{n+1}(\xi_1, \xi_2) - \hat v^{n-1}(\xi_1, \xi_2) = -2i\lambda[A\sin(h\xi_1) + B\sin(h\xi_2)]\,\hat v^n(\xi_1, \xi_2),


which is the same as

\begin{pmatrix}\hat v^{n+1}\\ \hat v^n\end{pmatrix} = \begin{pmatrix}-2i\lambda(A\sin\theta_1 + B\sin\theta_2) & I\\ I & 0\end{pmatrix}\begin{pmatrix}\hat v^n\\ \hat v^{n-1}\end{pmatrix} = G(\theta_1, \theta_2)\begin{pmatrix}\hat v^n\\ \hat v^{n-1}\end{pmatrix}.

So we have to study |G^n(\theta_1, \theta_2)| and see when it is bounded. Depending on the assumptions one makes, this can be easy or hard. Make the simplifying assumption that A and B are simultaneously diagonalizable: A = P\Lambda'P^{-1}, B = P\Lambda''P^{-1}. Then

A\sin(h\xi_1) + B\sin(h\xi_2) = P\,\mathrm{diag}\big(\lambda_i'\sin(h\xi_1) + \lambda_i''\sin(h\xi_2)\big)P^{-1}

and we can write

G = \begin{pmatrix}P & \\ & P\end{pmatrix}\begin{pmatrix}-2i\lambda(\Lambda'\sin\theta_1 + \Lambda''\sin\theta_2) & I\\ I & 0\end{pmatrix}\begin{pmatrix}P^{-1} & \\ & P^{-1}\end{pmatrix},

so the whole analysis breaks down to the analysis of the scalar case. Without repeating the previous analysis, we want

\lambda|\lambda_i'\sin\theta_1 + \lambda_i''\sin\theta_2| < 1 - \epsilon

for all \theta_1 = h\xi_1, \theta_2 = h\xi_2 and i = 1, 2, \ldots, d. Our condition for stability is then

\lambda\max_i(|\lambda_i'| + |\lambda_i''|) < 1 - \epsilon.

In general, if the assumption is only that A and B are symmetric, then the stability criterion is the same, but the proof is a bit more complicated.


3.4 The equation ut = b1uxx + b2uyy

Introduce the notation

A_1u = b_1\frac{\partial^2}{\partial x^2}u, \qquad A_2u = b_2\frac{\partial^2}{\partial y^2}u.

The discrete versions of these operators are

A_{1h}v_{ml}^n = b_1\frac{v_{m+1,l}^n - 2v_{ml}^n + v_{m-1,l}^n}{h^2} \quad\text{and}\quad A_{2h}v_{ml}^n = b_2\frac{v_{m,l+1}^n - 2v_{ml}^n + v_{m,l-1}^n}{h^2},

where u(nk, mh, lh) \sim v_{ml}^n (assume \Delta x = \Delta y = h and \Delta t = k). Next we attack u_t = b_1u_{xx} + b_2u_{yy} = A_1u + A_2u using the Crank–Nicolson idea:

\frac{v_{ml}^{n+1} - v_{ml}^n}{k} = (A_{1h} + A_{2h})\,\frac{v_{ml}^{n+1} + v_{ml}^n}{2},

which is equivalent to

\Big(I - \frac k2(A_{1h} + A_{2h})\Big)v^{n+1} = \Big(I + \frac k2(A_{1h} + A_{2h})\Big)v^n,

and also equivalent to

\Big(I - \frac k2 A_{1h}\Big)\Big(I - \frac k2 A_{2h}\Big)v^{n+1} = \Big(I + \frac k2 A_{1h}\Big)\Big(I + \frac k2 A_{2h}\Big)v^n + \frac{k^2}{4}A_{1h}A_{2h}(v^{n+1} - v^n). \qquad (*)

The last term in (*) is O(k^3) (since v^{n+1} - v^n = O(k)), so dropping it does not affect the order of accuracy of the scheme.

Our scheme thus becomes

\Big(I - \frac k2 A_{1h}\Big)\Big(I - \frac k2 A_{2h}\Big)v^{n+1} = \Big(I + \frac k2 A_{1h}\Big)\Big(I + \frac k2 A_{2h}\Big)v^n.


3.5 ADI methods

Solve the above as

\Big(I - \frac k2 A_{1h}\Big)v^{n+1/2} = \Big(I + \frac k2 A_{2h}\Big)v^n

\Big(I - \frac k2 A_{2h}\Big)v^{n+1} = \Big(I + \frac k2 A_{1h}\Big)v^{n+1/2}.

This is the Peaceman–Rachford algorithm, an ADI method (alternating direction implicit method), meaning that the two-dimensional problem has been reduced to two one-dimensional implicit problems by factoring the scheme.

Let's now perform a stability analysis of this scheme. The second half-step reads, in Fourier space,

\Big(1 - b_2\frac{k}{2h^2}(e^{ih\xi_2} - 2 + e^{-ih\xi_2})\Big)\hat v^{n+1} = \Big(1 + b_1\frac{k}{2h^2}(e^{ih\xi_1} - 2 + e^{-ih\xi_1})\Big)\hat v^{n+1/2},

i.e.,

\big(1 + 2\mu b_2\sin^2(h\xi_2/2)\big)\hat v^{n+1} = \big(1 - 2\mu b_1\sin^2(h\xi_1/2)\big)\hat v^{n+1/2},

so

\hat v^{n+1} = \frac{1 - 2\mu b_1\sin^2(h\xi_1/2)}{1 + 2\mu b_2\sin^2(h\xi_2/2)}\,\hat v^{n+1/2}.

Similarly,

\hat v^{n+1/2} = \frac{1 - 2\mu b_2\sin^2(h\xi_2/2)}{1 + 2\mu b_1\sin^2(h\xi_1/2)}\,\hat v^n,

meaning

\hat v^{n+1} = \frac{1 - 2\mu b_1\sin^2(h\xi_1/2)}{1 + 2\mu b_1\sin^2(h\xi_1/2)}\cdot\frac{1 - 2\mu b_2\sin^2(h\xi_2/2)}{1 + 2\mu b_2\sin^2(h\xi_2/2)}\cdot\hat v^n.

Since |1 - x|/(1 + x) \le 1 for any x \ge 0, we conclude |\hat v^{n+1}| \le |\hat v^n|.

Now using Parseval,

\sum_{m,l}|v_{ml}^{n+1}|^2\cdot h^2 = \int_{-\pi/h}^{\pi/h}\int_{-\pi/h}^{\pi/h}|\hat v^{n+1}(\xi_1, \xi_2)|^2\,d\xi_1\,d\xi_2 \le \int_{-\pi/h}^{\pi/h}\int_{-\pi/h}^{\pi/h}|\hat v^n(\xi_1, \xi_2)|^2\,d\xi_1\,d\xi_2 = \sum_{m,l}|v_{ml}^n|^2\cdot h^2.

Therefore we have stability for any value of \mu, and the order of accuracy of the scheme is O(k^2 + h^2). Thus we can take k = h and the scheme is efficient and accurate.

The Douglas–Rachford method starts with the backward-time, centered-space scheme for u_t = A_1u + A_2u,

(I - kA_{1h} - kA_{2h})v^{n+1} = v^n,

to obtain (after dropping an O(k^3) term)

(I - kA_{1h})(I - kA_{2h})v^{n+1} = (I + k^2A_{1h}A_{2h})v^n.

The method is

(I - kA_{1h})v^{n+1/2} = (I + kA_{2h})v^n
(I - kA_{2h})v^{n+1} = v^{n+1/2} - kA_{2h}v^n.
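The Peaceman–Rachford sweeps can be sketched in a few lines of Python (dense solves instead of tridiagonal ones, and the grid size, coefficients, timestep, and random initial data are all illustrative choices, not from the notes):

```python
import numpy as np

def lap1d(n, h):
    """1-D second-difference matrix with zero Dirichlet boundaries (dense, for clarity)."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h ** 2

def peaceman_rachford_step(V, A1, A2, k):
    """One Peaceman-Rachford step for u_t = b1 u_xx + b2 u_yy; V holds the interior
    unknowns, A1 acts on the first (x) index and A2 on the second (y) index."""
    n = V.shape[0]
    I = np.eye(n)
    Vhalf = np.linalg.solve(I - k / 2 * A1, V + k / 2 * V @ A2)  # implicit x-sweep
    rhs = (I + k / 2 * A1) @ Vhalf
    return np.linalg.solve(I - k / 2 * A2, rhs.T).T              # implicit y-sweep

n = 20
h = 1.0 / (n + 1)
A1 = 1.0 * lap1d(n, h)        # b1 = 1
A2 = 2.0 * lap1d(n, h)        # b2 = 2

rng = np.random.default_rng(3)
V = rng.standard_normal((n, n))
k = 50 * h ** 2               # mu = 50: hopeless for an explicit scheme, fine here

norms = [np.linalg.norm(V)]
for _ in range(20):
    V = peaceman_rachford_step(V, A1, A2, k)
    norms.append(np.linalg.norm(V))
```

The discrete L^2 norm decreases at every step, even with \mu = 50, which is the unconditional stability just proved.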


The Mitchell–Fairweather method is second-order accurate in time and fourth-order accurate in space. Recall the operator \delta^2:

\delta^2f(x) \equiv \frac{f(x + h) - 2f(x) + f(x - h)}{h^2}

and

\delta_x^2v_{ml}^n = \frac{v_{m+1,l}^n - 2v_{ml}^n + v_{m-1,l}^n}{h^2} \quad\text{and}\quad \delta_y^2v_{ml}^n = \frac{v_{m,l+1}^n - 2v_{ml}^n + v_{m,l-1}^n}{h^2}.

Then

\delta^2f = f'' + \frac{h^2}{12}f'''' + O(h^4) = f'' + O(h^2),

so f'' = \delta^2f + O(h^2). Then

\delta^2f = f'' + \frac{h^2}{12}f'''' + O(h^4)
         = f'' + \frac{h^2}{12}(f'')'' + O(h^4)
         = f'' + \frac{h^2}{12}\big(\delta^2f'' + O(h^2)\big) + O(h^4)
         = f'' + \frac{h^2}{12}\delta^2f'' + O(h^4)
         = \Big(1 + \frac{h^2}{12}\delta^2\Big)\frac{d^2}{dx^2}f + O(h^4). \qquad (3.1)
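A quick numerical check of (3.1), with f = sin as an illustrative choice:

```python
import numpy as np

def delta2(f, x, h):
    """The centered second difference (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

f = np.sin
def fpp(x):                 # f'' for f = sin
    return -np.sin(x)

x = 0.7

def residual(h):
    """delta2 f - (1 + h^2/12 delta2) f'' at x; (3.1) says this is O(h^4)."""
    return abs(delta2(f, x, h) - (fpp(x) + h ** 2 / 12 * delta2(fpp, x, h)))

r1, r2 = residual(0.1), residual(0.05)
# halving h should shrink the residual by about 2^4 = 16
```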

Now start with the Peaceman–Rachford idea for u_t = A_1u + A_2u:

\Big(I - \frac k2 A_1\Big)\Big(I - \frac k2 A_2\Big)u^{n+1} = \Big(I + \frac k2 A_1\Big)\Big(I + \frac k2 A_2\Big)u^n + O(k^3).

Multiply both sides by

\Big(1 + \frac{h^2}{12}\delta_x^2\Big)\Big(1 + \frac{h^2}{12}\delta_y^2\Big)

and replace \big(1 + \frac{h^2}{12}\delta_x^2\big)\frac{\partial^2}{\partial x^2} by \delta_x^2 + O(h^4) (see (3.1)). Similar changes are made for the derivatives with respect to y. The result is

\Big(1 + \frac{h^2}{12}\delta_x^2 - \frac k2 b_1\delta_x^2\Big)\Big(1 + \frac{h^2}{12}\delta_y^2 - \frac k2 b_2\delta_y^2\Big)u^{n+1} = \Big(1 + \frac{h^2}{12}\delta_x^2 + \frac k2 b_1\delta_x^2\Big)\Big(1 + \frac{h^2}{12}\delta_y^2 + \frac k2 b_2\delta_y^2\Big)u^n + O(k^3) + O(kh^4).

We obtain the Mitchell–Fairweather scheme:

\Big(1 - \frac12\Big(b_1\mu_1 - \frac16\Big)h^2\delta_x^2\Big)v^{n+1/2} = \Big(1 + \frac12\Big(b_2\mu_2 + \frac16\Big)h^2\delta_y^2\Big)v^n

\Big(1 - \frac12\Big(b_2\mu_2 - \frac16\Big)h^2\delta_y^2\Big)v^{n+1} = \Big(1 + \frac12\Big(b_1\mu_1 + \frac16\Big)h^2\delta_x^2\Big)v^{n+1/2},

where \mu_1 = \mu_2 = k/h^2 here (the subscripts allow for different mesh spacings in x and y).


3.6 Boundary conditions for ADI methods

One can obtain the boundary conditions for the intermediate step v^{n+1/2} by solving for it using the boundary conditions at steps n and n + 1.

For example, combining the equations of the Peaceman–Rachford method,

\Big(I - \frac k2 A_1\Big)u^{n+1/2} = \Big(I + \frac k2 A_2\Big)u^n

\Big(I - \frac k2 A_2\Big)u^{n+1} = \Big(I + \frac k2 A_1\Big)u^{n+1/2}

(note that we use u's, not v's, and keep everything in operator form for the moment), so as to eliminate the A_1 terms, we get

u^{n+1/2} = \frac12\Big(I + \frac k2 A_2\Big)u^n + \frac12\Big(I - \frac k2 A_2\Big)u^{n+1}.

Now if the indices i and j in v_{ij}^n range from 1 to m, the desired boundary conditions at v_{1i}^{n+1/2} and v_{mi}^{n+1/2} for i = 2, 3, \ldots, m - 1 are computed as

v_{1i}^{n+1/2} = \frac12\Big(1 + \frac k2 A_2\Big)v_{1i}^n + \frac12\Big(1 - \frac k2 A_2\Big)v_{1i}^{n+1}
= \frac{b_2\mu v_{1,i-1}^n + 2(1 - b_2\mu)v_{1i}^n + b_2\mu v_{1,i+1}^n}{4} + \frac{-b_2\mu v_{1,i-1}^{n+1} + 2(1 + b_2\mu)v_{1i}^{n+1} - b_2\mu v_{1,i+1}^{n+1}}{4},

and similarly for v_{mi}^{n+1/2} for i = 2, 3, \ldots, m - 1.

The boundary condition

v_{ij}^{n+1/2} = u(t_{n+1/2}, x_i, y_j)

is only first-order accurate, and if used with the Peaceman–Rachford method (or another similarly second-order accurate method) will result in the overall accuracy being only first order.

Boundary conditions for the Mitchell–Fairweather scheme are obtained as follows. First we eliminate the \delta_x^2v^{n+1/2} terms: take \big(b_1\mu_1 + \frac16\big) times the first equation plus \big(b_1\mu_1 - \frac16\big) times the second (written as \big(1 + \frac12(b_1\mu_1 + \frac16)h^2\delta_x^2\big)v^{n+1/2} = \big(1 - \frac12(b_2\mu_2 - \frac16)h^2\delta_y^2\big)v^{n+1}), so that the \delta_x^2 terms cancel. This gives

v_{1i}^{n+1/2} = \frac{\big(b_1\mu_1 + \frac16\big)\Big(1 + \frac12\big(b_2\mu_2 + \frac16\big)h^2\delta_y^2\Big)v_{1i}^n + \big(b_1\mu_1 - \frac16\big)\Big(1 - \frac12\big(b_2\mu_2 - \frac16\big)h^2\delta_y^2\Big)v_{1i}^{n+1}}{2b_1\mu_1}

for i = 2, 3, \ldots, m - 1, and similarly for the other boundary.


Chapter 4

Elliptic Equations

4.1 Steady-State Heat Equation

The steady-state heat equation is

u_{xx} + u_{yy} = f(x, y). \qquad (4.1)

We solve it numerically by introducing a rectangular grid on a finite domain \Omega (which need not be rectangular).

The numerical scheme is then

\frac{v_{m+1,l} - 2v_{ml} + v_{m-1,l}}{h^2} + \frac{v_{m,l+1} - 2v_{ml} + v_{m,l-1}}{h^2} = f_{ml}, \qquad (4.2)

with boundary values v_{ml} specified on the boundary of \Omega. We write \Omega_h for the set of grid points in \Omega, \partial\Omega for the boundary of \Omega, and \partial\Omega_h for the set of grid points on the boundary of \Omega.

Existence and uniqueness of the solution of (4.2)

Theorem 5. The equation (4.2) has a unique solution.

Proof. Assume there is another solution to (4.2); call it w_{ml}, and let e_{ml} = v_{ml} - w_{ml}. Subtract (4.2) for v and w to obtain

\frac{e_{m+1,l} - 2e_{ml} + e_{m-1,l}}{h^2} + \frac{e_{m,l+1} - 2e_{ml} + e_{m,l-1}}{h^2} = 0,

with e_{ml} = 0 on the boundary. Assume now that e_{ml} attains its maximum at an inner point (m, l) of \Omega. Then

e_{ml} = \frac{e_{m+1,l} + e_{m-1,l} + e_{m,l+1} + e_{m,l-1}}{4}.

But e_{ml} \ge e_{m\pm1,l}, e_{m,l\pm1}, so equality is only possible if e_{ml} = e_{m\pm1,l} = e_{m,l\pm1}. Continuing that way we observe that the maximum must also occur on the boundary, where e_{ml} = 0. Thus the maximum of e_{ml} is zero and occurs on the boundary. Similarly for the minimum. Thus e_{ml} \equiv 0 for all m, l, which implies v_{ml} = w_{ml} for all m, l.

The equations (4.2) represent a linear system for v_{ml}, which we can write as Av = f. Since Ae = 0 has only the solution e = 0, we conclude that \det A \ne 0 and thus Av = f has a unique solution.

Convergence

Next we study convergence, namely: does v_{ml} \to u(x_m, y_l) as h \to 0? Let

\Delta_hv \equiv \frac{v_{m+1,l} - 2v_{ml} + v_{m-1,l}}{h^2} + \frac{v_{m,l+1} - 2v_{ml} + v_{m,l-1}}{h^2}.


To prove convergence we need two results. The first is analogous to the argument above:

\Delta_hv \ge 0 \;\Rightarrow\; \max_{\Omega_h}v_{ml} = \max_{\partial\Omega_h}v_{ml},

and is called the Discrete Maximum Principle. Second, we establish the inequality

\|v\|_{\infty,\Omega_h} \le \frac18\,\|\Delta_hv\|_{\infty,\Omega_h},

where \Omega is the unit square, v_{ml} = 0 on \partial\Omega_h, and \|v\|_{\infty,\Omega_h} \equiv \max_{(m,l)\in\Omega_h}|v_{ml}|. The constant \frac18 is connected with the shape of the domain. Start with the obvious

-\|f\|_{\infty,\Omega_h} \le f_{ml} \le \|f\|_{\infty,\Omega_h}. \qquad (4.3)

Define

w_{ml} = \frac14\Big[\Big(x_m - \frac12\Big)^2 + \Big(y_l - \frac12\Big)^2\Big].

Check that (a) \Delta_hw_{ml} = 1, and (b) w_{ml} \le \frac18 on \partial\Omega_h (done in class).

Rewrite (4.3) as

-\|f\|_{\infty,\Omega_h}\,\Delta_hw_{ml} \le f_{ml} \le \|f\|_{\infty,\Omega_h}\,\Delta_hw_{ml}.

Therefore, since \Delta_hv = f_{ml},

\Delta_h(v + \|f\|_{\infty,\Omega_h}w) \ge 0 \qquad (4.4)
\Delta_h(\|f\|_{\infty,\Omega_h}w - v) \ge 0. \qquad (4.5)

Now from the discrete maximum principle we have

\max_{\Omega_h}(v_{ml} + \|f\|_{\infty,\Omega_h}w_{ml}) \le \max_{\partial\Omega_h}(v_{ml} + \|f\|_{\infty,\Omega_h}w_{ml}).

Then

v_{ml} \le v_{ml} + \|f\|_{\infty,\Omega_h}w_{ml} \qquad\text{since } w \ge 0
       \le \max_{\Omega_h}(v_{ml} + \|f\|_{\infty,\Omega_h}w_{ml})
       \le \max_{\partial\Omega_h}(v_{ml} + \|f\|_{\infty,\Omega_h}w_{ml})
       \le \frac18\,\|f\|_{\infty,\Omega_h} \qquad\text{since } v_{ml} = 0 \text{ on } \partial\Omega_h \text{ and } w_{ml} \le \frac18 \text{ on } \partial\Omega_h.

From (4.5) (in class) we get -\frac18\|f\|_{\infty,\Omega_h} \le v_{ml}; therefore |v_{ml}| \le \frac18\|f\|_{\infty,\Omega_h}, and we have

\max_{\Omega_h}|v_{ml}| \le \frac18\,\|f\|_{\infty,\Omega_h} = \frac18\,\|\Delta_hv\|_{\infty,\Omega_h}.

Taylor series implies

\Delta_hu_{ml} = (\Delta u)(x_m, y_l) + \underbrace{\frac{h^2}{12}\big(\partial_x^4u(x_m + \theta_{ml}h, y_l) + \partial_y^4u(x_m, y_l + \theta_{ml}h)\big)}_{\text{truncation error}} = f_{ml} + \frac{h^2}{12}(\partial_x^4u + \partial_y^4u).

As usual define e_{ml} = \text{computed} - \text{exact} = v_{ml} - u_{ml}; then

\Delta_he_{ml} = \Delta_hv_{ml} - \Delta_hu_{ml}
            = f_{ml} - f_{ml} - \frac{h^2}{12}(\partial_x^4u + \partial_y^4u)
            = -\frac{h^2}{12}(\partial_x^4u + \partial_y^4u).


Note that e_{ml} = 0 on \partial\Omega_h, thus

\max_{\Omega_h}|e_{ml}| \le \frac18\max_{\Omega_h}|\Delta_he_{ml}|
                     \le \frac{h^2}{8\cdot12}\max_{\Omega_h}|\partial_x^4u + \partial_y^4u|
                     \le \frac{h^2}{48}\max_{\bar\Omega}\big(|\partial_x^4u(x, y)|, |\partial_y^4u(x, y)|\big).

So the error goes to 0 as h \to 0. Remember that the constant \frac18 was married to the unit square.
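Both the \frac18 bound and the O(h^2) convergence can be seen numerically. The sketch below (the manufactured solution u = \sin\pi x\,\sin\pi y, the dense assembly of the system, and the grid sizes are illustrative choices) solves (4.2) on the unit square:

```python
import numpy as np

def solve_poisson(N):
    """Solve u_xx + u_yy = f with scheme (4.2) on the unit square, zero boundary data,
    for the manufactured solution u = sin(pi x) sin(pi y); return (max error, h)."""
    h = 1.0 / N
    x = np.arange(1, N) * h                      # interior grid points
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = -2 * np.pi ** 2 * u_exact                # so that Delta u = f exactly
    n = N - 1
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    A = (np.kron(T, I) + np.kron(I, T)) / h ** 2   # the 5-point Laplacian, dense
    v = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return np.max(np.abs(v - u_exact)), h

e1, h1 = solve_poisson(10)
e2, h2 = solve_poisson(20)
```

For this u we have \partial_x^4u = \partial_y^4u = \pi^4u, so the error bound above becomes \max|e| \le \frac{h^2\pi^4}{48}, and halving h should cut the error by about 4.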

The continuous case

We will now prove that the equation u_{xx} + u_{yy} = 0 with u(x, y) = 0 on \partial\Omega has a unique solution, u \equiv 0. We start with the theorem of Gauss:

\int_\Omega u_{x_i}\,dx = \int_{\partial\Omega} u\nu_i\,ds,

where \nu_i is the normal in the ith direction. Since uu_{xx} + uu_{yy} = 0 we have (uu_x)_x + (uu_y)_y - u_x^2 - u_y^2 = 0, which we integrate over \Omega using Gauss' theorem to obtain

\int_{\partial\Omega}(uu_x\nu_1 + uu_y\nu_2) - \int_\Omega(u_x^2 + u_y^2) = 0.

The first integral is 0, since u = 0 on \partial\Omega; therefore u_x = u_y = 0 on \Omega, which along with u = 0 on \partial\Omega implies u \equiv 0 on \Omega.


4.2 Numerical methods for u_{xx} + u_{yy} = f

We have

v_{ml} - \frac14(v_{m-1,l} + v_{m+1,l} + v_{m,l-1} + v_{m,l+1}) = -\frac{h^2}{4}f_{ml},

where 0 \le m, l \le N. The above represents a linear system with (N - 1)^2 unknowns v_{ml}, 1 \le m, l \le N - 1. Write it as Av = b. Solving Av = b directly using Gaussian elimination would cost O(((N - 1)^2)^3) = O(N^6), which is prohibitive. We'd better use the banded structure of the system. For example, when N = 4 with zero boundary conditions, and ordering the unknowns v_{11}, v_{12}, v_{13}, v_{21}, \ldots, v_{33}, we have

[   1  -1/4    0  -1/4    0    0    0    0    0 ] [v11]          [f11]
[-1/4    1  -1/4    0  -1/4    0    0    0    0 ] [v12]          [f12]
[   0  -1/4    1    0    0  -1/4    0    0    0 ] [v13]          [f13]
[-1/4    0    0    1  -1/4    0  -1/4    0    0 ] [v21]          [f21]
[   0  -1/4    0  -1/4    1  -1/4    0  -1/4    0 ] [v22]  = -h^2/4 [f22]     (4.6)
[   0    0  -1/4    0  -1/4    1    0    0  -1/4 ] [v23]          [f23]
[   0    0    0  -1/4    0    0    1  -1/4    0 ] [v31]          [f31]
[   0    0    0    0  -1/4    0  -1/4    1  -1/4 ] [v32]          [f32]
[   0    0    0    0    0  -1/4    0  -1/4    1 ] [v33]          [f33]

or

Av = b.

The idea in Jacobi, Gauss–Seidel, and SOR is to split A = B + C and solve (B + C)v = b iteratively as v^{n+1} = B^{-1}(b - Cv^n), where n is the iteration index, not the time. The error e^n = v^n - v (= computed - exact) satisfies

e^{n+1} = B^{-1}Ce^n,

or e^{n+1} = Fe^n for short.

Lemma 1. Say e^{n+1} = Fe^n for n = 1, 2, \ldots. Then e^n \to 0 for every e^1 if and only if \rho(F) < 1, where

\rho(F) = \max_j|\lambda_j(F)|

is the spectral radius of F.

Proof. If \rho(F) \ge 1 and (say) |\lambda_1| = \rho(F), then by picking e^1 as the eigenvector of F corresponding to \lambda_1 we have e^{n+1} = \lambda_1^ne^1, which will never converge to 0. Now assume \rho(F) < 1 and let F = TJT^{-1} be the Jordan decomposition of F, where

J = \begin{pmatrix}\lambda_1 & c_1 & & \\ & \lambda_2 & c_2 & \\ & & \ddots & \ddots \\ & & & \lambda_k\end{pmatrix}

and c_i = 0 or 1, depending on whether there is a Jordan block. The choice of ones as superdiagonal elements of Jordan blocks is a matter of convention; we can put any positive number there, since if S = \mathrm{diag}(\epsilon, \epsilon^2, \ldots, \epsilon^k),


then

S^{-1}JS = \begin{pmatrix}\epsilon^{-1} & & \\ & \epsilon^{-2} & \\ & & \ddots\end{pmatrix}\begin{pmatrix}\lambda_1 & c_1 & \\ & \lambda_2 & c_2 \\ & & \ddots\end{pmatrix}\begin{pmatrix}\epsilon & & \\ & \epsilon^2 & \\ & & \ddots\end{pmatrix} = \begin{pmatrix}\lambda_1 & \epsilon c_1 & \\ & \lambda_2 & \epsilon c_2 \\ & & \ddots\end{pmatrix}.

Selecting \epsilon = (1 - \rho(F))/2 guarantees that

\max_j(|\lambda_j| + \epsilon|c_j|) \le 1 - \epsilon < 1,

i.e., \|S^{-1}JS\|_\infty \le 1 - \epsilon. Now F = (TS)(S^{-1}JS)(TS)^{-1} (writing the Jordan decomposition as F = TJT^{-1}) and F^n = (TS)(S^{-1}JS)^n(TS)^{-1}, which implies

\|e^{n+1}\|_\infty = \|(TS)(S^{-1}JS)^n(TS)^{-1}e^1\|_\infty
                 \le \|TS\|_\infty\cdot\|S^{-1}JS\|_\infty^n\cdot\|(TS)^{-1}\|_\infty\cdot\|e^1\|_\infty
                 \le \|TS\|_\infty\cdot(1 - \epsilon)^n\cdot\|(TS)^{-1}\|_\infty\cdot\|e^1\|_\infty \to 0 \text{ as } n \to \infty.
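The lemma's content (spectral radius, not matrix norm, decides convergence) is easy to see on an illustrative Jordan-type example:

```python
import numpy as np

# rho(F) = 0.9 < 1, but the huge off-diagonal entry makes ||F|| large,
# so the error grows (a transient) before the geometric decay takes over
F = np.array([[0.9, 100.0],
              [0.0, 0.9]])

e = np.array([1.0, 1.0])
norms = []
for _ in range(400):
    e = F @ e
    norms.append(np.linalg.norm(e))
```

The norm of the error climbs well above its starting value and only then decays to zero, exactly as the (TS), (TS)^{-1} constants in the proof allow.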

4.3 Jacobi, Gauss–Seidel, and SOR(ω)

All start with an initial guess and iterate:

Jacobi: \quad v_{ml}^{n+1} = \frac14(v_{m+1,l}^n + v_{m-1,l}^n + v_{m,l+1}^n + v_{m,l-1}^n) - \frac{h^2}{4}f_{ml};

Gauss–Seidel: \quad v_{ml}^{n+1} = \frac14(v_{m+1,l}^n + v_{m-1,l}^{n+1} + v_{m,l+1}^n + v_{m,l-1}^{n+1}) - \frac{h^2}{4}f_{ml};

SOR(\omega): \quad v_{ml}^{n+1} = (1 - \omega)v_{ml}^n + \frac\omega4(v_{m+1,l}^n + v_{m-1,l}^{n+1} + v_{m,l+1}^n + v_{m,l-1}^{n+1}) - \frac{\omega h^2}{4}f_{ml}.

In matrix form we write the system (4.6), Av = b, as

(I - L - U)v = b,

where L (U) is strictly lower (upper) triangular. Then

Jacobi: \quad v^{n+1} = (L + U)v^n + b
Gauss–Seidel: \quad v^{n+1} = (I - L)^{-1}(Uv^n + b)
SOR(\omega): \quad v^{n+1} = \Big(\frac1\omega I - L\Big)^{-1}\Big[\Big(\frac1\omega - 1\Big)I + U\Big]v^n + \Big(\frac1\omega I - L\Big)^{-1}b.

The convergence of each method will be determined by looking at

\max_j|\lambda_j(L + U)| < 1,
\max_j|\lambda_j((I - L)^{-1}U)| < 1,
\max_j\Big|\lambda_j\Big(\Big(\frac1\omega I - L\Big)^{-1}\Big(\Big(\frac1\omega - 1\Big)I + U\Big)\Big)\Big| < 1.
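The three iterations can be sketched directly on the grid (the grid size N = 16, the smooth starting error, and the tolerance are illustrative choices):

```python
import numpy as np

def jacobi(v, f, h):
    """One Jacobi sweep of the 5-point scheme; v carries a ring of boundary zeros."""
    new = v.copy()
    new[1:-1, 1:-1] = 0.25 * (v[2:, 1:-1] + v[:-2, 1:-1]
                              + v[1:-1, 2:] + v[1:-1, :-2]) - h ** 2 / 4 * f[1:-1, 1:-1]
    return new

def sor(v, f, h, omega):
    """One SOR(omega) sweep, updating in place; omega = 1 is Gauss-Seidel."""
    n = v.shape[0] - 1
    for m in range(1, n):
        for l in range(1, n):
            gs = 0.25 * (v[m + 1, l] + v[m - 1, l] + v[m, l + 1] + v[m, l - 1]) \
                - h ** 2 / 4 * f[m, l]
            v[m, l] = (1 - omega) * v[m, l] + omega * gs
    return v

def sweeps_until(method, N, tol=1e-6):
    """Sweeps needed to damp the smoothest error mode below tol (f = 0, so v -> 0)."""
    h = 1.0 / N
    x = np.arange(N + 1) * h
    v = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # slowest-converging mode
    f = np.zeros_like(v)
    count = 0
    while np.max(np.abs(v)) > tol and count < 100000:
        v = method(v, f, h)
        count += 1
    return count

N = 16
omega_opt = 2 / (1 + np.sin(np.pi / N))
nj = sweeps_until(jacobi, N)
ng = sweeps_until(lambda v, f, h: sor(v, f, h, 1.0), N)
ns = sweeps_until(lambda v, f, h: sor(v, f, h, omega_opt), N)
```

The sweep counts reflect the convergence analysis below: Gauss–Seidel needs about half as many sweeps as Jacobi, and SOR with a well-chosen \omega needs an order of magnitude fewer still.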


Convergence of Jacobi's method

We must find the eigenvalues of L + U, i.e., solve

v_{m+1,l} + v_{m-1,l} + v_{m,l+1} + v_{m,l-1} = 4\lambda v_{ml}.

A direct verification (an exercise in manipulating trigonometric identities) confirms that

v_{ml} = \sin\frac{am\pi}{N}\cdot\sin\frac{bl\pi}{N}

satisfies the above equation for any a, b = 1, 2, \ldots, N - 1. Thus we have found the (N - 1)^2 orthogonal eigenvectors. The eigenvalues are

\lambda_{ab} = \frac12\Big(\cos\frac{a\pi}{N} + \cos\frac{b\pi}{N}\Big).

The spectral radius is now

\rho(L + U) = \cos\frac\pi N \approx 1 - \frac12\Big(\frac\pi N\Big)^2.

Convergence of the Gauss–Seidel method

We must find the eigenvalues of (I - L)^{-1}U, i.e., solve

v_{m+1,l} + \lambda v_{m-1,l} + v_{m,l+1} + \lambda v_{m,l-1} = 4\lambda v_{ml}.

The trick is to set v_{ml} = \lambda^{(m+l)/2}w_{ml} (we may lose a zero eigenvalue this way, but a zero eigenvalue will not hinder convergence; it will turn out later that there were no zero eigenvalues). Then

w_{m+1,l} + w_{m-1,l} + w_{m,l+1} + w_{m,l-1} = 4\sqrt\lambda\,w_{ml}.

We already solved this problem for Jacobi. Therefore

\rho((I - L)^{-1}U) = \cos^2\frac\pi N \approx 1 - \frac{\pi^2}{N^2}.

Gauss–Seidel converges about twice as fast as Jacobi. To compare convergence, look at the number of steps it will take to decrease the error by a factor of e (think, e.g., e = 10^2 or e = 10^4, etc.). For Jacobi, (1 - \frac{\pi^2}{2N^2})^j = e^{-1} implies j\log(1 - \frac{\pi^2}{2N^2}) = \log e^{-1}. Since \log(1 - \rho) \approx -\rho for small \rho, we have j \approx \frac{2N^2}{\pi^2}\log e, whereas for Gauss–Seidel (1 - \frac{\pi^2}{N^2})^g = e^{-1} implies g \approx \frac{N^2}{\pi^2}\log e, i.e., Gauss–Seidel converges twice as fast as Jacobi.

Convergence of SOR(\omega)

Using the same substitution as in Gauss–Seidel we obtain

w_{m+1,l} + w_{m-1,l} + w_{m,l+1} + w_{m,l-1} = 4\,\frac{\lambda + \omega - 1}{\omega\sqrt\lambda}\,w_{ml},

i.e., \mu = \frac{\lambda + \omega - 1}{\omega\sqrt\lambda} is an eigenvalue of L + U. (Do not confuse \omega with w_{ml}.) Selecting \omega to minimize

\rho\Big(\Big(\frac1\omega I - L\Big)^{-1}\Big(\Big(\frac1\omega - 1\Big)I + U\Big)\Big),

we obtain

\omega_{\text{opt}} = \frac{2}{1 + \sqrt{1 - \mu^2}}, \qquad \mu = \rho(L + U) = \cos\frac\pi N,

and

\rho\Big(\Big(\frac1{\omega_{\text{opt}}}I - L\Big)^{-1}\Big(\Big(\frac1{\omega_{\text{opt}}} - 1\Big)I + U\Big)\Big) = \omega_{\text{opt}} - 1 = \frac{\cos^2\frac\pi N}{\big(1 + \sin\frac\pi N\big)^2} = \frac{1 - \sin\frac\pi N}{1 + \sin\frac\pi N} \approx 1 - \frac{2\pi}{N},

i.e., SOR(\omega) is about N times faster than Gauss–Seidel.
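All three spectral radii can be checked by forming the iteration matrices explicitly (the grid size N = 10 and the dense construction are illustrative choices):

```python
import numpy as np

N = 10
n = N - 1
T = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
I = np.eye(n)
J = (np.kron(T, I) + np.kron(I, T)) / 4      # Jacobi matrix L + U for the 5-point scheme

rho_jacobi = np.max(np.abs(np.linalg.eigvals(J)))

L = np.tril(J, -1)
U = np.triu(J, 1)
Id = np.eye(n * n)
G = np.linalg.solve(Id - L, U)               # Gauss-Seidel matrix (I - L)^{-1} U
rho_gs = np.max(np.abs(np.linalg.eigvals(G)))

omega = 2 / (1 + np.sin(np.pi / N))          # omega_opt for the unit square
M = np.linalg.solve(Id / omega - L, (1 / omega - 1) * Id + U)
rho_sor = np.max(np.abs(np.linalg.eigvals(M)))
```

These reproduce \rho = \cos\frac\pi N for Jacobi, \cos^2\frac\pi N for Gauss–Seidel, and \omega_{\text{opt}} - 1 for SOR (the last only to modest accuracy, since the optimal-\omega eigenvalues are defective).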


Index

L2 norm, 6
ADI methods, 31
  boundary conditions, 33
amplification factor, 7
consistency, 5
convergence, 6
dispersion, 13
dissipation, 11
Douglas–Rachford, 31
Fourier analysis, 6
Gauss–Seidel method, 39
  convergence, 40
group velocity, 15
Jacobi's method, 39
  convergence, 40
Mitchell–Fairweather, 32
Parseval identity, 6
phase speed, 14
scheme
  heat equation
    Crank–Nicolson, 25
    Du Fort–Frankel, 23, 25
    Lax–Friedrichs, 25
    Lax–Wendroff, 25
    Leap-frog, 25
  wave equation
    forward time centered space, 8
    Lax–Wendroff, 8
    Leap-frog, 9
SOR(ω), 39
  convergence, 40
stability, 6
Von Neumann analysis, 7
wave packet, 15
well-posedness, 6