Finite difference method for elliptic problems: I

Praveen. [email protected]

Tata Institute of Fundamental Research
Center for Applicable Mathematics
Bangalore 560065
http://math.tifrbng.res.in/~praveen

January 13, 2013
Contents

1 1-D BVP and FDM
2 2-D BVP and FDM
3 Higher order schemes
4 Iterative matrix solution
5 Discontinuous coefficients, finite volume method
6 Convection dominated problem

General approach of numerical methods:

    Stability + Consistency = Convergence
1-D boundary value problem

Differential equation:

    −u″(x) + c(x) u(x) = f(x),   x ∈ Ω = (0, 1)
    u(0) = 0,  u(1) = 0                                            (1)

Finite difference mesh: Let N ≥ 2 be an integer and let

    mesh size: h = 1/N
    mesh points: x_i = ih,   i = 0, 1, …, N

so that x_0 = 0 and x_N = 1. Define

    Ω_h = {x_i : i = 1, 2, …, N − 1},   Γ_h = {x_0, x_N},   Ω̄_h = Ω_h ∪ Γ_h

Let U_i denote the numerical approximation to u(x_i). We need to find U_1, U_2, …, U_{N−1}.
Finite difference approximation

Let u : [0, 1] → R. By Taylor series,

    u(x_{i±1}) = u(x_i ± h) = u(x_i) ± h u′(x_i) + (h²/2) u″(x_i) ± (h³/6) u‴(x_i) + O(h⁴)

Forward difference for u′ (u ∈ C²[0, 1]):

    D⁺_x u(x_i) := [u(x_{i+1}) − u(x_i)]/h = u′(x_i) + O(h)

Backward difference for u′ (u ∈ C²[0, 1]):

    D⁻_x u(x_i) := [u(x_i) − u(x_{i−1})]/h = u′(x_i) + O(h)

Central difference for u″ (u ∈ C⁴[0, 1]):

    D⁺_x D⁻_x u(x_i) = D⁻_x D⁺_x u(x_i) = [u(x_{i−1}) − 2u(x_i) + u(x_{i+1})]/h² = u″(x_i) + O(h²)

Note: If u ∈ C³[0, 1] then D⁺_x D⁻_x u(x_i) − u″(x_i) = O(h).
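The O(h²) behaviour of the central difference D⁺_x D⁻_x can be checked numerically: halving h should divide the error by roughly 4. A minimal sketch, in which the test function u(x) = sin x and the sample point x₀ = 0.7 are arbitrary illustrative choices:

```python
import math

def d2(u, x, h):
    """Central difference D+D- approximating u''(x)."""
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2

u = math.sin          # u''(x) = -sin(x)
x0 = 0.7
err = []
for h in (0.1, 0.05, 0.025):
    err.append(abs(d2(u, x0, h) - (-math.sin(x0))))

# Second-order accuracy: each error ratio should be close to 4.
ratios = [err[k] / err[k + 1] for k in range(2)]
```

The same experiment with the forward difference would give ratios near 2, consistent with its O(h) accuracy.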
Finite difference method: u″ → D⁺_x D⁻_x U

Find (U_1, U_2, …, U_{N−1}) such that

    (AU)_i := −D⁺_x D⁻_x U_i + c(x_i) U_i = f(x_i),   i = 1, 2, …, N − 1
    U_0 = 0,  U_N = 0                                              (2)

The hope is that U_i ≈ u(x_i), which we have to prove. In matrix notation, the scheme reads compactly as

    AU = F,   A ∈ R^{(N−1)×(N−1)},   U ∈ R^{N−1},   F ∈ R^{N−1}

Does a solution exist, i.e., is the matrix A invertible?
Discrete inner product

Let V, W be two grid functions defined at the mesh points, vanishing at i = 0, N. Define the discrete inner product

    (V, W)_h = Σ_{i=1}^{N−1} h V_i W_i

which resembles the L² inner product

    (v, w) = ∫₀¹ v(x) w(x) dx

Lemma (Summation by parts)
Suppose V is a grid function defined at the mesh points x_i, i = 0, 1, …, N, and let V_0 = V_N = 0. Then

    (−D⁺_x D⁻_x V, V)_h = Σ_{i=1}^{N} h |D⁻_x V_i|²                (3)
Discrete inner product

Proof: We do summation by parts,

    (−D⁺_x D⁻_x V, V)_h = −Σ_{i=1}^{N−1} h (D⁺_x D⁻_x V_i) V_i

      = −Σ_{i=1}^{N−1} [(V_{i+1} − V_i)/h] V_i + Σ_{i=1}^{N−1} [(V_i − V_{i−1})/h] V_i

      = −Σ_{i=2}^{N} [(V_i − V_{i−1})/h] V_{i−1} + Σ_{i=1}^{N−1} [(V_i − V_{i−1})/h] V_i      (shift indices)

      = −Σ_{i=1}^{N} [(V_i − V_{i−1})/h] V_{i−1} + Σ_{i=1}^{N} [(V_i − V_{i−1})/h] V_i      (V_0 = V_N = 0)

      = Σ_{i=1}^{N} [(V_i − V_{i−1})/h] (V_i − V_{i−1}) = Σ_{i=1}^{N} h |D⁻_x V_i|²

This is the discrete analogue of

    −∫₀¹ v″ v dx = ∫₀¹ (v′)² dx     (v(0) = v(1) = 0)
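Identity (3) can be verified numerically for any grid function vanishing at the endpoints. A small sketch; the particular interior values of V are an arbitrary choice:

```python
import math

N = 8
h = 1.0 / N
# Arbitrary grid function with V_0 = V_N = 0
V = [0.0] + [math.sin(3 * i) for i in range(1, N)] + [0.0]

# Left side of (3): (-D+D- V, V)_h, summed over interior points
lhs = sum(-h * (V[i + 1] - 2 * V[i] + V[i - 1]) / h**2 * V[i]
          for i in range(1, N))
# Right side of (3): sum_{i=1}^{N} h |D- V_i|^2
rhs = sum(h * ((V[i] - V[i - 1]) / h) ** 2 for i in range(1, N + 1))
```

Both sides agree to rounding error, as the lemma asserts.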
Existence of discrete solution

Let V be a grid function such that V_0 = V_N = 0 and let c ≥ 0. Then

    (AV, V)_h = (−D⁺_x D⁻_x V + cV, V)_h = (−D⁺_x D⁻_x V, V)_h + (cV, V)_h ≥ Σ_{i=1}^{N} h |D⁻_x V_i|² ≥ 0

If AV = 0 for some V then necessarily

    Σ_{i=1}^{N} h |D⁻_x V_i|² = 0   ⟹   D⁻_x V_i = 0, i = 1, …, N   ⟹   V_0 = V_1 = … = V_N

But since V_0 = V_N = 0, we obtain that V = 0. Hence AV = 0 if and only if V = 0, from which we deduce that A is a non-singular matrix.

Theorem (Existence of FD solution)
Suppose c and f are continuous functions on [0, 1] and c(x) ≥ 0, x ∈ [0, 1]. Then the finite difference scheme (2) has a unique solution U = A⁻¹F.
Discrete norms

Discrete L² norm:

    ‖U‖_h := √(U, U)_h = (Σ_{i=1}^{N−1} h |U_i|²)^{1/2}

Discrete Sobolev norm:

    ‖U‖_{1,h} := (‖U‖²_h + ‖D⁻_x U]|²_h)^{1/2}

where

    ‖V]|²_h := Σ_{i=1}^{N} h |V_i|²     (includes last grid point i = N)

Using this notation, the summation-by-parts identity gives

    (AV, V)_h ≥ ‖D⁻_x V]|²_h     (equality if c ≡ 0)

Using a discrete version of the Poincaré-Friedrichs inequality, we will show that

    (AV, V)_h ≥ c_0 ‖V‖²_{1,h}

where c_0 is a positive constant. This is a discrete coercivity property.
Lemma (Discrete Poincaré-Friedrichs inequality)
Let V be a mesh function with V_0 = V_N = 0. Then there exists c* > 0, independent of V and h, such that

    ‖V‖²_h ≤ c* ‖D⁻_x V]|²_h                                       (4)

for all such V.

Proof: Using the Cauchy-Schwarz inequality, we have

    |V_i|² = |Σ_{j=1}^{i} (D⁻_x V_j) h|² ≤ (Σ_{j=1}^{i} h)(Σ_{j=1}^{i} h |D⁻_x V_j|²) = ih Σ_{j=1}^{i} h |D⁻_x V_j|²

so that

    ‖V‖²_h = Σ_{i=1}^{N−1} h |V_i|² ≤ Σ_{i=1}^{N−1} ih² Σ_{j=1}^{i} h |D⁻_x V_j|² ≤ (Σ_{i=1}^{N−1} i) h² Σ_{j=1}^{N} h |D⁻_x V_j|²

      = [(N − 1)N/2] h² Σ_{j=1}^{N} h |D⁻_x V_j|² ≤ (1/2) ‖D⁻_x V]|²_h,   since (N − 1)N < 1/h²

which proves (4) with c* = 1/2.
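As a numerical illustration of the lemma with c* = 1/2, the following sketch evaluates both sides of (4) for one sample grid function (the particular V is an arbitrary choice; any V with V_0 = V_N = 0 must satisfy the inequality):

```python
import math

N = 16
h = 1.0 / N
# Arbitrary mesh function with V_0 = V_N = 0
V = [0.0] + [math.cos(5 * i) + 0.3 * i for i in range(1, N)] + [0.0]

# ||V||_h^2, interior points only
norm_h_sq = sum(h * V[i] ** 2 for i in range(1, N))
# ||D- V]|_h^2, includes the last grid point i = N
dnorm_sq = sum(h * ((V[i] - V[i - 1]) / h) ** 2 for i in range(1, N + 1))
```

The check confirms norm_h_sq ≤ (1/2) dnorm_sq for this V, consistent with (4).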
Discrete coercivity property:

    (AV, V)_h ≥ ‖D⁻_x V]|²_h ≥ (1/c*) ‖V‖²_h

Combining

    c* (AV, V)_h ≥ ‖V‖²_h   and   (AV, V)_h ≥ ‖D⁻_x V]|²_h

we get

    (AV, V)_h ≥ (1 + c*)⁻¹ (‖V‖²_h + ‖D⁻_x V]|²_h)

With c_0 = (1 + c*)⁻¹ = 2/3, we have the coercivity property

    (AV, V)_h ≥ c_0 ‖V‖²_{1,h}                                     (5)

Theorem (Stability of FD solution)
The scheme (2) is stable in the sense that

    ‖U‖_{1,h} ≤ (1/c_0) ‖f‖_h                                      (6)

Proof: Use coercivity (5) and the Cauchy-Schwarz inequality:

    c_0 ‖U‖²_{1,h} ≤ (AU, U)_h = (f, U)_h ≤ ‖f‖_h ‖U‖_h ≤ ‖f‖_h ‖U‖_{1,h}
Global error and truncation error

The global error between the true solution u and the numerical solution U is

    e_i := u(x_i) − U_i,   i = 0, 1, …, N

Due to the boundary conditions, e_0 = e_N = 0. Then

    (Ae)_i = Au(x_i) − AU_i = Au(x_i) − f(x_i)
           = −D⁺_x D⁻_x u(x_i) + c(x_i) u(x_i) − [−u″(x_i) + c(x_i) u(x_i)]
           = u″(x_i) − D⁺_x D⁻_x u(x_i),   i = 1, 2, …, N − 1

Local truncation error: the error in the central difference approximation,

    τ_i := u″(x_i) − D⁺_x D⁻_x u(x_i)

Thus the error satisfies the equation

    (Ae)_i = τ_i,   i = 1, 2, …, N − 1
    e_0 = e_N = 0                                                  (7)
Theorem (Error estimate)
Let f ∈ C[0, 1], c ∈ C[0, 1] with c(x) ≥ 0, and suppose that the solution of (1) belongs to C⁴[0, 1]. Then

    ‖u − U‖_{1,h} ≤ (h²/8) ‖u⁽⁴⁾‖_∞                                (8)

Proof: Using Taylor series with remainder term, show that

    τ_i = u″(x_i) − D⁺_x D⁻_x u(x_i) = −(h²/12) u⁽⁴⁾(ξ_i),   ξ_i ∈ [x_{i−1}, x_{i+1}]

so that

    |τ_i| ≤ (h²/12) sup_{x_{i−1} ≤ x ≤ x_{i+1}} |u⁽⁴⁾(x)| ≤ (h²/12) sup_{0 ≤ x ≤ 1} |u⁽⁴⁾(x)|     (9)

Applying the stability result (6) to (7) we obtain

    ‖e‖_{1,h} ≤ (1/c_0) ‖τ‖_h = (1/c_0) (Σ_{i=1}^{N−1} h |τ_i|²)^{1/2} ≤ [h²/(12 c_0)] ‖u⁽⁴⁾‖_∞

which yields the error estimate (8) since c_0 = 2/3.
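The second-order convergence can be observed in practice. A minimal sketch of scheme (2): the manufactured solution u(x) = sin(πx) and the coefficient c ≡ 1 are arbitrary illustrative choices (then f = (π² + 1) sin(πx)), and the tridiagonal system is solved by the Thomas elimination described later in these notes:

```python
import math

def solve_bvp(N, c, f):
    """Solve -u'' + c(x) u = f(x) on (0,1), u(0) = u(1) = 0, by scheme (2)."""
    h = 1.0 / N
    x = [i * h for i in range(N + 1)]
    diag = [2.0 / h**2 + c(x[i]) for i in range(1, N)]   # diagonal of A
    off = -1.0 / h**2                                     # sub/super diagonal
    rhs = [f(x[i]) for i in range(1, N)]
    # Forward elimination, then back substitution (Thomas algorithm)
    for i in range(1, N - 1):
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    U = [0.0] * (N + 1)                  # boundary values already zero
    U[N - 1] = rhs[-1] / diag[-1]
    for i in range(N - 2, 0, -1):
        U[i] = (rhs[i - 1] - off * U[i + 1]) / diag[i - 1]
    return x, U

uex = lambda s: math.sin(math.pi * s)
errs = []
for N in (20, 40, 80):
    x, U = solve_bvp(N, lambda s: 1.0,
                     lambda s: (math.pi**2 + 1.0) * math.sin(math.pi * s))
    errs.append(max(abs(U[i] - uex(x[i])) for i in range(N + 1)))
ratios = [errs[k] / errs[k + 1] for k in range(2)]  # expect about 4
```

Successive error ratios close to 4 under mesh halving are the numerical signature of the O(h²) estimate.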
General framework

Linear differential equation:

    Lu = f in Ω
    lu = g on Γ                                                    (10)

Finite difference approximation:

    L_h U = f_h in Ω_h
    l_h U = g_h on Γ_h                                             (11)

Two key steps:

(1) Show that the scheme is stable:

    |||U|||_{Ω̄_h} ≤ C_s (‖f_h‖_{Ω_h} + ‖g_h‖_{Γ_h})

where C_s > 0 is independent of f, g, h.
General framework

(2) Show that the scheme is consistent: define the local truncation errors

    τ_{Ω_h} = L_h u − f_h in Ω_h
    τ_{Γ_h} = l_h u − g_h on Γ_h

For the Dirichlet problem, τ_{Γ_h} = 0. Assuming a sufficiently smooth solution u, show that

    ‖τ_{Ω_h}‖_{Ω_h} + ‖τ_{Γ_h}‖_{Γ_h} ≤ C_τ h^p   as h → 0

where C_τ > 0 is independent of h but might depend on u, and p > 0.

Lax equivalence theorem
Suppose the finite difference scheme (11) is stable and consistent. Then it is a convergent approximation of (10).

Proof: Define the global error e = u − U. Then

    L_h e = L_h u − L_h U = L_h u − f_h = τ_{Ω_h}

and similarly l_h e = τ_{Γ_h}. The error is governed by the equations

    L_h e = τ_{Ω_h} in Ω_h
    l_h e = τ_{Γ_h} on Γ_h

By stability and consistency of the scheme,

    |||u − U|||_{Ω̄_h} = |||e|||_{Ω̄_h} ≤ C_s (‖τ_{Ω_h}‖_{Ω_h} + ‖τ_{Γ_h}‖_{Γ_h}) ≤ C_s C_τ h^p

Convergence of U now follows since

    |||u − U|||_{Ω̄_h} → 0 as h → 0

The quantity p is called the order of accuracy of the scheme. It is desirable to have a large value of p, since we can then get a more accurate solution with a smaller number of grid points.
We will next show some results in the maximum norm. We begin with some definitions.

Definitions
• Non-negative matrix: A matrix A is said to be non-negative if all its entries are non-negative. We indicate this property by writing A ≥ 0.
• Non-negative vector: A vector V is said to be non-negative if all its entries are non-negative. We indicate this property by writing V ≥ 0.
• Monotone matrix: A real, square matrix A is said to be monotone if it is invertible and the matrix A⁻¹ is non-negative.
• M-matrix: A real, square matrix A = (a_ij) is called an M-matrix if
    - a_ii > 0 and a_ij ≤ 0 for i ≠ j
    - A⁻¹ is non-negative

Thus an M-matrix is also a monotone matrix.
Theorem (Characterization of monotone matrices)
A real matrix A of order n is monotone if and only if the inclusion

    {v ∈ R^n : Av ≥ 0} ⊂ {v ∈ R^n : v ≥ 0}

is satisfied.

Proof: (a) If A is monotone and the vector Av is non-negative, then

    v = A⁻¹(Av) ≥ 0

(b) Conversely, suppose the inclusion is satisfied. Then

    Av = 0 ⟹ v ≥ 0,   A(−v) = 0 ⟹ −v ≥ 0,   hence v = 0

Hence A is non-singular and A⁻¹ exists. The j'th column vector of A⁻¹ is

    b_j = A⁻¹ e_j,   e_j = [0, …, 0, 1, 0, …, 0]ᵀ   (1 in the j'th position)

so that

    A b_j = e_j ≥ 0 ⟹ b_j ≥ 0 ⟹ A⁻¹ ≥ 0
Theorem
Suppose that c is non-negative. Then the matrix A in (2) is monotone.

Proof: Let A ∈ R^{(N−1)×(N−1)} be the matrix of the finite difference scheme. By the above characterization, it is enough to show that

    Av ≥ 0 ⟹ v ≥ 0

Given any vector v ∈ R^{N−1} such that Av ≥ 0, let p ∈ {1, …, N − 1} be an integer satisfying

    v_p ≤ v_i for i = 1, 2, …, N − 1   (i.e., v_p = min_{1 ≤ i ≤ N−1} v_i)

We have to show that v_p ≥ 0. Writing c_i = c(x_i) and multiplying (Av)_i ≥ 0 by h²:

(a) If p = 1, then since v_1 − v_2 ≤ 0,

    0 ≤ (2 + c_1 h²) v_1 − v_2 = (1 + c_1 h²) v_1 + (v_1 − v_2) ≤ (1 + c_1 h²) v_1

(b) If 2 ≤ p ≤ N − 2, then

    0 ≤ −v_{p−1} + (2 + c_p h²) v_p − v_{p+1} ≤ c_p h² v_p

(c) If p = N − 1, then

    0 ≤ −v_{N−2} + (2 + c_{N−1} h²) v_{N−1} ≤ (1 + c_{N−1} h²) v_{N−1}

Hence we have

    min_{1 ≤ i ≤ N−1} v_i ≥ 0   if c_i > 0, 2 ≤ i ≤ N − 2

It remains to look at the case where at least one of the c_i, 2 ≤ i ≤ N − 2, is zero. We already know that A is invertible (even if c ≡ 0). Now the matrix A + αI is monotone for every α > 0, which implies that

    (A + αI)⁻¹ ≥ 0

The elements of (A + αI)⁻¹ are continuous functions of α ≥ 0, and hence it follows that A⁻¹ ≥ 0.
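Monotonicity can be observed directly by computing the columns of A⁻¹, i.e., solving A v = e_j for each unit vector e_j; the theorem says every entry must be non-negative. A small sketch (N = 10 and the coefficient c(x) = 1 + x are arbitrary illustrative choices with c ≥ 0):

```python
N = 10
h = 1.0 / N
c = lambda s: 1.0 + s   # any non-negative coefficient

def solve_tridiag(rhs):
    """Solve A v = rhs for the (N-1)x(N-1) scheme matrix by elimination."""
    a = [2.0 / h**2 + c(i * h) for i in range(1, N)]   # diagonal
    b = -1.0 / h**2                                     # off-diagonal value
    d = list(rhs)
    for i in range(1, N - 1):
        m = b / a[i - 1]
        a[i] -= m * b
        d[i] -= m * d[i - 1]
    v = [0.0] * (N - 1)
    v[-1] = d[-1] / a[-1]
    for i in range(N - 3, -1, -1):
        v[i] = (d[i] - b * v[i + 1]) / a[i]
    return v

# Columns of A^{-1}: monotonicity predicts every entry is non-negative.
cols = [solve_tridiag([1.0 if i == j else 0.0 for i in range(N - 1)])
        for j in range(N - 1)]
nonneg = all(v >= -1e-14 for col in cols for v in col)
```

For this M-matrix the inverse is in fact entrywise positive, mirroring the positivity of the Green's function of the continuous problem.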
Matrix norm

For any square matrix M = (m_ij) ∈ R^{n×n} and vector norm ‖·‖ : R^n → R₊,

    ‖M‖ := max_{V ≠ 0} ‖MV‖/‖V‖ = max_{‖V‖=1} ‖MV‖

In particular,

    ‖M‖_∞ = max_{1 ≤ i ≤ n} Σ_{j=1}^{n} |m_ij|

Theorem (Error in max norm)
Suppose that c is non-negative. If the solution u of the BVP (1) satisfies u ∈ C⁴[0, 1], then we have the bound

    max_{1 ≤ i ≤ N−1} |u(x_i) − U_i| = ‖u − U‖_∞ ≤ (h²/96) sup_{0 ≤ x ≤ 1} |u⁽⁴⁾(x)|
Proof: (1) We first show the stability result

    ‖A⁻¹‖_∞ ≤ ‖A₀⁻¹‖_∞ ≤ 1/8                                      (12)

where A₀ is the matrix A with c = 0. Since A and A₀ are monotone,

    A⁻¹ ≥ 0 and A₀⁻¹ ≥ 0

Since c is non-negative,

    A − A₀ = diag(c_i) ≥ 0

so that

    A₀⁻¹ − A⁻¹ = A₀⁻¹ (A − A₀) A⁻¹ ≥ 0

Then using the expression for the matrix norm ‖·‖_∞ we obtain ‖A⁻¹‖_∞ ≤ ‖A₀⁻¹‖_∞. Observe that

    A₀⁻¹ ≥ 0 ⟹ ‖A₀⁻¹‖_∞ = ‖A₀⁻¹ E‖_∞,   E = [1, 1, …, 1]ᵀ ∈ R^{N−1}

But A₀⁻¹E is the finite difference approximation to the solution of

    −v″(x) = 1,   x ∈ Ω = (0, 1)
    v(0) = 0,  v(1) = 0

The solution is

    v(x) = (1/2) x(1 − x)   ⟹   v⁽³⁾(x) = v⁽⁴⁾(x) = 0

Hence the finite difference solution A₀⁻¹E is exact at the nodes, i.e.,

    (A₀⁻¹E)_i = v(x_i),   1 ≤ i ≤ N − 1

so that

    ‖A₀⁻¹E‖_∞ = max_{1 ≤ i ≤ N−1} |v(x_i)| ≤ max_{0 ≤ x ≤ 1} |v(x)| = 1/8

(2) We have already seen the error equation (7) for e_i = u(x_i) − U_i:

    (Ae)_i = τ_i,   i = 1, 2, …, N − 1
    e_0 = e_N = 0
Hence using (12) and (9),

    ‖e‖_∞ = ‖A⁻¹τ‖_∞ ≤ ‖A⁻¹‖_∞ ‖τ‖_∞ ≤ (1/8)(h²/12) max_{0 ≤ x ≤ 1} |u⁽⁴⁾(x)|

which proves the desired result.

Maximum principle (Differential equation)
Suppose that c(x) ≥ 0 and

    −u″(x) + c(x) u(x) ≥ 0,   0 ≤ x ≤ 1
    u(0) ≥ 0,  u(1) ≥ 0

Then u(x) ≥ 0.

Maximum principle (FDM)
Suppose A is monotone. Then

    AU ≥ 0,  U_0 ≥ 0,  U_N ≥ 0   ⟹   U ≥ 0
Steady diffusion-convection-reaction

    Au := −a u″ + b u′ + c u = f in Ω = (0, 1)                     (13)
    u(0) = u_0,  u(1) = u_1

where a(x), b(x), c(x) are smooth functions and a > 0, c ≥ 0 in Ω.

Finite difference approximation of the PDE:

    (AU)_j := −a_j [U_{j−1} − 2U_j + U_{j+1}]/h² + b_j [U_{j+1} − U_{j−1}]/(2h) + c_j U_j = f_j     (14)
    U_0 = u_0,  U_N = u_1

or, for j = 1, 2, …, N − 1,

    −(a_j + (h/2) b_j) U_{j−1} + (2a_j + h² c_j) U_j − (a_j − (h/2) b_j) U_{j+1} = h² f_j
Steady diffusion-convection-reaction

Discrete maximum principle
Assume that h is so small that a_j ± (h/2) b_j ≥ 0 and that U satisfies AU_j ≤ 0 (respectively, AU_j ≥ 0).

1 If c = 0, then

    max_{0 ≤ j ≤ N} U_j = max{U_0, U_N}   (respectively, min_{0 ≤ j ≤ N} U_j = min{U_0, U_N})

2 If c ≥ 0, then

    max_{0 ≤ j ≤ N} U_j ≤ max{U_0, U_N, 0}   (respectively, min_{0 ≤ j ≤ N} U_j ≥ min{U_0, U_N, 0})
Steady diffusion-convection-reaction

Proof: 1 Since c = 0 and AU_j ≤ 0, solving (14) for U_j gives

    U_j = [(a_j + (h/2) b_j)/(2a_j)] U_{j−1} + [(a_j − (h/2) b_j)/(2a_j)] U_{j+1} + [h²/(2a_j)] AU_j
        ≤ [(a_j + (h/2) b_j)/(2a_j)] U_{j−1} + [(a_j − (h/2) b_j)/(2a_j)] U_{j+1}
        ≤ max(U_{j−1}, U_{j+1})   for 1 ≤ j ≤ N − 1

since the two weights are non-negative and sum to one (a convex combination). Assume that U has an interior maximum at U_j, i.e.,

    U_j = max_{0 ≤ k ≤ N} U_k

But this contradicts the above inequality unless U_j = U_{j−1} = U_{j+1}, which means that U_j = constant = U_0 = U_N. Hence the maximum of {U_j}_{j=0}^{N} must occur on the boundary.
Steady diffusion-convection-reaction

2 Case c ≥ 0 and AU_j ≤ 0:

  1 If U_j ≤ 0 for all j, then we are done.
  2 Otherwise assume that max_j U_j = U_k > 0 for some 1 ≤ k ≤ N − 1. Let (l, r) be the largest index interval containing k such that U_j > 0 for j ∈ (l, r).
  3 We now have ÃU_j := AU_j − c_j U_j ≤ 0 in (x_l, x_r). Applying the result of Part 1 to the operator Ã, we have U_k = max{U_l, U_r}.
  4 But then x_l and x_r cannot both be interior points of Ω, for then either U_l or U_r would be positive, and the interval (x_l, x_r) would not be the largest subinterval with U_j > 0. This implies that U_k = max{U_0, U_N}.

Remark: The conditions in the theorem ensure that A is an M-matrix.
Remark: The key concept used in the proof was convexity; an M-matrix gives the scheme a convexity property.
Remark: For a proof of the maximum principle in the continuous case, see e.g. Larsson and Thomée.
Mesh Péclet number

• Note that a_j is the viscosity coefficient and b_j is the convection speed. The condition a_j ± (h/2) b_j ≥ 0 requires that

    P_j := h|b_j|/a_j ≤ 2

  Here P_j is called the mesh Péclet number. If b_j ≡ 0, i.e., there is no convection, then the condition is trivially satisfied for all h.

• When convection is large, we have to choose a small mesh size h, which increases the computational cost and hence is not desirable.

• For non-linear problems, the speed b will depend on the solution, which is itself unknown.

• These problems arise because we chose a central difference approximation for the term bu′, which has a hyperbolic character.
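The failure of the central scheme for P_j > 2 is easy to provoke. A sketch for the model boundary-layer problem −a u″ + b u′ = 0, u(0) = 0, u(1) = 1 (the values a = 0.01 and b = 1 are arbitrary illustrative choices): on a coarse mesh with P = 5 > 2 the discrete solution oscillates and dips well below 0, while on a fine mesh with P = 0.5 < 2 it stays monotone in [0, 1]:

```python
a_visc, b_conv = 0.01, 1.0   # illustrative: strong convection, small viscosity

def central_solve(N):
    """Central-difference scheme for -a u'' + b u' = 0, u(0)=0, u(1)=1."""
    h = 1.0 / N
    lo = [-(a_visc / h**2 + b_conv / (2 * h))] * (N - 1)   # coeff of U_{j-1}
    di = [2.0 * a_visc / h**2] * (N - 1)
    up = [-(a_visc / h**2 - b_conv / (2 * h))] * (N - 1)   # coeff of U_{j+1}
    rhs = [0.0] * (N - 1)
    rhs[-1] -= up[-1] * 1.0   # move boundary value u(1) = 1 to the RHS
    # Thomas algorithm
    for j in range(1, N - 1):
        m = lo[j] / di[j - 1]
        di[j] -= m * up[j - 1]
        rhs[j] -= m * rhs[j - 1]
    U = [0.0] * (N - 1)
    U[-1] = rhs[-1] / di[-1]
    for j in range(N - 3, -1, -1):
        U[j] = (rhs[j] - up[j] * U[j + 1]) / di[j]
    return U

U_coarse = central_solve(20)   # mesh Peclet = h|b|/a = 5 > 2: oscillates
U_fine = central_solve(200)    # mesh Peclet = 0.5 < 2: monotone
```

The exact solution (e^{bx/a} − 1)/(e^{b/a} − 1) is monotone between 0 and 1, so any negative value in the discrete solution is a pure scheme artifact.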
Numerical solution

• Consider the boundary value problem

    −u″(x) = f(x),   x ∈ (a, b)

  with boundary conditions

    u(a) = u_a,  u(b) = u_b

• At i = 1:

    (2/h²) U_1 − (1/h²) U_2 = f_1 + (1/h²) u_a

• For i = 2, …, N − 2:

    −(1/h²) U_{i−1} + (2/h²) U_i − (1/h²) U_{i+1} = f_i

• At i = N − 1:

    −(1/h²) U_{N−2} + (2/h²) U_{N−1} = f_{N−1} + (1/h²) u_b
FDM for −u″ = f

For N = 11, putting all the equations together gives the 10 × 10 tridiagonal system

    (1/h²) tridiag(−1, 2, −1) [U_1, U_2, …, U_10]ᵀ = [f_1 + u_a/h², f_2, f_3, …, f_9, f_10 + u_b/h²]ᵀ

or

    AU = b

We have N − 1 equations for the N − 1 unknowns [U_1, U_2, …, U_{N−1}].
FDM for ODE

• We take

    f(x) = sin(x),   (a, b) = (0, 2π),   u(a) = u(b) = 0

• Efficient solution using the Thomas tri-diagonal algorithm: bvp 1d.m

[Figure: numerical and exact solutions on 0 ≤ x ≤ 2π, with u(x) ranging over [−1, 1]; the two curves are indistinguishable.]
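The bvp 1d.m experiment is in MATLAB; a rough Python equivalent of the same setup is sketched below. The exact solution here is u(x) = sin x, so the maximum nodal error can be measured directly:

```python
import math

aL, bR, N = 0.0, 2.0 * math.pi, 100
h = (bR - aL) / N
x = [aL + i * h for i in range(N + 1)]

# Tridiagonal system for -u'' = sin(x), u(0) = u(2*pi) = 0
diag = [2.0 / h**2] * (N - 1)
off = -1.0 / h**2
rhs = [math.sin(x[i]) for i in range(1, N)]   # boundary terms vanish here

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, N - 1):
    m = off / diag[i - 1]
    diag[i] -= m * off
    rhs[i] -= m * rhs[i - 1]
U = [0.0] * (N + 1)
U[N - 1] = rhs[-1] / diag[-1]
for i in range(N - 2, 0, -1):
    U[i] = (rhs[i - 1] - off * U[i + 1]) / diag[i - 1]

err = max(abs(U[i] - math.sin(x[i])) for i in range(N + 1))
```

With N = 100 points the maximum error is of order 10⁻³, consistent with the h² estimate, which is why the two curves in the figure lie on top of each other.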
Thomas tri-diagonal algorithm

General n × n tri-diagonal matrix A:

    A = [ a_1  c_1                 ]
        [ b_2  a_2   ⋱             ]
        [       ⋱    ⋱    c_{n−1}  ]
        [            b_n  a_n      ]

Make an LU decomposition A = LU with L lower triangular and U upper triangular. For a tridiagonal A the factors are bidiagonal:

    L = [ 1                ]      U = [ α_1  c_1              ]
        [ β_2  1           ]          [      α_2   ⋱          ]
        [      ⋱    ⋱      ]          [            ⋱  c_{n−1} ]
        [          β_n  1  ]          [               α_n     ]

This can be seen as a particular kind of LU factorization without pivoting.
Thomas tri-diagonal algorithm

The α_i and β_i are obtained from

    α_1 = a_1;   β_i = b_i/α_{i−1},   α_i = a_i − β_i c_{i−1},   i = 2, 3, …, n

We want to solve

    Ax = b ⟹ LUx = b

We do this in two steps: Ly = b and Ux = y.

1 Solve Ly = b by forward substitution:

    y_1 = b_1
    β_2 y_1 + y_2 = b_2
    β_3 y_2 + y_3 = b_3, etc.

2 Solve Ux = y by backward substitution:

    α_n x_n = y_n
    α_{n−1} x_{n−1} + c_{n−1} x_n = y_{n−1}
    α_{n−2} x_{n−2} + c_{n−2} x_{n−1} = y_{n−2}, etc.

There is no need to store the full matrix A; store only the three diagonals. The solution is obtained in O(n) operations.
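The algorithm above translates almost line for line into code. A minimal sketch following the slide's α, β, y notation, with a small 4 × 4 system as an arbitrary usage example:

```python
def thomas(a, b, c, rhs):
    """Thomas algorithm for a tridiagonal system.
    a: diagonal (length n), b: sub-diagonal (length n-1),
    c: super-diagonal (length n-1), rhs: right-hand side (length n)."""
    n = len(a)
    # Compute the LU coefficients alpha_i, beta_i
    alpha = [0.0] * n
    beta = [0.0] * (n - 1)
    alpha[0] = a[0]
    for i in range(1, n):
        beta[i - 1] = b[i - 1] / alpha[i - 1]
        alpha[i] = a[i] - beta[i - 1] * c[i - 1]
    # Forward substitution: L y = rhs
    y = [0.0] * n
    y[0] = rhs[0]
    for i in range(1, n):
        y[i] = rhs[i] - beta[i - 1] * y[i - 1]
    # Backward substitution: U x = y
    x = [0.0] * n
    x[-1] = y[-1] / alpha[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - c[i] * x[i + 1]) / alpha[i]
    return x

# Usage example: a small diagonally dominant tridiagonal system
a = [4.0, 4.0, 4.0, 4.0]
b = [-1.0, -1.0, -1.0]
c = [-1.0, -1.0, -1.0]
rhs = [1.0, 2.0, 2.0, 1.0]
x = thomas(a, b, c, rhs)
```

Only the three diagonals are stored, and the work is O(n), as stated on the slide. Note that this factorization has no pivoting, so it should only be used when it is known to be safe, e.g. for diagonally dominant matrices such as those arising from scheme (2).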