
COURSE 9

4.3. Iterative methods for solving linear systems

Because of round-off errors, direct methods become less efficient than

iterative methods for large systems (>100 000 variables).

An iterative scheme for linear systems consists of converting the system

Ax = b (1)

to the form

x = b′ −Bx.

After an initial guess for x(0), the sequence of approximations of the

solution x(0), x(1), ..., x(k), ...is generated by computing

x(k) = b′ −Bx(k−1), for k = 1,2,3, ....
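As a rough illustration of this general scheme (an added sketch, not part of the course text), the loop below assumes the system has already been brought to the form x = b′ − Bx and that NumPy is available; the name fixed_point_iteration, the infinity-norm stopping test and the iteration cap are illustrative choices.

```python
import numpy as np

def fixed_point_iteration(B, b0, x0, eps=1e-3, max_iter=100):
    """Iterate x^(k) = b0 - B x^(k-1) until two successive iterates differ by less than eps."""
    B = np.asarray(B, dtype=float)
    b0 = np.asarray(b0, dtype=float)
    x = np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = b0 - B @ x
        if np.linalg.norm(x_new - x, ord=np.inf) < eps:
            return x_new, k
        x = x_new
    return x, max_iter
```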


4.3.1. Jacobi iterative method

Consider the n × n linear system

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
...
a_{n1} x_1 + a_{n2} x_2 + ... + a_{nn} x_n = b_n,

where we assume that the diagonal terms a_{11}, a_{22}, ..., a_{nn} are all nonzero.

We begin our iterative scheme by solving each equation for one of the variables:

x_1 = u_{12} x_2 + ... + u_{1n} x_n + c_1
x_2 = u_{21} x_1 + ... + u_{2n} x_n + c_2
...
x_n = u_{n1} x_1 + ... + u_{n,n−1} x_{n−1} + c_n,

where u_{ij} = −a_{ij}/a_{ii}, c_i = b_i/a_{ii}, i = 1, ..., n.


Let x^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)}) be an initial approximation of the solution. The (k+1)-th approximation is:

x_1^{(k+1)} = u_{12} x_2^{(k)} + ... + u_{1n} x_n^{(k)} + c_1
x_2^{(k+1)} = u_{21} x_1^{(k)} + u_{23} x_3^{(k)} + ... + u_{2n} x_n^{(k)} + c_2
...
x_n^{(k+1)} = u_{n1} x_1^{(k)} + ... + u_{n,n−1} x_{n−1}^{(k)} + c_n,

for k = 0, 1, 2, ...

An algorithmic form:

x_i^{(k)} = \frac{b_i − \sum_{j=1, j \neq i}^{n} a_{ij} x_j^{(k−1)}}{a_{ii}},   i = 1, 2, ..., n, for k ≥ 1.

The iterative process is terminated when a convergence criterion is satisfied.

Stopping criteria: ‖x^{(k)} − x^{(k−1)}‖ < ε or ‖x^{(k)} − x^{(k−1)}‖ / ‖x^{(k)}‖ < ε, with ε > 0 a prescribed tolerance.
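The componentwise formula above translates directly into code. The following is a minimal NumPy sketch, not the course's own implementation; the function name jacobi, the infinity-norm stopping test ‖x^{(k)} − x^{(k−1)}‖ < ε and the iteration cap are assumptions made here.

```python
import numpy as np

def jacobi(A, b, x0, eps=1e-3, max_iter=100):
    """Jacobi iteration: every component of x^(k) is computed from x^(k-1) only."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        x_new = np.empty_like(x)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x, ord=np.inf) < eps:
            return x_new, k
        x = x_new
    return x, max_iter
```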


Example 1 Solve the following system using the Jacobi iterative method. Use ε = 10^{−3} and x^{(0)} = (0, 0, 0, 0) as the starting vector.

7x_1 − 2x_2 + x_3 = 17
x_1 − 9x_2 + 3x_3 − x_4 = 13
2x_1 + 10x_3 + x_4 = 15
x_1 − x_2 + x_3 + 6x_4 = 10.

These equations can be rearranged to give

x_1 = (17 + 2x_2 − x_3)/7
x_2 = (−13 + x_1 + 3x_3 − x_4)/9
x_3 = (15 − 2x_1 − x_4)/10
x_4 = (10 − x_1 + x_2 − x_3)/6

and, for example,

x_1^{(1)} = (17 + 2x_2^{(0)} − x_3^{(0)})/7
x_2^{(1)} = (−13 + x_1^{(0)} + 3x_3^{(0)} − x_4^{(0)})/9
x_3^{(1)} = (15 − 2x_1^{(0)} − x_4^{(0)})/10
x_4^{(1)} = (10 − x_1^{(0)} + x_2^{(0)} − x_3^{(0)})/6.


Substitute x^{(0)} = (0, 0, 0, 0) into the right-hand side of each of these equations to get

x_1^{(1)} = (17 + 2 · 0 − 0)/7 = 2.428 571 429
x_2^{(1)} = (−13 + 0 + 3 · 0 − 0)/9 = −1.444 444 444
x_3^{(1)} = (15 − 2 · 0 − 0)/10 = 1.5
x_4^{(1)} = (10 − 0 + 0 − 0)/6 = 1.666 666 667

and so x^{(1)} = (2.428 571 429, −1.444 444 444, 1.5, 1.666 666 667). The Jacobi iterative process is:

x_1^{(k+1)} = (17 + 2x_2^{(k)} − x_3^{(k)})/7
x_2^{(k+1)} = (−13 + x_1^{(k)} + 3x_3^{(k)} − x_4^{(k)})/9
x_3^{(k+1)} = (15 − 2x_1^{(k)} − x_4^{(k)})/10
x_4^{(k+1)} = (10 − x_1^{(k)} + x_2^{(k)} − x_3^{(k)})/6,   k ≥ 1.

We obtain a sequence that converges to the exact solution (2, −1, 1, 1); the stopping criterion is satisfied at

x^{(9)} = (2.000127203, −1.000100162, 1.000118096, 1.000162172).
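For instance, Example 1 can be reproduced with the hypothetical jacobi sketch given after the algorithmic form above; the iteration count and the digits should come out close to the x^{(9)} reported here.

```python
A = [[7, -2, 1, 0],
     [1, -9, 3, -1],
     [2, 0, 10, 1],
     [1, -1, 1, 6]]
b = [17, 13, 15, 10]

x, its = jacobi(A, b, x0=[0, 0, 0, 0], eps=1e-3)
print(its, x)   # about 9 iterations, x close to (2, -1, 1, 1)
```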


4.3.2. Gauss-Seidel iterative method

The method is almost the same as the Jacobi method, except that each x-value is improved using the most recent approximations of the other variables.

For an n × n system, the (k+1)-th approximation is:

x_1^{(k+1)} = u_{12} x_2^{(k)} + ... + u_{1n} x_n^{(k)} + c_1
x_2^{(k+1)} = u_{21} x_1^{(k+1)} + u_{23} x_3^{(k)} + ... + u_{2n} x_n^{(k)} + c_2
...
x_n^{(k+1)} = u_{n1} x_1^{(k+1)} + ... + u_{n,n−1} x_{n−1}^{(k+1)} + c_n,

with k = 0, 1, 2, ...; u_{ij} = −a_{ij}/a_{ii}, c_i = b_i/a_{ii}, i = 1, ..., n (as in the Jacobi method).

Algorithmic form:

x_i^{(k)} = \frac{b_i − \sum_{j=1}^{i−1} a_{ij} x_j^{(k)} − \sum_{j=i+1}^{n} a_{ij} x_j^{(k−1)}}{a_{ii}},


for each i = 1, 2, ..., n, and for k ≥ 1.

Stopping criteria: ‖x^{(k)} − x^{(k−1)}‖ < ε or ‖x^{(k)} − x^{(k−1)}‖ / ‖x^{(k)}‖ < ε, with ε > 0 a prescribed tolerance.

Remark 2 Because the new values can be stored immediately in the locations that held the old values, the storage requirement for x with the Gauss-Seidel method is half that of the Jacobi method, and the convergence is faster.
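A minimal sketch of such a sweep, updating x in place exactly as Remark 2 describes, is given below; the function name gauss_seidel and the stopping test are illustrative assumptions. Applied to the system of Example 3 below, it should reach the tolerance in roughly half as many iterations as the Jacobi sketch (compare x^{(5)} there with x^{(9)} in Example 1).

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-3, max_iter=100):
    """Gauss-Seidel: components are overwritten in place, so each update
    already uses the newest available values of the other variables."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()   # kept only for the stopping test
        for i in range(n):
            s_new = sum(A[i, j] * x[j] for j in range(i))          # already-updated components
            s_old = sum(A[i, j] * x[j] for j in range(i + 1, n))   # components from the previous sweep
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < eps:
            return x, k
    return x, max_iter
```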

Example 3 Solve the following system using the Gauss-Seidel iterative

method. Use ε = 10^{−3} and x^{(0)} = (0, 0, 0, 0) as the starting vector.

7x_1 − 2x_2 + x_3 = 17
x_1 − 9x_2 + 3x_3 − x_4 = 13
2x_1 + 10x_3 + x_4 = 15
x_1 − x_2 + x_3 + 6x_4 = 10


We have

x_1 = (17 + 2x_2 − x_3)/7
x_2 = (−13 + x_1 + 3x_3 − x_4)/9
x_3 = (15 − 2x_1 − x_4)/10
x_4 = (10 − x_1 + x_2 − x_3)/6,

and, for example,

x_1^{(1)} = (17 + 2x_2^{(0)} − x_3^{(0)})/7
x_2^{(1)} = (−13 + x_1^{(1)} + 3x_3^{(0)} − x_4^{(0)})/9
x_3^{(1)} = (15 − 2x_1^{(1)} − x_4^{(0)})/10
x_4^{(1)} = (10 − x_1^{(1)} + x_2^{(1)} − x_3^{(1)})/6,


which provide the following Gauss-Seidel iterative process:

x_1^{(k+1)} = (17 + 2x_2^{(k)} − x_3^{(k)})/7
x_2^{(k+1)} = (−13 + x_1^{(k+1)} + 3x_3^{(k)} − x_4^{(k)})/9
x_3^{(k+1)} = (15 − 2x_1^{(k+1)} − x_4^{(k)})/10
x_4^{(k+1)} = (10 − x_1^{(k+1)} + x_2^{(k+1)} − x_3^{(k+1)})/6,   for k ≥ 1.

Substitute x(0) = (0,0,0,0) into the right-hand side of each of these

equations to get

x_1^{(1)} = (17 + 2 · 0 − 0)/7 = 2.428 571 429
x_2^{(1)} = (−13 + 2.428 571 429 + 3 · 0 − 0)/9 = −1.1746031746
x_3^{(1)} = (15 − 2 · 2.428 571 429 − 0)/10 = 1.0142857143
x_4^{(1)} = (10 − 2.428 571 429 − 1.1746031746 − 1.0142857143)/6 = 0.8970899472


and so

x^{(1)} = (2.428571429, −1.1746031746, 1.0142857143, 0.8970899472).

A similar procedure generates a sequence that converges to the solution; the stopping criterion is satisfied at

x^{(5)} = (2.000025, −1.000130, 1.000020, 0.999971).


4.3.3. Relaxation method

When both methods converge, the Gauss-Seidel method is roughly twice as fast as the Jacobi method. The convergence can be improved further using the relaxation method (the SOR method, SOR = Successive Over-Relaxation).

Algorithmic form of the method:

x_i^{(k)} = \frac{ω}{a_{ii}} ( b_i − \sum_{j=1}^{i−1} a_{ij} x_j^{(k)} − \sum_{j=i+1}^{n} a_{ij} x_j^{(k−1)} ) + (1 − ω) x_i^{(k−1)},

for each i = 1, 2, ..., n, and for k ≥ 1.

For 0 < ω < 1 the procedure is called the under-relaxation method; it can be used to obtain convergence for some systems that are not convergent by the Gauss-Seidel method.

For ω > 1 the procedure is called the over-relaxation method; it can be used to accelerate the convergence for systems that are convergent by the Gauss-Seidel method.


By Kahan’s Theorem it follows that the method can converge only for 0 < ω < 2.

Remark 4 For ω = 1, the relaxation method coincides with the Gauss-Seidel method.

Example 5 Solve the following system using the relaxation iterative method. Use ε = 10^{−3}, x^{(0)} = (1, 1, 1) and ω = 1.25:

4x_1 + 3x_2 = 24
3x_1 + 4x_2 − x_3 = 30
−x_2 + 4x_3 = −24

We have

x_1^{(k)} = 7.5 − 0.9375 x_2^{(k−1)} − 0.25 x_1^{(k−1)}
x_2^{(k)} = 9.375 − 0.9375 x_1^{(k)} + 0.3125 x_3^{(k−1)} − 0.25 x_2^{(k−1)}
x_3^{(k)} = −7.5 + 0.3125 x_2^{(k)} − 0.25 x_3^{(k−1)},   for k ≥ 1.

The solution is (3,4,−5).
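As an added illustration (not part of the lecture text), a relaxation sweep can be coded by blending a Gauss-Seidel update with the previous value and applied to Example 5; the function name sor, the stopping test and the iteration cap are assumptions made here.

```python
import numpy as np

def sor(A, b, x0, omega, eps=1e-3, max_iter=100):
    """Relaxation (SOR) sweep: a Gauss-Seidel update blended with the previous value."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s_new = sum(A[i, j] * x[j] for j in range(i))
            s_old = sum(A[i, j] * x[j] for j in range(i + 1, n))
            x[i] = omega * (b[i] - s_new - s_old) / A[i, i] + (1 - omega) * x_old[i]
        if np.linalg.norm(x - x_old, ord=np.inf) < eps:
            return x, k
    return x, max_iter

A = [[4, 3, 0], [3, 4, -1], [0, -1, 4]]
b = [24, 30, -24]
x, k = sor(A, b, x0=[1, 1, 1], omega=1.25, eps=1e-3)
print(x, k)   # approaches the solution (3, 4, -5)
```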


4.3.4. The matrix formulations of the iterative methods

Split the matrix A into the sum

A = D + L+ U,

where D is the diagonal part of A, L the strictly lower triangular part of A, and U the strictly upper triangular part of A. That is,

D = ( a_{11}   0     ...    0
        0    a_{22}  ...    0
       ...    ...    ...   ...
        0     ...     0   a_{nn} ),

L = (   0       0    ...        0
      a_{21}    0    ...        0
       ...     ...   ...       ...
      a_{n1}   ...  a_{n,n−1}   0 ),

U = ( 0   a_{12}  ...    a_{1n}
      0     0     ...    a_{2n}
      ...  ...    ...   a_{n−1,n}
      0     0     ...      0 ).

The system Ax = b can be written as

(D + L+ U)x = b.


The Jacobi method in matrix form is given by

D x^{(k)} = −(L + U) x^{(k−1)} + b,

the Gauss-Seidel method in matrix form is given by

(D + L) x^{(k)} = −U x^{(k−1)} + b,

and the relaxation method in matrix form is given by

(D + ωL) x^{(k)} = ((1 − ω)D − ωU) x^{(k−1)} + ωb.
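A short sketch of this splitting and of one step of each matrix iteration, using the system of Example 1 with NumPy (the spectral-radius print at the end is an extra diagnostic added here: such a stationary iteration converges for every starting vector exactly when the spectral radius of its iteration matrix is below 1).

```python
import numpy as np

A = np.array([[7.0, -2.0, 1.0, 0.0],
              [1.0, -9.0, 3.0, -1.0],
              [2.0, 0.0, 10.0, 1.0],
              [1.0, -1.0, 1.0, 6.0]])
b = np.array([17.0, 13.0, 15.0, 10.0])

# Splitting A = D + L + U
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)     # strictly lower triangular part
U = np.triu(A, k=1)      # strictly upper triangular part

x = np.zeros(4)                                   # plays the role of x^(k-1)
x_jacobi = np.linalg.solve(D, -(L + U) @ x + b)   # D x^(k) = -(L+U) x^(k-1) + b
x_gs = np.linalg.solve(D + L, -U @ x + b)         # (D+L) x^(k) = -U x^(k-1) + b
omega = 1.1
x_sor = np.linalg.solve(D + omega * L,
                        ((1 - omega) * D - omega * U) @ x + omega * b)

# Jacobi iteration matrix B_J = -D^{-1}(L+U); its spectral radius governs convergence.
B_J = -np.linalg.solve(D, L + U)
print(max(abs(np.linalg.eigvals(B_J))))
```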

Convergence of the iterative methods

Remark 6 The convergence (or divergence) of the iterative process in the Jacobi and Gauss-Seidel methods does not depend on the initial guess, but only on the character of the matrices themselves. However, in case of convergence, a good first guess leads to a relatively small number of iterations.


A sufficient condition for convergence:

Theorem 7 (Convergence Theorem) If A is strictly diagonally dom-inant, then the Jacobi, Gauss-Seidel and relaxation methods convergefor any choice of the starting vector x(0).

Example 8 Consider the system of equations

( 3   1   1 ) (x_1)   (4)
(−2   4   0 ) (x_2) = (1)
(−1   2  −6 ) (x_3)   (2).

The coefficient matrix of the system is strictly diagonally dominant since

|a_{11}| = |3| = 3 > |1| + |1| = 2
|a_{22}| = |4| = 4 > |−2| + |0| = 2
|a_{33}| = |−6| = 6 > |−1| + |2| = 3.

Hence, if the Jacobi or Gauss-Seidel method is used to solve the system of equations, it will converge for any choice of the starting vector x^{(0)}.
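The row-by-row check in Example 8 is easy to automate; the helper below is an illustrative sketch, and its name is an assumption made here.

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

A = [[3, 1, 1], [-2, 4, 0], [-1, 2, -6]]
print(is_strictly_diagonally_dominant(A))   # True, as checked row by row above
```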


Example 9 Consider the linear system

( 4  1 ) (x)   (3)
( 2  5 ) (y) = (1).

Perform two iterations of the Jacobi and of the Gauss-Seidel method for this system, beginning with the vector x = [3, 11]. (The solutions of the system are 7/9 and −1/9.)
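A short sketch of the two requested iterations, written out explicitly for this 2 × 2 system; the starting vector [3, 11] is the one given in the example, everything else here is an illustrative assumption.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([3.0, 1.0])
x0 = np.array([3.0, 11.0])     # starting vector given in the example

# Two Jacobi iterations: both components use only the previous iterate.
x = x0.copy()
for _ in range(2):
    x = np.array([(b[0] - A[0, 1] * x[1]) / A[0, 0],
                  (b[1] - A[1, 0] * x[0]) / A[1, 1]])
print("Jacobi x^(2):", x)

# Two Gauss-Seidel iterations: the second component uses the freshly updated first one.
x = x0.copy()
for _ in range(2):
    x[0] = (b[0] - A[0, 1] * x[1]) / A[0, 0]
    x[1] = (b[1] - A[1, 0] * x[0]) / A[1, 1]
print("Gauss-Seidel x^(2):", x)
# Both sequences head towards the exact solution (7/9, -1/9).
```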


5. Numerical methods for solving nonlinear equations in R

Let f : Ω→ R, Ω ⊂ R. Consider the equation

f (x) = 0, x ∈ Ω. (2)

We attach a mapping F : D → D, D ⊂ Ω^n, to this equation.

Let (x0, ..., xn−1) ∈ D. Using F and the numbers x0, x1, ..., xn−1 we

construct iteratively the sequence

x0, x1, ..., xn−1, xn, ... (3)

with

x_i = F(x_{i−n}, ..., x_{i−1}),   i = n, n+1, ...   (4)

The problem consists in choosing F and x_0, ..., x_{n−1} ∈ D such that the sequence (3) converges to the solution of equation (2).


Definition 10 The procedure of approximating the solution of equation (2) by the elements of the sequence (3), computed as in (4), is called an F-method.

The numbers x_0, x_1, ..., x_{n−1} are called the starting points, and the k-th element of the sequence (3) is called the approximation of order k of the solution.

If the set of starting points has only one element, then the F-method is a one-step method; if it has more than one element, then the F-method is a multistep method.

Definition 11 If the sequence (3) converges to the solution of equation (2), then the F-method is convergent; otherwise it is divergent.
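As a small illustration of a one-step F-method (an added sketch, not from the lecture): for f(x) = x − cos x one may attach F(x) = cos x and iterate x_{i+1} = F(x_i); the function name f_method and the stopping rule are assumptions made here.

```python
import math

def f_method(F, x0, eps=1e-10, max_iter=100):
    """One-step F-method: x_{i+1} = F(x_i), stopped when successive iterates agree to eps."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = F(x)
        if abs(x_new - x) < eps:
            return x_new, i
        x = x_new
    return x, max_iter

# f(x) = x - cos x = 0 has a solution near 0.739; with F(x) = cos x we have
# |F'(x)| = |sin x| < 1 there, so the one-step method converges.
root, steps = f_method(math.cos, 1.0)
print(root, steps)
```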


Definition 12 Let α ∈ Ω be a solution of equation (2) and let x_0, x_1, ..., x_{n−1}, x_n, ... be the sequence generated by a given F-method. The number p having the property

lim_{x_i → α} \frac{α − F(x_{i−n+1}, ..., x_i)}{(α − x_i)^p} = C ≠ 0,   C = constant,

is called the order of the F-method.

We construct some classes of F-methods based on interpolation procedures.

Let α ∈ Ω be a solution of equation (2) and V(α) a neighborhood of α. Assume that f has an inverse on V(α) and denote g := f^{−1}. Since

f(α) = 0,

it follows that

α = g(0).

This way, the approximation of the solution α is reduced to the approximation of g(0).


Definition 13 The approximation of g by means of an interpolation method, and of α by the value of this approximation at the point zero, is called an inverse interpolation procedure.

5.1. One-step methods

Let F be a one-step method, i.e., for a given xi we have xi+1 = F (xi) .

Remark 14 If p = 1, the convergence condition is |F'(x)| < 1.

If p > 1, there always exists a neighborhood of α on which the F-method converges.

All the information on f is given at a single point, the starting value ⇒ we are led to Taylor interpolation.

Theorem 15 Let α be a solution of equation (2), V(α) a neighborhood of α, x, x_i ∈ V(α), and assume that f fulfills the necessary continuity conditions.


Then we have the following method, denoted by F_{T_m}, for approximating α:

F_{T_m}(x_i) = x_i + \sum_{k=1}^{m−1} \frac{(−1)^k}{k!} [f(x_i)]^k g^{(k)}(f(x_i)),   (5)

where g = f^{−1}.

Proof. There exists g = f^{−1} ∈ C^m[V(0)]. Let y_i = f(x_i) and consider the Taylor interpolation formula

g(y) = (T_{m−1} g)(y) + (R_{m−1} g)(y),

with

(T_{m−1} g)(y) = \sum_{k=0}^{m−1} \frac{1}{k!} (y − y_i)^k g^{(k)}(y_i),

and R_{m−1} g the corresponding remainder.

Since α = g(0) and g ≈ T_{m−1} g, it follows that

α ≈ (T_{m−1} g)(0) = x_i + \sum_{k=1}^{m−1} \frac{(−1)^k}{k!} y_i^k g^{(k)}(y_i).


Hence,

x_{i+1} := F_{T_m}(x_i) = x_i + \sum_{k=1}^{m−1} \frac{(−1)^k}{k!} [f(x_i)]^k g^{(k)}(f(x_i))

is an approximation of α, and F_{T_m} is an approximation method for the solution α.

Concerning the order of the method F_{T_m} we state:

Theorem 16 If g = f^{−1} satisfies the condition g^{(m)}(0) ≠ 0, then ord(F_{T_m}) = m.

Proof. See the bibliography.

Remark 17 We have an upper bound for the absolute error in approximating α by x_{i+1}:

|α − F_{T_m}(x_i)| ≤ \frac{1}{m!} |f(x_i)|^m M_m g,   with M_m g = \sup_{y ∈ V(0)} |g^{(m)}(y)|.


Particular cases.

1) Case m = 2.

F_{T_2}(x_i) = x_i − \frac{f(x_i)}{f'(x_i)}.

This method is called Newton’s method (the tangent method). Its

order is 2.
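A minimal sketch of Newton's method as the particular case F_{T_2}; the function name newton, the test function f(x) = x^2 − 2 and the stopping rule are illustrative assumptions, not part of the course text.

```python
def newton(f, fprime, x0, eps=1e-10, max_iter=50):
    """Newton's method (the tangent method): x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < eps:
            return x_new, i
        x = x_new
    return x, max_iter

# Example: the positive root of f(x) = x**2 - 2; starting from x0 = 1.0 the
# iterates converge to sqrt(2) in a handful of steps, reflecting order 2.
root, its = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root, its)
```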