

Page 1: Chapter v

CHAPTER V
Iterative Methods to Solve Systems of Linear Equations

By: Maria Fernanda Vergara Mendoza
Petroleum Engineering

Universidad Industrial de Santander
Colombia, 2010

Page 2: Chapter v

SPECIAL MATRICES

A banded matrix is a square matrix in which all elements are equal to zero, with the exception of a band centered on the main diagonal. Gauss elimination or LU decomposition can be used to solve these systems, but they are inefficient: when pivoting is unnecessary, none of the elements outside the band ever change from their original values of zero, so the work spent on them is wasted.

If it is known beforehand that pivoting is unnecessary, very efficient algorithms can be developed that do not involve the zero elements outside the band.

Page 3: Chapter v

The dimension of a banded system can be quantified by two parameters:

• The bandwidth (BW)
• The half-bandwidth (HBW)

These are related by BW = 2·HBW + 1.

[Figure: a banded matrix, indicating the main diagonal, the half-bandwidth (HBW), and the bandwidth (BW).]
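As an illustration of how the band can be exploited, the sketch below solves a tridiagonal system (HBW = 1, BW = 3) with the Thomas algorithm, a banded variant of Gauss elimination. This is a minimal Python/NumPy sketch, assuming pivoting is not required; the 4×4 test system is a hypothetical example, not one taken from these slides.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (half-bandwidth 1) without pivoting.

    a: sub-diagonal   (length n, a[0] is unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[-1] is unused)
    d: right-hand side vector (length n)
    """
    n = len(b)
    cp = np.zeros(n)   # modified super-diagonal
    dp = np.zeros(n)   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination, touching only the elements inside the band
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Hypothetical 4x4 tridiagonal test system
a = np.array([0.0, -1.0, -1.0, -1.0])   # sub-diagonal
b = np.array([4.0, 4.0, 4.0, 4.0])      # main diagonal
c = np.array([-1.0, -1.0, -1.0, 0.0])   # super-diagonal
d = np.array([5.0, 5.0, 5.0, 5.0])      # right-hand side
print(thomas(a, b, c, d))
```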

Page 4: Chapter v

THE JACOBI METHOD

“The Jacobi method is an algorithm for determining the solutions of a diagonally dominant system of linear equations, i.e., one in which the element of largest absolute value in each row and column is the diagonal element. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges.”

Given a square system of n linear equations, Ax = b, write A = D + R, where D is the diagonal component of A and R is the remainder (the off-diagonal elements).

The system of linear equations may then be rewritten as:

(D + R)x = b
Dx + Rx = b
Dx = b − Rx

Page 5: Chapter v

THE JACOBI METHOD

The Jacobi method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x(k+1) = D⁻¹(b − R x(k))

The element-based formula is thus:

x_i(k+1) = ( b_i − Σ_{j≠i} a_ij x_j(k) ) / a_ii,    i = 1, 2, …, n
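A minimal sketch of how the element-based formula can be coded (Python/NumPy is assumed here; the residual-based stopping test and the small diagonally dominant test system are illustrative assumptions, not part of the slides):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=100):
    """Element-based Jacobi iteration:
    x_i(k+1) = (b_i - sum_{j != i} a_ij * x_j(k)) / a_ii
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x_new - b) < tol:   # stop when the residual is small
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Hypothetical diagonally dominant test system
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iterations = jacobi(A, b)
print(x, iterations)
```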

Page 6: Chapter v

EXAMPLE

A linear system of the form Ax = b with initial estimate x(0) is given by

Use the equation x(k+1) = D⁻¹(b − R x(k)) to estimate x. First, rewrite the equation in the form D⁻¹(b − R x(k)) = T x(k) + C, where T = −D⁻¹R and C = D⁻¹b. Note that R = L + U, where L and U are the strictly lower and strictly upper triangular parts of A. From the known values,

determine T = −D⁻¹(L + U) as

Page 7: Chapter v

EXAMPLE

C is found as

With T and C calculated, we estimate x as x(1) = Tx(0) + C:

The next iteration yields

This process is repeated until convergence (i.e., until |Ax(n) – b| is small). The solution after 25 iterations is
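Because the numerical values of A, b, and x(0) shown on the slide are not reproduced here, the sketch below uses an assumed 2×2 diagonally dominant system purely to illustrate building T = −D⁻¹R and C = D⁻¹b and iterating x(k+1) = T x(k) + C:

```python
import numpy as np

# Assumed example system (not the one from the slide)
A = np.array([[2.0, 1.0],
              [5.0, 7.0]])
b = np.array([11.0, 13.0])
x = np.array([1.0, 1.0])          # initial estimate x(0)

D = np.diag(np.diag(A))           # diagonal component of A
R = A - D                         # remainder: R = L + U (strict parts)

T = -np.linalg.inv(D) @ R         # T = -D^{-1} R
C = np.linalg.inv(D) @ b          # C = D^{-1} b

for k in range(25):               # iterate x(k+1) = T x(k) + C
    x = T @ x + C

print(x)                          # approaches the exact solution of Ax = b
```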

Page 8: Chapter v

GAUSS-SEIDEL METHOD

Very similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The Gauss–Seidel method is the most commonly used iterative method.

Assume that Ax = b and decompose A into the sum A = L* + U, where L* is the lower triangular part of A (including the diagonal) and U is the strictly upper triangular part. The system of linear equations may then be rewritten as:

L* x = b − U x

“The Gauss–Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side,” which gives:

x(k+1) = L*⁻¹(b − U x(k))

Page 9: Chapter v

The elements of x(k+1) can be computed sequentially using forward substitution:

x_i(k+1) = ( b_i − Σ_{j<i} a_ij x_j(k+1) − Σ_{j>i} a_ij x_j(k) ) / a_ii,    i = 1, 2, …, n

The procedure is continued until the changes made by an iteration are below some tolerance.

Convergence: the iteration converges when A is strictly diagonally dominant, or symmetric and positive definite.
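A minimal sketch of the sequential, forward-substitution update (Python/NumPy assumed; the tolerance and the test system are illustrative assumptions). The components x_j(k+1) with j < i are used as soon as they are available, which is what distinguishes Gauss–Seidel from Jacobi:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration: x is updated in place, so values already
    computed in the current sweep (j < i) are used immediately."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = sum(A[i, j] * x[j] for j in range(i))              # new values
            s2 = sum(A[i, j] * x_old[j] for j in range(i + 1, n))   # old values
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:   # change below tolerance: stop
            return x, k + 1
    return x, max_iter

# Hypothetical diagonally dominant test system
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(gauss_seidel(A, b))
```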

Page 10: Chapter v

EXAMPLE

We want to use the equation

x(k+1) = L*⁻¹(b − U x(k))

in the form

x(k+1) = T x(k) + C

where

T = −L*⁻¹ U and C = L*⁻¹ b

We must decompose A into the sum L* + U.

The inverse of L* is

and

Now we can find T and C:

Page 11: Chapter v

We can use T and C to obtain the vectors x(k) iteratively, starting from an initial guess x(0):

⋮

Page 12: Chapter v

As expected, the algorithm converges to the exact solution:
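The same construction can be reproduced in a few lines of NumPy. The 2×2 system below is an assumed example (the slide's matrices are not reproduced in this transcript); it only illustrates forming T = −L*⁻¹U and C = L*⁻¹b and iterating x(k+1) = T x(k) + C until the result matches the direct solution:

```python
import numpy as np

# Assumed example system (not the one from the slide)
A = np.array([[16.0,   3.0],
              [ 7.0, -11.0]])
b = np.array([11.0, 13.0])
x = np.array([1.0, 1.0])              # initial guess x(0)

L_star = np.tril(A)                   # lower triangle, including the diagonal
U = np.triu(A, k=1)                   # strictly upper triangle

T = -np.linalg.inv(L_star) @ U        # T = -L*^{-1} U
C = np.linalg.inv(L_star) @ b         # C = L*^{-1} b

for k in range(20):                   # x(k+1) = T x(k) + C
    x = T @ x + C

print(x)                              # iterative result
print(np.linalg.solve(A, b))          # exact solution, for comparison
```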

Page 13: Chapter v

IMPROVEMENT OF CONVERGENCE USING RELAXATION

Relaxation represents a slight modification of the Gauss-Seidel method and is designed to enhance convergence. After each new value of x is computed, that value is modified by a weighted average of the results of the previous and the present iterations:

x_i(new) = λ·x_i(new) + (1 − λ)·x_i(old)

where λ is a weighting factor that is assigned a value between 0 and 2.

If λ = 1, the result is unmodified.
If 0 < λ < 1, the method is called underrelaxation.
If 1 < λ < 2, it is called overrelaxation.
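A sketch of relaxation applied inside a Gauss–Seidel sweep (Python/NumPy; the parameter name lam, the default value λ = 1.25, and the test system are illustrative assumptions):

```python
import numpy as np

def relaxed_gauss_seidel(A, b, lam=1.25, x0=None, tol=1e-8, max_iter=200):
    """Gauss-Seidel with relaxation: each newly computed value is blended
    with the previous value using the weighting factor lam (0 < lam < 2)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = sum(A[i, j] * x[j] for j in range(i))
            s2 = sum(A[i, j] * x_old[j] for j in range(i + 1, n))
            gs = (b[i] - s1 - s2) / A[i, i]            # plain Gauss-Seidel value
            x[i] = lam * gs + (1.0 - lam) * x_old[i]   # weighted average
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

# Hypothetical test: overrelaxation (lam = 1.25) on a diagonally dominant system
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(relaxed_gauss_seidel(A, b))
```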

Page 14: Chapter v

BIBLIOGRAPHY

CHAPRA S., Numerical Methods for Engineers, McGraw-Hill.
http://www.math-linux.com/spip.php?article48
http://www.netlib.org/linalg/html_templates/node14.html#figgs