
Jacobi and Gauss-Seidel Methods


Special Matrices

A band matrix is a square matrix in which all elements are zero except for a band centered on the main diagonal. A tridiagonal system (i.e., one with a bandwidth of 3) can be expressed in general form as:

$$\begin{bmatrix} f_0 & g_0 & & \\ e_1 & f_1 & g_1 & \\ & e_2 & f_2 & g_2 \\ & & e_3 & f_3 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

Based on LU decomposition, the decomposition step of the Thomas algorithm is:

 

$$e_k = \frac{e_k}{f_{k-1}}, \qquad f_k = f_k - e_k\,g_{k-1}, \qquad k = 1, \ldots, n-1$$

  The forward substitution is

 

$$b_k = b_k - e_k\,b_{k-1}, \qquad k = 1, \ldots, n-1$$

and the back substitution is:

 

$$x_{n-1} = \frac{b_{n-1}}{f_{n-1}}, \qquad x_k = \frac{b_k - g_k\,x_{k+1}}{f_k}, \qquad k = n-2, \ldots, 0$$
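The three steps above can be collected into a short Java routine. The following is a minimal sketch, not taken from the original text; the method name thomas and the convention of overwriting the input arrays are assumptions.

// Minimal sketch of the Thomas algorithm described above (assumed helper, not from the text).
// e, f, g hold the sub-, main and super-diagonal; b is the right-hand side.
// All arrays are modified in place and b ends up holding the solution x.
static void thomas(double[] e, double[] f, double[] g, double[] b) {
    int n = f.length;
    // Decomposition: e[k] = e[k]/f[k-1], f[k] = f[k] - e[k]*g[k-1]
    for (int k = 1; k < n; k++) {
        e[k] = e[k] / f[k - 1];
        f[k] = f[k] - e[k] * g[k - 1];
    }
    // Forward substitution: b[k] = b[k] - e[k]*b[k-1]
    for (int k = 1; k < n; k++) {
        b[k] = b[k] - e[k] * b[k - 1];
    }
    // Back substitution
    b[n - 1] = b[n - 1] / f[n - 1];
    for (int k = n - 2; k >= 0; k--) {
        b[k] = (b[k] - g[k] * b[k + 1]) / f[k];
    }
}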

  Example:   Solve the following tridiagonal system using the Thomas algorithm.

$$\begin{bmatrix} 2.04 & -1 & & \\ -1 & 2.04 & -1 & \\ & -1 & 2.04 & -1 \\ & & -1 & 2.04 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 40.8 \\ 0.8 \\ 0.8 \\ 200.8 \end{bmatrix}$$

The result of the triangular decomposition is:

$$\begin{bmatrix} 2.04 & -1 & & \\ -0.49 & 1.55 & -1 & \\ & -0.645 & 1.395 & -1 \\ & & -0.717 & 1.323 \end{bmatrix}$$

  The solution of the system is:

$x = [65.970,\ 93.778,\ 124.538,\ 159.480]^T$
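Using the hypothetical thomas routine sketched above, the same example could be set up as follows (the array layout, with unused e[0] and g[3] entries, is an assumption of that sketch):

// Usage sketch for the example above, assuming the thomas method shown earlier.
double[] e = {0, -1, -1, -1};            // sub-diagonal (e[0] unused)
double[] f = {2.04, 2.04, 2.04, 2.04};   // main diagonal
double[] g = {-1, -1, -1, 0};            // super-diagonal (g[3] unused)
double[] b = {40.8, 0.8, 0.8, 200.8};    // right-hand side
thomas(e, f, g, b);
// b should now hold the solution, approximately {65.970, 93.778, 124.538, 159.480}
System.out.println(java.util.Arrays.toString(b));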

Cholesky Decomposition

This algorithm is based on the fact that a symmetric (positive-definite) matrix can be decomposed as [A] = [L][L]^T. In this case Crout-style elimination applies directly, and the lower and upper triangular factors simply share the same values. The equations of the LU factorization can therefore be adapted as follows. Any element below the diagonal is calculated as:

$$a_{i,j} = \frac{a_{i,j} - \sum_{k=0}^{j-1} a_{i,k}\,a_{j,k}}{a_{j,j}}$$

for all i = 0, ..., n-1 and j = 0, ..., i-1. Of the terms on and above the diagonal, only the diagonal itself needs to be computed:

$$a_{i,i} = \sqrt{a_{i,i} - \sum_{k=0}^{i-1} a_{i,k}^{2}}$$

for all i = 0, ..., n-1. The Java implementation is:

static public void Cholesky(double A[][]) {
    int i, j, k, n;
    double suma;

    n = A.length;
    for (i = 0; i < n; i++) {
        // Elements of row i below the diagonal
        for (j = 0; j <= i - 1; j++) {
            suma = 0;
            for (k = 0; k <= j - 1; k++)
                suma += A[i][k] * A[j][k];
            A[i][j] = (A[i][j] - suma) / A[j][j];
        }
        // Diagonal element
        suma = 0;
        for (k = 0; k <= i - 1; k++)
            suma += A[i][k] * A[i][k];
        A[i][i] = Math.sqrt(A[i][i] - suma);
    }
}
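As an illustrative usage (the matrix values below are an assumed example, not taken from the text), the routine overwrites the lower triangle of A with L:

// Usage sketch; the matrix is an assumed symmetric positive-definite example.
double[][] A = {
    {  4,  12, -16},
    { 12,  37, -43},
    {-16, -43,  98}
};
Cholesky(A);
// The lower triangle of A now holds L, here {2, 0, 0}, {6, 1, 0}, {-8, 5, 3};
// the entries above the diagonal are left untouched.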

 Jacobi Method

In numerical analysis the Jacobi method is an iterative method used for solving systems of linear equations of the type Ax = b. The algorithm is named after the German mathematician Carl Gustav Jakob Jacobi.

Description

The basis of the method is to construct a convergent sequence defined iteratively; the limit of this sequence is precisely the solution of the system. For practical purposes, if the algorithm is stopped after a finite number of steps it yields an approximation of the value of x in the solution of the system. The sequence is constructed by decomposing the system matrix as follows:

$$A = D + L + U,$$

where

$D$ is a diagonal matrix,
$L$ is a strictly lower triangular matrix,
$U$ is a strictly upper triangular matrix.


Starting from $Ax = b$, that is, $(D + L + U)x = b$, we can rewrite this equation as:

$$Dx = b - (L + U)x$$

Then

$$x = D^{-1}\left(b - (L + U)x\right),$$

provided that $a_{ii} \neq 0$ for each $i$. For the iterative rule, the definition of the Jacobi method can be expressed as:

$$x^{(k+1)} = D^{-1}\left(b - (L + U)x^{(k)}\right),$$

where $k$ is the iteration counter. Finally, written element by element, we have:

$$x_i^{(k+1)} = \frac{b_i - \sum_{j \neq i} a_{ij}\,x_j^{(k)}}{a_{ii}}, \qquad i = 1, \ldots, n.$$

Note that the calculation of $x_i^{(k+1)}$ requires all the elements of $x^{(k)}$ except the one with the same index $i$. Therefore, unlike in the Gauss-Seidel method, $x_i^{(k)}$ cannot be overwritten with $x_i^{(k+1)}$, since its value is still needed for the remainder of the calculations. This is the most significant difference between the Jacobi and Gauss-Seidel methods. The minimum amount of storage is two vectors of dimension n, and an explicit copy must be made.


Convergence

Jacobi's method always converges if the matrix A is strictly diagonally dominant, and it can converge even when this condition is not satisfied. It is necessary, however, that the diagonal elements of the matrix be greater (in magnitude) than the other elements.
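As a quick illustration of the dominance condition, a small helper such as the following (an assumed utility, not part of the original text) checks strict diagonal dominance by rows:

// Assumed helper, not from the text: strict row diagonal dominance check.
static boolean isStrictlyDiagonallyDominant(double[][] A) {
    for (int i = 0; i < A.length; i++) {
        double offDiagonal = 0;
        for (int j = 0; j < A.length; j++) {
            if (j != i)
                offDiagonal += Math.abs(A[i][j]);   // sum of |a_ij|, j != i
        }
        if (Math.abs(A[i][i]) <= offDiagonal)
            return false;   // row i is not strictly dominated by its diagonal entry
    }
    return true;
}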

Algorithm

The Jacobi method can be written in the form of an algorithm as follows:

Algorithm: Jacobi Method

function Jacobi(A, x0)
    // x0 is an initial approximation to the solution
    for k = 1 until convergence do
        for i = 1 to n do
            σ = 0
            for j = 1 to n do
                if j ≠ i then
                    σ = σ + a_ij * x_j^(k-1)
            end for
            x_i^(k) = (b_i - σ) / a_ii
        end for
        check whether convergence has been reached
    end for


Algorithm in Java

public class Jacobi {

    double[][] matriz = {{4, -2, 1}, {1, -5, 3}, {2, 1, 4}};
    double[] vector = {2, 1, 3};
    double[] vectorR = {1, 2, 3};
    double[] x2;
    double sumatoria;
    int max = 50;

    public void SolJacobi() {
        int tam = matriz.length;
        for (int t = 0; t < max; t++) {
            System.out.println("\nvector " + t + "\n");
            x2 = vectorR.clone();   // copy of the previous iterate x^(k)
            for (int i = 0; i < tam; i++) {
                sumatoria = 0;
                for (int s = 0; s < tam; s++) {
                    if (s != i)
                        sumatoria += matriz[i][s] * x2[s];
                }
                vectorR[i] = (vector[i] - sumatoria) / matriz[i][i];
                System.out.print(" " + vectorR[i]);
            }
        }
    }

    public static void main(String[] args) {
        Jacobi obj = new Jacobi();
        obj.SolJacobi();
    }
}
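With the system hard-coded above (which is strictly diagonally dominant), the printed vectors should approach approximately (0.47, 0.18, 0.47); this is an observation about the example data, not a statement from the original text.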

Gauss-Seidel Method

In numerical analysis the Gauss-Seidel method is an iterative method used to solve systems of linear equations. It is named in honor of the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel and is similar to the Jacobi method.


Description

It is an iterative method, which means that it starts from an initial approximation and repeats the process until a solution with as small a margin of error as desired is obtained. We seek the solution of a system of linear equations which, in matrix notation, is:

$$Ax = b$$

The Gauss-Seidel iteration is

$$x^{(k+1)} = N^{-1}\left(b - P\,x^{(k)}\right),$$

where

$$n_{ij} = a_{ij} \ \text{ for } i = j \text{ or } i > j, \qquad n_{ij} = 0 \ \text{ for } i < j,$$

and

$$p_{ij} = a_{ij} \ \text{ for } i < j, \qquad p_{ij} = 0 \ \text{ for } i \geq j.$$

That is, $N$ is the lower triangular part of $A$ (including the diagonal), $P$ is its strictly upper triangular part, and $A = N + P$.

Considering the system $Ax = b$, with the proviso that $a_{ii} \neq 0$, $i = 1, \ldots, n$, we can then write the iteration formula of the method as

$$x_i^{(k+1)} = \frac{b_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)}}{a_{ii}}, \qquad i = 1, \ldots, n. \qquad (*)$$


The difference between this method and Jacobi's is that, in the latter, the improved values are not used until the iteration is completed, whereas Gauss-Seidel uses each improved value as soon as it is available.

Convergence

Theorem: Suppose that $A$ is a nonsingular matrix that satisfies the condition

$$|a_{ii}| > \sum_{j \neq i} |a_{ij}|, \qquad i = 1, \ldots, n,$$

or

$$|a_{jj}| > \sum_{i \neq j} |a_{ij}|, \qquad j = 1, \ldots, n$$

(strict diagonal dominance by rows or by columns).

Then the Gauss-Seidel method converges to a solution of the system of equations Ax = b, and the convergence is at least as fast as that of the Jacobi method.

For the cases where the method converges, we first show that it can be written as

$$x^{(k+1)} = B\,x^{(k)} + c \qquad (**)$$

(the term $x^{(k)}$ is the approximation obtained after the $k$-th iteration); this way of writing the iteration is the general form of a stationary iterative method.

First we must show that the linear problem we want to solve can be represented in the form (**). To this end we write the matrix as the product of a diagonal matrix with the sum of a strictly lower triangular, an identity and a strictly upper triangular matrix, $A = D(L + I + U)$, with $D = \mathrm{diag}(a_{11}, \ldots, a_{nn})$. Making the necessary rearrangements, the method can be written as

$$x^{(k+1)} = (L + I)^{-1}\left(D^{-1}b - U\,x^{(k)}\right),$$

hence $B = -(L + I)^{-1} U$ and $c = (L + I)^{-1} D^{-1} b$. Now we can obtain the relation between the errors by subtracting the fixed-point equation $x = Bx + c$ from (**):

$$e^{(k+1)} = x^{(k+1)} - x = B\left(x^{(k)} - x\right) = B\,e^{(k)}.$$


Now suppose that $\lambda_i$, $i = 1, \ldots, n$, are the eigenvalues of $B$ corresponding to the eigenvectors $u_i$, $i = 1, \ldots, n$, which are linearly independent. Then we can write the initial error as

$$e^{(0)} = \sum_{i=1}^{n} c_i\,u_i, \qquad\text{so that}\qquad e^{(k)} = B^k e^{(0)} = \sum_{i=1}^{n} c_i\,\lambda_i^{k}\,u_i. \qquad (***)$$

Therefore, the iteration converges if and only if $|\lambda_i| < 1$, $i = 1, \ldots, n$. From this the following theorem follows:

Theorem: A necessary and sufficient condition for a stationary iterative method $x^{(k+1)} = Bx^{(k)} + c$ to converge for an arbitrary initial approximation $x^{(0)}$ is that

$$\rho(B) < 1,$$

where $\rho(B)$ is the spectral radius of $B$.

Explanation

We choose an initial approximation $x^{(0)}$. The iteration matrix $B$ and the vector $c$ are calculated with the formulas mentioned above. The process is repeated until $x^{(k)}$ is sufficiently close to $x^{(k-1)}$, where $k$ represents the number of steps of the iteration.

Algorithm

The Gauss-Seidel method can be written as an algorithm as follows:

Algorithm: Gauss-Seidel Method

function Gauss-Seidel(A, x0)
    // x0 is an initial approximation to the solution
    for k = 1 until convergence do
        for i = 1 to n do
            σ = 0
            for j = 1 to n do
                if j ≠ i then
                    σ = σ + a_ij * x_j
            end for
            x_i = (b_i - σ) / a_ii
        end for
        check whether convergence has been reached
    end for
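The text above gives no Java listing for Gauss-Seidel. The following is a minimal sketch written to mirror the Jacobi class shown earlier; the class name GaussSeidel and the reuse of the same example system are assumptions, not part of the original document.

public class GaussSeidel {

    // Same example system as in the Jacobi class above (assumed for comparison).
    double[][] matriz = {{4, -2, 1}, {1, -5, 3}, {2, 1, 4}};
    double[] vector = {2, 1, 3};
    double[] vectorR = {1, 2, 3};   // initial approximation, updated in place
    int max = 50;

    public void SolGaussSeidel() {
        int tam = matriz.length;
        for (int t = 0; t < max; t++) {
            for (int i = 0; i < tam; i++) {
                double sumatoria = 0;
                for (int s = 0; s < tam; s++) {
                    if (s != i)
                        sumatoria += matriz[i][s] * vectorR[s];   // newest values are used at once
                }
                vectorR[i] = (vector[i] - sumatoria) / matriz[i][i];
            }
            System.out.println("vector " + t + ": " + java.util.Arrays.toString(vectorR));
        }
    }

    public static void main(String[] args) {
        new GaussSeidel().SolGaussSeidel();
    }
}

Because vectorR is overwritten component by component, each update automatically uses the values already improved in the current iteration, which is exactly the difference from Jacobi noted above.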

EXAMPLE: JACOBI AND GAUSS-SEIDEL METHODS

These are two numerical methods that allow us to find solutions to systems with the same number of equations as unknowns.

In both methods the following process is carried out, with a small variation in Gauss-Seidel.

We have these equations:

5x - 2y + z = 3
-x - 7y + 3z = -2
2x - y + 8z = 1

1. Solve each equation for one of the unknowns in terms of the others:

x = (3 + 2y - z)/5
y = (x - 3z - 2)/(-7)
z = (1 - 2x + y)/8


2. Give initial values to the unknowns

x1 = 0, y1 = 0, z1 = 0

Jacobi: substitute the initial values into each equation; this gives the new values to be used in the next iteration.

x = (3 + 2*0 - 0)/5 = 0.60
y = (0 - 3*0 - 2)/(-7) ≈ 0.28
z = (1 - 2*0 + 0)/8 ≈ 0.12

Gauss-Seidel: substitute the values into each equation, but using each newly found value immediately.

x = (3 + 2*0 - 0)/5 = 0.6
y = (0.6 - 3*0 - 2)/(-7) = 0.2
z = (1 - 2*0.6 + 0.2)/8 = 0

Perform as many iterations as you want, using the newly found values as the initial values for the next iteration. You can stop the execution of the algorithm by calculating the error, which we can find with this formula: sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2).
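In Java this error can be computed with a small helper like the one below (the method name and its use as a stopping test are assumptions, not part of the original text):

// Assumed helper, not from the text: Euclidean distance between two successive
// iterates (x0, y0, z0) and (x1, y1, z1), usable as a stopping criterion.
static double error(double x0, double y0, double z0,
                    double x1, double y1, double z1) {
    return Math.sqrt((x1 - x0) * (x1 - x0)
                   + (y1 - y0) * (y1 - y0)
                   + (z1 - z0) * (z1 - z0));
}
// The iteration can stop once error(...) falls below a chosen tolerance, e.g. 1e-6.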


[Jacobi iteration table from the Excel sheet]

[Gauss-Seidel iteration table from the Excel sheet]

The main difference is that the Gauss-Seidel method uses the newly found values immediately, which makes the whole process faster and consequently makes it a more effective method.

The formulas used in the Excel sheet for the Jacobi method are


=(3+2*D5-E5)/5
=(C5-3*E5-2)/-7
=(1-2*C5+D5)/8
=RAIZ((C6-C5)^2 + (D6-D5)^2 + (E6-E5)^2)

corresponding to the variables X, Y, Z and the error, respectively (RAIZ is the Spanish-locale Excel name for SQRT).

For Gauss-Seidel:

=(3+2*J5-K5)/5
=(I6-3*K5-2)/-7
=(1-2*I6+J6)/8
=RAIZ((I6-I5)^2 + (J6-J5)^2 + (K6-K5)^2)