Direct and Iterative Methods for Sparse Linear Systems

Shirley Moore
[email protected]
CPS5401 Fall 2013
svmoore.pbworks.com
November 26, 2013


Learning Objectives

• Describe advantages and disadvantages of direct and iterative methods for solving sparse linear systems.

• Describe the general methodology underlying the method of Conjugate Gradients (CG).

• Apply an appropriate method from a solver library to solve a particular sparse linear system, for both symmetric positive definite and nonsymmetric matrices.

• Be able to find and make use of documentation on sparse solver libraries.


Direct vs. Iterative Methods

• In a direct method, the matrix of the initial linear system is transformed or factorized into a simpler form that can be solved easily. The exact solution is obtained in a finite number of arithmetic operations, neglecting numerical rounding errors.

• Iterative methods compute a sequence of approximate solutions that converges to the exact solution in the limit; in practice, the iteration is stopped once a desired accuracy is reached.
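
The contrast can be seen with SciPy (a minimal sketch; the well-conditioned tridiagonal test matrix and SciPy's default CG stopping tolerance are illustrative choices, not from the slides):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small, well-conditioned sparse symmetric positive definite test system.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Direct method: factorize, then solve exactly (up to rounding error).
x_direct = spla.spsolve(A.tocsc(), b)

# Iterative method: conjugate gradients, stopped at a requested accuracy.
x_iter, info = spla.cg(A, b)               # info == 0 means the iteration converged

print(np.linalg.norm(x_direct - x_iter))   # the two answers agree to the CG tolerance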


Direct vs. Iterative Methods (cont.)

• Direct methods have traditionally been preferred to iterative methods for solving linear systems, mainly because of their simplicity and robustness.

• However, the emergence of conjugate gradient methods and Krylov subspace iterations has provided an efficient alternative to direct solvers.

• Nowadays, iterative methods are almost mandatory in complex applications, mainly because of memory and computational requirements that prohibit the use of direct methods.

• Iterative methods usually involve a matrix-vector multiplication procedure that is cheap to compute on modern computer architectures.

• When the matrix A is very large and sparse (most of its entries are zero), the LU factorization would contain many more nonzero coefficients than the matrix A itself.

• Nonetheless, some particular applications produce very ill-conditioned matrices that may require a direct method to solve the problem at hand.


Direct Solvers for Sparse Linear Systems

• Direct solvers for sparse matrices involve much more complicated algorithms than for dense matrices. The main complication is the need to handle the fill-in in the factors L and U efficiently.

• A typical sparse solver consists of four distinct steps, as opposed to two in the dense case:
  1. An ordering step that reorders the rows and columns so that the factors suffer little fill, or so that the matrix has special structure such as block triangular form.
  2. An analysis step, or symbolic factorization, that determines the nonzero structures of the factors and creates suitable data structures for them.
  3. A numerical factorization step that computes the L and U factors.
  4. A solve step that performs forward and back substitution using the factors.
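
SciPy's sparse LU interface (which wraps SuperLU, listed on the next slide) exposes the results of these steps. A minimal sketch with an illustrative 2-D Poisson test matrix; the ordering, symbolic analysis, and numerical factorization all happen inside the splu() call:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix of order n = m*m, a standard sparse SPD test problem.
m = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = sp.kronsum(T, T, format="csc")
b = np.ones(m * m)

# Steps 1-3: column ordering (here COLAMD), symbolic analysis, and numerical
# factorization are carried out by splu(); the result holds L, U, and permutations.
lu = spla.splu(A, permc_spec="COLAMD")
print("nnz in A:", A.nnz, " nnz in L and U:", lu.L.nnz + lu.U.nnz)   # shows the fill-in

# Step 4: forward and back substitution with the stored factors.
x = lu.solve(b)
print(np.linalg.norm(A @ x - b))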


Direct Solver Packages

• SuperLU
  – http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
  – SuperLU for sequential machines
  – SuperLU_MT for shared-memory parallel machines
  – SuperLU_DIST for distributed-memory parallel machines

• See the survey of direct solvers by Xiaoye Li
  – http://crd-legacy.lbl.gov/~xiaoye/SuperLU/SparseDirectSurvey.pdf

• See also research by Tim Davis
  – http://www.cise.ufl.edu/~davis/welcome.html


Iterative Methods

• Use successive approximations to obtain more accurate solutions to a linear system
• Suitable for large sparse linear systems
• Stationary methods are older, simple to understand and implement, but not as effective
  – Perform the same operations each iteration
• Nonstationary methods are more recent, harder to understand, and highly effective
  – Have iteration-dependent coefficients
• Typically use a transformation matrix called a preconditioner that improves convergence of the method

• Reference: Templates book, http://www.netlib.org/linalg/html_templates/report.html


Stationary Methods

• Jacobi (a minimal sketch follows this list)
  – Based on solving for every variable locally with respect to the other variables
  – One iteration of the method corresponds to solving for every variable once.
  – The resulting method is easy to understand and implement, but convergence is slow.
• Gauss-Seidel
  – Like the Jacobi method, except that it uses updated values as soon as they are available
  – In general, if the Jacobi method converges, the Gauss-Seidel method will converge faster than the Jacobi method, though still relatively slowly.
• Successive Overrelaxation (SOR)
  – Can be derived from the Gauss-Seidel method by introducing an extrapolation parameter ω
  – For the optimal choice of ω, SOR may converge faster than Gauss-Seidel by an order of magnitude.
• Symmetric Successive Overrelaxation (SSOR)
  – No advantage over SOR as a stand-alone iterative method
  – However, it is useful as a preconditioner for nonstationary methods.
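
A minimal sketch of the Jacobi iteration referenced above (dense NumPy for clarity; the test system, tolerance, and iteration cap are illustrative, not from the slides):

import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=10_000):
    """Jacobi iteration: x_new = D^(-1) (b - R x), with D = diag(A), R = A - D."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    D = np.diag(A)                  # diagonal entries of A
    R = A - np.diag(D)              # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # solve each equation for "its own" variable
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two results should agree closely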


Nonstationary Methods

• Conjugate Gradient (CG)
  – The conjugate gradient method derives its name from the fact that it generates a sequence of conjugate (or orthogonal) vectors. These vectors are the residuals of the iterates. They are also the gradients of a quadratic functional, the minimization of which is equivalent to solving the linear system. CG is an extremely effective method when the coefficient matrix is symmetric positive definite, since storage for only a limited number of vectors is required.
• Minimum Residual (MINRES) and Symmetric LQ (SYMMLQ)
  – These methods are computational alternatives to CG for coefficient matrices that are symmetric but possibly indefinite. SYMMLQ will generate the same solution iterates as CG if the coefficient matrix is symmetric positive definite.
• Conjugate Gradient on the Normal Equations: CGNE and CGNR
  – These methods are based on the application of the CG method to one of two forms of the normal equations. When the coefficient matrix is nonsymmetric and nonsingular, the normal-equations matrices will be symmetric and positive definite, and hence CG can be applied. Convergence may be slow, since the spectrum of the normal-equations matrices will be less favorable. (A minimal sketch follows this list.)
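
For illustration, CGNR amounts to applying CG to A^T A x = A^T b. A minimal SciPy sketch with an arbitrary nonsymmetric, diagonally dominant test matrix (not from the slides):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# A nonsymmetric, nonsingular tridiagonal matrix.
A = sp.diags([-1.2, 3.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

AtA = (A.T @ A).tocsr()            # symmetric positive definite, but worse conditioned
Atb = A.T @ b
x, info = spla.cg(AtA, Atb)        # CGNR: CG applied to the normal equations
print(info, np.linalg.norm(A @ x - b))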


Nonstationary Methods (cont.)

• Generalized Minimal Residual (GMRES)
  – The Generalized Minimal Residual method computes a sequence of orthogonal vectors (like MINRES), and combines these through a least-squares solve and update. However, unlike MINRES (and CG), it requires storing the whole sequence, so that a large amount of storage is needed. For this reason, restarted versions of this method are used. In restarted versions, computation and storage costs are limited by specifying a fixed number of vectors to be generated. This method is useful for general nonsymmetric matrices.
• BiConjugate Gradient (BiCG)
  – The Biconjugate Gradient method generates two CG-like sequences of vectors, one based on a system with the original coefficient matrix A, and one on A^T. Instead of orthogonalizing each sequence, they are made mutually orthogonal, or "bi-orthogonal". This method, like CG, uses limited storage. It is useful when the matrix is nonsymmetric and nonsingular; however, convergence may be irregular, and there is a possibility that the method will break down. BiCG requires a multiplication with the coefficient matrix and with its transpose at each iteration.
• Quasi-Minimal Residual (QMR)
  – The Quasi-Minimal Residual method applies a least-squares solve and update to the BiCG residuals, thereby smoothing out the irregular convergence behavior of BiCG, which may lead to more reliable approximations. In full glory, it has a look-ahead strategy built in that avoids the BiCG breakdown. Even without look-ahead, QMR largely avoids the breakdown that can occur in BiCG. On the other hand, it does not effect a true minimization of either the error or the residual, and while it converges smoothly, it often does not improve on BiCG in terms of the number of iteration steps.


Nonstationary Methods (cont.)

• Conjugate Gradient Squared (CGS)
  – The Conjugate Gradient Squared method is a variant of BiCG that applies the updating operations for the A-sequence and the A^T-sequence both to the same vectors. Ideally, this would double the convergence rate, but in practice convergence may be much more irregular than for BiCG, which may sometimes lead to unreliable results. A practical advantage is that the method does not need the multiplications with the transpose of the coefficient matrix.
• Biconjugate Gradient Stabilized (Bi-CGSTAB)
  – The Biconjugate Gradient Stabilized method is a variant of BiCG, like CGS, but using different updates for the A^T-sequence in order to obtain smoother convergence than CGS.
• Chebyshev Iteration
  – The Chebyshev Iteration recursively determines polynomials with coefficients chosen to minimize the norm of the residual in a min-max sense. The coefficient matrix must be positive definite, and knowledge of the extremal eigenvalues is required. This method has the advantage of requiring no inner products.


Conjugate Gradient Method

• Popular iterative method for solving large systems of sparse linear equations Ax=b

• A is known, square, symmetric, positive-definite

• Reference: Shewchuk


Quadratic Form

• Quadratic form:

    f(x) = (1/2) x^T A x - b^T x + c

• If A is symmetric (A = A^T) and positive-definite, f(x) is minimized by the solution of Ax = b.
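
A quick numerical check of this fact, using the small SPD example worked in Shewchuk (A = [[3, 2], [2, 6]], b = [2, -8], c = 0); the perturbation directions below are arbitrary:

import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])            # symmetric positive definite
b = np.array([2.0, -8.0])
c = 0.0

def f(x):
    return 0.5 * x @ (A @ x) - b @ x + c

x_star = np.linalg.solve(A, b)        # solution of Ax = b, here [2, -2]
# The gradient f'(x) = Ax - b vanishes at x_star, and any perturbation increases f.
for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, -1.0])):
    assert f(x_star + 0.1 * d) > f(x_star)
print(x_star, f(x_star))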


Method of Steepest Descent

• Choose the direction in which f decreases most quickly: the direction opposite to the gradient f'(x(i)).

• Error: e(i) = x(i) - x, the difference between the current iterate and the exact solution.

• Residual: r(i) = b - A x(i) = -A e(i), the amount by which the current iterate fails to satisfy the system.

• The residual is the direction of steepest descent: r(i) = -f'(x(i)).


Line Search

• Choose the step size α(i) that minimizes f along the line x(i) + α r(i): set d/dα f(x(i+1)) = 0.

• By the chain rule, d/dα f(x(i+1)) = f'(x(i+1))^T r(i) = -r(i+1)^T r(i), so the new residual must be orthogonal to the current search direction.

• Solving r(i+1)^T r(i) = 0 with r(i+1) = r(i) - α(i) A r(i) gives α(i) = (r(i)^T r(i)) / (r(i)^T A r(i)).


Summary

Starting from an initial guess x(0), the method of steepest descent repeats:

  r(i)   = b - A x(i)
  α(i)   = (r(i)^T r(i)) / (r(i)^T A r(i))
  x(i+1) = x(i) + α(i) r(i)

Equivalently, once A r(i) has been computed, the residual can be updated by the recurrence r(i+1) = r(i) - α(i) A r(i), so only one matrix-vector product is needed per step.
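
A minimal sketch of these updates in NumPy (the tolerance, iteration cap, and the small Shewchuk test problem are illustrative):

import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                      # residual, also the steepest-descent direction
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))    # exact line search along r
        x = x + alpha * r
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(steepest_descent(A, b, np.zeros(2)))   # converges to [2, -2]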


Convergence Analysis

• Two cases in which steepest descent converges in a single step:
  – e(i) is an eigenvector of A
  – All eigenvalues of A are equal


Convergence Analysis (cont)

• In general, the convergence rate depends on the condition number of A and on the starting point, i.e., on the slope of e(0) relative to the coordinate system of eigenvectors.


Method of Conjugate Directions

• Idea: pick a set of n orthogonal search directions d(0), …, d(n-1). In each direction, take exactly one step, sized so that the error component along that direction is eliminated. After n steps, we'll be done.

• The algorithm should look like: x(i+1) = x(i) + α(i) d(i), with α(i) chosen so that e(i+1) is orthogonal to d(i).

• But computing that α(i) requires knowing e(i), which we don't!


Conjugate Directions

• Fix: make the search directions {d(i)} A-orthogonal rather than orthogonal.

• Def: vectors u and v are A-orthogonal (conjugate) if u^T A v = 0.

• Require instead that e(i+1) be A-orthogonal to d(i): d(i)^T A e(i+1) = 0.

• This requirement is equivalent to finding the minimum of f along the search direction d(i).


Conjugate Directions

• To get the step size, impose d(i)^T A e(i+1) = 0 with e(i+1) = e(i) + α(i) d(i):

    α(i) = -(d(i)^T A e(i)) / (d(i)^T A d(i)) = (d(i)^T r(i)) / (d(i)^T A d(i))

  which is computable, because A e(i) = -r(i).

• Algorithm summary: given A-orthogonal directions d(0), …, d(n-1), repeat for i = 0, …, n-1:

    α(i)   = (d(i)^T r(i)) / (d(i)^T A d(i))
    x(i+1) = x(i) + α(i) d(i)


Proof

• Can this really solve for x in n steps, now that "orthogonal" has been replaced by "A-orthogonal"?

• Intuition: each step of conjugate directions eliminates one A-orthogonal component of the error.


Express the initial error in the basis of the search directions:

  e(0) = δ(0) d(0) + δ(1) d(1) + … + δ(n-1) d(n-1)

Because the directions are A-orthogonal, the step along d(i) removes exactly the δ(i) component of the error (in fact α(i) = -δ(i)), so that after k steps

  e(k) = δ(k) d(k) + … + δ(n-1) d(n-1)

After n iterations, e(n) = 0.


To find A-orthogonal directions

• Conjugate Gram-Schmidt process: start from n linearly independent vectors u(0), …, u(n-1) and build each search direction by subtracting out the parts that are not A-orthogonal to the previous directions:

    d(i) = u(i) + β(i,0) d(0) + … + β(i,i-1) d(i-1),   where β(i,k) = -(u(i)^T A d(k)) / (d(k)^T A d(k))

• Difficulty: all of the old search directions must be kept in memory to construct each new one! (A minimal sketch follows.)
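
The sketch below builds a full set of A-orthogonal directions by conjugate Gram-Schmidt, starting (for illustration) from the coordinate axes as the u(i); note that every previous direction d(k) must be kept:

import numpy as np

def conjugate_directions(A):
    """Build a full set of A-orthogonal directions from the unit vectors."""
    n = A.shape[0]
    U = np.eye(n)                     # linearly independent starting vectors u(0), ..., u(n-1)
    D = []
    for i in range(n):
        d = U[:, i].copy()
        for dk in D:
            beta = -(U[:, i] @ (A @ dk)) / (dk @ (A @ dk))
            d += beta * dk            # subtract the part not A-orthogonal to dk
        D.append(d)
    return np.array(D)

A = np.array([[3.0, 2.0], [2.0, 6.0]])
D = conjugate_directions(A)
print(D[0] @ (A @ D[1]))              # ~0: the directions are A-orthogonal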


Conjugate Gradients

• Conjugate Gradients = conjugate directions in which the search directions are constructed by conjugation of the residuals
  – i.e., setting u(i) = r(i) in the Gram-Schmidt process
  – Reasons: the residuals are linearly independent and orthogonal to the previous search directions, …


Method of Conjugate Gradients

• The search directions and residuals span a Krylov subspace: D(i) = span{r(0), A r(0), A^2 r(0), …, A^(i-1) r(0)}.

• Each new residual is orthogonal to all the previous residuals and search directions.
• Each new search direction is constructed to be A-orthogonal to all the previous residuals and search directions.
• Because r(i+1) is already A-orthogonal to all previous search directions except d(i), the conjugation step becomes cheap; for example, d(2) is a linear combination of r(2) and d(1).


Reduction of Space Complexity

• Let r(i+1) = r(i) - α(i) A d(i). Combining this recurrence with the orthogonality of the residuals, the Gram-Schmidt coefficients β(i,k) vanish for k < i - 1, so only the previous search direction needs to be stored.

• The surviving coefficient simplifies to β(i) = (r(i)^T r(i)) / (r(i-1)^T r(i-1)).

• The result is the method of Conjugate Gradients (a minimal sketch follows).
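
Putting the pieces together, a minimal sketch of the resulting (unpreconditioned) CG iteration; the tolerance and test problem are illustrative:

import numpy as np

def conjugate_gradients(A, b, x0=None, tol=1e-10, max_iter=None):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                      # initial residual
    d = r.copy()                       # first search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ad = A @ d                     # the only matrix-vector product per iteration
        alpha = rs_old / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad                # residual recurrence
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d  # beta = (r_new . r_new) / (r_old . r_old)
        rs_old = rs_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradients(A, b))       # [2, -2], reached in at most n = 2 steps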


Convergence Analysis of Conjugate Gradients

• With exact arithmetic, CG is complete after n iterations.

• Accumulated roundoff error causes the residual to gradually lose accuracy, and cancellation error causes the search vectors to lose A-orthogonality.

• CG is useful for problems so large that it is not feasible to run even n iterations.

• CG converges more quickly if there are duplicated eigenvalues, and more generally when the eigenvalues are clustered together (illustrated by the small experiment below).
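
A small experiment consistent with the last point (the prescribed spectra, random orthogonal basis, and SciPy's default stopping tolerance are all illustrative; exact iteration counts will vary):

import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis

def cg_iterations(eigenvalues):
    A = Q @ np.diag(eigenvalues) @ Q.T             # SPD matrix with a prescribed spectrum
    b = np.ones(n)
    iters = 0
    def count(xk):
        nonlocal iters
        iters += 1
    spla.cg(A, b, callback=count)
    return iters

spread    = np.linspace(1.0, 100.0, n)             # eigenvalues spread over [1, 100]
clustered = np.concatenate([np.full(n - 2, 1.0), [50.0, 100.0]])  # mostly repeated

print("spread:   ", cg_iterations(spread))
print("clustered:", cg_iterations(clustered))      # far fewer iterations expected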


Convergence Analysis of Conjugate Gradients (cont.)

• See Appendix C3 of Shewchuk


Time and Space Complexity

• See Shewchuk, p. 38: the dominant cost per iteration is the matrix-vector product, so both time per iteration and storage are O(m) for a matrix with m nonzero entries.


Preconditioning

• A preconditioner M is a matrix that approximates A but is easier to invert. Instead of Ax = b, solve M^-1 A x = M^-1 b, which has the same solution but, if M is chosen well, a much smaller condition number and hence faster convergence.


Transformed Preconditioned Conjugate Gradient Method

• M^-1 A is generally not symmetric, so CG is applied instead to the transformed system (E^-1 A E^-T)(E^T x) = E^-1 b, where M = E E^T; the transformed matrix is symmetric positive definite whenever A and M are.


Untransformed Preconditioned Conjugate Gradient Method

• With a change of variables, E can be eliminated from the algorithm: the untransformed variant works directly with M^-1 r(i) and never forms E, so only a routine that applies M^-1 is needed.
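
A minimal sketch of preconditioned CG using SciPy, with a simple Jacobi (diagonal) preconditioner supplied through the M argument; the poorly scaled test matrix and the iteration-counting callback are illustrative choices:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# SPD matrix with widely varying diagonal entries (poorly scaled).
diag = np.linspace(2.0, 1e4, n)
A = sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: M^-1 approximated by 1/diag(A), applied as a LinearOperator.
M = spla.LinearOperator((n, n), matvec=lambda v: v / diag)

iters = {"plain": 0, "jacobi": 0}
def make_counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x1, _ = spla.cg(A, b, callback=make_counter("plain"))
x2, _ = spla.cg(A, b, M=M, callback=make_counter("jacobi"))
print(iters)   # the preconditioned run should need far fewer iterations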


History of Conjugate Gradient Method

• Conjugate Direction methods were probably first presented by Schmidt [14] in 1908, and were independently reinvented by Fox, Huskey, and Wilkinson [7] in 1948.

• In the early fifties, the method of Conjugate Gradients was discovered independently by Hestenes [10] and Stiefel [15]; shortly thereafter, they jointly published what is considered the seminal reference on CG [11].

• Convergence bounds for CG in terms of Chebyshev polynomials were developed by Kaniel [12]. A more thorough analysis of CG convergence is provided by van der Sluis and van der Vorst [16].

• CG was popularized as an iterative method for large, sparse matrices by Reid [13] in 1971.

• CG was generalized to nonlinear problems in 1964 by Fletcher and Reeves [6], based on work by Davidon [4] and Fletcher and Powell [5]. Convergence of nonlinear CG with inexact line searches was analyzed by Daniel [3].

• A history and extensive annotated bibliography of CG to the mid-seventies is provided by Golub and O’Leary [9]. Most research since that time has focused on nonsymmetric systems.


Templates section on Conjugate Gradient Method

• http://www.netlib.org/linalg/html_templates/node20.html


MATLAB Preconditioned Conjugate Gradients Method

• http://www.mathworks.com/help/matlab/ref/pcg.html


Generalized Minimal Residual (GMRES) Method

• Generalizes MINRES to nonsymmetric systems
• Templates book:
  – http://www.netlib.org/linalg/html_templates/node29.html
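
A minimal sketch of restarted GMRES in SciPy on a nonsymmetric system; the test matrix and restart length are illustrative:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Nonsymmetric tridiagonal test matrix.
A = sp.diags([-1.5, 3.0, -0.5], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# restart=30 bounds the number of stored basis vectors; GMRES restarts after 30 steps.
x, info = spla.gmres(A, b, restart=30)
print(info, np.linalg.norm(A @ x - b))   # info == 0 indicates convergence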


MATLAB documentation on GMRES

• http://www.mathworks.com/help/matlab/ref/gmres.html


Iterative Method Packages

• Included in PETSc
  – http://www.mcs.anl.gov/petsc/
  – Sparse linear solvers: http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
• ITSOL and pARMS
  – http://www-users.cs.umn.edu/~saad/software/
• Dongarra's survey of freely available linear algebra software
  – http://www.netlib.org/utk/people/JackDongarra/la-sw.html
• Vendor libraries
  – IBM ESSL/PESSL: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.essl.v5r1.essl100.doc%2Fam501_smsubs.htm
  – Cray-supported version of PETSc
  – Numerical Algorithms Group (NAG): http://www.nag.com/numeric/FL/decisiontrees.asp