# Direct and Iterative Methods for Sparse Linear Systems


Shirley Moore (svmoore@utep.edu), CPS5401, Fall 2013, svmoore.pbworks.com, November 26, 2013

## Learning Objectives

- Describe the advantages and disadvantages of direct and iterative methods for solving sparse linear systems.
- Describe the general methodology underlying the method of Conjugate Gradients (CG).
- Apply an appropriate method from a solver library to solve a particular sparse linear system, including both symmetric positive definite and nonsymmetric matrices.
- Find and make use of documentation on sparse solver libraries.

## Direct vs. Iterative Methods

In a direct method, the matrix of the initial linear system is transformed or factorized into a simpler form that can be solved easily. The exact solution is obtained in a finite number of arithmetic operations, up to numerical rounding error. Iterative methods compute a sequence of approximate solutions that converges to the exact solution in the limit; in practice, the iteration runs until a desired accuracy is obtained.

## Direct vs. Iterative Methods (cont.)

Direct methods have traditionally been preferred to iterative methods for solving linear systems, mainly because of their simplicity and robustness. However, the emergence of conjugate gradient methods and Krylov subspace iterations has provided an efficient alternative to direct solvers. Nowadays, iterative methods are almost mandatory in complex applications, mainly because the memory and computational requirements of direct methods become prohibitive. Iterative methods usually rely on a matrix-vector multiplication procedure that is cheap to compute on modern computer architectures. When the matrix A is very large and sparse, its LU factorization typically contains many more nonzero coefficients than A itself (fill-in), which can make direct methods infeasible. Nonetheless, some applications give rise to very ill-conditioned matrices that may require a direct method to solve the problem at hand.
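For concreteness, here is a minimal sketch of the iterative route using SciPy's conjugate gradient solver on a sparse model problem. SciPy and the 1-D Poisson matrix are assumptions for illustration; the slides do not prescribe a library.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite model problem: the 1-D Poisson
# (tridiagonal) matrix. Only the nonzero entries are stored.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Each CG iteration needs little more than a sparse product A @ v,
# so memory use stays proportional to the number of nonzeros.
x, info = cg(A, b)
print(info)                       # 0 means CG converged
print(np.linalg.norm(b - A @ x))  # small residual norm
```

Note that the full matrix is never factorized: only matrix-vector products are formed, which is exactly the property the text highlights.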

## Direct Solvers for Sparse Linear Systems

Direct solvers for sparse matrices involve much more complicated algorithms than those for dense matrices. The main complication is the need to handle the fill-in in the factors L and U efficiently. A typical sparse solver consists of four distinct steps, as opposed to two in the dense case:

1. An ordering step that reorders the rows and columns so that the factors suffer little fill, or so that the matrix has special structure such as block triangular form.
2. An analysis step, or symbolic factorization, that determines the nonzero structures of the factors and creates suitable data structures for them.
3. A numerical factorization step that computes the L and U factors.
4. A solve step that performs forward and back substitution using the factors.
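As a sketch, SciPy's `splu` (a wrapper around the sequential SuperLU library) bundles the ordering, symbolic analysis, and numerical factorization steps into one call and exposes the resulting factors. The small matrix below is an assumed example:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small sparse system in CSC format (required by splu).
A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 1.0],
                            [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

# Ordering, symbolic analysis, and numerical factorization happen here.
lu = splu(A)
print(lu.perm_c)           # column ordering chosen to limit fill-in
print(lu.L.nnz, lu.U.nnz)  # nonzero counts of the computed factors

# The solve step: forward and back substitution with L and U.
x = lu.solve(b)
print(np.allclose(A @ x, b))  # True
```

Once factored, `lu.solve` can be reused cheaply for many right-hand sides, which is a classic advantage of direct methods.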

## Direct Solver Packages

SuperLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/

- SuperLU for sequential machines
- SuperLU_MT for shared-memory parallel machines
- SuperLU_DIST for distributed-memory parallel machines

See the survey of direct solvers by Xiaoye Li: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/SparseDirectSurvey.pdf

See also research by Tim Davis: http://www.cise.ufl.edu/~davis/welcome.html

## The Quadratic Form

If A is symmetric (A = A^T) and positive-definite, the quadratic form f(x) = (1/2) x^T A x − b^T x + c is minimized by the solution to Ax = b.

## Method of Steepest Descent

Choose the direction in which f decreases most quickly: the direction opposite the gradient, −f'(x(i)) = b − A x(i).

## Error and Residual

- The error e(i) = x(i) − x is a vector that indicates how far we are from the solution.
- The residual r(i) = b − A x(i) indicates how far we are from the correct value of b.
- Think of the residual as the error transformed by A into the same space as b: r(i) = −A e(i).
- Think of the residual as the direction of steepest descent: r(i) = −f'(x(i)).

## Line Search
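A small numeric check of these two interpretations of the residual, using an assumed 2×2 SPD example (the values are illustrative, not from the slides):

```python
import numpy as np

# Assumed SPD example system.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

x_exact = np.linalg.solve(A, b)   # the true solution x
x_i = np.array([-2.0, -2.0])      # some current iterate x(i)

e = x_i - x_exact                 # error  e(i) = x(i) - x
r = b - A @ x_i                   # residual r(i) = b - A x(i)

# The residual is the error transformed by A (with a sign flip),
# and also the negative gradient of f(x) = 1/2 x^T A x - b^T x.
print(np.allclose(r, -A @ e))         # True
grad_f = A @ x_i - b                  # f'(x(i))
print(np.allclose(r, -grad_f))        # True
```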

The line search chooses the step size α so as to minimize f along the line x(1) = x(0) + α r(0).

α minimizes f when the directional derivative df(x(1))/dα = f'(x(1))^T r(0) is equal to zero.

Setting this to zero, α should be chosen so that r(0) and f'(x(1)) are orthogonal, which gives α = (r(0)^T r(0)) / (r(0)^T A r(0)).
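A quick numeric check that the optimal step makes r(0) and the new gradient orthogonal, with assumed example values:

```python
import numpy as np

# Assumed SPD example system and starting point.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x0 = np.array([-2.0, -2.0])

r0 = b - A @ x0                      # steepest-descent direction
alpha = (r0 @ r0) / (r0 @ (A @ r0))  # optimal step length
x1 = x0 + alpha * r0

grad_x1 = A @ x1 - b                 # f'(x(1))
print(np.isclose(r0 @ grad_x1, 0.0))  # True: r(0) is orthogonal to f'(x(1))
```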

Bottom figure: f is minimized where the projection of the gradient onto the line is zero.

## Summary

One step of steepest descent: compute the residual, take the optimal step along it, repeat.

    r(i) = b − A x(i)
    α(i) = (r(i)^T r(i)) / (r(i)^T A r(i))
    x(i+1) = x(i) + α(i) r(i)

[Figure: steepest-descent iterates starting at [-2, -2], converging to the solution.]
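The summary above can be sketched as a minimal steepest-descent loop. The matrix and right-hand side below are assumed for illustration; the starting point [-2, -2] follows the slides' figure:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Solve Ax = b for SPD A by steepest descent (illustrative sketch)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                    # residual = steepest-descent direction
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))  # optimal step from the line search
        x = x + alpha * r
    return x

# Assumed example system; its exact solution is [2, -2].
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x = steepest_descent(A, b, np.array([-2.0, -2.0]))
print(x)  # close to [2., -2.]
```

For this well-conditioned 2×2 system the iteration converges quickly; conjugate gradients improves on this by avoiding repeated search directions.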
