Iterative Methods for Sparse Linear Systems, Second Edition (saad/IterMethBook_2ndEd.pdf)

  • Iterative Methods for Sparse Linear Systems

    Second Edition

    Yousef Saad

    Copyright ©2003 by the Society for Industrial and Applied Mathematics

  • Contents

    Preface
      Preface to second edition
      Preface to first edition

    1 Background in Linear Algebra
      1.1 Matrices
      1.2 Square Matrices and Eigenvalues
      1.3 Types of Matrices
      1.4 Vector Inner Products and Norms
      1.5 Matrix Norms
      1.6 Subspaces, Range, and Kernel
      1.7 Orthogonal Vectors and Subspaces
      1.8 Canonical Forms of Matrices
        1.8.1 Reduction to the Diagonal Form
        1.8.2 The Jordan Canonical Form
        1.8.3 The Schur Canonical Form
        1.8.4 Application to Powers of Matrices
      1.9 Normal and Hermitian Matrices
        1.9.1 Normal Matrices
        1.9.2 Hermitian Matrices
      1.10 Nonnegative Matrices, M-Matrices
      1.11 Positive-Definite Matrices
      1.12 Projection Operators
        1.12.1 Range and Null Space of a Projector
        1.12.2 Matrix Representations
        1.12.3 Orthogonal and Oblique Projectors
        1.12.4 Properties of Orthogonal Projectors
      1.13 Basic Concepts in Linear Systems
        1.13.1 Existence of a Solution
        1.13.2 Perturbation Analysis

    2 Discretization of PDEs
      2.1 Partial Differential Equations
        2.1.1 Elliptic Operators
        2.1.2 The Convection Diffusion Equation
      2.2 Finite Difference Methods
        2.2.1 Basic Approximations
        2.2.2 Difference Schemes for the Laplacean Operator
        2.2.3 Finite Differences for 1-D Problems
        2.2.4 Upwind Schemes
        2.2.5 Finite Differences for 2-D Problems
        2.2.6 Fast Poisson Solvers
      2.3 The Finite Element Method
      2.4 Mesh Generation and Refinement
      2.5 Finite Volume Method

    3 Sparse Matrices
      3.1 Introduction
      3.2 Graph Representations
        3.2.1 Graphs and Adjacency Graphs
        3.2.2 Graphs of PDE Matrices
      3.3 Permutations and Reorderings
        3.3.1 Basic Concepts
        3.3.2 Relations with the Adjacency Graph
        3.3.3 Common Reorderings
        3.3.4 Irreducibility
      3.4 Storage Schemes
      3.5 Basic Sparse Matrix Operations
      3.6 Sparse Direct Solution Methods
        3.6.1 Minimum degree ordering
        3.6.2 Nested Dissection ordering
      3.7 Test Problems

    4 Basic Iterative Methods
      4.1 Jacobi, Gauss-Seidel, and SOR
        4.1.1 Block Relaxation Schemes
        4.1.2 Iteration Matrices and Preconditioning
      4.2 Convergence
        4.2.1 General Convergence Result
        4.2.2 Regular Splittings
        4.2.3 Diagonally Dominant Matrices
        4.2.4 Symmetric Positive Definite Matrices
        4.2.5 Property A and Consistent Orderings
      4.3 Alternating Direction Methods

    5 Projection Methods
      5.1 Basic Definitions and Algorithms
        5.1.1 General Projection Methods
        5.1.2 Matrix Representation
      5.2 General Theory
        5.2.1 Two Optimality Results
        5.2.2 Interpretation in Terms of Projectors
        5.2.3 General Error Bound
      5.3 One-Dimensional Projection Processes
        5.3.1 Steepest Descent
        5.3.2 Minimal Residual (MR) Iteration
        5.3.3 Residual Norm Steepest Descent
      5.4 Additive and Multiplicative Processes

    6 Krylov Subspace Methods Part I
      6.1 Introduction
      6.2 Krylov Subspaces
      6.3 Arnoldi’s Method
        6.3.1 The Basic Algorithm
        6.3.2 Practical Implementations
      6.4 Arnoldi’s Method for Linear Systems (FOM)
        6.4.1 Variation 1: Restarted FOM
        6.4.2 Variation 2: IOM and DIOM
      6.5 GMRES
        6.5.1 The Basic GMRES Algorithm
        6.5.2 The Householder Version
        6.5.3 Practical Implementation Issues
        6.5.4 Breakdown of GMRES
        6.5.5 Variation 1: Restarting
        6.5.6 Variation 2: Truncated GMRES Versions
        6.5.7 Relations between FOM and GMRES
        6.5.8 Residual smoothing
        6.5.9 GMRES for complex systems
      6.6 The Symmetric Lanczos Algorithm
        6.6.1 The Algorithm
        6.6.2 Relation with Orthogonal Polynomials
      6.7 The Conjugate Gradient Algorithm
        6.7.1 Derivation and Theory
        6.7.2 Alternative Formulations
        6.7.3 Eigenvalue Estimates from the CG Coefficients
      6.8 The Conjugate Residual Method
      6.9 GCR, ORTHOMIN, and ORTHODIR
      6.10 The Faber-Manteuffel Theorem
      6.11 Convergence Analysis
        6.11.1 Real Chebyshev Polynomials
        6.11.2 Complex Chebyshev Polynomials
        6.11.3 Convergence of the CG Algorithm
        6.11.4 Convergence of GMRES
      6.12 Block Krylov Methods

    7 Krylov Subspace Methods Part II
      7.1 Lanczos Biorthogonalization
        7.1.1 The Algorithm
        7.1.2 Practical Implementations
      7.2 The Lanczos Algorithm for Linear Systems
      7.3 The BCG and QMR Algorithms
        7.3.1 The Biconjugate Gradient Algorithm
        7.3.2 Quasi-Minimal Residual Algorithm
      7.4 Transpose-Free Variants
        7.4.1 Conjugate Gradient Squared
        7.4.2 BICGSTAB
        7.4.3 Transpose-Free QMR (TFQMR)

    8 Methods Related to the Normal Equations
      8.1 The Normal Equations
      8.2 Row Projection Methods
        8.2.1 Gauss-Seidel on the Normal Equations
        8.2.2 Cimmino’s Method
      8.3 Conjugate Gradient and Normal Equations
        8.3.1 CGNR
        8.3.2 CGNE
      8.4 Saddle-Point Problems

    9 Preconditioned Iterations
      9.1 Introduction
      9.2 Preconditioned Conjugate Gradient
        9.2.1 Preserving Symmetry
        9.2.2 Efficient Implementations
      9.3 Preconditioned GMRES
        9.3.1 Left-Preconditioned GMRES