Linear Algebra and Applications: Numerical Linear Algebra
David S. Watkins
[email protected]
Department of Mathematics
Washington State University
IMA Summer Program, 2008 – p. 1
My Pledge to You
I promise not to cover as much material as I previously claimed I would.
Resources (a biased list)
David S. Watkins, Fundamentals of Matrix Computations, Second Edition, John Wiley and Sons, 2002. (FMC)
David S. Watkins, The QR algorithm revisited, SIAM Review, 50 (2008), pp. 133–145.
David S. Watkins, The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM, 2007.
Leslie Hogben, Handbook of Linear Algebra, Chapman and Hall/CRC, 2007.
Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, SIAM, 1997.
James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997.
G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, 1996.
Common Linear Algebra Computations
linear system Ax = b
overdetermined linear system Ax = b
eigenvalue problem Av = λv
various generalized eigenvalue problems, e.g. Av = λBv
Linear Systems
Ax = b, n × n, nonsingular, real or complex
Examples: FMC §1.2, 7.1; any linear algebra text
Major tools: Gaussian elimination (LU decomposition), various iterative methods
Overdetermined Linear Systems
Ax = b, n × m, n > m, often n ≫ m
Example: fitting data by a straight line
minimize ‖b − Ax‖₂ (least squares)
Major tools: QR decomposition, singular value decomposition
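The slides contain no code; as a rough sketch of the straight-line fit (NumPy assumed, and the data below are made up for illustration):

```python
import numpy as np

# Fit y ≈ x0 + x1·t by least squares: minimize ‖b − Ax‖₂ over x.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 2.9, 5.2, 7.1, 8.8])      # roughly y = 1 + 2t

A = np.column_stack([np.ones_like(t), t])    # n × m with n = 5 > m = 2
x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

# the minimizer beats any other candidate, e.g. the exact line y = 1 + 2t
assert np.linalg.norm(b - A @ x) <= np.linalg.norm(b - A @ np.array([1.0, 2.0]))
```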
Eigenvalue Problems
standard: Av = λv, n × n, real or complex
Examples: FMC §5.1
generalized: Av = λBv
Examples: FMC §6.7
product: A₁A₂
Examples: generalized (AB⁻¹), SVD (A∗A)
quadratic: (λ²K + λG + M)v = 0
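As a sketch of the standard and generalized problems in code (NumPy/SciPy assumed; random test matrices, not from the slides):

```python
import numpy as np
from scipy.linalg import eig

# Standard (Av = λv) and generalized (Av = λBv) eigenvalue problems.
# B is shifted toward the identity so it is (almost surely) nonsingular.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)

w, V = eig(A)                                  # standard problem
assert np.allclose(A @ V, V @ np.diag(w))

w2, V2 = eig(A, B)                             # generalized problem
assert np.allclose(A @ V2, B @ V2 @ np.diag(w2))
```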
Solving Linear Systems: small problems
Ax = b, n × n, n “small”
store A conventionally
solve using Gaussian elimination
A = LU
PA = LU (partial pivoting)
forward and back substitution
Questions: cost? accuracy? (FMC Ch. 2)
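The steps above can be sketched as follows (SciPy assumed; the slides themselves give no code):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Gaussian elimination with partial pivoting: factor PA = LU once,
# then solve by forward and back substitution.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu, piv = lu_factor(A)          # PA = LU, O(n³) flops
x = lu_solve((lu, piv), b)      # Ly = Pb, then Ux = y, O(n²) flops
assert np.allclose(A @ x, b)
```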
Positive Definite Case
A = A∗, x∗Ax > 0 for all x ≠ 0
A = R∗R (Cholesky decomposition)
symmetric variant of Gaussian elimination
flop count is halved
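A minimal sketch (NumPy assumed; NumPy returns the lower factor L with A = LLᵀ, so R = Lᵀ in the slides' notation):

```python
import numpy as np

# Cholesky decomposition A = RᵀR of a real symmetric positive definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)     # positive definite by construction

L = np.linalg.cholesky(A)       # A = L Lᵀ, L lower triangular
R = L.T                         # upper triangular factor, A = RᵀR
assert np.allclose(R.T @ R, A)
```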
Solving Linear Systems: medium problems
Larger problems are usually sparser.
Use a sparse data structure.
sparse Gaussian elimination
A = LU
factors “usually” less sparse than A, but still sparse
Crucial question: Can the factors fit in main memory?
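As a sketch of sparse Gaussian elimination on a model tridiagonal system (SciPy assumed; not from the slides):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Sparse A = LU with a fill-reducing ordering, via SuperLU.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)                    # sparse LU factorization
x = lu.solve(b)
assert np.allclose(A @ x, b)

# the factors are "usually" less sparse than A, but still sparse:
fill = (lu.L.nnz + lu.U.nnz) / A.nnz
```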
Solving Linear Systems: large problems
L and U factors may be too large to store . . .
Use an iterative method.
direct vs. iterative methods
Some buzz words: descent method, conjugate gradients (CG), GMRES, . . . , preconditioners, and on and on.
FMC Chapter 7
Richard Barrett et al., Templates for the Solution of Linear Systems, . . . , SIAM, 1994. (FREE!!!)
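A minimal conjugate-gradients example (SciPy assumed; shown without a preconditioner for simplicity):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# CG on a symmetric positive definite system: the method needs only
# matrix-vector products with A; no factors are ever stored.
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)              # info == 0 signals convergence
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```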
Moving On
Orthogonal Transformations
generally useful computing tools
sticking to the real case for simplicity
standard inner product: ⟨x, y⟩ = Σ_{j=1}^n x_j y_j
Euclidean norm: ‖x‖₂ = (Σ_{j=1}^n x_j²)^{1/2}
‖x‖₂ = √⟨x, x⟩
definition of orthogonal: Qᵀ = Q⁻¹
properties of orthogonal matrices
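The properties in question can be checked numerically (NumPy assumed; a quick sketch, not the slides' material):

```python
import numpy as np

# Orthogonal matrices satisfy Qᵀ = Q⁻¹ and preserve the standard
# inner product and the Euclidean norm.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a random orthogonal Q

assert np.allclose(Q.T @ Q, np.eye(5))             # Qᵀ = Q⁻¹
x = rng.standard_normal(5)
y = rng.standard_normal(5)
assert np.isclose((Q @ x) @ (Q @ y), x @ y)        # ⟨Qx, Qy⟩ = ⟨x, y⟩
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```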
Elementary Reflectors (= Householder transformations)
one of two major classes of computationally useful orthogonal transformations
Q = I − 2uuᵀ, ‖u‖₂ = 1
geometric action
Qx = y
creating zeros
details: FMC Chapter 3
QR decomposition
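A sketch of "creating zeros" with a reflector (NumPy assumed; not the slides' code):

```python
import numpy as np

# An elementary reflector Q = I − 2uuᵀ, ‖u‖₂ = 1, chosen so that Qx has
# zeros below its first entry, i.e. Qx = ∓‖x‖₂ e₁.
def reflector(x):
    u = x.astype(float)
    u[0] += (1.0 if u[0] >= 0 else -1.0) * np.linalg.norm(x)  # avoid cancellation
    u /= np.linalg.norm(u)
    return np.eye(len(x)) - 2.0 * np.outer(u, u)

x = np.array([3.0, 4.0, 0.0])
Q = reflector(x)
assert np.allclose(Q @ Q.T, np.eye(3))     # orthogonal (and symmetric)
assert np.allclose((Q @ x)[1:], 0.0)       # zeros created below the top
assert np.isclose(abs((Q @ x)[0]), 5.0)    # ‖x‖₂ = 5 is preserved
```

Applying such reflectors column by column is the standard route to the QR decomposition.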
Uses of the QR Decomposition
Ax = b, n × n
overdetermined system
orthonormalizing vectors
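For the overdetermined case, a sketch of least squares via QR (NumPy assumed):

```python
import numpy as np

# Factor A = QR (Q with orthonormal columns, R upper triangular),
# then solve Rx = Qᵀb by back substitution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)

Q, R = np.linalg.qr(A)                     # reduced QR: Q is 20×4, R is 4×4
x = np.linalg.solve(R, Q.T @ b)

# agrees with a reference least-squares solver
x_ref, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```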
The Gram-Schmidt Process
orthonormalization of vectors
relationship to QR decomposition
reorthogonalization
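A sketch of modified Gram-Schmidt with one reorthogonalization pass (NumPy assumed; not the slides' code):

```python
import numpy as np

# Orthonormalize the columns of A, producing the reduced QR decomposition
# A = QR; the second sweep (reorthogonalization) restores orthogonality
# lost to rounding.
def mgs(A):
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for k in range(m):
        v = A[:, k].copy()
        for _ in range(2):                 # second sweep = reorthogonalization
            for j in range(k):
                s = Q[:, j] @ v
                R[j, k] += s
                v -= s * Q[:, j]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))
Q, R = mgs(A)
assert np.allclose(Q.T @ Q, np.eye(4))     # orthonormal columns
assert np.allclose(Q @ R, A)               # and A = QR
```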
The SVD
singular value decomposition: A = UΣVᵀ
product eigenvalue problem
FMC Chapter 4
numerical rank determination
solution of least-squares problems
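A sketch of the last two uses (NumPy assumed; the rank threshold below is a common convention, not from the slides):

```python
import numpy as np

# SVD A = UΣVᵀ: numerical rank determination and the minimum-norm
# least-squares solution via the truncated factors.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 4))   # rank 2, 8×4

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))                # count singular values above tol
assert rank == 2

b = rng.standard_normal(8)
x = Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])   # pseudoinverse solution
assert np.allclose(A.T @ (A @ x - b), 0)   # residual is ⟂ to range(A)
```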