8/11/2019 Approximation of Eigenvalues and Eigenvectores by the Direct methods and investigation on the Rootfinding methods and their efficiency
Approximation of Eigenvalues and Eigenvectors by the Direct Methods and Investigation on the Rootfinding Methods and their Efficiency
Milad Naderi
92130901
Eigenvalue Problem

Eigenvalue problems occur in many areas of science and engineering.
Eigenvalues are also important in analyzing numerical methods.
Standard eigenvalue problem: Ax = λx, where λ is an eigenvalue and x is an eigenvector.
λ may be complex even if A is real.
Spectral radius: ρ(A) = max{ |λ| : λ ∈ λ(A) }
Geometric interpretation

A matrix expands or shrinks any vector lying in the direction of an eigenvector by a scalar factor.
The expansion or contraction factor is given by the corresponding eigenvalue λ.
Characteristic Polynomial

The equation Ax = λx is equivalent to (A − λI)x = 0, which has a nonzero solution x if and only if its matrix is singular.
The eigenvalues of A are the roots λi of the characteristic polynomial: det(A − λI) = 0.
An n×n matrix always has n eigenvalues, but they may not be real or distinct.
Complex eigenvalues of a real matrix occur in complex conjugate pairs.
Coefficients of the Characteristic Polynomial

We can write the characteristic polynomial as:
P(λ) = (−1)^n (λ^n − p1 λ^(n−1) − p2 λ^(n−2) − … − pn)
where pk is the sum of all k-by-k principal minors of A. In particular:
p1 = a11 + a22 + … + ann = trace(A)
pn = (−1)^(n−1) det(A)
For finding the coefficients of the characteristic polynomial, the Krylov method is introduced.
Krylov method

Cayley-Hamilton theorem: every square matrix satisfies its own characteristic equation.
Hence, for a starting vector y, A^n y = p1 A^(n−1) y + p2 A^(n−2) y + … + pn y, and the coefficients are found by solving this linear system for p1, …, pn.
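The Krylov construction can be sketched in a few lines. This is a minimal illustration, not a robust implementation: the starting vector y (here the first unit vector) is an arbitrary choice, and the method assumes the Krylov vectors are linearly independent.

```python
import numpy as np

def krylov_char_poly(A, y=None):
    """Coefficients p1..pn of the characteristic equation
    x^n - p1 x^(n-1) - ... - pn = 0, via the Krylov linear system.
    Assumes the Krylov vectors y, Ay, ..., A^(n-1)y are independent."""
    n = A.shape[0]
    if y is None:
        y = np.zeros(n)
        y[0] = 1.0
    Y = [y]
    for _ in range(n):
        Y.append(A @ Y[-1])            # build y, Ay, A^2 y, ..., A^n y
    # Cayley-Hamilton: A^n y = p1 A^(n-1) y + ... + pn y
    K = np.column_stack(Y[n - 1::-1])  # columns A^(n-1)y, ..., Ay, y
    return np.linalg.solve(K, Y[n])

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # trace 4, det 3
p = krylov_char_poly(A)                 # characteristic eq.: x^2 - 4x + 3 = 0
```

With this sign convention p1 is the trace (4) and p2 is minus the determinant (−3), matching the slide's formulas.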
Characteristic Polynomial, Continued

Computing eigenvalues using the characteristic polynomial is not recommended because of:
the work in computing the coefficients of the characteristic polynomial;
the sensitivity of the coefficients of the characteristic polynomial;
the work in solving for the roots of the characteristic polynomial.
The characteristic polynomial is a powerful theoretical tool but usually not useful computationally.
Some notes

The problem of finding eigenvalues cannot be easier than finding roots of polynomials.
If the degree of the polynomial is larger than 4, the roots cannot in general be found by an analytic (closed-form) method.
So all algorithms for the eigenproblem are essentially iterative.
Multiplicity and Diagonalizability

Multiplicity is the number of times a root appears when the characteristic polynomial is written as a product of linear factors.
An eigenvalue of multiplicity 1 is simple.
A defective matrix has an eigenvalue of multiplicity k > 1 with fewer than k linearly independent corresponding eigenvectors.
A nondefective matrix A has n linearly independent eigenvectors, so it is diagonalizable:
X⁻¹ A X = D = diag(λ1, λ2, …, λn)
where X is the nonsingular matrix of eigenvectors.
Relevant Properties of Matrices

(table of definitions given as a figure)
Properties of Eigenvalue Problems

Properties of an eigenvalue problem affecting the choice of algorithm:
Are all eigenvalues needed, or only a few?
Are only eigenvalues needed, or are corresponding eigenvectors also needed?
Is the matrix real or complex?
Is the matrix relatively small and dense, or large and sparse?
Does the matrix have any special properties, such as symmetry, or is it a general matrix?
Transformation

Shift: If Ax = λx and σ is any scalar, then (A − σI)x = (λ − σ)x, so the eigenvalues of the shifted matrix are the shifted eigenvalues of the original matrix.
Inversion: If A is nonsingular and Ax = λx with x ≠ 0, then λ ≠ 0 and A⁻¹x = (1/λ)x.
Powers: If Ax = λx, then A^k x = λ^k x, so the eigenvalues of a power of a matrix are the same power of the eigenvalues of the original matrix.
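The three rules are easy to verify numerically. A small check, with an example matrix and shift chosen purely for illustration:

```python
import numpy as np

# Verify the shift / inversion / power rules on a small symmetric example.
A = np.array([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 1 and 3
lam = np.sort(np.linalg.eigvalsh(A))

sigma = 0.5
shifted = np.sort(np.linalg.eigvalsh(A - sigma * np.eye(2)))         # lam - sigma
inverse = np.sort(np.linalg.eigvalsh(np.linalg.inv(A)))              # 1 / lam
powered = np.sort(np.linalg.eigvalsh(np.linalg.matrix_power(A, 3)))  # lam ** 3
```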
Similarity Transformation

B is similar to A if there is a nonsingular matrix T such that B = T⁻¹ A T.
A and B have the same eigenvalues.
If y is an eigenvector of B, then x = Ty is an eigenvector of A:
By = λy  ⇒  T⁻¹AT y = λy  ⇒  A(Ty) = λ(Ty)
Forms Attainable by Similarity

Computation of the Jordan form has two numerical problems:
1) It is a discontinuous function of A, so any rounding error can change it completely.
2) It cannot, in general, be computed stably.
Rough classification of eigenproblem methods

Direct methods
Used on dense matrices.
Cost O(n³) operations to compute all of the eigenvalues and eigenvectors.

Iterative methods
Usually applied to sparse matrices, or to matrices for which matrix-vector multiplication is the only convenient operation to perform.
Convergence properties depend strongly on the matrix entries.
Provide approximations only to a subset of the eigenvalues and eigenvectors.
Canonical Forms and similarity transformations

Most of our algorithms transform the matrix A into a simpler (canonical) form, from which it is easy to compute its eigenvalues and eigenvectors.
The two most common canonical forms are:
Jordan Form
Schur Form
The Jordan form is useful theoretically, but it is hard to compute in a numerically stable fashion.
A matrix in Jordan or Schur form is triangular.
Jordan and Schur Canonical Forms

Jordan: Given A, there exists a nonsingular S such that S⁻¹ A S = J, where J is the Jordan canonical form and is block diagonal:
J = diag(J_{n1}(λ1), J_{n2}(λ2), …, J_{nk}(λk))
Schur: there exists a unitary matrix Q such that Q* A Q = T, where T is upper triangular and the eigenvalues of A are the diagonal entries of T.
Matrix reduction

We seek algorithms with the lowest computational complexity.
We will initially reduce the matrix A to upper Hessenberg form when A is not symmetric.
If A is symmetric, we will reduce it to tridiagonal form.
The computational cost (per subsequent QR step) is thereby reduced from O(n³) to O(n²).
Direct Methods for Symmetric matrices

Tridiagonal QR iteration
Rayleigh quotient iteration
Divide and conquer (fastest available algorithm)
Bisection and inverse iteration
Jacobi method
Direct Methods for Symmetric matrices, continued

The fastest available algorithm for the symmetric eigenproblem (finding all eigenvalues and all eigenvectors of a large dense or banded symmetric matrix) is the divide and conquer method.
The Jacobi algorithm can find tiny eigenvalues more accurately than alternative algorithms like divide and conquer, although sometimes more slowly.
All the algorithms except Rayleigh quotient iteration and Jacobi's method assume that the matrix has first been reduced to tridiagonal form.
Comparison of direct methods for the symmetric Eigenproblem

Tridiagonal QR iteration
Finds all eigenvalues, and optionally eigenvectors.
It is the fastest method to find all the eigenvalues alone: O(n²) operations on the tridiagonal matrix.
Finding the eigenvectors as well costs about 6n³ operations.
This method is fastest for small matrices (up to about n = 25).
Tridiagonal QR iteration

QR algorithm phases for the non-symmetric eigenproblem:
1. Given A, find an orthogonal Q so that QAQᵀ = H is upper Hessenberg.
2. Apply QR iteration to H to get a sequence of upper Hessenberg matrices converging to real Schur form.

QR algorithm phases for the symmetric eigenproblem:
1. Given A = Aᵀ, find an orthogonal Q so that QAQᵀ = T is tridiagonal.
2. Apply QR iteration to T to get a sequence of tridiagonal matrices converging to diagonal form.

Finding all eigenvalues and eigenvectors of T costs 6n³ + O(n²).
The total cost to find the eigenvalues of A is (4/3)n³ + O(n²).
The total cost to find all eigenvalues and eigenvectors of A is (28/3)n³ + O(n²).
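The core of phase 2 can be sketched very compactly. This is the unshifted iteration for illustration only; practical codes use shifts, deflation, and the tridiagonal structure. Each step A := RQ is a similarity transform, since RQ = QᵀAQ:

```python
import numpy as np

def qr_iteration(A, steps=200):
    """Unshifted QR iteration: repeat A = QR, then A := RQ.
    For symmetric A with distinct |eigenvalues| the off-diagonal
    part decays and the diagonal converges to the eigenvalues."""
    A = A.copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q                      # similarity transform Q^T A Q
    return np.sort(np.diag(A))

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])        # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
approx = qr_iteration(T)
exact = np.sort(np.linalg.eigvalsh(T))
```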
Rayleigh quotient iteration

Given x0 with ||x0||2 = 1, iterate using the Rayleigh quotient σi = xiᵀ A xi as a shift.
When the stopping criterion is satisfied, σi is within tolerance of an eigenvalue of A.
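The algorithm box on this slide was a figure; the following sketch follows the standard formulation (shift by the Rayleigh quotient, solve, normalize). For symmetric matrices the local convergence is cubic. The example matrix and starting vector are illustrative.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, maxit=50):
    x = x0 / np.linalg.norm(x0)
    sigma = x @ A @ x                              # Rayleigh quotient
    for _ in range(maxit):
        try:
            y = np.linalg.solve(A - sigma * np.eye(len(x)), x)
        except np.linalg.LinAlgError:              # shift hit an eigenvalue exactly
            break
        x = y / np.linalg.norm(y)                  # normalized inverse-iteration step
        sigma_new = x @ A @ x
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return sigma, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])             # eigenvalues 1 and 3
sigma, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.1]))
```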
Divide and Conquer

This method is the fastest now available for computing all eigenvalues and eigenvectors of a tridiagonal matrix whose dimension is larger than 25.
This method expresses the symmetric tridiagonal matrix T as the sum of a block-diagonal matrix diag(T1, T2) and a rank-one correction.
Divide and Conquer, Continued

Suppose we have the eigendecompositions of T1 and T2.
The eigenvalues of T are then the same as those of the similar matrix D + ρuuᵀ, where we suppose D = diag(D1, D2) is diagonal.
det(D + ρuuᵀ − λI) = det((D − λI)(I + ρ(D − λI)⁻¹uuᵀ))
det(I + ρ(D − λI)⁻¹uuᵀ) = 1 + ρ uᵀ(D − λI)⁻¹u = 1 + ρ Σᵢ₌₁ⁿ uᵢ²/(dᵢ − λ) ≡ f(λ)
The eigenvalues of T are the roots of the so-called secular equation f(λ) = 0.
Divide and Conquer, Continued

We can find a version of Newton's method that converges fast to each root of the secular equation.
If α is an eigenvalue of D + ρuuᵀ, then (D − αI)⁻¹u is its eigenvector. Since D − αI is diagonal, this costs O(n) flops to compute. Finding all n eigenvectors therefore costs O(n²).
Divide and Conquer, Algorithm

(The recursive algorithm and its complexity analysis were given as figures.)
In practice, the constant c in the operation count is usually much less than its worst-case value, due to the deflation phenomenon.
Divide and Conquer, Deflation

So far we have assumed that the di are distinct and the ui are nonzero.
Otherwise, the secular equation has fewer than n roots; the remaining eigenvalues can be read off directly and deflated, which only makes the algorithm cheaper.
Bisection and inverse Iteration

To find only k eigenvalues at cost O(nk), bisection is an applicable method.
Sylvester's Inertia Theorem: Let A be symmetric and X be nonsingular. Then A and XᵀAX have the same inertia.
Inertia(A) = (ν, ζ, π), where ν, ζ, π are the numbers of negative, zero, and positive eigenvalues of A, respectively.
We can use Gaussian elimination to factorize A − zI = LDLᵀ, where L is nonsingular and D is diagonal.
Inertia(A − zI) = Inertia(D), so the number of negative entries of D counts the eigenvalues of A less than z.
The number of eigenvalues in the interval [z1, z2) equals (number of eigenvalues of A less than z2) minus (number of eigenvalues of A less than z1).
Bisection Algorithm

(algorithm given as a figure)
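The algorithm on this slide was a figure; a minimal sketch for a symmetric tridiagonal matrix (diagonal a, off-diagonal b) follows. negcount counts eigenvalues below z via the inertia of the LDLᵀ factorization of A − zI, computed by the standard scalar recurrence; bisection then brackets the k-th eigenvalue. The example matrix is illustrative.

```python
import numpy as np

def negcount(a, b, z):
    """Number of eigenvalues of tridiag(a, b) less than z."""
    count, d = 0, 1.0
    for i in range(len(a)):
        bb = b[i - 1] ** 2 if i > 0 else 0.0
        d = (a[i] - z) - bb / d            # LDL^T pivot recurrence
        if d == 0.0:
            d = -1e-300                    # simple safeguard off exact zero
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(a, b, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (1-based); [lo, hi] must bracket it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if negcount(a, b, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = np.array([2.0, 3.0, 4.0])              # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
b = np.array([1.0, 1.0])
lam2 = kth_eigenvalue(a, b, 2, -10.0, 10.0)
```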
Bisection and inverse Iteration

Factorization of A − zI is quite simple for a tridiagonal matrix.
To compute eigenvectors once we have computed eigenvalues, we can use inverse iteration.
Since we can use the accurate eigenvalues as shifts, convergence usually takes one or two iterations.
Jacobi method

One of the oldest methods for computing eigenvalues is the Jacobi method, which uses similarity transformations based on plane rotations.
A sequence of plane rotations is chosen to annihilate symmetric pairs of matrix entries, eventually converging to diagonal form.
The choice of plane rotation is slightly more complicated than in the Givens method for QR factorization.
To annihilate a given off-diagonal pair, choose c and s so that the rotated 2×2 submatrix is diagonal.
Jacobi method, continued

Starting with the symmetric matrix A0 = A, each iteration has the form
A_{k+1} = Jkᵀ Ak Jk
where Jk is a plane rotation chosen to annihilate a symmetric pair of entries in Ak.
Plane rotations are repeatedly applied from both sides in sweeps through the matrix until the off-diagonal part is reduced to within tolerance of zero.
The resulting diagonal matrix is orthogonally similar to the original matrix, so the diagonal entries are the eigenvalues, and the eigenvectors are given by the product of the plane rotations.
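A sketch of one cyclic variant of the sweeps just described: visit every off-diagonal pair (p, q), pick the rotation angle that zeroes A[p, q] (tan 2θ = 2a_pq/(a_qq − a_pp)), and apply the rotation from both sides. Forming the full rotation matrix, as here, is for clarity rather than efficiency, and the eigenvalue ordering is restored by a final sort.

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=20, tol=1e-12):
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(sweeps):
        off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
        if off < tol:                       # off-diagonal part small enough
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-300:
                    continue
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)               # plane rotation in the (p, q) plane
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J             # annihilates A[p, q] (and A[q, p])
    return np.sort(np.diag(A))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam = jacobi_eigenvalues(A)
```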
Jacobi method, continued

The Jacobi method is reliable, simple to program, and capable of high accuracy, but it converges rather slowly and is difficult to generalize beyond symmetric matrices.
Except for small problems, more modern methods usually require 5 to 10 times less work than Jacobi.
One source of inefficiency is that previously annihilated entries can subsequently become nonzero again, thereby requiring repeated annihilation.
Newer methods such as QR iteration preserve the zero entries introduced into the matrix.
Rootfinding methods

Our goal is the numerical approximation of the zeros of a real-valued function of one variable. These methods are usually iterative.
The aim is to generate a sequence of values x^(k) that converges to a root α of f.
Geometric approach to Rootfinding

Bisection method
Chord method
Secant method
False Position (Regula Falsi) method
Newton method

(listed in order of increasing computational complexity)
Bisection Method

Theorem of zeros for continuous functions: Given a continuous function f on [a, b] such that f(a)f(b) < 0, f has at least one zero α in (a, b).
The bisection method repeatedly halves the interval, keeping the half on which f changes sign.
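The halving loop can be sketched in a few lines of pure Python; the test function and interval are illustrative:

```python
import math

def bisection(f, a, b, tol=1e-10):
    """Root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    while (b - a) > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:          # sign change in the left half
            b, fb = m, fm
        else:                    # sign change in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisection(math.cos, 0.0, 2.0)   # cos has a zero at pi/2 in (0, 2)
```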
Bisection Method

Absolute error at step k: e^(k) = |x^(k) − α|
e^(k) ≤ (b − a)/2^k for all k ≥ 0, hence lim_{k→∞} e^(k) = 0.
If we want |e^(m)| < ε, then m ≥ log₂((b − a)/ε) = log((b − a)/ε)/0.6931.
In particular, to gain one significant decimal figure in the accuracy of the approximation of the root, one needs log₂10 ≈ 3.32 bisections.
Bisection Method

(figures: the first two steps of the bisection method, and the convergence history — absolute error vs. iteration)
Introduction to Chord, Secant, Regula Falsi and Newton's methods

Expanding f in a Taylor series around x^(k):
f(α) = 0 = f(x^(k)) + (α − x^(k)) f′(ξ^(k))
Linearized version (basic equation for these methods):
Given x^(k), determine x^(k+1) by solving f(x^(k)) + (x^(k+1) − x^(k)) q_k = 0,
i.e. x^(k+1) = x^(k) − q_k⁻¹ f(x^(k)), where q_k is a suitable approximation of f′(x^(k)).
We consider 4 particular choices of q_k.
Introduction to Chord, Secant, Regula Falsi and Newton's methods

(figures: the first step of the chord method, and the first three steps of the secant method)
Chord method

q_k = (f(b) − f(a))/(b − a) for all k ≥ 0
Given an initial value x^(0):
x^(k+1) = x^(k) − (b − a)/(f(b) − f(a)) · f(x^(k)), k ≥ 0
Under suitable conditions, the sequence x^(k) converges to the root α.
Secant method

q_k = (f(x^(k)) − f(x^(k−1)))/(x^(k) − x^(k−1)), k ≥ 0
Given two initial values x^(−1) and x^(0):
x^(k+1) = x^(k) − (x^(k) − x^(k−1))/(f(x^(k)) − f(x^(k−1))) · f(x^(k)), k ≥ 0
Compared with the chord method, we need an extra point and the corresponding function value. The benefit due to the increase in the computational cost is the higher speed of convergence of the secant method: the order of convergence can be up to (1 + √5)/2 ≈ 1.63.
Regula Falsi method

x^(k+1) = x^(k) − (x^(k) − x^(k′))/(f(x^(k)) − f(x^(k′))) · f(x^(k)), k ≥ 0
where k′ is the maximum index less than k such that f(x^(k)) · f(x^(k′)) < 0.
The iteration terminates at the m-th step such that |f(x^(m))| < ε.
The regula falsi method, though of the same complexity as the secant method, has linear convergence order.
Unlike the secant method, the iterates generated by this algorithm are all contained within the starting interval [x^(−1), x^(0)].
Regula Falsi method

(figure: regula falsi iterates remaining inside the starting interval)
Newton's method

q_k = f′(x^(k)), provided f′(x^(k)) ≠ 0, k ≥ 0
x^(k+1) = x^(k) − f(x^(k))/f′(x^(k)), k ≥ 0
Newton's method requires two functional evaluations per step, f(x^(k)) and f′(x^(k)).
The increased computational cost is compensated for by a higher order of convergence.
It can be proven that it is more convenient to employ the secant method whenever the number of floating-point operations needed to evaluate f′ is more than about twice the number needed to evaluate f.
Newton's method

(figures: the first two steps of Newton's method, and the convergence history of
1) the chord method, 2) the bisection method, 3) the secant method, 4) Newton's method)
Fixed point iteration for Nonlinear Equations

This method is based on the fact that, for a given f: [a, b] → R, it is always possible to transform the problem f(x) = 0 into an equivalent problem x = φ(x), such that φ(α) = α whenever f(α) = 0.
So approximating the zeros of f has become the problem of finding the fixed points of φ:
x^(k+1) = φ(x^(k)), k ≥ 0
(fixed point iteration, Picard iteration, functional iteration)
Fixed point iteration for Nonlinear Equations

The choice of φ is not unique. For instance, any function of the form φ(x) = x + F(f(x)), where F is a continuous function such that F(0) = 0, is admissible.
Sufficient conditions for the fixed point method to converge to the root α:
1) φ is continuous and maps [a, b] into itself.
Fixed point iteration for Nonlinear Equations

2) There exists K < 1 such that |φ′(x)| ≤ K for all x in [a, b].
|φ′(α)| is called the asymptotic convergence factor.
The asymptotic convergence rate: R = log(1/|φ′(α)|)
Zeros of Algebraic Equations

An n-degree polynomial: p_n(x) = Σ_{k=0}^{n} a_k x^k
Factored over its distinct roots: p_n(x) = a_n (x − α_1)^{m_1} ⋯ (x − α_k)^{m_k}, with m_1 + … + m_k = n.
Two properties can be used to localize the zeros of a polynomial.
Horner Method and deflation

Horner's method is based on the observation that any polynomial can be written as:
p_n(x) = a_0 + x(a_1 + x(a_2 + … + x(a_{n−1} + a_n x) …))
This method efficiently evaluates the polynomial at a point z through the synthetic division algorithm:
b_n = a_n
b_k = a_k + b_{k+1} z, k = n−1, n−2, …, 0
All the coefficients b_k depend on z, and b_0 = p_n(z). The polynomial
q_{n−1}(x; z) = b_1 + b_2 x + … + b_n x^{n−1} = Σ_{k=1}^{n} b_k x^{k−1}
has degree n−1 in the variable x and depends on the parameter z through the b_k. It is called the associated polynomial of p_n.
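The synthetic division recurrence in pure Python, returning both p_n(z) = b_0 and the associated polynomial (the example polynomial is illustrative):

```python
def horner(a, z):
    """a = [a_0, a_1, ..., a_n] in increasing powers of x.
    Returns (p_n(z), coefficients of q_{n-1}(x; z))."""
    n = len(a) - 1
    b = [0.0] * (n + 1)
    b[n] = a[n]
    for k in range(n - 1, -1, -1):
        b[k] = a[k] + b[k + 1] * z    # synthetic division step
    return b[0], b[1:]                # b_0 = p_n(z); b_1..b_n define q_{n-1}

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
value, q = horner([-6.0, 11.0, -6.0, 1.0], 1.0)
```

Here value = p(1) = 0 confirms that 1 is a root, and q represents x² − 5x + 6, the deflated factor (x − 2)(x − 3).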
Horner Method and deflation

Since p_n(x) = b_0 + (x − z) q_{n−1}(x; z), we get a deflation procedure for finding the roots of p_n. For m = n, n−1, …, 1:
1. find a root r of p_m using a suitable approximation method;
2. evaluate q_{m−1}(x; r);
3. let p_{m−1} = q_{m−1}.
We can use the following methods for step 1:
The Newton-Horner Method
The Muller Method
The Newton-Horner Method

The deflation is helped by the fact that
p_n′(x) = q_{n−1}(x; z) + (x − z) q_{n−1}′(x; z), so p_n′(z) = q_{n−1}(z; z).
Newton's method for the j-th root therefore reads
r_j^(k+1) = r_j^(k) − p_{n−j}(r_j^(k)) / q_{n−j−1}(r_j^(k); r_j^(k)), k ≥ 0
After r_j has been approximated, the polynomial is deflated via p_{n−j−1}(x) = p_{n−j}(x)/(x − r_j), and the process is carried out until all the roots of p_n have been computed.
The Muller Method

Unlike Newton's or the secant method, Muller's method is able to compute complex zeros of a given function f, even starting from a real initial datum; moreover, its order of convergence is almost quadratic.
The Muller Method

Given x^(0), x^(1), x^(2), the new point x^(3) is determined by setting p_2(x^(3)) = 0, where p_2 is the second-degree interpolating polynomial
p_2(x) = f(x^(2)) + (x − x^(2)) f[x^(2), x^(1)] + (x − x^(2))(x − x^(1)) f[x^(2), x^(1), x^(0)]
with the divided differences
f[ξ, η] = (f(η) − f(ξ))/(η − ξ),  f[ξ, η, τ] = (f[η, τ] − f[ξ, η])/(τ − ξ)
Equivalently,
p_2(x) = f(x^(2)) + (x − x^(2)) w + (x − x^(2))² f[x^(2), x^(1), x^(0)]
where
w = f[x^(2), x^(1)] + (x^(2) − x^(1)) f[x^(2), x^(1), x^(0)] = f[x^(2), x^(1)] + f[x^(2), x^(0)] − f[x^(0), x^(1)]
This method extends the secant method, substituting the linear polynomial with a second-degree polynomial.
The Muller Method

x^(k+1) = x^(k) − 2 f(x^(k)) / (w ± (w² − 4 f(x^(k)) f[x^(k), x^(k−1), x^(k−2)])^0.5), k ≥ 2
(the sign is chosen so that the denominator is largest in modulus)
For a simple root α, with e^(k) = x^(k) − α:
lim_{k→∞} |e^(k+1)| / |e^(k)|^p = |f‴(α) / (6 f′(α))|^((p−1)/2), p ≈ 1.84
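The iteration above can be sketched with cmath so that the square root, and hence the iterates, may become complex even from real starting data (the test function and starting points are illustrative):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxit=100):
    for _ in range(maxit):
        f01 = (f(x1) - f(x0)) / (x1 - x0)        # f[x0, x1]
        f12 = (f(x2) - f(x1)) / (x2 - x1)        # f[x1, x2]
        f012 = (f12 - f01) / (x2 - x0)           # f[x0, x1, x2]
        w = f12 + (x2 - x1) * f012
        disc = cmath.sqrt(w * w - 4.0 * f(x2) * f012)
        # pick the sign that makes the denominator largest in modulus
        den = w + disc if abs(w + disc) > abs(w - disc) else w - disc
        x3 = x2 - 2.0 * f(x2) / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# x^2 + 1 has only the complex roots +/- i; real starting data still work.
root = muller(lambda x: x * x + 1.0, -1.0, 0.5, 1.0)
```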
Bibliography

1) Demmel, J., Applied Numerical Linear Algebra, SIAM, 1997.
2) Quarteroni, A., Sacco, R., and Saleri, F., Numerical Mathematics, Springer, 2000.
3) Linear Algebra: Numerical Methods, 2000.
4) Heath, M. T., Scientific Computing: An Introductory Survey, Chapter 4: Eigenvalue Problems, University of Illinois at Urbana-Champaign, 2002.
5) Kubíček, M., Janovská, D., and Dubcová, M., Numerical Methods and Algorithms, 2005.
6) Wilkinson, J. H., The Algebraic Eigenvalue Problem, Oxford, 1965.