2.3 Newton’s Method (Newton-Raphson Method)
Chapter 2 Solutions of Equations in One Variable – Newton’s Method
Idea: Linearize a nonlinear function using Taylor’s expansion.
Let $p_0 \in [a, b]$ be an approximation to $p$ such that $f'(p_0) \neq 0$. Consider the first Taylor polynomial of $f(x)$ expanded about $p_0$:

$$f(x) = f(p_0) + f'(p_0)(x - p_0) + \frac{f''(\xi(x))}{2!}(x - p_0)^2,$$

where $\xi(x)$ lies between $p_0$ and $x$.
Assume that $|p - p_0|$ is small; then $(p - p_0)^2$ is much smaller. Then:

$$0 = f(p) \approx f(p_0) + f'(p_0)(p - p_0) \quad\Longrightarrow\quad p \approx p_0 - \frac{f(p_0)}{f'(p_0)}.$$
[Figure: the tangent line to $y = f(x)$ at $p_0$ crosses the x-axis at the next approximation, near the root $p$.]
Iterating gives Newton's method:

$$p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \quad \text{for } n \geq 1.$$
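The iteration can be sketched in a few lines of Python; the function $f(x) = x^2 - 2$, its derivative, and the starting point below are illustrative choices, not part of the notes:

```python
def newton(f, df, p0, tol=1e-10, n_max=50):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(n_max):
        dfp = df(p)
        if dfp == 0:
            raise ZeroDivisionError("f'(p) = 0: Newton's method breaks down")
        p_new = p - f(p) / dfp
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("no convergence within n_max iterations")

# Illustrative run: f(x) = x^2 - 2, root sqrt(2), starting from p0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The stopping criterion $|p_n - p_{n-1}| < \text{tol}$ is the usual practical substitute for the unknown true error.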
Theorem: Let $f \in C^2[a, b]$. If $p \in [a, b]$ is such that $f(p) = 0$ and $f'(p) \neq 0$, then there exists a $\delta > 0$ such that Newton's method generates a sequence $\{p_n\}$ ($n = 1, 2, \dots$) converging to $p$ for any initial approximation $p_0 \in [p - \delta, p + \delta]$.
Proof: Newton's method is just $p_n = g(p_{n-1})$ for $n \geq 1$ with

$$g(x) = x - \frac{f(x)}{f'(x)}.$$
a. Is $g(x)$ continuous in a neighborhood of $p$?
$f'(p) \neq 0$ and $f'$ is continuous $\Rightarrow$ $f'(x) \neq 0$ in a neighborhood of $p$.
b. Is $|g'(x)|$ bounded by some $0 < k < 1$ in a neighborhood of $p$?

$$g'(x) = \frac{f(x)f''(x)}{[f'(x)]^2} \quad\Rightarrow\quad g'(p) = 0.$$

$f''(x)$ is continuous $\Rightarrow$ $g'(x)$ is small and continuous in a neighborhood of $p$.
Proof (continued):
c. Does $g(x)$ map $[p - \delta, p + \delta]$ into itself?
For $x \in [p - \delta, p + \delta]$:

$$|g(x) - p| = |g(x) - g(p)| = |g'(\xi)|\,|x - p| \leq k\,|x - p| < |x - p| \leq \delta \quad\Rightarrow\quad |g(x) - p| < \delta.$$
Note: The convergence of Newton's method depends on the selection of the initial approximation.
Secant Method. What is wrong with Newton's method: it requires $f'(x)$ at each approximation, and $f'(x)$ is frequently far more difficult to calculate (and needs more arithmetic operations) than $f(x)$.
[Figure: the tangent line at $p_0$ used by Newton's method vs. the secant line through $p_0$ and $p_1$.]
Replace the derivative by the difference quotient

$$f'(p_{n-1}) \approx \frac{f(p_{n-1}) - f(p_{n-2})}{p_{n-1} - p_{n-2}},$$

which turns Newton's step into the secant iteration

$$p_n = p_{n-1} - \frac{f(p_{n-1})(p_{n-1} - p_{n-2})}{f(p_{n-1}) - f(p_{n-2})}, \quad n \geq 2.$$

Have to start with 2 initial approximations. Slower than Newton's method and still requires a good initial approximation.
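A sketch of the secant iteration in Python; the test function $f(x) = x^2 - 2$ and the two starting values are illustrative choices:

```python
def secant(f, p0, p1, tol=1e-10, n_max=50):
    """Secant method: two starting values, no derivative needed."""
    for _ in range(n_max):
        f0, f1 = f(p0), f(p1)
        if f1 == f0:
            raise ZeroDivisionError("zero denominator in secant step")
        # p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    raise RuntimeError("no convergence within n_max iterations")

# Illustrative run: f(x) = x^2 - 2 with p0 = 1, p1 = 2
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Only one new function evaluation is needed per step, since $f(p_{n-1})$ can be reused from the previous step.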
HW: p.75 #13 (b)(c),
p.76 #15
Lab 02. Root of a Polynomial
Time Limit: 1 second; Points: 3
A polynomial of degree $n$ has the common form

$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0.$$

Your task is to write a program to find a root of a given polynomial in a given interval.
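One possible approach (a sketch, not the official lab solution): evaluate $p(x)$ and $p'(x)$ together by Horner's rule and run Newton's method from the midpoint of the given interval. The coefficient list and the test polynomial below are illustrative assumptions:

```python
def horner(coeffs, x):
    """Evaluate p(x) and p'(x) together by Horner's rule.
    coeffs = [a_n, a_{n-1}, ..., a_1, a_0]."""
    p, dp = coeffs[0], 0.0
    for a in coeffs[1:]:
        dp = dp * x + p   # derivative accumulates one step behind
        p = p * x + a
    return p, dp

def poly_root(coeffs, a, b, tol=1e-10, n_max=100):
    """Newton's method started from the midpoint of [a, b]."""
    x = (a + b) / 2
    for _ in range(n_max):
        px, dpx = horner(coeffs, x)
        if dpx == 0:
            break
        x_next = x - px / dpx
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative run: p(x) = x^2 - 2 on [1, 2]
r = poly_root([1, 0, -2], 1, 2)
```

A robust submission would also guard against iterates leaving $[a, b]$ (e.g. by falling back to bisection), which is omitted here for brevity.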
Chapter 2 Solutions of Equations in One Variable – Error Analysis for Iterative Methods
2.4 Error Analysis for Iterative Methods
Definition: Suppose $\{p_n\}$ ($n = 0, 1, 2, \dots$) is a sequence that converges to $p$, with $p_n \neq p$ for all $n$. If positive constants $\alpha$ and $\lambda$ exist with

$$\lim_{n\to\infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \lambda,$$

then $\{p_n\}$ ($n = 0, 1, 2, \dots$) converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.
(i) If $\alpha = 1$, the sequence is linearly convergent.
(ii) If $\alpha = 2$, the sequence is quadratically convergent.
The larger the value of $\alpha$, the faster the convergence.

Q: What is the order of convergence for an iterative method with $g'(p) \neq 0$?
A: Since $p_{n+1} - p = g(p_n) - g(p) = g'(\xi_n)(p_n - p)$ with $\xi_n$ between $p_n$ and $p$,

$$\lim_{n\to\infty} \frac{|p_{n+1} - p|}{|p_n - p|} = \lim_{n\to\infty} |g'(\xi_n)| = |g'(p)| \neq 0.$$

Linearly convergent.
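This limit can be checked numerically. The map $g(x) = \cos x$ and its fixed point (precomputed to double precision) are illustrative choices:

```python
import math

# Fixed-point iteration p_n = cos(p_{n-1}); the limit is the fixed point of cos
p_fix = 0.7390851332151607  # cos(p_fix) = p_fix, computed beforehand
pn = 1.0
ratios = []
for _ in range(30):
    pn1 = math.cos(pn)
    ratios.append(abs(pn1 - p_fix) / abs(pn - p_fix))
    pn = pn1

# The ratios tend to |g'(p)| = |-sin(p_fix)| ~ 0.674: order alpha = 1 (linear)
```

Each step shrinks the error by roughly the same factor $|g'(p)|$, the signature of linear convergence.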
Q: What is the order of convergence for Newton's method (where $g'(p) = 0$)?

A: From Taylor's expansion we have

$$0 = f(p) = f(p_n) + f'(p_n)(p - p_n) + \frac{f''(\xi_n)}{2!}(p - p_n)^2.$$

Dividing by $f'(p_n)$ and recognizing $p_{n+1} = p_n - f(p_n)/f'(p_n)$:

$$p_{n+1} - p = \frac{f''(\xi_n)}{2 f'(p_n)}(p_n - p)^2,$$

so

$$\lim_{n\to\infty} \frac{|p_{n+1} - p|}{|p_n - p|^2} = \frac{|f''(p)|}{2\,|f'(p)|}.$$
As long as $f'(p) \neq 0$, Newton's method is at least quadratically convergent.
Fast near a simple root.
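A numerical check of the limit above, with $f(x) = x^2 - 2$ as an illustrative example (here $\lambda = |f''|/(2|f'|) = 1/(2\sqrt{2})$ at the root):

```python
import math

f = lambda x: x * x - 2    # illustrative choice: simple root p = sqrt(2)
df = lambda x: 2 * x
p_true = math.sqrt(2)

p = 1.5
errors = [abs(p - p_true)]
for _ in range(4):
    p = p - f(p) / df(p)   # Newton step
    errors.append(abs(p - p_true))

# e_{n+1} / e_n^2 tends to |f''(p)| / (2 |f'(p)|) = 1 / (2 sqrt(2)) ~ 0.354
ratios = [errors[i + 1] / errors[i] ** 2
          for i in range(len(errors) - 1) if errors[i] > 0]
```

The errors drop roughly as $10^{-1}, 10^{-3}, 10^{-6}, 10^{-12}$: the number of correct digits about doubles per step, which is the practical face of quadratic convergence. (The last ratio is unreliable once the error reaches machine precision.)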
Q: How can we practically determine $\alpha$ and $\lambda$?

Theorem: Let $p$ be a fixed point of $g(x)$. If there exists some constant $\alpha \geq 2$ such that $g \in C^{\alpha}[p - \delta, p + \delta]$, $g'(p) = \cdots = g^{(\alpha - 1)}(p) = 0$, and $g^{(\alpha)}(p) \neq 0$, then the iteration $p_n = g(p_{n-1})$, $n \geq 1$, is of order $\alpha$.
This is a one-line proof... if we start sufficiently far to the left.
Since $g'(p) = \cdots = g^{(\alpha-1)}(p) = 0$, Taylor's expansion of $g$ about $p$ gives

$$p_n = g(p_{n-1}) = g(p) + g'(p)(p_{n-1} - p) + \cdots + \frac{g^{(\alpha)}(\xi_n)}{\alpha!}(p_{n-1} - p)^{\alpha} = p + \frac{g^{(\alpha)}(\xi_n)}{\alpha!}(p_{n-1} - p)^{\alpha},$$

with $\xi_n$ between $p_{n-1}$ and $p$, so $\dfrac{|p_n - p|}{|p_{n-1} - p|^{\alpha}} \to \dfrac{|g^{(\alpha)}(p)|}{\alpha!}$.
Q: What is the order of convergence for Newton’s method if the root is NOT simple ?
A: If $p$ is a root of $f$ of multiplicity $m$, then $f(x) = (x - p)^m q(x)$ with $q(p) \neq 0$.
Newton's method is just $p_n = g(p_{n-1})$ for $n \geq 1$ with

$$g(x) = x - \frac{f(x)}{f'(x)}, \qquad g'(x) = 1 - \frac{[f'(x)]^2 - f(x)f''(x)}{[f'(x)]^2} = \frac{f(x)f''(x)}{[f'(x)]^2}.$$

Substituting $f(x) = (x - p)^m q(x)$ and letting $x \to p$:

$$g'(p) = 1 - \frac{1}{m} \neq 0 \quad \text{for } m \geq 2.$$
It is convergent, but not quadratically.
Q: Is there any way to speed it up?
A: Yes!
Equivalently transform the multiple root of f into the simple root of another function, and then apply Newton’s method.
Let $\mu(x) = \dfrac{f(x)}{f'(x)}$; then the multiple root of $f$ is a simple root of $\mu$.
Apply Newton's method to $\mu$:

$$g(x) = x - \frac{\mu(x)}{\mu'(x)} = x - \frac{f(x)f'(x)}{[f'(x)]^2 - f(x)f''(x)}.$$
Quadratic convergence
Requires additional calculation of f ”(x); The denominator consists of the difference of two numbers that are both close to 0.
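A sketch comparing the two iterations on a double root; the test function $f(x) = (x-1)^2 e^x$ (multiplicity $m = 2$ at $p = 1$) and the starting point are illustrative choices:

```python
import math

# Illustrative example: f(x) = (x - 1)^2 e^x has a double root (m = 2) at p = 1
f   = lambda x: (x - 1) ** 2 * math.exp(x)
df  = lambda x: (x * x - 1) * math.exp(x)           # f'(x)
ddf = lambda x: (x * x + 2 * x - 1) * math.exp(x)   # f''(x)

newton_step   = lambda x: x - f(x) / df(x)
modified_step = lambda x: x - f(x) * df(x) / (df(x) ** 2 - f(x) * ddf(x))

x_newton = x_mod = 2.0
for _ in range(10):
    x_newton = newton_step(x_newton)   # linear: g'(p) = 1 - 1/m = 1/2
for _ in range(5):
    x_mod = modified_step(x_mod)       # quadratic: Newton applied to mu = f/f'
```

After 10 steps plain Newton is still a few thousandths away from the root (errors shrink by roughly 1/2 per step), while 5 modified steps reach the root to about ten digits, in line with the discussion above.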
HW: p.86 #11
Chapter 2 Solutions of Equations in One Variable – Accelerating Convergence
2.5 Accelerating Convergence
Aitken's $\Delta^2$ Method:
[Figure: graphs of $y = x$ and $y = g(x)$ with iterates $p_0, p_1, p_2$, the secant lines $t(p_0, p_1)$ and $t(p_1, p_2)$, and the accelerated value $\hat{p}$.]
$$\hat{p}_0 = p_0 - \frac{(p_1 - p_0)^2}{p_2 - 2p_1 + p_0},$$

and in general

$$\hat{p}_n = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}.$$
The two sequences are generated in the order

$p_0,\; p_1 = g(p_0),\; p_2 = g(p_1),$
$\hat{p}_0,\; p_3 = g(p_2),$
$\hat{p}_1,\; p_4 = g(p_3),$
$\hat{p}_2,\; p_5 = g(p_4),$
$\dots$
Definition: For a given sequence $\{p_n\}$ ($n = 0, 1, 2, \dots$), the forward difference $\Delta p_n$ is defined by $\Delta p_n = p_{n+1} - p_n$ for $n \geq 0$. Higher powers $\Delta^k p_n$ are defined recursively by $\Delta^k p_n = \Delta(\Delta^{k-1} p_n)$ for $k \geq 2$.
Aitken's $\Delta^2$ Method:

$$\hat{p}_n = p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n} \quad \text{for } n \geq 0.$$
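A sketch of the formula, applied to the linearly convergent iteration $p_n = \cos(p_{n-1})$ (an illustrative choice):

```python
import math

def aitken(p0, p1, p2):
    """p_hat_n = p_n - (Delta p_n)^2 / (Delta^2 p_n)."""
    return p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0)

g = math.cos                 # illustrative linearly convergent fixed-point map
p = [1.0]
for _ in range(6):
    p.append(g(p[-1]))

# each p_hat[n] uses p_n, p_{n+1}, p_{n+2}
p_hat = [aitken(p[i], p[i + 1], p[i + 2]) for i in range(len(p) - 2)]
```

Already $\hat{p}_0$, built from only three iterates, is an order of magnitude closer to the fixed point than $p_0$.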
Theorem: Suppose that $\{p_n\}$ ($n = 1, 2, \dots$) is a sequence that converges linearly to the limit $p$ and that for all sufficiently large values of $n$ we have $(p_n - p)(p_{n+1} - p) > 0$. Then the sequence $\{\hat{p}_n\}$ ($n = 1, 2, \dots$) converges to $p$ faster than $\{p_n\}$ in the sense that

$$\lim_{n\to\infty} \frac{\hat{p}_n - p}{p_n - p} = 0.$$
Steffensen's Method: apply one Aitken $\Delta^2$ step after every two fixed-point evaluations and restart the iteration from the accelerated value:

$p_0^{(0)},\; p_1^{(0)} = g(p_0^{(0)}),\; p_2^{(0)} = g(p_1^{(0)}),$
$p_0^{(1)} = \{\Delta^2\}(p_0^{(0)}),\; p_1^{(1)} = g(p_0^{(1)}),\; p_2^{(1)} = g(p_1^{(1)}),$
$p_0^{(2)} = \{\Delta^2\}(p_0^{(1)}),\; \dots$

Local quadratic convergence if $g'(p) \neq 1$.
Algorithm: Steffensen’s Acceleration
Find a solution to x = g(x) given an initial approximation p0.
Input: initial approximation p0; tolerance TOL; maximum number of iterations Nmax.
Output: approximate solution x or message of failure.
Step 1  Set i = 1;
Step 2  While (i <= Nmax) do Steps 3-6
Step 3    Set p1 = g(p0); p2 = g(p1); p = p0 - (p1 - p0)^2 / (p2 - 2 p1 + p0);
Step 4    If |p - p0| < TOL then Output(p); /* successful */ STOP;
Step 5    Set i = i + 1;
Step 6    Set p0 = p; /* update p0 */
Step 7 Output (The method failed after Nmax iterations); /* unsuccessful */
STOP.
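The algorithm above translates directly into Python; the map $g = \cos$ and the starting point in the sample run are illustrative, and a guard is added for the case where the $\Delta^2$ denominator vanishes at convergence:

```python
import math

def steffensen(g, p0, tol=1e-10, n_max=100):
    """Steffensen's acceleration for x = g(x), following Steps 1-7 above."""
    for _ in range(n_max):                       # Step 2
        p1 = g(p0)                               # Step 3
        p2 = g(p1)
        denom = p2 - 2 * p1 + p0
        if denom == 0:                           # iterates numerically equal:
            return p2                            # already converged
        p = p0 - (p1 - p0) ** 2 / denom
        if abs(p - p0) < tol:                    # Step 4: successful
            return p
        p0 = p                                   # Steps 5-6
    raise RuntimeError("the method failed after n_max iterations")  # Step 7

# Illustrative run: fixed point of g(x) = cos(x), starting from p0 = 1
root = steffensen(math.cos, 1.0)
```

Note each outer pass spends two evaluations of $g$, matching the $p_1 = g(p_0)$, $p_2 = g(p_1)$ pattern in the algorithm.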