8/8/2019 Ibro Final Seminar
1/31
MULTISTEP METHOD FOR SOLVING ORDINARY DIFFERENTIAL
EQUATIONS
M.Sc. Graduate Seminar
IBRAHIM ZERGA
January 2011
Haramaya University
MULTISTEP METHOD FOR SOLVING ORDINARY DIFFERENTIAL
EQUATIONS
A Seminar Submitted to the Department of Mathematics,
School of Graduate Studies
HARAMAYA UNIVERSITY
In Partial Fulfillment of the Requirements for the Degree of
MASTER OF SCIENCE IN MATHEMATICS
(NUMERICAL ANALYSIS)
By
Ibrahim Zerga
Advisor
Getinet Alemayehu (PhD)
January 2011
Haramaya University
SCHOOL OF GRADUATE STUDIES
HARAMAYA UNIVERSITY
As members of the Examination Board of the Final M. Sc. Open Defense, we certify that we
have read and evaluated this Graduate Seminar prepared by Ibrahim Zerga
Entitled: Multistep Method for Solving Ordinary Differential Equations, and recommended
that it be accepted as fulfilling the Graduate Seminar requirements for the Degree of Master
of Science in Mathematics.
______________________ _________________ _______________
Chairperson Signature Date
Getinet Alemayehu (PhD) __________________ _______________
Advisor Signature Date
______________________ _________________ _______________
Examiner Signature Date
Final approval and acceptance of the Graduate Seminar is contingent upon the submission of
the final copy of the Graduate Seminar to the Council of Graduate Study (CGS) through the
Department Graduate Committee (DGC) of the candidate's department.
I hereby certify that I have read this Graduate Seminar prepared under my direction and
recommended that it be accepted as fulfilling the Graduate Seminar requirement.
Getinet Alemayehu (Ph D) __________________ _______________
Advisor Signature Date
PREFACE
Among the different kinds of numerical methods for solving ordinary differential equations, this seminar report treats the most popular and efficient class, the linear multistep methods, for solving first-order initial value problems.
In general the report consists of three chapters. The first part covers some mathematical preliminaries (definitions and theorems) that will be helpful for the main body of the seminar.

The second chapter discusses the derivation of multistep methods and presents some powerful multistep methods (Adams-Bashforth and Adams-Moulton). Examples are also discussed, leading up to the definition of the predictor-corrector method.

The final part concerns the analysis of multistep methods and defines some basic terms used to control the error introduced through the starting values. Moreover, some basic theorems (Dahlquist's) are stated without proof. It also includes an example showing that consistency is not a sufficient condition for the convergence of the method; this example is supported in detail by a graph.
ACKNOWLEDGEMENT
My advisor, Dr. Getinet Alemayehu, has always been extremely encouraging towards me; he has essentially taught me how to do mathematics, and his outstanding contributions to the field of numerical analysis have always been an encouragement to me. Next, I wish to thank my family and friends for all the wonderful times we have had.
TABLE OF CONTENTS
PREFACE.........................................................................................................III
ACKNOWLEDGEMENT...............................................................................IV
INTRODUCTION.................................................................................1
CHAPTER ONE ...............................................................................................2
MATHEMATICAL PRELIMINARIES..........................................................2
CHAPTER TWO................................................................................................6
LINEAR MULTISTEP METHOD...................................................................6
2.1. Explicit Multistep Methods..............................................................................7
2.1.1. Adams Bashforth Methods......................................................................................9
2.1.2. Nyström Methods................................................................................9
2.2. Implicit Multistep Methods............................................................................10
2.2.1. Adams-Moulton Method......................................................................................11
2.2.2. Milne-Simpson Method.......................................................................................12
CHAPTER THREE..........................................................................................16
ANALYSIS OF MULTISTEP METHOD......................................................16
3.1. Zero-Stability...........................................................................................16
3.2. Consistency..............................................................................................20
SUMMARY.......................................................................................................24
REFERENCES.................................................................................................25
INTRODUCTION
Differential equations are used to model problems in science and engineering that involve the change of some variable with respect to another. Most of these problems require the solution of an initial value problem, that is, the solution of a differential equation that satisfies a given initial condition.

In most real-life situations the differential equation that models the problem is too complicated to solve exactly, and one of two approaches is taken to approximate the solution. The first approach is to simplify the differential equation. The other approach uses methods for approximating the solution of the original problem.

Numerical methods will always be concerned with solving a perturbed problem, since any round-off error introduced in the representation perturbs the original problem. Unless the original problem is well-posed, there is little reason to expect that the numerical solution of the perturbed problem will accurately approximate the solution of the original problem.

One-step methods construct an approximate solution y_{n+1} ≈ y(x_{n+1}) using only one previous approximation y_n. Such methods enjoy the virtue that the step size h can be changed at every iteration, if desired, thus providing a mechanism for error control; in our case, however, we consider only a constant step size. The multistep methods discussed in this seminar are better in computer time than the Runge-Kutta methods, and they are frequently used in commercial routines because of their combined accuracy, stability, and computational efficiency.

Several drawbacks to the multistep scheme are evident: it is difficult to adjust the step size, and the values y_0, y_1, ..., y_{k-1} must be known before starting the method, so it is not self-starting. The former concern can be addressed in practice through interpolation techniques. To handle the latter concern, the initial data can be generated using a one-step method with small step size h.
CHAPTER ONE
MATHEMATICAL PRELIMINARIES
This chapter includes some basic definitions and theorems which will be helpful for the following chapters, that is, for the main parts of the seminar.
Definition (1.1): An ordinary differential equation is a relation between a function, its derivatives, and the variable upon which they depend. The most general form of an ordinary differential equation is given by

f(x, y, y', y'', ..., y^(m)) = 0,   (1.1)

where m represents the order of the highest derivative, and y and its derivatives are functions of x. The order of the differential equation is the order of its highest derivative, and its degree is the degree of the highest derivative of the highest order after the equation has been rationalized in the derivatives. If the initial conditions

y^(i)(x_0) = y_i,   i = 0, 1, ..., m-1,   (1.2)

are given at a point x = x_0, then the differential equation together with the initial conditions is called an m-th order initial value problem.
Definition (1.2): A function f(t, y) is said to satisfy a Lipschitz condition in the variable y on a set D, which is a subset of R², if a constant L > 0 exists with

|f(t, y1) - f(t, y2)| ≤ L |y1 - y2|

whenever (t, y1) and (t, y2) belong to D; the constant L is called the Lipschitz constant for f.
Definition (1.3): A set D ⊂ R² is said to be convex if, whenever (t1, y1) and (t2, y2) belong to D and λ ∈ [0, 1], the point ((1-λ)t1 + λt2, (1-λ)y1 + λy2) also belongs to D.
Theorem: Suppose f(t, y) is defined on a convex set D ⊂ R². If a constant L > 0 exists with

|∂f/∂y (t, y)| ≤ L for all (t, y) ∈ D,

then f satisfies a Lipschitz condition on D in the variable y with Lipschitz constant L.
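As an illustration of this theorem (a Python sketch added here, not part of the original seminar): for f(t, y) = t - y we have |∂f/∂y| = 1 everywhere on D = [0, 1] × R, so L = 1 is a Lipschitz constant, and the inequality of Definition (1.2) can be checked numerically on sampled pairs.

```python
import random

# Illustrative check: f(t, y) = t - y has |df/dy| = 1 on D = [0,1] x R,
# so the theorem gives Lipschitz constant L = 1 in the variable y.
def f(t, y):
    return t - y

L = 1.0
random.seed(0)
for _ in range(1000):
    t = random.uniform(0.0, 1.0)
    y1 = random.uniform(-50.0, 50.0)
    y2 = random.uniform(-50.0, 50.0)
    # The Lipschitz inequality |f(t,y1) - f(t,y2)| <= L|y1 - y2|
    assert abs(f(t, y1) - f(t, y2)) <= L * abs(y1 - y2) + 1e-12
print("Lipschitz condition with L = 1 holds on all sampled pairs")
```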
Theorem: Suppose D = {(t, y) : a ≤ t ≤ b, -∞ < y < ∞}, and that f(t, y) is continuous and satisfies a Lipschitz condition in the variable y on D. Then for any ε > 0 there is a constant k(ε) such that, whenever |δ0| < ε and the perturbation δ(t) is continuous with |δ(t)| < ε on [a, b], the perturbed initial value problem

dz/dt = f(t, z) + δ(t),   a ≤ t ≤ b,   z(a) = α + δ0,

has a unique solution z(t) that satisfies

|z(t) - y(t)| < k(ε) ε for all t in [a, b],

where y(t) is the solution of the unperturbed problem; that is, the initial value problem is well-posed.
Theorem (Taylor): If f(x) is continuous and possesses continuous derivatives of order n in an interval that includes x = a, then in that interval

f(x) = f(a) + (x - a) f'(a) + (x - a)²/2! f''(a) + ... + (x - a)^{n-1}/(n-1)! f^{(n-1)}(a) + R_n(x),

where the remainder term is

R_n(x) = (x - a)^n/n! f^{(n)}(ξ), with ξ lying between a and x.
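A small numerical illustration of the remainder bound (a sketch added here, not in the original text), taking f(x) = e^x and a = 0, for which f^{(n)}(x) = e^x and the bound is |R_n(x)| ≤ x^n/n! · max e^x on [0, x]:

```python
import math

# Expand f(x) = e^x about a = 0 and compare the actual error of the
# degree n-1 Taylor partial sum with the remainder bound from the theorem.
def taylor_partial_sum(x, n):
    # f(a) + f'(a) x + ... + f^(n-1)(a) x^(n-1)/(n-1)!  with f = exp, a = 0
    return sum(x ** k / math.factorial(k) for k in range(n))

x, n = 0.5, 6
approx = taylor_partial_sum(x, n)
actual_error = abs(math.exp(x) - approx)
# f^(n) = e^x, so max |f^(n)| on [0, x] is e^x
remainder_bound = x ** n / math.factorial(n) * math.exp(x)
assert actual_error <= remainder_bound
print(f"error {actual_error:.2e} <= bound {remainder_bound:.2e}")
```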
CHAPTER TWO
LINEAR MULTISTEP METHOD
While Runge-Kutta methods give an improvement over Euler's method in terms of accuracy, this is achieved by investing additional computational effort; in fact, a Runge-Kutta method requires more evaluations of f(·,·) than would seem necessary. For example, the fourth-order method involves four function evaluations per step. For comparison, consider three consecutive points x_{n-1}, x_n = x_{n-1} + h, x_{n+1} = x_{n-1} + 2h; integrating the differential equation between x_{n-1} and x_{n+1} yields

y(x_{n+1}) = y(x_{n-1}) + ∫_{x_{n-1}}^{x_{n+1}} f(x, y(x)) dx,
and applying Simpson's rule to approximate the integral on the right-hand side then leads to the method

y_{n+1} = y_{n-1} + (h/3) [ f(x_{n-1}, y_{n-1}) + 4 f(x_n, y_n) + f(x_{n+1}, y_{n+1}) ],   (2.1)

requiring only three function evaluations per step. In contrast with the one-step methods considered previously, where only a single value y_n was required to compute the next approximation y_{n+1}, here we need two preceding values, y_n and y_{n-1}, to be able to calculate y_{n+1}, and therefore (2.1) is not a one-step method.

In this section we consider a class of methods of the type (2.1) for the numerical solution of the initial value problem (1.1), (1.2), called linear multistep methods.

Given a sequence of equally spaced mesh points x_n with step size h, we consider the general linear k-step method

Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f(x_{n+j}, y_{n+j}),   (2.2)

where the coefficients α_0, ..., α_k and β_0, ..., β_k are real constants. In order to avoid degenerate cases, we shall assume that α_k ≠ 0 and that α_0 and β_0 are not both equal to zero. If β_k = 0, then y_{n+k} is obtained explicitly from previous values of y_j and f(x_j, y_j), and the k-step method is then called explicit. On the other hand, if β_k ≠ 0, then y_{n+k} appears not only on the left-hand side but also on the right, within f(x_{n+k}, y_{n+k}); due to this implicit dependence on y_{n+k} the method is then called implicit. The method (2.2) is called linear because it involves only linear combinations of the y_{n+j} and f(x_{n+j}, y_{n+j}), j = 0, 1, ..., k; for the sake of notational simplicity, henceforth we shall often write f_n instead of f(x_n, y_n).

Most Runge-Kutta methods, though one-step methods, do not fit the multistep template. Euler's method is an example of a one-step method that also fits the multistep template. Here are a few examples of linear multistep methods.
Euler's method: y_{n+1} - y_n = h f_n;  α_0 = -1, α_1 = 1, β_0 = 1, β_1 = 0.

Trapezoidal rule: y_{n+1} - y_n = (h/2)(f_n + f_{n+1});  α_0 = -1, α_1 = 1, β_0 = 1/2, β_1 = 1/2.

Two-step Adams-Bashforth: y_{n+2} - y_{n+1} = (h/2)(3 f_{n+1} - f_n);  α_0 = 0, α_1 = -1, α_2 = 1, β_0 = -1/2, β_1 = 3/2, β_2 = 0.
2.1. Explicit Multistep Methods
To begin the derivation of explicit multistep methods, first note that the solution to the initial value problem

y'(x) = f(x, y),   a ≤ x ≤ b,   y(a) = α,

satisfies, upon integrating the differential equation from x_{j-i} to x_{j+1} for i ≥ 0,

y(x_{j+1}) - y(x_{j-i}) = ∫_{x_{j-i}}^{x_{j+1}} y'(x) dx = ∫_{x_{j-i}}^{x_{j+1}} f(x, y(x)) dx.

Consequently,

y(x_{j+1}) = y(x_{j-i}) + ∫_{x_{j-i}}^{x_{j+1}} f(x, y(x)) dx.   (2.3)

Since the integrand in (2.3) involves the unknown function y(x), we cannot integrate it directly; rather, we replace f(x, y(x)) by a polynomial P_{k-1}(x) of degree k-1 which interpolates f(x, y(x)) at the k points x_j, x_{j-1}, ..., x_{j-k+1}. The Newton backward difference polynomial interpolating the data (x_j, f_j), (x_{j-1}, f_{j-1}), ..., (x_{j-k+1}, f_{j-k+1}) is given by

P_{k-1}(x) = f_j + (x - x_j)/h ∇f_j + (x - x_j)(x - x_{j-1})/(2! h²) ∇²f_j + ...
  + (x - x_j)(x - x_{j-1}) ··· (x - x_{j-k+2})/((k-1)! h^{k-1}) ∇^{k-1} f_j
  + (x - x_j)(x - x_{j-1}) ··· (x - x_{j-k+1})/k! f^{(k)}(ξ),   (2.4)

where ξ lies in the interval containing the points x_j, x_{j-1}, ..., x_{j-k+1} and x. Substituting x - x_j = hs in (2.4), we have

P_{k-1}(x_j + hs) = f_j + s ∇f_j + s(s+1)/2! ∇²f_j + ...
  + s(s+1) ··· (s+k-2)/(k-1)! ∇^{k-1} f_j + s(s+1) ··· (s+k-1)/k! h^k f^{(k)}(ξ)

 = Σ_{m=0}^{k-1} (-1)^m (-s choose m) ∇^m f_j + (-1)^k (-s choose k) h^k f^{(k)}(ξ),

where (-s choose m) = (-1)^m s(s+1) ··· (s+m-1)/m!.

Now, replacing f(x, y(x)) by the interpolating polynomial P_{k-1}(x_j + hs), we get

y(x_{j+1}) = y(x_{j-i}) + h ∫_{-i}^{1} [ Σ_{m=0}^{k-1} (-1)^m (-s choose m) ∇^m f_j + (-1)^k (-s choose k) h^k f^{(k)}(ξ) ] ds

 = y(x_{j-i}) + h Σ_{m=0}^{k-1} γ_m(i) ∇^m f_j + T_k(i),   (2.5)

where

γ_m(i) = ∫_{-i}^{1} (-1)^m (-s choose m) ds   (2.6)

and

T_k(i) = h^{k+1} ∫_{-i}^{1} (-1)^k (-s choose k) f^{(k)}(ξ) ds.

Neglecting the error term T_k(i) in (2.5), we obtain the explicit multistep method

y_{j+1} = y_{j-i} + h Σ_{m=0}^{k-1} γ_m(i) ∇^m f_j.   (2.7)

Since the error is O(h^{k+1}), the method (2.7) is at least of order k. Now, from (2.6),

γ_0(i) = ∫_{-i}^{1} ds = 1 + i,
γ_1(i) = ∫_{-i}^{1} s ds = (1/2)(1 + i)(1 - i),
γ_2(i) = (1/2) ∫_{-i}^{1} s(s+1) ds = (1/12)(5 - 3i² + 2i³),
γ_3(i) = (1/6) ∫_{-i}^{1} s(s+1)(s+2) ds = (1/24)(9 - 4i² + 4i³ - i⁴),
γ_4(i) = (1/24) ∫_{-i}^{1} s(s+1)(s+2)(s+3) ds = (1/720)(251 - 90i² + 110i³ - 45i⁴ + 6i⁵),

and so on. Hence for different values of i in (2.7) we obtain different methods.
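The closed forms for the coefficients γ_m(i) of (2.6) can be checked by exact rational integration (a Python verification sketch, not part of the original seminar):

```python
from fractions import Fraction
from math import factorial

# gamma_m(i) = integral from -i to 1 of s(s+1)...(s+m-1)/m! ds,
# computed exactly with rational arithmetic.
def gamma(m, i):
    # Build s(s+1)...(s+m-1) as a coefficient list, lowest degree first.
    poly = [Fraction(1)]
    for j in range(m):
        # Multiply the current polynomial by (s + j).
        new = [Fraction(0)] * (len(poly) + 1)
        for d, c in enumerate(poly):
            new[d + 1] += c        # s * c s^d
            new[d] += j * c        # j * c s^d
        poly = new
    # Integrate term by term and evaluate between s = -i and s = 1.
    antider = [c / (d + 1) for d, c in enumerate(poly)]
    def eval_at(s):
        return sum(c * Fraction(s) ** (d + 1) for d, c in enumerate(antider))
    return (eval_at(1) - eval_at(-i)) / factorial(m)

# Adams-Bashforth coefficients (i = 0)
assert [gamma(m, 0) for m in range(5)] == [
    Fraction(1), Fraction(1, 2), Fraction(5, 12),
    Fraction(3, 8), Fraction(251, 720)]
# Nystrom coefficients (i = 1)
assert [gamma(m, 1) for m in range(5)] == [
    Fraction(2), Fraction(0), Fraction(1, 3),
    Fraction(1, 3), Fraction(29, 90)]
print("gamma_m(i) values verified")
```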
2.1.1. Adams Bashforth Methods
Putting i = 0 in (2.7) gives the Adams-Bashforth family

y_{j+1} = y_j + h [ f_j + (1/2) ∇f_j + (5/12) ∇²f_j + (3/8) ∇³f_j + (251/720) ∇⁴f_j + ... ].

The truncation error in the Adams-Bashforth methods, using (2.6), is given by

T_k(0) = h^{k+1} ∫_0^1 (-1)^k (-s choose k) f^{(k)}(ξ) ds = h^{k+1} ∫_0^1 g(s) f^{(k)}(ξ) ds,

where g(s) = s(s+1) ··· (s+k-1)/k!. Since g(s) does not change sign in [0, 1], we have by the mean value theorem of integral calculus

T_k(0) = h^{k+1} γ_k(0) f^{(k)}(ξ_1),
where ξ_1 lies in the interval containing the points x_{j-k+1}, x_{j-k+2}, ..., x_j and x.
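Truncating the series after the ∇f_j term gives the two-step Adams-Bashforth method y_{j+1} = y_j + h(3f_j - f_{j-1})/2, which is second-order accurate. The following experiment (a Python sketch, not in the original text) confirms this numerically on the test problem y' = y, y(0) = 1, whose exact solution is e^x: the global error at x = 1 should fall by a factor of about 2² = 4 when h is halved.

```python
import math

# Two-step Adams-Bashforth on y' = y, y(0) = 1; return |error| at x = 1.
def ab2_error(h):
    n = round(1.0 / h)
    f = lambda y: y
    y_prev, y = 1.0, math.exp(h)   # exact value used as the starting point y_1
    for _ in range(n - 1):
        y_prev, y = y, y + h * (3 * f(y) - f(y_prev)) / 2
    return abs(y - math.e)

e1, e2 = ab2_error(0.01), ab2_error(0.005)
ratio = e1 / e2
# Second-order accuracy: halving h cuts the error by about 4.
assert 3.5 < ratio < 4.5
print(f"error ratio when h is halved: {ratio:.2f}")
```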
2.1.2. Nyström Methods

On substituting i = 1 into equation (2.7), we get

y_{j+1} = y_{j-1} + h [ 2 f_j + 0·∇f_j + (1/3) ∇²f_j + (1/3) ∇³f_j + (29/90) ∇⁴f_j + ... ].

So the truncation error is given by

T_k(1) = h^{k+1} ∫_{-1}^{1} g(s) f^{(k)}(ξ) ds,

where g(s) = s(s+1) ··· (s+k-1)/k!. Since g(s) changes sign in [-1, 1], the mean value theorem cannot be applied. However, a bound for the error can be written as

|T_k(1)| ≤ h^{k+1} M_k ∫_{-1}^{1} |g(s)| ds,

where M_k = max |f^{(k)}(x)| over the interval in question.
2.2. Implicit Multistep Methods
We replace f(x, y) in (2.3) by the polynomial p_k(x) of degree k which interpolates f(x, y) at the k+1 points x_{j+1}, x_j, ..., x_{j-k+1}. The Newton backward difference polynomial interpolating the data (x_{j+1}, f_{j+1}), (x_j, f_j), ..., (x_{j-k+1}, f_{j-k+1}) is given by

p_k(x) = f_{j+1} + (x - x_{j+1})/h ∇f_{j+1} + (x - x_{j+1})(x - x_j)/(2! h²) ∇²f_{j+1} + ...
  + (x - x_{j+1})(x - x_j) ··· (x - x_{j-k+2})/(k! h^k) ∇^k f_{j+1}
  + (x - x_{j+1})(x - x_j) ··· (x - x_{j-k+1})/(k+1)! f^{(k+1)}(ξ),

where ξ lies in the interval containing the points x_{j+1}, x_j, ..., x_{j-k+1} and x. Substituting x - x_j = hs in the above interpolating polynomial, we get
p_k(x_j + hs) = Σ_{m=0}^{k} (-1)^m (1-s choose m) ∇^m f_{j+1} + (-1)^{k+1} (1-s choose k+1) h^{k+1} f^{(k+1)}(ξ).

Replacing f(x, y(x)) in (2.3) by the polynomial p_k(x_j + hs), we get

y(x_{j+1}) = y(x_{j-i}) + h ∫_{-i}^{1} [ Σ_{m=0}^{k} (-1)^m (1-s choose m) ∇^m f_{j+1} + (-1)^{k+1} (1-s choose k+1) h^{k+1} f^{(k+1)}(ξ) ] ds

 = y(x_{j-i}) + h Σ_{m=0}^{k} γ*_m(i) ∇^m f_{j+1} + T*_{k+1}(i),   (2.8)

where

γ*_m(i) = ∫_{-i}^{1} (-1)^m (1-s choose m) ds   (2.9)

and

T*_{k+1}(i) = h^{k+2} ∫_{-i}^{1} (-1)^{k+1} (1-s choose k+1) f^{(k+1)}(ξ) ds.   (2.10)

Neglecting the error term in (2.8), we obtain the implicit multistep method

y_{j+1} = y_{j-i} + h Σ_{m=0}^{k} γ*_m(i) ∇^m f_{j+1}.   (2.11)

Since the truncation error in (2.8) is O(h^{k+2}), the method (2.11) is at least of order k+1. Calculating γ*_m(i) for m = 0, 1, ..., we obtain

γ*_0(i) = ∫_{-i}^{1} ds = 1 + i,
γ*_1(i) = -∫_{-i}^{1} (1-s) ds = -(1/2)(1 + i)²,
γ*_2(i) = -(1/2) ∫_{-i}^{1} s(1-s) ds = -(1/12)(1 + i)²(1 - 2i),
γ*_3(i) = -(1/6) ∫_{-i}^{1} s(1-s)(s+1) ds = -(1/24)(1 + i)²(1 - i)²,
γ*_4(i) = -(1/24) ∫_{-i}^{1} s(1-s)(s+1)(s+2) ds = -(1/720)(1 + i)²(19 - 38i + 27i² - 6i³),

and so on. For different values of i in (2.11) we obtain different methods.
2.2.1. Adams-Moulton Method
Substituting i = 0 in (2.11), we get the Adams-Moulton family

y_{j+1} = y_j + h [ f_{j+1} - (1/2) ∇f_{j+1} - (1/12) ∇²f_{j+1} - (1/24) ∇³f_{j+1} - (19/720) ∇⁴f_{j+1} - ... ].

The truncation error of the method is given by

T*_{k+1}(0) = h^{k+2} ∫_0^1 (-1)^{k+1} (1-s choose k+1) f^{(k+1)}(ξ) ds
 = h^{k+2} ∫_0^1 g(s) f^{(k+1)}(ξ) ds,

where g(s) = (s-1)s(s+1) ··· (s+k-1)/(k+1)!. Since g(s) does not change sign in [0, 1], we have by the mean value theorem

T*_{k+1}(0) = h^{k+2} γ*_{k+1}(0) f^{(k+1)}(ξ_1).
2.2.2. Milne-Simpson Method
Substituting i = 1 in (2.11), we get

y_{j+1} = y_{j-1} + h [ 2 f_{j+1} - 2 ∇f_{j+1} + (1/3) ∇²f_{j+1} + 0·∇³f_{j+1} - (1/90) ∇⁴f_{j+1} - ... ].

The truncation error of the method is given by

T*_{k+1}(1) = h^{k+2} ∫_{-1}^{1} g(s) f^{(k+1)}(ξ) ds.

Since g(s) changes sign in [-1, 1], the mean value theorem cannot be applied. However, a bound for the error can be written as

|T*_{k+1}(1)| ≤ h^{k+2} M_{k+1} ∫_{-1}^{1} |g(s)| ds,

where M_{k+1} = max |f^{(k+1)}(x)| over the interval in question.
Some additional, especially interesting formulas of type (2.3) are those corresponding to k = 1, i = 1 and to k = 3, i = 3. These formulas, together with their local error terms, are

y_{j+1} = y_{j-1} + 2h f_j,  E = (h³/3) y''',   (2.12)

y_{j+1} = y_{j-3} + (4h/3)(2 f_j - f_{j-1} + 2 f_{j-2}),  E = (14/45) h⁵ y^{(5)}.   (2.13)

Formula (2.12), which is comparable in simplicity to Euler's method, has a more favorable discretization error. Similarly (2.13), which requires knowledge of f(x, y) at only three points, has a discretization error comparable with that of the Adams-Bashforth method. It can be shown that all formulas of type (2.3) with k odd and k = i have the property that the coefficient of the k-th difference vanishes, thus yielding a formula of higher order than might be expected. On the other hand, these formulas are subject to greater instability, a concept which will be developed in the next chapter.

To compare the two Adams families and to arrive at the definition of the predictor-corrector method, let us consider the following example.
Example: Consider the initial value problem

y' = y - x² + 1,   0 ≤ x ≤ 2,   y(0) = 0.5,

and the approximations given by the explicit Adams-Bashforth four-step method and the implicit Adams-Moulton three-step method, both using h = 0.2. The Adams-Bashforth method is

y_{i+1} = y_i + (h/24) [ 55 f(x_i, y_i) - 59 f(x_{i-1}, y_{i-1}) + 37 f(x_{i-2}, y_{i-2}) - 9 f(x_{i-3}, y_{i-3}) ],

for i = 3, 4, ..., 9. When simplified using f(x, y) = y - x² + 1, h = 0.2, and x_i = 0.2i, it becomes

y_{i+1} = (1/24) [ 35 y_i - 11.8 y_{i-1} + 7.4 y_{i-2} - 1.8 y_{i-3} - 0.192 i² - 0.192 i + 4.736 ].

Similarly, the simplified Adams-Moulton method becomes

y_{i+1} = (1/24) [ 1.8 y_{i+1} + 27.8 y_i - y_{i-1} + 0.2 y_{i-2} - 0.192 i² - 0.192 i + 4.736 ],

for i = 2, 3, ..., 9. To use this method explicitly, we solve for y_{i+1}, which gives

y_{i+1} = (1/22.2) [ 27.8 y_i - y_{i-1} + 0.2 y_{i-2} - 0.192 i² - 0.192 i + 4.736 ],  for i = 2, 3, ..., 9.

The results in the table below were obtained using the exact values from y(x) = (x+1)² - 0.5 e^x for y_0, y_1, y_2, and y_3 in the explicit Adams-Bashforth case, and for y_0, y_1, and y_2 in the implicit Adams-Moulton case.
                       Adams-Bashforth           Adams-Moulton
 x_i    Exact         y_i        Error          y_i        Error
 0.0   0.5000000
 0.2   0.8292986
 0.4   1.2140877
 0.6   1.6489406                             1.6489341   0.0000065
 0.8   2.1272295   2.1273124   0.0000828    2.1272136   0.0000160
 1.0   2.6408591   2.6410810   0.0002219    2.6408298   0.0000293
 1.2   3.1799415   3.1803480   0.0004065    3.1798937   0.0000478
 1.4   3.7324000   3.7330601   0.0006601    3.7323270   0.0000731
 1.6   4.2834838   4.2844931   0.0010093    4.2833767   0.0001071
 1.8   4.8151763   4.8166575   0.0014812    4.8150236   0.0001527
 2.0   5.3054720   5.3075838   0.0021119    5.3052587   0.0002132
In the above example the implicit Adams-Moulton method gave better results than the explicit Adams-Bashforth method of the same order. Although this is generally the case, the implicit methods have the inherent weakness of first having to convert the method algebraically to an explicit representation for y_{i+1}. This procedure is not always possible, as can be seen by considering the elementary initial value problem

y' = e^y,   0 ≤ x ≤ 0.25,   y(0) = 1.

Since f(x, y) = e^y, the three-step Adams-Moulton method has

y_{i+1} = y_i + (h/24) [ 9 e^{y_{i+1}} + 19 e^{y_i} - 5 e^{y_{i-1}} + e^{y_{i-2}} ]

as its difference equation, and this equation cannot be solved explicitly for y_{i+1}. We could use Newton's method or the secant method to approximate y_{i+1}, but this complicates the procedure considerably. In practice, implicit multistep methods are not used as described above. Rather, they are used to improve approximations obtained by explicit methods. The combination of an explicit and an implicit technique is called a predictor-corrector method: the explicit method predicts an approximation, and the implicit method corrects this prediction. With the four-step Adams-Bashforth method as predictor,

y_4^(0) = y_3 + (h/24) [ 55 f(x_3, y_3) - 59 f(x_2, y_2) + 37 f(x_1, y_1) - 9 f(x_0, y_0) ].

This approximation is improved by inserting y_4^(0) into the right-hand side of the three-step implicit Adams-Moulton method and using that method as a corrector. This gives
y_4^(1) = y_3 + (h/24) [ 9 f(x_4, y_4^(0)) + 19 f(x_3, y_3) - 5 f(x_2, y_2) + f(x_1, y_1) ].

The only new function evaluation required in this procedure is f(x_4, y_4^(0)) in the corrector equation; all the other values of f have been calculated for earlier approximations. The value y_4^(1) is then used as the approximation to y(x_4), and the technique of using the Adams-Bashforth method as a predictor and the Adams-Moulton method as a corrector is repeated to find y_5^(0) and y_5^(1), the initial and final approximations to y(x_5). This process is continued until we obtain an approximation to y(x_N) = y(b).
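The predictor-corrector cycle just described can be sketched in Python (an illustration added here, not part of the original seminar), applied to the earlier worked example y' = y - x² + 1, y(0) = 0.5 with h = 0.2. The three extra starting values are generated with classical fourth-order Runge-Kutta, one of the one-step options the text mentions.

```python
import math

def f(x, y):
    return y - x * x + 1.0

def rk4_step(x, y, h):
    # Classical fourth-order Runge-Kutta, used only for starting values.
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, N = 0.2, 10
xs = [i * h for i in range(N + 1)]
ys = [0.5]
for i in range(3):                      # starting values y1, y2, y3
    ys.append(rk4_step(xs[i], ys[i], h))
for i in range(3, N):
    fi = [f(xs[i - j], ys[i - j]) for j in range(4)]   # f_i, f_{i-1}, f_{i-2}, f_{i-3}
    # Four-step Adams-Bashforth predictor
    y_pred = ys[i] + h * (55 * fi[0] - 59 * fi[1] + 37 * fi[2] - 9 * fi[3]) / 24
    # Three-step Adams-Moulton corrector, with the predicted value on the right
    y_corr = ys[i] + h * (9 * f(xs[i + 1], y_pred)
                          + 19 * fi[0] - 5 * fi[1] + fi[2]) / 24
    ys.append(y_corr)

exact = lambda x: (x + 1) ** 2 - 0.5 * math.exp(x)
err = abs(ys[N] - exact(2.0))
assert err < 5e-3
print(f"y(2.0) ~ {ys[N]:.7f}, error {err:.2e}")
```

Note that only one new evaluation of f per step occurs beyond the stored history, as described above.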
CHAPTER THREE
ANALYSIS OF MULTISTEP METHOD
This chapter discusses the analysis of the methods introduced in Chapter Two, and introduces the concepts of zero-stability, consistency and convergence. The significance of these properties cannot be overemphasized: the failure of any of the three will render the linear multistep method practically useless.
3.1. Zero-Stability
As is clear from (2.2), we need k starting values y_0, ..., y_{k-1} before we can apply a linear k-step method to the initial value problem (1.1), (1.2); of these, y_0 is given by the initial condition (1.2), but the others, y_1, ..., y_{k-1}, have to be computed by other means: say, by using a one-step method (Euler, Runge-Kutta or Taylor method). At any rate, the starting values will contain numerical errors, and it is important to know how these will affect the further approximations y_n, n ≥ k, which are calculated by means of (2.2). Thus, we wish to consider the stability of the numerical method with respect to small perturbations in the starting conditions.
We are interested in the behavior of linear multistep methods as h → 0. In this limit, the right-hand side of the formula for the generic multistep method,

Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f(x_{n+j}, y_{n+j}),

makes a negligible contribution. This motivates our consideration of the trivial model problem y'(x) = 0 with y(0) = 0. Does the linear multistep method recover the exact solution y(x) = 0? When y'(x) = 0, clearly we have f_{n+j} = 0 for all j. The condition α_k ≠ 0 allows us to write

y_k = -(α_0 y_0 + α_1 y_1 + ... + α_{k-1} y_{k-1}) / α_k.

Hence, if the method is started with exact data y_0 = y_1 = ... = y_{k-1} = 0, then

y_k = -(α_0·0 + α_1·0 + ... + α_{k-1}·0) / α_k = 0,

and this pattern will continue: y_{k+1} = 0, y_{k+2} = 0, .... Any linear multistep method with exact starting data produces the exact solution for this special problem, regardless of the step size.
Of course, for more complicated problems it is unusual to have the exact starting values y_1, y_2, ..., y_{k-1}; typically, these values are only approximate, obtained from some high-order one-step ODE solver or from an asymptotic expansion of the solution that is accurate in a neighborhood of x_0. To discover how multistep methods behave, we must first understand how these errors in the initial data pollute future iterations of the linear multistep method.
Definition: A linear k-step method (for the ordinary differential equation y' = f(x, y)) is said to be zero-stable if there exists a constant K such that, for any two sequences (y_n) and (z_n) that have been generated by the same formula but different starting values y_0, y_1, ..., y_{k-1} and z_0, z_1, ..., z_{k-1}, respectively, we have

|y_n - z_n| ≤ K max{ |y_0 - z_0|, |y_1 - z_1|, ..., |y_{k-1} - z_{k-1}| }   (3.1)

for x_n ≤ X_M, as h tends to 0. More plainly, a method is zero-stable for a particular problem if errors in the starting values are not magnified in an unbounded fashion. Let us first consider a particular example.
Example (A novel second-order method)

The truncation error formulas can be used to derive a variety of linear multistep methods that satisfy a given order of truncation error. One can use those conditions to verify that the explicit two-step method

y_{k+2} = 2 y_k - y_{k+1} + h [ (1/2) f_k + (5/2) f_{k+1} ]

is second-order accurate. Now we will test the zero-stability of this algorithm on the trivial model problem y'(x) = 0 with y(0) = 0. Since f(x, y) = 0 in this case, the method reduces to

y_{k+2} = 2 y_k - y_{k+1}.

As seen above, this method produces the exact solution if given initial data y_0 = y_1 = 0. But what if y_0 = 0 but y_1 = ε for some ε > 0? The method produces the iterates

y_2 = 2y_0 - y_1 = 2·0 - ε = -ε
y_3 = 2y_1 - y_2 = 2ε - (-ε) = 3ε
y_4 = 2y_2 - y_3 = -2ε - 3ε = -5ε
y_5 = 2y_3 - y_4 = 6ε - (-5ε) = 11ε
y_6 = 2y_4 - y_5 = -10ε - 11ε = -21ε
y_7 = 2y_5 - y_6 = 22ε - (-21ε) = 43ε
y_8 = 2y_6 - y_7 = -42ε - 43ε = -85ε

In just seven steps, the error has been multiplied 85-fold. The error is roughly doubling at each step, and before long the approximate solution is complete garbage. This is illustrated in the plot on the left below, which shows the evolution of y_k when h = 0.1 and ε = 0.01. There is another quirk: when applied to this particular model problem, the linear multistep method reduces to Σ_{j=0}^{k} α_j y_{n+j} = 0, and thus never incorporates the step size h. Hence the error at some fixed time x_final = nh gets worse as h gets smaller and n grows accordingly! The figure on the right below illustrates this fact, showing y_k over x ∈ [0, 1] for three different values of h. Clearly the smallest h leads to the most rapid error growth.

Though this method has second-order local (truncation) error, it blows up if fed incorrect initial data for y_1. Decreasing h can magnify this effect: for linear multistep methods, consistency (that is, T_n → 0 as h → 0) is not sufficient to ensure convergence, as it is for one-step methods.
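The error growth above is easy to reproduce (a sketch added here, not in the original text): on the model problem the method is the pure recurrence y_{k+2} = 2y_k - y_{k+1}, and starting from y_0 = 0, y_1 = ε the iterates follow exactly the pattern computed by hand above.

```python
# Iterate y_{k+2} = 2*y_k - y_{k+1} from y0 = 0, y1 = eps.
eps = 1
y = [0, eps]
for k in range(7):
    y.append(2 * y[k] - y[k + 1])
# The iterates alternate in sign and roughly double at each step,
# following the root -2 of the characteristic polynomial:
# eps, -eps, 3eps, -5eps, 11eps, -21eps, 43eps, -85eps.
assert y == [0, 1, -1, 3, -5, 11, -21, 43, -85]
print(y)
```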
Proving zero-stability directly from the above definition would be a chore, so the following theorem gives an easy technique to determine whether a particular linear multistep method is zero-stable or not.
Theorem (root condition): A linear multistep method is zero-stable for an initial value problem of the form (1.1), (1.2), where f satisfies the hypotheses of Picard's theorem, if and only if all roots of the first characteristic polynomial of the method lie inside the closed unit disk in the complex plane, with any which lie on the unit circle being simple.

Note that, given the linear k-step method (2.2), we define the first and second characteristic polynomials, respectively, as follows:

ρ(z) = Σ_{j=0}^{k} α_j z^j,
σ(z) = Σ_{j=0}^{k} β_j z^j.
Proof (necessity): Consider the method (2.2), applied to y' = 0:

α_k y_{n+k} + ... + α_1 y_{n+1} + α_0 y_n = 0.   (3.2)

Every solution of this k-th order linear recurrence relation has the form

y_n = Σ_{r=1}^{l} p_r(n) z_r^n,

where z_r is a root, of multiplicity m_r ≥ 1, of the first characteristic polynomial of the method, and the polynomial p_r has degree m_r - 1, 1 ≤ r ≤ l, l ≤ k. If |z_r| > 1, then there are starting values y_0, y_1, ..., y_{k-1} for which the corresponding solution grows like |z_r|^n, and if |z_r| = 1 and the multiplicity is m_r > 1, then there is a solution growing like n^{m_r - 1}. In either case there are solutions that grow unboundedly as n → ∞, that is, as h → 0 with nh fixed. Taking starting values y_0, y_1, ..., y_{k-1} which give rise to such an unbounded solution (y_n), and starting values z_0 = z_1 = ... = z_{k-1} = 0, for which the corresponding solution of (3.2) is (z_n) with z_n = 0 for all n, we see that (3.1) cannot hold. To summarize, if the root condition is violated, then the method is not zero-stable.

Sufficiency: The proof that the root condition is sufficient for zero-stability is long and technical, and will be omitted here.
Example
The Euler method and the implicit Euler method have first characteristic polynomial ρ(z) = z - 1 with the simple root z = 1, so both methods are zero-stable.

The three-step method

11 y_{n+3} + 27 y_{n+2} - 27 y_{n+1} - 11 y_n = 3h [ f_{n+3} + 9 f_{n+2} + 9 f_{n+1} + f_n ]
is not zero-stable. Indeed, the corresponding first characteristic polynomial ρ(z) = 11z³ + 27z² - 27z - 11 has roots z_1 = 1, z_2 ≈ -0.32 and z_3 ≈ -3.14, so |z_3| > 1.
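This root-condition check can be carried out in a few lines (a Python sketch added here, not part of the original seminar); since ρ(1) = 0 is evident from the coefficients, the cubic is deflated by (z - 1) and the remaining quadratic is solved directly.

```python
import math

# rho(z) = 11z^3 + 27z^2 - 27z - 11; the coefficients sum to zero,
# so z = 1 is a root and we can deflate by (z - 1).
coeffs = [11, 27, -27, -11]            # highest degree first
assert sum(coeffs) == 0                # rho(1) = 0

# Synthetic division by (z - 1) (valid for the root z = 1):
quot = []
carry = 0
for c in coeffs[:-1]:
    carry = c + carry
    quot.append(carry)
assert quot == [11, 38, 11]            # quotient 11z^2 + 38z + 11

a, b, c = quot
disc = math.sqrt(b * b - 4 * a * c)
roots = [1.0, (-b + disc) / (2 * a), (-b - disc) / (2 * a)]
# One root lies outside the closed unit disk: the method is not zero-stable.
assert any(abs(z) > 1 for z in roots)
print([round(z, 3) for z in roots])
```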
3.2. Consistency
In this section we consider the accuracy of the linear k-step method (2.2). For this purpose, as in the case of one-step methods, we introduce the notion of truncation error. Recall that the truncation error of a one-step method of the form y_{n+1} = y_n + h Φ(x_n, y_n; h) was given by

T_n = [ y(x_{n+1}) - y(x_n) ] / h - Φ(x_n, y_n; h).

With the general linear multistep method is associated an analogous formula, based on substituting the exact solution y(x_n) for the approximation y_n and rearranging terms:

T_n = Σ_{j=0}^{k} [ α_j y(x_{n+j}) - h β_j f(x_{n+j}, y(x_{n+j})) ] / ( h Σ_{j=0}^{k} β_j ).   (3.3)

Of course, the definition requires implicitly that Σ_{j=0}^{k} β_j = σ(1) ≠ 0; this is the normalization term. If it were absent, multiplying the entire multistep formula by a constant would alter the truncation error, but not the iterates y_j. Again, as in the case of one-step methods, the truncation error can be thought of as the residual that is obtained by inserting the solution of the differential equation into the formula (2.2) and scaling this residual appropriately (in this case dividing through by h Σ_{j=0}^{k} β_j), so that T_n resembles y' - f(x, y(x)).

Definition: The numerical method (2.2) is said to be consistent with the differential equation (1.1) if, for the truncation error defined by (3.3), for any ε > 0 there is an h(ε) such that |T_n| < ε for 0 < h < h(ε).
Expanding y(x_{n+j}) and y'(x_{n+j}) in Taylor series about x_n, one finds

T_n = (1/(h σ(1))) [ c_0 y(x_n) + c_1 h y'(x_n) + c_2 h² y''(x_n) + ... ],   (3.4)

where

c_0 = Σ_{j=0}^{k} α_j,
c_1 = Σ_{j=1}^{k} j α_j - Σ_{j=0}^{k} β_j,
c_2 = Σ_{j=1}^{k} (j²/2!) α_j - Σ_{j=1}^{k} j β_j,
...
c_q = Σ_{j=1}^{k} (j^q/q!) α_j - Σ_{j=1}^{k} (j^{q-1}/(q-1)!) β_j.   (3.5)

For consistency we need that, as h → 0 and n → ∞ with x_n → x ∈ [x_0, X_M], the truncation error T_n tends to 0. This requires c_0 = 0 and c_1 = 0 in (3.4).
Theorem: A linear k-step method of the form Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j} is consistent if and only if

Σ_{j=0}^{k} α_j = 0 and Σ_{j=0}^{k} j α_j = Σ_{j=0}^{k} β_j.
Definition: The numerical method (2.2) is said to have order of accuracy p if p is the largest positive integer such that, for any sufficiently smooth solution curve in D of the initial value problem (1.1), (1.2), there exist constants K and h_0 such that

|T_n| ≤ K h^p for 0 < h ≤ h_0.
Example (Euler's method): α_0 = -1, α_1 = 1, β_0 = 1, β_1 = 0.

Clearly α_0 + α_1 = -1 + 1 = 0 and 0·α_0 + 1·α_1 - (β_0 + β_1) = 1 - 1 = 0, so the method is consistent. When we analyzed this algorithm as a one-step method, we saw it had T_k = O(h); we expect the same result from the multistep analysis. Indeed,

c_2 = (0²/2!) α_0 + (1²/2!) α_1 - (0·β_0 + 1·β_1) = 1/2 ≠ 0.

Thus T_k = O(h).
Example (Trapezoidal rule): α_0 = -1, α_1 = 1, β_0 = 1/2, β_1 = 1/2.

Again, consistency is easy to verify: α_0 + α_1 = -1 + 1 = 0 and 0·α_0 + 1·α_1 - (β_0 + β_1) = 1 - 1 = 0. Furthermore,

c_2 = (0²/2!) α_0 + (1²/2!) α_1 - (0·β_0 + 1·β_1) = 1/2 - 1/2 = 0, so T_k = O(h²), but

c_3 = (1³/3!) α_1 - (1²/2!) β_1 = 1/6 - 1/4 ≠ 0,

so the trapezoidal rule is not third-order accurate.
Example (two-step Adams-Bashforth): α_0 = 0, α_1 = -1, α_2 = 1, β_0 = -1/2, β_1 = 3/2, β_2 = 0.

This method is consistent and has second-order accuracy.
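The consistency and order checks in the three examples above can be automated by evaluating the constants c_q of (3.5) exactly (a Python verification sketch, not part of the original seminar):

```python
from fractions import Fraction
from math import factorial

# Error constants c_q of (3.5) for a linear k-step method with
# coefficient lists alpha = [a_0, ..., a_k] and beta = [b_0, ..., b_k].
def error_constants(alpha, beta, qmax=3):
    cs = [Fraction(sum(alpha))]                       # c_0
    for q in range(1, qmax + 1):
        cq = (sum(Fraction(j ** q, factorial(q)) * a
                  for j, a in enumerate(alpha))
              - sum(Fraction(j ** (q - 1), factorial(q - 1)) * b
                    for j, b in enumerate(beta)))
        cs.append(cq)
    return cs

F = Fraction
# Euler's method: consistent (c_0 = c_1 = 0) but c_2 = 1/2, so order 1
assert error_constants([-1, 1], [1, 0])[:3] == [0, 0, F(1, 2)]
# Trapezoidal rule: c_2 = 0 and c_3 = -1/12, so order 2
assert error_constants([-1, 1], [F(1, 2), F(1, 2)]) == [0, 0, 0, F(-1, 12)]
# Two-step Adams-Bashforth: c_2 = 0 and c_3 = 5/12, so order 2
assert error_constants([0, -1, 1], [F(-1, 2), F(3, 2), 0]) == [0, 0, 0, F(5, 12)]
print("all consistency and order checks pass")
```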
Theorem (Dahlquist's equivalence theorem): For a linear k-step method that is consistent with the ordinary differential equation (1.1), where f satisfies a Lipschitz condition, and with consistent starting values, zero-stability is necessary and sufficient for convergence. Moreover, if the solution y has continuous derivatives of order p+1 and the truncation error of the method is O(h^p), then the global error e_n = y(x_n) - y_n is also O(h^p).

Theorem (Dahlquist's barrier theorem): The order of accuracy of a zero-stable k-step method cannot exceed k+1 if k is odd, or k+2 if k is even.
SUMMARY
In general, multistep methods are efficient and require less computer time than the corresponding one-step methods. Among the different kinds of multistep methods, the best known and most used in practical problems are the explicit and implicit Adams families.

One-step methods, like those of Runge-Kutta type, do not exhibit any numerical instability for h sufficiently small. Multistep methods may, in some cases, be unstable for all values of h. Moreover, unlike for one-step methods, consistency is not a sufficient condition for convergence; an additional property called zero-stability is needed.
REFERENCES
[1]. Carl de Boor, 1965. Elementary Numerical Analysis: An Algorithmic Approach. McGraw-Hill Book Company, 3rd edition.
[2]. Curtis F. Gerald and Patrick O. Wheatley, 1970. Applied Numerical Analysis. Addison-Wesley Publishers, 5th edition.
[3]. Endre Süli and David Mayers, 2003. An Introduction to Numerical Analysis. Cambridge University Press, UK.
[4]. Grewal, B.S., 2002. Numerical Methods in Engineering and Science: Programs in FORTRAN 77, C and C++. Khanna Publishers, 6th edition.
[5]. Jain, M.K., S.R.K. Iyengar and R.K. Jain, 2007. Numerical Methods for Scientific and Engineering Computation. New Age International Publishers, 5th edition.
[6]. Richard L. Burden and J. Douglas Faires, 2005. Numerical Analysis. Brooks/Cole, Pacific Grove, 5th edition.
[7]. Sastry, S.S., 2003. Introductory Methods of Numerical Analysis. Prentice-Hall of India, New Delhi, 3rd edition.