newton Interpolations


    ELEMENTARY NUMERICAL METHODS & FOURIER ANALYSIS

    F. KOUTNY

    _______________________________

    Zlín, CZE 2006


    F. KOUTNY: ELEMENTARY NUMERICAL METHODS & FOURIER ANALYSIS

    CONTENTS                                                    Page

    PREFACE

    INTRODUCTION                                                   1

    1 INTERPOLATION                                                3
      1.1 Polynomial Approximation                                 3
      1.2 Approximation by Splines                                 9

    2 INTEGRATION                                                 17
      2.1 Numerical Quadrature                                    19
          Simpson's Method                                        21
          Gauss Method                                            23
          Chebyshev Formulas                                      25
          Precision and Richardson Extrapolation                  25
          On Multiple Integration                                 27
      2.2 Monte Carlo Methods                                     28

    3 NONLINEAR EQUATIONS                                         35
      3.1 Solution of the Equation f(x) = 0                       35
          Bisection Method                                        36
          Multiple Equidistant Partition                          38
          Regula falsi                                            39
          Quadratic Interpolation Method                          41
          Inverse Interpolation Method                            43
          Newton-Raphson Method                                   44
          Simple Iteration Method                                 45
          Localization and Global Determination of Roots          47
      3.2 Systems of Nonlinear Equations                          49
          Newton Method                                           51
          Genetic Algorithm                                       54

    4 ORD. DIFFERENTIAL EQUATIONS                                 57
      4.1 Runge-Kutta Formulas                                    60
      4.2 Systems of Ordinary Diff. Equations                     65
      4.3 Adams Methods                                           68
      4.4 Boundary Problems                                       71

    5 LINEAR SPACES & FOURIER SERIES                              75
      5.1 Linear Spaces                                           75
      5.2 Normed Linear Spaces                                    79
      5.3 Scalar Product. Hilbert Space                           82
      5.4 Orthogonality. General Fourier Series                   85
      5.5 Trigonometric Fourier Series                            93
      5.6 Numerical Calculation of Fourier Coefficients           99
      5.7 Fejér's Summation of Fourier Series                    114
      5.8 Fourier Integral                                       116

    REFERENCES                                                   120
    INDEX                                                        121


    PREFACE

    This text completes the Mathematical Base for Applications in the field of numerical calculations, which is an inevitable part of the practical use of mathematical knowledge. The area of numerical mathematics is very rich and varied, and only a sketchy glimpse of its basic fragments can be given here. I have tried to emphasize the creative and sportive side of mathematics.

    The text as a whole is unbalanced. Some topics are mentioned in just a few words, others are treated concisely, and still others are discussed in more detail. For example, the chapter on differential equations offers but a glance at the huge area of various methods.

    Simple numerical examples are computed by means of today's standard Microsoft EXCEL to illustrate the explained method. For some more complicated methods the programming language PASCAL was also used.

    Methods of Fourier analysis belong to the tools applied very frequently in many engineering areas. They represent a bridge from functional analysis to various numerical methods.

    I wish the readers would find something useful, interesting and inspiring here. Please accept my apology for the heterogeneity of the text and for my shortcomings concerning the language and the explanations, as well as for errors of all kinds.

    F. Koutny


    INTRODUCTION

    The aim of the following chapters is to show some applications of mathematical analysis in a numerical manner. At the present time this is necessarily connected with the use of a computer as a self-evident computation tool.

    There are different number types according to the number of memory units (bytes) needed for their storage in computer memory. In any case, however, the amount of memory is finite. This implies that in computers we can work only with rational numbers and rational approximations. The presence of rounding errors can cause a big global error or even a breakdown of the computation when a large number of iterations is carried out.

    Example. For n ∈ N let us define

        I_n = ∫_0^1 t^n cos t dt.

    Due to lim_{n→∞} t^n = 0 on [0, 1) and |cos t| ≤ 1, the Lebesgue theorem [1,2,8,9] says

        lim_{n→∞} I_n = ∫_0^1 (lim_{n→∞} t^n) cos t dt = ∫_0^1 0 · cos t dt = 0.

    Further on, I_0 ≥ I_1 ≥ I_2 ≥ ... Obviously, I_0 = ∫_0^1 cos t dt = sin 1. Integration by parts yields the following recurrence:

        I_n = ∫_0^1 t^n cos t dt = [t^n sin t]_0^1 - n ∫_0^1 t^(n-1) sin t dt
            = sin 1 - n ( [-t^(n-1) cos t]_0^1 + (n-1) ∫_0^1 t^(n-2) cos t dt )
            = sin 1 + n cos 1 - n(n-1) I_(n-2).

    Using this relation to compute the sequence {I_n : n = 2, 4, ...}, e.g. in EXCEL, one obtains the values I_iter in the second column of the following table:

    n    I_iter           I_G
    0    0.841470984808   0.841470984808
    2    0.239133626928   0.239133626928
    4    0.133076685140   0.133076685140
    6    0.090984265821   0.090984265821
    8    0.068770545780   0.068770545782
    10   0.055144923310   0.055144923195
    12   0.045968778333   0.045968793697
    14   0.039385610319   0.039382814230
    16   0.033761402064   0.034432462358
    18   0.23592345887    0.030579002875
    20   -7.800340E+01    0.027495989810
    22   3.605030E+04     0.024974369569
    24   -1.989975E+07    0.022874203587
    26   1.293484E+10     0.021098356794
    28   -9.778737E+12    0.019577332382
    30   8.507502E+15     0.018260115763


    It can be seen that starting with the 18th member the sequence I_iter(n) interrupts its monotone decrease and starts to diverge. Let us find out the reason for this behavior.

    The recurrence can be written as follows:

        I_2k = 2k cos 1 + sin 1 - [(2k)!/(2k-2)!] I_(2k-2).

    The first members of the sequence I_2k are: I_0 = sin 1,

        I_2 = 2 cos 1 + sin 1 - (2!/0!) I_0 = 2! [cos 1 + (1/2! - 1) sin 1],

        I_4 = 4 cos 1 + sin 1 - (4!/2!) · 2! [cos 1 + (1/2! - 1) sin 1]
            = 4! [(1/3! - 1) cos 1 + (1/4! - 1/2! + 1) sin 1],

        I_6 = 6 cos 1 + sin 1 - (6!/4!) · 4! [(1/3! - 1) cos 1 + (1/4! - 1/2! + 1) sin 1]
            = 6! [(1/5! - 1/3! + 1) cos 1 + (1/6! - 1/4! + 1/2! - 1) sin 1],

        I_8 = 8 cos 1 + sin 1 - (8!/6!) · 6! [(1/5! - 1/3! + 1) cos 1 + (1/6! - 1/4! + 1/2! - 1) sin 1]
            = 8! [(1/7! - 1/5! + 1/3! - 1) cos 1 + (1/8! - 1/6! + 1/4! - 1/2! + 1) sin 1],
        ...

    The coefficients at cos 1 and sin 1 are (for k > 2)

        (2k)! [1/(2k-1)! - 1/(2k-3)! + ...] = 2k [1 - (2k-1)(2k-2) + (2k-1)(2k-2)(2k-3)(2k-4) - ...],
        (2k)! [1/(2k)! - 1/(2k-2)! + ...] = 1 - (2k)(2k-1) + (2k)(2k-1)(2k-2)(2k-3) - ...

    In the brackets there are divergent alternating series. They cause an early overflow of the range of the chosen numerical type. Then, of course, the numbers generated in the computer differ from the correct numbers, and this is why the computation of I_n fails.

    The iteration results can be compared with the results of numerical integration by the three-node Gauss formula, I_G (Chapter 2), in which no problems with numerical stability arise if n does not exceed reasonable limits.

    This warning example shows that numerical computing is also a kind of art. Sometimes several different methods need to be tested, evaluated, compared and adapted to find a suitable method for the problem being solved.
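The unstable forward recurrence of this example can be reproduced in a few lines of Python (the book itself works in EXCEL; this sketch is not part of the original text, and a composite Simpson rule stands in here for the three-node Gauss formula as an independent, stable reference):

```python
import math

def forward_recurrence(n_max):
    """I_n = sin 1 + n cos 1 - n(n-1) I_{n-2}, run forward from I_0 = sin 1.
    The rounding error is amplified by the factor n(n-1) at every step."""
    I = {0: math.sin(1.0)}
    for n in range(2, n_max + 1, 2):
        I[n] = math.sin(1.0) + n * math.cos(1.0) - n * (n - 1) * I[n - 2]
    return I

def simpson(f, a, b, m=2000):
    """Composite Simpson rule (m even) as a numerically stable reference."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

I = forward_recurrence(30)
I30_ref = simpson(lambda t: t**30 * math.cos(t), 0.0, 1.0)
# I_2 is still accurate; I_30 should be about 0.0183 but the recurrence has exploded.
print(I[2], I30_ref, I[30])
```

Running this shows the same qualitative picture as the table above: the first few members agree with the quadrature values, while for large n the forward recurrence produces astronomically large garbage.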


    1 INTERPOLATION

    A relatively frequent task is to reveal the structure, internal links and relations in a data set obtained e.g. by measurement. If some stochastic influences can be expected there, statistical methods are to be used. In the following text, however, mostly deterministic quantities will be considered, i.e. to any independent quantity x a numerical object O is assigned uniquely, x → O. The simplest and most frequent case is when both x and O are numbers, i.e. O is a value of a scalar function f of a real variable.

    1.1 Polynomial Approximation

    For the sake of simplicity a set of pairs {(x_i, y_i): i = 1, ..., n} is considered in which i ≠ j implies x_i ≠ x_j (repeated equal abscissas x_i are eliminated). Now an acceptable function f is to be found that fits them well, i.e. makes the magnitudes of the differences f(x_i) - y_i, i = 1, ..., n, small. In R^n with a metric d this means minimizing d(f, y). This problem can be viewed from different standpoints.

    If the sought function f is supposed to be continuous, y_i = f(x_i), the problem can be reduced to a search for a polynomial P approximating f well [1,2].

    Weierstrass theorem. If f is a continuous function on a closed interval [a, b], f ∈ C0([a, b]), then to any ε > 0 there is a polynomial P such that |f(x) - P(x)| < ε for any x ∈ [a, b].

    If ε_k is chosen as 1/k or 1/2^k, there exists a sequence of polynomials P_k, |P_k(x) - f(x)| < ε_k for x ∈ [a, b], that converges uniformly to the function f on the interval [a, b].

    Remark. The Weierstrass theorem can be formulated also for functions of several variables on m-dimensional intervals and for vector functions. The choice of basis enables another generalization. A polynomial is a linear combination of the elements of the basis B = {1 = x^0, x = x^1, x^2, ..., x^n}; a trigonometric polynomial is an element of the space with the basis {1 = cos 0, cos x, sin x, ..., cos nx, sin nx} or {e^(ikx): k = 0, ±1, ..., ±n}; orthogonal polynomials on some intervals also generate linear spaces, etc.

    The Weierstrass theorem creates an existential background for seeking a polynomial

        P_n(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0

    that approximates a continuous function f in the sense that on the set {x_i: i = 0, ..., m} the sum of squares of differences

        S(a_0, ..., a_n) = Σ_{i=0}^{m} [P_n(x_i) - f(x_i)]^2

    is minimal. To assure uniqueness it must be n ≤ m.


    The unknown coefficients a_0, ..., a_n in this case are given through the following system of m = n+1 equations

        a_n x_i^n + a_(n-1) x_i^(n-1) + ... + a_1 x_i + a_0 = y_i,   i = 0, 1, ..., n.

    In matrix form

        M a = | 1  x_0  x_0^2  ...  x_0^n | | a_0 |   | y_0 |
              | 1  x_1  x_1^2  ...  x_1^n | | a_1 | = | y_1 |
              | ......................... | | ... |   | ... |
              | 1  x_n  x_n^2  ...  x_n^n | | a_n |   | y_n | = y.      (S)

    The determinant of the matrix M is the so called Vandermonde determinant (J. A. Vandermonde (1735-1796), a French mathematician, one of the founders of the theory of determinants).

    Theorem. If the x_j are different, x_j ≠ x_k for j ≠ k, j, k = 0, ..., n, then

        det M = det | 1  x_0  x_0^2  ...  x_0^n |
                    | 1  x_1  x_1^2  ...  x_1^n |
                    | ......................... |
                    | 1  x_n  x_n^2  ...  x_n^n |  ≠ 0.

    Proof. det M does not change if a multiple of one column of M is added to another column of M. Multiplying the first column by x_0 and subtracting it from the second column annuls the element in the first row of the second column; multiplying the first column by x_0^2 and subtracting it from the third column annuls the element in the first row of the third column. This process can be continued up to the last column, from which the x_0^n-multiple of the first column is subtracted. So one gets

        D = det M = det | 1  0          0              ...  0              |
                        | 1  x_1 - x_0  x_1^2 - x_0^2  ...  x_1^n - x_0^n  |
                        | ............................................... |
                        | 1  x_n - x_0  x_n^2 - x_0^2  ...  x_n^n - x_0^n  |.

    Expanding D along the first row and using the factorization

        x_i^k - x_0^k = (x_i - x_0)(x_i^(k-1) + x_i^(k-2) x_0 + ... + x_0^(k-1)),

    the factor x_1 - x_0 appears in all members of the first row of the reduced determinant, x_2 - x_0 in all members of the second row, ..., x_n - x_0 in all members of the n-th row. Thus,

        D = (x_1 - x_0)(x_2 - x_0) ... (x_n - x_0) det | 1  x_1 + x_0  x_1^2 + x_1 x_0 + x_0^2  ... |
                                                       | .......................................... |
                                                       | 1  x_n + x_0  x_n^2 + x_n x_0 + x_0^2  ... |.


    By equivalent operations, i.e. operations not changing the value of the determinant (like successive subtraction of multiples of the first column by powers of x_0 from the second and next columns, etc.), the original determinant can be rearranged to

        D = (x_1 - x_0)(x_2 - x_0) ... (x_n - x_0) det | 1  x_1  x_1^2  ...  x_1^(n-1) |
                                                       | 1  x_2  x_2^2  ...  x_2^(n-1) |
                                                       | ............................. |
                                                       | 1  x_n  x_n^2  ...  x_n^(n-1) |.

    Repeating the whole procedure with the new determinant (of the n×n matrix), we obtain

        D = (x_1 - x_0)(x_2 - x_0) ... (x_n - x_0) · (x_2 - x_1)(x_3 - x_1) ... (x_n - x_1)
            · det | 1  x_2  x_2^2  ...  x_2^(n-2) |
                  | ............................. |
                  | 1  x_n  x_n^2  ...  x_n^(n-2) |,

    etc. Finally we get

        D = (x_1 - x_0)(x_2 - x_0) ... (x_n - x_0)
            · (x_2 - x_1)(x_3 - x_1) ... (x_n - x_1)
            · ...
            · (x_n - x_(n-1))
          = Π_{0 ≤ k < j ≤ n} (x_j - x_k).

    Because x_j ≠ x_k for j ≠ k, all the factors (x_j - x_k) ≠ 0. Thus, their product D ≠ 0.
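The product formula just proved is easy to verify experimentally. The following Python sketch (not part of the book, which computes in EXCEL) evaluates the Vandermonde determinant exactly with rational arithmetic and compares it with the product Π (x_j - x_k):

```python
from fractions import Fraction

def det(matrix):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in matrix]
    n = len(m)
    d = Fraction(1)
    for j in range(n):
        pivot = next((i for i in range(j, n) if m[i][j] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != j:                       # row swap flips the sign
            m[j], m[pivot] = m[pivot], m[j]
            d = -d
        d *= m[j][j]
        for i in range(j + 1, n):            # eliminate below the pivot
            f = m[i][j] / m[j][j]
            for k in range(j, n):
                m[i][k] -= f * m[j][k]
    return d

xs = [Fraction(0), Fraction(1, 2), Fraction(2), Fraction(-1)]
M = [[x**k for k in range(len(xs))] for x in xs]   # rows (1, x_i, x_i^2, ...)
vandermonde = Fraction(1)
for j in range(len(xs)):
    for k in range(j):
        vandermonde *= xs[j] - xs[k]
print(det(M) == vandermonde)  # True
```

Because the nodes are pairwise different, the product (and hence the determinant) is nonzero, as the theorem states.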

    Regularity of M (i.e. D ≠ 0) is the necessary and sufficient condition for the existence and uniqueness of the interpolation polynomial P(x) of the n-th degree whose graph goes through the n+1 nodes (x_i, y_i), i = 0, 1, ..., n, i.e. P(x_i) = y_i.

    In an interpolation polynomial we are, as usual, interested in its values between the interpolation nodes rather than in its coefficients. Then the interpolation polynomial needs to be written in a form that contains the values x_i, y_i explicitly.

    The Lagrange interpolation polynomial for n+1 different interpolation nodes,

        L_n(x) = [(x - x_1)(x - x_2) ... (x - x_n)] / [(x_0 - x_1)(x_0 - x_2) ... (x_0 - x_n)] · y_0 + ...
               + [(x - x_0)(x - x_1) ... (x - x_(n-1))] / [(x_n - x_0)(x_n - x_1) ... (x_n - x_(n-1))] · y_n

               = Σ_{k=0}^{n} [ Π_{i=0, i≠k}^{n} (x - x_i)/(x_k - x_i) ] y_k,

    is one of the possible forms [4-7]. But this one is more convenient for theoretical considerations than for practical use.

    Remark. Nodes are to be chosen with forethought. When x_k = 2/((4k+1)π), k = 0, 1, 2, 3, 4, are chosen for nodes of the function sin(1/x) on the interval [1/30, 21/30], the corresponding Lagrange interpolation polynomial is constant, L1(x) = 1. This follows from the system (S): the coefficient a_0 = D/D = 1, while in the numerator determinants for k = 1, ..., 4 two columns consist of mere 1's, thus a_1 = ... = a_4 = 0. Another negative example is the Lagrange interpolation polynomial L2(x) constructed for the same function on the interval [0.1, 0.6] with the nodes (i/10, sin(10/i)), i = 1, ..., 6.
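The degenerate first case of the remark can be checked directly. A minimal Python sketch of the Lagrange form (an illustration added here, not taken from the book): at the nodes x_k = 2/((4k+1)π) all values sin(1/x_k) equal 1, so the interpolant collapses to the constant 1 everywhere, however wildly sin(1/x) oscillates between the nodes.

```python
import math

def lagrange(xs, ys, x):
    """Evaluate L_n(x) = sum_k y_k * prod_{i != k} (x - x_i)/(x_k - x_i)."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        term = yk
        for i, xi in enumerate(xs):
            if i != k:
                term *= (x - xi) / (xk - xi)
        total += term
    return total

# Nodes x_k = 2/((4k+1)*pi): sin(1/x_k) = sin((4k+1)*pi/2) = 1 for every k.
xs = [2.0 / ((4 * k + 1) * math.pi) for k in range(5)]
ys = [math.sin(1.0 / x) for x in xs]
print(lagrange(xs, ys, 0.1))  # approximately 1, nothing like sin(1/0.1)
```

The constant comes from the partition-of-unity property of the Lagrange basis: when all y_k = 1, the sum of the basis polynomials is identically 1.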


    [Figure: graph of sin(1/x) for x in [0, 0.8], y in [-1.5, 1.5], together with the badly chosen interpolants L1, L2.]

    Fig. 1.1 Function sin(1/x) and its interpolation polynomials L1, L2 for bad choices of nodes x_i.

    In most cases of practical interpolation equidistant nodes, x_(i+1) - x_i = h, i = 0, 1, ..., n, are preferable. There are many kinds of interpolation polynomials. For example, Newton's forward interpolation polynomial is

        N_n(x) = y_0 + (Δy_0/1!) q^[1] + (Δ^2 y_0/2!) q^[2] + (Δ^3 y_0/3!) q^[3] + ... + (Δ^n y_0/n!) q^[n].

    It is written here in a shape resembling the Taylor polynomial, but the symbols used need some explanation. Δ denotes the difference operator, which in repeated use gives the following relations:

        Δy_0 = y_1 - y_0,
        Δ^2 y_0 = Δy_1 - Δy_0 = y_2 - 2y_1 + y_0,
        Δ^3 y_0 = Δ^2 y_1 - Δ^2 y_0 = y_3 - 2y_2 + y_1 - (y_2 - 2y_1 + y_0) = y_3 - 3y_2 + 3y_1 - y_0,
        ...
        Δ^n y_0 = Δ^(n-1) y_1 - Δ^(n-1) y_0
                = y_n - C(n,1) y_(n-1) + C(n,2) y_(n-2) - ... + (-1)^(n-1) C(n,n-1) y_1 + (-1)^n y_0,

    where C(n,k) denotes the binomial coefficient.

    A practical example makes computing the differences easily understandable. The following table shows the values of x_i and y_i for i = 0, ..., 6 as well as the differences Δ^k y_i obtained by the procedure described above.

    i   x_i   y_i    Δy_i   Δ^2 y_i   Δ^3 y_i   Δ^4 y_i
    0   -3    -100   65     -38       18        0
    1   -2    -35    27     -20       18        0
    2   -1    -8     7      -2        18        0
    3   0     -1     5      16        18
    4   1     4      21     34
    5   2     25     55
    6   3     80


    The fact that the third differences are constant (and the differences of the 4th and higher orders are therefore zero) shows that the corresponding interpolation polynomial is of the 3rd degree (a cubic polynomial).

    The interpolation nodes are assumed equidistant, x_i = x_0 + ih, i = 0, 1, 2, ... Let q = (x - x_0)/h, and further on we define

        q^[1] = q, q^[2] = q(q-1), q^[3] = q(q-1)(q-2), ..., q^[k] = q(q-1) ... (q-k+1).
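The differencing procedure of the table above can be sketched in a few lines of Python (an illustration added here; the book does such tables by hand or in EXCEL):

```python
def difference_table(ys):
    """Forward differences: table[k] holds the values Δ^k y_i for i = 0, ..., n-k."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# The y_i of the table above (nodes x = -3, -2, ..., 3):
ys = [-100, -35, -8, -1, 4, 25, 80]
t = difference_table(ys)
print(t[1])  # first differences:  [65, 27, 7, 5, 21, 55]
print(t[3])  # third differences are constant: [18, 18, 18, 18]
print(t[4])  # fourth differences vanish:      [0, 0, 0]
```

Constant third differences confirm that the data lie on a cubic polynomial.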

    Example. Let seven values of the sine function be given, y_i = sin(i·10°), i = 0, 1, ..., 6, and the value of sin 5° is to be determined. Obviously, x_0 = 0, h = 10, q = (5 - 0)/10 = 0.5 and

        sin 5° ≈ N_6(q) = y_0 + (Δy_0/1!) q^[1] + (Δ^2 y_0/2!) q^[2] + ... + (Δ^6 y_0/6!) q^[6].

    Defining Δ^0 y_i = y_i, q^[0] = 1 enables calculating the differences Δ^k y in tabular form:

    i   x_i   sin x_i      Δ^1 y_i      Δ^2 y_i       Δ^3 y_i       Δ^4 y_i      Δ^5 y_i      Δ^6 y_i
    0   0     0.00000000   0.17364818   -0.00527621   -0.00511590   0.00031576   0.00014585   -0.00001403
    1   10    0.17364818   0.16837197   -0.01039211   -0.00480014   0.00046161   0.00013182
    2   20    0.34202014   0.15797986   -0.01519225   -0.00433853   0.00059343
    3   30    0.50000000   0.14278761   -0.01953078   -0.00374510
    4   40    0.64278761   0.12325683   -0.02327587
    5   50    0.76604444   0.09998096
    6   60    0.86602540

    Computation of the sums may be arranged as follows:

    k          0            1            2             3             4            5            6
    Δ^k y_0    0.00000000   0.17364818   -0.00527621   -0.00511590   0.00031576   0.00014585   -0.00001403
    k!         1            1            2             6             24           120          720
    q^[k]      1.00000000   0.5          -0.25         0.375         -0.9375      3.28125      -14.765625
    N_k        0.00000000   0.08682409   0.08748362    0.08716387    0.0871515    0.0871555    0.08715581

    In the last row there are the sums N_n(0.5) = Σ_{k=0}^{n} (Δ^k y_0/k!) q^[k]. The error of the approximation is

        E = sin 5° - N_6(0.5) = 0.08715574 - 0.08715581 = -7·10^-8.

    The calculation can be carried out very easily in EXCEL. E.g. for x = 6, 8, ..., 14 only q is changed in the corresponding formula. Results are shown in the table below:

    x             6            8            10           12           14
    q             0.6          0.8          1.0          1.2          1.4
    N_6           0.10452852   0.13917313   0.17364818   0.20791168   0.24192188
    sin x - N_6   -5.64·10^-8  -2.55·10^-8  0            1.52·10^-8   1.99·10^-8
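The same evaluation is easy to sketch in Python (an added illustration; the book's own computation is an EXCEL sheet). The helper accumulates q^[k]/k! incrementally instead of tabulating it:

```python
import math

def newton_forward(ys, q):
    """Newton's forward polynomial N_n(q) = sum_k (Δ^k y_0 / k!) q^[k]
    for equidistant nodes, where q = (x - x_0)/h."""
    # top diagonal Δ^k y_0 of the forward-difference table
    diffs, row = [ys[0]], list(ys)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    total, factor = 0.0, 1.0          # factor = q^[k] / k!
    for k, d in enumerate(diffs):
        total += d * factor
        factor *= (q - k) / (k + 1)
    return total

ys = [math.sin(math.radians(10 * i)) for i in range(7)]   # sin 0°, ..., sin 60°
approx = newton_forward(ys, 0.5)                          # x = 5°, q = (5-0)/10
print(approx, math.sin(math.radians(5)))
```

The result reproduces N_6(0.5) ≈ 0.08715581 with the interpolation error of about -7·10^-8 found above.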


    Values of the interpolation polynomial P can also be obtained by repeated (iterated) linear interpolation, which may be summarized in the simple formulas

        L_(i,i+1)(x) = [(x - x_i) y_(i+1) - (x - x_(i+1)) y_i] / (x_(i+1) - x_i)
                     = 1/(x_(i+1) - x_i) · det | x - x_i      y_i     |
                                               | x - x_(i+1)  y_(i+1) |,   i = 0, ..., n-1.

    In the above case the following table is obtained:

    i   x_i   sin x_i      L_(i,i+1)    L_(i,i+1,i+2)  L_(i,...,i+3)  L_(i,...,i+4)  L_(i,...,i+5)  L_(i,...,i+6)
    0   0     0.00000000   0.08682409   0.08748362     0.08716387     0.0871515      0.0871555      0.08715581
    1   10    0.17364818   0.08946219   0.08556515     0.08706520     0.0871914      0.0871590
    2   20    0.34202014   0.10505036   0.07656490     0.08605543     0.0875158
    3   30    0.50000000   0.14303098   0.05758383     0.08216103
    4   40    0.64278761   0.21138869   0.02809119
    5   50    0.76604444   0.31613012
    6   60    0.86602540

        (sin 5° - L_(0,...,6) = -7.0408·10^-8)

    Here the following notation is used:

        L_(i,i+1) = [(x - x_i) y_(i+1) - (x - x_(i+1)) y_i] / (x_(i+1) - x_i)

    corresponds to the intervals [x_i, x_(i+1)], i = 0, ..., 6-1, and x = 5. With those values the second step is performed by linear interpolation,

        L_(i,i+1,i+2) = [(x - x_i) L_(i+1,i+2) - (x - x_(i+2)) L_(i,i+1)] / (x_(i+2) - x_i),

    over the intervals [x_i, x_(i+2)], i = 0, ..., 6-2. In the same way the third step is carried out over the intervals [x_i, x_(i+3)], i = 0, ..., 6-3, etc. A graphical scheme of the iterated linear interpolation (the Aitken-Neville algorithm) is shown in Fig. 1.2.

    Fig. 1.2 Interpolation as the iteration of linear interpolations (Aitken-Neville algorithm).
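The Aitken-Neville scheme described above fits into a short Python sketch (added here for illustration; the book tabulates the steps by hand). The array L is overwritten column by column, exactly as in the table:

```python
import math

def neville(xs, ys, x):
    """Aitken-Neville algorithm: iterated linear interpolation, in place.
    After 'step' passes, L[i] holds L_{i,...,i+step}(x)."""
    L = list(ys)
    n = len(xs)
    for step in range(1, n):
        for i in range(n - step):
            L[i] = ((x - xs[i]) * L[i + 1]
                    - (x - xs[i + step]) * L[i]) / (xs[i + step] - xs[i])
    return L[0]

xs = [10.0 * i for i in range(7)]                 # nodes in degrees
ys = [math.sin(math.radians(xi)) for xi in xs]
print(neville(xs, ys, 5.0), math.sin(math.radians(5)))
```

At a node the scheme reproduces the tabulated value exactly; at x = 5 it gives the same result as Newton's forward polynomial, with an error of about -7·10^-8.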

    Remark. Interpolation is meaningful only when values of the function cannot be obtained easily. Its right place was in work with laboriously computed tables of functions. Linear interpolation between two neighboring values was used as usual, quadratic interpolation was used rarely, and interpolations of higher order were rather of academic value. Today, values of analytically given functions can be computed very effectively and with sufficient precision by computers, and the role of interpolation has been displaced to the background of many numerical methods. There are many further types of interpolation polynomials, but we do not see any reason to mention them here. The preciseness of interpolation can also be assessed by noticing the similarity between Newton's polynomial and Taylor's polynomial. But the evaluation of derivatives of unknown functions appears to be problematic.

    So far the interpolation polynomial has been defined only by giving its values at n+1 different nodes. Defining the Taylor polynomial of the n-th degree by giving the values of a function and its first n derivatives at a single point appears in some sense opposite to the above interpolation. Between these two extreme cases many combinations may exist, when at some points the values of a function as well as the values of its derivatives are given. In such cases the so-called Hermite interpolation can be defined [6].

    Another possibility is represented by piecewise connected polynomials of the same degree, with or without some further conditions on their interconnections. As a rule, continuity and smoothness of the links, respectively, are of special importance.

    Example. Let a function f be approximated by a continuous, piecewise linear function S1 assembled of line segments S_(1,i) connecting two neighboring points (x_i, y_i), (x_(i+1), y_(i+1)),

        S_(1,i)(x) = y_i + a_i (x - x_i),   i = 0, 1, ..., n-1.

    Here n new constants a_i are introduced, where a_i = (y_(i+1) - y_i)/(x_(i+1) - x_i). The coefficients a_i can be determined from the condition that S_(1,i)(x) = y_i + a_i (x - x_i) shall go through the point (x_(i+1), y_(i+1)), i.e. S_(1,i)(x_(i+1)) = y_(i+1) = y_i + a_i (x_(i+1) - x_i). This way we obtain the following system of n equations for the n unknowns a_i:

        A a = | x_1 - x_0  0          ...  0              | | a_0     |   | y_1 - y_0     |
              | 0          x_2 - x_1  ...  0              | | a_1     | = | y_2 - y_1     |
              | ......................................... | | ...     |   | ...           |
              | 0          0          ...  x_n - x_(n-1)  | | a_(n-1) |   | y_n - y_(n-1) | = b.

    The matrix A is a diagonal one, A = diag(x_1 - x_0, ..., x_n - x_(n-1)), and its inverse is also diagonal, A^-1 = diag(1/(x_1 - x_0), ..., 1/(x_n - x_(n-1))). Therefore,

        a = (a_0, a_1, ..., a_(n-1))^T = A^-1 b
          = ( (y_1 - y_0)/(x_1 - x_0), (y_2 - y_1)/(x_2 - x_1), ..., (y_n - y_(n-1))/(x_n - x_(n-1)) )^T.

    This trivial example of a continuous broken line may inspire the use of some more versatile functions instead of linear segments. Using polynomials of the second or third degree (quadratic or cubic polynomials) offers the simplest possibility.

    1.2 Spline Approximation

    Webster's Encyclopedic Unabridged Dictionary (1996) says: spline = 1. a long, narrow, thin strip of wood, metal etc., 2. a long flexible strip of wood or the like used in drawing curves.

    Let points (x_i, y_i), i = 0, 1, ..., n, with x_0 < x_1 < ... < x_n, be given.


    Quadratic spline.

    A quadratic spline is a smooth (differentiable) piecewise quadratic function S2. Hence, it consists of n parabolic arcs S_(2,i), each spanned between two neighboring points (x_i, y_i), (x_(i+1), y_(i+1)). Those arcs can be written as

        S_(2,i)(x) = y_i + a_i (x - x_i) + b_i (x - x_i)^2,   i = 0, 1, ..., n-1.

    Now the number of unknown coefficients a_i, b_i has been doubled to 2n. The requirement that S2 shall go through all the points (x_i, y_i), i.e.

        S_(2,i)(x_(i+1)) = y_i + a_i (x_(i+1) - x_i) + b_i (x_(i+1) - x_i)^2 = y_(i+1),   i = 0, 1, ..., n-1,

    provides only n conditions. Thus, further n conditions are to be added. Smooth joining of the individual segments S_(2,i), S_(2,i+1), i.e. S'_(2,i)(x_(i+1)) = S'_(2,i+1)(x_(i+1)) at the points x_1, ..., x_(n-1), provides another n-1 equations:

        a_i + 2b_i (x_(i+1) - x_i) = a_(i+1) + 2b_(i+1) (x_(i+1) - x_(i+1)) = a_(i+1),   i = 0, 1, ..., n-2.

    (As a matter of fact, the segment S_(2,i)(x) is used only on the interval [x_i, x_(i+1)], and strictly speaking derivatives from the left or right should be used; but for the sake of simplicity we use the fact that the polynomials S_(2,i)(x) and their derivatives are defined on the whole R^1.)

    One equation is still to be given. Let it be, e.g., S'2(x_n) = S'_(2,n-1)(x_n) = c_n. This completes the system of 2n equations for the 2n unknowns a_i, b_i, A a = c, whose matrix is tridiagonal and therefore easily invertible:

        A = | 1  x_1 - x_0     0   0             0   ...  0  0                |
            | 1  2(x_1 - x_0)  -1  0             0   ...  0  0                |
            | 0  0             1   x_2 - x_1     0   ...  0  0                |
            | 0  0             1   2(x_2 - x_1)  -1  ...  0  0                |
            | ............................................................... |
            | 0  0             0   0             0   ...  1  x_n - x_(n-1)    |
            | 0  0             0   0             0   ...  1  2(x_n - x_(n-1)) |.

    The right-side vector is

        c = ( (y_1 - y_0)/(x_1 - x_0), 0, (y_2 - y_1)/(x_2 - x_1), 0, ..., (y_n - y_(n-1))/(x_n - x_(n-1)), c_n )^T.

    In order to simplify the writing the following notation is introduced:

        h_i = x_(i+1) - x_i,   D_i = (y_(i+1) - y_i)/(x_(i+1) - x_i)   for i = 0, 1, ..., n-1.

    In manual computing the system A a = c may be represented by the augmented matrix

        (A | c) = | 1  h_0   0   0     0   ...  0  0          | D_0     |
                  | 1  2h_0  -1  0     0   ...  0  0          | 0       |
                  | 0  0     1   h_1   0   ...  0  0          | D_1     |
                  | 0  0     1   2h_1  -1  ...  0  0          | 0       |
                  | ......................................... | ...     |
                  | 0  0     0   0     0   ...  1  h_(n-1)    | D_(n-1) |
                  | 0  0     0   0     0   ...  1  2h_(n-1)   | c_n     |.


    This is transformed by equivalent operations so that the matrix A becomes the 2n×2n unit matrix and the column c turns into the vector of coefficients a = (a_0, b_0, ..., a_(n-1), b_(n-1))^T.

    Example. Let the points (x_0, y_0) = (1, 2), (x_1, y_1) = (4, 1), (x_2, y_2) = (6, 1) be given. Let us find a quadratic spline S2(x) running through those points while S'2(x_2) = c. Here h_0 = 3, h_1 = 2, D_0 = -1/3, D_1 = 0, and the corresponding augmented matrix is transformed by equivalent operations:

        (A | c) = | 1  3  0   0 | -1/3 |      | 1  3  0   0 | -1/3 |              | 1  0  0  0 | (3c-2)/3 |
                  | 1  6  -1  0 | 0    |  →   | 0  3  -1  0 | 1/3  |  →  ...  →   | 0  1  0  0 | (1-3c)/9 |
                  | 0  0  1   2 | 0    |      | 0  0  1   2 | 0    |              | 0  0  1  0 | -c       |
                  | 0  0  1   4 | c    |      | 0  0  0   2 | c    |              | 0  0  0  1 | c/2      |,

    i.e.

        (a_0, b_0, a_1, b_1)^T = ( (3c-2)/3, (1-3c)/9, -c, c/2 )^T.

    If, for example, c = 1, we obtain

        S_(2,0)(x) = 2 + (1/3)(x - 1) - (2/9)(x - 1)^2,
        S_(2,1)(x) = 1 - (x - 4) + (x - 4)^2 / 2.
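Because the end-slope condition pins down the last segment, the same system can also be solved segment by segment from the right, without the full elimination shown above. A short Python sketch of this alternative (added for illustration, not the book's EXCEL procedure):

```python
def quadratic_spline(xs, ys, c_end):
    """Coefficients (a_i, b_i) of S_{2,i}(x) = y_i + a_i (x - x_i) + b_i (x - x_i)^2
    with the prescribed end slope S_2'(x_n) = c_end, solved backward."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    D = [(ys[i + 1] - ys[i]) / h[i] for i in range(n)]
    a, b = [0.0] * n, [0.0] * n
    # last segment: a + h b = D (through-point), a + 2 h b = c_end (end slope)
    b[n - 1] = (c_end - D[n - 1]) / h[n - 1]
    a[n - 1] = D[n - 1] - b[n - 1] * h[n - 1]
    # remaining segments from smoothness a_i + 2 b_i h_i = a_{i+1}
    for i in range(n - 2, -1, -1):
        b[i] = (a[i + 1] - D[i]) / h[i]
        a[i] = D[i] - b[i] * h[i]
    return a, b

a, b = quadratic_spline([1.0, 4.0, 6.0], [2.0, 1.0, 1.0], 1.0)
print(a, b)  # a = [1/3, -1], b = [-2/9, 1/2]
```

For c = 1 this reproduces the coefficients of S_(2,0) and S_(2,1) found above.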

    [Figure: the segments S_(2,0)(x), S_(2,1)(x) for the five values of c; x in [0, 7], y in [-1, 4].]

    Fig. 1.3 Quadratic splines S2 when the derivative at the point x_2 = 6 is dS2(x_2)/dx = c = 1, 0.5, 0, -0.5, -1.


    Cubic spline.

    A cubic spline is a smooth function S3 composed of arcs of polynomials of the 3rd degree, i.e. of arcs of cubic parabolas S_(3,i) between two neighboring points (x_i, y_i), (x_(i+1), y_(i+1)). Those polynomial segments can again be written as

        S_(3,i)(x) = y_i + a_i (x - x_i) + b_i (x - x_i)^2 + c_i (x - x_i)^3,   i = 0, 1, ..., n-1.

    Now 3n unknown coefficients a_i, b_i, c_i, i = 0, 1, ..., n-1, are to be determined. Thus 3n conditions must be given.

    The fact that S3 contains all the points (x_i, y_i) provides the first n conditions,

        S_(3,i)(x_(i+1)) = y_i + a_i (x_(i+1) - x_i) + b_i (x_(i+1) - x_i)^2 + c_i (x_(i+1) - x_i)^3 = y_(i+1),

    for i = 0, 1, ..., n-1. Smoothness of the joints at the points x_1, ..., x_(n-1) gives the next n-1 equations,

        S'_(3,i)(x_(i+1)) = a_i + 2b_i (x_(i+1) - x_i) + 3c_i (x_(i+1) - x_i)^2 = a_(i+1) = S'_(3,i+1)(x_(i+1)).

    For polynomials of the 3rd degree the equality of the second derivatives of neighboring segments can also be required, i.e. further n-1 equations are added:

        S''_(3,i)(x_(i+1)) = 2b_i + 6c_i (x_(i+1) - x_i) = 2b_(i+1) = S''_(3,i+1)(x_(i+1)).

    Now those n + (n-1) + (n-1) = 3n - 2 equations need to be completed with some two reasonable requirements depending on the situation. For example:

    Equality of all derivatives at the boundary points for k = 0, 1, 2 can be required, i.e. S^(k)_(3,0)(x_0) = S^(k)_(3,n-1)(x_n), which gives the periodic spline.

    The values of the first derivatives at the first and the last point can be prescribed, i.e. S'_(3,0)(x_0) = a_0 = d_0, S'_(3,n-1)(x_n) = a_(n-1) + 2b_(n-1)(x_n - x_(n-1)) + 3c_(n-1)(x_n - x_(n-1))^2 = d_n.

    But most preferably, S''_(3,0)(x_0) = S''_(3,n-1)(x_n) = 0 is required, which defines the so-called natural spline (mechanically realized by a thin elastic strip with fixed endpoints going through all the points (x_0, y_0), ..., (x_n, y_n)).

    The natural spline minimizes the functional ∫_(x_0)^(x_n) [f''(x)]^2 dx on the set of all twice differentiable functions on [x_0, x_n] that fulfill the conditions f(x_i) = y_i (Holladay). From differential geometry [9] we know that the curvature of the graph of f is κ = f''/(1 + f'^2)^(3/2); if f'^2 is small, then κ ≈ f'', so the natural spline approximately minimizes the total squared curvature of the graph.


    The condition of continuity of the second derivative, 2b_i + 6c_i h_i = 2b_(i+1), yields

        c_i = (b_(i+1) - b_i)/(3h_i)                                        (c)

    for i = 0, ..., n-2. The condition of continuity of the spline yields

        y_i + a_i h_i + b_i h_i^2 + c_i h_i^3 = y_(i+1),

    thus a_i = [(y_(i+1) - y_i) - b_i h_i^2 - c_i h_i^3]/h_i. Using (c) and simple rearrangements gives

        a_i = (y_(i+1) - y_i)/h_i - b_i h_i - [(b_(i+1) - b_i)/(3h_i)] h_i^2
            = (y_(i+1) - y_i)/h_i - (b_(i+1) + 2b_i) h_i/3
            = D_i - (b_(i+1) + 2b_i) h_i/3.                                  (a)

    So both a_i and c_i are expressed as functions of the b_i for i = 0, ..., n-2. The identity of the first derivatives at the internal nodes implies

        a_i + 2b_i h_i + 3c_i h_i^2 = a_(i+1),   i.e.   2b_i + 3c_i h_i = (a_(i+1) - a_i)/h_i.

    Substitution of a_i by (a) and c_i by (c) yields

        2b_i + (b_(i+1) - b_i) = [D_(i+1) - (b_(i+2) + 2b_(i+1)) h_(i+1)/3 - D_i + (b_(i+1) + 2b_i) h_i/3]/h_i,

    i.e.

        (b_i + b_(i+1)) h_i = D_(i+1) - D_i - (b_(i+2) + 2b_(i+1)) h_(i+1)/3 + (b_(i+1) + 2b_i) h_i/3.

    Multiplication of the last equation by 3 and rearrangement give

        h_i b_i + 2(h_i + h_(i+1)) b_(i+1) + h_(i+1) b_(i+2) = 3 (D_(i+1) - D_i).

Thus, with the natural conditions b_0 = 0 and b_n := 0, the matrix of the system for the remaining coefficients b_1, ..., b_{n−1} of the natural spline is

        | 2(h_0+h_1)   h_1          0          ...   0                   0          |
        | h_1          2(h_1+h_2)   h_2        ...   0                   0          |
    B = | ...          ...          ...        ...   ...                 ...        |
        | 0            0            ...  h_{n-3}     2(h_{n-3}+h_{n-2})  h_{n-2}    |
        | 0            0            ...  0           h_{n-2}             2(h_{n-2}+h_{n-1}) |

and the right hand side vector is

    p = (3(D_1 − D_0), 3(D_2 − D_1), ..., 3(D_{n−1} − D_{n−2}))ᵀ.

The tridiagonal matrix B is regular (it is strictly diagonally dominant). The coefficients b_1, ..., b_{n−1} are the components of the vector b = B⁻¹p. The coefficients a_i and c_i are then defined by equations (a) and (c).
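For larger n the system is solved numerically. The following is a minimal sketch (ours, not part of the book; the function name `natural_spline` is our own) that assembles the tridiagonal system, solves it by forward elimination and back substitution, and returns the coefficients a_i, b_i, c_i:

```python
# Natural cubic spline: solve h_i b_i + 2(h_i+h_{i+1}) b_{i+1} + h_{i+1} b_{i+2}
# = 3(D_{i+1} - D_i) for b_1..b_{n-1}, with b_0 = b_n = 0 (natural conditions).
def natural_spline(xs, ys):
    n = len(xs) - 1
    h = [xs[i+1] - xs[i] for i in range(n)]
    D = [(ys[i+1] - ys[i]) / h[i] for i in range(n)]
    b = [0.0] * (n + 1)                 # b[0] = b[n] = 0 already set
    m = n - 1                           # number of unknowns b_1 .. b_{n-1}
    if m > 0:
        diag = [2 * (h[i] + h[i+1]) for i in range(m)]
        sup  = [h[i+1] for i in range(m - 1)]
        sub  = [h[i]   for i in range(1, m)]
        rhs  = [3 * (D[i+1] - D[i]) for i in range(m)]
        for i in range(1, m):           # forward elimination (Thomas algorithm)
            w = sub[i-1] / diag[i-1]
            diag[i] -= w * sup[i-1]
            rhs[i]  -= w * rhs[i-1]
        u = [0.0] * m
        u[-1] = rhs[-1] / diag[-1]
        for i in range(m - 2, -1, -1):  # back substitution
            u[i] = (rhs[i] - sup[i] * u[i+1]) / diag[i]
        b[1:n] = u
    a = [D[i] - (b[i+1] + 2*b[i]) * h[i] / 3 for i in range(n)]   # equation (a)
    c = [(b[i+1] - b[i]) / (3 * h[i]) for i in range(n)]          # equation (c)
    return a, b, c

a, b, c = natural_spline([1, 4, 6], [2, 1, 1])
print(a, b, c)
```

The call at the end uses the nodes of the example that follows; the returned coefficients agree with the hand computation there.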

Example. Let us construct the natural spline going through the nodes of the foregoing example, i.e. (x_0, y_0) = (1, 2), (x_1, y_1) = (4, 1), (x_2, y_2) = (6, 1). In this case the coefficients of the two cubic parts

S_{3,0}(x) = y_0 + a_0(x − x_0) + b_0(x − x_0)² + c_0(x − x_0)³,
S_{3,1}(x) = y_1 + a_1(x − x_1) + b_1(x − x_1)² + c_1(x − x_1)³


can be calculated directly from the definition conditions:
S_{3,0}(x_1) = S_{3,1}(x_1),  i.e.  a_0 + b_0(x_1 − x_0) + c_0(x_1 − x_0)² = (y_1 − y_0)/(x_1 − x_0),
S′_{3,0}(x_1) = S′_{3,1}(x_1),  i.e.  a_0 + 2b_0(x_1 − x_0) + 3c_0(x_1 − x_0)² − a_1 = 0,
S″_{3,0}(x_1) = S″_{3,1}(x_1),  i.e.  b_0 + 3c_0(x_1 − x_0) − b_1 = 0,
S_{3,1}(x_2) = y_2,  i.e.  a_1 + b_1(x_2 − x_1) + c_1(x_2 − x_1)² = (y_2 − y_1)/(x_2 − x_1),
S″_{3,0}(x_0) = 2b_0 = 0,  i.e.  b_0 = 0,
S″_{3,1}(x_2) = 0,  i.e.  b_1 + 3c_1(x_2 − x_1) = 0.
Substituting x_1 − x_0 = 3, (y_1 − y_0)/(x_1 − x_0) = −1/3, x_2 − x_1 = 2, (y_2 − y_1)/(x_2 − x_1) = 0 yields an augmented matrix for the five remaining coefficients (a_0, c_0, a_1, b_1, c_1)ᵀ.

In the next steps, Gauss elimination transforms this augmented matrix onto the 5×5 unit matrix and the vector of coefficients

(a_0, c_0, a_1, b_1, c_1)ᵀ = (−13/30, 1/90, −2/15, 1/10, −1/60)ᵀ.

Both parts of the sought spline are then
S_{3,0}(x) = 2 − (13/30)(x − 1) + (1/90)(x − 1)³,
S_{3,1}(x) = 1 − (2/15)(x − 4) + (1/10)(x − 4)² − (1/60)(x − 4)³.

Remark. For the sake of simplifying the calculations and using the general relations let us assemble the table:

i   x_i   y_i   h_i   D_i
0   1     2     3     −1/3
1   4     1     2     0
2   6     1

Because in this case n = 2, the system of equations for the coefficients b_i is reduced to one equation (the last one in the system B b = p for a general n)

h_0 b_0 + 2(h_0 + h_1) b_1 = 3(D_1 − D_0).

The equation b_0 = 0 yields b_1 = (3/2)(D_1 − D_0)/(h_0 + h_1) = 1/10. When we put b_2 = 0, the relations mentioned above enable us to calculate

a_0 = D_0 − (b_1 + 2b_0) h_0 /3 = −13/30,   a_1 = D_1 − (b_2 + 2b_1) h_1 /3 = −2/15,
c_0 = (b_1 − b_0)/(3h_0) = 1/90,   c_1 = (b_2 − b_1)/(3h_1) = −1/60.

The resulting natural cubic spline is shown in Fig. 1.4. Its derivative at the endpoint x = 6 is S′_{3,1}(6) = 1/15.

This value and the preserved internal nodes determine a quadratic spline composed of two parabolic segments
S_{2,0}(x) = 2 + (x − 1)[−3/5 + (4/45)(x − 1)],   S_{2,1}(x) = 1 + (x − 4)[−1/15 + (1/30)(x − 4)].
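The two cubic pieces obtained above can be verified numerically; a small check of ours (the function names are our own) tests interpolation, C¹ and C² continuity at the internal node, the natural end conditions, and the quoted slope at x = 6:

```python
# Verify the natural spline pieces from the example:
# S30 on [1,4], S31 on [4,6].
def S30(x): return 2 - (13/30)*(x-1) + (1/90)*(x-1)**3
def S31(x): return 1 - (2/15)*(x-4) + (1/10)*(x-4)**2 - (1/60)*(x-4)**3

def d(f, x, eps=1e-6):       # central-difference first derivative
    return (f(x+eps) - f(x-eps)) / (2*eps)

def d2(f, x, eps=1e-4):      # central-difference second derivative
    return (f(x+eps) - 2*f(x) + f(x-eps)) / eps**2

checks = [
    abs(S30(1) - 2), abs(S30(4) - 1), abs(S31(4) - 1), abs(S31(6) - 1),
    abs(d(S30, 4) - d(S31, 4)),        # C1 continuity at x = 4
    abs(d2(S30, 4) - d2(S31, 4)),      # C2 continuity at x = 4
    abs(d2(S30, 1)), abs(d2(S31, 6)),  # natural end conditions S'' = 0
    abs(d(S31, 6) - 1/15),             # slope quoted in the text
]
print(max(checks))
```

All residuals come out at rounding level, confirming the coefficients.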



Fig. 1.4 Cubic versus quadratic spline for the same slope S′_3(6) = S′_2(6) = 1/15.

Example. Let us compare the Newton polynomial with the natural spline for the function f(x) = sin(x²) on the interval [0, 4], with the nodes given by the abscissas x = 0, 1, 2, 3, 4.

Fig. 1.5 Approximation of f(x) = sin(x²) on [0, 4] by the cubic spline S_3 and the Newton polynomial N.



Results are shown in Fig. 1.5. Because f is an even function with f″(0) = 2 while the natural spline forces S″_3(0) = 0, the spline S_3 cannot be expected to provide a good approximation near x = 0. This is shown also by the error f(x) − S_3(x). Nevertheless, the average error (in [0, 4]) of the spline is about a third less than that of the Newton polynomial, and for the total error

max{|f(x) − S_3(x)|: x ∈ [0, 4]} ≈ (1/2) max{|f(x) − N(x)|: x ∈ [0, 4]}.

The approximation can be enhanced if the derivative at both endpoints is respected. To do so, two interpolation nodes are added from the close neighborhoods of the endpoints, x = −0.01 and x = 4.01. Then x_0 = −0.01, x_1 = 0, x_2 = 1, ..., x_5 = 4, x_6 = 4.01. The reduction of the error to about one third of the original error is shown in Fig. 1.6. Further improvement would be obtained by adding more nodes, of course.

Fig. 1.6 Approximation of the function f(x) = sin(x²) by the natural spline with added nodes at x = −0.01 and x = 4.01.

Final remarks. Splines are a very useful tool, especially in two- and three-dimensional spaces, due to their ability to provide elegant smooth solutions of many problems solved by FEM. For the classical computation tools (pencil and paper), however, their computation is too complex. They represent a typical area for computers due to the huge number of elementary operations, but they are treacherous for the computing abilities of human individuals.



2 INTEGRATION

It happens very often that for a given function no simple analytically expressible antiderivative can be found. This is the case with sin x², exp(−x²), sin x/x, etc. Also, sometimes not the function f itself but its integral over some area is physically meaningful, while values of f can be obtained only in some difficult way (measurement).

As a rule, numerical integration deals with functions integrable without any doubt over the considered interval, and such functions are approximated locally by easily integrable functions. Polynomials are the first choice at hand, and their success with continuous functions is guaranteed by the Weierstrass theorem.

The definition of the Riemann integral [9] is based on substitution of the given function f by a step function. Let

D = {x_0, x_1, ..., x_n: a = x_0 < x_1 < ... < x_{n−1} < x_n = b}

be a partition of an interval [a, b] and t_k ∈ I_k = [x_{k−1}, x_k] for any k = 1, ..., n. The Riemann sum

R_n = Σ_{k=1}^n f(t_k)(x_k − x_{k−1})

represents the area generated by a composition of rectangle areas. Let us define M_k = sup{f(x): x ∈ I_k}, m_k = inf{f(x): x ∈ I_k}. Obviously,

m_k(x_k − x_{k−1}) ≤ ∫_{x_{k−1}}^{x_k} f ≤ M_k(x_k − x_{k−1})

and

∫_{x_{k−1}}^{x_k} f ≈ ((M_k + m_k)/2)(x_k − x_{k−1})  (i)

is an improved estimate of the integral over the partial interval [x_{k−1}, x_k]. The right hand side can be interpreted geometrically as the area of the trapezoid of the height h_k = x_k − x_{k−1} and with parallel sides of the lengths M_k, m_k.

If f is monotone on [x_{k−1}, x_k], then M_k + m_k = f(x_{k−1}) + f(x_k), and the right side of the relation (i) may be rewritten as follows:

(f(x_{k−1}) + f(x_k)) h_k /2.

Additivity of the integral immediately gives the so called trapezoid method for a general (non-equidistant) partition of the interval [a, b]:

∫_a^b f = Σ_{k=1}^n ∫_{x_{k−1}}^{x_k} f ≈ Σ_{k=1}^n (f(x_{k−1}) + f(x_k)) h_k /2 = L_n.

In an equidistant partition, x_k − x_{k−1} = h = (b − a)/n, k = 1, ..., n, the sum L_n can be written as

L_n = h Σ_{k=1}^n (f(x_{k−1}) + f(x_k))/2,

i.e. as the sum of moving averages of values of the function f multiplied by the step length.

This can be written shortly in the form of the scalar product

L_n = [(b − a)/(2n)] (A·F),  (T)

  • 8/12/2019 newton Interpolations

    21/126

    2 INTEGRATION

    F. KOUTNY: ELEMENTARY NUMERICAL METHODS & FOURIER ANALYSIS

    18

where A = (1, 2, 2, ..., 2, 1)ᵀ and F = (f(a + 0·(b−a)/n), f(a + 1·(b−a)/n), f(a + 2·(b−a)/n), ..., f(a + (n−1)·(b−a)/n), f(b))ᵀ.

When integrating functions that do not change (oscillate) fast, the trapezoid formula (T) yields quite acceptable results also for constant steps. Otherwise the step length x_k − x_{k−1} needs to be adapted with respect to the change rates of the integrand.

Example 1. Let us calculate I = ∫_0^1 √x dx. The antiderivative of f(x) = √x is (2/3)x^{3/2}, so I = 2/3 = 0.6666666... The integrand f is increasing, but its derivative from the right at 0 is infinite. For various n ∈ N and constant step h = 1/n the corresponding L_n can be easily computed in EXCEL.

n      L_n
1      0.5
2      0.603 553
4      0.643 283
8      0.658 130
16     0.663 581
32     0.665 559
64     0.666 271
128    0.666 526
256    0.666 617
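The L_n values of the table can be reproduced with a few lines of Python (a sketch of ours, not the book's EXCEL sheet; the function name `trapezoid` is our own):

```python
# Composite trapezoid rule with constant step h = (b - a)/n,
# applied to f(x) = sqrt(x) on [0, 1]; reproduces the L_n table.
from math import sqrt

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + k*h) for k in range(1, n))
    return h * s

for n in (1, 2, 4, 8, 16):
    print(n, round(trapezoid(sqrt, 0.0, 1.0, n), 6))
```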

The convergence L_n → 2/3 is slow because secants do not provide a proper approximation of √x near the origin. To improve the results it would be convenient to shorten the length of the partition intervals when approaching zero. It can be done in the simplest way by a uniform partition on the ordinate axis, choosing y_k = k/n, k = 0, 1, ..., n. Then x_k = k²/n² and

L_n = (1/2) Σ_{k=1}^n ((k−1)/n + k/n)(k²/n² − (k−1)²/n²) = (1/(2n³)) Σ_{k=1}^n (2k − 1)².

Due to

Σ_{k=1}^n (2k − 1)² = Σ_{k=1}^{2n} k² − 4 Σ_{k=1}^n k² = 2n(2n+1)(4n+1)/6 − 4n(n+1)(2n+1)/6 = n(4n² − 1)/3,

one gets

L_n = (1/(2n³)) · n(4n² − 1)/3 = 2/3 − 1/(6n²) = I − 1/(6n²).

At the same time this formula also gives the total error of this method of numerical integration (approximation of √x by the linear spline with n + 1 nodes shown above):

|L_n − I| = 1/(6n²).
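The closed formula L_n = 2/3 − 1/(6n²) is easy to confirm numerically; a sketch of ours with the y-uniform nodes x_k = k²/n²:

```python
# Trapezoid sums for sqrt(x) with nodes x_k = k^2/n^2 (uniform on the y-axis);
# the result equals 2/3 - 1/(6 n^2) up to rounding.
from math import sqrt

def L(n):
    xs = [(k / n) ** 2 for k in range(n + 1)]
    return sum((sqrt(xs[k-1]) + sqrt(xs[k])) / 2 * (xs[k] - xs[k-1])
               for k in range(1, n + 1))

for n in (5, 50, 500):
    print(n, L(n), 2/3 - 1/(6*n*n))
```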


It also says that L_n < I for any n ∈ N (Fig. 2.1). If the required precision is ε = 10^{−2m}, then n > 10^m/√6. E.g. ε = 10^{−4} needs n > 100/√6 ≈ 41.


Fig. 2.1 Approximation of √x by a linear spline with 6 nodes equidistant on the y-axis.

This transparent example shows that the precision of numerical computing does not depend only on the number of interpolation nodes but also on their distribution. A proper choice of interpolation nodes needs to respect and utilize the knowledge of the integrand.

2.1 Numerical Quadrature

The principle of numerical quadrature consists in approximating the given function f by some simple function p on the integration interval [a, b] and putting

∫_a^b f ≈ ∫_a^b p.


Most often, f is given in some way and p is an interpolation function (polynomial or spline). The symbol ≈ indicates that f may be given with some uncertainty, e.g. as a data set of a measurement. The approximation function p is then constructed to fit the data in some sense. So far only step functions (with 1 interpolation node in each partial interval) and piecewise linear functions (with 2 interpolation nodes in each partial interval) have been used in computing integrals. It can be expected, of course, that formulas based on more interpolation nodes will be more precise.

To simplify considerations let a general interval [a, b] be reduced onto the interval [−1, 1] by the following homothetic transform (Fig. 2.2):

x(t) = ((b − a)/2) t + (a + b)/2.

Fig. 2.2 Homothetic transform of the interval [a, b] onto the interval [−1, 1].

Then, obviously,

∫_a^b p(x) dx = ∫_{−1}^1 p(x(t)) x′(t) dt = ((b − a)/2) ∫_{−1}^1 P(t) dt,

where P(t) = p(x(t)).

The trapezoid formula can now be written as

((b − a)/2) (1 − (−1)) (P(−1) + P(1))/2 = (b − a)(P(−1) + P(1))/2,

i.e. the length of the integration interval multiplied by the mean of the function values at the endpoints (the nodes ±1) of the interval [−1, 1].

If the number of interpolation nodes is increased to n > 2, the general integration formula can be sought in the form

∫_a^b p(x) dx = ((b − a)/2) ∫_{−1}^1 P(t) dt ≈ ((b − a)/2)(A_1 P(t_1) + ... + A_n P(t_n)),  (G)

where the coefficients A_i are also called weights and the t_i are interpolation nodes (i = 1, ..., n). For a linear function P we have n = 2 and A_1 = A_2 = 1, t_1 = −t_2 = −1. If n > 2 and A_k, t_k are chosen from a richer set, one can expect an increase of the precision of the integration formula.


Example 2. Let us choose, e.g., t_1 = −7/8, t_2 = 0, t_3 = 7/8. The three corresponding coefficients A_i can be found by the requirement that the formula be precise for polynomials up to the second degree, i.e.

∫_{−1}^1 t⁰ dt = A_1 + A_2 + A_3 = 2,
∫_{−1}^1 t¹ dt = −(7/8) A_1 + 0·A_2 + (7/8) A_3 = 0,
∫_{−1}^1 t² dt = (7/8)² A_1 + 0·A_2 + (7/8)² A_3 = 2/3.

The second equation yields A_3 = A_1. This, after being put into the third equation, gives A_1 = 64/147. The first equation then implies A_2 = 2(1 − A_1) = 166/147. Thus, the sought formula is

∫_a^b p(x) dx ≈ ((b − a)/(2·147)) [64 P(−7/8) + 166 P(0) + 64 P(7/8)]
            = ((b − a)/147) [32 p((a+b)/2 − 7(b−a)/16) + 83 p((a+b)/2) + 32 p((a+b)/2 + 7(b−a)/16)].

But this does not seem very convenient for practical use because it does not fit the principal requirement of mathematical esthetics: maximal simplicity. In spite of this, however, it shows the advantage of the symmetry of the nodes t_k with respect to the center of the integration interval.

We have already said that one way to enhance the precision of integration formulas consists in increasing the number n of interpolation nodes. Equidistant nodes and the interpolation polynomial lead to the Newton-Cotes integration formulas [4-8].

However, a more practical way towards higher precision consists in repeated partitioning of the integration area while applying a fixed, simple integration formula (with a small number of nodes and an interpolation polynomial of low order). This was illustrated by Example 1 with the trapezoid formula. Therefore, the further text will show several formulas limited to the numbers n = 2, 3.

Simpson's Formula
Let us put t_{−1} = −1, t_0 = 0, t_1 = 1 and seek the corresponding integration formula

∫_{−1}^1 P(t) dt = A_{−1} P(t_{−1}) + A_0 P(t_0) + A_1 P(t_1).

Like in Example 2 we obtain the following equations for the coefficients A:

∫_{−1}^1 t⁰ dt = A_{−1} + A_0 + A_1 = 2,
∫_{−1}^1 t¹ dt = −A_{−1} + 0·A_0 + A_1 = 0,


∫_{−1}^1 t² dt = (−1)² A_{−1} + 0·A_0 + (1)² A_1 = 2/3.

This system expressed in the matrix form is

| 1  1  1 | |A_{−1}|   | 2  |
|−1  0  1 | |A_0   | = | 0  |
| 1  0  1 | |A_1   |   |2/3 |.

The vector A can be found by equivalent operations on the augmented matrix; the elimination yields A_{−1} = 1/3, A_0 = 4/3, A_1 = 1/3.

Hence,

∫_{−1}^1 P(t) dt ≈ [P(−1) + 4P(0) + P(1)]/3

or

∫_a^b p(x) dx ≈ ((b − a)/6) [p(a) + 4p((a + b)/2) + p(b)].

Because ∫_{−1}^1 t^{2k+1} dt = 0 for k = 0, 1, 2, ..., the formula above is correct also for cubic polynomials (this is valid also for the formula from Example 2, of course).

Example 3. Let us calculate I = ∫_0^1 √x dx for the uniform partition of [0, 1] on the y-axis, y_k = k/5, k = 1, ..., 5 (Fig. 2.3).

The calculation performed in EXCEL may be summarized in the following table:

                              SIMPSON            TRAPEZOID
Step   x_i     p(x_i)       A_i   partial I    A_i   partial I
1      0.00    0             1                  1
       0.02    0.141421      4                  2
       0.04    0.2           1    0.005105      1    0.004828
2      0.04    0.2           1                  1
       0.10    0.316228      4                  2
       0.16    0.4           1    0.037298      1    0.036974
3      0.16    0.4           1                  1
       0.26    0.509902      4                  2
       0.36    0.6           1    0.101320      1    0.100990
4      0.36    0.6           1                  1
       0.50    0.707107      4                  2
       0.64    0.8           1    0.197327      1    0.196995
5      0.64    0.8           1                  1
       0.82    0.905539      4                  2
       1.00    1             1    0.325329      1    0.324997

                                  I ≈ 0.666379       I ≈ 0.664784


As expected, Simpson's formula gives a more precise result than the trapezoid formula while the number of nodes remains the same.
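The Simpson column of the table can be reproduced subinterval by subinterval; a sketch of ours (the function name `simpson` is our own):

```python
# Simpson's rule on a subinterval: (v-u)/6 * (f(u) + 4 f((u+v)/2) + f(v)).
# Applied to sqrt on the y-uniform partition 0, 0.04, 0.16, 0.36, 0.64, 1,
# it reproduces the total of Example 3.
from math import sqrt

def simpson(f, u, v):
    return (v - u) / 6 * (f(u) + 4 * f((u + v) / 2) + f(v))

cuts = [0.0, 0.04, 0.16, 0.36, 0.64, 1.0]
total = sum(simpson(sqrt, u, v) for u, v in zip(cuts, cuts[1:]))
print(round(total, 6))
```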

Gauss Quadrature
The nodes t_k and coefficients A_k can be required to give the quadrature formula maximum precision, i.e. to integrate polynomials of degree as high as possible. With two nodes t_{−1}, t_1 and two coefficients A_{−1}, A_1 one gets the formula

∫_{−1}^1 P(t) dt = A_{−1} P(t_{−1}) + A_1 P(t_1).

There are 4 unknowns on the right side, thus 4 equations are needed, i.e.

∫_{−1}^1 t⁰ dt = A_{−1} + A_1 = 2,
∫_{−1}^1 t¹ dt = A_{−1} t_{−1} + A_1 t_1 = 0,
∫_{−1}^1 t² dt = A_{−1} t_{−1}² + A_1 t_1² = 2/3,
∫_{−1}^1 t³ dt = A_{−1} t_{−1}³ + A_1 t_1³ = 0.

This system is nonlinear, which is a complication, but the symmetry with respect to 0 simplifies the situation. The equality A_{−1} = A_1 and the first equation immediately imply A_{−1} = A_1 = 1, the second and fourth equations are satisfied automatically, and the third one gives t_1² = 1/3 due to t_{−1} = −t_1. Then t_1 = −t_{−1} = √3/3 = 0.577 350 269...

The Gaussian formula with 2 nodes is

∫_{−1}^1 P(t) dt ≈ P(−√3/3) + P(√3/3)

or, with the original variable x,

∫_a^b p(x) dx ≈ ((b − a)/2) [p(((3 + √3)a + (3 − √3)b)/6) + p(((3 − √3)a + (3 + √3)b)/6)],

and it is as precise as Simpson's formula with 3 nodes.
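The exactness of the 2-node Gauss rule for cubic polynomials can be checked directly (a sketch of ours; the test polynomial is arbitrary):

```python
# 2-node Gauss rule on [-1, 1]: integral ~ P(-1/sqrt(3)) + P(1/sqrt(3)),
# exact for every polynomial of degree <= 3.
from math import sqrt

def gauss2(P):
    t = 1 / sqrt(3)
    return P(-t) + P(t)

# P(t) = t^3 + t^2 + t + 1 has exact integral 2/3 + 2 = 8/3 over [-1, 1]
P = lambda t: t**3 + t**2 + t + 1
print(gauss2(P))
```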

Let now n = 3 and let us look for the maximally precise formula

∫_{−1}^1 P(t) dt = A_{−1} P(t_{−1}) + A_0 P(t_0) + A_1 P(t_1).

The symmetry with respect to 0 gives t_0 = 0, t_{−1} = −t_1, A_{−1} = A_1, which reduces the number of unknowns to three: t_1, A_0, A_1. These can be determined from the equations for even powers of t (odd powers result in zeros and are useless):

∫_{−1}^1 t⁰ dt = A_0 + 2A_1 = 2,


∫_{−1}^1 t² dt = 2A_1 t_1² = 2/3,
∫_{−1}^1 t⁴ dt = 2A_1 t_1⁴ = 2/5.

Dividing the last equation by the second one yields t_1² = 3/5, thus t_1 = √(3/5) = √0.6.

Then the second equation yields A_1 = 1/(3t_1²) = 5/9. Substituting it into the first equation gives A_0 = 2 − 10/9 = 8/9. Thus, the Gaussian 3-node formula is

∫_{−1}^1 P(t) dt ≈ (1/9) [5P(−√0.6) + 8P(0) + 5P(√0.6)]

or

∫_a^b p(x) dx ≈ ((b − a)/18) [5p(a + (1 − √0.6)(b − a)/2) + 8p((a + b)/2) + 5p(a + (1 + √0.6)(b − a)/2)].

Example 4. We stay with the integral I = ∫_0^1 √x dx and the same partition of the integration area as in Simpson's rule in Example 3. Results obtained in EXCEL with the Gaussian 3-point formula are shown in the table below:

Step   Interval        Nodes x_i   p(x_i)     A_i   partial I
1      [0, 0.04]       0.004508    0.067142    5
                       0.020000    0.141421    8
                       0.035492    0.188393    5    0.005353
2      [0.04, 0.16]    0.053524    0.231353    5
                       0.100000    0.316228    8
                       0.146476    0.382722    5    0.037335
3      [0.16, 0.36]    0.182540    0.427247    5
                       0.260000    0.509902    8
                       0.337460    0.580913    5    0.101334
4      [0.36, 0.64]    0.391556    0.625745    5
                       0.500000    0.707107    8
                       0.608444    0.780028    5    0.197333
5      [0.64, 1.00]    0.680573    0.824968    5
                       0.820000    0.905539    8
                       0.959427    0.979504    5    0.325333

                                                    I ≈ 0.666688
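The table totals can be reproduced with the composite 3-node rule; a sketch of ours (the function name `gauss3` is our own):

```python
# Composite Gaussian 3-node rule: (v-u)/18 * (5 f(m-d) + 8 f(m) + 5 f(m+d)),
# with m = (u+v)/2 and d = sqrt(0.6)*(v-u)/2; partition as in Example 4.
from math import sqrt

def gauss3(f, u, v):
    m, d = (u + v) / 2, sqrt(0.6) * (v - u) / 2
    return (v - u) / 18 * (5*f(m - d) + 8*f(m) + 5*f(m + d))

cuts = [0.0, 0.04, 0.16, 0.36, 0.64, 1.0]
total = sum(gauss3(sqrt, u, v) for u, v in zip(cuts, cuts[1:]))
print(round(total, 6))
```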

Remark. Deriving the Gauss formulas shown here is very simple. But for formulas with more nodes we would have to use the traditional way through orthogonal polynomials. This interpretation can be found in many books on numerical mathematics, e.g. [4-8]. The Legendre polynomials create an orthogonal basis in the set of all polynomials on the interval [−1, 1]. If their roots are taken as the interpolation nodes t_k, the above


nonlinear system of equations is, due to orthogonality, reduced to a linear system for the A_k. This is the principal idea of Gauss quadrature.

Remark. Because the Gaussian formulas do not contain the endpoints of the integration interval, they can be used also for estimating convergent improper integrals of functions whose absolute value tends to infinity there. The Laguerre and Hermite orthogonal polynomials allow extending the Gaussian type quadrature also to infinite intervals [4-8].

Chebyshev Formulas
Generally, these formulas are of the type

∫_{−1}^1 P(t) dt ≈ A (P(t_1) + P(t_2) + ... + P(t_n))

with only one coefficient A (all the A_k in the general formula (G) are the same). Thus, e.g. for three nodes,

∫_{−1}^1 P(t) dt ≈ A (P(t_{−1}) + P(t_0) + P(t_1)).

The equation

∫_{−1}^1 t⁰ dt = 3A = 2

implies A = 2/3. If the nodes are symmetric with respect to t_0 = 0, then t_1 = −t_{−1} is obtained from

∫_{−1}^1 t² dt = 2A t_1² = (4/3) t_1² = 2/3.

Hence t_1 = 1/√2 = 0.707 106 781... and

∫_{−1}^1 P(t) dt ≈ (2/3) [P(−1/√2) + P(0) + P(1/√2)]

or

∫_a^b p(x) dx ≈ ((b − a)/3) [p((a + b)/2 − (b − a)/(2√2)) + p((a + b)/2) + p((a + b)/2 + (b − a)/(2√2))].

This formula is of the same precision order as Simpson's.
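Like Simpson's rule, the 3-node Chebyshev formula integrates cubics exactly, which can be checked directly (a sketch of ours):

```python
# 3-node Chebyshev rule on [-1, 1]: (2/3)*(P(-1/sqrt(2)) + P(0) + P(1/sqrt(2))).
from math import sqrt

def cheb3(P):
    t = 1 / sqrt(2)
    return 2/3 * (P(-t) + P(0) + P(t))

P = lambda t: t**3 + t**2 + t + 1   # exact integral over [-1, 1]: 8/3
print(cheb3(P))
```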

Precision and Richardson's Extrapolation
For every integration formula examples of its failure can be constructed. This is a consequence of the fact that all the information concerning the integrand is limited to a small number of interpolation nodes, while the success of the integration formula generally depends on the global quality of the approximation. For smooth functions with small oscillations the error is substantially given by the remainder of the Taylor polynomial around the middle of the integration interval, i.e. by the degree of the lowest power of the integration variable at which the integration formula ceases to be precise. An overview is shown in the following table.


Rule                 Trapezoid   Simpson, Gauss 2, Chebyshev 3   Gauss 3
Degree of error p    2           4                               6

As said above, numerical computing of integrals is carried out preferably by a simple quadrature formula (Simpson's or the Gauss 3-node formula) using a convenient (non-uniform) partition of the area of integration. Refining the partition is usually performed by bisection of the foregoing partial subintervals. Suppose a partition D_m gives m partial intervals. A quadrature rule used on them yields the sum S_m of partial results as a numerical approximation of the integral I,

I = S_m + R_p h^p.

On the right side p > 1 is the degree of precision of the quadrature formula, R_p is the corresponding remainder, and h = max{x_i − x_{i−1}: i = 1, ..., m} is the norm of the partition.

By bisection of the m partial intervals 2m new intervals are obtained. Summing up the results of the same quadrature formula on those new intervals gives a more precise estimate S_2m,

I = S_2m + R_p (h/2)^p.

Multiplying this equation by 2^p yields

2^p I = 2^p S_2m + R_p h^p.

Subtracting I = S_m + R_p h^p from this equation gives (2^p − 1) I = 2^p S_2m − S_m and the following enhanced estimate of the integral I:

I ≈ (2^p S_2m − S_m)/(2^p − 1) = S_2m + (S_2m − S_m)/(2^p − 1).

This is the principle of Richardson's extrapolation.
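In code, Richardson's extrapolation is a one-liner; the sketch below (ours) applies it to the trapezoid sums computed earlier in Example 3 (S_m = 0.664784 for m = 5 and S_2m = 0.666131 for 2m = 10, p = 2):

```python
# Richardson extrapolation: I ~ S2m + (S2m - Sm)/(2**p - 1).
def richardson(Sm, S2m, p):
    return S2m + (S2m - Sm) / (2**p - 1)

print(round(richardson(0.664784, 0.666131, 2), 6))
```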

Obviously, the higher the precision p of the quadrature formula (degree of error), the smaller the correction (S_2m − S_m)/(2^p − 1). The following table illustrates it well by bisection of the intervals from the tables in Examples 3 and 4 and the corresponding quadrature rules.

Rule        m = 5      2m = 10    S_2m − S_m    R. extrapolation
Trapezoid   0.664784   0.666131    0.001347     0.666580
Simpson     0.666379   0.666580    0.000201     0.666594
Gauss 3     0.666688   0.666674   −0.000014     0.666674

The next table shows the effect of Richardson's extrapolation in the trapezoid rule with constant step h = 1/n in the computation of ∫_0^1 √x dx:

n     L_n         R. extrapolation
1     0.5
2     0.603 553   0.638 071
4     0.643 283   0.656 526


8     0.658 130   0.663 079
16    0.663 581   0.665 398
32    0.665 559   0.666 218
64    0.666 271   0.666 508
128   0.666 526   0.666 611
256   0.666 617   0.666 647

On Multiple Integration
Numerical calculation of integrals is limited neither to one-dimensional intervals nor to finite integration areas. The Fubini theorem [9] is very important in multidimensional areas because it may transform a multiple integration into several successive integrations in one dimension.

Example. Let us estimate the volume of the ball B = {(x, y, z): x² + y² + z² < 1} by the 3-node Gauss rule. In spherical coordinates B = {(r, θ, φ): 0 ≤ r < 1, 0 ≤ θ ≤ π, 0 ≤ φ < 2π}, and by symmetry V = 4π ∫_0^1 ∫_0^{π/2} r² sin θ dθ dr. Applying the 3-node Gauss rule in both variables r ∈ [0, 1] and θ ∈ [0, π/2] gives the node set


{0.5(1 − √0.6), 0.5, 0.5(1 + √0.6)} × {(π/4)(1 − √0.6), π/4, (π/4)(1 + √0.6)}.

Then

V ≈ 4π · (1/18) · ((π/2)/18) · (25·(0.5(1 − √0.6))² sin((π/4)(1 − √0.6)) + 40·(0.5)² sin(π/4) + ...)
  = 4π · (1/18) · ((π/2)/18) · 68.755 494 = 4π · 0.333 336 ≈ 4π/3.

But the Fubini theorem also gives

V = 4π ∫_0^1 ∫_0^{π/2} sin θ dθ r² dr = 4π ∫_0^{π/2} sin θ dθ ∫_0^1 r² dr.

The first integral is estimated by the 3-node Gauss formula as follows:

∫_0^{π/2} sin θ dθ ≈ (π/36) (5 sin((1 − √0.6)π/4) + 8 sin(π/4) + 5 sin((1 + √0.6)π/4))
                  = (π/36) (5·0.176 108 + 8·√2/2 + 5·0.984 371) = 1.000 008.

The Gauss formula gives for the second integral the estimate

∫_0^1 r² dr ≈ (1/(4·18)) (5(1 − √0.6)² + 8 + 5(1 + √0.6)²) = (1/72)(5·3.2 + 8) = 1/3.

Thus,

V ≈ 4π · 1.000 008 · (1/3) ≈ 4π/3.
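The product-rule computation above can be scripted; a sketch of ours (`gauss3` is our own helper):

```python
# Volume of the unit ball via V = 4*pi * (int_0^{pi/2} sin t dt) * (int_0^1 r^2 dr),
# each factor estimated by the 3-node Gauss rule.
from math import sin, sqrt, pi

def gauss3(f, u, v):
    m, d = (u + v) / 2, sqrt(0.6) * (v - u) / 2
    return (v - u) / 18 * (5*f(m - d) + 8*f(m) + 5*f(m + d))

V = 4 * pi * gauss3(sin, 0.0, pi/2) * gauss3(lambda r: r*r, 0.0, 1.0)
print(V, 4*pi/3)
```

The r² factor is integrated exactly (the rule is exact up to degree 5), so the whole error comes from the sin θ factor.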

Remark. The numerical integration was used here only on a two-dimensional interval. The success of the method was conditioned by its integrand being a product of functions each depending on just one variable. In n dimensions the same Gauss formula would need 3ⁿ interpolation nodes and 3ⁿ coefficients, which would make it very cumbersome. Moreover, complicated cases must be solved individually [6].

2.2 Monte Carlo Methods

The Monte Carlo methods are based on the law of large numbers of probability theory, which says approximately this: as the number of experiments increases, the relative frequency of an event approaches its theoretical probability without limit [3].

Let f: D → R¹ be a function, u = sup{f(x): x ∈ D} and l = inf{f(x): x ∈ D}. Of course, f = l + (f − l), where g = f − l is a non-negative function.

When estimating an integral, the two procedures shown in Fig. 2.4 may be applied.

1. Estimate by global probability. The set G = ∪{{x} × (0, g(x)): x ∈ D} (i.e. the area limited from above by the graph of g and by 0 from below) is merged into a simple set M whose measure μ(M) is known (can be determined simply)


and which can be uniformly covered by randomly generated points x. Now a proper criterion is needed to identify the inclusion x ∈ G and determine the ratio μ(G)/μ(M). Suppose n random points are uniformly distributed over the set M and m of them fall into G. Then for n → ∞ the probability that m/n → μ(G)/μ(M) tends to 1, following the law of large numbers. Thus, if n is large, one can put

μ(G) ≈ (m/n) μ(M)  and, taking M = D × (0, u − l) with μ(M) = (u − l) μ(D),  ∫_D f ≈ (m/n + l/(u − l)) μ(M).

2. Estimate by the integration area average. In the integration area D random points x_i, i = 1, ..., n are generated and the sum S_n = Σ_{i=1}^n g(x_i) is calculated. Obviously,

∫_D g ≈ μ(D) · S_n/n  and  ∫_D f ≈ μ(D) · S_n/n + l μ(D).

    Fig. 2.4 Integration of a real function by Monte Carlo Method.


    The following examples should elucidate the explanation.

Example 1. Let us compute I = ∫_0^π sin x dx = [−cos x]_0^π = 2 by the Monte Carlo method.

Obviously, l = 0, u = 1, and the considered arc of the sinusoid defines the set G = {(x, y): x ∈ [0, π], y ∈ [0, sin x]}. This is inlaid into the rectangle M = [0, π] × [0, 1], μ(M) = π. Now both the possibilities above can be used.

1. n random points (x, y) are generated, where x is a random number from the interval [0, π) and y is a random number from [0, 1). The points (x, y) belong to the uniform distribution on the set M = [0, π) × [0, 1). If y ≤ sin x, the point (x, y) belongs to the measured set G, the counter m is increased by 1, and finally I ≈ π m/n.


n                        100        1 000      10 000     100 000    1 000 000
Calculation 1            1.84798    1.95824    1.99301    1.99863    1.99815
Calculation 2            2.26357    2.01396    1.98859    2.00301    2.00010
Calculation 3            1.94581    1.99292    1.98872    2.00591    2.00068
Calculation 4            1.95159    1.93058    1.99574    2.00086    2.00095
Calculation 5            2.00085    1.95867    1.99054    1.99968    1.99962
Average                  2.001960   1.970874   1.991320   2.001618   1.999900
Standard deviation s     0.156398   0.032690   0.003050   0.002899   0.001106
Confidence limits
±t_{0.05}(4)·s/√5        0.194162   0.040584   0.003787   0.003599   0.001373

Remark. Also the interval [0, 1] on the y-axis can be taken for the integration area, since evidently

μ(G) = ∫_0^1 (∫_{arcsin y}^{π − arcsin y} dx) dy = ∫_0^1 (π − 2 arcsin y) dy = π − 2 ∫_0^1 arcsin y dy.

Then random numbers y ∈ [0, 1) would be generated, and the values x can be obtained by numerical solution of the equation f(x) = sin x − y = 0 in the interval 0 ≤ x ≤ π/2.


The calculation of I was transferred to integration over a one-dimensional interval. In the integration by the Monte Carlo method we can proceed in the same way as in Example 1.

But the Monte Carlo method can also be used in two dimensions, when calculating

I = ∫∫_{∪{(0, r(t)): t ∈ [0, 2π)}} r dr dt.

The surface element in the Cartesian coordinates x, y is simply dx dy. However, in polar coordinates dr dt must be multiplied by the length r (and generally by the absolute value of the Jacobian) to obtain the corresponding surface element. In this way a weight is assigned to the random points in the rectangle of polar coordinates that arranges a uniform covering of the circle x² + y² ≤ a² in Cartesian coordinates.

    Fig. 2.5 Seven-foil.

Both Monte Carlo calculations of the integral I are shown in the following table.

                I = ∫_{[0,2π)} r²(t)/2 dt              I = ∫∫_{∪{(0,r(t)): t ∈ [0,2π)}} r dr dt
                n = 100   n = 10 000  n = 1 000 000    n = 100   n = 10 000  n = 1 000 000
Calculation 1   1.45509   1.59752     1.57007          1.34212   1.52297     1.56992
Calculation 2   1.76443   1.59352     1.57015          1.77171   1.58822     1.56974
Calculation 3   1.58099   1.58101     1.57024          1.62027   1.59032     1.57273
Calculation 4   2.05876   1.55375     1.56987          1.69898   1.55693     1.56971
Calculation 5   1.35604   1.58872     1.57214          1.54655   1.57229     1.57151
Average         1.643062  1.582904    1.570494         1.595926  1.566146    1.570722


It is advantageous in computing integrals by the Monte Carlo method that greater dimension is manifested only by more complicated conditions when the frequency sums are created. This is shown by the following example.

    Example3. Let us estimate the volumeV of the Viviani window [5,6,9], i.e. the set ofpoints ( x, y, z) that fulfill the following two conditions: x2 + y2 + z2 a2, ( xa /2)2 + y2 (a /2)2.

    The volume of the cube circumscribing the ball x2 + y2 + z2 a2 is (2a)3 = 8a3. From[9], Chapter 3, we knowV = )( 3

    432 a3. Hence, Q = V /(8a3) = 12

    19 0.15069.

    Now the ratio Q will be determined by randomly generating points (x, y, z) in the cube [0, 1] × [0, 1] × [0, 1] representing (for a = 1) the part of the circumscribed cube in the octant x, y, z > 0. First the counter m (incidence number) is set to zero. Then a random point (x, y, z) is generated n times, its coordinates x, y, z being random numbers from [0, 1). If x² + y² + z² ≤ 1 and (x − 1/2)² + y² ≤ 1/4, m is increased by 1. The Viviani window lies in 4 octants (of the half-space x ≥ 0) that carry a quarter of its volume each. Therefore, for great n we obviously get Q ≈ m/(2n).

    Results of calculating Q with n = 1 000 000 are shown in the table below:

    Calculation   1        2        3        4        5
        Q         0.15081  0.15062  0.15115  0.15043  0.15073

    The average Q = 0.15074 differs from the exact value by about 5·10⁻⁵.
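    The counting scheme of Example 3 can be sketched as follows (for a = 1; `viviani_ratio` is an illustrative name):

    ```python
    import random

    def viviani_ratio(n, rng):
        # Count random points of the unit cube [0,1)^3 (one octant) that lie
        # in the ball x^2+y^2+z^2 <= 1 and the cylinder (x-1/2)^2+y^2 <= 1/4.
        m = 0
        for _ in range(n):
            x, y, z = rng.random(), rng.random(), rng.random()
            if x * x + y * y + z * z <= 1.0 and (x - 0.5) ** 2 + y * y <= 0.25:
                m += 1
        # The octant carries a quarter of the Viviani solid, so Q = m/(2n).
        return m / (2 * n)

    print(viviani_ratio(200_000, random.Random(1)))  # close to 0.15069
    ```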

    Example 4. Let us compute the following non-elementary integral

    I = ∫∫_{[0,1]×[0,1]} cos(x²y) dx dy.

    1. Using the Gauss 3-node quadrature in 2 dimensions leads to the following table of values of cos(x²y) at the nodes (x_j, y_k) ∈ {x₋₁, x₀, x₁} × {y₋₁, y₀, y₁}:

                      y₋₁ = 0.112702   y₀ = 0.500000   y₁ = 0.887298
     x₋₁ = 0.112702   0.999999         0.999980        0.999936
     x₀  = 0.500000   0.999603         0.992198        0.975498
     x₁  = 0.887298   0.996066         0.923516        0.765764

    The corresponding coefficients A_jk = A_j A_k can be arranged in the matrix

             1    ( 25  40  25 )
     A  =  ————   ( 40  64  40 )
            18²   ( 25  40  25 )

    The Gauss quadrature yields

    G = Σ_{j=−1}^{1} Σ_{k=−1}^{1} A_jk f(x_j, y_k) = 0.967 557.
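    The same tensor-product rule can be checked in a few lines (a sketch; `gauss3x3` is an illustrative name):

    ```python
    import math

    # 3-point Gauss-Legendre rule transformed from [-1, 1] to [0, 1]:
    # nodes 1/2 -/+ sqrt(3/5)/2 and 1/2, weights 5/18, 8/18, 5/18.
    nodes = [0.5 - 0.5 * math.sqrt(0.6), 0.5, 0.5 + 0.5 * math.sqrt(0.6)]
    weights = [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]

    def gauss3x3(f):
        # Tensor-product rule: sum of A_jk * f(x_j, y_k) with A_jk = A_j * A_k.
        return sum(wj * wk * f(xj, yk)
                   for xj, wj in zip(nodes, weights)
                   for yk, wk in zip(nodes, weights))

    G = gauss3x3(lambda x, y: math.cos(x * x * y))
    print(G)  # approx. 0.96756
    ```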


    2. The Fubini theorem gives

    I = ∫∫_{[0,1]×[0,1]} cos(x²y) dx dy = ∫₀¹ ( ∫₀¹ cos(x²y) dy ) dx = ∫₀¹ [ sin(x²y)/x² ]_{y=0}^{y=1} dx = ∫₀¹ sin(x²)/x² dx.

    The integrand may be expanded into the following uniformly convergent series

    sin(x²)/x² = (1/x²) ( x² − (x²)³/3! + (x²)⁵/5! − … ) = 1 − x⁴/3! + x⁸/5! − …

    (x = 1 gives its convergent majorizing series 1 + 1/3! + 1/5! + … = sinh 1 ≈ 1.1752 on the integration area), and it can be integrated term by term:

    ∫₀¹ sin(x²)/x² dx = [ x − x⁵/(5·3!) + x⁹/(9·5!) − … ]₀¹ = 1 − 1/(5·3!) + 1/(9·5!) − … = Σ_{k=0}^{∞} (−1)^k / ( (4k+1)(2k+1)! ).

    The last sum converges very fast and may be easily obtained in EXCEL:

     k    (−1)^k / ((4k+1)(2k+1)!)
     0     1
     1    −0.033333333
     2     0.000925926
     3    −1.52625E−05
     4     1.62102E−07

    Summing the second column of the table gives I ≈ 0.967 577.
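    The same partial sums can be reproduced outside EXCEL, e.g.:

    ```python
    import math

    def series_I(kmax):
        # Partial sum of  sum_{k>=0} (-1)^k / ((4k+1) * (2k+1)!).
        return sum((-1) ** k / ((4 * k + 1) * math.factorial(2 * k + 1))
                   for k in range(kmax + 1))

    for k in range(5):
        print(k, (-1) ** k / ((4 * k + 1) * math.factorial(2 * k + 1)))
    print(series_I(4))  # approx. 0.967577
    ```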

    3. The two Monte Carlo methods mentioned above give the results presented in the following table:

     Computation     MC1        MC2
         1           0.967437   0.967512
         2           0.967869   0.967614
         3           0.967465   0.967488
         4           0.967463   0.967695
         5           0.967539   0.967473
     Average         0.967555   0.967556
     St. deviation   0.000161   0.000085


    3 NONLINEAR EQUATIONS

    Solution of the equation P(x) = 0, where P is a polynomial with coefficients from a number field, is one of the basic topics in algebra. Every linear binomial az + b, a, b ∈ C1, a ≠ 0, has the root z = −b/a (C1 is the set of complex numbers). Every quadratic trinomial az² + bz + c (a, b, c ∈ C1, a ≠ 0) has two roots

    z_{1,2} = ( −b ± √(b² − 4ac) ) / (2a)

    (if b² − 4ac = 0, then z₁ = z₂ = −b/(2a) is its double root). In polynomials of the 3rd and 4th degree it is still possible to express the roots algebraically (in cubic equations by Cardano's formulas, whose practical usage is limited). It is well known (and was proved by E. Galois (1811-1832) by means of the theory of groups founded by him) that equations of the 5th and higher degrees are generally unsolvable in algebraic form. The fundamental theorem of algebra says that every polynomial over the field of complex numbers has at least one complex root z₁. Division of the original polynomial by (z − z₁) yields a polynomial of degree n − 1, which again has at least one root z₂, etc. A simple consequence is that a polynomial of the nth degree has n complex roots (counted with multiplicity). C. F. Gauss (1777-1855) proved the fundamental theorem of algebra (incompletely from today's point of view) in 1799 and later gave 3 other proofs. A very short and elegant proof can be obtained from the Liouville theorem known from the theory of functions of a complex variable [9].

    There are several special types of always solvable algebraic equations, e.g. reciprocal ones. Also very important is the fact that a real polynomial of an odd degree has at least one real root. (This follows from the continuity of the polynomial, the dominant role of the highest power of the variable x, and from x^(2k+1) → ±∞ as x → ±∞.) There are many theorems concerning localization or estimation of roots of polynomials. But here we follow only general aims connected with real roots.

    Solution of systems of linear algebraic equations and methods of linear algebra represent another large area of numerical methods. But today algebraic operations with vectors and matrices, matrix inversion, calculation of determinants etc. belong to standard widespread software (e.g. EXCEL), therefore these topics can be omitted here.

    3.1 Solution of the Equation f(x) = 0

    R¹ is automatically considered a metric space with the Euclidean metric d(x, y) = |x − y|. A continuous function maps a connected compact [a, b] onto a connected compact [c, d].


    If f(a)·f(b) < 0, then 0 ∈ (c, d), i.e. f has at least one zero r in (a, b).

    Bisection Method
    The bisection method repeatedly halves the interval, always keeping the half at whose endpoints f changes its sign. The table below traces it for the function of Example 1, f(x) = 1 − (4/x) sin(20/x), on the initial interval [1, 4].


     i    xa             xb             t_i
     0    1.000000000    4.000000000    0
     1    2.500000000    4.000000000    1
     2    2.500000000    3.250000000    0
     3    2.500000000    2.875000000    0
     4    2.687500000    2.875000000    1
     5    2.781250000    2.875000000    1
     6    2.828125000    2.875000000    1
     7    2.828125000    2.851562500    0
     8    2.828125000    2.839843750    0
     9    2.828125000    2.833984375    0
    10    2.828125000    2.831054688    0
    11    2.828125000    2.829589844    0
    12    2.828857422    2.829589844    1
    13    2.829223633    2.829589844    1
    14    2.829223633    2.829406738    0
    15    2.829223633    2.829315186    0
    16    2.829269409    2.829315186    1
    17    2.829292297    2.829315186    1
    18    2.829292297    2.829303741    0
    19    2.829292297    2.829298019    0
    20    2.829295158    2.829298019    1
    21    2.829296589    2.829298019    1
    22    2.829297304    2.829298019    1
    23    2.829297662    2.829298019    1
    24    2.829297841    2.829298019    1
    25    2.829297930    2.829298019    1
    26    2.829297930    2.829297975    0
    27    2.829297952    2.829297975    1
    28    2.829297952    2.829297964    0
    29    2.829297952    2.829297958    0
    30    2.829297955    2.829297958    1
    31    2.829297955    2.829297957    0
    32    2.829297956    2.829297957    1
    33    2.829297956    2.829297956    0

    The bisection method can be well traced in the table, which shows the successive reduction of the length of the intervals at whose endpoints the function f has opposite signs. The first iterations are shown on the axis 0x in Fig. 4.1. The table also shows the connection between sign changes and the added digits t_i.

    The binary number t = Σ_{i=1}^{33} t_i 2^(−i) = 0.609 765 985… corresponds to r = tb + (1 − t)a = 4t + 1 − t = 1 + 3t = 2.829 297 95… Of course, the values xa and xb converge to the same number, as can be seen from the last rows of the table.

    Remark. In the bisection method a family of nested closed intervals {I_i : i = 0, 1, …} is constructed whose lengths decrease as quickly as 2^(−i). A well-known theorem on compact sets says that ∩{I_i : i = 0, 1, …} ≠ ∅ [9], which proves the existence of a zero of f. If in the kth step the function f becomes zero at one of the endpoints of I_k, the search is finished and for i = k+1, k+2, … it can be put I_i = {r}.
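    A minimal sketch of the bisection algorithm applied to the function of Example 1:

    ```python
    import math

    def f(x):
        # Example 1 of the text.
        return 1.0 - (4.0 / x) * math.sin(20.0 / x)

    def bisect(f, a, b, eps=1e-9):
        # Keep halving [a, b] while preserving a sign change at the endpoints.
        fa = f(a)
        while b - a > eps:
            c = 0.5 * (a + b)
            fc = f(c)
            if fc == 0.0:
                return c
            if fa * fc < 0.0:
                b = c
            else:
                a, fa = c, fc
        return 0.5 * (a + b)

    print(bisect(f, 1.0, 4.0))  # approx. 2.829298
    ```

    The successive endpoints agree with the rows of the table above.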

    Remark. The limit point of the bisection algorithm depends on the choice of the interval [a, b] with f(a)·f(b) < 0. The following table shows the zeros found for several choices parameterized by q:


      q    Number of iterations   Zero r
     0.1   98                     2.829 297 956 1…
     0.2   47                     2.829 297 956 1…
     0.3   37                     2.829 297 956 1…
     0.4   37                     2.829 297 956 1…
     0.5   35                     1.542 927 532 7…
     0.6   35                     1.046 341 398 4…
     0.7   40                     1.046 341 398 4…
     0.8   35                     1.046 341 398 4…
     0.9   63                     1.046 341 398 4…

    Multiple Uniform Partition
    Let f be a function continuous on an interval [a, b] with f(a)·f(b) < 0, and let m ≥ 2. The points x_i = a + ih, h = (b − a)/m, for i = 0, 1, …, m create the uniform partition of [a, b] into m parts [x_{i−1}, x_i] of the same length.

    A further algorithm for calculating a zero r with a tolerance ε > 0 could run like this:
    (0) Set sa = sign f(a), xa = a, xb = b. Define a counter i and set i = 0.
    (i) Increase i by 1. Put h = (xb − xa)/m. For j = 0, 1, …, m put x_j = xa + jh and calculate y_j = f(x_j). If y_j = 0, then r = x_j and the calculation is finished. Otherwise compute sa·sign y_j. While sa·sign y_j > 0, the integer j is increased by 1, until sa·sign y_j = −1. Then set xa = x_{j−1}, xb = x_j and t_i = j − 1.
    (e) If xb − xa < ε, finish the computation, otherwise go to (i).
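    Steps (0), (i), (e) can be sketched as follows (`m_partition_root` is an illustrative name):

    ```python
    import math

    def f(x):
        return 1.0 - (4.0 / x) * math.sin(20.0 / x)

    def m_partition_root(f, a, b, m, eps=1e-10):
        # Repeatedly split [xa, xb] into m equal parts and keep the first
        # subinterval on which f changes its sign.
        xa, xb = a, b
        sa = math.copysign(1.0, f(xa))
        while xb - xa > eps:
            h = (xb - xa) / m
            xs = [xa + j * h for j in range(m)] + [xb]  # guard the right end
            for j in range(1, m + 1):
                yj = f(xs[j])
                if yj == 0.0:
                    return xs[j]
                if sa * yj < 0.0:
                    xa, xb = xs[j - 1], xs[j]
                    break
            else:
                break  # no sign change found (rounding); stop
        return 0.5 * (xa + xb)

    print(m_partition_root(f, 1.0, 2.0, 10))  # approx. 1.0463414
    ```

    With m = 10 each outer step adds one correct decimal digit, exactly as in the table below.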

    The following table shows two examples.

                   m = 3                                m = 10
     i    xa             xb             t_i      xa             xb             t_i
     0    1.0000000000   2.0000000000   0        1.0000000000   2.0000000000   0
     1    1.3333333333   1.6666666667   1        1.0000000000   1.1000000000   0
     2    1.4444444444   1.5555555556   1        1.0400000000   1.0500000000   4
     3    1.5185185185   1.5555555556   2        1.0460000000   1.0470000000   6
     4    1.5308641975   1.5432098765   1        1.0463000000   1.0464000000   3
     5    1.5390946502   1.5432098765   2        1.0463400000   1.0463500000   4
     6    1.5418381344   1.5432098765   2        1.0463410000   1.0463420000   1
     7    1.5427526292   1.5432098765   2        1.0463413000   1.0463414000   3
     8    1.5429050450   1.5430574607   1        1.0463413900   1.0463414000   9
     9    1.5429050450   1.5429558502   0        1.0463413980   1.0463413990   8
    10    1.5429219800   1.5429389151   1        1.0463413983   1.0463413984   3
    11    1.5429219800   1.5429276251   0        1.0463413984   1.0463413984   7
    12    1.5429257434   1.5429276251   2
    13    1.5429269979   1.5429276251   2
    14    1.5429274160   1.5429276251   2
    15    1.5429274857   1.5429275554   1
    16    1.5429275322   1.5429275554   2
    17    1.5429275322   1.5429275399   0
    18    1.5429275322   1.5429275347   0
    19    1.5429275322   1.5429275330   0
    20    1.5429275327   1.5429275330   2
    21    1.5429275327   1.5429275328   0


    Like in the bisection method, the number t = 0.t₁t₂t₃… = t₁m⁻¹ + t₂m⁻² + t₃m⁻³ + … (the m-adic expansion of t) enables writing r as a convex combination of the points a, b:

    r = tb + (1 − t)a = a + (b − a)t.

    This can be shown with the function from Example 1. Let [1, 2] be the initial interval of iteration. Fig. 4.1 shows 3 zeros of f in this interval. The described algorithm finds one of them. The above table presents the iterations for m = 3 and m = 10.

    For m = 3 the triadic expansion of the root is r₃ = 1 + (0.11212221010222120002…)₃, which is r₃ = 1.5429275327… decimally.

    For m = 10 the decimal number 0.t₁t₂… = 0.04634139837 is obtained directly and no explanatory comment is needed (the numbers in the last rows are rounded).

    It is evident that the information about a function with an a priori unknown behavior increases with increasing m. EXCEL with m = 10 may be a very efficient tool for a quick determination of a zero without programming. If the length of the initial interval is chosen as an entire power of 10, i.e. [k·10^l, (k+1)·10^l], k ∈ Z, then every iteration step reduces the foregoing interval [xa, xb] to one tenth of its length, thus increasing the precision of the foregoing root estimate by one decimal order. This can be done by successive copying and easy rearrangement.

    The following table shows the first five steps of this process.

           k = 1              k = 2              k = 3              k = 4              k = 5
     j   x_j   y_j         x_j   y_j          x_j    y_j         x_j     y_j        x_j      y_j
     0   1.0   −2.65178    1.00  −2.65178     1.040  −0.43095    1.0460  −0.02333   1.04630  −0.00283
     1   1.1    3.25168    1.01  −2.22702     1.041  −0.36345    1.0461  −0.01650   1.04631  −0.00215
     2   1.2               1.02  −1.69678     1.042  −0.29574    1.0462  −0.00967   1.04632  −0.00146
     3   1.3               1.03  −1.08885     1.043  −0.22785    1.0463  −0.00283   1.04633  −0.00078
     4   1.4               1.04  −0.43095     1.044  −0.15980    1.0464   0.00401   1.04634  −9.6E−05
     5   1.5               1.05   0.25040     1.045  −0.09162    1.0465              1.04635   0.00059
     6   1.6               1.06               1.046  −0.02333    1.0466              1.04636
     7   1.7               1.07               1.047   0.04504    1.0467              1.04637
     8   1.8               1.08               1.048              1.0468              1.04638
     9   1.9               1.09               1.049              1.0469              1.04639

    Regula falsi (false rule)
    The principle of regula falsi consists in substituting for the zero of a considered function, which changes its sign at the endpoints of an interval, the zero of the linear binomial through the same endpoints (Fig. 4.2).

    Suppose the function f is continuous in the interval [a, b] with f(a)·f(b) < 0, and parametrize the chord through the points (a, f(a)), (b, f(b)) by x(t) = ta + (1 − t)b, L(t) = t f(a) + (1 − t) f(b), t ∈ [0, 1].


    Fig. 4.2 Principle of regula falsi.

    The intersection point of L(t) and the x-axis is defined by

    t f(a) + (1 − t) f(b) = t [f(a) − f(b)] + f(b) = 0,

    which gives t = f(b) / [f(b) − f(a)]. Thus, the false zero r of the function f is

    r = ta + (1 − t)b = b + t(a − b) = b − f(b) (b − a) / (f(b) − f(a)).

    This can be rearranged to

    r = ( a f(b) − b f(a) ) / ( f(b) − f(a) ) = det | a  f(a) ; b  f(b) |  /  det | 1  f(a) ; 1  f(b) |.

    Now f(r) can be computed. If f(r) = 0, the root of f is found. In the opposite case that one of the endpoints a, b at which the function f has the same sign as f(r) is replaced by r. This way the original interval length is reduced and the procedure can be repeated.

    Setting r₋₁ = a, r₀ = b allows defining a sequence {r_n} converging to the zero of f as follows:

    r_{n+1} = [ r_{n−1} f(r_n) − r_n f(r_{n−1}) ] / [ f(r_n) − f(r_{n−1}) ],   n = 0, 1, 2, …,   r₋₁ = a, r₀ = b.

    If f is concave or convex in [a, b], the sequence r_n converges to the zero of f from the left or right side while the second endpoint remains fixed. This so-called primitive form of regula falsi can, e.g. for fixed a, be written as

    r_{n+1} = [ a f(r_n) − r_n f(a) ] / [ f(r_n) − f(a) ],   n = 0, 1, 2, …,   r₀ = b.

    In order to accelerate the convergence, regula falsi is combined with the bisection method. Results from EXCEL are shown in the following table.


    Regula falsi:

     step   a          f(a)        b          f(b)       r          f(r)
      1     1          −2.65178    4          1.958924   2.725407   −0.27685
      2     2.725407   −0.27685    4          1.958924   2.883236    0.156584
      3     2.725407   −0.27685    2.883236   0.156585   2.826218   −0.00877
      4     2.826218   −0.00877    2.883236   0.156585   2.829241   −0.00016
      5     2.829241   −0.00016    2.883236   0.156585   2.829297   −2.9E−06
      6     2.829297   −2.7E−06    2.883236   0.156585   2.829298   −4.9E−08

    Regula falsi + bisection (every second endpoint obtained by halving):

     step   a          f(a)        b          f(b)       r          f(r)
      1     1          −2.65178    4          1.958924   2.725407   −0.276854
      2     2.725407   −0.276854   3.362704   1.39174    2.831146    0.005272
      3     2.725407   −0.276854   2.831146   0.005272   2.82917    −0.00036
      4     2.82917    −0.00036    2.830158   0.002452   2.829298    1.4E−07

    If the requirement of opposite signs of the continuous function f at the endpoints of an interval is omitted, and thus the warranty of existence of the root within the interior of the considered interval is lost, the corresponding iteration

    r_{n+1} = [ r_{n−1} f(r_n) − r_n f(r_{n−1}) ] / [ f(r_n) − f(r_{n−1}) ] = r_n − f(r_n) (r_n − r_{n−1}) / ( f(r_n) − f(r_{n−1}) )

    is called the secant method. However, if f″ > 0 or f″ < 0 in a neighborhood of a simple root and the starting points are chosen close enough to it, the secant iterations converge to the root.


    Quadratic Interpolation Method
    The interpolation parabola through three equidistant nodes x₀, x₁, x₂ (x₂ − x₁ = x₁ − x₀) with values y₀, y₁, y₂ can be expressed by the coefficients A = y₀ − 2y₁ + y₂, B = −3y₀ + 4y₁ − y₂:

    P(x) = (A/2) ( (x − x₀)/(x₁ − x₀) )² + (B/2) (x − x₀)/(x₁ − x₀) + y₀.

    Fig. 4.3 Solution of the equation f(x) = 1 − (4/x) sin(20/x) = 0 by quadratic interpolation.

     Step   x_i (x₀ first)   f(x_i)      A          B           r          f(r)
      1     1               −2.65178     0.47309    3.664534    2.869955    0.117602
            2.5             −0.58297
            4                1.95892
      2     2.5             −0.58297     0.273015   0.15468     2.833481    0.011944
            2.685           −0.36913
            2.87             0.11773
      3     2.833481         0.01194     0.026531   −0.434133   2.829382    0.00024
            2.759241        −0.19186
            2.685           −0.36913
      4     2.829382         0.00024     0.004385   −0.200864   2.829298    1.1E−06
            2.794312        −0.09800
            2.759241        −0.19186

    In Fig. 4.3 two interpolation parabolas for the function f(x) = 1 − (4/x) sin(20/x) are shown, P₁ on the interval [1, 4] and P₂ on [2.5, 2.87]. It can be seen that the graph of the polynomial P₂ is optically almost indistinguishable from the graph of the function f.
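    The parabola step can be combined with one possible re-bracketing rule: keep an adjacent sign-change pair among the four points a, m, r, b. This reproduces the brackets of the steps in the text, though the author's exact rule is not spelled out in this excerpt.

    ```python
    import math

    def f(x):
        return 1.0 - (4.0 / x) * math.sin(20.0 / x)

    def quad_root(f, a, b, tol=1e-9, max_iter=80):
        # Parabola through the equidistant nodes a, m = (a+b)/2, b:
        # P(x) = (A/2) u^2 + (B/2) u + y0,  u = (x - a)/(m - a),
        # with A = y0 - 2 y1 + y2, B = -3 y0 + 4 y1 - y2.
        for _ in range(max_iter):
            if b - a < tol:
                break
            m = 0.5 * (a + b)
            y0, y1, y2 = f(a), f(m), f(b)
            A = y0 - 2.0 * y1 + y2
            B = -3.0 * y0 + 4.0 * y1 - y2
            disc = B * B - 8.0 * A * y0
            r = None
            if abs(A) > 1e-14 and disc >= 0.0:
                for sq in (math.sqrt(disc), -math.sqrt(disc)):
                    u = (-B + sq) / (2.0 * A)
                    x = a + u * (m - a)
                    if a < x < b:
                        r = x
                        break
            if r is None:
                r = m                        # fall back to bisection
            fr = f(r)
            if fr == 0.0:
                return r
            # keep an adjacent pair of points with a sign change
            pts = sorted({(a, y0), (m, y1), (r, fr), (b, y2)})
            for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
                if v0 * v1 <= 0.0:
                    a, b = x0, x1
                    break
        return 0.5 * (a + b)

    print(quad_root(f, 1.0, 4.0))  # approx. 2.829298
    ```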



    This method automatically contains the bisection method, which warrants convergence in the worst case [10]. Numerical problems could arise when A → 0. But those can be avoided by using linear interpolation

    r = x₀ − f(x₀) (x₁ − x₀) / ( f(x₁) − f(x₀) )

    when A becomes small.


    Remark. Sharp monotonicity of the function f ∈ C⁰([a, b]) is a sufficient condition for the convergence of the sequence {r_n} to the root r. But efficiency may tempt one to take risks. With an isolated problem of finding a root, the initial interval can easily be changed and the calculation repeated. A worse situation arises if finding the root is but a small part of a larger algorithm whose results are used as inputs of linked complex computations. Then the initial localization should assure uniqueness of the root. Also, recurrent interpolation of small degree with a fixed number of interpolation nodes is preferable.

    Inverse interpolation can be done with any suitable function, e.g. a spline, a trigonometric polynomial etc.

    Newton-Raphson Method
    If the function is supposed to be not only continuous but also differentiable (smooth), the limit of the secant step

    x_{n+1} = x_n − f(x_n) · lim_{x_{n−1} → x_n} (x_n − x_{n−1}) / ( f(x_n) − f(x_{n−1}) )

    can be considered. On the right-hand side there is the reciprocal value of the derivative f′(x_n). If f′(x_n) ≠ 0, Newton's iteration is obtained:

    x_{n+1} = x_n − f(x_n) / f′(x_n),   n = 0, 1, …

    It can be proved that if f″ exists on [a, b], f′ and f″ do not change their signs (i.e. f is monotone and convex or concave) and f(a)·f(b) < 0, then the Newton iterations started at the endpoint where f has the same sign as f″ converge monotonically to the root r ∈ (a, b).

    For example, take f(x) = x^b − a with a > 0, b > 0 and x > 0. Then f′(x) = bx^(b−1) > 0 and f″(x) = b(b−1)x^(b−2). If b = 1, we have immediately x = a; the function f is convex for b > 1 and concave for b < 1.


    The Newton iteration is very fast. The following table shows this in determining roots of f(x) = 1 − (4/x) sin(20/x) for x₀ = 1, 2, 3.

     n    x_n        f′(x_n)    f(x_n)       x_n        f′(x_n)    f(x_n)       x_n        f′(x_n)   f(x_n)
     0    1          36.29835   −2.65178     2          −8.93474   2.088042     3          2.914046   0.501132
     1    1.073055   62.58095    1.78139     2.233699   −6.03273   0.187363     2.828029   2.847219  −0.003620
     2    1.044590   68.19115   −0.11960     2.264757   −5.27164   0.011776     2.829299   2.850369   2.01E−06
     3    1.046344   68.35873    0.00016     2.266991   −5.2164    6.17E−05     2.829298   2.850368   6.09E−13
     4    1.046341   68.35856    1.98E−10    2.267003   −5.2161    1.73E−09     2.829298   2.850368   3.33E−16
     5    1.046341   68.35856    4.66E−15    2.267003   −5.2161    4.44E−16

    These calculations, as well as starting the iteration from another initial point, can be done comfortably in EXCEL.
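    The same three runs can be reproduced outside EXCEL:

    ```python
    import math

    def f(x):
        return 1.0 - (4.0 / x) * math.sin(20.0 / x)

    def df(x):
        # f'(x) = (4/x^2) sin(20/x) + (80/x^3) cos(20/x)
        return (4.0 / x**2) * math.sin(20.0 / x) + (80.0 / x**3) * math.cos(20.0 / x)

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                break
            x = x - fx / df(x)
        return x

    for x0 in (1.0, 2.0, 3.0):
        print(x0, newton(f, df, x0))  # roots near 1.046341, 2.267003, 2.829298
    ```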

    A wrong estimate of the starting point or careless use can cause the Newton method to fail. The table at the end of this subsection shows the iterations of the function

    f(x) = 1 − (4/x) sin(20/x)

    (Example 1) for the starting point x₀ = 4. It can be seen that the sequence of iterations {x_i : i = 0, 1, …} diverges, with |x_i| → ∞ and f(x_i) → 1.

    Simple Iteration Method
    The equation f(x) = 0 can be reshaped to x = f(x) + x = g(x). The equality x = g(x) means that x is a fixed point of the function g. In accordance with the Banach fixed point theorem in the spaces R^k provided with the Euclidean metric, it is sufficient to verify that

    |g(x) − g(x′)| / |x − x′| ≤ q < 1

    for x, x′ from the considered subset [1,2,9]. This condition is fulfilled if g is differentiable and |g′(x)| ≤ q < 1.

    The fixed point of the function g can be interpreted as the intersection point of the straight line y = x with the curve y = g(x). It can be found by the iteration x_{n+1} = g(x_n), n = 0, 1, …

    Example. Let us solve the equation

    sin(2x) = cos x

    in the segment (0, π/2). This can be rewritten as x = (1/2) arcsin(cos x), and the corresponding iteration is

    x_{n+1} = (1/2) arcsin(cos x_n) = g(x_n).

    Newton iterations for f(x) = 1 − (4/x) sin(20/x) with starting point x₀ = 4 (divergence):

     n    x_n           f′(x_n)      f(x_n)
     0    4              0.114847    1.958924
     1    −13.0569      −0.02485     0.693881
     2    14.86861       0.023081    0.737799
     3    −17.0976      −0.01885     0.784612
     4    24.53529       0.008551    0.881341
     5    −78.538       −0.00032     0.987170
     6    2976.076       6.07E−09    0.999991
     7    −1.6E+08      −3.6E−23     1.000000
     8    2.79E+22       7.33E−66    1.000000
     9    −1.4E+65      −6E−194      1.000000


    Obviously,

    g′(x) = −(1/2) sin x / √(1 − cos² x) = −1/2   for x ∈ (0, π/2),

    so |g′(x)| = 1/2 < 1 and the iterations converge to the fixed point x = π/6 (where sin x = 1/2).
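    The iteration is immediate to try (a sketch; since |g′| = 1/2, the error halves with each step):

    ```python
    import math

    def g(x):
        return 0.5 * math.asin(math.cos(x))

    x = 1.0                      # any starting point in (0, pi/2)
    for _ in range(60):
        x = g(x)
    print(x, math.pi / 6)        # both approx. 0.5235988
    ```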


    Localization and Simultaneous Determination of Roots
    A very simple estimate of the root of a function f continuous in the interval [a, b] consists in defining (n+1) equidistant points

    x₀ = a, x₁ = a + (b−a)/n, x₂ = a + 2(b−a)/n, …, x_n = b,

    computing the values f(x_i) and determining m = min {|f(x_i)| : i = 0, …, n}. If m is very small, the point x_i at which the minimum is attained can be an estimate of the root.

    This method cannot fail, but a too high price is paid for the work consumed in computing the other n − 1 values f(x_i). The work consumed on the calculation of one value of the function is sometimes called a Horner. The search for a zero of a function f with the tolerance (b−a)·10⁻⁶ would need 10⁶ computations of values of f, i.e. one million Horners. Thus, this method is inconvenient and the search for a root of f in the interval [a, b] has to be rationalized. This, in the end, is the leading idea of the preceding methods, which work at a cost of a few Horners.

    Computing values at newly added points supplies further information concerning the considered function f, especially about the existence of its roots. Then, in the original interval [a, b] some intervals [x_{i−1}, x_i] may be taken that satisfy the condition f(x_{i−1})·f(x_i) ≤ 0. Generating random numbers x uniformly distributed in the interval [a, b] and calculation o