
Interpolation MACM 316

    Linear Interpolation

    Equation of the line joining (1.0, 3.6788) and (2.0, 5.4134) is

    p(x) = 1.9442 + 1.7346x

Use p(x) to approximate f(1.75) at an unknown point (interpolate):

f(1.75) ≈ p(1.75) = 4.9798

Given that the exact value is f(1.75) = 5.3218, the interpolation error is

|4.9798 - 5.3218| / |5.3218| = 0.0643 (relative error)

    We could extrapolate outside the interval [1, 2] to obtain

    p(2.25) = 5.8471

    . . . but this is usually inaccurate.

It's typically safer to derive the linear interpolant on [2, 2.5]:

q(x) = 6.5458 - 0.5662x  ⇒  q(2.25) = 5.2719
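As a check, the numbers above can be reproduced with a few lines of Python (a sketch; the two data points and the exact value f(1.75) = 5.3218 are taken from these slides):

```python
# Linear interpolant p(x) through (1.0, 3.6788) and (2.0, 5.4134).
x0, y0 = 1.0, 3.6788
x1, y1 = 2.0, 5.4134

slope = (y1 - y0) / (x1 - x0)        # 1.7346
intercept = y0 - slope * x0          # 1.9442

def p(x):
    return intercept + slope * x

rel_err = abs(p(1.75) - 5.3218) / abs(5.3218)
print(p(1.75), rel_err)              # about 4.9798 and 0.0643 (interpolation)
print(p(2.25))                       # about 5.8471 (extrapolation: less trustworthy)
```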

[Figure: Interpolation vs. extrapolation; plot of the data and the linear interpolants over 1 ≤ x ≤ 2.5, 4 ≤ y ≤ 5.5]

October 13, 2008 © Steven Rauch and John Stockie


    Higher Order Interpolation

    Note: We assume the original data comes from a smooth

    function, but linear interpolation on more than two points is

    not smooth.

    How do we get a smooth approximation for f(x)???

⇒ use higher order polynomials!

    Suppose we have a list of n + 1 points

{(x0, y0), (x1, y1), . . . , (xn, yn)}, where yi = f(xi) for i = 0, 1, 2, . . . , n.

Theorem 3.3 (p. 108): Provided the points xi are all distinct, there exists a UNIQUE polynomial of degree n

Pn(x) = a0 + a1x + a2x^2 + ... + anx^n

which satisfies Pn(xi) = yi for i = 0, 1, 2, . . . , n. Furthermore,

    f(x) = Pn(x) + En(x)

where the error term is

En(x) = [f^(n+1)(c) / (n + 1)!] (x - x0)(x - x1) ... (x - xn)

for some c ∈ [x0, xn].

    Pn(x) is called an interpolating polynomial for f(x).


    Previous Example: The interpolating polynomial of degree 2 for

    f(x) on [1, 2.5] is

P2(x) = -1.1235 + 6.3362x - 1.5339x^2,

    since P2(1) = 3.6788, P2(2) = 5.4134 and P2(2.5) = 5.1303.

We can now approximate f(1.75) by P2(1.75) = 5.26735, which has a relative error of 0.0102, much smaller than we saw for the linear approximation!
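A quick check of the quadratic's improvement (a sketch using only numbers from these slides; the exact value f(1.75) = 5.3218 is from the earlier slide):

```python
# Expanded quadratic interpolant from the slide.
def P2(x):
    return -1.1235 + 6.3362 * x - 1.5339 * x ** 2

# P2 reproduces the data to within rounding of the printed coefficients:
for xi, yi in [(1.0, 3.6788), (2.0, 5.4134), (2.5, 5.1303)]:
    assert abs(P2(xi) - yi) < 5e-4

exact = 5.3218
rel_err = abs(P2(1.75) - exact) / abs(exact)
print(P2(1.75), rel_err)   # about 5.2673 and 0.0102
```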

[Figure: Quadratic vs. linear interpolation; f(x), P1(x) and P2(x) over 1 ≤ x ≤ 2.5, 3 ≤ y ≤ 5.5]

Note: Increasing the degree of the polynomial does not necessarily guarantee an improved approximation.


Interpolating Polynomials: The Mechanics

    Suppose we have 3 points: (x0, y0), (x1, y1), (x2, y2).

We want to fit a quadratic (n = 2): y = c0 + c1x + c2x^2. Substitute:

y0 = c0 + c1x0 + c2x0^2
y1 = c0 + c1x1 + c2x1^2
y2 = c0 + c1x2 + c2x2^2

This can be re-written as a linear system:

    [ 1  x0  x0^2 ] [ c0 ]   [ y0 ]
    [ 1  x1  x1^2 ] [ c1 ] = [ y1 ]
    [ 1  x2  x2^2 ] [ c2 ]   [ y2 ]

Generalize for n + 1 points: (x0, y0), (x1, y1), . . . , (xn, yn)

    [ 1  x0  x0^2  ...  x0^n ] [ c0 ]   [ y0 ]
    [ 1  x1  x1^2  ...  x1^n ] [ c1 ]   [ y1 ]
    [ ...                    ] [ ...] = [ ...]
    [ 1  xn  xn^2  ...  xn^n ] [ cn ]   [ yn ]
               A                  c        y

. . . this is a linear system A c = y for the coefficients c.

A is an (n + 1) × (n + 1) matrix, and is called a Vandermonde matrix.
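A minimal sketch of this approach in Python with numpy (np.vander with increasing=True builds exactly the matrix A above):

```python
import numpy as np

def vandermonde_fit(xs, ys):
    """Solve A c = y, where A has rows [1, x_i, x_i^2, ..., x_i^n]."""
    A = np.vander(xs, increasing=True)   # (n+1) x (n+1) Vandermonde matrix
    return np.linalg.solve(A, ys)

# The three points from the earlier example recover the quadratic's coefficients:
c = vandermonde_fit(np.array([1.0, 2.0, 2.5]),
                    np.array([3.6788, 5.4134, 5.1303]))
print(c)   # close to [-1.1235, 6.3362, -1.5339]
```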


    Vandermonde Example

    Given the following gasoline price data (obviously out of date!):

    Year 1986 1988 1990 1992 1994 1996

    Price (cents) 66.7 66.1 69.3 70.7 68.8 72.1

    Use polynomial interpolation to approximate the gas price in 1991.

    Problems:

    The Vandermonde matrix is usually ill-conditioned, which can

    cause large errors in the polynomial coefficients.

Solving a dense matrix system requires O(n^3) floating point operations, which is expensive.

    We want a cheaper and more accurate method!
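Both problems show up immediately if we try the gas-price example in numpy (a sketch; shifting the years so they start at 0 is an extra step, not on the slide, needed to get a system numpy can solve reliably):

```python
import numpy as np

years = np.array([1986., 1988., 1990., 1992., 1994., 1996.])
price = np.array([66.7, 66.1, 69.3, 70.7, 68.8, 72.1])

# Raw years give an astronomically ill-conditioned Vandermonde matrix:
print(np.linalg.cond(np.vander(years, increasing=True)))

# Shifted years (t = year - 1986) are better behaved:
t = years - 1986.0
c = np.linalg.solve(np.vander(t, increasing=True), price)
estimate_1991 = sum(ck * (1991.0 - 1986.0) ** k for k, ck in enumerate(c))
print(estimate_1991)   # degree-5 interpolant's guess for the 1991 price
```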


    Lagrange Interpolation

    Again, start with n + 1 points satisfying yi = f(xi)

    (x0, y0), (x1, y1), . . . (xn, yn)

Idea: Construct a simple polynomial that is equal to 1 at x0 and equal to 0 at every other point:

    L0(x):   1    0    0   ...   0
             |    |    |         |
             x0   x1   x2  ...   xn

    The following does the trick:

L0(x) = [(x - x1)/(x0 - x1)] [(x - x2)/(x0 - x2)] ... [(x - xn)/(x0 - xn)] = ∏_{i=1}^{n} (x - xi)/(x0 - xi)

    Construct similar polynomials for the other points

Lj(x) = ∏_{i=0, i≠j}^{n} (x - xi)/(xj - xi)

which satisfy

Lj(xi) = 1 if i = j,  0 if i ≠ j

    Lj(x) is called the jth Lagrange polynomial.

    We can now write the interpolating polynomial for f(x) as

Pn(x) = y0L0(x) + y1L1(x) + ... + ynLn(x) = Σ_{j=0}^{n} yjLj(x)

so that

Pn(xk) = y0L0(xk) + y1L1(xk) + ... + ynLn(xk) = ykLk(xk) = yk
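The construction above translates directly into code (a sketch; the double loop makes the per-evaluation cost explicit):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate Pn(x) = sum_j y_j L_j(x) with
    L_j(x) = prod_{i != j} (x - x_i) / (x_j - x_i)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        Lj = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                Lj *= (x - xi) / (xj - xi)
        total += yj * Lj
    return total

xs = [1.0, 2.0, 2.5]
ys = [3.6788, 5.4134, 5.1303]
print(lagrange_interpolate(xs, ys, 1.75))   # 5.26735
```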


    Lagrange Interpolation Example

    Back to the data from our first example:

x      y = f(x)
0.0    0.0000
0.5    1.5163
1.0    3.6788
2.0    5.4134
2.5    5.1303
3.0    4.4808
4.0    2.9305
5.0    1.6845
6.0    0.8924
7.0    0.4468
8.0    0.2147
9.0    0.1000
10.0   0.0454

    Find the quadratic interpolating polynomial P2(x) on [1, 2.5]:

    First set up the Lagrange polynomials:

L0(x) = (x - 2)(x - 2.5) / [(1 - 2)(1 - 2.5)] = (x - 2)(x - 2.5)/1.5   (leave unexpanded!!!)

L1(x) = (x - 1)(x - 2.5) / [(2 - 1)(2 - 2.5)] = (x - 1)(x - 2.5)/(-0.5)

L2(x) = (x - 1)(x - 2) / [(2.5 - 1)(2.5 - 2)] = (x - 1)(x - 2)/0.75

Then construct the interpolating polynomial:

P2(x) = (3.6788/1.5)(x - 2)(x - 2.5) + (5.4134/(-0.5))(x - 1)(x - 2.5) + (5.1303/0.75)(x - 1)(x - 2)

      = 2.4525(x - 2)(x - 2.5) - 10.8268(x - 1)(x - 2.5) + 6.8404(x - 1)(x - 2)

and P2(1.75) = 5.26735

Never expand the Lagrange form . . . except to compare:

P2(x) = -1.1235 + 6.3362x - 1.5339x^2

(same as before . . . UNIQUE!!)
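Uniqueness can be verified numerically: the unexpanded Lagrange form and the expanded quadratic agree everywhere, up to the rounding in the printed coefficients (a sketch using only numbers from these slides):

```python
# P2 in (unexpanded) Lagrange form, straight from the slide:
def p_lagrange(x):
    return ((3.6788 / 1.5) * (x - 2) * (x - 2.5)
            + (5.4134 / -0.5) * (x - 1) * (x - 2.5)
            + (5.1303 / 0.75) * (x - 1) * (x - 2))

# Same polynomial, expanded:
def p_expanded(x):
    return -1.1235 + 6.3362 * x - 1.5339 * x ** 2

for x in [1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5]:
    assert abs(p_lagrange(x) - p_expanded(x)) < 1e-3
print(p_lagrange(1.75))   # 5.26735
```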


    Pros and Cons of the Lagrange Form

    Advantages of the Lagrange form of the interpolating polynomial:

Polynomial can be written down easily: great for manual computation!

Cost of O(n^2) is much better: Vandermonde was O(n^3).

    Disadvantages:

If we add another point, (xn+1, yn+1), then we have to start over from scratch: none of the Lk(x) can be reused.

No simple way to estimate error: we're stuck using the error term En(x) from the Theorem on page 3:

En(x) = [f^(n+1)(c) / (n + 1)!] (x - x0)(x - x1) ... (x - xn)

for which f^(n+1)(x) is typically not known!


    Newton Divided Differences

    Another way to construct the interpolating polynomial, Pn(x).

    Idea: Build up one point and one degree at a time.

    Start at degree 0, where P0(x) interpolates a single point (x0, y0)

    P0(x) = y0

Move to degree 1, by adding a term a1(x - x0):

P1(x) = y0 + [(y1 - y0)/(x1 - x0)] (x - x0)

    This pattern can be continued to higher degree:

P0(x) = a0
P1(x) = a0 + a1(x - x0)
P2(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1)

Pn(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + ... + an(x - x0)(x - x1) ... (x - x(n-1))

      = Σ_{j=0}^{n} aj ∏_{i=0}^{j-1} (x - xi)

We now substitute x = xk into Pn(x):

y0 = Pn(x0) = a0  ⇒  a0 = y0

y1 = Pn(x1) = a0 + a1(x1 - x0)  ⇒  a1 = (y1 - y0)/(x1 - x0)

which is a divided difference.


    The next divided difference comes from

y2 = Pn(x2) = a0 + a1(x2 - x0) + a2(x2 - x0)(x2 - x1)

⇒ a2 = [ (y2 - y1)/(x2 - x1) - (y1 - y0)/(x1 - x0) ] / (x2 - x0)

    It helps to introduce some notation:

f[xi] = yi

f[xi, xi+1] = (f[xi+1] - f[xi]) / (xi+1 - xi)

f[xi, xi+1, xi+2] = (f[xi+1, xi+2] - f[xi, xi+1]) / (xi+2 - xi)

...

f[xi, . . . , xi+k] = (f[xi+1, . . . , xi+k] - f[xi, . . . , xi+k-1]) / (xi+k - xi)

    Then, the coefficient ak is given by

    ak = f[x0, x1, . . . , xk]

    and is called the kth divided difference.

The calculations are much simpler when organized as a table:

x    y        a1            a2               a3
x0   f[x0]
              f[x0, x1]
x1   f[x1]                  f[x0, x1, x2]
              f[x1, x2]                      f[x0, x1, x2, x3]
x2   f[x2]                  f[x1, x2, x3]         . . .
              f[x2, x3]                      f[x1, x2, x3, x4]
x3   f[x3]                  f[x2, x3, x4]         . . .
              f[x3, x4]                      f[x2, x3, x4, x5]
x4   f[x4]                  f[x3, x4, x5]
              f[x4, x5]
x5   f[x5]
...
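The table is straightforward to build column by column (a sketch; each new column just applies the recursion for f[xi, . . . , xi+k] given above):

```python
def divided_differences(xs, ys):
    """Return the table as a list of columns: column k holds
    f[x_i, ..., x_{i+k}] for i = 0, ..., n - k."""
    table = [list(ys)]
    for k in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

# The Newton coefficients a_k = f[x_0, ..., x_k] are the top entry of each column:
xs = [1.0, 2.0, 2.5]
ys = [3.6788, 5.4134, 5.1303]
coeffs = [col[0] for col in divided_differences(xs, ys)]
print(coeffs)   # close to [3.6788, 1.7346, -1.5339]
```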


    Newton Divided Differences Example

    Use data from our previous example to construct the table.

    Take the points x = 0.0, 0.5, 1.0, 2.0, 2.5, 3.0.

x     y         a1         a2         a3        a4
0.0   0.0000
                3.0327
0.5   1.5163               1.2923
                4.3249               -1.5096
1.0   3.6788              -1.7269                0.6424
                1.7346                0.0965
2.0   5.4134              -1.5339                0.1216
               -0.5662                0.4006
2.5   5.1303              -0.7328
               -1.2990
3.0   4.4808

    The coefficients ak can be read directly off the diagonals.

    Starting at the point x = 0.0:

P0(x) = 0.0000
P1(x) = 0.0000 + 3.0327(x - 0.0)
P2(x) = 3.0327(x - 0.0) + 1.2923(x - 0.0)(x - 0.5)
P3(x) = P2(x) - 1.5096(x - 0.0)(x - 0.5)(x - 1.0)
etc.

Starting at the point x = 1.0 (as in the earlier examples):

P0(x) = 3.6788
P1(x) = 3.6788 + 1.7346(x - 1.0)
P2(x) = 3.6788 + 1.7346(x - 1.0) - 1.5339(x - 1.0)(x - 2.0)
etc.

    . . . exactly as before!
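The Newton form is also cheap to evaluate with nested multiplication (a sketch; newton_eval is our helper name, using the coefficients read off the table starting at x = 1.0):

```python
def newton_eval(coeffs, nodes, x):
    """Evaluate a0 + a1 (x - x0) + a2 (x - x0)(x - x1) + ... by nesting."""
    result = coeffs[-1]
    for a, xi in zip(coeffs[-2::-1], nodes[len(coeffs) - 2::-1]):
        result = a + (x - xi) * result
    return result

# P2 starting at x = 1.0:  3.6788 + 1.7346 (x - 1.0) - 1.5339 (x - 1.0)(x - 2.0)
coeffs = [3.6788, 1.7346, -1.5339]
nodes = [1.0, 2.0, 2.5]
print(newton_eval(coeffs, nodes, 1.75))   # about 5.2674, matching the earlier slides
```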


    Advantages of Newton Divided Differences:

    Can extract the interpolating polynomial for any interval and

    any degree desired.

Cost is half that of the Lagrange polynomial.

Easy access to an error estimate: the error term can be written as

En(x) = f[x0, x1, . . . , xn+1] (x - x0)(x - x1) ... (x - xn)

(see Exercise 20, page 129). In practice, the error can be estimated by adding one more point and computing one extra row in the table.
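A sketch of that estimate in code, taking x3 = 3.0 from the data table as the extra point (the helper name divided_difference is ours, not from the slides):

```python
def divided_difference(xs, ys):
    """The single value f[x_0, ..., x_n]."""
    col = list(ys)
    for k in range(1, len(xs)):
        col = [(col[i + 1] - col[i]) / (xs[i + k] - xs[i])
               for i in range(len(col) - 1)]
    return col[0]

# P2 interpolates (1.0, 3.6788), (2.0, 5.4134), (2.5, 5.1303); add the
# extra point (3.0, 4.4808) and form f[x0, x1, x2, x3] (x - x0)(x - x1)(x - x2):
a3 = divided_difference([1.0, 2.0, 2.5, 3.0], [3.6788, 5.4134, 5.1303, 4.4808])
x = 1.75
estimate = a3 * (x - 1.0) * (x - 2.0) * (x - 2.5)
print(estimate)   # about 0.0563
```

The true error at x = 1.75 is 5.3218 - 5.26735 = 0.05445, so the one-extra-point estimate lands in the right ballpark.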
