

Numerical Methods for Eng [ENGR 391] [Lyes KADEM 2007]

CHAPTER V

Interpolation and Regression

Topics

Interpolation Direct Method; Newton’s Divided Difference; Lagrangian Interpolation; Spline Interpolation.

Regression Linear and non-linear.

1. What is interpolation?

A function is often given only at discrete points, such as (x0, y0), (x1, y1), . . ., (xn, yn). How does one find the value of y at any other value of x? Well, a continuous function f(x) may be used to represent the n+1 data values, with f(x) passing through the n+1 points. Then we can find the value of y at any other value of x. This is called interpolation. Of course, if x falls outside the range of x for which the data is given, it is no longer interpolation but is instead called extrapolation.

So what kind of function should we choose? A polynomial is a common choice for an interpolating function because polynomials are easy to

- Evaluate,
- Differentiate, and
- Integrate

as opposed to other choices such as a sine or exponential series. Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’ points. One of the methods is called the direct method of interpolation. Other methods include Newton’s divided difference polynomial method and Lagrangian interpolation method.


Figure 5.1. Interpolation of the data points (x0, y0), (x1, y1), (x2, y2), (x3, y3) by a continuous function f(x).


1.2. Direct Method

The direct method of interpolation is based on the following principle. If we have n+1 data points, fit a polynomial of order n as given below

y = a0 + a1 x + a2 x^2 + . . . + an x^n (1)

through the data, where a0, a1, . . ., an are n+1 real constants. Since n+1 values of y are given at n+1 values of x, one can write n+1 equations. Then the n+1 constants a0, a1, . . ., an can be found by solving the n+1 simultaneous linear equations (Ahaaa!!! do you remember the previous course!!!). To find the value of y at a given value of x, simply substitute the value of x into the polynomial.

But, it is not necessary to use all the data points. How does one then choose the order of the polynomial and what data points to use? This concept and the direct method of interpolation are best illustrated using an example.

1.2.1. Example

The upward velocity of a rocket is given as a function of time in Table 1.

Table 1. Velocity as a function of time

t [s]    v(t) [m/s]
0        0
10       227.04
15       362.78
20       517.35
22.5     602.97
30       901.67

1. Determine the value of the velocity at t=16 s using the direct method and a first order polynomial.

2. Determine the value of the velocity at t=16 s using the direct method and a third order polynomial.
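The direct method reduces each part of this example to solving a small linear system for the polynomial coefficients. A minimal sketch in Python (assuming NumPy is available; the function name direct_interp and the choice of points nearest t = 16 s are illustrative):

```python
import numpy as np

# Rocket velocity data from Table 1
t = np.array([0.0, 10.0, 15.0, 20.0, 22.5, 30.0])
v = np.array([0.0, 227.04, 362.78, 517.35, 602.97, 901.67])

def direct_interp(ts, vs, x):
    """Fit v = a0 + a1*t + ... + an*t^n through the given points by
    solving the (n+1)x(n+1) linear system, then evaluate at x."""
    A = np.vander(ts, increasing=True)   # rows [1, t, t^2, ...]
    a = np.linalg.solve(A, vs)           # coefficients a0..an
    return sum(c * x**k for k, c in enumerate(a))

# First order: the two points bracketing t = 16 s (t = 15 and t = 20)
v1 = direct_interp(t[2:4], v[2:4], 16.0)

# Third order: the four points closest to t = 16 s (t = 10, 15, 20, 22.5)
v3 = direct_interp(t[1:5], v[1:5], 16.0)
```

Note that the choice of which n+1 points to use is part of the problem: points bracketing, and nearest to, the evaluation point are usually taken.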




Figure 5.2. Velocity vs. time data for the rocket example.

1.3. Newton’s divided difference interpolation

To illustrate this method, we will start with linear and quadratic interpolation; then the general form of Newton's Divided Difference Polynomial method will be presented.

1.3.1. Linear interpolation

Given (x0, y0) and (x1, y1), fit a linear interpolant through the data. Note that y0 = f(x0) and y1 = f(x1); assuming a linear interpolant means

f1(x) = b0 + b1(x - x0)

Since at x = x0: f1(x0) = f(x0) = b0,

and at x = x1: f1(x1) = f(x1) = b0 + b1(x1 - x0). Then

b1 = (f(x1) - f(x0)) / (x1 - x0)

so the linear interpolant

f1(x) = b0 + b1(x - x0)

becomes

f1(x) = f(x0) + ((f(x1) - f(x0)) / (x1 - x0)) (x - x0)

1.3.2. Quadratic interpolation

Given (x0, y0), (x1, y1) and (x2, y2), fit a quadratic interpolant through the data. Note that y0 = f(x0), y1 = f(x1) and y2 = f(x2), and assume the quadratic interpolant f2(x) given by

f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)

At x = x0:

f2(x0) = f(x0) = b0

At x = x1:

f2(x1) = f(x1) = b0 + b1(x1 - x0), then

b1 = (f(x1) - f(x0)) / (x1 - x0)

At x = x2:

f2(x2) = f(x2) = b0 + b1(x2 - x0) + b2(x2 - x0)(x2 - x1), then

b2 = (((f(x2) - f(x1)) / (x2 - x1)) - ((f(x1) - f(x0)) / (x1 - x0))) / (x2 - x0)

Hence the quadratic interpolant is given by

f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)

with b0, b1 and b2 as above.



Figure 5.4. Quadratic interpolation

1.3.3. General Form of Newton’s Divided Difference Polynomial

In the two previous cases, we saw how linear and quadratic interpolants are derived by Newton's Divided Difference polynomial method. Let us analyze the quadratic polynomial interpolant formula

f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)

where

b0 = f(x0)
b1 = (f(x1) - f(x0)) / (x1 - x0)
b2 = (((f(x2) - f(x1)) / (x2 - x1)) - ((f(x1) - f(x0)) / (x1 - x0))) / (x2 - x0)

Note that b0, b1 and b2 are finite divided differences: the zeroth, first, and second finite divided differences, respectively. Denoting the zeroth divided difference by

f[x0] = f(x0)

the first divided difference by

f[x1, x0] = (f(x1) - f(x0)) / (x1 - x0)

and the second divided difference by

f[x2, x1, x0] = (f[x2, x1] - f[x1, x0]) / (x2 - x0)


where f[x0], f[x1, x0] and f[x2, x1, x0] are called bracketed functions of their variables enclosed in square brackets.

We can write:

f2(x) = f[x0] + f[x1, x0](x - x0) + f[x2, x1, x0](x - x0)(x - x1)

This leads to the general form of the Newton's divided difference polynomial for n+1 data points (x0, y0), (x1, y1), . . ., (xn, yn), as

fn(x) = b0 + b1(x - x0) + . . . + bn(x - x0)(x - x1) . . . (x - x(n-1))

where

b0 = f[x0]
b1 = f[x1, x0]
b2 = f[x2, x1, x0]
. . .
bn = f[xn, x(n-1), . . ., x0]

where the definition of the nth divided difference is

f[xn, x(n-1), . . ., x0] = (f[xn, x(n-1), . . ., x1] - f[x(n-1), x(n-2), . . ., x0]) / (xn - x0)

From the above definition, it can be seen that the divided differences are calculated recursively.

For an example of a third order polynomial, given (x0, y0), (x1, y1), (x2, y2) and (x3, y3),

f3(x) = f[x0] + f[x1, x0](x - x0) + f[x2, x1, x0](x - x0)(x - x1) + f[x3, x2, x1, x0](x - x0)(x - x1)(x - x2)


Example

Use the previous data for the upward velocity of a rocket to determine the value of the velocity at t=16 s, using third order polynomial interpolation with Newton's Divided Difference polynomial.
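The recursive divided difference table and the nested Newton form can be sketched as follows (NumPy assumed; divided_differences and newton_eval are illustrative names), using the four rocket-data points nearest t = 16 s:

```python
import numpy as np

# The four points from Table 1 closest to t = 16 s
t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])

def divided_differences(x, y):
    """Return [f[x0], f[x1,x0], f[x2,x1,x0], ...] computed recursively."""
    n = len(x)
    coef = y.astype(float).copy()
    for j in range(1, n):
        # work backwards so coef[i] becomes the j-th divided difference
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_eval(x, coef, xp):
    """Evaluate b0 + b1(xp-x0) + b2(xp-x0)(xp-x1) + ... in nested form."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (xp - x[i]) + coef[i]
    return result

b = divided_differences(t, v)
v16 = newton_eval(t, b, 16.0)
```

Because the interpolating cubic through four points is unique, this returns the same value as the direct method with the same four points.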


1.4. Lagrangian Interpolation

Polynomial interpolation involves finding a polynomial of order ‘n’ that passes through the ‘n+1’ points. One of the methods to find this polynomial is called Lagrangian Interpolation.

The Lagrangian interpolating polynomial is given by

fn(x) = Σ (i = 0 to n) Li(x) f(xi)

where 'n' in fn(x) stands for the nth order polynomial that approximates the function y = f(x) given at n+1 data points as (x0, y0), (x1, y1), . . ., (xn, yn), and

Li(x) = Π (j = 0 to n, j ≠ i) (x - xj) / (xi - xj)

Li(x) is a weighting function that includes a product of n terms, with the term j = i omitted.
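The Lagrangian form transcribes almost directly into code. A sketch (NumPy assumed; lagrange_eval is an illustrative name), applied to the four rocket-data points nearest t = 16 s:

```python
import numpy as np

def lagrange_eval(x, y, xp):
    """Evaluate sum_i L_i(xp) * y_i, where
    L_i(xp) = prod over j != i of (xp - x_j) / (x_i - x_j)."""
    total = 0.0
    for i in range(len(x)):
        L = 1.0
        for j in range(len(x)):
            if j != i:
                L *= (xp - x[j]) / (x[i] - x[j])
        total += L * y[i]
    return total

# Rocket data from Table 1, four points around t = 16 s
t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])
v16 = lagrange_eval(t, v, 16.0)
```

At a data point xi, every weight except Li vanishes and Li = 1, so the polynomial reproduces the data exactly.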

1.5. Spline Method of Interpolation

The spline method was introduced to address one of the drawbacks of polynomial interpolation: when the order n becomes large, in many cases oscillations appear in the resulting polynomial. This was shown by Runge when he interpolated data based on the simple function

y = 1 / (1 + 25x²)

on the interval [-1, 1]. For example, take six equidistantly spaced points in [-1, 1] and find y at these points, as given in Table 2.

Table 2: Six equidistantly spaced points in [-1, 1]

x       y = 1 / (1 + 25x²)
-1.0    0.038461
-0.6    0.1
-0.2    0.5
 0.2    0.5
 0.6    0.1
 1.0    0.038461

Example

Use the previous data for the upward velocity of a rocket to determine the value of the velocity at t=16 s using third order Lagrangian polynomial interpolation.


Figure 5.5. Fifth order polynomial vs. the exact function.

Now, through these six points, we can pass a fifth order polynomial. When plotting the fifth order polynomial and the original function, you can notice that the two do not match well. So maybe you will consider choosing more points in the interval [-1, 1] to get a better match, but it diverges even more (see figure below). In fact, Runge found that as the order of the polynomial approaches infinity, the polynomial diverges in the intervals -1 ≤ x < -0.726 and 0.726 < x ≤ 1.
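Runge's observation is easy to check numerically. A sketch (NumPy assumed) compares the maximum error of a 5th order and an 11th order equidistant interpolant of y = 1/(1 + 25x²):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def max_interp_error(n_points):
    """Interpolate runge() at n_points equidistant nodes in [-1, 1] with a
    polynomial of order n_points - 1; return the max error on a fine grid."""
    xi = np.linspace(-1.0, 1.0, n_points)
    coeffs = np.polyfit(xi, runge(xi), n_points - 1)  # interpolating polynomial
    xs = np.linspace(-1.0, 1.0, 1001)
    return np.max(np.abs(np.polyval(coeffs, xs) - runge(xs)))

err_low = max_interp_error(6)    # 5th order polynomial
err_high = max_interp_error(12)  # 11th order polynomial
```

Raising the order makes the maximum error near the ends of the interval larger, not smaller, which is exactly the divergence in the figure.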

Figure 5.6. Higher order polynomial interpolation is a bad idea.

1.5.1. Linear spline interpolation

Given (x0, y0), (x1, y1), . . ., (xn, yn), fit linear splines to the data. This simply involves connecting consecutive data points by straight lines. So if the above data are given in ascending order, the linear splines are given by (with yi = f(xi))

f(x) = f(x0) + ((f(x1) - f(x0)) / (x1 - x0)) (x - x0),   x0 ≤ x ≤ x1


Figure 5.7. Linear splines.

. . .

f(x) = f(x(n-1)) + ((f(xn) - f(x(n-1))) / (xn - x(n-1))) (x - x(n-1)),   x(n-1) ≤ x ≤ xn

Note that the terms (f(xi) - f(x(i-1))) / (xi - x(i-1)) in the above function are simply the slopes between x(i-1) and xi.
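Linear splines are exactly what numpy.interp evaluates. A quick check against the manual slope formula, using the rocket data of Table 1:

```python
import numpy as np

# Rocket data from Table 1
t = np.array([0.0, 10.0, 15.0, 20.0, 22.5, 30.0])
v = np.array([0.0, 227.04, 362.78, 517.35, 602.97, 901.67])

# Piecewise-linear (linear spline) interpolation at t = 16 s
v16 = np.interp(16.0, t, v)

# Equivalent manual evaluation on the bracketing interval [15, 20]
slope = (v[3] - v[2]) / (t[3] - t[2])
v16_manual = v[2] + slope * (16.0 - t[2])
```

Only the two points bracketing the evaluation point enter the result, which is why linear splines never oscillate.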

1.5.2. Quadratic Splines

In these splines, a quadratic polynomial approximates the data between two consecutive data points. Given (x0, y0), (x1, y1), . . ., (xn, yn), the splines are given by

f(x) = a1 x² + b1 x + c1,   x0 ≤ x ≤ x1
f(x) = a2 x² + b2 x + c2,   x1 ≤ x ≤ x2
. . .
f(x) = an x² + bn x + cn,   x(n-1) ≤ x ≤ xn



Now, how do we find the coefficients of these quadratic splines? There are 3n such coefficients:

ai, i = 1, 2, . . ., n
bi, i = 1, 2, . . ., n
ci, i = 1, 2, . . ., n

To find the 3n unknowns, we need 3n equations, which are then solved simultaneously. These 3n equations are found as follows.

1) Each quadratic spline goes through two consecutive data points:

a1 x0² + b1 x0 + c1 = f(x0)
a1 x1² + b1 x1 + c1 = f(x1)
. . .
ai x(i-1)² + bi x(i-1) + ci = f(x(i-1))
ai xi² + bi xi + ci = f(xi)
. . .
an x(n-1)² + bn x(n-1) + cn = f(x(n-1))
an xn² + bn xn + cn = f(xn)

This condition gives 2n equations, as there are n quadratic splines each going through two consecutive data points.

2) The first derivatives of two quadratic splines are continuous at the interior points. For example, the derivative of the first spline

a1 x² + b1 x + c1

is

2 a1 x + b1

The derivative of the second spline

a2 x² + b2 x + c2

is

2 a2 x + b2

and the two are equal at x = x1, giving

2 a1 x1 + b1 = 2 a2 x1 + b2
2 a1 x1 + b1 - 2 a2 x1 - b2 = 0

Similarly, at the other interior points,


2 a2 x2 + b2 - 2 a3 x2 - b3 = 0
. . .
2 a(n-1) x(n-1) + b(n-1) - 2 an x(n-1) - bn = 0

Since there are (n-1) interior points, we have (n-1) such equations. Now the total number of equations is (2n) + (n-1) = (3n-1). We still need one more equation. We can assume that the first spline is linear, that is

a1 = 0

This gives us 3n equations and 3n unknowns. These can be solved by a number of techniques used to solve simultaneous linear equations.
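The 3n-by-3n system described above can be assembled and solved directly. A sketch (NumPy assumed; quadratic_splines is an illustrative name), applied to the rocket data of Table 1:

```python
import numpy as np

def quadratic_splines(x, y):
    """Solve the 3n-by-3n system for the coefficients (a_i, b_i, c_i) of
    n quadratic splines a_i*t^2 + b_i*t + c_i: 2n interpolation conditions,
    n-1 derivative-continuity conditions, and a_1 = 0."""
    n = len(x) - 1                       # number of splines
    A = np.zeros((3 * n, 3 * n))
    rhs = np.zeros(3 * n)
    row = 0
    for i in range(n):                   # spline i covers [x[i], x[i+1]]
        for xv, yv in ((x[i], y[i]), (x[i + 1], y[i + 1])):
            A[row, 3 * i: 3 * i + 3] = [xv**2, xv, 1.0]
            rhs[row] = yv
            row += 1
    for i in range(n - 1):               # 2*a*x + b continuous at x[i+1]
        A[row, 3 * i: 3 * i + 2] = [2 * x[i + 1], 1.0]
        A[row, 3 * (i + 1): 3 * (i + 1) + 2] = [-2 * x[i + 1], -1.0]
        row += 1
    A[row, 0] = 1.0                      # extra condition: a_1 = 0
    return np.linalg.solve(A, rhs).reshape(n, 3)

# Rocket data from Table 1
t = np.array([0.0, 10.0, 15.0, 20.0, 22.5, 30.0])
v = np.array([0.0, 227.04, 362.78, 517.35, 602.97, 901.67])
coef = quadratic_splines(t, v)

# t = 16 s lies in [15, 20], which is spline index 2
a2, b2, c2 = coef[2]
v16 = a2 * 16.0**2 + b2 * 16.0 + c2
```

Each spline reproduces its two endpoint values exactly, and only the splines adjacent to the evaluation point influence the result.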


2. Regression

2.2. What is regression?

Regression analysis gives information on the relationship between a response variable and one or more independent variables, to the extent that such information is contained in the data. The goal of regression analysis is to express the response variable as a function of the predictor variables. Quality of fit and accuracy of conclusions depend on the data used; hence non-representative or improperly compiled data result in poor fits and conclusions. Thus, for effective use of regression analysis one must:

- Investigate the data collection process,
- Discover any limitations in the data collected,
- Restrict conclusions accordingly.

Once a regression relationship is obtained, it can be used to predict values of the response variable, identify variables that most affect the response, or verify hypothesized causal models of the response. The value of each predictor variable can be assessed through statistical tests on the estimated coefficients (multipliers) of the predictor variables.

2.3. Linear regression

Linear regression is the most popular regression model. In this model, we wish to predict the response to n data points (x1, y1), (x2, y2), . . ., (xn, yn) by a regression model given by

y = a0 + a1 x

where a0 and a1 are the constants of the regression model.

A measure of goodness of fit, that is, how well y = a0 + a1 x predicts the response variable y, is the magnitude of the residual at each of the n data points:

Ei = yi - (a0 + a1 xi)

Ideally, if all the residuals are zero, one may have found an equation in which all the points lie on the model. Thus, minimization of the residual is an objective of obtaining regression coefficients.

The most popular method to minimize the residual is the least squares method, where the estimates of the constants of the model are chosen such that the sum of the squared residuals is minimized, that is, minimize

Sr = Σ Ei² = Σ (yi - a0 - a1 xi)²

where Σ denotes summation over i = 1, . . ., n.

Why minimize the sum of the squares of the residuals? Why not, for instance, minimize the sum of the residuals or the sum of the absolute values of the residuals? Alternatively, constants of the model can be chosen such that the average residual is zero without making individual residuals small. For example, let us analyze the following table.

x     y
2.0   4.0
3.0   6.0
2.0   6.0
3.0   8.0

To explain this data by a straight line regression model,


y = a0 + a1 x

and using minimization of Σ Ei as the criterion to find a0 and a1, we find that (Figure 5.8)

y = 4x - 4

Figure 5.8. Regression curve y = 4x - 4 for y vs. x data.

The sum of the residuals is Σ Ei = 0, as shown in the table below.

x     y     ypredicted   ε = y - ypredicted
2.0   4.0   4.0           0.0
3.0   6.0   8.0          -2.0
2.0   6.0   4.0           2.0
3.0   8.0   8.0           0.0

So does this give us the smallest error? It does, as Σ Ei = 0. But it does not give unique values for the parameters of the model. A straight-line model y = 6


Figure 5.9. Regression curve y = 6 for y vs. x data.

also makes Σ Ei = 0, as shown in the table below.

x     y     ypredicted   ε = y - ypredicted
2.0   4.0   6.0          -2.0
3.0   6.0   6.0           0.0
2.0   6.0   6.0           0.0
3.0   8.0   6.0           2.0

Since this criterion does not give a unique regression model, it cannot be used for finding the regression coefficients. Why? Because we want to minimize

Σ Ei = Σ (yi - a0 - a1 xi)

Differentiating this expression with respect to a0 and a1, we get

∂(Σ Ei)/∂a0 = -n
∂(Σ Ei)/∂a1 = -Σ xi

Setting these to zero gives n = 0, which is impossible. Therefore, unique values of a0 and a1 do not exist.
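A quick numerical check of this non-uniqueness, using the four-point table above: both y = 4x - 4 and y = 6 give a zero sum of residuals, so minimizing Σ Ei cannot pick between them.

```python
# Data from the table above
xs = [2.0, 3.0, 2.0, 3.0]
ys = [4.0, 6.0, 6.0, 8.0]

res_line = [y - (4.0 * x - 4.0) for x, y in zip(xs, ys)]  # y = 4x - 4
res_flat = [y - 6.0 for y in ys]                          # y = 6

sum_line = sum(res_line)   # residuals: 0, -2, 2, 0
sum_flat = sum(res_flat)   # residuals: -2, 0, 0, 2
```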


You may think that the reason the Σ Ei minimization criterion does not work is that negative residuals cancel positive residuals. So would minimizing Σ |Ei| be better? Let us look at the data given below for the equation y = 4x - 4. It makes Σ |Ei| = 4, as shown in the following table.

x     y     ypredicted   |ε| = |y - ypredicted|
2.0   4.0   4.0          0.0
3.0   6.0   8.0          2.0
2.0   6.0   4.0          2.0
3.0   8.0   8.0          0.0

The value Σ |Ei| = 4 also exists for the straight line model y = 6, and no straight line for this data has Σ |Ei| < 4. Again, we find the regression coefficients are not unique, and hence this criterion also cannot be used for finding the regression model.

Let us instead use the least squares criterion, where we minimize

Sr = Σ Ei² = Σ (yi - a0 - a1 xi)²

Sr is called the sum of the squares of the residuals.


Figure 5.10. Linear regression of y vs. x data showing the residual at a typical point, xi.

To find a0 and a1, we minimize Sr with respect to a0 and a1:

∂Sr/∂a0 = -2 Σ (yi - a0 - a1 xi) = 0
∂Sr/∂a1 = -2 Σ (yi - a0 - a1 xi) xi = 0

giving

Σ a0 + a1 Σ xi = Σ yi
a0 Σ xi + a1 Σ xi² = Σ xi yi

Noting that Σ a0 = n a0, we get

n a0 + a1 Σ xi = Σ yi
a0 Σ xi + a1 Σ xi² = Σ xi yi

Solving the above equations gives:

a1 = (n Σ xi yi - Σ xi Σ yi) / (n Σ xi² - (Σ xi)²)
a0 = (Σ xi² Σ yi - Σ xi Σ xi yi) / (n Σ xi² - (Σ xi)²)


Page 17: CHAPTER I - Concordia Universityusers.encs.concordia.ca/~kadem/Interpolation and... · Web viewOne of the methods is called the direct method of interpolation. Other methods include

Numerical Methods for Eng [ENGR 391] [Lyes KADEM 2007]

Redefining

x̄ = (Σ xi)/n,  ȳ = (Σ yi)/n

we can rewrite

a0 = ȳ - a1 x̄
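The closed-form expressions for a1 and a0 can be sketched directly (NumPy assumed; linear_least_squares is an illustrative name). Applied to the four-point table used earlier, the least squares criterion yields a unique line, y = 1 + 2x, unlike the Σ Ei and Σ |Ei| criteria:

```python
import numpy as np

def linear_least_squares(x, y):
    """Closed-form least-squares line y = a0 + a1*x from the normal equations."""
    n = len(x)
    sx, sy = np.sum(x), np.sum(y)
    sxx, sxy = np.sum(x * x), np.sum(x * y)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx**2)
    a0 = sy / n - a1 * sx / n            # a0 = ybar - a1 * xbar
    return a0, a1

x = np.array([2.0, 3.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 6.0, 8.0])
a0, a1 = linear_least_squares(x, y)
```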

2.4. Nonlinear models using least squares

2.4.1. Exponential model

Given (x1, y1), (x2, y2), . . ., (xn, yn), we can fit the exponential model

y = a e^(bx)

to the data. The variables a and b are the constants of the exponential model. The residual at each data point is

Ei = yi - a e^(b xi)

The sum of the squares of the residuals is

Sr = Σ (yi - a e^(b xi))²

To find the constants a and b of the exponential model, we minimize Sr by differentiating with respect to a and b and equating the resulting equations to zero.


∂Sr/∂a = -2 Σ (yi - a e^(b xi)) e^(b xi) = 0
∂Sr/∂b = -2 Σ (yi - a e^(b xi)) a xi e^(b xi) = 0

or

Σ yi e^(b xi) - a Σ e^(2b xi) = 0
Σ yi xi e^(b xi) - a Σ xi e^(2b xi) = 0

These equations are nonlinear in a and b and thus cannot be solved in closed form, as was the case for linear regression. In general, iterative methods must be used to find the values of a and b. However, in this case, a can be written explicitly in terms of b as

a = (Σ yi e^(b xi)) / (Σ e^(2b xi))

Substituting into the second equation gives

Σ yi xi e^(b xi) - ((Σ yi e^(b xi)) / (Σ e^(2b xi))) Σ xi e^(2b xi) = 0

This equation is still nonlinear in b and can be solved by numerical methods such as the bisection method or the secant method.
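The elimination of a and the remaining one-dimensional search in b can be sketched as follows (NumPy assumed; the data are synthetic, generated from y = 2e^(0.5x) purely for illustration, and the coarse grid scan stands in for bisection or the secant method):

```python
import numpy as np

# Synthetic data from y = 2*exp(0.5*x) (assumed values for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(0.5 * x)

def a_of_b(b):
    """Optimal a for a fixed b: a = sum(y*e^(bx)) / sum(e^(2bx))."""
    return np.sum(y * np.exp(b * x)) / np.sum(np.exp(2.0 * b * x))

def Sr(b):
    """Sum of squared residuals with a eliminated."""
    return np.sum((y - a_of_b(b) * np.exp(b * x)) ** 2)

# Scan b on a grid and keep the minimizer of Sr (crude but robust;
# bisection or the secant method on dSr/db converges faster)
bs = np.linspace(0.1, 1.0, 901)
b = bs[np.argmin([Sr(bb) for bb in bs])]
a = a_of_b(b)
```

Because the data are exact, the search recovers b = 0.5 and a = 2 up to the grid spacing.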

2.4.2. Growth model

Growth models, common in scientific fields, have been developed and used successfully in specific situations. They are used to describe how something grows with changes in a regressor variable (often time); examples include the growth of a population with time. Growth models include

y = a / (1 + b e^(-cx))

where a, b and c are the constants of the model. At x = 0, y = a/(1 + b), and as x → ∞, y → a.

The residual at each data point xi is

Ei = yi - a / (1 + b e^(-c xi))

The sum of the squares of the residuals is

Sr = Σ (yi - a / (1 + b e^(-c xi)))²


To find the constants a, b and c, we minimize Sr by differentiating with respect to a, b and c and equating the resulting equations to zero:

∂Sr/∂a = 0,  ∂Sr/∂b = 0,  ∂Sr/∂c = 0.

This yields three simultaneous nonlinear equations; it is then possible to use, for example, the Newton-Raphson method to solve them for a, b and c.

2.4.3. Polynomial Models

Given n data points (x1, y1), (x2, y2), . . ., (xn, yn), use the least squares method to regress the data to an mth order polynomial

y = a0 + a1 x + a2 x² + . . . + am x^m,   m < n

The residual at each data point is given by

Ei = yi - a0 - a1 xi - . . . - am xi^m

The sum of the squares of the residuals is given by

Sr = Σ (yi - a0 - a1 xi - . . . - am xi^m)²

To find the constants of the polynomial regression model, we set the derivatives of Sr with respect to each ai to zero, that is,

∂Sr/∂ai = 0,   i = 0, 1, . . ., m


Writing these equations in matrix form gives

| n          Σxi         . . .  Σxi^m     |   | a0 |   | Σyi      |
| Σxi        Σxi²        . . .  Σxi^(m+1) | . | a1 | = | Σxi yi   |
| . . .      . . .       . . .  . . .     |   | .. |   | . . .    |
| Σxi^m      Σxi^(m+1)   . . .  Σxi^2m    |   | am |   | Σxi^m yi |

The above system is solved for a0, a1, . . ., am.
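Assembling and solving the normal-equation matrix above can be sketched as follows (NumPy assumed; poly_least_squares and the sample quadratic data are illustrative):

```python
import numpy as np

def poly_least_squares(x, y, m):
    """Build and solve the (m+1)x(m+1) normal-equation system for an
    m-th order polynomial y = a0 + a1*x + ... + am*x^m."""
    A = np.array([[np.sum(x ** (i + j)) for j in range(m + 1)]
                  for i in range(m + 1)])
    rhs = np.array([np.sum(y * x ** i) for i in range(m + 1)])
    return np.linalg.solve(A, rhs)       # [a0, a1, ..., am]

# Data lying exactly on y = 1 + 2x - 0.5x^2, so the fit can be checked
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 1.0 + 2.0 * x - 0.5 * x**2
a = poly_least_squares(x, y, 2)
```

In practice the normal equations become ill-conditioned as m grows, which is why library routines solve the least squares problem by other factorizations instead.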

2.4.4. Logarithmic Functions

The form of the log regression model is

y = a0 + a1 ln x

This is a linear function between y and ln x, and the usual least squares method applies, in which y is the response variable and ln x is the regressor.

2.4.5. Power Functions

The power function equation describes many scientific and engineering phenomena:

y = a x^b

The method of least squares is applied to the power function by first linearizing the data (the assumption is that b is not known; if the only unknown were a, a linear relation would already exist between y and x^b). The linearization of the data is as follows:

ln y = ln a + b ln x

The resulting equation shows a linear relation between ln y and ln x. We can put


z = ln x,  w = ln y,  a0 = ln a,  a1 = b

then

w = a0 + a1 z

and applying the linear least squares results, we get

a1 = (n Σ zi wi - Σ zi Σ wi) / (n Σ zi² - (Σ zi)²)
a0 = w̄ - a1 z̄

Since a0 and a1 can be found, the original constants of the model are

b = a1
a = e^(a0)
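The linearization steps above can be sketched end to end (NumPy assumed; the data are synthetic, generated from y = 3x² purely for illustration):

```python
import numpy as np

# Synthetic data from y = 3*x^2 (assumed values for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 * x ** 2

# Linearize: ln y = ln a + b ln x, then ordinary linear least squares
a1, a0 = np.polyfit(np.log(x), np.log(y), 1)   # slope, intercept
b = a1
a = np.exp(a0)
```

Because the data are exact, the fit recovers a = 3 and b = 2. Note that least squares in the log domain weights the data differently than least squares on the original y values.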
