faculty.smu.edu/shampine/MA5315/oldsolutions.pdf

Solutions for the MA 5315 Assignments

Numerical analysis is an art as well as a science. Often the answer to a question is a discussion of relevant issues and a weighing of factors in the present circumstances. Because there may be no “right” answer, the brief solutions provided here may not be the only acceptable answers to the questions.

• Aug. 24–Aug. 31

1. Work through all the computations in the reading assigned from the Matlab tutorial. You are to report the version of Matlab that you are using and affirm that you have done the work.

Solution No solution is needed.

2. Many programs provide only a pure absolute error control. Suppose that you wish to compute a quantity that you know is about −261 to a relative accuracy of 10⁻³. What absolute error tolerance should you use to accomplish this? Suppose now that you have no idea how big the answer is, but you want to compute it to a relative accuracy of 10⁻³. How could you do this?

Solution With an absolute error tolerance τ, a program is to produce an approximation approx to a desired quantity true with

|true − approx| ≤ τ

This corresponds to a relative accuracy of

|true − approx| / |true| ≤ τ / |true|

The approximation will have a relative accuracy of 10⁻³ if

τ / |true| ≤ 10⁻³

or equivalently, if τ ≤ |true| × 10⁻³. If we know that true is about −261, this tells us that we should use an absolute error tolerance of about |−261| × 10⁻³ = 0.261 to get a relative accuracy of about 10⁻³. If we have no idea how big true is, we can choose a nominal value for τ and try to compute an approximation. If the program is unable to compute an approximation with this tolerance, we must reduce τ and try again. Once the program is able to compute an approximation, we can use approx instead of true in the expression for an absolute error tolerance that will give the desired relative error. If τ/|approx| is rather less than 10⁻³, we appear to have the desired accuracy and


we are done. If not, we try again with an absolute error tolerance τ = |approx| × 10⁻³. This argument is closely related to the fact that we cannot implement the theoretical definition of relative error, |true − approx|/|true|, because we do not know true. What is done in practice is to take the error relative to the approximate solution, |true − approx|/|approx|.
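The retry scheme described above can be sketched in a few lines. Here solve and the toy solver fake are hypothetical stand-ins for any routine that accepts only an absolute error tolerance; they are not from the text.

```python
# sketch of the tolerance-adjustment loop; "solve" is a hypothetical
# routine with pure absolute error control that returns None on failure
def solve_to_relative(solve, rel_tol, tau=1.0):
    approx = solve(tau)
    while approx is None:              # solver failed: tighten tau, retry
        tau /= 10.0
        approx = solve(tau)
    if tau / abs(approx) > rel_tol:    # not yet accurate enough
        approx = solve(abs(approx) * rel_tol)
    return approx

# toy solver whose true answer is -261 and whose error is tau/2
fake = lambda tau: -261.0 + tau / 2.0
root = solve_to_relative(fake, 1e-3)
print(abs((root + 261.0) / 261.0) <= 1e-3)  # True
```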

• Sept. 3–Sept. 7

1. Many programs provide only a pure absolute error control. Suppose that you want to compute a quantity that you know is about 10⁵. Is it meaningful to ask the code for an accuracy of 10⁻¹⁰? What about 10⁻¹²? (Your answer should be for IEEE arithmetic, the standard nowadays.) If a tolerance is not meaningful, explain why.

Solution An absolute error tolerance of τ corresponds to a relative error of

|true − approx| / |true| ≤ τ / |true|

It is not meaningful to ask for a relative accuracy less than a unit roundoff because that is the best we can expect of the floating point representation of the true value. In the IEEE standard, the unit roundoff, called eps in Matlab, is about 10⁻¹⁶. Accordingly, if true is about 10⁵, an absolute error tolerance of 10⁻¹² corresponds to a relative accuracy of about 10⁻¹²/10⁵ = 10⁻¹⁷. This is smaller than a unit roundoff, so this tolerance is not meaningful in the precision available. On the other hand, an absolute error tolerance of 10⁻¹⁰ corresponds to a relative error of about 10⁻¹⁵, which is very stringent, but still meaningful because it is bigger than a unit roundoff.
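A quick check of this reasoning in double precision (a Python sketch; the helper name is ours):

```python
import sys

def meaningful(true_size, abs_tol):
    # the implied relative accuracy abs_tol/|true| must exceed
    # the unit roundoff for the tolerance to be achievable
    return abs_tol / abs(true_size) > sys.float_info.epsilon

print(meaningful(1e5, 1e-10))  # True: implied relative accuracy ~1e-15
print(meaningful(1e5, 1e-12))  # False: ~1e-17, below a unit roundoff
```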

2. Do Exercise 5, p. 42. This does not involve any computing—just explain why you might expect the last entry in the table to correspond to x = 1.0, but there is, in fact, another entry.

Solution The quantity x is increased by 0.1 in each pass through the while loop, so it has values 0.1, 0.2, . . . , 0.9 and then the value 1.0, which would cause the program to exit the loop. The point of the exercise is that the computations are actually done in binary. There is first an error in converting the decimal number 0.1 to binary. In binary arithmetic the values of x are not exactly the values we get in decimal arithmetic. In particular, if we were to compute a value that is just slightly less than 1.0, the test x < 1.0 would be satisfied and another pass made through the loop. The default format for displaying x rounds the binary representation to the values 0.1, 0.2, . . . that we expect. Generally we can ignore the difference between describing a problem using decimal arithmetic and solving it using binary, but the situation illustrated by this exercise is not unusual in computing practice. You


should be very cautious about testing whether two floating point numbers are equal; indeed, some compilers flag this as a probable error. Coding the while loop as a for loop avoids the difficulty of this exercise:

for i = 1:10
    x = i/10;
    % rest of computation here
end

Notice that x is being computed directly here rather than by summation as in the text. This is not important in the circumstances, but it provides more accurate values for x because errors do not accumulate.
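The off-by-one behavior is easy to reproduce; this small sketch (in Python, which uses the same IEEE doubles) counts the passes:

```python
# summing 0.1 repeatedly in binary floating point: after ten additions
# x is slightly below 1.0, so the loop makes an eleventh pass
x = 0.0
count = 0
while x < 1.0:
    x += 0.1
    count += 1
print(count)       # 11, not 10
print(x == 1.0)    # False
```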

• Sept. 10–Sept. 14

1. Do Exercise 7, p. 55. The text has a collection of Taylor’s series starting on p. 16 that might be helpful.

Solution The first task is to explain why evaluating

(1 − cos(x²))/x

in a straightforward way is unsatisfactory for small x. The series for cos(x) on p. 17 shows that

cos(x²) = 1 − x⁴/2 + x⁸/24 − …

Obviously cos(x²) approaches 1 very quickly as x → 0. As a result, there is severe cancelation in forming the numerator of the fraction, making any error in evaluating cos(x²) relatively large in the difference. The denominator of the fraction is small, which amplifies the error in the numerator. These difficulties are avoided by performing the operations analytically on the series. We have

Q(x) = ∫₀ˣ sin(xt) dt = (1 − cos(x²))/x = x³/2 − x⁷/24 + …

For small x we need evaluate only a few terms of the resulting series to get an accurate approximation to Q(x) in floating point arithmetic. Though not needed for this exercise, let us consider the relative accuracy of using, say, one term in the series to approximate Q(x). For small x ≠ 0, the relative error is

|Q(x) − x³/2| / |Q(x)| = |−x⁷/24 + …| / |x³/2 − x⁷/24 + …| = |x⁴/12 − …| / |1 − x⁴/12 + …| ≈ x⁴/12

From this we see that even one term provides an approximation accurate to a unit roundoff if |x| ≤ 10⁻⁴.
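The cancelation is easy to see numerically; a Python sketch comparing the straightforward formula with the two-term series:

```python
import math

def q_naive(x):
    # straightforward evaluation: the subtraction cancels for small x
    return (1.0 - math.cos(x**2)) / x

def q_series(x):
    # two terms of the series x^3/2 - x^7/24 + ...
    return x**3 / 2.0 - x**7 / 24.0

x = 1e-5
print(q_naive(x))   # 0.0: cos(x^2) rounds to exactly 1
print(q_series(x))  # ~5e-16, correct to full precision
```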


2. The following definition of an anonymous function has a syntax error. You are first to correct this error and then recode the function using array operations so as to vectorize it.

@(x) f = x*sin(x^2);

Define the corrected and vectorized function at the command line. Use it to evaluate f([2,3]). Print either the whole command window or just the portion showing these lines of code and the answer computed. Turn this in.

Solution

>> f = @(x) x.*sin(x.^2);
>> f([2,3])
ans =
   -1.5136    1.2364

3. The two Taylor series

f(x_0 ± h) = f(x_0) ± f′(x_0)h + f″(x_0)h²/2 ± …

and a little manipulation establish the finite difference approximation

f′(x_0) ≈ [f(x_0 + h) − f(x_0 − h)] / (2h)

and show that the error of this approximation goes to zero like h². In principle you can approximate f′(x_0) as accurately as you like with two evaluations of f(x) using sufficiently small h. Explain why this approximation is zero in finite precision arithmetic for all sufficiently small h. Squire and Trapp point out that complex arithmetic can be used to avoid this difficulty. With i = √−1, the Taylor series

f(x_0 + ih) = f(x_0) + f′(x_0)ih − f″(x_0)h²/2 + …

and a little manipulation establish the complex-step derivative approximation

f′(x_0) ≈ Im(f(x_0 + ih)) / h

and show that the error also goes to zero like h². Here Im(z) is the imaginary part of a complex number z. Download the program partialCSDex.m and study it. The help entries for imag and complex will help you understand the coding. Modify this program so that it computes and displays finite difference approximations along with the complex-step approximations. Add a legend and place it so that you end up with a figure like the one here.


[Figure: finite difference (FD) and complex-step (CSD) derivative approximations plotted for h from 10⁻²⁰ to 10⁻¹²; vertical axis from −0.5 to 4.5; legend: FD, CSD.]

As you see in the figure, the argument you make for finite difference approximations does not apply to complex-step differences—a very small h can be used to compute a very accurate approximate derivative. Along with your explanation of the difficulty with finite differences, you are to turn in a listing of your program and the plot it produces.

Solution When h is less than a unit roundoff in x0, i.e., |h| < u|x0|, both x0+h and x0−h are evaluated as x0 in floating point arithmetic. The difference f(x0+h) − f(x0−h) is then computed as f(x0) − f(x0), which is zero. Only a few changes are needed in partialCSDex. Corresponding to the anonymous function CSDf, add the function

FDf = @(x0,h) (f(x0+h) - f(x0-h))/(2*h);

As with CSD, preallocate storage for the finite difference results:

FD = zeros(20,1);

Along with the evaluation of CSD(m) in the for loop, add

FD(m) = FDf(x0,h);

To add the results of finite differences and a legend in the lower right corner as shown in the figure, change the plotting commands to

semilogx(H(12:20),FD(12:20),'r*',H(12:20),CSD(12:20),'bo')
axis([1e-20 1e-12 -0.5 4.5])
legend('FD','CSD','Location','SouthEast')
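The contrast is easy to reproduce outside Matlab; a Python sketch applying the two formulas to f(x) = sin(x) (our choice of example):

```python
import cmath
import math

x0, h = 1.0, 1e-20   # h far below a unit roundoff in x0

# central difference: x0 + h and x0 - h both round to x0, giving 0
fd = (math.sin(x0 + h) - math.sin(x0 - h)) / (2 * h)

# complex step: no subtraction of nearly equal numbers
csd = cmath.sin(complex(x0, h)).imag / h

print(fd)                        # 0.0
print(abs(csd - math.cos(x0)))   # essentially zero
```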


• Sept. 17–Sept. 21

1. The harmonic series is ∑_{n=1}^∞ 1/n. The partial sums tend to +∞, so the series does not converge. Nevertheless, if you add successive terms in finite precision arithmetic, the partial sums eventually stop changing and you get a finite approximation to the “sum” of this series— explain why this happens. We have argued that adding numbers of the same sign is accomplished with a small relative error. This example makes clear that the conclusion requires more than just adding terms of the same sign. What else must we keep in mind when applying this rule of thumb?

Solution The series is summed by defining S_1 = 1 and then S_{n+1} = S_n + 1/(n + 1) for n = 1, 2, . . .. Because we are adding positive numbers, the partial sums S_n increase and obviously the terms added, 1/(n + 1), decrease. This means that eventually we reach an n for which the term added is smaller than a unit roundoff in S_n. At that time the floating point result

fl(S_{n+1}) = fl(S_n + 1/(n + 1)) = fl(S_n)

and all partial sums thereafter are the same. This is why we get a finite sum for the series in finite precision arithmetic. The bound we derived on the relative error of adding numbers is generally small when the numbers have the same sign. However, the number of terms appeared as a factor in that bound, so it is small only if the number of terms is not too big. Here we are adding infinitely many terms, so the bound we derived is not really applicable.
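Stagnation takes on the order of 10¹⁶ terms in double precision, but in single precision it is quick to observe; a numpy sketch:

```python
import numpy as np

# in single precision the partial sums stop changing once 1/n drops
# below a unit roundoff in the running sum (after about 2 million terms)
one = np.float32(1.0)
s = np.float32(0.0)
n = 0
while True:
    n += 1
    t = s + one / np.float32(n)
    if t == s:          # the term no longer changes the sum
        break
    s = t
print(n, s)             # stagnates with a "sum" near 15.4
```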

2. The solvers of Chapter 3 are far from being software, so you begin this exercise by making a few improvements to one.

(a) Download from the web site for the text the bisect program. Unfortunately, it is a little different from the listing which starts on p. 75. The program computes a root of f(x) = 0 for one of two functions that are hard coded as a single subfunction. An input variable index_f is used to select which is to be evaluated. Remove the subfunction from the file and the input variable index_f from the call list. Make the first argument of the call list a handle for f(x), i.e., change the call list to

root = bisect(f,a0,b0,ep,max_iterate)

The calls to f(x) inside bisect are coded like f(a,index_f). You will have to change them to the form f(a). Let us make max_iterate optional. At the beginning of bisect, insert the code


if nargin < 5
    max_iterate = 10;

end

nargin is a system variable that counts the number of input arguments, so the effect of this change is to set max_iterate to a default value of 10 if the user does not supply a value. Similarly modify the program to use a default value of 1e-6 if ep is not specified. There are two error returns that are coded with disp and return. Change them so that error is used instead. M-Lint says that in the command

while b-c > ep & it_count < max_iterate

it would be better to use && instead of &, so make this change. Give the resulting solver a new name, e.g., modbisect. You are to provide a listing of this solver.

Solution No solution is needed. The internal printing is used for pedagogical purposes, but a quality program would either not print intermediate quantities or would give the user control over whether this is done.
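For readers without Matlab, the modified solver can be sketched in Python, where default arguments play the role of the nargin test (the name modbisect and the loop structure follow the text's solver only loosely):

```python
def modbisect(f, a0, b0, ep=1e-6, max_iterate=10):
    # bisection with default tolerance and iteration limit, analogous
    # to the nargin-based defaults added to the Matlab solver
    a, b = float(a0), float(b0)
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a0, b0]")
    c = (a + b) / 2
    it_count = 0
    while b - c > ep and it_count < max_iterate:
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
        c = (a + b) / 2
        it_count += 1
    return c

import math
print(modbisect(math.cos, 1, 2))   # approaches pi/2 ~ 1.5708
```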

(b) The functions hard coded in bisect are vectorized. Does the solver actually use this?

Solution No. Examination of the solver shows that all calls to the function are made with scalar arguments.

(c) Apply the new solver to calculating a root of tan(x) = 0 for a0 = 1, b0 = 2, and default values for ep and max_iterate. (You can call your program with @tan.) Provide a copy of your program and its output. Can this solver report a “root” where the function is supposed to be zero, but is actually infinite?

Solution The program returns the (slightly edited) output

root = 1.570800781250000

but it has computed an approximation to the simple pole at x = π/2. Indeed, the (slightly edited) result of evaluating the residual is

>> residual = tan(root)
residual = -2.2449e+005

Bisection depends on the signs of function values, so the program locates a change of sign in the interval [1, 2] where the function has a simple pole. The program is intended for odd-order roots of a continuous function f(x). This example shows that you can get into trouble if f(x) has an odd-order pole and the program does not monitor the size of the residual to distinguish a pole from a root.
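A quick Python reconstruction of the failure (plain bisection, no residual check) shows the same behavior:

```python
import math

# bisection tracks only sign changes, so it converges happily to the
# simple pole of tan at pi/2 inside [1, 2]
a, b = 1.0, 2.0
for _ in range(50):
    c = (a + b) / 2
    if math.tan(a) * math.tan(c) <= 0:
        b = c
    else:
        a = c
root = (a + b) / 2
print(root)                  # ~1.5708, i.e. pi/2
print(abs(math.tan(root)))   # huge residual exposes the pole
```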


3. Suppose that f(x) is a smooth function with a root α of multiplicity m > 1, i.e., f(x) = (x − α)^m Q(x) and Q(α) ≠ 0. Argue that the function h(x) = f(x)/f′(x) has α as a root of multiplicity 1, i.e., there is a function R(x) such that h(x) = (x − α)R(x) and R(α) ≠ 0. If an analytical expression for the first derivative is available, this is a good way to deal with the numerical difficulties of multiple roots. You do have to be careful to check that you are computing a root and not a pole because h(x) has poles at points where f′(x) vanishes and f(x) does not.

Solution From f(x) = (x − α)^m Q(x) we find that

f′(x) = m(x − α)^(m−1) Q(x) + (x − α)^m Q′(x)
      = (x − α)^(m−1) [m Q(x) + (x − α)Q′(x)]
      = (x − α)^(m−1) W(x)

Because Q(α) ≠ 0, we have W(α) = mQ(α) ≠ 0. This says that α is a root of f′(x) of multiplicity m − 1. With this expression for the derivative, we have

h(x) = f(x)/f′(x) = (x − α)^m Q(x) / [(x − α)^(m−1) W(x)] = (x − α)R(x)

which says that α is a simple root of h(x) because

R(α) = Q(α)/W(α) = 1/m ≠ 0
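A numerical sketch of the payoff, with a hypothetical f having a triple root at 1 (so Q(x) = x + 2): Newton on f creeps linearly, while Newton on h = f/f′ converges quadratically.

```python
# f has a root of multiplicity 3 at x = 1: f(x) = (x-1)^3 (x+2)
def f(x):   return (x - 1.0)**3 * (x + 2.0)
def fp(x):  return 3*(x - 1.0)**2 * (x + 2.0) + (x - 1.0)**3
def fpp(x): return 6*(x - 1.0)*(x + 2.0) + 6*(x - 1.0)**2

h  = lambda x: f(x) / fp(x)
hp = lambda x: 1.0 - f(x) * fpp(x) / fp(x)**2   # (f/f')' = 1 - f f''/f'^2

def newton(g, gp, x, steps):
    for _ in range(steps):
        x -= g(x) / gp(x)
    return x

print(abs(newton(f, fp, 2.0, 8) - 1.0))   # still a few percent after 8 steps
print(abs(newton(h, hp, 2.0, 4) - 1.0))   # near roundoff after 4 steps
```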

• Sept. 24–Sept. 28

1. On p. 111 it is shown how the behavior of the iterates of Newton’s method when computing a multiple root can be used to determine the multiplicity. Work out the multiplicity of the root for the data of exercise 8, p. 116.

Comment A variant of Newton’s method,

x_{i+1} = x_i − p f(x_i)/f′(x_i)

is quadratically convergent to a root of known multiplicity p.

Solution It is noted on p. 111 that for a root of multiplicity m and iterates x_n, x_{n−1}, . . . , the ratios

(x_n − x_{n−1}) / (x_{n−1} − x_{n−2}) → (m − 1)/m

For the data given, the ratios are

0.7675 0.7548 0.7516 0.7534

Taking into account that m is an integer, it appears that these ratios are converging to 0.75 = (4 − 1)/4, hence m = 4.
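The ratio test is easy to verify on a synthetic example of known multiplicity; here a hypothetical f(x) = (x − 1)⁴, for which m = 4:

```python
# Newton iterates for f(x) = (x-1)^4; successive-difference ratios
# should approach (m-1)/m = 0.75
f  = lambda x: (x - 1.0)**4
fp = lambda x: 4.0 * (x - 1.0)**3

xs = [2.0]
for _ in range(6):
    xs.append(xs[-1] - f(xs[-1]) / fp(xs[-1]))

ratios = [(xs[k] - xs[k-1]) / (xs[k-1] - xs[k-2]) for k in range(2, len(xs))]
print(ratios)   # every ratio is 0.75
```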

2. In his lecture notes on the numerical solution of PDEs, A. Bayliss analyzes the stability of the (2-4) leapfrog method. He shows that if the ratio λ of the step size in time to the step size in space is smaller than λ0 ≈ 0.728, the computation is stable. The constant λ0 = 1/max(|F(θ)|) where

F(θ) = (4/3) sin(θ) − (1/6) sin(2θ)

The maximum is for −π ≤ θ ≤ +π, but because of symmetry, you need consider only 0 ≤ θ ≤ +π. Using fzero, confirm the value stated for λ0. Only a digit or two is meaningful in this application, but the computation is so cheap that you might as well use the default tolerances. You are to use an option that allows you to monitor the computation. Specifically, include the command

options = optimset('Display','iter')

and pass the resulting options structure to fzero. This solver uses bisection for reliability and for speed it uses the secant rule or a higher order method that is also based on interpolation. For a smooth function like this one, you should expect that the solver will resort to the superlinearly convergent methods based on interpolation to get a lot of accuracy in just a few iterations. Explain how you get λ0 and compare your value to that given by Bayliss. Turn in your program and its output. Do the residuals show superlinear convergence? Explain your answer.

Solution As seen in the plot


[Figure: plot of F(θ) for −3 ≤ θ ≤ 3, vertical range −1.5 to 1.5; F(θ) is symmetric and has a unique maximum on [0, π].]

we can locate the maximum of |F(θ)| by computing the root of F′(θ) in [0, π]. The following program does this and then evaluates λ0.

function lambda0
F = @(theta) (4/3)*sin(theta) - (1/6)*sin(2*theta);
% FP is the first derivative of F:
FP = @(theta) (4/3)*cos(theta) - (1/3)*cos(2*theta);
options = optimset('Display','iter');
root = fzero(FP,[0,pi],options)
lambda0 = 1/abs(F(root))

Running this program results in the (slightly edited) output

Func-count    x          f(x)          Procedure
    2         0           1            initial
    3         1.1781      0.745947     interpolation
    4         2.15984    -0.613199     bisection
    5         1.71691     0.125069     interpolation
    6         1.79195     0.00878053   interpolation
    7         1.79754    -9.3796e-005  interpolation
    8         1.79748     1.47393e-007 interpolation
    9         1.79748     2.4542e-012  interpolation
   10         1.79748     0            interpolation

Zero found in the interval [0, 3.14159]

root = 1.7975


lambda0 = 0.7287

This confirms the value stated by Bayliss for λ0. According to the residuals, the convergence is rapid once the program starts using a method based on interpolation. If the convergence were linear, the ratio of successive residuals, f(x_{n+1})/f(x_n), would be approximately constant, and if the convergence were superlinear, the ratio would tend to zero. Obviously the ratio is decreasing rapidly to zero, which shows that the convergence is superlinear. Notice that the residual f(x) of the final iterate is computed to be zero in finite precision arithmetic.
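The same computation can be checked in Python, with fzero replaced by a simple bisection on F′ (enough for this smooth, sign-changing function):

```python
import math

F  = lambda t: (4/3) * math.sin(t) - (1/6) * math.sin(2*t)
FP = lambda t: (4/3) * math.cos(t) - (1/3) * math.cos(2*t)

# F'(0) = 1 > 0 and F'(pi) = -5/3 < 0, so bisection applies on [0, pi]
a, b = 0.0, math.pi
for _ in range(60):
    c = (a + b) / 2
    if FP(a) * FP(c) <= 0:
        b = c
    else:
        a = c
root = (a + b) / 2
lambda0 = 1 / abs(F(root))
print(root, lambda0)   # ~1.79748 and ~0.7287
```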

• Oct. 1–Oct. 5

1. A model of a lightning rod leads to an equation of the form

φ(λ) = x²/(a² + λ) + y²/(b² + λ) + z²/(c² + λ) − 1 = 0

Nominal values for the physical parameters are x = y = z = 50 and a = 1, b = 2, c = 100. A code returns an approximate root σ = 5928.3659. There are some interesting bounds on the errors of approximate roots for polynomials, so we note that this equation can be scaled to get a cubic polynomial equation

P(λ) = −(a² + λ)(b² + λ)(c² + λ) φ(λ) = 0

In general, if

P(λ) = (λ − r_1)(λ − r_2) · · · (λ − r_n) = λⁿ + a_{n−1}λ^(n−1) + . . . + a_0

then

min_j |(σ − r_j)/σ| ≤ n |P(σ) / (σ P′(σ))|

For the problem at hand, n = 3, P(σ) = 1.6261594 × 10⁴, P′(σ) = 8.5162684 × 10⁷, and a_0 = −1.2497000 × 10⁸.

(a) The residual P(σ) seems very large. Evaluate the bound on the relative error to see that σ is actually a pretty good approximation.

Solution According to the bound, σ approximates a root with relative error no more than

3 |1.6261594 × 10⁴ / (5928.3659 × 8.5162684 × 10⁷)| = 9.6627 × 10⁻⁸

which says that the approximation is quite good.


(b) Suppose you were to take a step of Newton’s method from the approximation σ. We have learned how to use this to estimate the error in σ. What is the estimate of the error relative to σ for this problem? How is this related to the bound stated?

Solution The result σ* of a step of Newton’s method from an approximation σ is

σ* = σ − P(σ)/P′(σ)

Because the method is superlinearly convergent when applied to a smooth function like a polynomial, the error in approximating a root r_j near σ is about

|(r_j − σ)/σ| ≈ |(σ* − σ)/σ| = |−P(σ) / (σ P′(σ))|

Obviously the bound on the error in a root of a polynomial is just n times this general approximation of the error. For this problem the estimate is just one third of the bound stated earlier, namely 3.2209 × 10⁻⁸.

(c) The computed root σ is an exact root of a perturbed equation. How much do we have to perturb the a_0 term in a relative sense to get an equation for which σ is an exact root? This is an easy way to see that the solver has performed in a satisfactory manner despite what seems to be a very large residual.

Solution We have

P(σ) = σ³ + a_2σ² + a_1σ + a_0

hence σ is an exact root of the polynomial

σ³ + a_2σ² + a_1σ + ã_0 = 0

where ã_0 = a_0 − P(σ). The relative perturbation in a_0 is

|(ã_0 − a_0)/a_0| = |P(σ)/a_0| = 1.6261594 × 10⁴ / 1.2497000 × 10⁸ = 1.3012 × 10⁻⁴

Despite what seems to be a very large residual, σ is an exact root of a cubic polynomial with coefficients that are not greatly different from those specified.

2. Interpolation frequently becomes awkward in the neighborhood of a singularity of a function. A commonly used technique is to introduce an auxiliary function either additively or multiplicatively. Thus, we might choose a function s(x) which makes the function S(x) = s(x) + f(x) smoother than f(x) itself or a function p(x) which makes P(x) = p(x) f(x) smoother. As an example, consider the task of interpolating csc(x) near x = 0 using the following data taken from a published table:


x = [0.005 0.010 0.015 0.020 0.030];
f = [200.0010 100.0020 66.6692 50.0033 33.3383];

The Laurent series

csc(x) = 1/x + x/6 + (7/360)x³ + . . .

shows that the functions csc(x) − 1/x and x csc(x) are well-behaved near x = 0. From the data given construct a table of one of these two functions and interpolate in the table to approximate csc(10⁻³). The series shows that a cubic would be a good choice, so interpolate at four points. We’ll see that the “best” nodes are those closest to the point where we want to approximate the function. Use them and estimate the relative error of your approximation by comparing it to the result of interpolating at the five closest points. Compare this estimate to the true relative error. You should use either Lint or Nint for interpolating. Along with your numerical results, turn in a listing of your program.

Solution My program uses both schemes and both Lint and Nint as subfunctions in the main function. It is best to use data as close to xint = 10⁻³ as possible. Quite aside from the singularity, we must be concerned about the accuracy of the approximation because we have to extrapolate to approximate the function at xint. Here is part of the program that shows one scheme applied to this data.

function ExOct1_5sol
x = [0.005 0.010 0.015 0.020 0.030];
f = [200.0010 100.0020 66.6692 50.0033 33.3383];
S = f - 1./x;
xint = 1e-3;
ftrue = csc(xint);
S4 = Lint(x(1:4),S(1:4),xint) + 1/xint
S5 = Lint(x(1:5),S(1:5),xint) + 1/xint;
est = (S5 - S4)/S5
err = (ftrue - S4)/ftrue

The desired quantities are output simply by leaving semicolons off the appropriate lines. In slightly edited form this results in

S4 = 999.9994
est = -4.2986e-007
err = 7.7307e-007

If we were to use the fact that the function is odd, we could interpolate at the four points [-0.010 -0.005 0.005 0.010]. For the error estimation we would use the additional point 0.015 as being the closest to xint. We would expect this data set to provide a better result because we are interpolating instead of extrapolating, the data is closer to xint, and xint is near the center of the data. My program does this additional computation and returns (in slightly edited form)

S4 = 1.0000e+003
est = -4.4352e-008
err = -3.3333e-008

Notice that not only is the approximation more accurate, but also the scheme for estimating the error worked better.
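Since Lint is just polynomial interpolation, the first computation can be reproduced with a hand-written Lagrange form (a Python sketch):

```python
import math

x = [0.005, 0.010, 0.015, 0.020, 0.030]
f = [200.0010, 100.0020, 66.6692, 50.0033, 33.3383]
S = [fi - 1.0 / xi for fi, xi in zip(f, x)]    # csc(x) - 1/x, smooth near 0

def lagrange(xs, ys, t):
    # straightforward Lagrange form; fine for a handful of nodes
    total = 0.0
    for i in range(len(xs)):
        w = ys[i]
        for j in range(len(xs)):
            if j != i:
                w *= (t - xs[j]) / (xs[i] - xs[j])
        total += w
    return total

xint = 1e-3
S4 = lagrange(x[:4], S[:4], xint) + 1.0 / xint
ftrue = 1.0 / math.sin(xint)
print(S4, (ftrue - S4) / ftrue)   # ~999.9994 and ~7.73e-7, as in the text
```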

• Oct. 8–Oct. 12

1. To understand better the construction of splines, you are to work out the case of a quadratic spline S(x). With nodes x_0 < x_1 < . . . < x_N and f_i = f(x_i) given at each x_i, this spline is to have the properties

(a) S(x) is a quadratic polynomial on [x_i, x_{i+1}] for i = 0, 1, . . . , N − 1.
(b) S(x_i) = f_i for i = 0, 1, . . . , N.
(c) S(x) and S′(x) are continuous on [x_0, x_N].

For i = 0, 1, . . . , N − 1, write the spline in the form

S(x) = a_i + b_i(x − x_i) + c_i(x − x_i)(x − x_{i+1}) for x_i ≤ x ≤ x_{i+1}

Use the interpolation conditions and continuity of S(x) to derive explicit expressions for all the a_i and b_i. Then use continuity of S′(x) to show that

c_i = (b_i − b_{i−1})/(x_{i+1} − x_i) − c_{i−1}(x_i − x_{i−1})/(x_{i+1} − x_i),   i = 1, 2, . . . , N − 1.

All the conditions have now been satisfied, so c_0 is a free parameter. How is it related to S″(x_0)? One way to get a reasonable value for c_0 is to use a divided difference approximation to this second derivative. Use material from pp. 124–126 to work out a formula for c_0.

Solution On each subinterval the spline is a quadratic that we write in the form

S(x) = a_i + b_i(x − x_i) + c_i(x − x_i)(x − x_{i+1}) for x_i ≤ x ≤ x_{i+1}

We need to find {a_i, b_i, c_i} for i = 0, . . . , N − 1 so that the spline has the other properties we want. Interpolation requires that

f_i = S(x_i) = a_i
f_{i+1} = S(x_{i+1}) = a_i + b_i(x_{i+1} − x_i)

The first equation determines all of the a_i and, using this, the second says that all the b_i are given by

b_i = (f_{i+1} − f_i)/(x_{i+1} − x_i)

Interpolating at both ends of the interval provides a continuous spline. We now consider how to choose the c_i so as to make S′(x) continuous. First we need to write out the form of the derivative on [x_i, x_{i+1}],

S′(x) = b_i + c_i(x − x_{i+1}) + c_i(x − x_i)

The derivative is to be continuous where the quadratic polynomials connect, so for i = 1, . . . , N − 1, we must have

S′(x_i+) = S′(x_i−)
b_i + c_i(x_i − x_{i+1}) = b_{i−1} + c_{i−1}(x_i − x_{i−1})

Solving for c_i, we get

c_i = (b_i − b_{i−1})/(x_{i+1} − x_i) − c_{i−1}(x_i − x_{i−1})/(x_{i+1} − x_i)   (1)

This defines c_i uniquely for i = 1, . . . , N − 1 once we specify c_0. All the conditions have been satisfied for any choice of c_0, so we are free to choose any “reasonable” value. Differentiating the expression for S′(x), we obtain

S″(x) = 2c_i for x_i ≤ x ≤ x_{i+1}

hence c_0 = S″(x)/2 for any x_0 ≤ x ≤ x_1. An obvious way to proceed is to use a divided difference approximation to the second derivative of the underlying function. Equation (4.26) of the text states that f[x_0, x_1, x_2] = f″(ξ)/2 at some intermediate point ξ. Using the expansion of the second divided difference in equation (4.28), we obtain a simple recipe for a suitable value of c_0, namely

c_0 = f_0/((x_0 − x_1)(x_0 − x_2)) + f_1/((x_1 − x_0)(x_1 − x_2)) + f_2/((x_2 − x_0)(x_2 − x_1))

As noted already, the second derivative of the quadratic on [x_i, x_{i+1}] is the constant 2c_i, which makes it easy to derive the “not-a-knot” condition at x_1. Continuity of the second derivative at x_1 implies that the quadratic polynomials are the same on [x_0, x_1] and [x_1, x_2]. Obviously this means that c_0 = c_1. Using (1), we then have

c_0 = (b_1 − b_0)/(x_2 − x_1) − c_0(x_1 − x_0)/(x_2 − x_1)

A little calculation shows that we must take

c_0 = (b_1 − b_0)/(x_2 − x_0)

for the not-a-knot condition.
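The construction above translates directly into code; a Python sketch (function names are ours) that reproduces a quadratic up to roundoff, as the derivation predicts:

```python
def quad_spline_coeffs(x, f):
    # a_i and b_i from interpolation, c_0 from the second divided
    # difference of the first three points, then the recursion (1)
    n = len(x) - 1
    a = list(f[:-1])
    b = [(f[i+1] - f[i]) / (x[i+1] - x[i]) for i in range(n)]
    c0 = (f[0] / ((x[0]-x[1]) * (x[0]-x[2]))
          + f[1] / ((x[1]-x[0]) * (x[1]-x[2]))
          + f[2] / ((x[2]-x[0]) * (x[2]-x[1])))
    c = [c0]
    for i in range(1, n):
        c.append((b[i] - b[i-1]) / (x[i+1] - x[i])
                 - c[i-1] * (x[i] - x[i-1]) / (x[i+1] - x[i]))
    return a, b, c

def quad_spline_eval(x, a, b, c, t):
    # locate the subinterval, then evaluate the local quadratic
    i = 0
    while i < len(a) - 1 and t > x[i+1]:
        i += 1
    return a[i] + b[i]*(t - x[i]) + c[i]*(t - x[i])*(t - x[i+1])

# with data from f(x) = x^2 the spline reproduces x^2 (to roundoff)
xs = [0.0, 1.0, 2.0, 3.0]
a, b, c = quad_spline_coeffs(xs, [t*t for t in xs])
print([quad_spline_eval(xs, a, b, c, t) for t in (0.5, 1.7, 2.9)])
```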

2. Atkinson and Han interpolate the data of Table 4.3 in several different ways that are shown in Figures 4.7–4.10. Fig. 4.10 was computed using the spline function of Matlab. It provides a satisfactory graph, but we should be concerned about using this function with its default not-a-knot end conditions in these circumstances—explain why. Use pchip to interpolate this data. Turn in a listing of your program and a plot of the interpolant and the data. Include legend('PCHIP','data') in your program. By construction this interpolant preserves monotonicity in the data, which is clearly not preserved by the interpolant of Fig. 4.10, but whether this is important will depend on what you know about the underlying function from which the data is drawn.

Solution The interval of interest is split into only six subintervals and two subintervals at each end are used by the default end conditions of spline. Relatively little data is left and spline performs best when there is a lot of data. In particular, monotonicity and convexity follow from convergence and so may not be present when there are few data points. The data of Table 4.3 is difficult not only because there are few points, but also because the values are the same at the ends of two subintervals. As seen in Figure 4.10 the smooth cubic spline has large overshoots so as to fit the data on these subintervals with a curve that has a continuous curvature. Here is a little program for computing and plotting the interpolant of pchip:

function ExOct8_12sol
x = [0 1 2 2.5 3 3.5 4];
y = [2.5 0.5 0.5 1.5 1.5 1.125 0];
xint = linspace(x(1),x(end));
yint = pchip(x,y,xint);
plot(xint,yint,'b',x,y,'ro')
legend('PCHIP','data')

end % ExOct8_12sol

By construction the interpolant of pchip is monotone, but as seen in the figure plotted by this program, it may not have a continuous curvature. It might be remarked that the linear spline of Figure 4.7 is also a monotone interpolant. That is why it resembles the interpolant of pchip, but it is only continuous, whereas the interpolant of pchip is continuous and has a continuous first derivative.


[Figure: the pchip interpolant and the data points on [0, 4], vertical range 0 to 2.5; legend: PCHIP, data.]

• Oct. 15–Oct. 19

1. Download parametric.m and parametric.fig from the class site. parametric.m is based on a program found in the section “Interactive Plotting” of “Help”. To illustrate ginput, a script is given that has a user specify interactively data points in the plane. The program then defines a parametrization, interpolates the data with a cubic spline, and plots a smooth curve through the data points. Unfortunately, the simple parametrization used can result in an unsatisfactory interpolant. To see an example of this, enter the command

open parametric.fig

The blue curve of this figure was computed with the scheme of the“Help” entry. It is a smooth curve through the data points, but zoom-ing in on certain portions shows that it is not all that we might want.You are to implement the scheme that produced the red curve, whichhas a much better qualitative behavior. Your assignment appears atthe beginning of the file parametric.m.Solution The assignment (modified to fit on this page) is

% Define the approximate arclength parameter, interpolate,
% and plot the curve in red at 200 points equally spaced
% in this parameter on the same graph as the simple
% parametrization. (HOLD is already ON and TITLE has
% already been modified.) Create an example which shows
% a dramatic difference. Print off the figure as well as
% zoomed portions of the curve that make the point. Turn
% in the figures along with a listing of your program.

All that is necessary is to add the commands

t = zeros(1,n);
for m = 1:n-1
    t(m+1) = t(m) + sqrt((xy(1,m+1) - xy(1,m))^2 + ...
                         (xy(2,m+1) - xy(2,m))^2);
end
ts = linspace(t(1),t(n),200);
xys = spline(t,xy,ts);
plot(xys(1,:),xys(2,:),'r-');

between the implementation of the simple parametrization and the hold off command. The dramatic example of parametric.fig was created in this way. The essence of the matter is that the simple parametrization treats all points as being at equal distance. If some points are physically much closer than others, the extra length of the simple parametrization leads to oscillations and even loops to use up the extra distance.
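The chord-length parametrization used above can be sketched in Python as well. This is an assumed parallel of the Matlab commands (with hypothetical example data), using scipy's CubicSpline to interpolate both coordinates against the cumulative chord length t:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Points in the plane, deliberately unevenly spaced (hypothetical example data)
xy = np.array([[0.0, 1.0, 1.1, 1.2, 3.0],
               [0.0, 0.0, 0.5, 0.0, 0.0]])

# Cumulative chord length: t(1) = 0, then add the distance between points
t = np.zeros(xy.shape[1])
t[1:] = np.cumsum(np.sqrt(np.sum(np.diff(xy, axis=1)**2, axis=0)))

ts = np.linspace(t[0], t[-1], 200)
spl = CubicSpline(t, xy, axis=1)   # interpolate both coordinates at once
xys = spl(ts)                      # 2 x 200 array of points on the curve

# The parameter is strictly increasing and the curve passes through the data
print(np.all(np.diff(t) > 0), np.allclose(spl(t), xy))
```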

• Oct. 22–Oct. 26 In Chapter 5, study §5.1, §5.2.4, and §5.4 up to Theorem 5.4.2. Read the handout on the trapezoidal rule.

1. Section 5.2.1 of the text and the handout show that when approximating the integral of f(x) over [a, b] with the trapezoidal rule and uniform mesh spacing h = (b − a)/n, the error

E_n^T ≈ −(h²/12) [f′(b) − f′(a)]    (2)

Generally f′(x) is not available, so let us consider how to get a practical estimate of the error. We have learned that by interpolating the function f(x) with a polynomial P(x), we have f′(x) ≈ P′(x). Take this approach to deriving a computable estimate of the error of the trapezoidal rule that uses values of the function at the nodes, i.e., values f(a), f(a + h), f(a + 2h), . . . , f(b − 2h), f(b − h), f(b). You do not need a very accurate approximation of the first derivative because if P′a(a) = f′(a) + O(h^m) and similarly at b, then

E_n^T ≈ −(h²/12) [P′b(b) − P′a(a)] + O(h^(2+m))

which is to say that a first order approximation (m = 1) to the derivative is all that is needed. On the other hand, the approximation (2) is good to terms that are O(h⁴), so it would be reasonable to use a second order approximation (m = 2) to the derivative. For the sake of simplicity, use a first order approximation to the derivative in your derivation of a practical error estimate.

Solution A standard first order difference approximation to the first derivative is obtained from a divided difference in equations (4.17), (4.18) and in a different way in equation (5.76). Still another way to derive the same approximation is to interpolate with a straight line and use its (constant) derivative (the slope of the line). At x = a it is

f′(a) ≈ [f(a + h) − f(a)] / h

With the corresponding approximation at x = b, we have the computable error estimate

E_n^T ≈ −(h²/12) [ (f(b) − f(b − h))/h − (f(a + h) − f(a))/h ]
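This estimate is easy to check numerically. A small Python sketch (assumed names, not part of the assignment) applies the composite trapezoidal rule to f(x) = x³ on [0, 1] and compares the computable estimate with the actual error; for a cubic the higher terms of the Euler–Maclaurin expansion vanish, so the true error is exactly −h²/4 here:

```python
import numpy as np

def trap_with_estimate(f, a, b, n):
    """Composite trapezoidal rule plus the computable error estimate
    built from first order difference approximations to f'(a), f'(b)."""
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    y = f(x)
    T = h * (y[0]/2 + y[1:-1].sum() + y[-1]/2)
    # E ~ -(h^2/12) [f'(b) - f'(a)] with one-sided differences at the ends
    est = -(h**2 / 12) * ((y[-1] - y[-2])/h - (y[1] - y[0])/h)
    return T, est

T, est = trap_with_estimate(lambda x: x**3, 0.0, 1.0, 100)
true_err = 0.25 - T    # exact integral of x^3 on [0, 1] is 1/4
print(T, est, true_err)
```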

2. You are to approximate the integral from 0 to 4 of the function defined by Table 4.3. One method is to fit the data with a (monotonicity-preserving) linear spline as in Fig. 4.7 and integrate it analytically. This amounts to using the trapezoidal rule on each subinterval. Note that the subintervals do not have the same length, so it is convenient to do the integration by applying the trapezoidal rule to each subinterval and adding the results. Another method is to fit the data with the monotonicity-preserving cubic spline of pchip and integrate it analytically. A program, intpchip, that does this is available on the class web site. Compute approximate integrals both ways. Considering how well the two approximations agree, what would be a plausible value for the integral? Provide a listing of your program for computing the two approximations and the output of the program.

Solution After defining the data of the table, the computations can be done with

V = 0;
for k = 1:length(x)-1
    V = V + (x(k+1) - x(k))*(y(k+1) + y(k))/2;
end
fprintf('Integrating a linear spline gives %g.\n',V)
v = intpchip(x,y,0,4);
fprintf('Integrating a cubic spline gives %g.\n',v)

This results in

Integrating a linear spline gives 4.1875.
Integrating a cubic spline gives 4.

The consistency of these results suggests that the integral is about 4.
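The same two computations can be reproduced in Python. The class program intpchip is not available here, so scipy's PchipInterpolator, which implements the same monotone interpolant, is assumed to stand in for it:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Data of Table 4.3
x = np.array([0, 1, 2, 2.5, 3, 3.5, 4])
y = np.array([2.5, 0.5, 0.5, 1.5, 1.5, 1.125, 0])

# Trapezoidal rule on each (unequal) subinterval = integral of the linear spline
V = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2)

# Integral of the monotone cubic interpolant, in the role of intpchip
v = PchipInterpolator(x, y).integrate(0, 4)

print(V, v)   # the linear spline gives exactly 4.1875
```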


• Oct. 29–Nov. 2 In Chapter 5, study §5.3. Take especially good notes in class because much of the material is not covered in the text.

1. Do Exercise 11(a) on p. 231.

Solution We must find w1 and x1 so that

∫_0^1 f(x) √x dx = w1 f(x1)

when f(x) ≡ 1 and f(x) = x. The first equation is 2/3 = w1 · 1, which implies that w1 = 2/3. The second equation is 2/5 = w1 x1, which implies that x1 = 3/5.
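The two moment equations behind this one-point formula can be checked with scipy.integrate.quad (a sketch for verification only, not part of the exercise):

```python
import numpy as np
from scipy.integrate import quad

w1, x1 = 2/3, 3/5

# Moments of the weight function sqrt(x) on [0, 1]
m0, _ = quad(lambda x: np.sqrt(x), 0, 1)       # should be 2/3
m1, _ = quad(lambda x: x * np.sqrt(x), 0, 1)   # should be 2/5

# The one-point rule w1*f(x1) reproduces both moments
print(m0, w1 * 1.0)   # integral of 1*sqrt(x)  vs  the rule for f = 1
print(m1, w1 * x1)    # integral of x*sqrt(x)  vs  the rule for f = x
```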

2. The Sievert integral is a function of two variables defined by

S(x, θ) = ∫_0^θ e^(−x sec(φ)) dφ

According to the Handbook of Mathematical Functions, S(1, π/2) ≈ 0.328286. You are to confirm this using Adapt. The default tolerances are not small enough to do this. Use the error estimate of the program to gain some confidence that your answer is sufficiently accurate. You are to turn in a listing of your program and the values output for the approximation, an estimate of its error, and the number of (vector) function evaluations, nfe. How does sec(φ) behave on the interval of integration? (You might want to look at the plot in the Matlab “Help”.) How does this behavior affect quadrature programs that evaluate the integrand at the ends of the interval? Adapt is not troubled by this because it does not evaluate at the ends, but consider the rest of the integrand—does the behavior of sec(φ) cause the integrand to behave badly on the interval of integration when x = 1? Is underflow an issue when evaluating the integrand? Is nfe consistent with the behavior that you expect of the integrand?

Solution Because all the built-in special functions are vectorized, the computation requires only a couple of lines, e.g.,

format long e
f = @(phi) exp(-sec(phi));
[answer,errest,nfe] = Adapt(f,0,pi/2,0,1e-7)

The format statement causes answer to be displayed to enough digits to confirm the value in the table. The estimated error of about 5×10⁻⁸ supports a belief that the program has produced a sufficiently accurate approximation to check the value in the table. The function sec(φ) is well behaved on the interval except that as φ → π/2, the function goes to +∞. This causes an immediate failure due to overflow in a quadrature program that evaluates at the ends of the interval. Overflow is still possible when using a program like Adapt that does not evaluate at end points, but it will not fail immediately. As φ approaches π/2, the integrand exp(−sec(φ)) goes to 0 very quickly. The integrand is easy to approximate, but underflow is a real possibility if the quadrature program evaluates at points near the right end of the interval. On underflow, Matlab sets quantities to 0, which is appropriate here because the very small integrand contributes almost nothing to the value of the integral. The integrand is very smooth and has no rapid changes of behavior on the interval of integration. Indeed, there is no question about this except at the end and we have seen that the function behaves well there. Accordingly, Adapt finds the task to be easy as seen in the small number of (vector) function evaluations, nfe = 17.
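The class program Adapt is not generally available, but the same check can be made with scipy.integrate.quad, which likewise does not evaluate the integrand at the end points (its Gauss–Kronrod nodes are interior):

```python
import numpy as np
from scipy.integrate import quad

# Sievert integrand for x = 1; near phi = pi/2, sec(phi) -> +inf and the
# integrand underflows harmlessly to 0
f = lambda phi: np.exp(-1.0 / np.cos(phi))

answer, errest = quad(f, 0, np.pi/2, epsabs=0, epsrel=1e-10)
print(answer)   # the Handbook gives S(1, pi/2) ~ 0.328286
```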

• Nov. 5–Nov. 9 In Chapter 6, review §6.1 and §6.2 as needed. Study §6.3, §6.3.1, §6.4, §6.4.2, §6.4.3.

1. For 0 ≤ θ ≤ π, Clausen's integral is the analytical sum of the series

f(θ) = Σ_{k=1}^∞ sin(kθ)/k² = −∫_0^θ log(2 sin(t/2)) dt

You are not asked to evaluate Clausen's integral in this exercise, just explain the difficulties and how they might be addressed. Analyze informally the behavior of the integrand near the left end point to see that it has an integrable singularity there. (Chapter 1 has some series that might be helpful.) How is it possible to compute an approximation with Adapt despite the singularity? Though Adapt deals well enough with this integrand, better practice would be to use the method of subtracting out the singularity. Explain in detail how to apply the method to Clausen's integral.

Solution The series for sin(x) on p. 17 shows that

2 sin(t/2) = t − t³/24 + . . .    (3)

so near the origin, the integrand is approximately log(t). The limit of this function as t → 0 is −∞, but the function is integrable. The fact that the integrand is infinite at one end of the interval does not trouble Adapt because it does not evaluate at the end points. The method of subtracting out the singularity breaks the integral into two parts

−∫_0^θ log(2 sin(t/2)) dt = −∫_0^θ log(t) dt − ∫_0^θ [log(2 sin(t/2)) − log(t)] dt


The first integral on the right has a singular integrand, but it can be evaluated analytically as θ − θ log(θ). The second integral is evaluated numerically without any special difficulty because we have subtracted out the singularity. It is better to use a property of the logarithm to write this integral as

−∫_0^θ log( 2 sin(t/2) / t ) dt

Using (3) we see that

log( 2 sin(t/2) / t ) = log( 1 − t²/24 + . . . )

which makes it obvious that the integrand is well-behaved near the origin and has the finite value log(1) = 0 at t = 0.
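Carrying the method out numerically (not required by the exercise) is straightforward. The sketch below evaluates Clausen's integral at θ = π/2 this way; the known value there is Catalan's constant, 0.9159655942. . . , which serves as a check. The function name and the small-t cutoff are assumptions for the illustration:

```python
import numpy as np
from scipy.integrate import quad

def clausen(theta):
    """Clausen's integral by subtracting out the log singularity."""
    def g(t):
        # log(2 sin(t/2)/t); near t = 0 use log(1 - t^2/24 + ...) ~ -t^2/24
        if t < 1e-8:
            return -t**2 / 24
        return np.log(2 * np.sin(t/2) / t)
    smooth, _ = quad(g, 0, theta)
    # -int_0^theta log(t) dt = theta - theta*log(theta), done analytically
    return theta - theta * np.log(theta) - smooth

print(clausen(np.pi/2))   # Cl2(pi/2) = Catalan's constant = 0.91596559...
```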

2. The text provides a program tridiag that exploits sparsity when solving tridiagonal systems by Gaussian elimination without pivoting, but it does not exploit the speed of built-in functions nor does it provide for pivoting, so you are to write a more effective function. Download sptridiag from the class site. At present it contains just the call list of the function and usage notes—you are to complete the function. On entry you are to get the number of equations as the length of the vector f. You will then form the tridiagonal matrix using spdiags and solve the linear system with the backslash operation. The vectors you work on must be column vectors, but do not assume that the user has input column vectors. In preparation for this exercise, download SparseMatrices and study it. Use your version of sptridiag to do Exercise 10 on p. 294. In addition to solving this linear system, you are to form the matrix A as a sparse matrix and compute the (default Euclidean) norm of the residual of the solution. Include sptridiag as a subfunction in the file with your main program. Print out this file and turn it in along with its output (which is just the norm of the residual).

Solution The function sptridiag provided is completed with the commands

n = length(f);
M = spdiags([a(:) b(:) c(:)], -1:1, n, n);
x = M \ f(:);

Note here several instances of the construct f(:) which makes the input vector into a column vector. The exercise of the text is solved easily with

function ExNov5_9sol
n = 100;
a = ones(n,1);
b = 4*ones(n,1);
c = ones(n,1);
f = ones(n,1);
x = sptridiag(a,b,c,f);
A = spdiags([a b c], -1:1, n, n);
norm(f - A*x)

end % ExNov5_9sol

The sptridiag program is a subfunction in the file ExNov5_9sol.m. The output of this program is 4.9651e-016, making clear that we have an accurate solution.
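An analogous sparse tridiagonal solve in Python uses scipy.sparse, with diags playing the role of spdiags and spsolve the role of backslash. This is a sketch of the same computation, not the class program:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100
a = np.ones(n - 1)    # subdiagonal
b = 4 * np.ones(n)    # diagonal
c = np.ones(n - 1)    # superdiagonal
f = np.ones(n)

A = diags([a, b, c], [-1, 0, 1], format='csc')
x = spsolve(A, f)

print(np.linalg.norm(f - A @ x))   # residual norm near machine precision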

• Nov. 12–Nov. 16 In Chapter 6, study §6.5, §6.5.1, and §6.5.2. Study the Notes on Linear Systems.

1. In §1 of the Notes on Linear Systems verify that the Givens transformation makes a′p,1 = 0. Further verify that for each j,

(a′1,j)² + (a′p,j)² = (a1,j)² + (ap,j)²

which shows that in a certain sense, the Givens transformation does not change the size of the entries in the matrix.

Solution The entry a′p,1 vanishes because

a′p,1 = −s a1,1 + c ap,1 = −(ap,1/d) a1,1 + (a1,1/d) ap,1 = 0

The other result requires a little more work:

(a′1,j)² + (a′p,j)² = (c a1,j + s ap,j)² + (−s a1,j + c ap,j)²
                   = c²(a1,j)² + 2cs a1,j ap,j + s²(ap,j)² + s²(a1,j)² − 2cs a1,j ap,j + c²(ap,j)²
                   = [c² + s²] [(a1,j)² + (ap,j)²]

The desired result now follows from the identity

c² + s² = (a1,1/d)² + (ap,1/d)² = ((a1,1)² + (ap,1)²)/d² = 1
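Both facts are easy to confirm numerically. The sketch below (assumed test code, not from the notes) builds the rotation with c = a1,1/d, s = ap,1/d, d = sqrt(a1,1² + ap,1²) for a random matrix and checks the zeroed entry and the preserved column sizes; indices are 0-based, so row 0 plays the role of row 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
p = 3                                   # row to be combined with row 0

d = np.hypot(A[0, 0], A[p, 0])
c, s = A[0, 0] / d, A[p, 0] / d

Ap = A.copy()
Ap[0, :] = c * A[0, :] + s * A[p, :]    # new first row
Ap[p, :] = -s * A[0, :] + c * A[p, :]   # new row p

print(abs(Ap[p, 0]) < 1e-14)            # a'_{p,1} = 0
print(np.allclose(Ap[0, :]**2 + Ap[p, :]**2,
                  A[0, :]**2 + A[p, :]**2))   # entry sizes preserved
```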

2. This question is based on §9.3 of C.A.J. Fletcher, Computational Techniques for Fluid Dynamics. A model of steady convection and diffusion is provided by the equation uT′(x) − αT″(x) = 0. This is to hold for 0 ≤ x ≤ 1 and the solution is to satisfy boundary conditions T(0) = 0, T(1) = 1. Here u and α are physical constants, the flow velocity and the diffusion coefficient, respectively. Verify that the analytical solution of this model problem is

T(x) = (e^(ux/α) − 1) / (e^(u/α) − 1)


In a finite difference solution of this problem, an integer N is chosen and a mesh spacing ∆ = 1/(N + 1) is defined. We compute Tj ≈ T(j∆) for j = 0, 1, . . . , N + 1. From the boundary conditions we have T0 = T(0) = 0, TN+1 = T(1) = 1. Approximating the derivatives in the differential equation at mesh points j∆ for j = 1, . . . , N by finite differences leads first to

u (Tj+1 − Tj−1)/(2∆) − α (Tj+1 − 2Tj + Tj−1)/∆² = 0

and then to

−(1 + 0.5R) Tj−1 + 2Tj − (1 − 0.5R) Tj+1 = 0

Here R = u∆/α is the cell Reynolds number. Verify that if R < 2, the result on p. 289 guarantees that the matrix is non-singular and Gaussian elimination can be applied without pivoting. Verify that for u/α = 20, choosing N = 20 results in a mesh spacing ∆ that is sufficiently small for the result to be applicable. Solve the system of linear equations with these values using the sptridiag program on the class web site. Keep in mind that the given boundary values affect the right hand side of the equations with j = 1 and j = N. Plot the analytical solution T(x) and the Tj that you compute on the same graph. For this plot, augment the numerical values Tnum that you compute at interior mesh points with the boundary values:

Tnum = [0; Tnum; 1];

You will find that you get a reasonable approximation to T(x) at the mesh points even with this crude mesh. Clearly a fine mesh is needed when convection dominates strongly (u ≫ α) because of a boundary layer at x = 1, but solving tridiagonal systems is so efficient that the approach works well even in difficult circumstances. Turn in a listing of your program and the plot it produces.

Solution For brevity, let us write β = u/α and d = e^β − 1 so that T(x) = (e^(βx) − 1)/d. To verify that this is the solution of the boundary value problem, we first check the boundary conditions: T(0) = (e^0 − 1)/d = 0 and T(1) = (e^β − 1)/d = 1 as required. Now T′(x) = βe^(βx)/d and T″(x) = β²e^(βx)/d. Substituting these expressions into the differential equation we find that

uT′(x) − αT″(x) = u βe^(βx)/d − α β²e^(βx)/d = (βe^(βx)/d) [u − αβ] = 0

The conditions on p. 289 are that for an N×N matrix with aj, cj ≠ 0 for j = 2, . . . , N − 1,

|b1| > |c1| > 0
|bj| ≥ |aj| + |cj|
|bN| > |aN| > 0


If 0 < R < 2, we have 2 > |aj| = |−(1 + 0.5R)| = 1 + 0.5R > 1 and 1 > |cj| = |−(1 − 0.5R)| = 1 − 0.5R > 0. With bj = 2 for all j, the first and last inequalities are obviously true. Further

2 = |bj| ≥ |aj| + |cj| = (1 + 0.5R) + (1 − 0.5R) = 2

With u/α = 20 and ∆ = 1/21, we have R < 1, so the result is applicable. A program that does the specified computations is

function ExNov12_16sol
N = 20;
delta = 1/(N+1);
ua = 20;          % Ratio of flow velocity to diffusion
R = delta * ua;   % Cell Reynolds number
a = -(ones(N,1) + 0.5*R);
b = 2*ones(N,1);
c = -(ones(N,1) - 0.5*R);
f = zeros(N,1);
f(N) = 1 - 0.5*R; % Account for boundary value
Tnum = sptridiag(a,b,c,f);
Tnum = [0; Tnum; 1];
xnum = 0:delta:1;
xtrue = linspace(0,1);
Ttrue = (exp(ua*xtrue) - 1)/(exp(ua) - 1);
plot(xnum,Tnum,'ro',xtrue,Ttrue,'b');
title('Convection-diffusion model')
legend('numerical','analytical','Location','NorthWest')

end % ExNov12_16sol

Of course, the program sptridiag must be available as a subfunction of this file or appear in a separate file with the name sptridiag.m. The output of the main program is
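A Python version of the same computation (with scipy's banded solver standing in for sptridiag; the variable names are assumptions) confirms that even this crude mesh gives a reasonable approximation at the mesh points:

```python
import numpy as np
from scipy.linalg import solve_banded

N = 20
delta = 1 / (N + 1)
ua = 20                    # ratio u/alpha
R = delta * ua             # cell Reynolds number, here 20/21 < 2

# Banded form of the tridiagonal system for the interior unknowns T_1..T_N
ab = np.zeros((3, N))
ab[0, 1:] = -(1 - 0.5*R)   # superdiagonal
ab[1, :] = 2.0             # diagonal
ab[2, :-1] = -(1 + 0.5*R)  # subdiagonal
f = np.zeros(N)
f[-1] = 1 - 0.5*R          # boundary value T_{N+1} = 1 moves to the RHS

Tnum = np.concatenate(([0.0], solve_banded((1, 1), ab, f), [1.0]))
x = np.linspace(0, 1, N + 2)
Ttrue = (np.exp(ua * x) - 1) / (np.exp(ua) - 1)

print(np.abs(Tnum - Ttrue).max())   # modest error despite the boundary layer
```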


[Figure: "Convection-diffusion model" plot of the numerical solution (legend 'numerical') and the analytical solution (legend 'analytical') for 0 ≤ x ≤ 1, 0 ≤ T ≤ 1.]

• Nov. 19–Nov. 23 Holidays

• Nov. 26–Nov. 30 In Chapter 8, review §8.1 as needed. Study §8.2 and §8.3 (but skip §8.3.1 and §8.3.2).

1. The Blasius equation is

d³f/dη³ + f d²f/dη² = 0

Prepare it for numerical solution with one of the Matlab solvers.

Solution If y1 = f, y2 = f′, y3 = f″, then

y′1 = f′ = y2
y′2 = f″ = y3
y′3 = f‴ = −f f″ = −y1 y3

2. Prepare the following system of equations for numerical solution with one of the Matlab solvers

d²u/dx² + (du/dx)(dv/dx) = sin(x)
dv/dx + u + v = cos(x)

Solution Because the second derivative of u is present, we need variables y1 = u and y2 = u′. Only the first derivative of v is present, so we just need y3 = v. Introducing these variables and an equation for y′1 = u′ = y2, the equations become

y′1 = y2
y′2 = sin(x) − y2 y′3
y′3 = cos(x) − y1 − y3

This is not in standard form because y′3 appears in the second equation. We eliminate it using the third equation to get a system,

y′1 = y2
y′2 = sin(x) − y2 (cos(x) − y1 − y3)
y′3 = cos(x) − y1 − y3

in standard form.

3. You should get started on this computational exercise, but it is not due until next Wednesday. The response of a motor controlled by a governor can be modelled by

s′′ + 0.042 s′ + 0.961 s = θ′ + 0.063 θ

u′′ + 0.087 u′ = s′ + 0.025 s

v′ = 0.873 (u − v)
w′ = 0.433 (v − w)
x′ = 0.508 (w − x)
θ′ = −0.396 (x − 47.6)

Here all derivatives are with respect to time t. You are to integrate these equations from t = 0 to t = 500 with initial values s(0) = s′(0) = u′(0) = θ(0) = 0, u(0) = 50, v(0) = w(0) = x(0) = 75 and plot v(t). The motor should approach a constant speed as t → ∞. Work out a constant (steady–state) solution of the differential equations. Turn in your analysis and the constant solution. You should find that the v(t) you compute converges to the value you determine in this way. To show this graphically, plot the v component of the steady–state solution on the same graph as v(t). (The plot routine draws straight lines between data points, so plot([0,500],[3,3]) plots a constant value of 3 for 0 ≤ t ≤ 500.) Turn in your plot and a listing of the program that produces it. You will have to write the equations in standard form. Your program must contain comments that state the correspondence between the original variables and the components of the vector used in the computations. Be careful about the term θ′ in the first equation when you work out an equivalent system in standard form. Use ode23 with default tolerances for the integration.


• Dec. 3–Dec. 5 The class Wednesday, Dec. 5, will be the last class of the semester. In Chapter 8, study §8.5.

– The computational exercise of last week is due Wednesday.

Solution It is almost straightforward to write the equations as a system in standard form. The first equation involves θ′, which is to be eliminated using the last equation. The steady state is found easily: With θ′ = 0 for a steady state, the last equation says that x = 47.6. The preceding equations say that w = x and then that v = w. This provides the limit value v = 47.6 that we want. The third equation goes on to say that u = v. The second equation is more interesting. With u″, u′, s′ all zero for a constant solution, it says that s = 0. Similarly, with s″, s′, θ′ all zero for a steady state in the first equation, the value s = 0 tells us that θ = 0. The following program shows the system in standard form and generates the plot displayed.

function ExDec5sol
y0(1) = 0;  % s
y0(2) = 0;  % s'
y0(3) = 50; % u
y0(4) = 0;  % u'
y0(5) = 75; % v
y0(6) = 75; % w
y0(7) = 75; % x
y0(8) = 0;  % theta

[t,y] = ode23(@odes,[0,500],y0);
plot(t,y(:,5),'b',[0,500],[47.6,47.6],'r')
legend('v(t)','steady state')

end % ExDec5sol

%===Subfunction==========================================
function dydt = odes(t,y)
% y(1) = s, y(2) = s', y(3) = u, y(4) = u',
% y(5) = v, y(6) = w, y(7) = x, y(8) = theta.
dydt = zeros(8,1);
dydt(1) = y(2);
thetap = -0.396*(y(7) - 47.6); % theta'
dydt(2) = -0.042*y(2) - 0.961*y(1) + thetap + 0.063*y(8);
dydt(3) = y(4);
dydt(4) = -0.087*y(4) + y(2) + 0.025*y(1);
dydt(5) = 0.873*(y(3) - y(5));
dydt(6) = 0.433*(y(5) - y(6));
dydt(7) = 0.508*(y(6) - y(7));
dydt(8) = thetap;

end % odes
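The steady-state analysis can be verified directly: substituting s = 0, θ = 0, u = v = w = x = 47.6 into the standard-form right hand side must give the zero vector. A Python sketch of the same right hand side (an assumed translation of the subfunction odes) checks this:

```python
import numpy as np

def odes(t, y):
    # y = [s, s', u, u', v, w, x, theta], as in the Matlab subfunction
    thetap = -0.396 * (y[6] - 47.6)   # theta'
    return np.array([
        y[1],
        -0.042*y[1] - 0.961*y[0] + thetap + 0.063*y[7],
        y[3],
        -0.087*y[3] + y[1] + 0.025*y[0],
        0.873*(y[2] - y[4]),
        0.433*(y[4] - y[5]),
        0.508*(y[5] - y[6]),
        thetap,
    ])

steady = np.array([0, 0, 47.6, 0, 47.6, 47.6, 47.6, 0])
print(np.allclose(odes(0.0, steady), 0))   # all derivatives vanish
```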


[Figure: plot of v(t) (legend 'v(t)') together with the steady-state value 47.6 (legend 'steady state') for 0 ≤ t ≤ 500.]

• Final Examination is Wednesday, Dec. 12, at 3pm.
