
NUMERICAL ANALYSIS WITH · 2013-07-18 · 7 Numerical Differentiation and Integration 377 7.1 Introduction, 377 7.2 Numerical Differentiation by Means of an Expansion into a Taylor



NUMERICAL ANALYSIS WITH APPLICATIONS IN MECHANICS AND ENGINEERING


NUMERICAL ANALYSIS WITH APPLICATIONS IN MECHANICS AND ENGINEERING

PETRE TEODORESCU
NICOLAE-DORU STANESCU
NICOLAE PANDREA


Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Teodorescu, P. P.
Numerical Analysis with Applications in Mechanics and Engineering / Petre Teodorescu, Nicolae-Doru Stanescu, Nicolae Pandrea.
pages cm

ISBN 978-1-118-07750-4 (cloth)
1. Numerical analysis. 2. Engineering mathematics. I. Stanescu, Nicolae-Doru. II. Pandrea, Nicolae. III. Title.

QA297.T456 2013
620.001′518–dc23

2012043659

Printed in the United States of America

ISBN: 9781118077504

10 9 8 7 6 5 4 3 2 1


CONTENTS

Preface xi

1 Errors in Numerical Analysis 1

1.1 Enter Data Errors, 1
1.2 Approximation Errors, 2
1.3 Round-Off Errors, 3
1.4 Propagation of Errors, 3

1.4.1 Addition, 3
1.4.2 Multiplication, 5
1.4.3 Inversion of a Number, 7
1.4.4 Division of Two Numbers, 7
1.4.5 Raising to a Negative Entire Power, 7
1.4.6 Taking the Root of pth Order, 7
1.4.7 Subtraction, 8
1.4.8 Computation of Functions, 8

1.5 Applications, 8
Further Reading, 14

2 Solution of Equations 17

2.1 The Bipartition (Bisection) Method, 17
2.2 The Chord (Secant) Method, 20
2.3 The Tangent Method (Newton), 26
2.4 The Contraction Method, 37
2.5 The Newton–Kantorovich Method, 42
2.6 Numerical Examples, 46
2.7 Applications, 49

Further Reading, 52


3 Solution of Algebraic Equations 55

3.1 Determination of Limits of the Roots of Polynomials, 55
3.2 Separation of Roots, 60
3.3 Lagrange’s Method, 69
3.4 The Lobachevski–Graeffe Method, 72

3.4.1 The Case of Distinct Real Roots, 72
3.4.2 The Case of a Pair of Complex Conjugate Roots, 74
3.4.3 The Case of Two Pairs of Complex Conjugate Roots, 75

3.5 The Bernoulli Method, 76
3.6 The Bierge–Viete Method, 79
3.7 Lin Methods, 79
3.8 Numerical Examples, 82
3.9 Applications, 94

Further Reading, 109

4 Linear Algebra 111

4.1 Calculation of Determinants, 111
4.1.1 Use of Definition, 111
4.1.2 Use of Equivalent Matrices, 112

4.2 Calculation of the Rank, 113
4.3 Norm of a Matrix, 114
4.4 Inversion of Matrices, 123

4.4.1 Direct Inversion, 123
4.4.2 The Gauss–Jordan Method, 124
4.4.3 The Determination of the Inverse Matrix by its Partition, 125
4.4.4 Schur’s Method of Inversion of Matrices, 127
4.4.5 The Iterative Method (Schulz), 128
4.4.6 Inversion by Means of the Characteristic Polynomial, 131
4.4.7 The Frame–Fadeev Method, 131

4.5 Solution of Linear Algebraic Systems of Equations, 132
4.5.1 Cramer’s Rule, 132
4.5.2 Gauss’s Method, 133
4.5.3 The Gauss–Jordan Method, 134
4.5.4 The LU Factorization, 135
4.5.5 The Schur Method of Solving Systems of Linear Equations, 137
4.5.6 The Iteration Method (Jacobi), 142
4.5.7 The Gauss–Seidel Method, 147
4.5.8 The Relaxation Method, 149
4.5.9 The Monte Carlo Method, 150

4.5.10 Infinite Systems of Linear Equations, 152
4.6 Determination of Eigenvalues and Eigenvectors, 153

4.6.1 Introduction, 153
4.6.2 Krylov’s Method, 155
4.6.3 Danilevski’s Method, 157
4.6.4 The Direct Power Method, 160
4.6.5 The Inverse Power Method, 165
4.6.6 The Displacement Method, 166
4.6.7 Leverrier’s Method, 166


4.6.8 The L–R (Left–Right) Method, 166
4.6.9 The Rotation Method, 168

4.7 QR Decomposition, 169
4.8 The Singular Value Decomposition (SVD), 172
4.9 Use of the Least Squares Method in Solving the Linear Overdetermined Systems, 174
4.10 The Pseudo-Inverse of a Matrix, 177
4.11 Solving of the Underdetermined Linear Systems, 178
4.12 Numerical Examples, 178
4.13 Applications, 211

Further Reading, 269

5 Solution of Systems of Nonlinear Equations 273

5.1 The Iteration Method (Jacobi), 273
5.2 Newton’s Method, 275
5.3 The Modified Newton’s Method, 276
5.4 The Newton–Raphson Method, 277
5.5 The Gradient Method, 277
5.6 The Method of Entire Series, 280
5.7 Numerical Example, 281
5.8 Applications, 287

Further Reading, 304

6 Interpolation and Approximation of Functions 307

6.1 Lagrange’s Interpolation Polynomial, 307
6.2 Taylor Polynomials, 311
6.3 Finite Differences: Generalized Power, 312
6.4 Newton’s Interpolation Polynomials, 317
6.5 Central Differences: Gauss’s Formulae, Stirling’s Formula, Bessel’s Formula, Everett’s Formulae, 322
6.6 Divided Differences, 327
6.7 Newton-Type Formula with Divided Differences, 331
6.8 Inverse Interpolation, 332
6.9 Determination of the Roots of an Equation by Inverse Interpolation, 333

6.10 Interpolation by Spline Functions, 335
6.11 Hermite’s Interpolation, 339
6.12 Chebyshev’s Polynomials, 340
6.13 Mini–Max Approximation of Functions, 344
6.14 Almost Mini–Max Approximation of Functions, 345
6.15 Approximation of Functions by Trigonometric Functions (Fourier), 346
6.16 Approximation of Functions by the Least Squares, 352
6.17 Other Methods of Interpolation, 354

6.17.1 Interpolation with Rational Functions, 354
6.17.2 The Method of Least Squares with Rational Functions, 355
6.17.3 Interpolation with Exponentials, 355

6.18 Numerical Examples, 356
6.19 Applications, 363

Further Reading, 374


7 Numerical Differentiation and Integration 377

7.1 Introduction, 377
7.2 Numerical Differentiation by Means of an Expansion into a Taylor Series, 377
7.3 Numerical Differentiation by Means of Interpolation Polynomials, 380
7.4 Introduction to Numerical Integration, 382
7.5 The Newton–Cotes Quadrature Formulae, 384
7.6 The Trapezoid Formula, 386
7.7 Simpson’s Formula, 389
7.8 Euler’s and Gregory’s Formulae, 393
7.9 Romberg’s Formula, 396

7.10 Chebyshev’s Quadrature Formulae, 398
7.11 Legendre’s Polynomials, 400
7.12 Gauss’s Quadrature Formulae, 405
7.13 Orthogonal Polynomials, 406

7.13.1 Legendre Polynomials, 407
7.13.2 Chebyshev Polynomials, 407
7.13.3 Jacobi Polynomials, 408
7.13.4 Hermite Polynomials, 408
7.13.5 Laguerre Polynomials, 409
7.13.6 General Properties of the Orthogonal Polynomials, 410

7.14 Quadrature Formulae of Gauss Type Obtained by Orthogonal Polynomials, 412
7.14.1 Gauss–Jacobi Quadrature Formulae, 413
7.14.2 Gauss–Hermite Quadrature Formulae, 414
7.14.3 Gauss–Laguerre Quadrature Formulae, 415

7.15 Other Quadrature Formulae, 417
7.15.1 Gauss Formulae with Imposed Points, 417
7.15.2 Gauss Formulae in which the Derivatives of the Function Also Appear, 418
7.16 Calculation of Improper Integrals, 420
7.17 Kantorovich’s Method, 422
7.18 The Monte Carlo Method for Calculation of Definite Integrals, 423

7.18.1 The One-Dimensional Case, 423
7.18.2 The Multidimensional Case, 425

7.19 Numerical Examples, 427
7.20 Applications, 435

Further Reading, 447

8 Integration of Ordinary Differential Equations and of Systems of Ordinary Differential Equations 451

8.1 State of the Problem, 451
8.2 Euler’s Method, 454
8.3 Taylor Method, 457
8.4 The Runge–Kutta Methods, 458
8.5 Multistep Methods, 462
8.6 Adams’s Method, 463
8.7 The Adams–Bashforth Methods, 465
8.8 The Adams–Moulton Methods, 467
8.9 Predictor–Corrector Methods, 469

8.9.1 Euler’s Predictor–Corrector Method, 469


8.9.2 Adams’s Predictor–Corrector Methods, 469
8.9.3 Milne’s Fourth-Order Predictor–Corrector Method, 470
8.9.4 Hamming’s Predictor–Corrector Method, 470

8.10 The Linear Equivalence Method (LEM), 471
8.11 Considerations about the Errors, 473
8.12 Numerical Example, 474
8.13 Applications, 480

Further Reading, 525

9 Integration of Partial Differential Equations and of Systems of Partial Differential Equations 529

9.1 Introduction, 529
9.2 Partial Differential Equations of First Order, 529

9.2.1 Numerical Integration by Means of Explicit Schemata, 531
9.2.2 Numerical Integration by Means of Implicit Schemata, 533

9.3 Partial Differential Equations of Second Order, 534
9.4 Partial Differential Equations of Second Order of Elliptic Type, 534
9.5 Partial Differential Equations of Second Order of Parabolic Type, 538
9.6 Partial Differential Equations of Second Order of Hyperbolic Type, 543
9.7 Point Matching Method, 546
9.8 Variational Methods, 547

9.8.1 Ritz’s Method, 549
9.8.2 Galerkin’s Method, 551
9.8.3 Method of the Least Squares, 553

9.9 Numerical Examples, 554
9.10 Applications, 562

Further Reading, 575

10 Optimizations 577

10.1 Introduction, 577
10.2 Minimization Along a Direction, 578

10.2.1 Localization of the Minimum, 579
10.2.2 Determination of the Minimum, 580

10.3 Conjugate Directions, 583
10.4 Powell’s Algorithm, 585
10.5 Methods of Gradient Type, 585

10.5.1 The Gradient Method, 585
10.5.2 The Conjugate Gradient Method, 587
10.5.3 Solution of Systems of Linear Equations by Means of Methods of Gradient Type, 589
10.6 Methods of Newton Type, 590

10.6.1 Newton’s Method, 590
10.6.2 Quasi-Newton Method, 592

10.7 Linear Programming: The Simplex Algorithm, 593
10.7.1 Introduction, 593
10.7.2 Formulation of the Problem of Linear Programming, 595
10.7.3 Geometrical Interpretation, 597
10.7.4 The Primal Simplex Algorithm, 597
10.7.5 The Dual Simplex Algorithm, 599


10.8 Convex Programming, 600
10.9 Numerical Methods for Problems of Convex Programming, 602

10.9.1 Method of Conditional Gradient, 602
10.9.2 Method of Gradient’s Projection, 602
10.9.3 Method of Possible Directions, 603
10.9.4 Method of Penalizing Functions, 603

10.10 Quadratic Programming, 603
10.11 Dynamic Programming, 605
10.12 Pontryagin’s Principle of Maximum, 607
10.13 Problems of Extremum, 609
10.14 Numerical Examples, 611
10.15 Applications, 623

Further Reading, 626

Index 629


PREFACE

In writing this book, it is the authors’ wish to create a bridge between mathematical and technical disciplines, which requires knowledge of strong mathematical tools in the area of numerical analysis. Unlike other books in this area, this interdisciplinary work links the applicative part of numerical methods, where mathematical results are used without understanding their proof, to the theoretical part of these methods, where each statement is rigorously demonstrated.

Each chapter is followed by problems of mechanics, physics, or engineering. Each problem is first stated in its mechanical or technical form. Then the mathematical model is set up, emphasizing the physical magnitudes that play the part of unknown functions and the laws that lead to the mathematical problem. The solution is then obtained by applying the mathematical methods described in the corresponding theoretical presentation. Finally, a mechanical, physical, and technical interpretation of the solution is provided, giving rise to complete knowledge of the studied phenomenon.

The book is organized into 10 chapters. Each of them begins with a theoretical presentation, which is based on practical computation—the “know-how” of the mathematical method—and ends with a range of applications.

The book contains some personal results of the authors, which have been found to be beneficial to readers.

The authors are grateful to Mrs. Eng. Ariadna–Carmen Stan for her valuable help in the presentation of this book. The excellent cooperation from the team of John Wiley & Sons, Hoboken, USA, is gratefully acknowledged.

The prerequisites of this book are courses in elementary analysis and algebra, acquired by a student in a technical university. The book is addressed to a broad audience—to all those interested in using mathematical models and methods in various fields such as mechanics, physics, and civil and mechanical engineering; people involved in teaching, research, or design; as well as students.

Petre Teodorescu
Nicolae-Doru Stanescu
Nicolae Pandrea


1 ERRORS IN NUMERICAL ANALYSIS

In this chapter, we deal with the errors most frequently encountered in numerical analysis, that is, enter data errors, approximation errors, round-off errors, and propagation of errors.

1.1 ENTER DATA ERRORS

Enter data errors appear, usually, if the enter data are obtained from measurements or experiments. In such a case, the errors corresponding to the estimation of the enter data are propagated, by means of the calculation algorithm, to the exit data.

We define in what follows the notion of stability of errors.

Definition 1.1 A calculation process P is stable to errors if, for any ε > 0, there exists δ > 0 such that, for any two sets I1 and I2 of enter data with ‖I1 − I2‖i < δ, the two exit sets S1 and S2, corresponding to I1 and I2, respectively, verify the relation ‖S1 − S2‖e < ε.

Observation 1.1 The two norms ‖·‖i and ‖·‖e of the enter and exit quantities, respectively, which occur in Definition 1.1, depend on the process considered.

Intuitively, according to Definition 1.1, the calculation process is stable if, for small variations of the enter data, we obtain small variations of the exit data.

Hence, we must characterize the stable calculation processes. Let us consider that the calculation process P is characterized by a family {f_k} of functions defined on a set of enter data, with values in a set of exit data. We consider such a vector function of vector variable f_k : D → R^n, where D is a domain in R^m (we suppose that there are m enter data and n exit data).

Definition 1.2 f : D → R^n is a Lipschitz function (has the Lipschitz property) if there exists a constant m > 0 such that ‖f(x) − f(y)‖ < m‖x − y‖ for any x, y ∈ D (the first norm is in R^n and the second one in R^m).

Numerical Analysis with Applications in Mechanics and Engineering, First Edition.Petre Teodorescu, Nicolae-Doru Stanescu, and Nicolae Pandrea.© 2013 The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.


It is easy to see that a calculation process characterized by Lipschitz functions is a stable one. In addition, a function with the Lipschitz property is continuous (even uniformly continuous), but the converse does not hold; for example, the function f : R+ → R+, f(x) = √x, is continuous but it is not Lipschitz. Indeed, let us suppose that f(x) = √x is Lipschitz, hence that there exists a positive constant m > 0 such that

|f(x) − f(y)| < m|x − y|, (∀) x, y ∈ R+. (1.1)

Let us choose x and y such that 0 < y < x < 1/(4m²). Expression (1.1) leads to

√x − √y < m(√x − √y)(√x + √y), (1.2)

from which we get

1 < m(√x + √y). (1.3)

From the choice of x and y, it follows that

√x + √y < √(1/(4m²)) + √(1/(4m²)) = 1/m, (1.4)

so that relations (1.3) and (1.4) lead to

1 < m · (1/m) = 1, (1.5)

which is absurd. Hence, the continuous function f : R+ → R+, f(x) = √x is not a Lipschitz one.
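The failure of the Lipschitz property can also be observed numerically: for f(x) = √x, the difference quotient against y = 0 equals 1/√x, which exceeds any candidate constant m once x is small enough. A minimal sketch in Python (illustrative, not part of the book’s text):

```python
import math

# For f(x) = sqrt(x), the ratio |f(x) - f(0)| / |x - 0| equals 1/sqrt(x),
# so it grows without bound as x -> 0: no Lipschitz constant m can exist.
for x in [1.0, 1e-2, 1e-4, 1e-8]:
    ratio = math.sqrt(x) / x
    print(f"x = {x:8.0e}   |f(x)-f(0)|/|x-0| = {ratio:.0f}")
```

The printed ratios (1, 10, 100, 10000) mirror the contradiction derived in relations (1.2)–(1.5): no single m can dominate all of them.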

1.2 APPROXIMATION ERRORS

Approximation errors must be accepted in the design of the algorithms because of various objective considerations.

Let us determine the limit of a sequence using a computer; it is supposed that the sequence is convergent. Let the sequence {x_n}, n ∈ N, be defined by the relation

x_{n+1} = 1/(1 + x_n²), n ∈ N, x0 ∈ R. (1.6)

We observe that the terms of the sequence are positive, excepting possibly x0. The limit of this sequence, denoted by x, is the positive root of the equation

x = 1/(1 + x²). (1.7)

If we wish to determine x with two exact decimal digits, then we take an arbitrary value of x0, for example x0 = 0, and calculate the successive terms of the sequence {x_n} (Table 1.1).


TABLE 1.1 Calculation of x with Two Exact Decimal Digits

n   x_n      n   x_n      n    x_n      n    x_n
0   0        4   0.6028   8    0.6705   12   0.6804
1   1        5   0.7290   9    0.6899   13   0.6836
2   0.5      6   0.6530   10   0.6775   14   0.6815
3   0.8      7   0.7011   11   0.6854   15   0.6828

We obtain x = 0.68 . . .
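The iteration of Table 1.1 is straightforward to reproduce; its limit is the real root of x³ + x − 1 = 0, that is, x ≈ 0.6823. A short sketch (illustrative, not from the book):

```python
# Iterate x_{n+1} = 1 / (1 + x_n^2) from x_0 = 0, as in Table 1.1.
x = 0.0
for n in range(50):
    x = 1.0 / (1.0 + x * x)

print(f"x = {x:.4f}")               # approximately 0.6823
print(abs(x * (1 + x * x) - 1.0))   # residual of the equation x = 1/(1 + x^2)
```

After about 15 iterations the first two decimals are already fixed, in agreement with the table.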

1.3 ROUND-OFF ERRORS

Round-off errors are due to the mode of representation of the data in the computer. For instance, the number 0.8125 in base 10 is represented in base 2 in the form 0.8125 = 0.1101_2, and the number 0.75 in the form 0.11_2. Let us suppose that we have a computer that works with three significant digits. The sum 0.8125 + 0.75 becomes

1.5625 = 0.1101_2 + 0.11_2 ≈ 0.110_2 + 0.11_2 = 1.100_2 = 1.5. (1.8)

Such errors may also appear because of the choice of inadequate data types in the programs run on the computer.
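The same base-2 effect appears in ordinary double-precision arithmetic: 0.8125 and 0.75 are dyadic fractions and thus exactly representable, while a number such as 0.1 has no finite base-2 representation, so its round-off propagates into sums. A small illustration (Python, not tied to the three-digit machine of the text):

```python
# 0.8125 = 0.1101_2 and 0.75 = 0.11_2 are exact in binary floating point,
# so their sum carries no round-off error at all.
assert 0.8125 + 0.75 == 1.5625

# 0.1 is periodic in base 2, so a round-off error appears and propagates:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```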

1.4 PROPAGATION OF ERRORS

Let us consider the number x and let x̄ be an approximation of it.

Definition 1.3

(i) We call absolute error the expression

E = x − x̄. (1.9)

(ii) We call relative error the expression

R = (x − x̄)/x. (1.10)

1.4.1 Addition

Let x1, x2, . . . , xn be the numbers for which the relative errors are R1, R2, . . . , Rn, while their absolute errors read E1, E2, . . . , En.

The relative error of the sum is

R(∑_{i=1}^{n} xi) = (∑_{i=1}^{n} Ei) / (∑_{i=1}^{n} xi), (1.11)

and we may write the relation

min_{i=1,n} |Ri| ≤ |R(∑_{i=1}^{n} xi)| ≤ max_{i=1,n} |Ri|, (1.12)


that is, the modulus of the relative error of the sum is contained between the lowest and the highest values of the moduli of the relative errors of the component terms.

Thus, if the terms x1, x2, . . . , xn are positive and of the same order of magnitude,

(max_{i=1,n} xi) / (min_{i=1,n} xi) < 10, (1.13)

then we must take the same number of significant digits for each term xi, i = 1,n, the same number of significant digits occurring in the sum too.

If the numbers x1, x2, . . . , xn are of very different magnitudes, then the number of significant digits after the decimal point is given by the greatest number xi (we suppose that xi > 0, i = 1,n). For instance, if we have to add the numbers

x1 = 100.32, x2 = 0.57381, (1.14)

both numbers having five significant digits, then we will round off x2 to two digits (as x1) and write

x1 + x2 = 100.32 + 0.57 = 100.89. (1.15)
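The rounding in (1.15) can be reproduced with exact decimal arithmetic; Python’s decimal module avoids the binary representation issues of floats (a sketch, not part of the book’s text):

```python
from decimal import Decimal

x1 = Decimal("100.32")
x2 = Decimal("0.57381")

# Round x2 to two digits after the decimal point, as x1 has, then add.
x2_rounded = x2.quantize(Decimal("0.01"))
print(x2_rounded)       # 0.57
print(x1 + x2_rounded)  # 100.89
```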

It is observed that addition may result in a compensation of the errors, in the sense that the absolute error of the sum is, in general, smaller than the sum of the absolute errors of the terms.

We consider that the absolute error has a Gauss distribution for each of the terms xi, i = 1,n, given by the distribution density

φ(x) = (1/(σ√(2π))) e^(−x²/(2σ²)), (1.16)

from which we obtain the distribution function

Φ(x) = ∫_{−∞}^{x} φ(t) dt, (1.17)

with the properties

Φ(−∞) = 0, Φ(∞) = 1, Φ(x) ∈ (0, 1), −∞ < x < ∞. (1.18)

The probability that x is contained between −x0 and x0, with x0 > 0, is

P(|x| < x0) = Φ(x0) − Φ(−x0) = ∫_{−x0}^{x0} φ(t) dt = (√2/(σ√π)) ∫_{0}^{x0} e^(−t²/(2σ²)) dt. (1.19)

Because φ(x) is an even function, it follows that the mean value of a variable with a normal Gauss distribution is

x_med = ∫_{−∞}^{∞} x φ(x) dx = 0, (1.20)

while its mean square deviation reads

(x²)_med = ∫_{−∞}^{∞} x² φ(x) dx = σ². (1.21)

Usually, we choose σ as being the root mean square

σ = σ_RMS = √((1/n) ∑_{i=1}^{n} Ri²). (1.22)


1.4.2 Multiplication

Let us consider two numbers x1, x2 for which the relative errors are R1, R2, while the approximations are x̄1, x̄2, respectively. We have

x1x2 = x̄1(1 + R1) x̄2(1 + R2) = x̄1x̄2(1 + R1 + R2 + R1R2). (1.23)

Because R1 and R2 are small, we may consider R1R2 ≈ 0, hence

x1x2 = x̄1x̄2(1 + R1 + R2), (1.24)

so that the relative error of the product of the two numbers reads

R(x1x2) = R1 + R2. (1.25)

Similarly, for n numbers x1, x2, . . . , xn, of relative errors R1, R2, . . . , Rn, we have

R(∏_{i=1}^{n} xi) = ∑_{i=1}^{n} Ri. (1.26)
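Relation (1.26) can be checked numerically: perturbing each factor by a small relative error Ri makes the product’s relative error close to ∑Ri, with a discrepancy only of order RiRj. A sketch with made-up values (illustrative, not from the book):

```python
# Illustrative factors and small relative errors (invented values).
xs = [2.0, 5.0, 3.0]
rs = [1e-4, -2e-4, 5e-5]

exact, approx = 1.0, 1.0
for x, r in zip(xs, rs):
    exact *= x
    approx *= x * (1.0 + r)  # factor x_i affected by relative error r_i

r_product = (approx - exact) / exact
print(r_product, sum(rs))    # nearly equal; the difference is O(R_i * R_j)
```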

Let x be a number that may be written in the form

x = x* × 10^r, 1 ≤ x* < 10, 10^r ≤ x < 10^(r+1), r ∈ Z. (1.27)

If x has n significant digits, the absolute error is

|E| ≤ 10^(r−n+1), (1.28)

while the relative one is

|R| = |E|/x ≤ 10^(r−n+1)/(x* × 10^r) = 10^(−n+1)/x* ≤ 10^(−n+1). (1.29)

If x̄ is the round-off of x at n significant digits, then

|E| ≤ 5 × 10^(r−n), |R| ≤ (5/x) × 10^(r−n). (1.30)

The error of the last significant digit, the nth, is

ε = E/10^(r−n+1) = xR/10^(r−n+1) = x*R × 10^(n−1). (1.31)

Let now x1, x2 be two numbers of relative errors R1, R2, and let R be the relative error of the product x1x2. We have

R = (x1x2 − x̄1x̄2)/(x1x2) = R1 + R2 − R1R2. (1.32)

Moreover, |R| takes its greatest value if R1 and R2 are negative; hence, we may write

|R| ≤ 5(1/x1* + 1/x2*) × 10^(−n) + (25/(x1*x2*)) × 10^(−2n), (1.33)


where the error of the digit on the nth position is

|ε(x1x2)| ≤ ((x1x2)*/2)(1/x1* + 1/x2*) + (5/2)((x1x2)*/(x1*x2*)) × 10^(−n). (1.34)

On the other hand,

(x1x2)* = x1*x2* × 10^(−p), (1.35)

where p = 0 or p = 1, the most disadvantageous case being that described by p = 0. The function

φ(x1*, x2*) = (10^(−p)/2)(x1* + x2* + 5 × 10^(−n)), (1.36)

defined for 1 ≤ x1* < 10, 1 ≤ x2* < 10, 1 ≤ x1*x2* < 10, will attain its maximum on the frontier of the above domain, that is, for x1* = 1, x2* = 10 or x1* = 10, x2* = 1. It follows that

φ(x1*, x2*) ≤ (10^(−p)/2)(11 + 5 × 10^(−n)), (1.37)

and hence

|ε(x1x2)| ≤ 11/2 + (5/2) × 10^(−n) < 6, (1.38)

so that the error of the nth digit of the result will have at the most six units.

If x1 = x2 = x, then the most disadvantageous case is given by

(x*)² = (x²)* = 10, (1.39)

when

|ε(x²)| ≤ 10^(1/2) + (5/2) × 10^(−n) < 4, (1.40)

that is, the nth digit of x² is given with an approximation of four units.

Let now x1, . . . , xm be m numbers; then

|ε(∏_{i=1}^{m} xi)| ≤ ((x1 · · · xm)*/(2 × 5 × 10^(−n))) [∏_{i=1}^{m} (1 + (5 × 10^(−n))/xi*) − 1], (1.41)

the most disadvantageous case being that in which m − 1 of the numbers xi* are equal to 1, while one number is equal to 10. In this case, we have

|ε(∏_{i=1}^{m} xi)| ≤ (5/(5 × 10^(−n))) [(1 + 5 × 10^(−n))^(m−1) (1 + (5 × 10^(−n))/10) − 1]. (1.42)

If all the m numbers are equal, xi = x, i = 1,m, then the most disadvantageous situation appears for (x*)^m = (x^m)* = 10, and hence it follows that

|ε(x^m)| ≤ (5/(5 × 10^(−n))) [(1 + (5 × 10^(−n))/10^(1/m))^m − 1]. (1.43)


1.4.3 Inversion of a Number

Let x be a number, x̄ its approximation, and R its relative error. We may write

1/x = 1/(x̄(1 + R)) = (1/x̄)(1 − R + R² − R³ + · · ·) ≈ (1/x̄)(1 − R), (1.44)

hence

R(1/x) = ((1/x) − (1/x̄))/(1/x̄) ≈ −R, (1.45)

so that the relative error remains the same in modulus. In general,

E(1/x) = −E/x². (1.46)

1.4.4 Division of Two Numbers

We may imagine the division of x1 by x2 as the multiplication of x1 by 1/x2, so that

R(x1/x2) = R(x1) + R(x2); (1.47)

hence, the relative errors are summed up.

1.4.5 Raising to a Negative Entire Power

We may write

R(1/x^m) = R((1/x)(1/x) · · · (1/x)) = ∑_{i=1}^{m} R(1/x) = ∑_{i=1}^{m} R(x) = mR(x), m ∈ N, m ≠ 0, (1.48)

so that the relative errors are summed up.

1.4.6 Taking the Root of pth Order

We have, successively,

x^(1/p) = (x̄(1 + R))^(1/p) = x̄^(1/p)(1 + R)^(1/p)
= x̄^(1/p)[1 + R/p + (1/p)(1/p − 1)(R²/2!) + (1/p)(1/p − 1)(1/p − 2)(R³/3!) + · · ·], (1.49)

R(x^(1/p)) = (x^(1/p) − x̄^(1/p))/x̄^(1/p) ≈ R/p. (1.50)

The maximum error for the nth digit is now obtained for x = 10^((k−m)/m), x* = 1, (x*)^m = 10^(1−m), m = 1/p, k an integer, and is given by

|ε((x*)^(1/p))| ≤ (10^(1−m)/(2 × 5 × 10^(−n)))[(1 + 5 × 10^(−n))^m − 1] = 10^(n−m)[(1 + 5 × 10^(−n))^m − 1]. (1.51)


1.4.7 Subtraction

Subtraction is the most disadvantageous operation if the result is small with respect to the minuend and the subtrahend.

Let us consider the subtraction 20.003 − 19.998, in which the first four digits of each number are known with precision; concerning the fifth digit, we can say that it is determined with a precision of 1 unit. It follows that for 20.003 the relative error is

R1 ≤ 10^(−3)/20.003 < 5 × 10^(−5), (1.52)

while for 19.998 the relative error is

R2 ≤ 10^(−3)/19.998 < 5.1 × 10^(−5). (1.53)

The result of the subtraction operation is 5 × 10^(−3), while its last digit may be wrong by two units, so that the relative error of the difference is

R = (2 × 10^(−3))/(5 × 10^(−3)) = 400 × 10^(−3), (1.54)

that is, a relative error that is approximately 8000 times greater than R1 or R2.

Hence the rule: a difference of two nearly equal quantities must be calculated directly, without previously calculating each of the two quantities.
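The 8000-fold amplification above is easy to reproduce: perturb the fifth digit of each operand by one unit in opposite directions and compare the relative error of the difference with that of the operands. A sketch (illustrative, not from the book):

```python
# Operands of the text, and versions whose fifth digit is off by one unit.
a, b = 20.003, 19.998
a_pert, b_pert = 20.004, 19.997  # worst-case perturbations

diff = a - b                  # about 5e-3
diff_pert = a_pert - b_pert   # about 7e-3

rel_err_operand = 0.001 / a                  # about 5e-5
rel_err_diff = abs(diff_pert - diff) / diff  # 0.4
print(rel_err_diff / rel_err_operand)        # roughly 8000
```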

1.4.8 Computation of Functions

Starting from Taylor’s relation

f(x) − f(x̄) = (x − x̄)f′(ξ), (1.55)

where ξ is a point situated between x and x̄, it follows that the absolute error is

|E(f)| ≤ |E| sup_{ξ∈Int(x,x̄)} |f′(ξ)|, (1.56)

while the relative error reads

|R(f)| ≤ (|E|/|f(x̄)|) sup_{ξ∈Int(x,x̄)} |f′(ξ)|, (1.57)

where Int(x, x̄) denotes the real interval of ends x and x̄.
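Bound (1.56) can be verified for a concrete choice, say f(x) = e^x with x = 1 and x̄ = 1.001 (illustrative values, not from the text); since f′ = f is increasing, the supremum in (1.56) is attained at the right end of Int(x, x̄):

```python
import math

x = 1.0        # exact value
x_bar = 1.001  # approximation, so |E| = 1e-3
E = abs(x - x_bar)

# |E(f)| = |f(x) - f(x_bar)| must not exceed |E| * sup |f'| on [x, x_bar].
actual = abs(math.exp(x) - math.exp(x_bar))
bound = E * math.exp(max(x, x_bar))  # sup of f' = exp at the right end
print(actual, bound)
assert actual <= bound
```

The mean-value point ξ of (1.55) lies strictly inside the interval, which is why the bound is slightly larger than the actual error.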

1.5 APPLICATIONS

Problem 1.1

Let us consider the sequence of integrals

I_n = ∫_0^1 x^n e^x dx, n ∈ N. (1.58)

(i) Determine a recurrence formula for {I_n}, n ∈ N.


Solution: To calculate I_n, n ≥ 1, we use integration by parts and have

I_n = ∫_0^1 x^n e^x dx = x^n e^x |_0^1 − n ∫_0^1 x^(n−1) e^x dx = e − nI_{n−1}. (1.59)

(ii) Show that lim_{n→∞} I_n exists.

Solution: For x ∈ [0, 1] we have

x^(n+1) e^x ≤ x^n e^x, (1.60)

hence I_{n+1} ≤ I_n for any n ∈ N. It follows that {I_n}, n ∈ N, is a decreasing sequence of real numbers. On the other hand,

x^n e^x ≥ 0, x ∈ [0, 1], n ∈ N, (1.61)

so that {I_n}, n ∈ N, is a sequence of positive real numbers. We get

0 ≤ · · · ≤ I_{n+1} ≤ I_n ≤ · · · ≤ I1 ≤ I0, (1.62)

so that {I_n}, n ∈ N, is convergent and, moreover,

0 ≤ lim_{n→∞} I_n ≤ I0 = ∫_0^1 e^x dx = e − 1. (1.63)

(iii) Calculate I13.

Solution: To calculate the integral we have two methods.

Method 1.

I0 = ∫_0^1 e^x dx = e^x |_0^1 = e − 1, (1.64)
I1 = e − 1 · I0 = 1, (1.65)
I2 = e − 2I1 = e − 2, (1.66)
I3 = e − 3I2 = 6 − 2e, (1.67)
I4 = e − 4I3 = 9e − 24, (1.68)
I5 = e − 5I4 = 120 − 44e, (1.69)
I6 = e − 6I5 = 265e − 720, (1.70)
I7 = e − 7I6 = 5040 − 1854e, (1.71)
I8 = e − 8I7 = 14833e − 40320, (1.72)
I9 = e − 9I8 = 362880 − 133496e, (1.73)
I10 = e − 10I9 = 1334961e − 3628800, (1.74)
I11 = e − 11I10 = 39916800 − 14684570e, (1.75)
I12 = e − 12I11 = 176214841e − 479001600, (1.76)
I13 = e − 13I12 = 6227020800 − 2290792932e. (1.77)

It follows that

I13 = 0.1798. (1.78)


Method 2. In this case, we replace the computed values directly at each step, thus obtaining

I_0 = e − 1 = 1.718281828, (1.79)

I_1 = e − 1 · I_0 = 1, (1.80)

I_2 = e − 2I_1 = 0.718281828, (1.81)

I_3 = e − 3I_2 = 0.563436344, (1.82)

I_4 = e − 4I_3 = 0.464536452, (1.83)

I_5 = e − 5I_4 = 0.395599568, (1.84)

I_6 = e − 6I_5 = 0.344684420, (1.85)

I_7 = e − 7I_6 = 0.305490888, (1.86)

I_8 = e − 8I_7 = 0.274354724, (1.87)

I_9 = e − 9I_8 = 0.249089312, (1.88)

I_10 = e − 10I_9 = 0.227388708, (1.89)

I_11 = e − 11I_10 = 0.217006040, (1.90)

I_12 = e − 12I_11 = 0.114209348, (1.91)

I_13 = e − 13I_12 = 1.233560304. (1.92)

We observe that, because of the propagation of errors, the second method cannot be used to calculate I_n for n ≥ 12.

Problem 1.2

Let the sequences {x_n}_{n∈ℕ} and {y_n}_{n∈ℕ} be defined recursively by

x_{n+1} = (1/2)(x_n + 0.5/x_n), x_0 = 1, (1.93)

y_{n+1} = y_n − λ(y_n² − 0.5), y_0 = 1. (1.94)

(i) Calculate x_1, x_2, . . . , x_7.

Solution: We have, successively,

x_1 = (1/2)(x_0 + 0.5/x_0) = 3/4, (1.95)

x_2 = (1/2)(x_1 + 0.5/x_1) = 17/24, (1.96)

x_3 = (1/2)(x_2 + 0.5/x_2) = 577/816, (1.97)

x_4 = (1/2)(x_3 + 0.5/x_3) = 0.707107, (1.98)

x_5 = (1/2)(x_4 + 0.5/x_4) = 0.707107, (1.99)

x_6 = (1/2)(x_5 + 0.5/x_5) = 0.707107, (1.100)

x_7 = (1/2)(x_6 + 0.5/x_6) = 0.707107. (1.101)

(ii) Calculate y_1, y_2, . . . , y_7 for λ = 0.49.

Solution: We obtain the values

y_1 = y_0 − 0.49(y_0² − 0.5) = 0.755, (1.102)

y_2 = y_1 − 0.49(y_1² − 0.5) = 0.720688, (1.103)

y_3 = y_2 − 0.49(y_2² − 0.5) = 0.711186, (1.104)

y_4 = y_3 − 0.49(y_3² − 0.5) = 0.708351, (1.105)

y_5 = y_4 − 0.49(y_4² − 0.5) = 0.707488, (1.106)

y_6 = y_5 − 0.49(y_5² − 0.5) = 0.707224, (1.107)

y_7 = y_6 − 0.49(y_6² − 0.5) = 0.707143. (1.108)

(iii) Calculate y1, y2, . . . , y7 for λ = 49.

Solution: In this case, we obtain the values

y_1 = y_0 − 49(y_0² − 0.5) = −23.5, (1.109)

y_2 = y_1 − 49(y_1² − 0.5) = −27059.25, (1.110)

y_3 = y_2 − 49(y_2² − 0.5) = −3.587797 × 10^10, (1.111)

y_4 = y_3 − 49(y_3² − 0.5) = −6.307422 × 10^22, (1.112)

y_5 = y_4 − 49(y_4² − 0.5) = −1.949395 × 10^47, (1.113)

y_6 = y_5 − 49(y_5² − 0.5) = −1.862070 × 10^96, (1.114)

y_7 = y_6 − 49(y_6² − 0.5) = −1.698979 × 10^194. (1.115)

We observe that the sequences {x_n}_{n∈ℕ} and {y_n}_{n∈ℕ} converge to √0.5 = 0.707107 for λ = 0.49, while the sequence {y_n}_{n∈ℕ} is divergent for λ = 49.
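Both recursions solve y² = 0.5 and are easy to experiment with. The sketch below (an illustration, not code from the book) iterates Newton's scheme x_{n+1} = (1/2)(x_n + 0.5/x_n), which converges quadratically, and the relaxation scheme y_{n+1} = y_n − λ(y_n² − 0.5), which near √0.5 multiplies the error by roughly 1 − 2λ√0.5 per step, so it converges slowly for λ = 0.49 and explodes for λ = 49.

```python
import math

def newton_sqrt_half(n_steps):
    """x_{n+1} = (x_n + 0.5/x_n)/2, x_0 = 1 (Newton's method for x^2 = 0.5)."""
    x = 1.0
    for _ in range(n_steps):
        x = 0.5 * (x + 0.5 / x)
    return x

def relaxation(lam, n_steps):
    """y_{n+1} = y_n - lam*(y_n^2 - 0.5), y_0 = 1."""
    y = 1.0
    for _ in range(n_steps):
        y = y - lam * (y * y - 0.5)
    return y

print(newton_sqrt_half(7))    # ~0.70710678... = sqrt(0.5), machine precision
print(relaxation(0.49, 7))    # ~0.707143, still approaching sqrt(0.5)
print(relaxation(49.0, 7))    # huge negative value (~ -1.7e194): divergence
```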

Problem 1.3

If the independent random variables X_1 and X_2 have the probability densities p_1(x) and p_2(x), respectively, then the random variable X_1 + X_2 has the probability density

p(x) = ∫_{−∞}^{∞} p_1(x − s) p_2(s) ds. (1.116)

(i) Show that if the random variables X_1 and X_2 have normal distributions with zero mean and standard deviations σ_1 and σ_2, respectively, then the random variable X_1 + X_2 has a normal distribution.

Solution: From equation (1.116) we have

p(x) = ∫_{−∞}^{∞} [1/(σ_1√(2π))] e^{−(x−s)²/(2σ_1²)} · [1/(σ_2√(2π))] e^{−s²/(2σ_2²)} ds = [1/(2πσ_1σ_2)] ∫_{−∞}^{∞} e^{−(x−s)²/(2σ_1²)} e^{−s²/(2σ_2²)} ds. (1.117)


We seek real values λ_1, λ_2, and a such that

(x − s)²/(2σ_1²) + s²/(2σ_2²) = x²/(2λ_1²) + (s − ax)²/(2λ_2²), (1.118)

from which, identifying the coefficients of x², s², and xs,

x²/σ_1² = x²/λ_1² + a²x²/λ_2², s²/σ_1² + s²/σ_2² = s²/λ_2², −2xs/σ_1² = −2axs/λ_2², (1.119)

with the solution

λ_2² = σ_1²σ_2²/(σ_1² + σ_2²), a = σ_2²/(σ_1² + σ_2²), λ_1² = σ_1² + σ_2². (1.120)

We make the change of variable

s − ax = √2 λ_2 t, ds = √2 λ_2 dt, (1.121)

and expression (1.117) becomes

p(x) = [1/(2πσ_1σ_2)] ∫_{−∞}^{∞} e^{−x²/(2λ_1²)} e^{−t²} √2 λ_2 dt = [1/(√(σ_1² + σ_2²)√(2π))] e^{−x²/(2(σ_1² + σ_2²))}. (1.122)

(ii) Calculate the mean and the standard deviation of the random variable X_1 + X_2 from point (i).

Solution: We calculate

∫_{−∞}^{∞} x p(x) dx = [1/(√(σ_1² + σ_2²)√(2π))] ∫_{−∞}^{∞} x e^{−x²/(2(σ_1² + σ_2²))} dx = 0, (1.123)

∫_{−∞}^{∞} x² p(x) dx = [1/(√(σ_1² + σ_2²)√(2π))] ∫_{−∞}^{∞} x² e^{−x²/(2(σ_1² + σ_2²))} dx

= (−[√(σ_1² + σ_2²)/√(2π)] x e^{−x²/(2(σ_1² + σ_2²))}) |_{−∞}^{∞} + [√(σ_1² + σ_2²)/√(2π)] ∫_{−∞}^{∞} e^{−x²/(2(σ_1² + σ_2²))} dx

= σ_1² + σ_2². (1.124)

Hence the mean of X_1 + X_2 is 0 and its standard deviation is √(σ_1² + σ_2²).
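The result can also be checked empirically. The sketch below (illustrative, with arbitrarily chosen values σ_1 = 1 and σ_2 = 2 that are not from the text) samples the two normal variables with Python's standard library and confirms that the sample mean of X_1 + X_2 is near 0 and its sample standard deviation near √(σ_1² + σ_2²).

```python
import math
import random

random.seed(12345)                 # reproducible run
sigma1, sigma2 = 1.0, 2.0          # example values, not from the text
n = 200_000

# Draw X1 + X2 with X1 ~ N(0, sigma1^2), X2 ~ N(0, sigma2^2) independent.
samples = [random.gauss(0.0, sigma1) + random.gauss(0.0, sigma2)
           for _ in range(n)]

mean = sum(samples) / n
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))

print(f"sample mean ~ {mean:.4f} (theory: 0)")
print(f"sample std  ~ {std:.4f} (theory: {math.sqrt(sigma1**2 + sigma2**2):.4f})")
```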

(iii) Let X be a random variable with a normal distribution, a zero mean, and standard deviation σ. Calculate

I_1 = [1/(σ√(2π))] ∫_{−∞}^{∞} e^{−x²/(2σ²)} dx (1.125)

and

I_2 = [1/(σ√(2π))] ∫_{−σ}^{σ} e^{−x²/(2σ²)} dx. (1.126)

Solution: Through the change of variable

x = σ√2 u, dx = σ√2 du, (1.127)

it follows that

I_1 = [1/(σ√(2π))] ∫_{−∞}^{∞} e^{−u²} σ√2 du = 1. (1.128)

Similarly, we have

I_2 = (1/√π) ∫_{−σ}^{σ} e^{−u²} du. (1.129)

On the other hand,

∫_{−σ}^{σ} e^{−u²} du = √(∫_0^{2π} ∫_0^{σ} e^{−ρ²} ρ dρ dθ) = √(π(1 − e^{−σ²})), (1.130)

so that

I_2 = √(1 − e^{−σ²}). (1.131)

(iv) Let 0 < ε < 1 be fixed. Determine R > 0 so that

(1/√π) ∫_{−R}^{R} e^{−x²} dx < ε. (1.132)

Solution: Proceeding as at point (iii), it follows that

∫_{−R}^{R} e^{−x²} dx = √(π(1 − e^{−R²})), (1.133)

so that we obtain the inequality

√(1 − e^{−R²}) < ε, (1.134)

from which

R < √(−ln(1 − ε²)). (1.135)
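Formula (1.135) is immediate to evaluate. The fragment below (a small illustration, not from the book) tabulates R = √(−ln(1 − ε²)) for a few values of ε and checks that at this R the quantity √(1 − e^{−R²}) from (1.134) equals ε exactly.

```python
import math

def r_bound(eps):
    """R from (1.135), the boundary value for which sqrt(1 - exp(-R^2)) = eps."""
    return math.sqrt(-math.log(1.0 - eps * eps))

for eps in (0.1, 0.5, 0.9):
    R = r_bound(eps)
    check = math.sqrt(1.0 - math.exp(-R * R))   # recovers eps
    print(f"eps = {eps}:  R = {R:.6f},  sqrt(1 - e^(-R^2)) = {check:.6f}")
```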

(v) Calculate

I_3 = [1/(σ√(2π))] ∫_{−R}^{R} e^{−x²/(2σ²)} dx (1.136)

and

I_4 = [1/(σ√(2π))] ∫_{R}^{∞} e^{−x²/(2σ²)} dx. (1.137)

Solution: We again make the change of variable (1.127) and obtain

I_3 = (1/√π) ∫_{−R/(σ√2)}^{R/(σ√2)} e^{−u²} du. (1.138)

Points (iii) and (iv) show that

∫_{−A}^{A} e^{−x²} dx = √(π(1 − e^{−A²})), A > 0; (1.139)

hence, it follows that

I_3 = √(1 − e^{−R²/(2σ²)}). (1.140)


On the other hand, we have seen that I_1 = 1, and we may write

I_1 = [1/(σ√(2π))] (2 ∫_{R}^{∞} e^{−x²/(2σ²)} dx + ∫_{−R}^{R} e^{−x²/(2σ²)} dx) = 2I_4 + I_3. (1.141)

Immediately, it follows that

I_4 = (I_1 − I_3)/2 = (1 − √(1 − e^{−R²/(2σ²)}))/2. (1.142)

(vi) Let X_1 and X_2 be two random variables with a normal distribution, a zero mean, and standard deviation σ. Determine the probability density of the random variable X_1 + X_2, as well as its mean and standard deviation.

Solution: This is a particular case of points (i) and (ii); hence, we obtain

p(x) = [1/(2σ√π)] e^{−x²/(4σ²)}, (1.143)

that is, X_1 + X_2 is a normal random variable with zero mean and standard deviation σ√2.

(vii) Let N_1 and N_2 be numbers estimated with errors ε_1 and ε_2, respectively, considered to be independent random variables with normal distribution, zero mean, and standard deviation σ. Calculate the probability that the error of the sum N_1 + N_2 is less than a value ε > 0.

Solution: The requested probability is given by

I = ∫_{−∞}^{ε} [1/(2σ√π)] e^{−x²/(4σ²)} dx = ∫_{−∞}^{−ε} [1/(2σ√π)] e^{−x²/(4σ²)} dx + ∫_{−ε}^{ε} [1/(2σ√π)] e^{−x²/(4σ²)} dx. (1.144)

Taking into account the previous results, we obtain

∫_{−∞}^{−ε} [1/(2σ√π)] e^{−x²/(4σ²)} dx = (1 − √(1 − e^{−ε²/(4σ²)}))/2, (1.145)

∫_{−ε}^{ε} [1/(2σ√π)] e^{−x²/(4σ²)} dx = √(1 − e^{−ε²/(4σ²)}), (1.146)

so that

I = (1/2)(1 + √(1 − e^{−ε²/(4σ²)})). (1.147)
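Formula (1.147) turns directly into a function of ε and σ. The sketch below (illustrative; the sample values of ε and σ are assumptions, not from the book) evaluates this probability and shows that it grows from 1/2 toward 1 as ε increases.

```python
import math

def prob_error_below(eps, sigma):
    """I from (1.147): probability that the error of N1 + N2 is below eps,
    when each individual error is normal with zero mean and std sigma."""
    return 0.5 * (1.0 + math.sqrt(1.0 - math.exp(-eps * eps / (4.0 * sigma * sigma))))

sigma = 0.01                       # example standard deviation (assumed value)
for eps in (0.005, 0.01, 0.02, 0.05):
    print(f"eps = {eps}: I = {prob_error_below(eps, sigma):.4f}")
```

As ε → 0 the probability tends to 1/2 (the one-sided integral (1.144) runs from −∞ to ε), and as ε grows it tends to 1.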

FURTHER READING

Acton FS (1990). Numerical Methods that Work. 4th ed. Washington: Mathematical Association of America.

Ackleh AS, Allen EJ, Hearfott RB, Seshaiyer P (2009). Classical and Modern Numerical Analysis: Theory, Methods and Practice. Boca Raton: CRC Press.

Atkinson KE (1989). An Introduction to Numerical Analysis. 2nd ed. New York: John Wiley & Sons, Inc.

Atkinson KE (2003). Elementary Numerical Analysis. 2nd ed. New York: John Wiley & Sons, Inc.

Bakhvalov N (1976). Méthodes Numériques. Moscou: Éditions Mir (in French).

Berbente C, Mitran S, Zancu S (1997). Metode Numerice. Bucuresti: Editura Tehnica (in Romanian).

Burden RL, Faires JD (2009). Numerical Analysis. 9th ed. Boston: Brooks/Cole.

Chapra SC (1996). Applied Numerical Methods with MATLAB for Engineers and Scientists. Boston: McGraw-Hill.

Cheney EW, Kincaid DR (1997). Numerical Mathematics and Computing. 6th ed. Belmont: Thomson.

Dahlquist G, Björck Å (1974). Numerical Methods. Englewood Cliffs: Prentice Hall.

Demidovitch B, Maron I (1973). Éléments de Calcul Numérique. Moscou: Éditions Mir (in French).

Epperson JF (2007). An Introduction to Numerical Methods and Analysis. Hoboken: John Wiley & Sons, Inc.

Gautschi W (1997). Numerical Analysis: An Introduction. Boston: Birkhäuser.

Greenbaum A, Chartier TP (2012). Numerical Methods: Design, Analysis, and Computer Implementation of Algorithms. Princeton: Princeton University Press.

Hamming RW (1987). Numerical Methods for Scientists and Engineers. 2nd ed. New York: Dover Publications.

Hamming RW (2012). Introduction to Applied Numerical Analysis. New York: Dover Publications.

Heinbockel JH (2006). Numerical Methods for Scientific Computing. Victoria: Trafford Publishing.

Higham NJ (2002). Accuracy and Stability of Numerical Algorithms. 2nd ed. Philadelphia: SIAM.

Hildebrand FB (1987). Introduction to Numerical Analysis. 2nd ed. New York: Dover Publications.

Hoffman JD (1992). Numerical Methods for Engineers and Scientists. New York: McGraw-Hill.

Kharab A, Guenther RB (2011). An Introduction to Numerical Methods: A MATLAB Approach. 3rd ed. Boca Raton: CRC Press.

Krîlov AN (1957). Lectii de Calcule prin Aproximatii. Bucuresti: Editura Tehnica (in Romanian).

Kunz KS (1957). Numerical Analysis. New York: McGraw-Hill.

Levine L (1964). Methods for Solving Engineering Problems Using Analog Computers. New York: McGraw-Hill.

Marinescu G (1974). Analiza Numerica. Bucuresti: Editura Academiei Romane (in Romanian).

Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007). Numerical Recipes: The Art of Scientific Computing. 3rd ed. Cambridge: Cambridge University Press.

Quarteroni A, Sacco R, Saleri F (2010). Numerical Mathematics. 2nd ed. Berlin: Springer-Verlag.

Ralston A, Rabinowitz P (2001). A First Course in Numerical Analysis. 2nd ed. New York: Dover Publications.

Ridgway Scott L (2011). Numerical Analysis. Princeton: Princeton University Press.

Sauer T (2011). Numerical Analysis. 2nd ed. London: Pearson.

Simionescu I, Dranga M, Moise V (1995). Metode Numerice în Tehnica. Aplicatii în FORTRAN. Bucuresti: Editura Tehnica (in Romanian).

Stanescu ND (2007). Metode Numerice. Bucuresti: Editura Didactica si Pedagogica (in Romanian).

Stoer J, Bulirsch R (2010). Introduction to Numerical Analysis. 3rd ed. New York: Springer-Verlag.
