Improved Newton’s Method for Solving Nonlinear Equations


DEPARTMENT OF MATHEMATICS

University of Sri Jayewardenepura


University of Sri Jayewardenepura

Faculty of Graduate Studies

Department of Mathematics

IMPROVED NEWTON'S METHOD FOR SOLVING NONLINEAR EQUATIONS

A Thesis

By

T.G.I. FERNANDO

Submitted in Partial Fulfilment of the Requirements

for the Degree of

Master of Science in

Industrial Mathematics

I grant the University of Sri Jayewardenepura the nonexclusive right to use this work for the University's own purposes and to make single copies of the work available to the public on a not-for-profit basis if copies are not otherwise available.

(T.G.I. Fernando)


We approve the Master's degree in Industrial Mathematics thesis of

T.G.I. Fernando.

Date:


Dr. Sunethra Weerakoon

Principal Supervisor

Co-ordinator

M. Sc. in Industrial Mathematics

Department of Mathematics

University of Sri Jayewardenepura.

Dr. G.K. Watugala

Department of Mechanical Engineering

University of Moratuwa.

Ms. G.S. Makalanda

Co-ordinator of Computer Centre

Department of Mathematics

University of Sri Jayewardenepura.


ABSTRACT

An iterative scheme is introduced improving Newton's method, which is widely used for solving nonlinear equations. The method is developed for both functions of one variable and functions of two variables.

The proposed scheme replaces the rectangular approximation of the indefinite integral involved in Newton's method by a trapezium. It is shown that the order of convergence of the new method is at least three for functions of one variable. Computational results overwhelmingly support this theory, and the computational order of convergence is even more than three for certain functions.

The algorithms constructed were implemented using the high-level computer language Turbo Pascal (Ver. 7).

Key words: Convergence, Newton's method, Improved Newton's method, Nonlinear

equations, Root finding, Order of convergence, Iterative methods


CONTENTS

Abstract

Chapter 1  Introduction

Chapter 2  Preliminaries
  2.1  Functions of One Variable
  2.2  Functions of Two Variables

Chapter 3  Numerical schemes for functions of one variable
  3.1  Newton's Method
  3.2  Improved Newton's Method
  3.3  Third Order Convergent Methods
    3.3.1  Chebyshev Method
    3.3.2  Multipoint Iteration Methods
    3.3.3  Parabolic Extension of Newton's Method


Chapter 4  Numerical schemes for functions of two variables
  4.1  Newton's Method
  4.2  Improved Newton's Method

Chapter 5  Analysis of Convergence of Improved Newton's Method for functions of one variable
  5.1  Second Order Convergence
  5.2  Third Order Convergence

Chapter 6  Computational Results and Discussion

Appendix A (Real world problems)

Appendix B

References


ACKNOWLEDGEMENT

I am deeply indebted to my supervisors Dr. Sunethra Weerakoon and Ms. G.S. Makalanda of the Department of Mathematics, University of Sri Jayewardenepura, and Dr. G.K. Watugala of the Department of Mechanical Engineering, University of Moratuwa, for providing me invaluable guidance, helpful suggestions and encouragement at every stage of the research process.

I also thank Prof. M.K. Jain of the Department of Mathematics, Faculty of Science, University of Mauritius, Reduit, Mauritius, and Prof. S.R.K. Iyengar, Head of the Mathematics Department, Indian Institute of Technology, New Delhi, for providing me proofs of third order convergence of Multipoint Iterative Methods.

I wish to pay my gratitude to the staff of the Department of Mathematics, University of Sri Jayewardenepura for encouraging me throughout the period of research work.

Finally, I wish to express my sincere appreciation to my loving parents and my friend Mr. H.K.G. De Z. Amarasekara, who gave me valuable assistance to make this endeavour a success.


Chapter 1

INTRODUCTION


One topic which has always been of paramount importance in numerical analysis is that of approximating roots of nonlinear equations in one variable, be they algebraic or transcendental. According to the great French mathematician Galois, even for some polynomials of degree n ≥ 5, no closed-form solutions can be found using integers and the operations +, −, ×, ÷, exponentiation, and taking second through nth roots. Even though closed-form solutions for polynomials of degree n = 3, 4 are available, they are very complex and not as easy to memorise as the quadratic formula. Thus, one can only give numerical approximations for this type of problem. In practical situations we do not require the exact root; we need only a solution that is sufficiently close to the true solution. Newton's method, which approximates the root of a nonlinear equation in one variable using the value of the function and its derivative in an iterative fashion, is probably the best known and most widely used algorithm, and it converges to the root quadratically. In other words, after some iterations, this process approximately doubles the number of correct decimal places, or significant digits, at each iteration.

In our high school education and at undergraduate level we learnt how to solve systems of linear equations in two or more variables. But in most practical problems we must deal with systems of nonlinear equations. Solving these systems analytically is impossible in most cases. Newton's method can be readily extended to solve systems of nonlinear equations numerically, and it uses the function


(of several variables) and its Jacobian at each iteration. It is known that Newton's

method converges to the root quadratically [3] even for functions of several variables.

In this study, we suggest an improvement to the iterations of Newton's method

at the expense of one additional first derivative evaluation of the function for the one

dimensional case. Furthermore, we have extended the suggested method to systems of

nonlinear equations in two variables. Derivation of Newton's method involves an

indefinite integral of the derivative of the function to be solved and the relevant area is

approximated by a rectangle. In the proposed method, we approximate this indefinite

integral by a trapezium instead of a rectangle.

It is shown that the suggested method converges to the root and that the order of convergence is at least 3 in a neighbourhood of the root whenever the first, second and third derivatives of the function exist in a neighbourhood of the root; i.e., our method approximately trebles the number of significant digits after some iterations.

Computational results overwhelmingly support this theory and the computational order

of convergence is even more than three for certain functions. Further, even in the

absence of the second derivative and higher derivatives, the second order convergence

of the new method can be guaranteed.

Furthermore, we discuss some of the existing 3rd order convergent methods

and compare these methods with the new method.


Even though we did not prove that our method is third order for systems of

nonlinear equations in two variables, the computational results suggest that our

method is of order three.


Chapter 2

PRELIMINARIES

(2.1) Functions of One Variable

Definition (2.1.1) :

A function f is Lipschitz continuous with constant γ in a set A, written f ∈ Lip_γ(A), if

  | f(x) − f(y) | ≤ γ | x − y |  for all x, y ∈ A   (2.1.1)

Lemma (2.1.1):

For an open interval D, let f : D → ℝ and let f′ ∈ Lip_γ(D). Then for any x, y ∈ D,

  | f(y) − f(x) − f′(x)(y − x) | ≤ γ | y − x |²   (2.1.2)

Proof:

  | f(y) − f(x) − f′(x)(y − x) |
  = | f′(ξ)(y − x) − f′(x)(y − x) |   (by the Mean Value Theorem, for some ξ between x and y)
  = | y − x | | f′(ξ) − f′(x) |
  ≤ | y − x | γ | ξ − x |   (by Lipschitz continuity)
  ≤ γ | y − x |²   (since ξ lies between x and y)  □

Lemma (2.1.2) [3]:

For an open interval D, let f : D → ℝ and let f′ ∈ Lip_γ(D). Then for any x, y ∈ D,

  | f(y) − f(x) − f′(x)(y − x) | ≤ (γ/2) | y − x |²   (2.1.3)


Proof:

  f(y) − f(x) − f′(x)(y − x) = ∫_{x}^{y} [ f′(z) − f′(x) ] dz

Let z = x + (y − x)t; then dz = (y − x) dt. Substituting these in the integral gives:

  f(y) − f(x) − f′(x)(y − x) = ∫₀¹ [ f′(x + (y − x)t) − f′(x) ] (y − x) dt

Lipschitz continuity of f′ together with the triangle inequality gives:

  | f(y) − f(x) − f′(x)(y − x) | ≤ | y − x | ∫₀¹ γ t | y − x | dt = (γ/2) | y − x |²  □

Definition (2.1.2) [3]:

Let α ∈ ℝ and xₙ ∈ ℝ, n = 0, 1, 2, .... Then the sequence {xₙ} = {x₀, x₁, x₂, ...} is said to converge to α if

  lim_{n→∞} | xₙ − α | = 0.

If, in addition, there exist a constant c ≥ 0, an integer n₀ ≥ 0 and p ≥ 0 such that for all n > n₀,

  | xₙ₊₁ − α | ≤ c | xₙ − α |ᵖ,   (2.1.4)

then {xₙ} is said to converge to α with q-order at least p. If p = 2 or 3, the convergence is said to be q-quadratic or q-cubic, respectively.


If, for some sequence {cₙ} that converges to 0,

  | xₙ₊₁ − α | ≤ cₙ | xₙ − α |²,   (2.1.5)

then {xₙ} is said to converge q-super-quadratically to α.

Definition (2.1.3):

Let α be a root of the function f(x) and suppose that xₙ₊₁, xₙ and xₙ₋₁ are consecutive iterates close to the root α. Then the computational order of convergence p can be approximated by:

  p ≈ ln( | xₙ₊₁ − α | / | xₙ − α | ) / ln( | xₙ − α | / | xₙ₋₁ − α | )
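As an illustration of how this estimate is computed in practice, the following minimal Python sketch evaluates p from three consecutive iterates and the root (Python is used here purely for illustration; the thesis's own programs were written in Turbo Pascal, and the function name is ours):

```python
import math

def computational_order(x_prev, x_curr, x_next, alpha):
    """Approximate the computational order of convergence from three
    consecutive iterates x_{n-1}, x_n, x_{n+1} and the known root alpha."""
    e_prev = abs(x_prev - alpha)   # |x_{n-1} - alpha|
    e_curr = abs(x_curr - alpha)   # |x_n - alpha|
    e_next = abs(x_next - alpha)   # |x_{n+1} - alpha|
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)
```

For Newton iterates of f(x) = x² − 2 starting at 1.5, this estimate comes out close to 2, consistent with quadratic convergence.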

Theorem (2.1.1) - Intermediate Value Theorem:

Suppose that f : [a,b] → ℝ is continuous. Then f has the intermediate value property on [a,b]. That is, if k is any value between f(a) and f(b) [i.e. f(a) < k < f(b) or f(b) < k < f(a)], then there exists c ∈ (a,b) s.t. f(c) = k.

Definition (2.1.4) - Machine Precision [3]:

Usually numerical algorithms will depend on the machine precision. It is important, therefore, to characterise machine precision in such a way that discussions and computer programs can be reasonably independent of any particular machine. The concept commonly used is machine epsilon, abbreviated macheps; it is defined as the smallest positive number τ such that 1 + τ > 1 on the computer in question.
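The definition suggests a direct way of estimating macheps at run time by repeated halving. A minimal Python sketch (illustrative only; the thesis worked in Turbo Pascal):

```python
def machine_epsilon():
    """Return the smallest power of two eps for which 1 + eps > 1
    still holds in the machine's floating-point arithmetic."""
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:   # halve until 1 + eps/2 rounds to 1
        eps /= 2.0
    return eps
```

On IEEE-754 double precision hardware this returns about 2.22 × 10⁻¹⁶.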


Definition (2.1.5) - Stopping Criteria [3]:

Since there may be no computer representable α such that f(α) = 0, we must expect our algorithms to provide only approximate solutions, or to stop once the last two iterates stay in virtually the same place. So we adopt the following stopping criteria for computer programs:

(i)  | f(xₙ₊₁) | < √(macheps)

(ii) | xₙ₊₁ − xₙ | < √(macheps)

(2.2) Functions of Two Variables

Definition (2.2.1) :

Let D ⊂ ℝ². A function f of two variables is a rule that assigns to each ordered pair (x,y) in D a unique real number denoted by f(x,y). The set D is the domain of f and its range is the set of values that f takes on, that is, {f(x,y) | (x,y) ∈ D}. We often write z = f(x,y) to make explicit the value taken on by f at the general point (x,y). The variables x and y are independent variables and z is the dependent variable.

Definition (2.2.2) :

If f is a function of two variables with domain D, the graph of f is the set

  S = {(x,y,z) ∈ ℝ³ | z = f(x,y), (x,y) ∈ D}

Definition (2.2.3):

The level curves or contour curves of a function f of two variables are the curves with equations f(x,y) = k, where k is a constant (in the range of f).


Definition (2.2.4) :

If f is a function of two variables, its partial derivatives with respect to x and y are the functions fx and fy defined by

  fx(x,y) = lim_{h→0} [ f(x+h, y) − f(x,y) ] / h

  fy(x,y) = lim_{h→0} [ f(x, y+h) − f(x,y) ] / h

To give a geometric interpretation of partial derivatives, note that the equation z = f(x,y) represents a surface S (the graph of f). If f(a,b) = c, then the point P(a,b,c) lies on S. The vertical plane y = b intersects S in a curve C₁ (in other words, C₁ is the trace of S in the plane y = b). Likewise, the vertical plane x = a intersects S in a curve C₂. Both of the curves C₁ and C₂ pass through the point P (see Figure 2.2.1). Notice that the curve C₁ is the graph of the function g(x) = f(x,b), so the slope of its tangent T₁ at P is g′(a) = fx(a,b). The curve C₂ is the graph of the function G(y) = f(a,y), so the slope of its tangent T₂ at P is G′(b) = fy(a,b).

Thus the partial derivatives fx(a,b) and fy(a,b) can be interpreted geometrically as the slopes of the tangent lines at P(a,b,c) to the traces C₁ and C₂ of S in the planes y = b and x = a.

Figure (2.2.1): the surface z = f(x,y) with the traces C₁ and C₂ and their tangent lines at P

Definition (2.2.5) :

Suppose a surface S has equation z = f(x,y), where f has continuous first partial derivatives, and let P(a,b,c) be a point on S. As in the previous definition, let C₁ and C₂ be the curves obtained by intersecting the vertical planes y = b and x = a with the surface S. Then the point P lies on both C₁ and C₂. Let T₁ and T₂ be the tangent lines to the curves C₁ and C₂ at the point P. Then the tangent plane to the surface S at the point P is defined to be the plane that contains both of the tangent lines T₁ and T₂ (see Figure 2.2.1).

It can be shown that if C is any other curve that lies on the surface S and passes through P, then its tangent line at P also lies in the tangent plane. Therefore, we


can think of the tangent plane to S at P as consisting of all possible tangent lines at P to curves that lie on S and pass through P. The tangent plane at P is the plane that most closely approximates the surface S near the point P.

It can be shown that the equation of the tangent plane to the surface z = f(x,y) at the point P(a,b,c) is

  z − c = fx(a,b)(x − a) + fy(a,b)(y − b)   (2.2.1)

Definition (2.2.6) [3]:

A continuous function f : ℝ² → ℝ is said to be continuously differentiable at x = (x,y)ᵀ ∈ ℝ² if fx(x,y) and fy(x,y) exist and are continuous. The gradient of f at (x,y) is then defined as

  ∇f(x) = ∇f(x,y) = [ fx(x,y), fy(x,y) ]ᵀ

Note (2.2.1) :

We denote the open and closed line segments connecting x, x′ ∈ ℝ² by (x, x′) and [x, x′] respectively, and D ⊂ ℝ² is called a convex set if, for every x, x′ ∈ D, [x, x′] ⊂ D.

Lemma (2.2.1) [3]:

Let f : ℝ² → ℝ be continuously differentiable in an open convex set D ⊂ ℝ². Then, for x = (x,y)ᵀ ∈ D and any nonzero perturbation p = (p₁, p₂)ᵀ ∈ ℝ², the directional derivative of f at x in the direction of p, defined by

  D_p f(x) = D_p f(x,y) = lim_{h→0} [ f(x + h p₁, y + h p₂) − f(x,y) ] / h


exists and is equal to ∇f(x)ᵀ · p. For any x, x + p ∈ D,

  f(x + p) = f(x) + ∫₀¹ ∇f(x + tp)ᵀ · p dt   (2.2.2)

and there exists z ∈ (x, x + p) such that

  f(x + p) = f(x) + ∇f(z)ᵀ · p   (2.2.3)

[This is the Mean Value Theorem for functions of several variables.]

Proof:

We simply parameterise f along the line through x and x + p as a function of one variable,

  g : ℝ → ℝ,  g(t) = f(x + tp) = f(x + tp₁, y + tp₂),

and apply calculus of one variable to g. Differentiating g with respect to t,

  g′(t) = ∇f(x + tp)ᵀ · p   (2.2.4)

Then by the fundamental theorem of calculus, or Newton's theorem,

  g(1) = g(0) + ∫₀¹ g′(t) dt

which, by the definition of g and (2.2.4), is equivalent to

  f(x + p) = f(x) + ∫₀¹ ∇f(x + tp)ᵀ · p dt

and proves (2.2.2). Finally, by the mean value theorem for functions of one variable,

  g(1) = g(0) + g′(ξ),  ξ ∈ (0, 1)


which, by the definition of g and (2.2.4), is equivalent to

  f(x + p) = f(x) + ∇f(x + ξp)ᵀ · p,  ξ ∈ (0, 1)

and proves (2.2.3).  □

Definition (2.2.7) [3]:

A continuously differentiable function f : ℝ² → ℝ is said to be twice continuously differentiable at x ∈ ℝ² if (∂²f/∂xᵢ∂xⱼ)(x) exists and is continuous, 1 ≤ i, j ≤ 2. The Hessian of f at x is then defined as the 2 × 2 matrix whose (i,j)th element is (∂²f/∂xᵢ∂xⱼ)(x).

Clairaut's Theorem (2.2.1):

Suppose f is defined on D ⊂ ℝ² that contains the point (a,b). If the functions fxy and fyx are both continuous on D, then

  fxy(a,b) = fyx(a,b)

Definition (2.2.8) [3]:

A continuous function F : ℝ² → ℝ² is continuously differentiable at x ∈ ℝ² if each component function f(x,y) and g(x,y) is continuously differentiable at x. The derivative of F at x is sometimes called the Jacobian (matrix) of F at x, and its transpose is sometimes called the gradient of F at x.


The common notations are:

  F′(x) = J(x) = [ fx(x,y)  fy(x,y) ]
                 [ gx(x,y)  gy(x,y) ]

Definition (2.2.9):

Let α be a solution of the system of nonlinear equations

  f(x) = 0
  g(x) = 0

and suppose that xₙ₊₁, xₙ and xₙ₋₁ are consecutive iterates close to the root α. Then the computational order of convergence can be approximated by:

  p ≈ ln( ‖xₙ₊₁ − α‖ / ‖xₙ − α‖ ) / ln( ‖xₙ − α‖ / ‖xₙ₋₁ − α‖ )

where ‖ · ‖ is the infinity norm.


Chapter 3

NUMERICAL SCHEMES FOR

FUNCTIONS OF ONE VARIABLE

(3.1) Newton's Method [NM]:

Newton's algorithm to approximate the root α of the nonlinear equation f(x) = 0 is to start with an initial approximation x*₀ sufficiently close to α and to use the one point iteration scheme

  x*ₙ₊₁ = x*ₙ − f(x*ₙ) / f′(x*ₙ),  n = 0, 1, 2, ...   (3.1.1)

where x*ₙ is the nth iterate.

It is important to understand how Newton's method is constructed. At each iteration step we construct a local linear model of our function f(x) at the point x*ₙ and solve for the root (x*ₙ₊₁) of the local model. In Newton's method [Figure (3.1.1)], this local linear model is the tangent drawn to the function f(x) at the current point x*ₙ.

The local linear model at x*ₙ is:

  Mₙ(x) = f(x*ₙ) + f′(x*ₙ)(x − x*ₙ)   (3.1.2)

This local linear model can be interpreted [3] in another way. From Newton's Theorem:

  f(x) = f(x*ₙ) + ∫_{x*ₙ}^{x} f′(λ) dλ   (3.1.3)


In Newton's method, the indefinite integral involved in (3.1.3) is approximated by the rectangle ABCD [see Figure (3.1.2)],

  i.e.  ∫_{x*ₙ}^{x} f′(λ) dλ ≈ f′(x*ₙ)(x − x*ₙ)   (3.1.4)

which results in the model given in (3.1.2).

Figure (3.1.1): Newton's iterative step

Figure (3.1.2): Approximating the area under f′(λ) by the rectangle ABCD
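The scheme (3.1.1), together with the stopping criteria of Chapter 2, can be sketched as follows (a minimal Python version for illustration; the thesis's implementations were in Turbo Pascal, and the function names are ours):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n)
    until the residual or the step falls below the tolerance."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        x_new = x - fx / fprime(x)       # root of the local linear model
        if abs(fx) < tol or abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# example: the root of f(x) = x**2 - 2 near x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```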


(3.2) Improved Newton's Method [INM]:

From Newton's Theorem:

  f(x) = f(xₙ) + ∫_{xₙ}^{x} f′(λ) dλ   (3.2.1)

In the proposed scheme, we approximate the indefinite integral involved in (3.2.1) by the trapezium ABED,

  i.e.  ∫_{xₙ}^{x} f′(λ) dλ ≈ (1/2)(x − xₙ)[ f′(xₙ) + f′(x) ]   (3.2.2)

Figure (3.2.1): Approximating the area under f′(λ) by the trapezium ABED

Thus the local model equivalent to (3.1.2) is:

  Mₙ(x) = f(xₙ) + (1/2)(x − xₙ)[ f′(xₙ) + f′(x) ]   (3.2.3)

Note that, just like the linear model for Newton's method, this nonlinear model and the derivative of the model agree with the function f(x) and the derivative of the function f′(x) respectively when x = xₙ. In addition to these properties, the second derivatives of the model and the function agree at the current iterate x = xₙ. Note that this property does not hold for the local model for Newton's method; i.e., the model


matches the value of the slope, f′(xₙ), of the function, as well as its curvature in terms of f″(xₙ). The resultant model is a nonlinear curve tangential to the function f(x) at x = xₙ.

We take the next iterative point as the root of the local model (3.2.3):

  Mₙ(xₙ₊₁) = 0

i.e.  f(xₙ) + (1/2)(xₙ₊₁ − xₙ)[ f′(xₙ) + f′(xₙ₊₁) ] = 0

  xₙ₊₁ = xₙ − 2f(xₙ) / [ f′(xₙ) + f′(xₙ₊₁) ]

Obviously, this is an implicit scheme, which requires the derivative of the function at the (n+1)th iterative step in order to calculate the (n+1)th iterate itself. We overcome this difficulty by making use of the Newton iterative step to compute the (n+1)th iterate on the right hand side.

Thus the resulting new scheme, which is very appropriately named the Improved Newton's Method (INM), is:

  xₙ₊₁ = xₙ − 2f(xₙ) / [ f′(xₙ) + f′(x*ₙ₊₁) ],  n = 0, 1, 2, ...   (3.2.4)

  where  x*ₙ₊₁ = xₙ − f(xₙ) / f′(xₙ)
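The two-step structure above (a Newton predictor, then a corrector that averages the derivative) can be transcribed directly; a Python sketch for illustration (the thesis used Turbo Pascal, and the function name is ours):

```python
def improved_newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Improved Newton's Method: a Newton predictor x*_{n+1},
    then a corrector that averages f' at x_n and x*_{n+1}."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        x_star = x - fx / fprime(x)                          # Newton predictor
        x_new = x - 2.0 * fx / (fprime(x) + fprime(x_star))  # trapezium corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each iteration costs one function and two first derivative evaluations, i.e. one extra derivative evaluation compared with Newton's method.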

(3.3) Third Order Convergent Methods:

(3.3.1) Chebyshev Method [5] [CM]:

Chebyshev method uses a second degree Taylor polynomial as the local model of the

function near the point x = x n .


Thus, the local model for the Chebyshev method is:

  Mₙ(x) = f(xₙ) + f′(xₙ)(x − xₙ) + (1/2) f″(xₙ)(x − xₙ)²

The next iterative value (xₙ₊₁) is taken as the root of this local model. Replacing (xₙ₊₁ − xₙ) on the right hand side by −f(xₙ)/f′(xₙ) [by use of Newton's method], we get the Chebyshev method:

  xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) − (1/2) [ f(xₙ)/f′(xₙ) ]² [ f″(xₙ)/f′(xₙ) ],  n = 0, 1, 2, ...   (3.3.1)

(3.3.2) Multipoint Iteration Methods [5]:

It is possible to modify the Chebyshev method and obtain third order iterative methods which do not require the evaluation of the second derivative. We give below two multipoint iterative methods.

(i) [MPM1]

  xₙ₊₁ = xₙ − f(xₙ) / f′(x*ₙ₊₁),  n = 0, 1, 2, ...   (3.3.2)

  where  x*ₙ₊₁ = xₙ − (1/2) f(xₙ) / f′(xₙ)

This method requires one function and two first derivative evaluations per iteration.

(ii) [MPM2]

  xₙ₊₁ = xₙ − [ f(xₙ) + f(x*ₙ₊₁) ] / f′(xₙ),  n = 0, 1, 2, ...   (3.3.3)

  where  x*ₙ₊₁ = xₙ − f(xₙ) / f′(xₙ)

This method requires two function and one first derivative evaluations per iteration.
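Both multipoint schemes can be sketched in a few lines of Python (illustrative; function names are ours):

```python
def mpm1(f, f1, x0, tol=1e-10, max_iter=50):
    """Multipoint method (i): evaluate f' at the half Newton step."""
    x = x0
    for _ in range(max_iter):
        x_half = x - 0.5 * f(x) / f1(x)       # half Newton step
        x_new = x - f(x) / f1(x_half)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def mpm2(f, f1, x0, tol=1e-10, max_iter=50):
    """Multipoint method (ii): correct the full Newton step with a
    second function evaluation, reusing f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_star = x - f(x) / f1(x)             # full Newton step
        x_new = x - (f(x) + f(x_star)) / f1(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```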


(3.3.3) Parabolic Extension of Newton's Method [4] [PENM]:

This method involves the construction of a parabola through the point (xₙ, f(xₙ)) which also matches the value of the slope, f′(xₙ), of the function, as well as its curvature in terms of f″(xₙ). The desired quadratic is precisely the second degree Taylor polynomial

  P(x) = f(xₙ) + f′(xₙ)(x − xₙ) + (1/2) f″(xₙ)(x − xₙ)²

The desired root α of f(x) should lie near a root xₙ₊₁ of P(x). Solving for (xₙ₊₁ − xₙ) by the quadratic formula gives

  xₙ₊₁ = xₙ + [ −f′(xₙ) ± { f′(xₙ)² − 2 f(xₙ) f″(xₙ) }^(1/2) ] / f″(xₙ)   (3.3.4)

First there is the ambiguity of the '±' sign. Here, the sign opposite to that of the term in front of it is taken (i.e. taking the sign of f′(xₙ)), so that this adjustment term is forced to become zero. It can be handled automatically by dividing numerator and denominator by f′(xₙ) (assuming a non-zero derivative) and selecting the positive branch. Then (3.3.4) becomes

  xₙ₊₁ = xₙ + [ −1 + { 1 − 2 [f(xₙ)/f′(xₙ)] [f″(xₙ)/f′(xₙ)] }^(1/2) ] / [ f″(xₙ)/f′(xₙ) ]   (3.3.5)

Rationalising the numerator gives

  [PENM]  xₙ₊₁ = xₙ − 2 [f(xₙ)/f′(xₙ)] / [ 1 + { 1 − 2 [f(xₙ)/f′(xₙ)] [f″(xₙ)/f′(xₙ)] }^(1/2) ]   (3.3.6)
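The rationalised form is the one to implement, since it avoids the cancellation hidden in the quadratic formula. A Python sketch (illustrative; it assumes the expression under the square root stays non-negative, and the function name is ours):

```python
def penm(f, f1, f2, x0, tol=1e-10, max_iter=50):
    """Parabolic Extension of Newton's Method, in the
    rationalised form of the update."""
    x = x0
    for _ in range(max_iter):
        u = f(x) / f1(x)                 # Newton correction f/f'
        w = f2(x) / f1(x)                # curvature ratio f''/f'
        disc = 1.0 - 2.0 * u * w         # term under the square root
        x_new = x - 2.0 * u / (1.0 + disc ** 0.5)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```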


Chapter 4

NUMERICAL SCHEMES FOR

FUNCTIONS OF TWO VARIABLES


(4.1) Newton's Method for systems of nonlinear equations with two variables:

Suppose that f : D ⊂ ℝ² → ℝ and g : D ⊂ ℝ² → ℝ. Then the problem of finding the solution of a system of two nonlinear equations can be stated as:

Find (α, β) ∈ D s.t.

  f(α, β) = 0
  g(α, β) = 0   (4.1.1)

Now Newton's scheme for this problem is to solve the system of linear equations

  fx(x*ₙ, y*ₙ)(x*ₙ₊₁ − x*ₙ) + fy(x*ₙ, y*ₙ)(y*ₙ₊₁ − y*ₙ) = −f(x*ₙ, y*ₙ)
  gx(x*ₙ, y*ₙ)(x*ₙ₊₁ − x*ₙ) + gy(x*ₙ, y*ₙ)(y*ₙ₊₁ − y*ₙ) = −g(x*ₙ, y*ₙ)   (4.1.2)

where (x*ₙ, y*ₙ) is the nth iterate.

Letting p₁ = x*ₙ₊₁ − x*ₙ and p₂ = y*ₙ₊₁ − y*ₙ, we can find the (n+1)th iterate:

  x*ₙ₊₁ = p₁ + x*ₙ
  y*ₙ₊₁ = p₂ + y*ₙ   (4.1.3)

As in the one dimensional case, it is possible to interpret Newton's scheme geometrically for the two dimensional case. Here we also approximate the two functions f(x,y) and g(x,y) by local linear models in the neighbourhood of (x*ₙ, y*ₙ). In Newton's method, these local linear models are the tangent planes drawn to the functions f(x,y) and g(x,y) at the current iterate (x*ₙ, y*ₙ) [say M₁(x,y) and M₂(x,y) respectively].


Then the corresponding equations of these planes are:

  M₁(x,y) = f(x*ₙ, y*ₙ) + fx(x*ₙ, y*ₙ)(x − x*ₙ) + fy(x*ₙ, y*ₙ)(y − y*ₙ)
  M₂(x,y) = g(x*ₙ, y*ₙ) + gx(x*ₙ, y*ₙ)(x − x*ₙ) + gy(x*ₙ, y*ₙ)(y − y*ₙ)   (4.1.4)

At the next iterate (x*ₙ₊₁, y*ₙ₊₁), it is assumed that both M₁(x,y) and M₂(x,y) vanish, i.e.

  M₁(x*ₙ₊₁, y*ₙ₊₁) = 0
  M₂(x*ₙ₊₁, y*ₙ₊₁) = 0

Now we solve these two equations simultaneously to obtain (x*ₙ₊₁, y*ₙ₊₁).

We can visualise (x*ₙ₊₁, y*ₙ₊₁) geometrically as follows. Since we are solving the two nonlinear equations f(x,y) = 0 and g(x,y) = 0 in the xy-plane, we are interested only in the traces of the planes z = M₁(x,y) and z = M₂(x,y) in the xy-plane. These two traces are lines (say L₁ and L₂) in the xy-plane, with corresponding equations M₁(x,y) = 0 and M₂(x,y) = 0. Then (x*ₙ₊₁, y*ₙ₊₁) is the intersection point of the lines L₁ and L₂.

As in the one dimensional case, the above linear model can be interpreted in another way. Let x*ₙ = (x*ₙ, y*ₙ)ᵀ and p = (p₁, p₂)ᵀ; then by (2.2.2) we obtain

  f(x*ₙ + p) = f(x*ₙ) + ∫₀¹ ∇f(x*ₙ + tp)ᵀ · p dt


  = f(x*ₙ) + p₁ ∫₀¹ fx(x*ₙ + tp₁, y*ₙ + tp₂) dt + p₂ ∫₀¹ fy(x*ₙ + tp₁, y*ₙ + tp₂) dt

Let x = x*ₙ + p; then we have

  f(x,y) = f(x*ₙ, y*ₙ) + (x − x*ₙ) ∫₀¹ fx(x*ₙ + tp₁, y*ₙ + tp₂) dt
         + (y − y*ₙ) ∫₀¹ fy(x*ₙ + tp₁, y*ₙ + tp₂) dt   (4.1.7)

The local model M₁(x,y) [see equation (4.1.4)] is obtained by approximating the two indefinite integrals by the rectangles

  ∫₀¹ fx(x*ₙ + tp₁, y*ₙ + tp₂) dt ≈ fx(x*ₙ, y*ₙ)  and

  ∫₀¹ fy(x*ₙ + tp₁, y*ₙ + tp₂) dt ≈ fy(x*ₙ, y*ₙ)

Similarly, we can obtain the local model M₂(x,y).

Algorithm (4.1.1):

Newton's Method for Systems of Nonlinear Equations with Two Variables

Given f : ℝ² → ℝ and g : ℝ² → ℝ continuously differentiable and x₀ = (x₀, y₀)ᵀ ∈ ℝ²: at each iteration n, solve the linear system (4.1.2) for pₙ = (p₁, p₂)ᵀ and set

  x*ₙ₊₁ = x*ₙ + pₙ   (4.1.8)
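The algorithm amounts to one 2×2 linear solve per iteration; a Python sketch using Cramer's rule for the solve (illustrative only; the function names and the worked test problem are ours, and the thesis's own code was Turbo Pascal):

```python
def newton_system(f, g, fx, fy, gx, gy, x0, y0, tol=1e-10, max_iter=50):
    """Newton's method for f(x,y) = 0, g(x,y) = 0: solve the 2x2
    linear system for the step (p1, p2) by Cramer's rule."""
    x, y = x0, y0
    for _ in range(max_iter):
        a, b = fx(x, y), fy(x, y)
        c, d = gx(x, y), gy(x, y)
        det = a * d - b * c                    # Jacobian determinant
        p1 = (-f(x, y) * d + g(x, y) * b) / det
        p2 = (-g(x, y) * a + f(x, y) * c) / det
        x, y = x + p1, y + p2
        if max(abs(p1), abs(p2)) < tol:        # step size in the infinity norm
            return x, y
    return x, y
```

For example, solving x² + y² = 4 together with xy = 1 from the starting point (2, 0.5) converges to a solution of both equations.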


(4.2) Improved Newton's Method for systems of nonlinear equations with two variables:

(4.2.1) Formulation of local models:

From equation (4.1.7):

  f(x,y) = f(xₙ, yₙ) + (x − xₙ) ∫₀¹ fx(xₙ + tp₁, yₙ + tp₂) dt
         + (y − yₙ) ∫₀¹ fy(xₙ + tp₁, yₙ + tp₂) dt

In this case, we approximate the indefinite integrals by trapeziums:

  ∫₀¹ fx(xₙ + tp₁, yₙ + tp₂) dt ≈ (1/2)[ fx(xₙ, yₙ) + fx(xₙ + p₁, yₙ + p₂) ]

  ∫₀¹ fy(xₙ + tp₁, yₙ + tp₂) dt ≈ (1/2)[ fy(xₙ, yₙ) + fy(xₙ + p₁, yₙ + p₂) ]

Let x = xₙ + p; then we have

  ∫₀¹ fx(xₙ + tp₁, yₙ + tp₂) dt ≈ (1/2)[ fx(xₙ, yₙ) + fx(x, y) ]

  ∫₀¹ fy(xₙ + tp₁, yₙ + tp₂) dt ≈ (1/2)[ fy(xₙ, yₙ) + fy(x, y) ]

Thus the local model for the function f(x,y) in a neighbourhood of xₙ is:

  m₁(x,y) = f(xₙ, yₙ) + (1/2)(x − xₙ)[ fx(xₙ, yₙ) + fx(x,y) ]
          + (1/2)(y − yₙ)[ fy(xₙ, yₙ) + fy(x,y) ]   (4.2.1)

Similarly, the local model for the function g(x,y) in a neighbourhood of xₙ is:

  m₂(x,y) = g(xₙ, yₙ) + (1/2)(x − xₙ)[ gx(xₙ, yₙ) + gx(x,y) ]
          + (1/2)(y − yₙ)[ gy(xₙ, yₙ) + gy(x,y) ]   (4.2.2)


(4.2.2) Properties of the local models:

(i) When x = xₙ and y = yₙ,

  m₁(xₙ, yₙ) = f(xₙ, yₙ)

(ii) The gradient of m₁ at x is

  ∇m₁(x) = [ (1/2)( fx(xₙ) + fx(x) ) + (1/2)(x − xₙ) fxx(x) + (1/2)(y − yₙ) fyx(x),
             (1/2)( fy(xₙ) + fy(x) ) + (1/2)(y − yₙ) fyy(x) + (1/2)(x − xₙ) fxy(x) ]ᵀ

When x = xₙ and y = yₙ,

  ∇m₁(xₙ) = [ fx(xₙ, yₙ), fy(xₙ, yₙ) ]ᵀ = ∇f(xₙ)

Similarly, it can be shown that ∇m₂(xₙ) = ∇g(xₙ).

Since the normal line to a surface at a point is parallel to the gradient vector at that point, the equality of the function values and of the gradient vectors implies that the resultant models are tangent surfaces to the corresponding functions at the point xₙ.

(iii) Let H₁(x) = (h_ij) be the Hessian matrix of the model m₁(x); then we have

  h₁₁ = (1/2)[ 2fxx(x) + (x − xₙ) fxxx(x) + (y − yₙ) fyxx(x) ]


  h₁₂ = (1/2)[ fxy(x) + fyx(x) + (x − xₙ) fxxy(x) + (y − yₙ) fyxy(x) ]

  h₂₁ = (1/2)[ fxy(x) + fyx(x) + (y − yₙ) fyyx(x) + (x − xₙ) fxyx(x) ]

  h₂₂ = (1/2)[ 2fyy(x) + (y − yₙ) fyyy(x) + (x − xₙ) fxyy(x) ]

When x = xₙ, the elements of the Hessian matrix become

  h₁₁ = fxx(xₙ)

  h₁₂ = (1/2)[ fxy(xₙ) + fyx(xₙ) ] = fxy(xₙ)
        [assuming both fxy and fyx continuous on D, by the use of Clairaut's Theorem]

  h₂₁ = (1/2)[ fxy(xₙ) + fyx(xₙ) ] = fyx(xₙ)
        [assuming both fxy and fyx continuous on D, by the use of Clairaut's Theorem]

  h₂₂ = fyy(xₙ)

Thus, the Hessian matrix of the local model m₁(x) at x = xₙ is the Hessian matrix of f at xₙ. Similarly, it can be shown that, if the Hessian matrix of the local model m₂(x) is H₂(x), then H₂(xₙ) is the Hessian matrix of g at xₙ.


Thus, the Hessian matrices H₁(x) and H₂(x) of the local models m₁(x) and m₂(x) agree with the Hessian matrices of the two functions f(x) and g(x) at x = xₙ respectively. Note that this property does not hold for the local models for Newton's method.

At the next iterative point xₙ₊₁, we assume both the local models m₁(x) and m₂(x) vanish. Then we obtain the following system of equations:

  (xₙ₊₁ − xₙ)[ fx(xₙ) + fx(xₙ₊₁) ] + (yₙ₊₁ − yₙ)[ fy(xₙ) + fy(xₙ₊₁) ] = −2 f(xₙ)
  (xₙ₊₁ − xₙ)[ gx(xₙ) + gx(xₙ₊₁) ] + (yₙ₊₁ − yₙ)[ gy(xₙ) + gy(xₙ₊₁) ] = −2 g(xₙ)

Obviously, as in the one dimensional case, this is an implicit scheme, which requires the first partial derivatives of the functions f and g at the (n+1)th iterative step in order to calculate the (n+1)th iterate itself. We overcome this problem by making use of the Newton iterative step to compute the first partial derivatives of f and g at the (n+1)th iterate.

Then the resulting scheme is:

  (xₙ₊₁ − xₙ)[ fx(xₙ) + fx(x*ₙ₊₁) ] + (yₙ₊₁ − yₙ)[ fy(xₙ) + fy(x*ₙ₊₁) ] = −2 f(xₙ)
  (xₙ₊₁ − xₙ)[ gx(xₙ) + gx(x*ₙ₊₁) ] + (yₙ₊₁ − yₙ)[ gy(xₙ) + gy(x*ₙ₊₁) ] = −2 g(xₙ)

where x*ₙ₊₁ is the (n+1)th iterate obtained by applying Newton's Method.


Algorithm (4.2.1):

Improved Newton's Method for Systems of Nonlinear Equations with Two Variables

Given f : ℝ² → ℝ and g : ℝ² → ℝ continuously differentiable and x₀ = (x₀, y₀)ᵀ ∈ ℝ²: at each iteration n, first compute the Newton iterate x*ₙ₊₁, then solve the linear system above for pₙ = xₙ₊₁ − xₙ and set

  xₙ₊₁ = xₙ + pₙ   (4.2.3)
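Each iteration thus performs a full Newton predictor step and then solves the trapezium-based corrector system. A Python sketch, again using Cramer's rule for the 2×2 solves (illustrative only; function names and the test problem are ours):

```python
def improved_newton_system(f, g, fx, fy, gx, gy, x0, y0,
                           tol=1e-12, max_iter=50):
    """Improved Newton's Method for f(x,y) = 0, g(x,y) = 0."""

    def solve2(a, b, c, d, r1, r2):
        # solve a*p1 + b*p2 = r1, c*p1 + d*p2 = r2 (Cramer's rule)
        det = a * d - b * c
        return (r1 * d - b * r2) / det, (a * r2 - r1 * c) / det

    x, y = x0, y0
    for _ in range(max_iter):
        # Newton predictor (x*, y*)
        p1, p2 = solve2(fx(x, y), fy(x, y), gx(x, y), gy(x, y),
                        -f(x, y), -g(x, y))
        xs, ys = x + p1, y + p2
        # corrector: averaged partial derivatives, doubled right hand side
        q1, q2 = solve2(fx(x, y) + fx(xs, ys), fy(x, y) + fy(xs, ys),
                        gx(x, y) + gx(xs, ys), gy(x, y) + gy(xs, ys),
                        -2.0 * f(x, y), -2.0 * g(x, y))
        x, y = x + q1, y + q2
        if max(abs(q1), abs(q2)) < tol:
            return x, y
    return x, y
```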


Chapter 5

ANALYSIS OF CONVERGENCE OF INM FOR

FUNCTIONS OF ONE VARIABLE

5.1 Second Order Convergence

Lemma (5.1.1) :


Let f : D → ℜ for an open interval D and let f' ∈ Lip_γ(D). Assume that for some ρ > 0, |f'(x)| ≥ ρ for every x ∈ D. If f(x) = 0 has a solution α ∈ D, then

|x*_n+1 - α| ≤ (γ/2ρ) |x_n - α|^2

Proof:

x*_n+1 = x_n - f(x_n)/f'(x_n)

x*_n+1 - α = (x_n - α) - f(x_n)/f'(x_n)
           = [f'(x_n)]^-1 { f(α) - f(x_n) - f'(x_n)(α - x_n) }

|x*_n+1 - α| = |[f'(x_n)]^-1| |f(α) - f(x_n) - f'(x_n)(α - x_n)|
             ≤ |[f'(x_n)]^-1| (γ/2)(α - x_n)^2        [by Lemma (2.2)]
             ≤ (γ/2ρ) |x_n - α|^2                     [since |f'(x)| ≥ ρ]  □

Theorem (5.1.1) :

Let f : D → ℜ for an open interval D and f' ∈ Lip_γ(D). Assume that for some ρ > 0, |f'(x)| ≥ ρ for every x ∈ D. If f(x) has a simple root at α ∈ D and x_0 is sufficiently close to α, then ∃ δ > 0 such that the Improved Newton's method defined by (3.9) satisfies the following inequality:


e_n+1 ≤ (2γ/δ) { 1 + (γ/4ρ) e_n } e_n^2        (5.1.1)

Here e_n = |x_n - α|.

Proof:

By the Improved Newton's Method:

x_n+1 = x_n - 2 f(x_n) / [f'(x_n) + f'(x*_n+1)],    n = 0, 1, 2, ...

where x*_n+1 = x_n - f(x_n)/f'(x_n), and f(α) = 0.

Thus (x_n+1 - α) = (x_n - α) - 2 f(x_n) / [f'(x_n) + f'(x*_n+1)]

Then by the triangular inequality,

e_n+1 ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { |f(α) - f(x_n) - f'(x_n)(α - x_n)| + |f(α) - f(x_n) - f'(x*_n+1)(α - x_n)| }

By the Lemma (2.1.2),

e_n+1 ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { (γ/2)(α - x_n)^2 + |f(α) - f(x_n) - f'(x*_n+1)(α - x_n)| }

      ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { (γ/2) e_n^2 + |f(α) - f(x_n) - f'(x_n)(α - x_n) + f'(x_n)(α - x_n) - f'(x*_n+1)(α - x_n)| }

Then again by the triangular inequality,

e_n+1 ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { (γ/2) e_n^2 + |f(α) - f(x_n) - f'(x_n)(α - x_n)| + |f'(x_n) - f'(x*_n+1)| |α - x_n| }


Lemma (2.1.2) and Lipschitz continuity of f' imply

e_n+1 ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { (γ/2) e_n^2 + (γ/2) e_n^2 + γ |x*_n+1 - x_n| e_n }
      = |[f'(x_n) + f'(x*_n+1)]^-1| { γ e_n^2 + γ |(x*_n+1 - α) + (α - x_n)| e_n }
      ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { γ e_n^2 + γ |x*_n+1 - α| e_n + γ |α - x_n| e_n }

By the Lemma (5.1.1),

e_n+1 ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { γ e_n^2 + γ (γ/2ρ) |α - x_n|^2 e_n + γ e_n^2 }
      ≤ |[f'(x_n) + f'(x*_n+1)]^-1| { 2γ e_n^2 + (γ^2/2ρ) e_n^3 }        (5.1.2)

Obviously |f'(x) + f'(y)| ≠ 0, ∀ x, y ∈ D.

For if |f'(x) + f'(y)| = 0 for some x, y ∈ D

⇒ f'(x) = f'(y) = 0 or f'(x) = -f'(y)

If f'(x) = f'(y) = 0, it contradicts the assumption |f'(x)| ≥ ρ > 0, ∀ x ∈ D.

If f'(x) = -f'(y)

⇒ f'(x) < 0 < f'(y) or f'(y) < 0 < f'(x)

⇒ ∃ z ∈ D s.t. f'(z) = 0 (by the Intermediate Value Theorem; f' ∈ Lip_γ(D) and hence f' is continuous on D), which contradicts our assumption that |f'(x)| ≥ ρ > 0, ∀ x ∈ D.

Thus |f'(x) + f'(y)| > 0, ∀ x, y ∈ D.

Hence ∃ δ > 0 s.t. |f'(x) + f'(y)| > δ > 0, ∀ x, y ∈ D.

In particular, we have

|f'(x_n) + f'(x*_n+1)| ≥ δ

|[f'(x_n) + f'(x*_n+1)]|^-1 ≤ (1/δ)        (5.1.3)


Substituting (5.1.3) in (5.1.2),

e_n+1 ≤ (1/δ) { 2γ e_n^2 + (γ^2/2ρ) e_n^3 }

e_n+1 ≤ (2γ/δ) { 1 + (γ/4ρ) e_n } e_n^2        □

5.2 Third Order Convergence

Theorem (5.2.1) :


Let f : D → ℜ for an open interval D. Assume that f has first, second and third derivatives in the interval D. If f(x) has a simple root at α ∈ D and x_0 is sufficiently close to α, then the Improved Newton's method defined by (3.2.4) satisfies the following error equation:

e_n+1 = (C2^2 + (1/2)C3) e_n^3 + O(e_n^4)

where e_n = x_n - α and C_j = (1/j!) f^(j)(α)/f^(1)(α), j = 1, 2, 3, ...

Proof:

The Improved Newton's Method [INM] is

x_n+1 = x_n - 2 f(x_n) / [f'(x_n) + f'(x*_n+1)],    n = 0, 1, 2, ...        (5.2.1)

where x*_n+1 = x_n - f(x_n)/f'(x_n)

Let α be a simple root of f(x) [i.e. f(α) = 0 and f'(α) ≠ 0] and x_n = α + e_n.

We use the following Taylor expansions:


f(x_n) = f(α + e_n) = f^(1)(α)[e_n + C2 e_n^2 + C3 e_n^3 + O(e_n^4)]        (5.2.2)

where C_j = (1/j!) f^(j)(α)/f^(1)(α)

f^(1)(x_n) = f^(1)(α + e_n)
           = f^(1)(α) + f^(2)(α) e_n + (1/2!) f^(3)(α) e_n^2 + O(e_n^3)
           = f^(1)(α)[1 + 2C2 e_n + 3C3 e_n^2 + O(e_n^3)]        (5.2.3)

Dividing (5.2.2) by (5.2.3),

f(x_n)/f^(1)(x_n) = [e_n + C2 e_n^2 + C3 e_n^3 + O(e_n^4)][1 + 2C2 e_n + 3C3 e_n^2 + O(e_n^3)]^-1

= [e_n + C2 e_n^2 + C3 e_n^3 + O(e_n^4)]{1 - [2C2 e_n + 3C3 e_n^2 + O(e_n^3)] + [2C2 e_n + 3C3 e_n^2 + O(e_n^3)]^2 - ...}

= [e_n + C2 e_n^2 + C3 e_n^3 + O(e_n^4)]{1 - [2C2 e_n + 3C3 e_n^2 + O(e_n^3)] + 4C2^2 e_n^2 + ...}

= [e_n + C2 e_n^2 + C3 e_n^3 + O(e_n^4)][1 - 2C2 e_n + (4C2^2 - 3C3) e_n^2 + O(e_n^3)]

= e_n - C2 e_n^2 + (2C2^2 - 2C3) e_n^3 + O(e_n^4)        (5.2.4)

x*_n+1 = x_n - f(x_n)/f^(1)(x_n)
       = α + e_n - [e_n - C2 e_n^2 + (2C2^2 - 2C3) e_n^3 + O(e_n^4)]        (By 5.2.4)
       = α + C2 e_n^2 - (2C2^2 - 2C3) e_n^3 + O(e_n^4)        (5.2.5)


Again by (5.2.5) and the Taylor's expansion,

f^(1)(x*_n+1) = f^(1)(α)[1 + 2C2^2 e_n^2 + O(e_n^3)]        (5.2.6)

Adding (5.2.3) and (5.2.6),

f^(1)(x_n) + f^(1)(x*_n+1) = f^(1)(α)[2 + 2C2 e_n + (3C3 + 2C2^2) e_n^2 + O(e_n^3)]        (5.2.7)

From equations (5.2.2) and (5.2.7),

2 f(x_n) / [f^(1)(x_n) + f^(1)(x*_n+1)] = e_n - (C2^2 + (1/2)C3) e_n^3 + O(e_n^4)        (5.2.8)

Thus

x_n+1 = x_n - [e_n - (C2^2 + (1/2)C3) e_n^3 + O(e_n^4)]        (by 5.2.8)

e_n+1 = (C2^2 + (1/2)C3) e_n^3 + O(e_n^4)        (5.2.9)  □

(5.2.9) establishes the third order convergence of the INM, beyond any doubt.
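The error equation can also be checked numerically. The sketch below (an illustrative Python check, not part of the thesis' Pascal code) iterates INM on f(x) = x^3 + 4x^2 - 10 and compares the observed ratio e_n+1/e_n^3 against the predicted asymptotic constant C2^2 + (1/2)C3:

```python
# Numerical check of e_{n+1} ≈ (C2^2 + C3/2) e_n^3 on f(x) = x^3 + 4x^2 - 10
# (function 1 of Table (6.3)). The second and third derivatives 6x + 8 and 6
# are hard-coded for this particular f.

f  = lambda x: x**3 + 4*x**2 - 10
fd = lambda x: 3*x**2 + 8*x

def inm(x):
    xs = x - f(x) / fd(x)                 # Newton predictor x*_{n+1}
    return x - 2 * f(x) / (fd(x) + fd(xs))

# Obtain the root to machine accuracy first
alpha = 1.5
for _ in range(20):
    alpha = inm(alpha)

# C_j = (1/j!) f^(j)(alpha) / f^(1)(alpha)
C2 = (6 * alpha + 8) / (2 * fd(alpha))
C3 = 6 / (6 * fd(alpha))
K  = C2**2 + C3 / 2

x1 = inm(1.5)
x2 = inm(x1)
e1, e2 = x1 - alpha, x2 - alpha
ratio = e2 / e1**3                        # should be close to K
```

With x_0 = 1.5 the ratio agrees with K to within a few per cent, the residual gap being the O(e_n^4) term of (5.2.9).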


Chapter 6

COMPUTATIONAL RESULTS

AND DISCUSSION


The objective of this study was to improve Newton's Method, which is widely used to solve nonlinear equations in one variable and systems of nonlinear equations. We designed a new method called Improved Newton's Method [INM] for functions of one variable and functions of two variables. Even though we have developed INM up to two variables only, we can readily extend INM to functions of several variables in general. The improvement was made by replacing the indefinite integral involved in the derivation of Newton's Method by the area of a trapezium instead of a rectangle.
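In code, the modification costs one extra derivative evaluation per step. A minimal sketch of the two update rules (illustrative Python; the thesis' own implementation is the Turbo Pascal listing of Appendix B):

```python
# Newton's Method approximates the integral of f' by a rectangle, giving
# x_{n+1} = x_n - f(x_n)/f'(x_n). INM replaces the rectangle by a trapezium,
# averaging f' at x_n and at the Newton predictor x*_{n+1}.
import math

def newton_step(f, fd, x):
    return x - f(x) / fd(x)

def inm_step(f, fd, x):
    xs = newton_step(f, fd, x)            # predictor x*_{n+1}
    return x - 2 * f(x) / (fd(x) + fd(xs))

# Example: f(x) = cos(x) - x (function 4 of Table (6.3))
f  = lambda x: math.cos(x) - x
fd = lambda x: -math.sin(x) - 1

x = 1.0
for _ in range(6):
    x = inm_step(f, fd, x)
```

From x_0 = 1 the iterates settle on the root 0.7390851332... to machine precision within a few steps.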

In chapter 2 we have discussed preliminaries required to design and analyse

Newton's Method [NM] and Improved Newton's Method. Furthermore, we defined

order of convergence of a numerical scheme and how to measure the order of

convergence approximately when three consecutive iterates close to the required root are available.
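That estimate from three consecutive errors, as implemented in the Pascal listing of Appendix B, can be sketched as:

```python
# Computational order of convergence (COC) from three consecutive errors:
#     p ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}),   e_n = |x_n - alpha|.
import math

def coc(e_prev, e_curr, e_next):
    return abs(math.log(e_next / e_curr) / math.log(e_curr / e_prev))

# For an exactly p-th order error sequence the estimate is exact: errors
# 1e-2, 1e-4, 1e-8 give p = 2, and 1e-1, 1e-3, 1e-9 give p = 3.
```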

In chapter 3, we have discussed Newton's Method [NM], Improved Newton's

Method [INM] and existing third order convergent methods such as Chebyshev

Method [CM], Multipoint Iterative Methods [MPM1 and MPM2] and Parabolic

Extension of Newton's Method [PENM] for nonlinear equations in one variable.


We applied these iterative methods to polynomial, exponential and trigonometric functions [Table 6.3]. The efficiency of these methods was compared by the number of iterations required to reach the actual root to 15 decimal places starting from the same initial guess. Note that in all cases INM converges to the root faster than NM. Compared to the third order convergent methods available, we can see that in most cases INM takes the same number of iterations, or sometimes requires even fewer iterations than the existing third order convergent methods. Computational order of convergence [COC] suggests that INM is of third order. For certain functions COC is even more than three and for Newton's Method, it is even less than two for most functions. In table (6.3), PENM does not converge to the root due to the occurrence of a negative value in the square root of the scheme. [Eqn 3.3.4]

Compared with the third order convergent methods, INM is simpler than the Chebyshev Method and PENM. The other important characteristic of the INM is that, unlike the other third or higher order methods, it is not required to compute second or higher derivatives of the function to carry out iterations.

The local model for the INM has an additional property which does not hold for the local linear model of Newton's Method, i.e. the second derivative of the local model agrees with the second derivative of the function at the current iterate. In other words, the model matches the value of the slope, as well as the curvature in terms of the second derivative of the function.


In chapter 5, we have shown that the INM is at least third order convergent provided the first, second and third derivatives of the function exist [Theorem 5.2.1]. Moreover the suggested method guarantees second order convergence whenever the first derivative of the function exists and is Lipschitz continuous in a neighbourhood of the root [Theorem 5.1.1].

Apparently, the INM needs one more function evaluation at each iteration,

when compared to Newton's Method. However, it is evident by the computed results

[Table 6.3] that the total number of function evaluations required is less than that of

Newton's Method.

In chapter 4, we have discussed Newton's Method for systems of nonlinear equations with two variables and given a geometric interpretation. In section 4.2.1 we extended the INM to systems of nonlinear equations with two variables. Moreover, we showed that the local models for INM are tangent surfaces to the corresponding functions at the current iterative point. Just as in the case of functions of one variable, for functions of several variables the local models for INM have an additional property over Newton's Method [Section 4.2.2], i.e. the Hessian matrices of the local models agree with the Hessian matrices of the corresponding functions at the present iterate.

We applied these two iterative methods [NM and INM] to systems of nonlinear equations of two variables with the same initial guess [Table 6.4]. As in the one dimensional case, the efficiency of these methods was compared by the number of iterations required to reach the actual root to 15 decimal places. As we expected, by applying INM to the systems of nonlinear equations, we can arrive at the root faster.


Computational order of convergence [COC] for the INM was almost three for most of

the systems we checked. For certain functions COC is even more than three and for

Newton's Method, it is less than two for most systems. These results suggest that the

third order convergence of INM is valid for systems of nonlinear equations as well.

By applying both Newton's Method and Improved Newton's Method for the

car loan problem in section A.3 of Appendix - A, the following results were obtained.

n    Newton's Method      Error

1    2.3886456015E-02     7.6113543985E-02
2    1.2742981700E-02     1.1143474316E-02
3    8.9187859799E-03     3.8241957199E-03
4    7.8291580298E-03     1.0896279701E-03
5    7.7032682357E-03     1.2588977406E-04
6    7.7014728591E-03     1.7953765763E-06
7    7.7014724900E-03     3.6913405665E-10

Table (6.1) Newton's Method for the car loan problem

Monthly rate = 0.77%

Annual rate = 9.24%


n    Improved Newton's Method    Error

1    6.5887577790E-03            9.3411242221E-02
2    7.6750417329E-03            1.0862839539E-03
3    7.7014722681E-03            2.6430535158E-05
4    7.7014724894E-03            2.2135537847E-10

Table (6.2) Improved Newton's Method for the car loan problem

Monthly rate = 0.77%

Annual rate = 9.24%

In both cases we used the same initial guess (= 0.10) and ultimately obtained the

annual interest rate as 9.24%. According to the above tables, INM gives the monthly

interest rate faster than that of Newton's Method. Notice that the number of function

evaluations required to obtain this result is 14 for NM and only 12 for INM.
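The car loan computation of Tables (6.1) and (6.2) can be reproduced with the sketch below (illustrative Python; the equation solved is the one derived in section A.3 of Appendix A):

```python
# Solve (40r - 1)(1 + r)^48 + 1 = 0 for the monthly rate r, starting from
# the same initial guess r0 = 0.10 used in Tables (6.1) and (6.2).
f  = lambda r: (40*r - 1) * (1 + r)**48 + 1
fd = lambda r: 40 * (1 + r)**48 + 48 * (40*r - 1) * (1 + r)**47

def newton(r, steps):
    for _ in range(steps):
        r = r - f(r) / fd(r)
    return r

def inm(r, steps):
    for _ in range(steps):
        rs = r - f(r) / fd(r)                 # Newton predictor
        r = r - 2 * f(r) / (fd(r) + fd(rs))
    return r

r_nm, r_inm = newton(0.10, 12), inm(0.10, 8)
monthly = round(100 * r_inm, 2)               # monthly rate in percent
```

Both iterations settle on the same monthly rate of about 0.77%, INM in fewer steps.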


Function f(x)                                     x0

1) x^3 + 4x^2 - 10                                -0.5, 1, 2, -0.3
2) sin^2(x) - x^2 + 1                             1, 3
3) x^2 - e^x - 3x + 2                             2, 3
4) cos(x) - x                                     1, 1.7, -0.3
5) (x - 1)^3 - 1                                  3.5, 2.5
6) x^3 - 10                                       1.5
7) x exp(x^2) - sin^2(x) + 3cos(x) + 5            -2
8) x^2 sin^2(x) + exp[x^2 cos(x) sin(x)] - 28     5
9) exp(x^2 + 7x - 30) - 1                         3.5, 3.25

Roots: 1) 1.36523001341448    2) 1.40449164821621    3) 0.257530285439771
4) 0.739085133214758    5) 2    6) 2.15443469003367    7) -1.20764782713013
8) 4.82458931731526    9) 3

NM - Newton's Method    INM - Improved Newton's Method    CM - Chebyshev Method
MPM1 - Multi-Point Method 1    MPM2 - Multi-Point Method 2
PENM - Parabolic Extension of Newton's Method
NC - Does not converge to the root    COC - Computational Order of Convergence
ND - Not defined    NOFE - Total no. of function evaluations    x0 - Initial guess

[For each initial guess the table records the number of iterations to approximate the
root to 15 places, the COC and the NOFE for NM, INM, CM, MPM1, MPM2 and PENM.]

Table (6.3) Computed results for functions of one variable


Functions                         Initial guess    No of iterations      COC               Root
f(x,y) & g(x,y)                   (x0,y0)          NM      INM           NM      INM

1) x^2 + y^2 - 2                  (1,2)            6       4             1.97    2.93      (1.22474487139152, 0.70706781186667)
   x^2 - y^2 - 1
2) x^4 + y^4 - 67                 (10,20)          16      11            1.99    2.40      (1.88364520891082, 2.71594753880345)
   x^3 - 3xy^2 + 35               (1.8,2.7)        6       4             1.95    2.91      -do-
3) x^2 - 10x + y^2 + 8            (-1,-2)          6       4             1.83    3.04      (1, 1)
   x^2 + x - 10y + 8              (5,-2)           106     17            2.03    2.96      -do-
4) x^2 + y^2 - 2                  (2,3)            8       5             1.70    3.51      (1, 1)
   e^(x-1) + y^3 - 2
5) -x^2 - x + 2y - 18             (-5,5)           9       6             1.99    2.46      (-2, 10)
   (x-1)^2 + (y-6)^2 - 25
6) 2cos(y) + 7sin(x) - 10x        (200,0)          13      6             2.01    2.99      (0.526522621918048, 0.507919719037091)
   7cos(x) - 2sin(y) - 10y        (2,2)            6       4             1.99    2.99      -do-
7) 16x^2 - 80x + y^2 + 32         (-1,-2)          6       4             1.23    2.97      (0.5, 2)
   x^2 + 4x - 10y + 16

NM - Newton's Method    INM - Improved Newton's Method    COC - Computational Order of Convergence

Table (6.4) Computed results for functions of two variables


APPENDIX-A

We give below some practical situations where the solution of nonlinear equations becomes the major problem. We have tried the suggested INM along with NM to show the advantages of adopting the former.

A.1 The Ladder in the Mine [2]

Figure A.1.1

There are two intersecting mine shafts that meet at an angle of 123°, as shown in Fig. (A.1.1). The straight shaft has a width of 7 ft, while the entrance shaft is 9 ft wide. Here we want to find the longest ladder that can negotiate the turn. We can neglect the thickness of the ladder members and assume it is not tipped as it is manoeuvred around the corner. Our solution should provide for the general case in which the angle A is a variable, as well as for the widths of the shafts.

Here is one way to analyse our ladder problem. Visualise the ladder in successive locations as we carry it around the corner; there will be a critical position in which the two ends of the ladder touch the walls while a point along the ladder touches

Page 50: Improved Newton’s Method for Solving Nonlinear Equations

42

the corner where the two shafts intersect. [See Fig. (A.1.2)] Let C be the angle between the ladder and the wall when it is in this critical position. It is usually preferable to solve this problem in general terms, so we work with the variables C, A and B.

Consider a series of lines drawn in this critical position - their lengths vary with the angle C, and the following relations hold (angles are expressed in radian measure):

l1 = w2 / sin B;    l2 = w1 / sin C;    B = π - A - C;

l = l1 + l2 = w2 / sin(π - A - C) + w1 / sin C

Figure A.1.2

The maximum length of the ladder that can negotiate the turn is the minimum of l as a function of the angle C. We hence set dl/dC = 0.

dl/dC = w2 cos(π - A - C)/sin^2(π - A - C) - w1 cos C/sin^2 C = 0


We can solve the general problem if we can find the value of C that satisfies this equation. With the critical angle determined, the ladder length is given by

l = w2 / sin(π - A - C) + w1 / sin C

As this analysis shows, to solve the specific problem we must solve a transcendental equation for the value of C:

9 cos(π - 2.147 - C)/sin^2(π - 2.147 - C) - 7 cos C/sin^2 C = 0

and then substitute C into

l = 9 / sin(π - 2.147 - C) + 7 / sin C,

where we have converted 123° into 2.147 radians.
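A sketch of the numerical solution (illustrative Python; bisection is used here for simplicity of bracketing, though NM or INM apply equally once a good starting value is known):

```python
# Solve 9 cos(pi - 2.147 - C)/sin^2(pi - 2.147 - C) - 7 cos(C)/sin^2(C) = 0
# for the critical angle C, then evaluate the maximum ladder length.
import math

A = 2.147                                     # 123 degrees in radians

def dldC(C):
    B = math.pi - A - C
    return 9 * math.cos(B) / math.sin(B)**2 - 7 * math.cos(C) / math.sin(C)**2

lo, hi = 0.1, 0.9                             # dldC changes sign on this bracket
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dldC(lo) * dldC(mid) <= 0:
        hi = mid
    else:
        lo = mid
C = 0.5 * (lo + hi)
L = 9 / math.sin(math.pi - A - C) + 7 / math.sin(C)   # longest ladder (ft)
```

The critical angle comes out near C ≈ 0.47 rad, giving a ladder length of roughly 33 ft.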

A.2 Molecular Configuration of a Compound [3]

A scientist may wish to determine the molecular configuration of a certain compound. The researcher derives an equation f(x) giving the potential energy of a possible configuration as a function of the tangent x of the angle between its two components. Then, since nature will force the molecule to assume the configuration with the minimum potential energy, it is desirable to find the x for which f(x) is minimised. This is a minimisation problem in the single variable x and we should find critical x values s.t. df/dx = 0. The equation is likely to be highly nonlinear, owing to the physics of the function f.


A.3 Interest Rate of a Loan

In real life we borrow loans at the expense of certain interest rates. Sometimes we need to find the interest rate when the principal, the periodic payment for an annuity, and the total number of periodic payments in an annuity of a loan are available. For example, suppose you get a four-year monthly car loan of Rs. 200,000/- and the lender asks you to pay Rs. 5,000/- at the end of each month. Now the problem is finding the annual interest rate of the loan.

To solve this problem we should find the roots [between 0 and 1] of the following equation:

P(1 + r)^n + A(1 + rI)[((1 + r)^n - 1)/r] = 0

where P - Principal

A - Periodic instalment (payment)

n - Total number of instalments

I - 0 or 1 (0 if payments are made at the end of the period and 1 if payments are made at the beginning of the period)

For solving the car loan problem, substituting

P = 200,000

A = -5,000 (cash you pay out, represented by a negative number)

n = 4 x 12 = 48

I = 0 (since payments are made at the end of the month)


we shall obtain

200,000(1 + r)^48 - 5,000[((1 + r)^48 - 1)/r] = 0

⇒ (40r - 1)(1 + r)^48 + 1 = 0

This is a higher degree polynomial equation and, by Galois theory, we can't find closed form solutions. Thus we should apply a numerical method for finding a root between 0 and 1 (since the interest rate is a ratio).

A.4 Nonlinear Least Squares [3]

Many real world problems involve selecting the best curve from a family of curves to fit data provided by some experiment or some sample population. Usually, in regression analysis, we deal with curves which are linear in the parameters. Thus, the equations obtained are linear when the residual least squares are minimised.

But one may want to fit a curve that is nonlinear in the parameters. For example, a researcher may want to fit a bell-shaped curve to his data collected from an experiment. Let (t_1, y_1), (t_2, y_2), ..., (t_n, y_n) be any n such pieces of data. In practice, however, there is experimental error in the points and in order to draw conclusions from the data, the researcher wants to find the bell-shaped curve that comes "closest" to the n points.


Since the general equation for a bell-shaped curve is

y = x_1 + x_2 e^(-(t + x_3)^2 / x_4),

it requires choosing x_1, x_2, x_3 and x_4 to minimise some aggregate measure of the discrepancies (residuals) between the data points and the curve; they are given by

r_i(x) = x_1 + x_2 e^(-(t_i + x_3)^2 / x_4) - y_i.

The most commonly used aggregate measure is the sum of squares of the r_i's, leading to determination of the bell-shaped curve by the solution of the nonlinear least-squares problem,

min over x ∈ ℜ^4 of f(x) = Σ (i = 1 to n) (x_1 + x_2 e^(-(t_i + x_3)^2 / x_4) - y_i)^2.

The problem is called a nonlinear least-squares problem because the residual functions r_i(x) are nonlinear functions of some of the parameters x_1, x_2, x_3, x_4.

When f is minimised, we obtain four nonlinear equations and it is impossible to find a closed form solution; one can only give numerical approximations. In this case, numerical methods play an important role in finding suitable approximations for x_1, x_2, x_3 and x_4.


APPENDIX -B

B.1 This program is used to compare numerical algorithms for finding roots of nonlinear equations in one variable.

{$N+}

uses crt, pro1_com, pro3_com; {Units pro1_com and pro3_com defined in Appendix - D}

type
  menutype = record
                op : string[10];
                hp : integer;
             end;
  fc       = object(fu)      {fu is defined in pro1_com}
             end;
  element  = object(item)    {item is defined in pro3_com}
             end;

var
  op, ch, ch1       : char;
  menuar            : array[1..9] of menutype;
  m                 : boolean;
  i, j, k           : integer;
  xi0, xc0,           {Current iterative values}
  xi, xc,             {Next iterative values}
  ierror, cerror,     {Error values}
  pi, pc,             {Computational Order of Convergence}
  aierror, acerror,   {Actual Error Values}
  epsilon           : real;   {Machine Precision}
  ei, ec            : array[1..3] of real;  {Keeps three consecutive errors}
  fcn               : fc;
  el                : element;

procedure menusc;  {Draws the Main Menu}
begin
  clrscr;
  el.drawbox(8,2,72,24);
  el.drawbox(6,1,74,25);
  el.drawbox(27,3,53,5);
  textcolor(0); textbackground(7);
  el.centertxt(4,'IMPROVED NEWTON METHOD');
  textcolor(7); textbackground(0);
  el.drawbox(28,21,51,23);
  el.centertxt(22,'ENTER YOUR SELECTION');
  gotoxy(16,8);  write('ewton Method');
  gotoxy(16,12); write('hebyshev Method');
  gotoxy(46,8);  write('arabolic Extension of NM');
  gotoxy(46,12); write('ultipoint Methods');
  gotoxy(37,16); write('uit');
  textcolor(0); textbackground(7);
  gotoxy(15,8);  write('N');
  gotoxy(15,12); write('C');
  gotoxy(45,8);  write('P');
  gotoxy(45,12); write('M');
  gotoxy(36,16); write('Q');
  textcolor(7); textbackground(0);
end; {of menusc}

function eps : real;  {Finds the machine epsilon}
var
  ep : real;
begin
  ep := 1;
  while 1 + ep > 1 do
    ep := ep/2;
  eps := ep;
end; {of eps}

procedure initialize;
begin
  clrscr;
  fcn.fcn_menu;
  fcn.fnchoice := readkey;
  write(fcn.fnchoice);
  writeln;
  while not (fcn.fnchoice in ['a'..'p']) do
    begin
      sound(250); delay(500); nosound;
      clrscr;
      fcn.fcn_menu;
      fcn.fnchoice := readkey;
      write(fcn.fnchoice);
      writeln;
    end; {of while}
  writeln;
  writeln('   Actual root   ', fcn.root);
  writeln;
  write('   Enter the initial point   ');
  readln(xi0);
  xc0 := xi0;
end; {of initialize}

procedure cal_nm (y0 : real; var y : real);  {Newton's Method}
begin
  y := y0 - (fcn.fx(y0)/fcn.fd(y0));
end;

procedure cal_inm (yc : real; var y : real);  {Improved Newton's Method}
var
  z, t : real;
begin
  z := yc - (fcn.fx(yc)/fcn.fd(yc));
  t := fcn.fd(yc) + fcn.fd(z);
  y := yc - 2*fcn.fx(yc)/t;
end;

procedure cal_chebyshev (yc : real; var y : real);  {Chebyshev Method}
begin
  y := yc - fcn.fx(yc)/fcn.fd(yc) - (1/2)*sqr(fcn.fx(yc)/fcn.fd(yc))*(fcn.fsd(yc)/fcn.fd(yc));
end;

procedure cal_multi1 (yc : real; var y : real);  {Multipoint Iterative Method 1}
var
  z : real;
begin
  z := yc - (1/2)*(fcn.fx(yc)/fcn.fd(yc));
  y := yc - (fcn.fx(yc)/fcn.fd(z));
end;

procedure cal_multi2 (yc : real; var y : real);  {Multipoint Iterative Method 2}
var
  z : real;
begin
  z := yc - (fcn.fx(yc)/fcn.fd(yc));
  y := z - fcn.fx(z)/fcn.fd(yc);
end;

procedure cal_para (yc : real; var y : real);  {Parabolic Extension of Newton's Method}
begin
  y := yc - 2*(fcn.fx(yc)/fcn.fd(yc))/(1 + sqrt(1 - 2*(fcn.fx(yc)/fcn.fd(yc))*(fcn.fsd(yc)/fcn.fd(yc))));
end;


procedure error(x, y : real; var e : real);
{Returns e as the absolute error between two numbers x & y}
begin
  e := abs(x - y);
end;

procedure title (w1, w2 : string; i, j : integer);
begin
  writeln('   ', w1, ' ':i, w2, ' ':j, 'i');
  writeln('--------------------------------------------------');
end;

procedure line;
begin
  writeln('--------------------------------------------------');
end;

procedure print_result (n : integer; x, y : real; w1, w2 : string; i, j : integer);
var
  k : integer;
begin
  k := n mod 15;
  if (k = 0) then
    begin
      write('   ', x, '   ', y, '   ', n);
      writeln;
      line;
      writeln('   Enter the RETURN key');
      readln;
      clrscr;
      title(w1, w2, i, j);
    end {of if k=0}
  else
    begin
      write('   ', x, '   ', y, '   ', n);
      writeln
    end; {of else k=0}
end; {of print_result}

procedure cal_compare_mtds;
{Selects the appropriate numerical method and calculates the new value and the error accordingly}
begin
  i := i + 1;
  case upcase(ch) of
    'C' : cal_chebyshev(xc0, xc);
    'P' : cal_para(xc0, xc);
    'O' : cal_multi1(xc0, xc);
    'T' : cal_multi2(xc0, xc);
    'N' : cal_nm(xc0, xc);
  end; {of case ch}
  cal_inm(xi0, xi);
  error(xc, xc0, cerror);
  error(xi, xi0, ierror);
  error(xc, fcn.root, acerror);
  error(xi, fcn.root, aierror);
  xc0 := xc;
  xi0 := xi;
end; {of cal_compare_mtds}


procedure compare_mtds;
{Compares numerical methods with Improved Newton's Method : iterates, errors
 and Computational Order of Convergence}
begin
  clrscr;
  gotoxy(20,4);  writeln('***************************************');
  gotoxy(20,5);
  case upcase(ch) of
    'C' : writeln('B : Chebyshev and Improved methods');
    'P' : writeln('B : PENM and Improved methods');
    'O' : writeln('B : MPM1 and Improved methods');
    'T' : writeln('B : MPM2 and Improved methods');
    'N' : writeln('B : Newton and Improved methods');
  end; {of case ch}
  gotoxy(20,6);  writeln('E : Errors');
  gotoxy(20,7);  writeln('C : Computational Order of Convergence');
  gotoxy(20,8);  writeln('***************************************');
  gotoxy(20,10); write('Enter your selection : ');
  ch1 := readkey;
  case upcase(ch1) of
    'E' : begin
            initialize;
            i := 0;
            clrscr;
            title('Error(Comp)', 'Error(INM)', 12, 13);
            repeat
              cal_compare_mtds;
              print_result(i, acerror, aierror, 'Error(Comp)', 'Error(INM)', 12, 13);
            until (((cerror < sqrt(epsilon)) and (abs(fcn.fx(xc)) < sqrt(epsilon))) and
                   ((ierror < sqrt(epsilon)) and (abs(fcn.fx(xi)) < sqrt(epsilon)))) or (i > 299);
          end; {of case E}
    'B' : begin
            initialize;
            i := 0;
            clrscr;
            title('Comp. Mtd', 'Improved Mtd', 13, 13);
            repeat
              cal_compare_mtds;
              print_result(i, xc, xi, 'Comp. Mtd', 'Improved Mtd', 13, 13);
            until (((cerror < sqrt(epsilon)) and (abs(fcn.fx(xc)) < sqrt(epsilon))) and
                   ((ierror < sqrt(epsilon)) and (abs(fcn.fx(xi)) < sqrt(epsilon)))) or (i > 299);
          end; {of case B}
    'C' : begin
            initialize;
            i := 0; j := 0;
            clrscr;
            writeln('   Computational order of convergence');
            writeln('   **********************************');
            writeln;
            title('Comp. Mtd', 'Improved Mtd', 16, 13);
            for k := 1 to 3 do  {Keeps three consecutive errors}
              begin
                ei[k] := abs(xi0 - fcn.root);
                ec[k] := abs(xc0 - fcn.root);
              end; {of for}
            repeat
              cal_compare_mtds;
              ei[1] := ei[2]; ei[2] := ei[3]; ei[3] := aierror;
              ec[1] := ec[2]; ec[2] := ec[3]; ec[3] := acerror;
              if i > 2 then
                begin
                  if (ei[1] <> ei[2]) and (ei[3] > sqrt(epsilon)) then
                    pi := abs((ln(ei[3]/ei[2]))/(ln(ei[2]/ei[1])));
                  if (ec[1] <> ec[2]) and (ec[3] > sqrt(epsilon)) then
                    pc := abs((ln(ec[3]/ec[2]))/(ln(ec[2]/ec[1])));
                  if (ei[1] <> ei[2]) and (ei[3] > sqrt(epsilon)) then
                    if (ec[1] <> ec[2]) and (ec[3] > sqrt(epsilon)) then
                      write('   ', pc, '   ', pi, '   ', i)
                    else
                      write('       *       ', pi, '   ', i)
                  else if (ec[1] <> ec[2]) and (ec[3] > sqrt(epsilon)) then
                    write('   ', pc, '       *       ', i)
                  else
                    write('       *           *       ', i);
                  writeln;
                  j := i mod 10;
                  if (j = 0) then
                    begin
                      line;
                      writeln('   Enter the RETURN key');
                      readln;
                      clrscr;
                      writeln('   Computational order of convergence');
                      writeln('   **********************************');
                      writeln;
                      title('Comp. Mtd', 'Improved mtd', 16, 13);
                    end; {of if j=0}
                end; {of if i>2}
            until (((cerror < sqrt(epsilon)) and (abs(fcn.fx(xc)) < sqrt(epsilon))) and
                   ((ierror < sqrt(epsilon)) and (abs(fcn.fx(xi)) < sqrt(epsilon)))) or (i > 299);
          end; {of case C}
  end; {of case ch1}
  line;
  writeln;
  writeln('   Enter the RETURN key');
  readln;
end; {of compare_mtds}

procedure menusc_multi_mtds;  {Draws the sub-menu for Multipoint Methods}
begin
  clrscr;
  gotoxy(25,9);  writeln('***********************');
  gotoxy(25,10); writeln('O : Multipoint Method 1');
  gotoxy(25,11); writeln('T : Multipoint Method 2');
  gotoxy(25,12); writeln('***********************');
  gotoxy(25,14); write('Enter your selection : ');
  ch := readkey;
end; {of menusc_multi_mtds}

begin {of main program}
  m := true;
  epsilon := eps;
  while m do
    begin
      menusc;
      ch := readkey;
      case upcase(ch) of
        'C' : compare_mtds;
        'P' : compare_mtds;
        'N' : compare_mtds;
        'M' : begin
                menusc_multi_mtds;
                compare_mtds;
              end; {of case M}
        'Q' : m := false
        else
          begin
            sound(250); delay(500); nosound;
          end; {of case else}
      end; {of case ch}
    end; {of while m}
  el.endsc;
end. {of main program}


B.2 This program is used to compare Newton's Method and Improved Newton's Method for solving systems of nonlinear equations in two variables.

{$N+}

uses crt, matrix, pro2_com, pro3_com; {Units matrix, pro2_com and pro3_com defined in Appendix - D}

type
  menutype = record
                op : string[10];
                hp : integer;
             end;
  fc       = object(fu)      {fu is defined in pro2_com}
             end;
  matrices = object(m)       {m is defined in matrix}
             end;
  element  = object(item)    {item is defined in pro3_com}
             end;

var
  ch                : char;
  menuar            : array[1..9] of menutype;
  m                 : boolean;
  i, j, k           : integer;
  xn0, xi0,           {Current iterative vectors}
  xn, xi,             {Next iterative vectors}
  root              : vector;  {Actual root}
  ei, en            : array[1..3] of real;  {Keeps three consecutive errors}
  nerror, ierror,     {Error values}
  pi, pn,             {Computational Order of Convergence}
  anerror, aierror,   {Actual error values}
  epsilon           : real;   {Machine Precision}
  fcn               : fc;
  mx                : matrices;
  el                : element;

procedure menusc;  {Draws the Main Menu}
begin
  clrscr;
  el.drawbox(8,2,72,24);
  el.drawbox(6,1,74,25);
  el.drawbox(27,3,53,5);
  textcolor(0); textbackground(7);
  el.centertxt(4,'IMPROVED NEWTON METHOD');
  textcolor(7); textbackground(0);
  el.drawbox(28,21,51,23);
  el.centertxt(22,'ENTER YOUR SELECTION');


  gotoxy(16,8);  write('MPROVED NEWTON');
  gotoxy(16,14); write('UIT');
  gotoxy(46,8);  write('EWTON');
  gotoxy(46,14); write('OMP ORDER CVG');
  textcolor(0); textbackground(7);
  gotoxy(15,8);  write('I');
  gotoxy(15,14); write('Q');
  gotoxy(45,8);  write('N');
  gotoxy(45,14); write('C');
  textcolor(7); textbackground(0);
end; {of menusc}

function eps : real;  {Finds the machine epsilon}
var
  ep : real;
begin
  ep := 1;
  while 1 + ep > 1 do
    ep := ep/2;
  eps := ep;
end; {of eps}

procedure initialize;
begin
  clrscr;
  fcn.fcn_menu;
  fcn.fnchoice := readkey;
  write(fcn.fnchoice);
  writeln;
  while not (fcn.fnchoice in ['a'..'o']) do
    begin
      sound(250); delay(500); nosound;
      clrscr;
      fcn.fcn_menu;
      fcn.fnchoice := readkey;
      write(fcn.fnchoice);
      writeln;
    end; {of while}
  writeln;
  writeln('   Actual root x   ', fcn.rootx);
  writeln('   Actual root y   ', fcn.rooty);
  writeln;
  root[1] := fcn.rootx;
  root[2] := fcn.rooty;
  write('   Enter the initial point   ');
  readln(xn0[1], xn0[2]);
  xi0[1] := xn0[1];
  xi0[2] := xn0[2];
end; {of initialize}

procedure Jacobian (x : vector; var J : mtrix);  { finds the Jacobian matrix J of F(x) }
begin
  J[1,1] := fcn.fdx(x);  J[1,2] := fcn.fdy(x);
  J[2,1] := fcn.gdx(x);  J[2,2] := fcn.gdy(x);
end; { of Jacobian }

procedure CombF (x : vector; var F : vector);  { returns the vector-valued function F(x) }
begin
  F[1] := fcn.f(x);
  F[2] := fcn.g(x);
end; { of CombF }

procedure cal_nm (y0 : vector; var y : vector);  { Newton's method }
var
  s, F, Fneg : vector;
  J, Jinv    : mtrix;
begin
  Jacobian(y0, J);
  CombF(y0, F);
  mx.invert(J, Jinv);
  mx.vnegate(F, Fneg);
  mx.vmultiply(Jinv, Fneg, s);
  mx.vadd(y0, s, y);
end; { of cal_nm }
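For reference, one step of this procedure can be sketched in Python with the 2x2 inverse written out element-wise, exactly as m.invert does it (all names here are ours, not the thesis program's):

```python
def newton_step(F, J, y0):
    # One Newton step for a 2x2 system: y = y0 - J(y0)^(-1) * F(y0)
    (a, b), (c, d) = J(y0)          # Jacobian entries at y0
    f1, f2 = F(y0)
    det = a * d - b * c             # 2x2 determinant, as in m.invert
    s1 = (-d * f1 + b * f2) / det   # s = -J^(-1) * F(y0)
    s2 = (c * f1 - a * f2) / det
    return (y0[0] + s1, y0[1] + s2)

# Example system (menu choice 'a'): f = x^2+y^2-2, g = x^2-y^2-1
F = lambda p: (p[0]**2 + p[1]**2 - 2, p[0]**2 - p[1]**2 - 1)
J = lambda p: ((2*p[0], 2*p[1]), (2*p[0], -2*p[1]))
```

Iterating from (1, 1) this converges to the listed root (sqrt(1.5), sqrt(0.5)).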

procedure cal_inm (y0 : vector; var y : vector);  { Improved Newton's Method }
var
  s, F, Fneg, yt, p, twoFneg : vector;
  J0, J0inv, Jt, J, Jinv     : mtrix;
begin
  Jacobian(y0, J0);
  CombF(y0, F);
  mx.invert(J0, J0inv);
  mx.vnegate(F, Fneg);
  mx.vmultiply(J0inv, Fneg, s);
  mx.vadd(y0, s, yt);            { yt is the ordinary Newton iterate }
  Jacobian(yt, Jt);
  mx.add(J0, Jt, J);             { J = J(y0) + J(yt) }
  mx.invert(J, Jinv);
  twoFneg[1] := 2*Fneg[1];
  twoFneg[2] := 2*Fneg[2];
  mx.vmultiply(Jinv, twoFneg, p);
  mx.vadd(y0, p, y);             { y = y0 - 2*[J(y0)+J(yt)]^(-1)*F(y0) }
end; { of cal_inm }
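The improved step replaces the Jacobian at y0 by the average of the Jacobians at y0 and at the Newton predictor yt, which is where the trapezium enters. A hedged, self-contained Python sketch of one such step (helper names are ours):

```python
def improved_newton_step(F, J, y0):
    # Solve the 2x2 system Jm * s = rhs by the adjugate formula (cf. m.invert)
    def solve2(Jm, rhs):
        (a, b), (c, d) = Jm
        det = a * d - b * c
        return ((d * rhs[0] - b * rhs[1]) / det,
                (-c * rhs[0] + a * rhs[1]) / det)

    f1, f2 = F(y0)
    s = solve2(J(y0), (-f1, -f2))             # ordinary Newton correction
    yt = (y0[0] + s[0], y0[1] + s[1])         # predictor
    J0, Jt = J(y0), J(yt)
    Jsum = ((J0[0][0] + Jt[0][0], J0[0][1] + Jt[0][1]),
            (J0[1][0] + Jt[1][0], J0[1][1] + Jt[1][1]))
    p = solve2(Jsum, (-2 * f1, -2 * f2))      # y = y0 - 2*[J(y0)+J(yt)]^(-1)*F(y0)
    return (y0[0] + p[0], y0[1] + p[1])
```

On the example system f = x^2+y^2-2, g = x^2-y^2-1 this reaches the root (sqrt(1.5), sqrt(0.5)) in fewer iterations than the plain Newton step.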

procedure error (x, y : vector; var e : real);  { returns the infinity norm of the difference of vectors x and y }
var
  n : vector;
begin
  mx.vsubstract(x, y, n);
  if abs(n[1]) >= abs(n[2]) then
    e := abs(n[1])
  else
    e := abs(n[2]);
end; { of error }
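The same infinity norm of the difference of two 2-vectors, as a one-line Python analogue (name ours):

```python
def inf_norm_error(x, y):
    # maximum absolute component of x - y, for 2-vectors
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))
```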

procedure title (w1, w2 : string; i, j : integer);
begin
  writeln(' ', w1, ' ':i, w2, ' ':j, 'i');
  writeln(' --------------------------------------------------');
end; { of title }

procedure line;
begin
  writeln(' --------------------------------------------------');
end; { of line }

procedure print_result (n : integer; x, y : real; w1, w2 : string; i, j : integer);
var
  k : integer;
begin
  k := n mod 15;
  if k = 0 then
  begin
    write(' ', x, ' ', y, ' ', n);
    writeln;
    line;
    writeln(' Enter the RETURN key');
    readln;
    clrscr;
    title(w1, w2, i, j);
  end { of if k = 0 }
  else
  begin
    write(' ', x, ' ', y, ' ', n);
    writeln;
  end; { of else k = 0 }
end; { of print_result }

procedure cal_mtds;  { runs the selected numerical method and finds the errors }
begin
  initialize;
  i := 0;
  clrscr;
  title('Xi', 'Yi', 14, 13);
  repeat
    i := i + 1;
    case upcase(ch) of
      'N' : cal_nm(xn0, xn);
      'I' : cal_inm(xn0, xn);
    end; { case ch }
    error(xn, xn0, nerror);
    error(xn, root, anerror);
    xn0 := xn;
    print_result(i, xn[1], xn[2], 'Xi', 'Yi', 14, 13);
  until (nerror < sqrt(epsilon)) or (i > 299);
  line;
  writeln;
  writeln(' Enter the RETURN key');
  readln;
end; { of cal_mtds }

procedure coc;  { calculates the Computational Order of Convergence }
begin
  initialize;
  i := 0;
  j := 0;
  clrscr;
  writeln(' Computational order of convergence');
  writeln(' **********************************');
  writeln;
  title('Newton mtd', 'Improved mtd', 16, 13);
  for k := 1 to 3 do   { keeps three consecutive errors }
  begin
    error(xi0, root, ei[k]);
    error(xn0, root, en[k]);
  end; { of for }
  repeat
    i := i + 1;
    cal_nm(xn0, xn);
    cal_inm(xi0, xi);
    error(xn, xn0, nerror);
    error(xi, xi0, ierror);
    error(xn, root, anerror);
    error(xi, root, aierror);
    xn0 := xn;
    xi0 := xi;
    ei[1] := ei[2];  ei[2] := ei[3];  ei[3] := aierror;
    en[1] := en[2];  en[2] := en[3];  en[3] := anerror;
    if i > 2 then
    begin
      if (ei[1] <> ei[2]) and (ei[3] > sqrt(epsilon)) then
        pi := abs((ln(ei[3]/ei[2]))/(ln(ei[2]/ei[1])));
      if (en[1] <> en[2]) and (en[3] > sqrt(epsilon)) then
        pn := abs((ln(en[3]/en[2]))/(ln(en[2]/en[1])));
      j := i mod 10;
      if j = 0 then
      begin
        if (ei[1] <> ei[2]) and (ei[3] > sqrt(epsilon)) then
          if (en[1] <> en[2]) and (en[3] > sqrt(epsilon)) then
            write(' ', pn, ' ', pi, ' ', i)
          else
            write(' * ', pi, ' ', i)
        else
          if (en[1] <> en[2]) and (en[3] > sqrt(epsilon)) then
            write(' ', pn, ' * ', i)
          else
            write(' * * ', i);
        writeln;
        line;
        writeln(' Enter the RETURN key');
        readln;
        clrscr;
        writeln(' Computational order of convergence');
        writeln(' **********************************');
        writeln;
        title('Newton mtd', 'Improved mtd', 16, 13);
      end { of if j = 0 }
      else
      begin
        if (ei[1] <> ei[2]) and (ei[3] > sqrt(epsilon)) then
          if (en[1] <> en[2]) and (en[3] > sqrt(epsilon)) then
            write(' ', pn, ' ', pi, ' ', i)
          else
            write(' * ', pi, ' ', i)
        else
          if (en[1] <> en[2]) and (en[3] > sqrt(epsilon)) then
            write(' ', pn, ' * ', i)
          else
            write(' * * ', i);
        writeln;
      end; { of else j = 0 }
    end; { of if i > 2 }
  until ((nerror < sqrt(epsilon)) and (ierror < sqrt(epsilon))) or (i > 299);
  line;
  writeln;
  writeln(' Enter the RETURN key');
  readln;
end; { of coc }
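The procedure estimates the order of convergence from three consecutive actual errors e1, e2, e3 via p = |ln(e3/e2) / ln(e2/e1)|. A small Python illustration of that formula (function name ours):

```python
import math

def comp_order(e1, e2, e3):
    # computational order of convergence from three consecutive errors
    return abs(math.log(e3 / e2) / math.log(e2 / e1))
```

If the errors decay exactly quadratically, e.g. 1e-2, 1e-4, 1e-8, the estimate is 2; for the improved method the thesis observes estimates of at least 3.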

begin { of main program }
  m := true;
  epsilon := eps;
  while m do
  begin
    menusc;
    ch := readkey;
    case upcase(ch) of
      'N' : cal_mtds;
      'I' : cal_mtds;
      'C' : coc;
      'Q' : m := false
    else
      begin
        sound(250); delay(500); nosound;
      end; { of case else }
    end; { of case ch }
  end; { of while }
  el.endsc;
end. { of main program }


B.3 This unit is used in the program inm_fcn_of_one_variable to input functions of one variable and their properties.

unit pro1_com;

{$N+}

interface

uses crt;

type
  fu = object
    fnchoice : char;
    procedure fcn_menu;
    function root           : real;
    function fx  (x : real) : real;
    function fd  (x : real) : real;
    function fsd (x : real) : real;
  end;

implementation

function power (x : real; n : integer) : real;  { evaluates x to the power n }
var
  a : real;
  i : integer;
begin
  a := 1;
  i := 1;
  while i <= abs(n) do
  begin
    a := a*x;
    i := i + 1;
  end; { of while }
  if n > 0 then
    power := a
  else
    if n < 0 then
      power := 1/a
    else
      power := 1;
end; { of power }
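The helper handles negative exponents through the reciprocal and returns 1 for n = 0. A Python sketch of the same logic (name ours):

```python
def int_power(x, n):
    a = 1.0
    for _ in range(abs(n)):      # repeated multiplication, |n| times
        a *= x
    if n < 0:
        return 1.0 / a           # negative exponent: reciprocal
    return a                     # n = 0 falls through with a = 1.0
```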

procedure fu.fcn_menu;  { draws the function menu }
begin
  clrscr;
  writeln(' FUNCTION MENU');
  writeln(' ******************************************************');
  writeln(' a : x^3+4x^2-10');
  writeln(' b : (sinx)^2-x^2+1');
  writeln(' c : x^2-expx-3x+2');
  writeln(' d : cosx-x');
  writeln(' e : (x-1)^3-1');
  writeln(' f : x^3-10');
  writeln(' g : xexp(x^2)-(sinx)^2+3cosx+5');
  writeln(' h : (xsinx)^2+exp(x^2cosx+sinx)-28');
  writeln(' i : exp(x^2+7x-30)-1');
  writeln(' j : exp(cosx)-1');
  writeln(' k : 16x+1/(1+x)^20-1');
  writeln(' l : x^2-1');
  writeln(' m : x^2-2x+1');
  writeln(' n : sin(x^2+10)');
  writeln(' o : sinx-x');
  writeln(' p : x^10-10');
  writeln(' ******************************************************');
  writeln;
  write(' SELECT YOUR CHOICE ');
end; { of fu.fcn_menu }

function fu.root : real;  { actual roots of the functions }
begin
  case fnchoice of
    'a' : root := 1.36523001341448;
    'b' : root := 1.40449164821621;
    'c' : root := 0.257530285439771;
    'd' : root := 0.739085133214758;
    'e' : root := 2;
    'f' : root := 2.15443469003367;
    'g' : root := -1.20764782713013;
    'h' : root := 4.82458931731526;
    'i' : root := 3;
    'j' : root := pi/2;
    'k' : root := 0.0222623113071165;
    'l' : root := 1;
    'm' : root := 1;
    'n' : root := sqrt(4*pi-10);
    'o' : root := 0;
    'p' : root := 1.25892541179383;
  end; { of case fnchoice }
end; { of fu.root }

function fu.fx (x : real) : real;  { evaluates the function fx at x }
begin
  case fnchoice of
    'a' : fx := x*sqr(x)+4*sqr(x)-10;
    'b' : fx := sqr(sin(x))-sqr(x)+1;
    'c' : fx := sqr(x)-exp(x)-3*x+2;
    'd' : fx := cos(x)-x;
    'e' : fx := (x-1)*sqr(x-1)-1;
    'f' : fx := x*sqr(x)-10;
    'g' : fx := x*exp(sqr(x))-sqr(sin(x))+3*cos(x)+5;
    'h' : fx := sqr(x*sin(x))+exp(sqr(x)*cos(x)+sin(x))-28;
    'i' : fx := exp(sqr(x)+7*x-30)-1;
    'j' : fx := exp(cos(x))-1;
    'k' : fx := 16*x+power(1+x,-20)-1;
    'l' : fx := sqr(x)-1;
    'm' : fx := sqr(x)-2*x+1;
    'n' : fx := sin(sqr(x)+10);
    'o' : fx := sin(x)-x;
    'p' : fx := power(x,10)-10;
  end; { of case fnchoice }
end; { of fu.fx }

function fu.fd (x : real) : real;  { evaluates the first derivative of fx at x }
begin
  case fnchoice of
    'a' : fd := 3*sqr(x)+8*x;
    'b' : fd := sin(2*x)-2*x;
    'c' : fd := 2*x-exp(x)-3;
    'd' : fd := -sin(x)-1;
    'e' : fd := 3*sqr(x-1);
    'f' : fd := 3*sqr(x);
    'g' : fd := (1+2*sqr(x))*exp(sqr(x))-sin(x)*(3+2*cos(x));
    'h' : fd := 2*x*sin(x)*(sin(x)+x*cos(x))
                +(cos(x)*(1+2*x)-sqr(x)*sin(x))*exp(sqr(x)*cos(x)+sin(x));
    'i' : fd := (2*x+7)*exp(sqr(x)+7*x-30);
    'j' : fd := -sin(x)*exp(cos(x));
    'k' : fd := 16-20*power(1+x,-21);
    'l' : fd := 2*x;
    'm' : fd := 2*x-2;
    'n' : fd := 2*x*cos(sqr(x)+10);
    'o' : fd := cos(x)-1;
    'p' : fd := 10*power(x,9);
  end; { of case fnchoice }
end; { of fu.fd }

function fu.fsd (x : real) : real;  { evaluates the second derivative of fx at x }
begin
  case fnchoice of
    'a' : fsd := 6*x+8;
    'b' : fsd := 2*cos(2*x)-2;
    'c' : fsd := 2-exp(x);
    'd' : fsd := -cos(x);
    'e' : fsd := 6*(x-1);
    'f' : fsd := 6*x;
    'g' : fsd := (6*x+4*x*sqr(x))*exp(sqr(x))-3*cos(x)-2*cos(2*x);
    'h' : fsd := 4*x*sin(2*x)+(2*sqr(x)-1)*cos(2*x)+1
                 +((2*cos(x)-sin(x)-4*x*sin(x)-sqr(x)*cos(x))
                 +sqr(cos(x)*(1+2*x)-sqr(x)*sin(x)))*exp(sqr(x)*cos(x)+sin(x));
    'i' : fsd := (2+sqr(2*x+7))*exp(sqr(x)+7*x-30);
    'j' : fsd := (sqr(sin(x))-cos(x))*exp(cos(x));
    'k' : fsd := 420*power(1+x,-22);
    'l' : fsd := 2;
    'm' : fsd := 2;
    'n' : fsd := 2*cos(sqr(x)+10)-4*sqr(x)*sin(sqr(x)+10);
    'o' : fsd := -sin(x);
    'p' : fsd := 90*power(x,8);
  end; { of case fnchoice }
end; { of fu.fsd }

end. { of unit pro1_com }
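With fx and fd from this unit, the thesis's one-variable scheme replaces the rectangular approximation in Newton's method by a trapezium: first the Newton predictor z = x - f(x)/f'(x), then the corrected step x_new = x - 2f(x)/(f'(x) + f'(z)). A hedged Python sketch of the iteration (names and tolerances are ours):

```python
def improved_newton_1d(f, fd, x, tol=1e-12, maxit=100):
    for _ in range(maxit):
        z = x - f(x) / fd(x)                     # ordinary Newton predictor
        x_new = x - 2 * f(x) / (fd(x) + fd(z))   # trapezoidal correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Menu function 'a': f(x) = x^3 + 4x^2 - 10, listed root 1.36523001341448
root = improved_newton_1d(lambda x: x**3 + 4*x**2 - 10,
                          lambda x: 3*x**2 + 8*x,
                          1.5)
```

The second derivative fsd is not needed by this scheme; the unit provides it for the other methods compared in the thesis.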


B.4 This unit is used in the program inm_fcn_of_two_variables to input functions of two variables and their properties.

unit pro2_com;

{$N+}

interface

uses crt, matrix;

type
  matrices = object(m) end;

  fu = object
    fnchoice : char;
    procedure fcn_menu;
    function rootx : real;
    function rooty : real;
    function f   (x : vector) : real;
    function fdx (x : vector) : real;
    function fdy (x : vector) : real;
    function g   (x : vector) : real;
    function gdx (x : vector) : real;
    function gdy (x : vector) : real;
  end;

var
  mx : matrices;

implementation

function power (x : real; n : integer) : real;  { evaluates x to the power n }
var
  a : real;
  i : integer;
begin
  a := 1;
  i := 1;
  while i <= abs(n) do
  begin
    a := a*x;
    i := i + 1;
  end; { of while }
  if n > 0 then
    power := a
  else
    if n < 0 then
      power := 1/a
    else
      power := 1;
end; { of power }

procedure fu.fcn_menu;  { prints the function menu }
begin
  clrscr;
  writeln(' FUNCTION MENU');
  writeln(' *********************************************************************');
  writeln(' a : f(x,y)=x^2+y^2-2,            g(x,y)=x^2-y^2-1');
  writeln(' b : f(x,y)=x^4+y^4-67,           g(x,y)=x^3-3xy^2+35');
  writeln(' c : f(x,y)=x^2-10x+y^2+8,        g(x,y)=xy^2+x-10y+8');
  writeln(' d : f(x,y)=x^2+y^2-x,            g(x,y)=x^2-y^2-y');
  writeln(' e : f(x,y)=x+y^3-5y^2-2y-10,     g(x,y)=x+y^3+y^2-14y-29');
  writeln(' f : f(x,y)=-x^2-x+2y-18,         g(x,y)=(x-1)^2+(y-6)^2-25');
  writeln(' g : f(x,y)=5x^2-y^2,             g(x,y)=y-0.25*(sin(x)+cos(y))');
  writeln(' h : f(x,y)=3x^2-y^2,             g(x,y)=3xy^2-x^3-1');
  writeln(' i : f(x,y)=16x^2-80x+y^2+32,     g(x,y)=xy^2+4x-10y+16');
  writeln(' j : f(x,y)=2cos(y)+7sin(x)-10x,  g(x,y)=7cos(x)-2sin(y)-10y');
  writeln(' k : f(x,y)=x^2+y^2-2,            g(x,y)=exp(x-1)+y^3-2');
  writeln(' *********************************************************************');
  writeln;
  write(' SELECT YOUR CHOICE ');
end; { of fu.fcn_menu }

function fu.rootx : real;  { x component of the actual root }
begin
  case fnchoice of
    'a' : rootx := sqrt(1.5);
    'b' : rootx := 1.88364520891052;
    'c' : rootx := 1;
    'd' : rootx := 0;
    'e' : rootx := 34.1993336862070;
    'f' : rootx := -2;
    'g' : rootx := 0;
    'h' : rootx := 0;
    'i' : rootx := 0.5;
    'j' : rootx := 0.526522621918048;
    'k' : rootx := 1;
  end; { of case fnchoice }
end; { of fu.rootx }

function fu.rooty : real;  { y component of the actual root }
begin
  case fnchoice of
    'a' : rooty := sqrt(0.5);
    'b' : rooty := 2.71594753880345;
    'c' : rooty := 1;
    'd' : rooty := 0;
    'e' : rooty := 3.04124145231931;
    'f' : rooty := 10;
    'g' : rooty := 0;
    'h' : rooty := 0;
    'i' : rooty := 2;
    'j' : rooty := 0.507919719037091;
    'k' : rooty := 1;
  end; { of case fnchoice }
end; { of fu.rooty }

function fu.f (x : vector) : real;  { returns the value of f }
begin
  case fnchoice of
    'a' : f := sqr(x[1])+sqr(x[2])-2;
    'b' : f := sqr(sqr(x[1]))+sqr(sqr(x[2]))-67;
    'c' : f := sqr(x[1])-(10*x[1])+sqr(x[2])+8;
    'd' : f := sqr(x[1])+sqr(x[2])-x[1];
    'e' : f := x[1]+x[2]*sqr(x[2])-5*sqr(x[2])-2*x[2]-10;
    'f' : f := -sqr(x[1])-x[1]+2*x[2]-18;
    'g' : f := 5*sqr(x[1])-sqr(x[2]);
    'h' : f := 3*sqr(x[1])-sqr(x[2]);
    'i' : f := 16*sqr(x[1])-80*x[1]+sqr(x[2])+32;
    'j' : f := 2*cos(x[2])+7*sin(x[1])-10*x[1];
    'k' : f := sqr(x[1])+sqr(x[2])-2;
  end; { of case fnchoice }
end; { of fu.f }

function fu.fdx (x : vector) : real;  { returns the first partial derivative of f w.r.t. x }
begin
  case fnchoice of
    'a' : fdx := 2*x[1];
    'b' : fdx := 4*x[1]*sqr(x[1]);
    'c' : fdx := (2*x[1])-10;
    'd' : fdx := 2*x[1]-1;
    'e' : fdx := 1;
    'f' : fdx := -(2*x[1]+1);
    'g' : fdx := 10*x[1];
    'h' : fdx := 6*x[1];
    'i' : fdx := 32*x[1]-80;
    'j' : fdx := 7*cos(x[1])-10;
    'k' : fdx := 2*x[1];
  end; { of case fnchoice }
end; { of fu.fdx }

function fu.fdy (x : vector) : real;  { returns the first partial derivative of f w.r.t. y }
begin
  case fnchoice of
    'a' : fdy := 2*x[2];
    'b' : fdy := 4*x[2]*sqr(x[2]);
    'c' : fdy := 2*x[2];
    'd' : fdy := 2*x[2];
    'e' : fdy := 3*sqr(x[2])-10*x[2]-2;
    'f' : fdy := 2;
    'g' : fdy := -2*x[2];
    'h' : fdy := -2*x[2];
    'i' : fdy := 2*x[2];
    'j' : fdy := -2*sin(x[2]);
    'k' : fdy := 2*x[2];
  end; { of case fnchoice }
end; { of fu.fdy }

function fu.g (x : vector) : real;  { returns the value of g }
begin
  case fnchoice of
    'a' : g := sqr(x[1])-sqr(x[2])-1;
    'b' : g := x[1]*sqr(x[1])-3*x[1]*sqr(x[2])+35;
    'c' : g := x[1]*sqr(x[2])+x[1]-(10*x[2])+8;
    'd' : g := sqr(x[1])-sqr(x[2])-x[2];
    'e' : g := x[1]+x[2]*sqr(x[2])+sqr(x[2])-14*x[2]-29;
    'f' : g := sqr(x[1]-1)+sqr(x[2]-6)-25;
    'g' : g := x[2]-0.25*(sin(x[1])+cos(x[2]));
    'h' : g := 3*x[1]*sqr(x[2])-x[1]*sqr(x[1])-1;
    'i' : g := x[1]*sqr(x[2])+4*x[1]-10*x[2]+16;
    'j' : g := 7*cos(x[1])-2*sin(x[2])-10*x[2];
    'k' : g := exp(x[1]-1)+(sqr(x[2])*x[2])-2;
  end; { of case fnchoice }
end; { of fu.g }

function fu.gdx (x : vector) : real;  { returns the first partial derivative of g w.r.t. x }
begin
  case fnchoice of
    'a' : gdx := 2*x[1];
    'b' : gdx := 3*sqr(x[1])-3*sqr(x[2]);
    'c' : gdx := sqr(x[2])+1;
    'd' : gdx := 2*x[1];
    'e' : gdx := 1;
    'f' : gdx := 2*(x[1]-1);
    'g' : gdx := -0.25*cos(x[1]);
    'h' : gdx := 3*sqr(x[2])-3*sqr(x[1]);
    'i' : gdx := sqr(x[2])+4;
    'j' : gdx := -7*sin(x[1]);
    'k' : gdx := exp(x[1]-1);
  end; { of case fnchoice }
end; { of fu.gdx }

function fu.gdy (x : vector) : real;  { returns the first partial derivative of g w.r.t. y }
begin
  case fnchoice of
    'a' : gdy := -2*x[2];
    'b' : gdy := -6*x[1]*x[2];
    'c' : gdy := (2*x[1]*x[2])-10;
    'd' : gdy := -2*x[2]-1;
    'e' : gdy := 3*sqr(x[2])+2*x[2]-14;
    'f' : gdy := 2*(x[2]-6);
    'g' : gdy := 1+0.25*sin(x[2]);
    'h' : gdy := 6*x[1]*x[2];
    'i' : gdy := 2*x[1]*x[2]-10;
    'j' : gdy := -2*cos(x[2])-10;
    'k' : gdy := 3*sqr(x[2]);
  end; { of case fnchoice }
end; { of fu.gdy }

end. { of unit pro2_com }


B.5 This unit creates an object that is used to draw a box, centre a text, etc.

unit pro3_com;

interface

uses crt;

type
  item = object
    procedure drawbox   (x1, y1, x2, y2 : integer);
    procedure centertxt (r : integer; lts : string);
    procedure endsc;
  end;

implementation

procedure item.drawbox (x1, y1, x2, y2 : integer);
var
  i : integer;
begin
  for i := x1+1 to x2-1 do
  begin
    gotoxy(i, y1); write(chr(205));
    gotoxy(i, y2); write(chr(205));
  end;
  for i := y1+1 to y2-1 do
  begin
    gotoxy(x1, i); write(chr(186));
    gotoxy(x2, i); write(chr(186));
  end;
  gotoxy(x1, y1); write(chr(201));
  gotoxy(x2, y2); write(chr(188));
  gotoxy(x1, y2); write(chr(200));
  gotoxy(x2, y1); write(chr(187));
end; { of item.drawbox }

procedure item.centertxt (r : integer; lts : string);
var
  x, l : integer;
begin
  l := length(lts);
  x := round((80-l)/2);
  gotoxy(x, r);
  write(lts);
end; { of item.centertxt }


procedure item.endsc;
begin
  clrscr;
  textcolor(0);
  textbackground(7);
  centertxt(14, 'end of program');
  textcolor(7);
  textbackground(0);
end; { of item.endsc }

end. { of unit pro3_com }


B.6 This unit creates an object to perform matrix operations.

unit matrix;

{$N+}

interface

uses crt;

const
  n = 2;

type
  mtrix  = array [1..n, 1..n] of real;
  vector = array [1..n] of real;

  m = object
    procedure multiply   (a, b : mtrix; var c : mtrix);
    procedure add        (a, b : mtrix; var c : mtrix);
    procedure substract  (a, b : mtrix; var c : mtrix);
    procedure negate     (a : mtrix; var c : mtrix);
    procedure invert     (a : mtrix; var c : mtrix);
    procedure transpose  (a : mtrix; var c : mtrix);
    procedure vmultiply  (a : mtrix; b : vector; var c : vector);
    procedure vadd       (a, b : vector; var c : vector);
    procedure vsubstract (a, b : vector; var c : vector);
    procedure vnegate    (a : vector; var c : vector);
  end;

implementation

procedure m.multiply (a, b : mtrix; var c : mtrix);  { multiplies matrices a and b }
var
  i, j, k : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
    begin
      c[i,j] := 0;
      for k := 1 to n do
        c[i,j] := c[i,j] + a[i,k]*b[k,j];
    end; { of for j }
end; { of m.multiply }

procedure m.add (a, b : mtrix; var c : mtrix);  { adds matrices a and b }
var
  i, j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[i,j] + b[i,j];
end; { of m.add }


procedure m.substract (a, b : mtrix; var c : mtrix);  { subtracts matrix b from matrix a }
var
  i, j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[i,j] - b[i,j];
end; { of m.substract }

procedure m.negate (a : mtrix; var c : mtrix);  { negates matrix a }
var
  i, j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := -a[i,j];
end; { of m.negate }

procedure m.invert (a : mtrix; var c : mtrix);  { inverts matrix a }
var
  det : real;
begin
  det := a[1,1]*a[2,2] - a[1,2]*a[2,1];  { determinant of matrix a }
  if det = 0 then
    writeln('Matrix is Singular')
  else
  begin
    c[1,1] := a[2,2]/det;
    c[1,2] := -a[1,2]/det;
    c[2,1] := -a[2,1]/det;
    c[2,2] := a[1,1]/det;
  end;
end; { of m.invert }
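For n = 2 the inverse is the adjugate divided by the determinant, exactly as coded above. The same formula as a Python sketch (names ours; a singular matrix raises instead of printing a message):

```python
def invert2(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]   # determinant of a
    if det == 0:
        raise ValueError('Matrix is Singular')
    return (( a[1][1] / det, -a[0][1] / det),     # adjugate / det
            (-a[1][0] / det,  a[0][0] / det))
```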

procedure m.transpose (a : mtrix; var c : mtrix);  { returns the transpose of matrix a }
var
  i, j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[j,i];
end; { of m.transpose }

procedure m.vmultiply (a : mtrix; b : vector; var c : vector);  { multiplies matrix a and vector b }
var
  i, j : integer;
begin
  for i := 1 to n do
  begin
    c[i] := 0;
    for j := 1 to n do
      c[i] := c[i] + a[i,j]*b[j];
  end;
end; { of m.vmultiply }


procedure m.vadd (a, b : vector; var c : vector);  { adds vectors a and b }
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := a[i] + b[i];
end; { of m.vadd }

procedure m.vsubstract (a, b : vector; var c : vector);  { subtracts vector b from vector a }
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := a[i] - b[i];
end; { of m.vsubstract }

procedure m.vnegate (a : vector; var c : vector);  { negates vector a }
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := -a[i];
end; { of m.vnegate }

end. { of unit matrix }


REFERENCES

1. Burden, Richard L. & Faires, J. Douglas (1993). Numerical Analysis. PWS-Kent Publishing Co., Boston.

2. Gerald, Curtis F. & Wheatley, Patrick O. (1989). Applied Numerical Analysis. Addison-Wesley Publishing Company.

3. Dennis, J.E. & Schnabel, Robert B. (1983). Numerical Methods for Unconstrained Optimisation and Nonlinear Equations. Prentice-Hall, Inc.

4. Gordon, Sheldon P. & Von Eschen, Ellis R. (1988). A parabolic extension of Newton's method. International Journal for Math. Educ. Sci. Technol.

5. Jain, M.K., Iyengar, S.R.K. & Jain, R.K. (1993). Numerical Methods (Problems and Solutions). New Age International Limited / Wiley Eastern Limited.

6. Weerakoon, Sunethra (1996). Numerical solution of nonlinear equations in the absence of the derivative. J. Natn. Sci. Coun. Sri Lanka 24(4): 309-318.
