Chapter 4
Geometrically constructed families of iterative methods
4.1 Introduction
The aim of this chapter³ is to derive one-parameter families of Newton's method [7, 11–16], Chebyshev's method [7, 30–32, 59–61], Halley's method [7, 30–32, 60–73], the super-Halley method [30–32, 61, 71–73], Chebyshev-Halley type methods [74], C-methods [30], Euler's method [7, 30, 64], the osculating circle method [33] and the ellipse method [31, 32, 61] for finding simple roots of nonlinear equations, permitting f′(x_n) = 0 at some points in the vicinity of the required root. Halley's method, Chebyshev's method, the super-Halley method, Chebyshev-Halley type methods, Euler's method, the osculating circle method and, as an exceptional case, Newton's method are obtained as special cases of these families. All methods of the various families converge cubically to simple roots, except the family of Newton's method, which converges quadratically. Further, by making some semi-discrete modifications to the proposed cubically convergent families, a number of interesting new classes of third-order as well as fourth-order multipoint iterative methods can be derived which are free from the second-order derivative. Furthermore, numerical examples show that all the proposed one-point as well as multipoint iterative families are effective and comparable to the well-known existing classical methods.
As discussed previously, one of the most basic problems in computational mathematics is to find solutions of nonlinear equations (1.1). To solve equation (1.1), one can use
³The main contents of this chapter have been published in [206].
iterative methods such as Newton's method [7, 11–16] and its variants, namely Halley's method [7, 30–32, 60–73], Chebyshev's method [7, 30–32, 59–61] and the super-Halley method [30–32, 61, 71–73]. Among all these iterative methods, Newton's method is probably the best-known and most widely used algorithm. Its geometric construction consists in considering a straight line
y = ax+b, (4.1)
and then determining unknowns ‘a’ and ‘b’ by imposing the tangency conditions :
y(xn) = f (xn), y′(xn) = f ′(xn), (4.2)
thereby obtaining a tangent line
y(x)− f (xn) = f ′(xn)(x− xn), (4.3)
to the graph of f(x) at (x_n, f(x_n)). The point of intersection of this line with the x-axis gives the celebrated Newton's method

x_{n+1} = x_n − f(x_n)/f′(x_n).    (4.4)
The tangent line in Newton’s method can be seen as first degree polynomial of f (x) at x = xn.
Though, Newton’s method is simple and fast with quadratic convergence, but its major draw-
back is that, it may fail miserably if at any stage of computation, the derivative of the function
f (x) is either zero or very small in the vicinity of the required root. Therefore, it has poor
convergence and stability problems as it is very sensitive to initial guess.
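The tangent-line construction above translates directly into code. The following is a minimal Python sketch of iteration (4.4) (the function names and the explicit guard against f′(x_n) = 0 are illustrative, not part of the original formulation), applied to Example 4.2.1 of Section 4.2.4, f(x) = exp(1 − x) − 1, whose simple root is r = 1:

```python
import math

def newton(f, fprime, x0, tol=1e-15, max_iter=100):
    """Newton's method (4.4): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fprime(x)
        if dfx == 0.0:
            # the drawback discussed in the text: the iteration breaks down
            raise ZeroDivisionError("f'(x_n) = 0: Newton's method fails")
        x = x - fx / dfx
    return x

# Example 4.2.1: f(x) = exp(1 - x) - 1 with simple root r = 1.
root = newton(lambda x: math.exp(1.0 - x) - 1.0,
              lambda x: -math.exp(1.0 - x), 0.4)
```

The guard makes the failure mode explicit: starting from a point where f′ vanishes aborts the iteration, which is precisely the defect the families derived below are designed to remove.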
The well-known third-order methods which entail an evaluation of f″(x) are close relatives of Newton's method and can be obtained by admitting a geometric derivation from quadratic curves, e.g. a parabola, a hyperbola, a circle or an ellipse.
In this category, Euler's method [7, 30, 64] can be constructed by considering a parabola
x2 +ax+by+ c = 0, (4.5)
and imposing the tangency conditions :
y(xn) = f (xn), y′(xn) = f ′(xn) and y′′(xn) = f ′′(xn), (4.6)
one gets

y(x) − f(x_n) = f′(x_n)(x − x_n) + (f″(x_n)/2)(x − x_n)².    (4.7)

The point of intersection of (4.7) with the x-axis gives the iteration scheme

x_{n+1} = x_n − [2/(1 ± √(1 − 2L_f(x_n)))] · f(x_n)/f′(x_n),    (4.8)

where

L_f(x_n) = f(x_n)f″(x_n)/{f′(x_n)}².    (4.9)
The well-known Halley’s method [7, 30–32, 60–73] admits its geometric derivation from
a hyperbola in the form
axy+ y+bx+ c = 0. (4.10)
After imposing the conditions (4.6) on (4.10) and taking the intersection of (4.10) with the x-axis as the next iterate, Halley's iteration process reads as

x_{n+1} = x_n − [2/(2 − L_f(x_n))] · f(x_n)/f′(x_n).    (4.11)
The classical Chebyshev’s method [7, 30–32, 59–61] admits its geometric derivation
from a parabola of the following form:

ay² + y + bx + c = 0.    (4.12)

Similarly, after imposing the tangency conditions (4.6) and taking the intersection of (4.12) with the x-axis as the next iterate, Chebyshev's iteration process reads as

x_{n+1} = x_n − {1 + (1/2) L_f(x_n)} · f(x_n)/f′(x_n).    (4.13)
Amat et al. [30] studied another type of third-order methods by considering a hyperbola of the following form:

a_n y² + b_n xy + y + c_n x + d_n = 0,    (4.14)

and, by the same conditions as before, one can get an iteration process that reads as

x_{n+1} = x_n − [1 + (1/2) L_f(x_n)/(1 + b_n {f(x_n)/f′(x_n)})] · f(x_n)/f′(x_n),    (4.15)
where ‘bn’ is a parameter depending on ‘n’.
For particular values of ‘bn’, some interesting particular cases of this family are
(i) For b_n = 0, formula (4.15) corresponds to the classical Chebyshev's method [7, 30–32, 59].

(ii) For b_n = −f″(x_n)/(2f′(x_n)), formula (4.15) corresponds to the famous Halley's method [30–32, 60].

(iii) For b_n = −f″(x_n)/f′(x_n), formula (4.15) corresponds to the super-Halley method [71–73].

(iv) For b_n → ±∞, formula (4.15) corresponds to the classical Newton's method [7, 11–16].
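The single formula (4.15) with its parameter b_n thus unifies the classical third-order methods. A short Python sketch (the helper name is illustrative, not from the cited works) makes this concrete by running the Halley choice b_n = −f″(x_n)/(2f′(x_n)) on Example 4.2.6, f(x) = x³ + 4x² − 10:

```python
def family_415_step(f, df, d2f, x, bn):
    """One step of family (4.15): x - [1 + (1/2)L_f/(1 + bn*f/f')] * f/f'."""
    u = f(x) / df(x)                     # Newton correction f(x_n)/f'(x_n)
    Lf = f(x) * d2f(x) / df(x) ** 2      # L_f(x_n) as defined in (4.9)
    return x - (1.0 + 0.5 * Lf / (1.0 + bn * u)) * u

f   = lambda x: x**3 + 4.0 * x**2 - 10.0     # Example 4.2.6
df  = lambda x: 3.0 * x**2 + 8.0 * x
d2f = lambda x: 6.0 * x + 8.0

x = 1.3
for _ in range(4):
    # bn = -f''/(2f') turns (4.15) into Halley's method (4.11)
    x = family_415_step(f, df, d2f, x, -d2f(x) / (2.0 * df(x)))
root = x
```

Setting bn = 0 instead reproduces Chebyshev's method, and bn = −f″/f′ the super-Halley method, with no other change to the code.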
The osculating circle method [33] and the ellipse method [31, 32, 61] are more cumbersome from a computational point of view than Halley's, Chebyshev's or the super-Halley method. For a more detailed survey of these important techniques, several excellent textbooks [1, 7, 14, 21] are available in the literature. The problem with the existing methods is that they may fail to converge in some cases, if the initial guess is far from the required root or if the derivative of the function is small or even zero in the vicinity of the required root. Recently, Kanwar and Tomar [33, 231, 232], using a different approach, derived new classes of iterative techniques having quadratic and cubic convergence, respectively. These techniques provide an alternative in situations where the existing classical techniques fail.

The purpose of this study is to eliminate the defects of the existing classical methods by simple modifications of the iteration processes.
4.2 Development of iteration schemes
Based on two theorems of [5], namely Theorem 1.2.2 and Theorem 1.2.3 (discussed in Chapter 1), the proposed methods can be derived by admitting geometrical derivations from a hyperbolic curve, a cubic curve and a general quadratic curve.
4.2.1 Derivation from a hyperbola
Let us now develop an iteration scheme by fitting the function 'f(x)' of equation (1.1) in the following form of a hyperbola:

a_n{ye^{−α(x−x_n)} − f(x_n)}² + b_n(x − x_n){ye^{−α(x−x_n)} − f(x_n)} + {ye^{−α(x−x_n)} − f(x_n)} + c_n(x − x_n) + d_n = 0,    (4.16)
where α ∈ R. The unknowns a_n, c_n and d_n are now determined in terms of 'b_n' by using the tangency conditions (4.6) on equation (4.16). This gives

a_n = −[f″(x_n) + α²f(x_n) − 2αf′(x_n)]/(2{f′(x_n) − αf(x_n)}²) − b_n/(f′(x_n) − αf(x_n)),

c_n = −{f′(x_n) − αf(x_n)}  and  d_n = 0.    (4.17)
At a root,

y(x_{n+1}) = 0,    (4.18)

and it follows from (4.16) that

x_{n+1} = x_n − [a_n{f(x_n)}² − f(x_n)]/(c_n − b_n f(x_n)).    (4.19)
If (4.16) is an exponentially fitted straight line, then a_n = 0 = b_n and from (4.19) one can have

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)).    (4.20)

This is the one-parameter family of Newton's method [205, 231] already discussed in Chapter 2; unlike Newton's method, it does not fail even if f′(x_n) = 0 or is very small in the vicinity of the required root. Ignoring the term in 'α', one gets Newton's method [7, 11–16].
Further, making use of ‘an’ and ‘cn’ in (4.19), one can obtain
x_{n+1} = x_n − [1 + (1/2) L*_f(x_n)/(1 + b_n u)] · f(x_n)/(f′(x_n) − αf(x_n)),    (4.21)

where 'b_n' is a parameter, u = f(x_n)/(f′(x_n) − αf(x_n)), and

L*_f(x_n) = f(x_n){f″(x_n) + α²f(x_n) − 2αf′(x_n)}/{f′(x_n) − αf(x_n)}².    (4.22)
This is a modification over formula (4.15) of Amat et al. [30] and does not fail even if f′(x_n) is very small or zero in the vicinity of the required root.
For particular values of ‘bn’, some interesting particular cases of (4.21) are as under :
(i) For bn = 0, one can obtain
x_{n+1} = x_n − {1 + (1/2) L*_f(x_n)} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.23)
This is a modification over the well-known cubically convergent Chebyshev’s method.
(ii) Putting b_n = −L*_f(x_n)/(2u), one gets

x_{n+1} = x_n − [2/(2 − L*_f(x_n))] · f(x_n)/(f′(x_n) − αf(x_n)).    (4.24)
This is a modification over the well-known cubically convergent Halley’s method.
(iii) Substituting b_n = −L*_f(x_n)/u, one can get

x_{n+1} = x_n − {1 + (1/2) L*_f(x_n)/(1 − L*_f(x_n))} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.25)
This is a modification over the well-known cubically convergent super-Halley method.
(iv) Letting b_n = −α* L*_f(x_n)/u, α* ∈ R, one can have

x_{n+1} = x_n − {1 + (1/2) L*_f(x_n)/(1 − α* L*_f(x_n))} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.26)
This is a modification over the well-known cubically convergent Chebyshev-Halley
type methods [74].
(v) If b_n → ±∞, one obtains

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)).    (4.27)
This is a modification over the well-known quadratically convergent Newton’s method.
It can be verified that in the absence of the term in parameter 'α', methods (4.23), (4.24), (4.25), (4.26) and (4.27) reduce to Chebyshev's method [7, 30–32, 59–61], Halley's method [30–32, 60–73], the super-Halley method [30–32, 61, 71–73], Chebyshev-Halley type methods [74] and Newton's method [7, 11–16], respectively. The sign of the parameter 'α' in all these methods is chosen so that the denominator is largest in magnitude.
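As an illustration of the family (4.21), the following sketch implements the modified Halley case (4.24) with |α| = 1 and the adaptive sign rule just stated (the helper name is illustrative). It is applied to Example 4.2.9, f(x) = cos(x) − x:

```python
import math

def modified_halley_step(f, df, d2f, x):
    """One step of (4.24): x - [2/(2 - L*)] f/(f' - alpha*f), with |alpha| = 1
    and the sign of alpha chosen so |f' - alpha*f| is largest in magnitude."""
    fx, dfx = f(x), df(x)
    alpha = 1.0 if abs(dfx - fx) >= abs(dfx + fx) else -1.0
    F = dfx - alpha * fx
    Lstar = fx * (d2f(x) + alpha**2 * fx - 2.0 * alpha * dfx) / F**2   # (4.22)
    return x - (2.0 / (2.0 - Lstar)) * fx / F

f   = lambda x: math.cos(x) - x          # Example 4.2.9
df  = lambda x: -math.sin(x) - 1.0
d2f = lambda x: -math.cos(x)

x = -1.0
for _ in range(8):
    x = modified_halley_step(f, df, d2f, x)
root = x
```

Dropping the α terms (alpha = 0) recovers the classical Halley method (4.11).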
The following theorem gives the order of convergence of the proposed iterative scheme (4.21):
Theorem 4.2.1. Let f : I ⊆ R → R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x_0 is sufficiently close to 'r' and f′(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.21) is cubically convergent and satisfies the following error equation:

e_{n+1} = {2C_2² − C_3 − 3αC_2 + (1/2)α² + b_n(C_2 − α)} e_n³ + O(e_n⁴),    (4.28)

where e_n = x_n − r is the error at the nth iteration and C_k = (1/k!) f^(k)(r)/f′(r), k = 2, 3, . . .
Proof. Using Taylor’s series expansion for f (xn) about x = r and taking into account that
f (r) = 0 and f ′(r) 6= 0, one can have
f (xn) = f ′(r)ben +C2e2n +C3e3
n +O(e4n)c, (4.29)
f ′(xn) = f ′(r)b1+2C2en +3C3e2n +O(e3
n)c, (4.30)
and
f ′′(xn) = f ′(r)b2C2 +6C3en +12C4e2n +O(e3
n)c. (4.31)
Using equations (4.29)-(4.31), one gets
f (xn)f ′(xn)−α f (xn)
= en− (C2−α)e2n−
(2C3−2C2
2 +2αC2−α2)e3n +O
(e4
n), (4.32)
and
L∗f (xn) =f (xn)
{f ′′(xn)+α2 f (xn)−2α f ′(xn)
}{
f ′(xn)−α f (xn)}2 =2(C2−α)en− (6C2
2 −6C3−6C2α+3α2)e2n
+O(e3n). (4.33)
Using equations (4.32) and (4.33), one can obtain
12
L∗f (xn)
1+bn
{f (xn)
{ f ′(xn)−α f (xn)}} =(C2−α)en−
(3C2
2−3C3−3αC2+32
α2+bn (C2−α))e2
n+O(e3n).
(4.34)
Substituting equations (4.32) and (4.34) in formula (4.21), one obtains

e_{n+1} = {2C_2² − C_3 − 3αC_2 + (1/2)α² + b_n(C_2 − α)} e_n³ + O(e_n⁴).    (4.35)

This completes the proof of the theorem.
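The cubic convergence asserted by Theorem 4.2.1 can be checked numerically. The sketch below (a quick verification, with f(x) = x² − 2, b_n = 0, i.e. the modified Chebyshev method (4.23), and α = 1, all chosen purely for convenience) runs the iteration in 100-digit arithmetic and estimates the computational order of convergence from three successive errors:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100        # high precision, so several cubic steps are visible

def mcm_step(x, alpha):
    """One step of (4.23) for f(x) = x^2 - 2 (f' = 2x, f'' = 2)."""
    fx, dfx, d2fx = x * x - 2, 2 * x, Decimal(2)
    F = dfx - alpha * fx
    Lstar = fx * (d2fx + alpha * alpha * fx - 2 * alpha * dfx) / F**2
    return x - (1 + Lstar / 2) * fx / F

r = Decimal(2).sqrt()
x, alpha = Decimal("1.3"), Decimal(1)
errors = []
for _ in range(5):
    x = mcm_step(x, alpha)
    errors.append(abs(x - r))

# COC ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})
coc = float((errors[3].ln() - errors[2].ln()) / (errors[2].ln() - errors[1].ln()))
```

The computed value settles near 3, in agreement with the error equation (4.28).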
4.2.2 Derivation from a cubic curve
Other iterative processes can also be derived by considering different exponentially fitted osculating curves. For instance, if one takes the cubic equation
a_n{ye^{−α(x−x_n)} − f(x_n)}³ + b_n{ye^{−α(x−x_n)} − f(x_n)}² + {ye^{−α(x−x_n)} − f(x_n)} + d_n(x − x_n) = 0,    (4.36)
and then uses the tangency conditions (4.6) on (4.36), one gets

a_n = C{f″(x_n) + α²f(x_n) − 2αf′(x_n)}²/{f′(x_n) − αf(x_n)}⁴,

b_n = −[f″(x_n) + α²f(x_n) − 2αf′(x_n)]/(2{f′(x_n) − αf(x_n)}²),

d_n = −{f′(x_n) − αf(x_n)}.    (4.37)

(The first tangency condition is satisfied identically, so a_n remains disposable and is written in terms of a free parameter C.)
At a root,

y(x_{n+1}) = 0,    (4.38)

and it follows from (4.36) that

x_{n+1} = x_n − {1 + (1/2) L*_f(x_n) + C{L*_f(x_n)}²} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.39)
This is a modification over the Amat et al. C-methods [30, 71] and does not fail even if f′(x_n) is very small or zero in the vicinity of the required root. It can be verified that in the absence of the term in parameter 'α', method (4.39) reduces to the Amat et al. C-methods [30, 71].
A mathematical proof of the order of convergence of the proposed iterative scheme (4.39) is given below:
Theorem 4.2.2. Let f : I ⊆ R → R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x_0 is sufficiently close to 'r' and f′(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.39) is cubically convergent and satisfies the following error equation:

e_{n+1} = {(2 − 4C)C_2² − C_3 + (−3 + 8C)(C_2α − α²/2)} e_n³ + O(e_n⁴).    (4.40)
Proof. Using equations (4.29)–(4.31), one gets

f(x_n)/(f′(x_n) − αf(x_n)) = e_n − (C_2 − α)e_n² − (2C_3 − 2C_2² + 2αC_2 − α²)e_n³ + O(e_n⁴),    (4.41)

and

L*_f(x_n) = f(x_n){f″(x_n) + α²f(x_n) − 2αf′(x_n)}/{f′(x_n) − αf(x_n)}² = 2(C_2 − α)e_n − (6C_2² − 6C_3 − 6C_2α + 3α²)e_n² + O(e_n³).    (4.42)

Substituting equations (4.41) and (4.42) in formula (4.39), one can obtain

e_{n+1} = {(2 − 4C)C_2² − C_3 + (−3 + 8C)(C_2α − α²/2)} e_n³ + O(e_n⁴).    (4.43)

This completes the proof of the theorem.
4.2.3 Derivation from a general quadratic curve
Let us consider the following general quadratic equation:
(x − x_n)² + a_n(ye^{−α(x−x_n)} − f(x_n))² + b_n(x − x_n) + c_n(ye^{−α(x−x_n)} − f(x_n)) + d_n = 0,    (4.44)
and determine the unknowns b_n, c_n and d_n in terms of the parameter 'a_n' by the tangency conditions (4.6). One can have

b_n = 2[1 + a_n{f′(x_n) − αf(x_n)}²]{f′(x_n) − αf(x_n)}/(f″(x_n) + α²f(x_n) − 2αf′(x_n)),

c_n = −2[1 + a_n{f′(x_n) − αf(x_n)}²]/(f″(x_n) + α²f(x_n) − 2αf′(x_n)),

d_n = 0.    (4.45)
Taking the next iterate 'x_{n+1}' as the point of intersection of (4.44) with the x-axis, one gets another two-parameter family of iteration schemes:

x_{n+1} = x_n − {f(x_n)/(f′(x_n) − αf(x_n))} × D_1/D_2,    (4.46)
where

D_1 = 2 + a_n{f′(x_n) − αf(x_n)}²(2 + L*_f(x_n)),    (4.47)

and

D_2 = 1 + a_n{f′(x_n) − αf(x_n)}² ± √[{1 + a_n{f′(x_n) − αf(x_n)}²}² − L*_f(x_n) D_1].    (4.48)
This is a modification over formula (20) of Sharma [32] and does not fail even if f′(x_n) is very small or zero in the vicinity of the required root. The iterative family (4.46) requires the computation of a square root at each step and is sometimes not convenient for practical usage. Therefore, it can be further reduced to a two-parameter rational iterative family (similar to Chun [31]) by using the well-known approximation (similar to Jiang and Han [61])

√(1 + x) ≈ 1 + (1/2)x,  |x| < 1,

which gives

x_{n+1} = x_n − [2[1 + a_n{f′(x_n) − αf(x_n)}²] D_1] / [4[1 + a_n{f′(x_n) − αf(x_n)}²]² − L*_f(x_n) D_1] · f(x_n)/(f′(x_n) − αf(x_n)).    (4.49)
Special cases:
For particular values of ‘an’, some interesting particular cases of (4.46) are
(i) For an = 0, one can obtain
x_{n+1} = x_n − [2/(1 ± √(1 − 2L*_f(x_n)))] · f(x_n)/(f′(x_n) − αf(x_n)).    (4.50)
This is a modification over the well-known cubically convergent Euler’s method [7, 30].
(ii) Taking a_n = 1, one gets

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)) · D*_1/D*_2,    (4.51)

where

D*_1 = 2 + {f′(x_n) − αf(x_n)}²(2 + L*_f(x_n)),    (4.52)

and

D*_2 = 1 + {f′(x_n) − αf(x_n)}² ± √[{1 + {f′(x_n) − αf(x_n)}²}² − L*_f(x_n) D*_1].    (4.53)
This is a modification over the cubically convergent osculating circle method derived
by Kanwar et al. [33].
(iii) Letting a_n = 2/[{f′(x_n) − αf(x_n)}²(2 − L*_f(x_n))], one can have

x_{n+1} = x_n − [2/(2 − L*_f(x_n))] · f(x_n)/(f′(x_n) − αf(x_n)).    (4.54)

This is a modification over the well-known cubically convergent Halley's method.
This is a modification over the well-known cubically convergent Halley’s method.
(iv) Substituting a_n = L*_f(x_n)/[4{f′(x_n) − αf(x_n)}²(1 − L*_f(x_n))], one gets

x_{n+1} = x_n − {1 + (1/2) L*_f(x_n)/(1 − L*_f(x_n))} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.55)

This is a modification over the well-known cubically convergent super-Halley method.
This is a modification over the well-known cubically convergent super-Halley method.
(v) If a_n → ±∞, one gets

x_{n+1} = x_n − {1 + (1/2) L*_f(x_n)} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.56)

This is a modification over the well-known cubically convergent Chebyshev's method.
This is a modification over the well-known cubically convergent Chebyshev’s method.
(vi) For a_n = −1/{f′(x_n) − αf(x_n)}², one can get

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)).    (4.57)

This is a modification over the well-known quadratically convergent Newton's method.
It can be verified that in the absence of the term in parameter 'α', methods (4.50), (4.51), (4.54), (4.55), (4.56) and (4.57) reduce to Euler's method [7, 30, 64], the osculating circle method [33], Halley's method [30–32, 60–73], the super-Halley method [30–32, 61, 71–73], Chebyshev's method [7, 30–32, 59–61] and Newton's method [7, 11–16], respectively. The sign of the parameter 'α' in all these methods is chosen so that the denominator is largest in magnitude.
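As an illustration of the quadratic-curve family, the sketch below implements the modified Euler case (4.50) with the '+' sign of the square root and α fixed to +1 for simplicity (the text chooses the sign of α adaptively; all names are illustrative). It is applied to Example 4.2.4, f(x) = exp(−x) − sin(x), from x_0 = 5:

```python
import math

def modified_euler_step(f, df, d2f, x, alpha=1.0):
    """One step of (4.50): x - [2/(1 + sqrt(1 - 2 L*))] f/(f' - alpha*f)."""
    fx, dfx = f(x), df(x)
    F = dfx - alpha * fx
    Lstar = fx * (d2f(x) + alpha**2 * fx - 2.0 * alpha * dfx) / F**2   # (4.22)
    return x - (2.0 / (1.0 + math.sqrt(1.0 - 2.0 * Lstar))) * fx / F

f   = lambda x: math.exp(-x) - math.sin(x)   # Example 4.2.4
df  = lambda x: -math.exp(-x) - math.cos(x)
d2f = lambda x: math.exp(-x) + math.sin(x)

x = 5.0
for _ in range(6):
    x = modified_euler_step(f, df, d2f, x)
root = x
```

The '+' sign keeps the denominator largest in magnitude whenever the radicand 1 − 2L*_f(x_n) is nonnegative, mirroring the sign rule for 'α'.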
Next, Theorem 4.2.3 presents a mathematical proof of the order of convergence of the proposed iterative scheme (4.46).
Theorem 4.2.3. Let f : I ⊆ R → R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x_0 is sufficiently close to 'r' and f′(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.46) is cubically convergent for any finite 'a_n' except a_n = −1/{f′(x_n) − αf(x_n)}² (in that case it has quadratic convergence) and satisfies the following error equation:

e_{n+1} = {[(2C_2 − α)α + a_n{f′(r)}²(4C_2² − 6C_2α + 3α²)]/[2(1 + a_n{f′(r)}²)] − C_3} e_n³ + O(e_n⁴).    (4.58)
Proof. Using equations (4.29)–(4.31), one can get

f(x_n)/(f′(x_n) − αf(x_n)) = e_n − (C_2 − α)e_n² − (2C_3 − 2C_2² + 2αC_2 − α²)e_n³ + O(e_n⁴),    (4.59)

and

L*_f(x_n) = f(x_n){f″(x_n) + α²f(x_n) − 2αf′(x_n)}/{f′(x_n) − αf(x_n)}² = 2(C_2 − α)e_n − (6C_2² − 6C_3 − 6C_2α + 3α²)e_n² + O(e_n³).    (4.60)

Using equations (4.29), (4.30) and (4.60), one can have

D_1 = 2(1 + a_n{f′(r)}²) + 2a_n{f′(r)}²(5C_2 − 3α)e_n + O(e_n²),    (4.61)

and

D_2 = 2(1 + a_n{f′(r)}²) + (4a_n{f′(r)}²(2C_2 − α) − 2(C_2 − α))e_n + O(e_n²).    (4.62)

Substituting equations (4.59), (4.61) and (4.62) in formula (4.46), one gets

e_{n+1} = {[(2C_2 − α)α + a_n{f′(r)}²(4C_2² − 6C_2α + 3α²)]/[2(1 + a_n{f′(r)}²)] − C_3} e_n³ + O(e_n⁴).    (4.63)

This completes the proof of the theorem.
Alternatively, all these modified techniques ((4.21), (4.39) and (4.46)) can also be obtained by using the idea of Ben-Israel [211]: by applying the existing classical techniques to a modified function, say g(x) := e^{−∫a(x)dx} f(x), for a suitable function a(x), one can get the corresponding modified iterative schemes.
4.2.4 Worked examples
Now, let us present some numerical results obtained by employing the existing classical methods, namely Chebyshev's method (CM), Halley's method (HM), the super-Halley method (SHM), Euler's method (EM) and the osculating circle method (OCM), together with the newly developed methods, namely the modified Chebyshev's method (MCM) (4.23), modified Halley's method (MHM) (4.24), modified super-Halley method (MSHM) (4.25), modified Euler's method (MEM) (4.50) and modified osculating circle method (MOCM) (4.51), respectively, to solve the nonlinear equations given in Table 4.1. Here, for simplicity, the formulae are tested for |α| = 1. The results are summarized in Table 4.2 (number of iterations), Table 4.3 (computational order of convergence) and Table 4.4 (total number of function evaluations), respectively. Computations have been performed using MATLAB version 7.5 (R2007b) in double-precision arithmetic. We use ε = 10^{−15} as the error tolerance. The following stopping criteria are used for the computer programs:

(i) |x_{n+1} − x_n| < ε,  (ii) |f(x_{n+1})| < ε.
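The driver loop behind these experiments is simple; a Python transcription (illustrative only, since the thesis computations used MATLAB) is given below, applied to Example 4.2.2, f(x) = x exp(−x), with the one-parameter Newton family (4.20) and α = −1. For this particular f, the exponential fit matches the factor exp(−x) exactly (f′ + f = e^{−x}), so the modified step reaches the root x = 0 in a single iteration, consistent with the single iteration reported for the modified methods in Table 4.2:

```python
import math

def iterate(step, f, x0, eps=1e-15, max_iter=100):
    """Run a one-point scheme with the two stopping criteria:
    (i) |x_{n+1} - x_n| < eps, (ii) |f(x_{n+1})| < eps."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

f  = lambda x: x * math.exp(-x)          # Example 4.2.2
df = lambda x: (1.0 - x) * math.exp(-x)
alpha = -1.0

# family (4.20): x - f/(f' - alpha*f); here f' + f = exp(-x) exactly
root, iters = iterate(lambda x: x - f(x) / (df(x) - alpha * f(x)), f, 2.0)
```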
Table 4.1
Test examples, interval (I) containing the root, initial guesses (x0) and exact root (r)
Example No.   Example                       I         Initial guesses (x0)   Exact root (r)
4.2.1         exp(1 − x) − 1 = 0            [0, 2]    0.4, 1.5               1.000000000000000
4.2.2         x exp(−x) = 0                 [1, 3]    2                      0.000000000000000
4.2.3         sin(x) = 0                    [0, 2]    1.5                    0.000000000000000
4.2.4         exp(−x) − sin(x) = 0          [5, 7]    5                      6.285049273382587
4.2.5         arctan(x) = 0                 [−1, 2]   2                      0.000000000000000
4.2.6         x³ + 4x² − 10 = 0             [0, 2]    0, 0.1                 1.3652300134140907
4.2.7         x exp(−x) − 0.1 = 0           [−2, 2]   −1, 0.8, 1             0.11832559158630
4.2.8         exp(x² + 7x − 30) − 1 = 0     [2, 4]    2.9, 3.15              3.000000000000000
4.2.9         cos(x) − x = 0                [−2, 2]   −1, 1                  0.7390851332151607
Table 4.2
Number of iterations (D and F below stand for divergent and failure respectively)
Example No. (x0)   CM    MCM     HM    MHM    SHM   MSHM    EM    MEM    OCM   MOCM
                         (4.23)        (4.24)       (4.25)        (4.50)       (4.51)
4.2.1 (0.4)        4     4       3     3      3     3       4     3      3     3
4.2.1 (1.5)        4     3       3     3      3     3       3     3      3     3
4.2.2 (2)          D     1       D     1      D     1       D     1      D     1
4.2.3 (1.5)        5*    5       6     5      4*    4       4     5      4     5
4.2.4 (5)          11*   5       5     4      4     4       4     4      4     4
4.2.5 (2)          D     5       4     5      5     D       5     6      4     5
4.2.6 (0)          F     4       F     4      F     4       F     4      F     4
4.2.6 (0.1)        46    4       6     4      5     4       4     4      21    4
4.2.7 (−1)         5     3       4     3      5     3       6     3      5     3
4.2.7 (0.8)        D     3       5     3      5     3       4     3      4     3
4.2.7 (1)          F     3       F     3      F     3       F     3      F     3
4.2.8 (2.9)        D     5       4     4      4     4       4     4      D     5
4.2.8 (3.15)       5     5       4     4      6     5       6     6      5     5
4.2.9 (−1)         D     5       5     4      8     4       4     5      4     5
4.2.9 (1)          3     4       3     3      3     3       3     3      3     4

* Converges to an undesired root.
Table 4.3
Computational order of convergence (COC)
Example No. (x0)   CM     MCM     HM     MHM    SHM    MSHM    EM     MEM    OCM    MOCM
                          (4.23)         (4.24)        (4.25)         (4.50)        (4.51)
4.2.1 (0.4)        3.00   3.02    2.94   2.99   3.01   3.01    2.99   2.89   3.19   3.14
4.2.1 (1.5)        3.01   2.97    2.82   2.99   3.00   3.00    2.92   3.25   2.57   2.91
4.2.2 (2)          ...    ≡≡      ...    ≡≡     ...    ≡≡      ...    ≡≡     ...    ≡≡
4.2.3 (1.5)        —      2.96    3.01   2.99   —      2.44    3.00   3.00   3.06   2.99
4.2.4 (5)          —      2.98    3.00   2.85   2.92   2.76    3.00   3.01   3.03   2.88
4.2.5 (2)          ...    2.89    2.97   2.99   2.91   ...     3.00   3.00   2.81   2.99
4.2.6 (0)          ==     2.90    ==     2.96   ==     2.86    ==     2.96   ==     2.90
4.2.6 (0.1)        3.02   2.92    3.01   2.97   2.81   2.91    3.00   2.97   2.88   2.92
4.2.7 (−1)         2.98   2.76    2.92   2.78   2.99   2.81    3.03   2.81   3.01   2.80
4.2.7 (0.8)        —      3.28    3.01   3.19   3.05   3.10    3.04   3.11   3.06   3.11
4.2.7 (1)          ==     3.06    ==     3.31   ==     3.13    ==     3.16   ==     3.15
4.2.8 (2.9)        ...    3.01    3.00   3.01   3.00   3.01    3.04   3.03   ...    3.01
4.2.8 (3.15)       2.99   2.98    3.00   2.97   3.00   3.02    3.00   3.09   2.99   2.97
4.2.9 (−1)         ...    2.98    3.05   2.89   3.01   3.13    3.04   3.00   3.02   3.00
4.2.9 (1)          2.41   3.01    2.52   2.89   3.03   2.86    3.03   3.17   2.96   2.99

Here, the symbols
(i) '—' denotes the COC in a case of convergence to an undesired root,
(ii) '...' denotes a case of divergence,
(iii) '==' denotes a case of failure,
(iv) '≡≡' denotes a case where the number of iterations is not sufficient to compute the COC.
Table 4.4
Total number of function evaluations (TNOFE)
Example No. (x0)   CM    MCM     HM    MHM    SHM   MSHM    EM    MEM    OCM   MOCM
                         (4.23)        (4.24)       (4.25)        (4.50)       (4.51)
4.2.1 (0.4)        12    12      9     9      9     9       12    9      9     9
4.2.1 (1.5)        12    9       9     9      9     9       9     9      9     9
4.2.2 (2)          ∼     3       ∼     3      ∼     3       ∼     3      ∼     3
4.2.3 (1.5)        _     15      18    15     _     12      12    15     12    15
4.2.4 (5)          _     15      15    12     12    12      12    12     12    12
4.2.5 (2)          ∼     15      12    15     15    ∼       15    18     12    15
4.2.6 (0)          =     12      =     12     =     12      =     12     =     12
4.2.6 (0.1)        138   12      18    12     15    12      12    12     63    12
4.2.7 (−1)         15    9       12    9      15    9       18    9      15    9
4.2.7 (0.8)        ∼     9       15    9      15    9       12    9      12    9
4.2.7 (1)          =     9       =     9      =     9       =     9      =     9
4.2.8 (2.9)        ∼     15      12    12     12    12      12    12     ∼     15
4.2.8 (3.15)       15    15      12    12     18    15      18    18     15    15
4.2.9 (−1)         ∼     15      15    12     24    12      12    15     12    15
4.2.9 (1)          9     12      9     9      9     9       9     9      9     12

Here, the symbols
(i) '_' denotes the TNOFE in a case of convergence to an undesired root,
(ii) '∼' denotes a case of divergence,
(iii) '=' denotes a case of failure.
4.3 Generalized families of multipoint iterative methods
without memory
The practical difficulty associated with the third-order methods given by formula (4.21) is the evaluation of the second-order derivative. In past and recent years, many new modifications of Newton's method free from the second-order derivative have been proposed by researchers [7, 9, 27, 34, 35, 47–55, 57, 58, 75–82, 87–109], obtained by discretization of the second-order derivative, by considering different quadrature formulae for the computation of the integral arising from Newton's theorem [8, 52–55], or by considering the Adomian decomposition method [39–44]. All these modifications are targeted at increasing the local order of convergence with the view of increasing the efficiency index [13]. Some of these multipoint iterative methods are of third order [7, 52–55, 57, 58, 75–81, 87–95], while others are of order four [7, 9, 34, 35, 47–51, 82, 96–109]. Recently, fourth-order methods proposed by Nedzhibov et al. [101], Basu [102], Maheswari [103], Cordero et al. [104], Petkovic and Petkovic (a survey) [105], Ghanbari [106], Soleymani [47, 107], Cordero and Torregrosa [108], Chun et al. [109], etc. have optimal order of convergence [9]. The efficiency index [13] of these optimal multipoint iterative methods is equal to 4^{1/3} ≈ 1.587. These multipoint methods calculate new approximations to a zero of f(x) by sampling f(x), and possibly its derivatives, for a number of values of the independent variable per step. Nedzhibov et al. [101] have modified the Chebyshev-Halley type methods [74] to derive several cubically and quartically convergent multipoint iterative methods free from the second-order derivative.
All these techniques are variants of Newton's method, and the main practical difficulty associated with these multipoint iterative methods is that an iteration can be aborted due to overflow, or can diverge, if the derivative of the function at any iterative point is singular or almost singular (namely f′(x_n) ≈ 0).

Here, it is intended to develop and unify classes of third-order and fourth-order multipoint iterative methods free from the second-order derivative in which f′(x_n) = 0 is permitted. The main idea of the proposed methods lies in the discretization of the second-order derivative involved in family (4.21) (similar to Nedzhibov et al. [101]). Therefore, this work can again be viewed as a generalization of the Nedzhibov et al. [101] families of multipoint iterative methods. An additional feature of these processes is that they may possess a number of disposable parameters which can be used to ensure convergence of a certain order for simple roots. This section deals with the derivation of three different families free from the second-order derivative involved in the family (4.21).
4.3.1 First Family
Let us take u ≡ u(x_n) = f(x_n)/(f′(x_n) − αf(x_n)), and expand the function f(x_n − u) about the point x = x_n (with f(x_n) ≠ 0) by Taylor's series:

f(x_n − u) = f(x_n) − u f′(x_n) + (u²/2) f″(x_n) + O(u³).    (4.64)
Inserting the value of 'u' defined above in the coefficient of f′(x_n), one can obtain

[f(x_n − u)(f′(x_n) − αf(x_n)) + α{f(x_n)}²]/(f′(x_n) − αf(x_n)) = (u²/2) f″(x_n) + O(u³).    (4.65)

Further, L*_f(x_n) from equation (4.22) may be written as follows:

L*_f(x_n) = f(x_n)f″(x_n)/{f′(x_n) − αf(x_n)}² − αf(x_n)/(f′(x_n) − αf(x_n)) − αf(x_n)f′(x_n)/{f′(x_n) − αf(x_n)}².    (4.66)

Taking the approximate value of f″(x_n) from formula (4.65), one can simplify the formula for L*_f(x_n) given in (4.66) as

L*_f(x_n) ≈ 2f(x_n − u)/f(x_n) − α²{f(x_n)/(f′(x_n) − αf(x_n))}².    (4.67)

If |u| is sufficiently small, then the second term in the above equation can be neglected and one obtains

L*_f(x_n) ≅ 2f(x_n − u)/f(x_n).    (4.68)
Substituting this approximate value of L*_f(x_n) in (4.21), the following corresponding multipoint family is obtained:

x_{n+1} = x_n − [1 + f(x_n − u)/(f(x_n)[1 + b_n{f(x_n)/(f′(x_n) − αf(x_n))}])] · f(x_n)/(f′(x_n) − αf(x_n)).    (4.69)
This is a modification over formula (2.1) of Nedzhibov et al. [101] and does not fail even if f′(x_n) is very small or zero in the vicinity of the required root. For different specific values of 'b_n', various families of multipoint iterative methods can be derived from (4.69), e.g.
(i) For b_n = 0, one gets the formula

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)) − f(x_n − u)/(f′(x_n) − αf(x_n)).    (4.70)

This is a modification over the well-known cubically convergent Traub's method [7, pp. 168].
(ii) Letting b_n = −L*_f(x_n)/(2u), one can obtain

x_{n+1} = x_n − {f(x_n)}²/[{f′(x_n) − αf(x_n)}{f(x_n) − f(x_n − u)}].    (4.71)

This is a modification over the well-known cubically convergent Newton-Secant method [7, pp. 180].
(iii) Substituting b_n = −L*_f(x_n)/u, one obtains

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)) · [f(x_n) − f(x_n − u)]/[f(x_n) − 2f(x_n − u)].    (4.72)

This is a modification over the well-known quartically convergent Traub-Ostrowski method [7, 82, 101].
It can be seen that in the absence of the term in parameter 'α', methods (4.70), (4.71) and (4.72) reduce to Traub's method [7, pp. 168], the Newton-Secant method [7, pp. 180] and the Traub-Ostrowski method [7, 82, 101], respectively. The sign of the parameter 'α' in all these families is chosen so that the denominator is largest in magnitude.
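The fourth-order case (4.72) is attractive in practice, since it needs only f(x_n), f′(x_n) and f(x_n − u) per step. A Python sketch (illustrative names; α fixed to +1 rather than chosen adaptively) applied to Example 4.2.1, f(x) = exp(1 − x) − 1:

```python
import math

def modified_traub_ostrowski_step(f, df, x, alpha=1.0):
    """One step of (4.72):
    x - [f/(f'-alpha*f)] * (f - f(x-u))/(f - 2 f(x-u)), u = f/(f'-alpha*f)."""
    fx = f(x)
    F = df(x) - alpha * fx
    u = fx / F
    fu = f(x - u)
    return x - u * (fx - fu) / (fx - 2.0 * fu)

f  = lambda x: math.exp(1.0 - x) - 1.0   # Example 4.2.1, root r = 1
df = lambda x: -math.exp(1.0 - x)

x = 0.4
for _ in range(10):
    if abs(f(x)) < 1e-15:                # stop before a 0/0 step at the root
        break
    x = modified_traub_ostrowski_step(f, df, x)
root = x
```

Per step this uses three evaluations for fourth-order convergence, which corresponds to the optimal efficiency index 4^{1/3} ≈ 1.587 discussed at the start of Section 4.3.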
A mathematical proof of the order of convergence of the proposed iterative family (4.69) is given in the following theorem:
Theorem 4.3.1. Let f : I ⊆ R → R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x_0 is sufficiently close to 'r' and f′(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.69) is cubically convergent for b_n ∈ R and satisfies the following error equation:

e_{n+1} = (C_2 − α)(b_n + 2C_2 − α)e_n³ + O(e_n⁴).    (4.73)
Proof. Using Taylor’s series expansion for f (xn) about x = r and taking into account that
f (r) = 0 and f ′(r) 6= 0, one can have
f (xn) = f ′(r)ben +C2e2n +C3e3
n +O(e4n)c, (4.74)
and
f ′(xn) = f ′(r)b1+2C2en +3C3e2n +O(e3
n)c. (4.75)
From equations (4.74) and (4.75), one gets
f (xn)f ′(xn)−α f (xn)
= en +(−C2 +α)e2n +O(e3
n). (4.76)
Further, one can get
f (xn−u) = f ′(r)b(C2−α)e2n +O(e3
n)c. (4.77)
Using equations (4.74)-(4.77), one can have
f (xn−u)
f (xn)[
1+bn
(f (xn)
f ′(xn)−α f (xn)
)] = (C2−α)en +((α−C2)(bn +3C2)+2C3−α2)e2
n +O(e3n).
(4.78)
Substituting equations (4.76) and (4.78) in formula (4.69), one gets
en+1 = (C2−α)(bn +2C2−α)e3n +O(e4
n). (4.79)
This completes proof of the theorem.
4.3.2 Second family
Replacing the second-order derivative in (4.22) by the finite difference between first-order derivatives,

f″(x_n) ≈ [f′(x_n) − f′(x_n − θu)]/(θu),  θ ∈ R − {0},    (4.80)

where u is as defined earlier, one obtains

L*_f(x_n) ≈ [f′(x_n) − f′(x_n − θu)]/[θ{f′(x_n) − αf(x_n)}] + α²{f(x_n)/(f′(x_n) − αf(x_n))}² − 2αf′(x_n)f(x_n)/{f′(x_n) − αf(x_n)}².    (4.81)

If |u| is sufficiently small, then the second term in the above equation can be neglected and one gets

L*_f(x_n) ≅ [f′(x_n) − f′(x_n − θu)]/[θ{f′(x_n) − αf(x_n)}] − 2αf′(x_n)f(x_n)/{f′(x_n) − αf(x_n)}².    (4.82)
Substituting this approximate value of L*_f(x_n) in (4.21), the following multipoint family is obtained:

x_{n+1} = x_n − {1 + [(f′(x_n) − f′(x_n − θu)) − 2αθu f′(x_n)]/[2θ(f′(x_n) + (b_n − α)f(x_n))]} · f(x_n)/(f′(x_n) − αf(x_n)).    (4.83)
Formula (4.83) is a modification over formula (2.5) given by Nedzhibov et al. [101] and does not fail even if f′(x_n) = 0 or is very small in the vicinity of the required root. For particular values of 'b_n' and 'θ', some particular cases of formula (4.83) are
(i) For b_n = −L*_f(x_n)/(2u) and θ = 1, one gets

x_{n+1} = x_n − 2f(x_n)/[f′(x_n) + f′(x_n − f(x_n)/(f′(x_n) − αf(x_n))) + 2α²{f(x_n)}²/(f′(x_n) − αf(x_n))].    (4.84)

This is a modification over the well-known cubically convergent Weerakoon and Fernando method [8].
(ii) Putting bn = −L∗f (xn)2u and θ = −1, one can get a following new cubically convergent
family of multipoint iterative methods
xn+1 = xn− 2 f (xn)
3 f ′(xn)− f ′(
xn + f (xn)f ′(xn)−α f (xn)
)+2α2 f (xn) f ′(xn)
f ′(xn)−α f (xn)
. (4.85)
(iii) Substituting b_n = −L*_f(x_n)/(2u) and θ = −1/2, one can get another new cubically convergent family of multipoint iterative methods:

x_{n+1} = x_n − f(x_n)/[2f′(x_n) − f′(x_n + f(x_n)/(2{f′(x_n) − αf(x_n)})) + α²{f(x_n)}²/(f′(x_n) − αf(x_n))].    (4.86)
(iv) Letting b_n = −L*_f(x_n)/(2u) and θ = 1/2, one gets

x_{n+1} = x_n − f(x_n)/[f′(x_n − f(x_n)/(2{f′(x_n) − αf(x_n)})) + α²{f(x_n)}²/(f′(x_n) − αf(x_n))].    (4.87)

This is a modification over the cubically convergent method derived by Traub [7, pp. 182].
(v) Taking b_n = −L*_f(x_n)/u and θ = 2/3, one can obtain

x_{n+1} = x_n − f(x_n)/(f′(x_n) − αf(x_n)) · [f′(x_n − (2/3)u) − f′(x_n) + (4/3){f′(x_n)}²/(f′(x_n) − αf(x_n)) − (4/3)αf(x_n)] / [2f′(x_n − (2/3)u) − 2f′(x_n) + (4/3)({f′(x_n)}² + α²{f(x_n)}²)/(f′(x_n) − αf(x_n))].    (4.88)

This is a modification over the well-known quartically convergent Jarratt's method [34].
It is easy to observe that in the absence of the term in parameter 'α', method (4.84) reduces to the Weerakoon and Fernando method [8], method (4.87) to the third-order multipoint iterative method derived by Traub [7, pp. 182], and method (4.88) to Jarratt's method [34, 35]. Here, the parameter 'α' should be chosen so that the denominator is largest in magnitude.
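Among these, the modified Weerakoon-Fernando scheme (4.84) is easy to implement, since it needs only f and f′. The sketch below (illustrative names; |α| = 1 with the adaptive sign rule, and the α² correction term written as it arises in the derivation from (4.83)) is applied to Example 4.2.5, f(x) = arctan(x), a classical stress test for Newton-type methods:

```python
import math

def modified_wf_step(f, df, x):
    """One step of (4.84): x - 2f / [f'(x) + f'(x - u) + 2 alpha^2 f^2/(f' - alpha*f)],
    u = f/(f' - alpha*f), |alpha| = 1 with the sign maximizing |f' - alpha*f|."""
    fx, dfx = f(x), df(x)
    alpha = 1.0 if abs(dfx - fx) >= abs(dfx + fx) else -1.0
    F = dfx - alpha * fx
    u = fx / F
    return x - 2.0 * fx / (dfx + df(x - u) + 2.0 * alpha**2 * fx**2 / F)

f  = lambda x: math.atan(x)              # Example 4.2.5, root r = 0
df = lambda x: 1.0 / (1.0 + x * x)

x = 2.0
for _ in range(10):
    x = modified_wf_step(f, df, x)
root = x
```

With alpha = 0 the step collapses to the Weerakoon-Fernando average-of-derivatives form x − 2f/[f′(x) + f′(x − f/f′)].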
A mathematical proof of the order of convergence of the proposed iterative family (4.83) is presented in the next theorem.
Theorem 4.3.2. Let f : I ⊆ R → R be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x_0 is sufficiently close to 'r' and f′(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.83) is cubically convergent and satisfies the following error equation:

e_{n+1} = (2C_2² − (1/2)C_3(2 − 3θ) + b_n(C_2 − α) − 3C_2α + 2α²) e_n³ + O(e_n⁴).    (4.89)
Proof. Using Taylor’s series expansion for f (xn) about x = r and taking into account that
f (r) = 0 and f ′(r) 6= 0, one can have
f (xn) = f ′(r)ben +C2e2n +C3e3
n +O(e4n)c, (4.90)
and
f ′(xn) = f ′(r)b1+2C2en +3C3e2n +O(e3
n)c. (4.91)
From equations (4.90) and (4.91), one gets
f (xn)f ′(xn)−α f (xn)
= en +(−C2 +α)e2n +O(e3
n). (4.92)
Further, one can get
f ′(xn−θu) = f ′(r)b1+2C2(1−θ)en +(3C3(1−θ)2 +2C2(C2−α)θ
)e2
n +O(e3n)c. (4.93)
Using equations (4.90)-(4.93), one can obtain

\[
\frac{(1-2\alpha\theta u)f'(x_n) - f'(x_n-\theta u)}{2\theta\left(f'(x_n)+(b_n-\alpha)f(x_n)\right)} = (C_2-\alpha)e_n + \left(\frac{3}{2}C_3(2-\theta) - 3C_2^2 - b_n(C_2-\alpha) + 3C_2\alpha - 2\alpha^2\right)e_n^2 + O(e_n^3). \tag{4.94}
\]
Substituting equations (4.92) and (4.94) in formula (4.83), one gets

\[
e_{n+1} = \left(2C_2^2 - \frac{1}{2}C_3(2-3\theta) + b_n(C_2-\alpha) - 3C_2\alpha + 2\alpha^2\right)e_n^3 + O(e_n^4). \tag{4.95}
\]
Proof of the theorem is now complete.
4.3.3 Third family
Replacing the second-order derivative in (4.22) by the finite difference

\[ f''(x_n) \approx \frac{5f'(x_n) - 4f'\!\left(x_n-\frac{u}{2}\right) - f'(x_n-u)}{3u}, \tag{4.96} \]

where u is as defined earlier, one obtains

\[
L_f^*(x_n) \approx \frac{5f'(x_n) - 4f'\!\left(x_n-\frac{u}{2}\right) - f'(x_n-u)}{3\{f'(x_n)-\alpha f(x_n)\}} + \alpha^2\left\{\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}\right\}^2 - \frac{2\alpha f'(x_n)f(x_n)}{\{f'(x_n)-\alpha f(x_n)\}^2}. \tag{4.97}
\]
If |u| is sufficiently small, then the second term in the above equation can be neglected and one can obtain

\[
L_f^*(x_n) \cong \frac{5f'(x_n) - 4f'\!\left(x_n-\frac{u}{2}\right) - f'(x_n-u)}{3\{f'(x_n)-\alpha f(x_n)\}} - \frac{2\alpha f'(x_n)f(x_n)}{\{f'(x_n)-\alpha f(x_n)\}^2}. \tag{4.98}
\]
Substituting this approximate value of L_f^*(x_n) in (4.21), the following multipoint family is obtained

\[
x_{n+1} = x_n - \left\{1 + \frac{\left(5f'(x_n) - 4f'\!\left(x_n-\frac{u}{2}\right) - f'(x_n-u)\right) - 6\alpha u f'(x_n)}{6\left(f'(x_n)+(b_n-\alpha)f(x_n)\right)}\right\}\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}. \tag{4.99}
\]
Then for b_n = −L*_f(x_n)/(2u) in (4.99), one gets

\[
x_{n+1} = x_n - \frac{6f(x_n)}{f'(x_n) + 4f'\!\left(x_n - \dfrac{f(x_n)}{2\{f'(x_n)-\alpha f(x_n)\}}\right) + f'\!\left(x_n - \dfrac{f(x_n)}{f'(x_n)-\alpha f(x_n)}\right) + 6\alpha u f'(x_n)}. \tag{4.100}
\]
This is a modification over the cubically convergent formula explored by Hasanov et al. [101, 230], and it does not fail even if f'(x_n) = 0 in the vicinity of the required root. Here, the parameter 'α' should be chosen such that the denominator is largest in magnitude. It can be observed that in the absence of the term in parameter 'α', the family (4.100) reduces to the Hasanov et al. method [101, 230].
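This robustness can be illustrated numerically: for f(x) = x³ + 4x² − 10 (Example 4.3.5) started at x₀ = 0, where f'(x₀) = 0, the classical Hasanov et al. formula (α = 0) breaks down at the first step, while the modified family (4.100) with α = 1 iterates through. A minimal sketch, with illustrative helper names:

```python
def step_4100(f, fp, x, alpha):
    # One step of (4.100); alpha = 0 recovers the Hasanov et al. third-order formula.
    fx = f(x)
    d  = fp(x) - alpha * fx                 # well defined at f'(x) = 0 for suitable alpha
    u  = fx / d
    den = fp(x) + 4.0*fp(x - u/2.0) + fp(x - u) + 6.0*alpha*u*fp(x)
    return x - 6.0 * fx / den

f  = lambda x: x**3 + 4.0*x**2 - 10.0       # Example 4.3.5, root r = 1.365230013414097
fp = lambda x: 3.0*x**2 + 8.0*x

x = 0.0                                     # f'(0) = 0: the classical u = f/f' is undefined here
for _ in range(15):
    x = step_4100(f, fp, x, alpha=1.0)
print(abs(x - 1.365230013414097))
```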
Next, Theorem 4.3.3 presents a mathematical proof for an order of convergence of the
proposed iterative family (4.99).
Theorem 4.3.3. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x₀ is sufficiently close to 'r' and f'(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.99) is cubically convergent and satisfies the following error equation:

\[
e_{n+1} = \left(2C_2^2 + b_n(C_2-\alpha) - 3C_2\alpha + 2\alpha^2\right)e_n^3 + O(e_n^4). \tag{4.101}
\]
Proof. Using Taylor's series expansion for f(x_n) about x = r and taking into account that f(r) = 0 and f'(r) ≠ 0, one can have

\[ f(x_n) = f'(r)\left[e_n + C_2e_n^2 + C_3e_n^3 + O(e_n^4)\right], \tag{4.102} \]

and

\[ f'(x_n) = f'(r)\left[1 + 2C_2e_n + 3C_3e_n^2 + O(e_n^3)\right]. \tag{4.103} \]

From equations (4.102) and (4.103), one gets

\[ \frac{f(x_n)}{f'(x_n)-\alpha f(x_n)} = e_n + (-C_2+\alpha)e_n^2 + O(e_n^3). \tag{4.104} \]

Further, one can have

\[ f'(x_n-u) = f'(r)\left[1 + 2C_2(C_2-\alpha)e_n^2 + O(e_n^3)\right], \tag{4.105} \]

and

\[ f'\!\left(x_n-\frac{u}{2}\right) = f'(r)\left[1 + C_2e_n + \left(\frac{3C_3}{4} + C_2(C_2-\alpha)\right)e_n^2 + O(e_n^3)\right]. \tag{4.106} \]
Using equations (4.102)-(4.106), one can obtain

\[
\frac{(5-6\alpha u)f'(x_n) - 4f'\!\left(x_n-\frac{u}{2}\right) - f'(x_n-u)}{6\left(f'(x_n)+(b_n-\alpha)f(x_n)\right)} = (C_2-\alpha)e_n + \left(2C_3 - 2\alpha^2 - (C_2-\alpha)(b_n+3C_2)\right)e_n^2 + O(e_n^3). \tag{4.107}
\]
Substituting equations (4.104) and (4.107) in the formula (4.99), one can get

\[
e_{n+1} = \left(2C_2^2 + b_n(C_2-\alpha) - 3C_2\alpha + 2\alpha^2\right)e_n^3 + O(e_n^4). \tag{4.108}
\]
This completes the proof of the theorem.
On similar lines, one can further derive some more new cubically convergent families of multipoint iterative methods free from the second-order derivative by discretizing the second-order derivative involved in families (4.39) and (4.46).
4.3.4 Further exploration of family (4.69)
One can also construct a different family of fourth-order convergent multipoint iterative methods by suitably choosing the parameter 'b_n'. Again consider the family (4.69)

\[
x_{n+1} = x_n - \left[1 + \frac{f(x_n-u)}{f(x_n)}\left\{1 + b_n\left(\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}\right)\right\}\right]\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}, \tag{4.109}
\]

where α ∈ ℝ and u = f(x_n)/(f'(x_n) − αf(x_n)).
Taking b_n = −2C_2 + α ≈ −f''(x_n)/f'(x_n) + α in the error equation (4.79), the family (4.109) has at least fourth-order convergence, i.e. after simplifications one gets the new fourth-order multipoint iterative family given by

\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)-\alpha f(x_n)} - \frac{f(x_n-u)f'(x_n)}{\{f'(x_n)\}^2 - f(x_n)f''(x_n)}. \tag{4.110}
\]
Further, one can derive another fourth-order multipoint iterative family, free from the second-order derivative, by using (4.65); it is given by

\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}\left[1 + \frac{f(x_n-u)f'(x_n)}{u\{f'(x_n)\}^2 - 2\alpha\{f(x_n)\}^2 - 2f(x_n-u)\left(f'(x_n)-\alpha f(x_n)\right)}\right]. \tag{4.111}
\]
This is a modification over the well-known Traub-Ostrowski's method [7, 82] and is also an optimal fourth-order method. This family requires the same evaluations of the function and its derivative per iteration as Traub-Ostrowski's method. Thus, the efficiency index [13] of this method is equal to 4^{1/3} ≅ 1.587, which is better than that of Newton's method, 2^{1/2} ≅ 1.414, and that of method (4.72) (derived in this chapter), 3^{1/3} ≅ 1.442. More importantly, this method will not fail even if the derivative of the function is very small or even zero in the vicinity of the required root, unlike Traub-Ostrowski's method [1, 43]. Here, the parameter 'α' should be chosen so that the denominator is largest in magnitude.

One can see that in the absence of the term in parameter 'α', the family (4.111) reduces to the well-known Traub-Ostrowski's method [7, 82].
A mathematical proof for an order of convergence of the proposed iterative family (4.111)
is given by next theorem.
Theorem 4.3.4. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an open interval I, enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess x = x₀ is sufficiently close to 'r' and f'(x_n) − αf(x_n) ≠ 0 in I. Then the iteration scheme defined by formula (4.111) has optimal order of convergence four and satisfies the following error equation:

\[
e_{n+1} = (C_2-\alpha)\left(C_2^2 + \alpha C_2 - C_3\right)e_n^4 + O(e_n^5). \tag{4.112}
\]
Proof. Using Taylor's series expansion for f(x_n) about x = r and taking into account that f(r) = 0 and f'(r) ≠ 0, one can have

\[ f(x_n) = f'(r)\left[e_n + C_2e_n^2 + C_3e_n^3 + O(e_n^4)\right], \tag{4.113} \]

and

\[ f'(x_n) = f'(r)\left[1 + 2C_2e_n + 3C_3e_n^2 + 4C_4e_n^3 + O(e_n^4)\right]. \tag{4.114} \]

From equations (4.113) and (4.114), one gets

\[ \frac{f(x_n)}{f'(x_n)-\alpha f(x_n)} = e_n + (-C_2+\alpha)e_n^2 + (2C_2^2 - 2C_3 - 2\alpha C_2 + \alpha^2)e_n^3 + O(e_n^4). \tag{4.115} \]

Further, one can obtain

\[ f(x_n-u) = f'(r)\left[(C_2-\alpha)e_n^2 + (-2C_2^2 + 2C_3 + 2\alpha C_2 - \alpha^2)e_n^3 + O(e_n^4)\right]. \tag{4.116} \]
Using equations (4.113), (4.114) and (4.116), one gets

\[
\frac{f(x_n-u)f'(x_n)}{u\{f'(x_n)\}^2 - 2\alpha\{f(x_n)\}^2 - 2f(x_n-u)\left(f'(x_n)-\alpha f(x_n)\right)} = (C_2-\alpha)e_n + (2C_3-C_2^2)e_n^2 + \left(3C_4 - \alpha C_3 - 2C_2C_3 + \alpha^2 C_2\right)e_n^3 + O(e_n^4). \tag{4.117}
\]
Substituting equations (4.115) and (4.117) in formula (4.111) and simplifying, one obtains

\[
e_{n+1} = (C_2-\alpha)\left(C_2^2 + \alpha C_2 - C_3\right)e_n^4 + O(e_n^5). \tag{4.118}
\]
This completes the proof of the theorem.
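The error equation can also be probed numerically. For f(x) = x + x² one has r = 0, C₂ = 1 and C₃ = C₄ = 0, so with α = 2 the asymptotic constant (C₂ − α)(C₂² + αC₂ − C₃) predicted above equals −3. A sketch under these assumptions (helper names illustrative):

```python
def step_4111(f, fp, x, alpha):
    # One step of family (4.111)
    fx, dpx = f(x), fp(x)
    d  = dpx - alpha * fx
    u  = fx / d
    fz = f(x - u)
    D  = u * dpx**2 - 2.0*alpha*fx**2 - 2.0*fz*d
    return x - u * (1.0 + fz * dpx / D)

f  = lambda x: x + x*x        # simple root r = 0 with C2 = 1, C3 = C4 = 0
fp = lambda x: 1.0 + 2.0*x

e0 = 1e-3                     # since r = 0, the iterate itself is the error e_n
e1 = step_4111(f, fp, e0, alpha=2.0)
print(e1 / e0**4)             # close to the predicted asymptotic constant -3
```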
4.3.5 Worked examples
Here, an attempt has been made to present some numerical results obtained by employing the existing classical methods, namely Steffensen's method (SM), Traub's method (TM), Newton-Secant method (NSM), Weerakoon and Fernando method (WFM), Hasanov et al. method (HAM), Traub-Ostrowski's method (TOM) and Jarratt's method (JM), and the newly developed methods, namely modified Traub's method (MTM) (4.70), modified Newton-Secant method (MNSM) (4.71), modified Weerakoon and Fernando method (MWFM) (4.84), modified Hasanov et al. method (MHAM) (4.100), modified Traub-Ostrowski's method (MTOM) (4.111) and modified Jarratt's method (MJM) (4.88), respectively, to solve the nonlinear equations given in Table 4.5. Here, for simplicity, the formulae are tested for |α| = 1. The results are summarized in Table 4.6 (number of iterations), Table 4.7 (computational order of convergence) and Table 4.8 (total number of function evaluations) respectively. Computations have been performed using MATLAB® version 7.5 (R2007b) in double precision arithmetic. We use ε = 10⁻¹⁵ as the error tolerance. The following stopping criteria are used for the computer programs:

(i) |x_{n+1} − x_n| < ε,  (ii) |f(x_{n+1})| < ε.
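The two stopping criteria can be wired into a small driver; the sketch below applies them to the modified Newton correction f/(f' − αf) on Example 4.3.1 (the helper names are illustrative, and a looser tolerance is used than in the thesis runs):

```python
from math import exp

def solve(step, f, x, eps=1e-15, itmax=100):
    # Iterate until (i) |x_{n+1} - x_n| < eps or (ii) |f(x_{n+1})| < eps.
    for n in range(1, itmax + 1):
        x_new = step(x)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, itmax

f  = lambda x: exp(1.0 - x) - 1.0
fp = lambda x: -exp(1.0 - x)
alpha = 1.0
newton_mod = lambda x: x - f(x) / (fp(x) - alpha * f(x))   # modified Newton correction

root, iters = solve(newton_mod, f, 0.4)
print(root, iters)
```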
Table 4.5
Test examples, interval (I) containing the root, their initial guesses (x₀) and exact root (r)

Example No. | Example                   | I          | Initial guesses | Exact root
4.3.1       | exp(1 − x) − 1 = 0        | [0, 2]     | 0.4, 1.5        | 1.0000000000000000
4.3.2       | sin(x) = 0                | [0, 2]     | 1.5, 1.51       | 0.0000000000000000
4.3.3       | exp(−x) − sin(x) = 0      | [5, 7]     | 5               | 6.285049273382587
4.3.4       | arctan(x) = 0             | [−1, 2]    | −1, −2, 2       | 0.000000000000000
4.3.5       | x³ + 4x² − 10 = 0         | [0, 2]     | 0, 0.1, 2       | 1.365230013414097
4.3.6       | (x − 1)³ − 1 = 0          | [0, 3]     | 1, 3            | 2.000000000000000
4.3.7       | exp(x² + 7x − 30) − 1 = 0 | [2, 4]     | 2.93, 3.3       | 3.000000000000000
4.3.8       | cos(x) − x = 0            | [−1, 2]    | −1, 2           | 0.7390851332151607
4.3.9       | log(x) = 0                | [0.5, 3.5] | 3               | 1.000000000000000
Table 4.6
Number of iterations (D and F below stand for divergent and failure respectively)
[Columns: Example No., SM, TM, MTM (4.70), NSM, MNSM (4.71), WFM, MWFM (4.84), HAM, MHAM (4.100), TOM, MTOM (4.111), JM, MJM (4.88); the per-example iteration counts are scrambled beyond recovery in this copy. An asterisk (∗) marks runs that converge to an undesired root.]
Table 4.7
Computational order of convergence (COC)
[Columns: Example No., SM, TM, MTM (4.70), NSM, MNSM (4.71), WFM, MWFM (4.84), HAM, MHAM (4.100), TOM, MTOM (4.111), JM, MJM (4.88); the per-example COC values are scrambled beyond recovery in this copy.]
Here, the symbols used are already defined in this chapter under Table 4.3.
Table 4.8
Total number of function evaluations (TNOFE)
[Columns: Example No., SM, TM, MTM (4.70), NSM, MNSM (4.71), WFM, MWFM (4.84), HAM, MHAM (4.100), TOM, MTOM (4.111), JM, MJM (4.88); the per-example TNOFE values are scrambled beyond recovery in this copy.]
Here, the symbols used are already defined in this chapter under Table 4.4.
4.4 Discussion and conclusions
This work proposes many new families of Chebyshev’s method, Halley’s method, super-
Halley method, Chebyshev-Halley type methods, C-methods, Euler’s method, osculating
circle method and ellipse method for finding simple roots of nonlinear equations, permitting
f ′(xn) = 0, at some points in the vicinity of required root. Further, an attempt is made to
propose some new families of third-order and fourth-order multipoint iterative methods free
from the second-order derivative by discretization. The numerical examples considered in Table 4.1 and Table 4.5 overwhelmingly support that the proposed one-point as well as multipoint iterative methods can compete with any of the existing methods, including the classical Newton's method and Traub-Ostrowski's method. A reasonably close initial guess is necessary
for these multipoint methods to converge. This condition, however, applies practically to
all iterative methods for solving equations. The fourth-order multipoint iterative method of the first family has optimal order with computational efficiency index [13] equal to 4^{1/3} ≅ 1.587, and the third-order multipoint iterative methods of the first and second families have efficiency indices [13] equal to 3^{1/3} ≅ 1.442, which are better than that of Newton's method, 2^{1/2} ≅ 1.414. In the third family, a modified third-order multipoint family of the Hasanov et al. method [101, 226] has been obtained, permitting f'(x_n) = 0 at some points in the vicinity of the required root. Further, one can also construct many new families of iterative methods free from first and second-order derivatives in the computing process by forward-difference approximations.
Furthermore, the behaviours of the existing classical iterative methods (one-point as well as multipoint) and the proposed modifications can be compared through their corresponding correction factors. The factor f(x_n)/f'(x_n), which appears in the classical iterative schemes, is now modified to f(x_n)/(f'(x_n) − αf(x_n)), where α ∈ ℝ. Here, the parameter 'α' should be chosen such that the denominator is largest in magnitude. It is well known that if an initial guess 'x_n' is close enough to the solution 'r' with f'(r) ≠ 0, the classical iterates are well defined and converge to the required root. Now this modified factor has two major advantages over its predecessor. The first is that it is always well defined, even if f'(x_n) = 0. The second is that the absolute value of the denominator in the formula is greater than |f'(x_n)| provided 'x_n' is not accepted as an approximation to the required root 'r'. This means that the numerical
stability of the iterative schemes having this factor is better than that of the classical iterates. Moreover, if 'x_n' is sufficiently close to 'r' (a simple root), then f(x_n) will be sufficiently close to zero, and in particular the considered denominator f'(x_n) − αf(x_n) will also be close to zero if f'(x_n) ≈ 0. In such problems, where f'(x_n) → 0, a small value of the parameter 'α' may lead to divergence or failure. Such situations can be handled by choosing 'α' in such a way that the correction factor is as small as possible, i.e. the denominator f'(x_n) − αf(x_n) is sufficiently large in magnitude.
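For |α| = 1, as used in the numerical tests, this rule can be made concrete by comparing the two candidate denominators at every step; the sketch below (illustrative names) applies it to the modified Newton correction on Example 4.3.5, where classical Newton breaks down at x₀ = 0.

```python
def best_alpha(fx, dpx):
    # With |alpha| = 1, pick the sign that maximizes |f'(x) - alpha*f(x)|.
    return 1.0 if abs(dpx - fx) >= abs(dpx + fx) else -1.0

f  = lambda x: x**3 + 4.0*x**2 - 10.0
fp = lambda x: 3.0*x**2 + 8.0*x

x = 0.0                                   # f'(0) = 0: classical Newton divides by zero here
for _ in range(50):
    a = best_alpha(f(x), fp(x))
    x = x - f(x) / (fp(x) - a * f(x))     # modified correction stays well defined
print(abs(x - 1.365230013414097))
```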
Finally, it is concluded that these techniques can be used as an alternative to existing techniques, or in cases where existing techniques are not successful, and therefore have definite practical utility.