

The application of Newton's method in vector form for solving nonlinear scalar equations where the classical Newton method fails

Higinio Ramos 1a,b, J. Vigo-Aguiar 2a

a Scientific Computing Group, University of Salamanca, Spain
b Escuela Politécnica Superior de Zamora, University of Salamanca, Spain

Abstract

In this paper we propose a strategy to obtain the solutions of a scalar equation f (x) = 0 through solving an associated system of two equations. In this way, the solutions of the associated system lying on the identity line provide solutions of the given equation. In most cases, the associated system cannot be solved exactly. Solving this system with the Newton method for systems may be more efficient than applying the scalar Newton method to the simpler equation. In some pathological cases in which the scalar Newton method does not provide a solution, this strategy works appropriately. Some examples are given to illustrate the performance of the proposed strategy.

Keywords: nonlinear equations, Newton method, cycles of period two, approximate solutions, iteration function
2000 MSC: 65H05, 65H10

1. Introduction

Finding the roots of a nonlinear equation f (x) = 0 with f (x) : X → X, X ⊂ R, is a classical and important problem in science and engineering. There are very few functions for which the roots can be expressed explicitly in closed form. Thus, the solutions must be obtained approximately, relying on numerical techniques based on iterative processes [1]. Given an initial guess for the root, x0, successive approximations are obtained by means of an iteration function (IF) Φ : X → X

x_{n+1} = Φ(x_n) , n = 0, 1, 2, . . .

which will often converge to a root α of the equation, provided that some convergence criterion is satisfied.

If there exists a real number p ≥ 1 and a nonzero constant A > 0 such that

lim_{n→∞} |Φ(x_n) − α| / |x_n − α|^p = A

1 [email protected]
2 [email protected]

Preprint submitted to Journal of Computational and Applied Mathematics July 31, 2014

Preliminary version. Journal of Computational and Applied Mathematics 275 (2015) 228–237

then p is called the order of convergence, and A is the asymptotic error constant of the method defined by Φ. For such a method, the error |e_{n+1}| = |x_{n+1} − α| is proportional to |e_n|^p = |x_n − α|^p as n → ∞. In practice, the order of convergence is often determined by using the Schröder–Traub theorem [5], which states that if Φ is an IF with Φ^(r) continuous in a neighborhood of α, then Φ is of order r if and only if

Φ(α) = α , Φ′(α) = · · · = Φ^(r−1)(α) = 0 , Φ^(r)(α) ≠ 0 .

Given an IF Φ, the orbit of x0 ∈ X under Φ is defined as the set given by the sequence of points

orb(x0) = {x0, Φ(x0), . . . , Φ^k(x0), . . . }

where Φ^k(x0) denotes the k-fold composition of Φ with itself applied to x0, that is, Φ^k(x0) = Φ(Φ(. . . Φ(x0) . . . )) where Φ is applied k times. The point x0 is called the seed of the orbit. A point x0 is a fixed or equilibrium point of Φ if Φ(x0) = x0. Note that the orbit of a fixed point x0 consists of the constant sequence {x0}.
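An orbit is straightforward to generate numerically. The following sketch is our own illustration (the helper name is hypothetical, not code from the paper): it computes the first k points of an orbit and exhibits the constant orbit of a fixed point.

```python
def orbit(phi, x0, k):
    """Return the first k+1 points of the orbit of the seed x0 under phi."""
    points = [x0]
    for _ in range(k):
        points.append(phi(points[-1]))
    return points

# Under phi(x) = x**2, the point x0 = 1 is fixed, so its orbit is constant,
# while a seed in (0, 1) produces an orbit converging to the fixed point 0.
constant_orbit = orbit(lambda x: x * x, 1.0, 5)
decaying_orbit = orbit(lambda x: x * x, 0.5, 3)
```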

If Φ^n(x0) = x0 and Φ^j(x0) ≠ x0 for 0 < j < n, then x0 is called a point with period n. In this case the corresponding orbit is said to be a periodic orbit of period n, or an n-cycle. All points on the orbit of x0 have the same period. Note that if x0 is a periodic point of period n ≥ 1 then x0 is a fixed point of Φ^n, and the corresponding orbit has only n different points. Similarly, if x0 is a periodic point of period n ≥ 1 then x0 is also fixed by Φ^{kn} for any positive integer k.

A point x0 is called eventually fixed or eventually periodic if x0 itself is not fixed or periodic, but some point on orb(x0) is fixed or periodic.

A fixed point x0 of Φ is attracting if |Φ′(x0)| < 1, repelling if |Φ′(x0)| > 1, and indifferent or neutral if |Φ′(x0)| = 1. Readers interested in the above concepts, which belong to the topic of dynamical systems, may see references [6], [8], [7].

2. The Newton-Raphson method

We will consider that the function f (x) whose roots need to be calculated is a scalar real one, f : R → R. Among the iteration methods, the Newton method is probably the best known, most reliable and most used algorithm [4], [2]. There are many ways to obtain the Newton method. We will offer a brief outline of the geometric approach. The basic idea behind Newton's method relies on approximating f (x) by the best linear approximation passing through an initial guess x0 which is near the root α. This best linear approximation is the tangent line to the graph of f (x) at (x0, f (x0)). Provided f ′(x0) ≠ 0, the intersection of this line with the horizontal axis results in

x1 = x0 − f (x0) / f ′(x0) .

This process can be continued with the new approximation x1 to get x2, and so on. In this way, the Newton iteration function is given by

N_f(x) = x − f (x) / f ′(x) .

It converges quadratically to simple zeros and linearly to multiple zeros, provided a good approximation is at hand. There exist different results about the semilocal convergence of Newton's method in which precise bounds are given for balls of convergence and uniqueness. Nevertheless, the dynamics of the Newton IF N_f may be very complicated, even for apparently simple functions. We expect that the sequence obtained through the IF will converge to the desired root of f (x). Nevertheless, depending on the function under consideration and the initial point selected, the orbit of this point may not always converge successfully. Some pathological situations in the application of Newton's method follow:

1. Newton's method may fail if at any stage of the computation the derivative of the function f (x) is either zero or very small. For low values of the derivative, the Newton iteration shoots off away from the current point and may possibly converge to a root far away from the desired one.

2. It may also fail to converge to the desired root in case the initial point is far from this root.

3. For certain forms of equations, the Newton method diverges. A typical example of this situation is the function f (x) = x^{1/3}, for which the iteration function is N_f(x) = −2x, which results in a divergent sequence whatever the initial guess x0 ≠ 0 is.

4. In some cases, the iterations oscillate. For example, this is the situation if the initial guess is in an orbit of period n or is eventually periodic of period n.

5. The convergence of Newton's method may be very slow near roots of multiplicity greater than one. In fact, for multiple roots the Newton method loses its quadratic convergence.

6. Finally, the most dramatic situation occurs when the iteration function N_f exhibits chaotic behavior, in which there are orbits that wander from here to there aimlessly.
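Item 3 is easy to reproduce numerically. For f (x) = x^{1/3} one has f (x)/f ′(x) = 3x, so the Newton step reduces to N_f(x) = −2x and the magnitude of the iterates doubles at every step. A small sketch of our own (not code from the paper):

```python
def cbrt(x):
    # real cube root, valid for negative arguments as well
    return -((-x) ** (1.0 / 3.0)) if x < 0 else x ** (1.0 / 3.0)

def newton_step_cbrt(x):
    # f(x) = x**(1/3), f'(x) = (1/3)*|x|**(-2/3); the step simplifies to -2*x
    return x - cbrt(x) / ((1.0 / 3.0) * abs(x) ** (-2.0 / 3.0))

x = 0.01
iterates = [x]
for _ in range(5):
    x = newton_step_cbrt(x)
    iterates.append(x)
# the iterates alternate in sign and double in magnitude: 0.01, -0.02, 0.04, ...
```

Even starting arbitrarily close to the root α = 0, the sequence explodes, which is exactly the divergence described above.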

An important result to be considered here is the following theorem, whose proof may be found in [6], p. 167.

Theorem 1 (Newton's fixed point theorem). α is a root of f (x) of multiplicity k if and only if α is a fixed point of the IF N_f. Moreover, such a fixed point is always attracting.

The importance of this result resides in the fact that the iteration function N_f does not have extraneous fixed points (fixed points which are not solutions of f (x) = 0). Such extraneous fixed points appear in the iteration functions of other well-known methods such as the Halley method or the Chebyshev method.

3. The associated system to the scalar equation

Let us consider a particularization of a result in [9].

Proposition 1. Let f (x) : R → R be a differentiable function, and let O = {x1, x2} ⊂ R with x1 ≠ x2. Then O is a 2-cycle of the Newton IF N_f if and only if x = x1, y = x2 is a solution of the system

{ N_f(x) = y , N_f(y) = x } .

From the above proposition and Theorem 1 it is easy to get the following main result, on which the proposed strategy for obtaining the roots of the equation f (x) = 0 is based.

Proposition 2. Let f (x) : R → R be a differentiable function and N_f(x) the corresponding Newton IF. Any solution of the system

{ N_f(x) = y , N_f(y) = x }

is of one of the following types:

• x = x1, y = x2 with x1 ≠ x2, which means that the set {x1, x2} is a 2-cycle;

• x = y = α, which means that α is a root of f (x) = 0.

Thus, for solving the equation f (x) = 0, we may consider solving what we call the associated system, given by { N_f(x) = y , N_f(y) = x }. If we can ensure by some means that the IF N_f has no periodic orbits of period 2, then obtaining a solution of the system is equivalent to obtaining a solution of the scalar equation f (x) = 0.

The associated system may be solved by different procedures, in some cases algebraically, but in general we will consider the Newton method, which may also be used for solving systems of equations [2], [3]. For solving a system of the form { f1(x, y) = 0, f2(x, y) = 0 } using the Newton method, the strategy is the same as that used by Newton's method for a single nonlinear equation: to linearize each of the equations. Using the notation F(x, y) = ( f1(x, y), f2(x, y))^T with X = (x, y)^T, the solution of the above system using the Newton method is given by

X^{(k+1)} = X^{(k)} − [ F′(X^{(k)}) ]^{−1} F(X^{(k)})

where F′(X^{(k)}) is the Jacobian matrix. To obtain the computational solution an initial approximation vector X^{(0)} = (x^{(0)}, y^{(0)}) must be provided. Although at first glance it might seem that solving a system could be more difficult than solving a scalar equation (and usually it is so), we will show different examples where the opposite occurs.
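The vector iteration above can be sketched in a few lines. The code below is our own illustration (not from the paper): a 2×2 Newton update with a finite-difference Jacobian solved by Cramer's rule, applied to the associated system of the hypothetical test equation f (x) = x^2 − 2, whose Newton IF is N_f(x) = (x^2 + 2)/(2x). The solution lands on the diagonal at x = y = √2, as Proposition 2 predicts when no 2-cycle is hit.

```python
def newton_system_2d(g1, g2, x0, y0, tol=1e-12, max_iter=50):
    """Newton's method for the system {g1(x,y)=0, g2(x,y)=0}."""
    x, y, h = x0, y0, 1e-7
    for _ in range(max_iter):
        f1, f2 = g1(x, y), g2(x, y)
        # finite-difference approximation of the Jacobian F'(X)
        a = (g1(x + h, y) - g1(x - h, y)) / (2 * h)
        b = (g1(x, y + h) - g1(x, y - h)) / (2 * h)
        c = (g2(x + h, y) - g2(x - h, y)) / (2 * h)
        d = (g2(x, y + h) - g2(x, y - h)) / (2 * h)
        det = a * d - b * c
        # X <- X - [F'(X)]^{-1} F(X), the 2x2 solve done by Cramer's rule
        dx = (f1 * d - b * f2) / det
        dy = (a * f2 - f1 * c) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Associated system N_f(x) = y, N_f(y) = x for f(x) = x**2 - 2.
N = lambda t: (t * t + 2) / (2 * t)
x, y = newton_system_2d(lambda x, y: N(x) - y, lambda x, y: N(y) - x, 1.0, 2.0)
```

An analytic Jacobian could replace the finite differences; for these smooth iteration functions either choice converges in a handful of steps.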

3.0.1. A local convergence result

The following result shows that under local conditions, solving the associated system leads to the solution of the equation f (x) = 0.

Proposition 3. Let the function f (x) be defined and twice continuously differentiable on the closed finite interval [a, b] ⊂ R. If the following conditions are satisfied

1. f (a) f (b) < 0
2. f ′(x) ≠ 0 , x ∈ [a, b]
3. f ′′(x) is either ≥ 0 or ≤ 0 for all x ∈ [a, b]

then there is a unique solution of the system { N_f(x) = y , N_f(y) = x }, where N_f is the Newton IF of f (x), namely x = y = α with f (α) = 0.

Proof. Condition 1) merely states that f (a) and f (b) have different signs, and hence that the equation f (x) = 0 has at least one root in (a, b). By virtue of condition 2) there is only one solution, which we will call α. Condition 3) states that the graph of f (x) is either concave from above or concave from below.

There are four different situations covered by the theorem:

I) f (a) < 0 , f (b) > 0 , f ′′(x) ≥ 0
II) f (a) < 0 , f (b) > 0 , f ′′(x) ≤ 0
III) f (a) > 0 , f (b) < 0 , f ′′(x) ≥ 0
IV) f (a) > 0 , f (b) < 0 , f ′′(x) ≤ 0

but all cases may be reduced to one of them by using appropriate transformations (see [10], p. 79). Thus, it suffices to prove the theorem in case I). In this case the graph of f (x) looks as given in Figure 1.

It is obvious that the set of solutions of the system { N_f(x) = y , N_f(y) = x } in [a, b] is not empty, as x = y = α is a solution. Let us see that this is the unique solution. If there is a solution different from that given by x = y = α, say x = xk, y = yk, we may assume without loss of generality that xk ≤ yk. We assert that either xk = α or yk = α. Otherwise we would have three possibilities:


Figure 1: Graph of a function f (x) verifying conditions I) in Proposition 3.

i) xk ≤ yk < α

ii) xk < α < yk

iii) α < xk ≤ yk

Let us see that these cases result in contradictions. For cases i) and ii) we have that f (xk) < 0. Expanding f (x) in a Taylor series about yk and taking x = α, it results that

0 = f (α) = f (yk) + f ′(yk)(α − yk) + (1/2) f ′′(ζk)(α − yk)^2 ≥ f (yk) + f ′(yk)(α − yk)

or

0 ≥ f (yk) + f ′(yk)(α − yk)

from which we have that

α ≤ yk − f (yk) / f ′(yk) = xk

which is a contradiction with the hypothesis that xk < α. Thus, cases i) and ii) are not possible. For case iii) we have that f (xk) > 0 and f ′(xk) > 0, and thus it results that

yk = xk − f (xk) / f ′(xk) < xk

which contradicts the hypothesis xk ≤ yk. Hence, either xk = α or yk = α. If xk = α then it follows that

yk = xk − f (xk) / f ′(xk) = α − f (α) / f ′(α) = α ,


and xk = yk = α, as stated. Similarly, if yk = α then we have

xk = yk − f (yk) / f ′(yk) = α − f (α) / f ′(α) = α ,

which results in xk = yk = α. Thus the proof of the proposition is complete.

4. Application examples

In order to test the efficiency of the strategy proposed here, we are going to evaluate the number of iterations required to reach a specified level of convergence for different problems. We have considered the command FindRoot in the Mathematica program, taking the options Method->"Newton" and WorkingPrecision->30, which specifies how many digits of precision should be maintained in internal computations. The stopping criterion is given by ∥xN − xN−1∥ < 10^{−15} and ∥ f (xN)∥ < 10^{−15}.

Table 1 shows the functions considered, together with the roots we want to approximate.

Function                                  Root
f1(x) = x^{1/3} e^{−x^2}                  0
f2(x) = e^{x^6+7x−30} − 1                 1.627818561918557849123154131
f3(x) = (e^{x^2} + x − 20)^{20}           1.704885352466915992721311334
f4(x) = x^3 + log(x) + 0.15 cos(50x)      0.717519716444759257246717966
f5(x) = π − 2x sin(π/x)                   1.657400240258006123793738672

Table 1: List of functions and the corresponding roots to be approximated.

4.1. Example 1

The function f1(x) has only the root α = 0, as can be seen in Figure 2. For x ≠ 0 the Newton IF results in N_f1(x) = 2(3x^3 + x)/(6x^2 − 1). For any initial guess x0 ≠ 0 the iteration with this function results in a divergent sequence, and the Newton method is not capable of obtaining an approximation to the solution. The cobweb plot in Figure 3 shows the dynamical behavior of the IF N_f1(x), where we observe that even for an initial guess very close to the solution we obtain a divergent sequence.

On the other hand, the associated system { N_f1(x) = y , N_f1(y) = x } reads

2(3x^3 + x)/(6x^2 − 1) = y , 2(3y^3 + y)/(6y^2 − 1) = x .

Solving this system using the command Solve in the Mathematica program, we obtain that the unique real solution is x = y = 0, which is the root of f1(x) = 0. This proves that the IF N_f1(x) does not have any 2-cycle. Solving the above system using the Newton method with initial guess (x0, y0)^T = (−2, 2)^T, we obtain after 9 iterations the approximate solution with the required precision.

Figure 2: Plot of the function f1(x).
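This computation can be reproduced with a plain 2×2 Newton iteration. The sketch below is our own loop in ordinary double precision (not the Mathematica FindRoot call used in the paper), exploiting the symmetric structure of the Jacobian of the associated system:

```python
def Nf1(t):
    # Newton IF of f1(x) = x**(1/3) * exp(-x**2), for t != 0 and 6*t**2 != 1
    return 2 * (3 * t**3 + t) / (6 * t**2 - 1)

def solve_associated(x, y, tol=1e-12, max_iter=50):
    """Newton's method for the system Nf1(x) - y = 0, Nf1(y) - x = 0."""
    h = 1e-7
    for _ in range(max_iter):
        g1, g2 = Nf1(x) - y, Nf1(y) - x
        a = (Nf1(x + h) - Nf1(x - h)) / (2 * h)   # d g1 / d x
        d = (Nf1(y + h) - Nf1(y - h)) / (2 * h)   # d g2 / d y
        det = a * d - 1.0                         # Jacobian [[a, -1], [-1, d]]
        dx = (g1 * d + g2) / det                  # Cramer's rule
        dy = (a * g2 + g1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# starting from the guess (-2, 2), the iterates approach the root x = y = 0
x, y = solve_associated(-2.0, 2.0)
```

The iterate count in this sketch is of the same order as the 9 iterations reported above, while the scalar iteration with N_f1 diverges from every nonzero seed.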

4.2. Example 2

The function f2(x) has two roots, as may be seen in Figure 4. We are interested in the positive root, α1; the negative one could be approximated similarly. The Newton IF is

N_f2(x) = x + (e^{−x^6−7x+30} − 1)/(6x^5 + 7)

and its graph is located under the diagonal line y = x, tending asymptotically to this diagonal as x → ∞. Thus, from any initial guess x0 ≫ α1 the convergence of the iterative sequence produced by N_f2(x) will be very slow. Looking at the graph of the function f2(x) in Figure 4, we see that if x0 < α1 the tangent line is almost horizontal and the next iteration will be far away from the root, and if x0 > α1 the tangent line is very steep and the Newton sequence will approach the root very slowly. The scalar Newton method works acceptably only if the initial guess is very close to the root.


Figure 3: Plot of the function N_f1(x) with some iterations taking x0 = 0.002.

Figure 4: Graphs of the functions f2(x) (left) and N_f2(x) (right).

Table 2 shows the data corresponding to the approximation of the root by using the scalar Newton method applied to f2(x) and the Newton method applied to the associated system: initial guesses, computational time (in seconds) and number of iterations.

                x0 / (x0, y0)   Time      Iterations
Scalar Newton   4               1.016     4104
                2               0.015     56
                1.6             0.015     15
                1.5             Overflow
Vector Newton   (1.0, 2.0)      0.046     31
                (1.0, 4.0)      0.046     30
                (0.5, 6.0)      0.016     36
                (0.0, 10.0)     0.062     38

Table 2: Data corresponding to the function f2(x).

It can also be seen from Table 2 that fewer iterations are required to reach the root with the proposed method than with Newton's method, except when the initial guess is very close to the root.

We note that in this case there are no cycles of period 2 either. With the help of the program Mathematica we have plotted the graphs of the implicit curves in the associated system

(e^{−x^6−7x+30} − 1)/(6x^5 + 7) + x = y , (e^{−y^6−7y+30} − 1)/(6y^5 + 7) + y = x

near the roots, which appear in Figure 5. We see that there are only two intersection points between the curves, which lie on the diagonal y = x, meaning that these are the two roots of the equation f2(x) = 0

(α1 = 1.627818561918557849123154131 , α2 = −1.872516561317824507097057829).


Figure 5: Detailed graphs of the implicit curves in the associated system to f2(x), near the roots. The roots are the intersections over the diagonal line y = x.

4.3. Example 3

The function f3(x) has two roots, with f3(x) > 0 outside these roots. We are interested in the positive one; the negative zero could be approximated similarly. The Newton iteration function is

N_f3(x) = x − (e^{x^2} + x − 20) / (20(2e^{x^2} x + 1))

which tends asymptotically to the graph of y = x for x → ∞. The behavior of the iterations is similar to that of the previous problem. Table 3 shows the data corresponding to the approximation of the root by using the scalar Newton method applied to f3(x) and the vector Newton method applied to the associated system. As can be seen in Figure 6, for this problem there are no 2-cycles, and the graphs of the implicit curves in the associated system intersect only at the two roots (α1 = 1.704885352466915992721311334 , α2 = −1.754947637178101198447348684).
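The slow scalar convergence is easy to verify with a direct loop (our own sketch, not the paper's code). Since f3 = g^{20} with g(x) = e^{x^2} + x − 20, the Newton step simplifies to g/(20 g′), so near the root the error is multiplied by roughly 19/20 at each iteration:

```python
import math

def newton_f3(x, tol=1e-13, max_iter=2000):
    """Scalar Newton for f3(x) = (exp(x**2) + x - 20)**20; returns (root, count)."""
    count = 0
    while count < max_iter:
        g = math.exp(x * x) + x - 20
        gp = 2 * x * math.exp(x * x) + 1
        step = g / (20 * gp)      # f3/f3' = g**20 / (20*g**19*g') = g/(20*g')
        x -= step
        count += 1
        if abs(step) < tol:
            break
    return x, count

# hundreds of iterations from x0 = 1, in line with the counts in Table 3
root, count = newton_f3(1.0)
```

Writing the step through g avoids evaluating the huge power g^{20} at all; the linear 19/20 contraction is what makes the scalar counts in Table 3 so large.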


Figure 6: Graphs of the implicit curves in the associated system to f3(x). The two roots are the intersections over the diagonal line y = x.


                x0 / (x0, y0)   Time      Iterations
Scalar Newton   10              0.828     2517
                4               0.312     836
                2               0.218     588
                1               0.234     570
Vector Newton   (1.0, 2.0)      0.015     9
                (1.0, 4.0)      0.015     9
                (0.5, 6.0)      0.015     11
                (0.0, 10.0)     0.015     11

Table 3: Data corresponding to the function f3(x).

We observe that the proposed strategy is more efficient than solving f (x) = 0 with Newton's method. Although the Newton method for systems needs to compute Jacobians, so few iterations are required, even over a wide interval, that the computational time is much less than the time needed for the scalar Newton method.

4.4. Example 4

This example corresponds to a function that has only one real root. Due to the oscillatory character of the function, the Newton IF N_f4(x) has a lot of 2-cycles. In Figure 7 we can see the intersections of the graphs of the two implicit curves of the associated system to f4(x) on the rectangle [0.6, 0.8] × [0.6, 0.8]. There is only one intersection over the diagonal, which corresponds to the unique root α = 0.717519716444759257246717966; the other intersections correspond to different 2-cycles. Some of the 2-cycles are shown in Figure 8. These 2-cycles have been obtained by solving the associated system taking appropriate initial guesses. The orbits are

orb1 = {0.692198885685395552, 0.748996030431394228}
orb2 = {0.634597589403757766, 0.755453007690175635}
orb3 = {0.687693676287228638, 0.812103547553422787} .

For this problem there are no significant differences between the performance of the two approaches, the scalar and the vector Newton procedures, but we must note that the initial guesses in both cases must be chosen near the root in order to get good performance. We note that the two intersections in the curves of the associated system closest to a root determine an interval where the scalar Newton method converges. Thus, for any initial guess x0 in the interval (0.692198885685395552, 0.748996030431394228) determined by the two points in the orbit orb1, the Newton iteration converges to the intended root.
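The reported cycle orb1 can be verified numerically with our own check (not code from the paper): each of its two points should map to the other under the Newton IF of f4, at least to the accuracy of the printed digits.

```python
import math

def Nf4(x):
    # Newton IF of f4(x) = x**3 + log(x) + 0.15*cos(50*x)  (log is natural)
    f = x**3 + math.log(x) + 0.15 * math.cos(50 * x)
    fp = 3 * x**2 + 1 / x - 7.5 * math.sin(50 * x)
    return x - f / fp

# the two points of orb1 quoted above
a, b = 0.692198885685395552, 0.748996030431394228
# Nf4(a) should be close to b, and Nf4(b) close to a
```

The same check applied to orb2 and orb3 confirms that they, too, are genuine 2-cycles rather than spurious intersections of the plotted curves.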


Figure 7: Plot showing the intersections of the graphs of the implicit curves in the associated system to f4(x), near the root. The root corresponds to the intersection point over y = x; the other intersection points correspond to 2-cycles.

4.5. Example 5

This last example corresponds to a curious function that appeared in [11], [12], for which, taking the initial guess x0 = 1/2, the Newton sequence {N_f(xn)} → 0 but f (0) ≠ 0 and N_f(0) ≠ 0. This function is symmetric with respect to the vertical axis and has two roots, as can be seen in Figure 9.


Figure 8: Graphical representation of different 2-cycles of the iteration function N_f4(x) near the root.

Figure 9: Graph of the function f5(x).


In Figure 10 we observe that there are two intersection points with the diagonal line, which correspond to the zeros of the function, and that there are infinitely many 2-cycles. The 2-cycle closest to the root determines an interval where the scalar Newton method converges to the positive root, given by (0.7461032276136741263663, 2.68369772675810300822294). Outside this interval we cannot say anything about the convergence to the intended root, although there may be other guesses that produce convergence to this root. Nevertheless, taking the initial guess (x0, y0) = (0.1, 4) and solving the associated system with the vector Newton method, we obtain the desired root within the required tolerance after 18 iterations.
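That the two interval endpoints quoted above really form a 2-cycle of the Newton IF can be checked with a few lines of our own (not code from the paper): each endpoint should map to the other under N_f5, to the accuracy of the printed digits.

```python
import math

def Nf5(x):
    # Newton IF of f5(x) = pi - 2*x*sin(pi/x), valid for x != 0
    f = math.pi - 2 * x * math.sin(math.pi / x)
    fp = -2 * math.sin(math.pi / x) + (2 * math.pi / x) * math.cos(math.pi / x)
    return x - f / fp

# endpoints of the convergence interval quoted above: a 2-cycle of Nf5
a, b = 0.7461032276136741263663, 2.68369772675810300822294
# Nf5(a) should be close to b, and Nf5(b) close to a
```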

Figure 10: Plot showing the intersections of the graphs of the implicit curves in the associated system to f5(x), near the roots. The roots correspond to the intersection points over y = x; the other intersection points correspond to 2-cycles.

5. Conclusions

It is well known that in certain cases the Newton method does not work satisfactorily for solving the scalar equation f (x) = 0. We have presented a strategy for obtaining the solutions of this equation through solving an associated system of two equations related to the Newton iteration function. The solutions of this system provide not only the roots of the scalar equation, but also the orbits of period two of the Newton iteration. In most cases the associated system cannot be solved exactly. In this situation we must use an iterative procedure for solving the system, such as the Newton method for systems, assuming an appropriate initial guess is provided.

Some numerical examples are shown to assess the performance of the proposed strategy. Some of them cannot be solved by the scalar Newton method. The strategy may be extended to obtain orbits of any period.

The orbit closest to the intended root provides an interval of convergence for the scalar Newton method. An open question is how to determine the regions in the plane for which the vector Newton method converges to the zeros of f (x) or to the 2-cycle points of N_f(x).

6. Acknowledgements

We thank the referees and the principal editor for their relevant and useful comments whichprovided insights that helped to improve the paper.

References

[1] J. M. Ortega, W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Banach Spaces, Academic Press, New York, 1970.
[2] P. Deuflhard, Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms, Springer, Berlin, 2011.
[3] I. K. Argyros, Convergence and Applications of Newton-type Iterations, Springer, New York, 2008.
[4] C. T. Kelley, Solving Nonlinear Equations with Newton's Method, SIAM, Philadelphia, 2003.
[5] J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York, 1997.
[6] R. Devaney, A First Course in Chaotic Dynamical Systems, Addison-Wesley, Boston, 1992.
[7] M. Martelli, Introduction to Discrete Dynamical Systems and Chaos, John Wiley & Sons, New York, 1999.
[8] R. A. Holmgren, A First Course in Discrete Dynamical Systems, Springer-Verlag, New York, 1994.
[9] S. Plaza, V. Vergara, Existence of attracting periodic orbits for the Newton's method, Sci. Ser. A: Math. Sci. 7 (2001) 31–36.
[10] P. Henrici, Elements of Numerical Analysis, John Wiley and Sons, New York, 1964.
[11] P. Horton, No fooling! Newton's method can be fooled, Mathematics Magazine 80 (2007) 383–387.
[12] J. Merikoski, T. Tossavainen, Fooling Newton's method as much as one can, Mathematics Magazine 80 (2009) 134–135.
