05 Root Finding: Open Methods


DESCRIPTION

School project on root-finding algorithms


  • Roots of Equations: Open Methods

  • The following root finding methods will be introduced:

    A. Bracketing Methods
       A.1. Bisection Method
       A.2. Regula Falsi

    B. Open Methods
       B.1. Fixed Point Iteration
       B.2. Newton-Raphson Method
       B.3. Secant Method

  • B. Open Methods. To find a root of f(x) = 0, we construct an iteration formula xi+1 = g(xi) to predict the root iteratively, until xi converges to a root. However, xi may also diverge! (Figures: bisection method; an open method that diverges; an open method that converges.)

  • What you should know about Open Methods: How do we construct the iteration formula g(x)?

    How can we ensure convergence?

    What makes a method converge quickly or diverge?

    How fast does a method converge?

  • B.1. Fixed Point Iteration. Also known as one-point iteration or successive substitution.

    To find a root of f(x) = 0, we rearrange f(x) = 0 so that x stands alone on one side of the equation: x = g(x).

    If we can solve g(x) = x, we solve f(x) = 0. We solve g(x) = x by computing xi+1 = g(xi)

    until xi+1 converges to xi.

  • Fixed Point Iteration Example

    Reason: when x converges, i.e. xi+1 ≈ xi, we have xi ≈ g(xi), so xi approximately solves x = g(x) and hence f(x) = 0.

  • Example

    Find the root of f(x) = e^(-x) - x = 0, using xi+1 = g(xi) with g(x) = e^(-x). (Answer: α = 0.56714329)

    i     x_i        ε_a (%)   ε_t (%)
    0     0                    100.0
    1     1.000000   100.0     76.3
    2     0.367879   171.8     35.1
    3     0.692201   46.9      22.1
    4     0.500473   38.3      11.8
    5     0.606244   17.4      6.89
    6     0.545396   11.2      3.83
    7     0.579612   5.90      2.20
    8     0.560115   3.48      1.24
    9     0.571143   1.93      0.705
    10    0.564879   1.11      0.399

  • Two Curve Graphical Method. The point x where the two curves f1(x) = x and f2(x) = g(x) intersect is the solution to f(x) = 0.

  • Fixed Point Iteration. There are infinitely many ways to construct g(x) from f(x).

    For example, for f(x) = x^2 - 2x - 3 = 0 (roots: x = 3 or -1), we may rearrange into Case a: x = sqrt(2x + 3); Case b: x = 3/(x - 2); Case c: x = (x^2 - 3)/2. So which one is better?

  • Case a (x0 = 4): 3.31662, 3.10375, 3.03439, 3.01144, 3.00381, ... Converges to 3!

    Case b (x0 = 4): 1.5, -6, -0.375, -1.263158, -0.919355, -1.02762, -0.990876, -1.00305, ... Converges to -1, but more slowly.

    Case c (x0 = 4): 6.5, 19.625, 191.070, ... Diverges!

  • How to choose g(x)? Can we know which g(x) will converge to the solution before we do the computation?

  • Convergence of Fixed Point Iteration. By definition, the true root α satisfies α = g(α); the fixed point iteration computes xi+1 = g(xi).

  • Convergence of Fixed Point Iteration. According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over the interval between xi and α, there exists a value ξ within the interval such that

    α - xi+1 = g(α) - g(xi) = g'(ξ)(α - xi)

    Therefore, if |g'(x)| < 1, the error decreases with each iteration; if |g'(x)| > 1, the error increases. If the derivative is positive, the iterative solution is monotonic; if the derivative is negative, the errors oscillate.

  • (a) |g'(x)| < 1, g'(x) is +ve: converges, monotonic

    (b) |g'(x)| < 1, g'(x) is -ve: converges, oscillating

    (c) |g'(x)| > 1, g'(x) is +ve: diverges, monotonic

    (d) |g'(x)| > 1, g'(x) is -ve: diverges, oscillating

  • Fixed Point Iteration Impl. (as C function)

    // x0:       Initial guess of the root
    // es:       Acceptable relative percentage error
    // iter_max: Maximum number of iterations allowed
    double FixedPt(double x0, double es, int iter_max) {
        double xr = x0;    // Estimated root
        double xr_old;     // Keep xr from previous iteration
        double ea = 100.0; // Relative percentage error
        int iter = 0;      // Keep track of # of iterations

        do {
            xr_old = xr;
            xr = g(xr_old);            // g(x) has to be supplied
            if (xr != 0)
                ea = fabs((xr - xr_old) / xr) * 100;
            iter++;
        } while (ea > es && iter < iter_max);

        return xr;
    }

  • The following root finding methods will be introduced:

    A. Bracketing Methods
       A.1. Bisection Method
       A.2. Regula Falsi

    B. Open Methods
       B.1. Fixed Point Iteration
       B.2. Newton-Raphson Method
       B.3. Secant Method

  • B.2. Newton-Raphson MethodUse the slope of f(x) to predict the location of the root.

    xi+1 is the point where the tangent at xi intersects the x-axis:

    xi+1 = xi - f(xi) / f'(xi)

  • Newton-Raphson Method. What would happen when f'(α) = 0?

    For example, f(x) = (x-1)2 = 0

  • Error Analysis of Newton-Raphson Method. By definition, the true root α satisfies f(α) = 0; the Newton-Raphson method computes xi+1 = xi - f(xi)/f'(xi).

  • Error Analysis of Newton-Raphson Method. Suppose α is the true value (i.e., f(α) = 0). Using Taylor's series,

    0 = f(α) = f(xi) + f'(xi)(α - xi) + (f''(ξ)/2)(α - xi)^2

    where ξ is between xi and α. Substituting f(xi) = f'(xi)(xi - xi+1) from the Newton-Raphson formula and writing Ei = α - xi gives

    Ei+1 = -f''(ξ) / (2 f'(xi)) * Ei^2

    When xi and α are very close to each other, Ei+1 ≈ -f''(α)/(2 f'(α)) * Ei^2. The iterative process is said to be of second order.

  • The Order of Iterative Process (Definition)Using an iterative process we get xk+1 from xk and other info.

    We have x0, x1, x2, ..., xk+1 as estimates of the root α. Let εk = xk - α.

    Then we may observe that |εk+1| ≈ C |εk|^p for some constant C ≠ 0.

    The process in such a case is said to be of p-th order. It is called superlinear if p > 1, linear if p = 1, and sublinear if p < 1.

  • Error of the Newton-Raphson Method. Each error is approximately proportional to the square of the previous error. This means that the number of correct decimal places roughly doubles with each approximation. Example: find the root of f(x) = e^(-x) - x = 0. (Ans: α = 0.56714329)

  • Error Analysis

    i    x_i           ε_t (%)     |E_i|         estimated |E_i+1|
    0    0             100         0.56714329    0.0582
    1    0.500000000   11.8        0.06714329    0.0008158
    2    0.566311003   0.147       0.0008323     0.000000125
    3    0.567143165   0.0000220   0.000000125   2.83×10^-15
    4    0.567143290   < 10^-8

  • Newton-Raphson vs. Fixed Point Iteration

    Find the root of f(x) = e^(-x) - x = 0. (Answer: α = 0.56714329) Newton-Raphson versus fixed point iteration with g(x) = e^(-x):

    Fixed point iteration:

    i     x_i        ε_a (%)   ε_t (%)
    0     0                    100.0
    1     1.000000   100.0     76.3
    2     0.367879   171.8     35.1
    3     0.692201   46.9      22.1
    4     0.500473   38.3      11.8
    5     0.606244   17.4      6.89
    6     0.545396   11.2      3.83
    7     0.579612   5.90      2.20
    8     0.560115   3.48      1.24
    9     0.571143   1.93      0.705
    10    0.564879   1.11      0.399

    Newton-Raphson:

    i    x_i           ε_t (%)     |E_i|
    0    0             100         0.56714329
    1    0.500000000   11.8        0.06714329
    2    0.566311003   0.147       0.0008323
    3    0.567143165   0.0000220   0.000000125
    4    0.567143290   < 10^-8

  • Pitfalls of the Newton-Raphson Method. Sometimes slow.

    iteration    x
    0            0.5
    1            51.65
    2            46.485
    3            41.8365
    4            37.65285
    5            33.887565
    ...          ...
    ∞            1.0000000

    (These iterates correspond to f(x) = x^10 - 1 with x0 = 0.5.)

  • Pitfalls of the Newton-Raphson Method

    Figure (a): an inflection point (f''(x) = 0) in the vicinity of a root causes divergence.

    Figure (b): a local maximum or minimum causes oscillations.

  • Pitfalls of the Newton-Raphson Method

    Figure (c): it may jump from a location close to one root to a location several roots away.

    Figure (d): a zero slope causes division by zero.

  • Overcoming the Pitfalls? There are no general convergence criteria for the Newton-Raphson method.

    Convergence depends on the nature of the function and on the accuracy of the initial guess. A guess close to the true root is always a better choice; good knowledge of the function or graphical analysis can help you make good guesses.

    Good software should recognize slow convergence or divergence. At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.

  • The following root finding methods will be introduced:

    A. Bracketing Methods
       A.1. Bisection Method
       A.2. Regula Falsi

    B. Open Methods
       B.1. Fixed Point Iteration
       B.2. Newton-Raphson Method
       B.3. Secant Method

  • B.3. Secant Method. The Newton-Raphson method needs to compute the derivative f'(x).

    The secant method approximates the derivative by a backward finite divided difference:

    f'(xi) ≈ (f(xi-1) - f(xi)) / (xi-1 - xi)

    Substituting into the Newton-Raphson formula xi+1 = xi - f(xi)/f'(xi) gives

    xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))

  • Secant Method

  • Secant Method Example

    Find the root of f(x) = e^(-x) - x = 0 with initial estimates x-1 = 0 and x0 = 1.0. (Answer: α = 0.56714329) Again, compare these results with those obtained by the Newton-Raphson method and the simple fixed point iteration method.

    i    x_i-1     x_i       f(x_i-1)   f(x_i)     x_i+1     ε_t
    0    0         1         1.00000    -0.63212   0.61270   8.0 %
    1    1         0.61270   -0.63212   -0.07081   0.56384   0.58 %
    2    0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048 %

  • Comparison of the Secant and False-Position Methods. Both methods use the same expression to compute xr.

    They differ in how the initial values are replaced by the new estimate. (see next page)

  • Comparison of the Secant and False-position method

  • Comparison of the Secant and False-position method

  • Modified Secant Method. Replace xi-1 - xi by a small fractional perturbation δxi and approximate f'(xi) as

    f'(xi) ≈ (f(xi + δxi) - f(xi)) / (δxi)

    From the Newton-Raphson method,

    xi+1 = xi - δxi f(xi) / (f(xi + δxi) - f(xi))

    Needs only one initial guess point instead of two.

  • Modified Secant Method

    Find the root of f(x) = e^(-x) - x = 0 with an initial estimate x0 = 1.0 and δ = 0.01. (Answer: α = 0.56714329) Compare with the secant method.

    Secant method:

    i    x_i-1     x_i       f(x_i-1)   f(x_i)     x_i+1     ε_t
    0    0         1         1.00000    -0.63212   0.61270   8.0 %
    1    1         0.61270   -0.63212   -0.07081   0.56384   0.58 %
    2    0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048 %

    Modified secant method:

    i    x_i        x_i + δx_i   f(x_i)     f(x_i + δx_i)   x_i+1
    0    1          1.01         -0.63212   -0.64578        0.537263
    1    0.537263   0.542635     0.047083   0.038579        0.567012
    2    0.567012   0.572682     0.000209   -0.00867        0.567143

  • Modified Secant Method: About the choice of δ

    If δ is too small, the method can be swamped by round-off error caused by subtractive cancellation in the denominator f(xi + δxi) - f(xi).

    If δ is too big, the technique can become inefficient and even divergent.

    If δ is selected properly, this method provides a good alternative for cases where developing two initial guesses is inconvenient.

  • The following root finding methods will be introduced:

    A. Bracketing Methods
       A.1. Bisection Method
       A.2. Regula Falsi

    B. Open Methods
       B.1. Fixed Point Iteration
       B.2. Newton-Raphson Method
       B.3. Secant Method

    Can they handle multiple roots?

  • Multiple Roots. A multiple root corresponds to a point where a function is tangent to the x-axis.

    For example, this function has a double root.

    f(x) = (x - 3)(x - 1)(x - 1) = x^3 - 5x^2 + 7x - 3

    For example, this function has a triple root.

    f(x) = (x - 3)(x - 1)(x - 1)(x - 1) = x^4 - 6x^3 + 12x^2 - 10x + 3

  • Multiple Roots

    Odd multiple roots cross the x-axis. (Figure (b))

    Even multiple roots do not cross the x-axis. (Figures (a) and (c))

  • Difficulties with multiple roots. Bracketing methods do not work for even multiple roots.

    f(α) = f'(α) = 0, so both f(xi) and f'(xi) approach zero near the root. This can result in division by zero. A zero check for f(x) should be incorporated so that the computation stops before f'(x) reaches zero.

    For multiple roots, the Newton-Raphson and secant methods converge only linearly, rather than quadratically.

  • Modified Newton-Raphson Methods for Multiple Roots. Suggested Solution 1: if the multiplicity m of the root is known, use

    xi+1 = xi - m f(xi) / f'(xi)

    Disadvantage: works only when m is known.

  • Modified Newton-Raphson Methods for Multiple Roots. Suggested Solution 2: define u(x) = f(x)/f'(x), which has the same roots as f(x) but all simple, and apply Newton-Raphson to u(x):

    xi+1 = xi - f(xi) f'(xi) / ([f'(xi)]^2 - f(xi) f''(xi))

  • Example of the Modified Newton-Raphson Method for Multiple Roots. Original Newton-Raphson method applied to f(x) = x^3 - 5x^2 + 7x - 3 (double root at x = 1):

    The method is only linearly convergent toward the true value of 1.0.

    i    x_i         ε_t (%)
    0    0           100
    1    0.4285714   57
    2    0.6857143   31
    3    0.8328654   17
    4    0.9133290   8.7
    5    0.9557833   4.4
    6    0.9776551   2.2

  • Example of the Modified Newton-Raphson Method for Multiple Roots. For the modified algorithm (Solution 2):

    i    x_i        ε_t (%)
    0    0          100
    1    1.105263   11
    2    1.003082   0.31
    3    1.000002   0.00024

  • Example of the Modified Newton-Raphson Method for Multiple Roots. How about their performance in finding the single root (x = 3, starting from x0 = 4)?

    i    Standard   ε_t (%)    Modified   ε_t (%)
    0    4          33         4          33
    1    3.4        13         2.636364   12
    2    3.1        3.3        2.820225   6.0
    3    3.008696   0.29       2.961728   1.3
    4    3.000075   0.0025     2.998479   0.055
    5    3.000000   2×10^-7    2.999998   7.7×10^-5

  • Modified Newton-Raphson Methods for Multiple Roots. What is the disadvantage of the modified Newton-Raphson methods for multiple roots compared with the original Newton-Raphson method?

    Note that the Secant method can also be modified in a similar fashion for multiple roots.

  • Summary of Open Methods. Unlike bracketing methods, open methods do not always converge.

    When open methods do converge, they usually converge more quickly than bracketing methods.

    Open methods can locate even multiple roots whereas bracketing methods cannot. (why?)

  • Study Objectives

    Understand the graphical interpretation of a root

    Understand the differences between bracketing methods and open methods for root location

    Understand the concepts of convergence and divergence

    Know why bracketing methods always converge, whereas open methods may sometimes diverge

    Realize that convergence of open methods is more likely if the initial guess is close to the true root

  • Study Objectives

    Understand what conditions make a method converge quickly or diverge

    Understand the concepts of linear and quadratic convergence and their implications for the efficiencies of the fixed point iteration and Newton-Raphson methods

    Know the fundamental difference between the false-position and secant methods and how it relates to convergence

    Understand the problems posed by multiple roots and the modifications available to mitigate them

  • Analysis of Convergence Rate. Suppose the iteration xi+1 = g(xi) converges to the solution α. The Taylor series of g(xi) about α can be expressed as

    g(xi) = g(α) + g'(α)(xi - α) + (g''(α)/2!)(xi - α)^2 + ...   (1)

    By definition, α = g(α) and the fixed point iteration computes xi+1 = g(xi). Thus (1) becomes

    xi+1 - α = g'(α)(xi - α) + (g''(α)/2!)(xi - α)^2 + ...   (2)

  • Analysis of Convergence Rate. When xi is very close to the solution, write εi = xi - α, so (2) becomes

    εi+1 = g'(α) εi + (g''(α)/2!) εi^2 + ...

    Suppose g^(n) exists and the n-th term is the first non-zero term; then

    εi+1 ≈ (g^(n)(α)/n!) εi^n

    and the process is of n-th order. Thus, to analyze the convergence rate, we can find the smallest n such that g^(n)(α) ≠ 0.