Applied Mathematics
Vol. 8, No. 10 (2017), Article ID: 79773, 10 pages
DOI: 10.4236/am.2017.810103

Newton’s Method and an Exact Opposite That Average into Halley’s Method

Isaac Fried

Department of Mathematics, Boston University, Boston, MA, USA

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: July 25, 2017; Accepted: October 20, 2017; Published: October 23, 2017

ABSTRACT

This note is mainly concerned with the creation of oppositely converging and alternatingly converging iterative methods that have the added advantage of providing ever tighter bounds on the targeted root. By a slight parametric perturbation of Newton’s method we create an oscillating super-linear method approaching the targeted root alternatingly from above and from below. Further extension of Newton’s method creates an oppositely converging quadratic counterpart to it. This new method requires a second derivative, but for it, the average of the two opposite methods rises to become a cubic method. This note also examines the creation of high-order iterative methods by a repeated specification of undetermined coefficients.

Keywords:

Iterative Methods, Alternating Methods, Opposite Methods, Root Bounds, Undetermined Coefficients

1. Introduction

Iterative methods [1] [2] [3] for locating the roots of nonlinear equations are of further appeal if they converge oppositely [4] or alternatingly [5] [6], so as to establish bounds on, or to bracket, the targeted root.

In this note we are mainly concerned with the creation of oppositely and alternatingly converging iterative methods for ever tighter bounds on the targeted root. By a slight parametric perturbation of Newton’s method we create an alternating super-linear method approaching the targeted root in turns from above and from below. Further extension of Newton’s method [7] creates an oppositely converging quadratic counterpart to it. This new method requires a second derivative, but for it, the average of the two opposite methods rises to become a cubic method.

This note also examines the creation of high-order iterative methods by a repeated evaluation of undetermined coefficients [7].

2. The Function

At the heart of this note lies the seeking of simple root $a$ of function $f(x)$, $f(a) = 0$, a function which we assume throughout the paper to have the expanded form

$$f(x) = A(x-a) + B(x-a)^2 + C(x-a)^3 + D(x-a)^4 + E(x-a)^5 + \cdots, \quad A \neq 0 \tag{1}$$

such that

$$f(a) = 0, \quad A = f'(a), \quad B = \frac{1}{2!}f''(a), \quad C = \frac{1}{3!}f'''(a), \quad D = \frac{1}{4!}f^{(4)}(a), \quad E = \frac{1}{5!}f^{(5)}(a) \tag{2}$$

and so on.

The condition $A = f'(a) \neq 0$ guarantees that root $a$ of $f(x)$ is simple, or, otherwise said, of multiplicity one.

The one-step iterative method is of the general, and expanded, form

$$x_1 = F(x_0), \quad x_1 = F(a) + F'(a)(x_0-a) + \frac{1}{2!}F''(a)(x_0-a)^2 + \frac{1}{3!}F'''(a)(x_0-a)^3 + \cdots \tag{3}$$

and, evidently, if $F(a) = a$, namely, if $a$ is a fixed-point of iteration function $F(x)$ in Equation (3), and if, further, $F'(a) = 0$, then iterative method (3) converges quadratically to $a$. Higher order derivatives of $F(x)$ being zero at point $a$ moves iterative method (3) to still higher orders of convergence to fixed-point $a = F(a)$.

3. Newton’s Method

With comprehensiveness in mind we actually derive this classical, mainstay iterative method of numerical analysis. We start by generally stating it as

$$x_1 = x_0 + P f(x_0), \quad P \neq 0, \quad x_1 = F(x_0), \quad F(x) = x + P f(x), \quad f(a) = 0, \quad F(a) = a \tag{4}$$

for any value of free parameter $P$. By the fact that $f(a) = 0$, point $a$ is a fixed-point of iteration function $F(x)$ of method (4), that is to say, $F(a) = a$.

Power series expansion of x 1 yields

$$x_1 - a = (1 + AP)(x_0 - a) + BP(x_0 - a)^2 + O((x_0 - a)^3). \tag{5}$$

Equation (5) suggests that the choice $P = -1/A$ should result in a quadratic method. However, since $A = f'(a)$, and since $a$ is unknown, we replace it by the known approximation $x_0$, actually take $P = -1/f'(x_0)$, to have

$$x_1 = x_0 - \frac{f_0}{f'_0}, \quad \text{or in short} \quad x_1 = x_0 - u_0, \quad u = \frac{f(x)}{f'(x)} \tag{6}$$

which happens to be still quadratic; convergence of Newton’s method to a simple, namely of multiplicity one ($f'(a) \neq 0$), root is verified to be of second order

$$x_1 - a = \frac{B}{A}(x_0 - a)^2 + \frac{2(AC - B^2)}{A^2}(x_0 - a)^3 + O((x_0 - a)^4) \tag{7}$$

where $A$, $B$, $C$ are as in Equations (1) and (2).

For example, for $f = x + 10x^2$, we generate by Equation (6), starting with $x_0 = 1$, the converging $x_1$ sequence

$$\{1,\ 4.8 \times 10^{-1},\ 2.2 \times 10^{-1},\ 8.7 \times 10^{-2},\ 2.8 \times 10^{-2},\ 5.0 \times 10^{-3},\ 2.2 \times 10^{-4},\ 5.0 \times 10^{-7},\ 2.5 \times 10^{-12},\ 6.4 \times 10^{-23}\} \tag{8}$$
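The sequence in Equation (8) is easy to reproduce; the short Python sketch below (the helper names f and fp are our own) applies the Newton step of Equation (6) to $f(x) = x + 10x^2$ from $x_0 = 1$.

```python
# A minimal sketch of Newton's method, Equation (6), on f(x) = x + 10x^2,
# started at x0 = 1; it reproduces the sequence of Equation (8).

def f(x):
    return x + 10.0 * x**2

def fp(x):            # the derivative f'(x)
    return 1.0 + 20.0 * x

x = 1.0
for _ in range(9):
    print(f"{x:.1e}")
    x = x - f(x) / fp(x)   # the Newton step of Equation (6)
print(f"{x:.1e}")
```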

4. Extrapolation to the Limit

Let $x_0$, $x_1 = x_0 - u_0$, $x_2 = x_1 - u_1$, $u = f/f'$ be already near root $a$; then, by Equation (7)

$$x_1 - a = \frac{B}{A}(x_0 - a)^2 \quad \text{and} \quad x_2 - a = \frac{B}{A}(x_1 - a)^2 \tag{9}$$

nearly. Eliminating B / A from the two equations we are left with

$$(-2x_0 + 3x_1 - x_2)a^2 + (x_0^2 - 3x_1^2 + 2x_0 x_2)a + (x_1^3 - x_0^2 x_2) = 0 \tag{10}$$

which we solve for an approximate a, as

$$x_3 = a = x_0 - \frac{3 + \sqrt{1 + 4\rho}}{2(2 - \rho)}\,u_0, \quad u_0 = \frac{f(x_0)}{f'(x_0)} \tag{11}$$

where

$$\rho = u_1/u_0 = \frac{B}{A}(x_0 - a) + O((x_0 - a)^2). \tag{12}$$

The square root in Equation (11) may be approximated as

$$\sqrt{1 + 4\rho} = 1 + 2\rho - 2\rho^2 + 4\rho^3 - 10\rho^4 + 28\rho^5 - 84\rho^6 \pm \cdots \tag{13}$$

and

$$x_3 - a = \frac{2B^2(B^2 - AC)}{A^4}(x_0 - a)^5 + O((x_0 - a)^6). \tag{14}$$

For example, for $f(x) = x + x^2 + x^3$, and starting with $x_0 = 0.2$, we compute $x_1 = 0.0368$, $x_2 = 0.00135$; and then from Equation (11), $x_3 = 0.000112$. Another such cycle starting with $x_0 = x_3$ produces a next $x_3 = 1.36 \times 10^{-20}$.
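A minimal Python sketch of this extrapolation cycle, with helper names of our own choosing, follows; it reproduces the values quoted above for $f(x) = x + x^2 + x^3$.

```python
import math

# A sketch of the extrapolation of Section 4: two Newton corrections u0, u1
# give rho of Equation (12), and Equation (11) then extrapolates to x3.

def f(x):  return x + x**2 + x**3
def fp(x): return 1.0 + 2.0 * x + 3.0 * x**2    # f'(x)

def extrapolated_step(x0):
    u0 = f(x0) / fp(x0)             # first Newton correction
    x1 = x0 - u0
    u1 = f(x1) / fp(x1)             # second Newton correction
    rho = u1 / u0                   # Equation (12)
    return x0 - (3.0 + math.sqrt(1.0 + 4.0 * rho)) / (2.0 * (2.0 - rho)) * u0

x3 = extrapolated_step(0.2)         # about 1.1e-4, the 0.000112 quoted above
print(x3, extrapolated_step(x3))    # a second cycle lands many orders of
                                    # magnitude closer, near the limits of
                                    # double precision
```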

5. Estimating the Leading Term B/A

From Equation (9) we have that, approximately

$$\frac{B}{A} = \frac{x_2 - x_3}{(x_1 - x_3)^2} \tag{15}$$

if x 3 is already close to a.

We pick from the list in Equation (8) the values

$$x_1 = 5.0 \times 10^{-7}, \quad x_2 = 2.5 \times 10^{-12}, \quad x_3 = 6.4 \times 10^{-23} \tag{16}$$

and have from Equation (15) that B / A = 10 .
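A quick numerical check, sketched in Python with variable names of our own:

```python
# Equation (15) applied to the three tail values of Equation (16);
# for f(x) = x + 10x^2 the root is a = 0, A = f'(0) = 1, B = f''(0)/2 = 10.
x1, x2, x3 = 5.0e-7, 2.5e-12, 6.4e-23
print((x2 - x3) / (x1 - x3) ** 2)   # prints 10, to rounding
```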

6. Hopping to the Other Side of the Root

If instead of $x_1 = x_0 - f_0/f'_0$ of Equation (6) we vault over by the double step

$$x_1 = x_0 - \frac{2f_0}{f'_0} \tag{17}$$

then we land at

$$x_1 = 2a - x_0 + \frac{2B}{A}(x_0 - a)^2 + O((x_0 - a)^3) \tag{18}$$

implying that asymptotically, as $x_0 \to a$,

$$x_1 = 2a - x_0, \quad \text{or} \quad x_1 = a - \epsilon \quad \text{if} \quad x_0 = a + \epsilon \tag{19}$$

and this is good to know.

For example, seeking a root of $f(x) = x + 10x^2$ we start with $x_0 = 0.2$, which is above the root $a = 0$, and using $x_1 = x_0 - 2f_0/f'_0$ we obtain $x_1 = -0.04$, which is, indeed, under the root $a = 0$.
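A two-line Python sketch (function names ours) makes the hop visible:

```python
# The single Newton step of Equation (6) versus the double step of Equation (17)
# for f(x) = x + 10x^2 from x0 = 0.2; the root is a = 0.
def f(x):  return x + 10.0 * x**2
def fp(x): return 1.0 + 20.0 * x      # f'(x)

x0 = 0.2
print(x0 - f(x0) / fp(x0))            # 0.08, still above the root
print(x0 - 2.0 * f(x0) / fp(x0))      # -0.04, now below the root
```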

7. A Chord Method

Each step of Newton’s method provides us with values of $f$ and $f'$, which can be used to polynomially extrapolate $f(x)$ to zero. A linear extrapolation through the pair of points $(x_0, f_0)$, $(x_1, f_1)$ results in the line

$$f(x) = \frac{f_0 - f_1}{x_0 - x_1}x + \frac{f_1 x_0 - f_0 x_1}{x_0 - x_1}. \tag{20}$$

We set f ( x ) = 0 and obtain the extrapolated value

$$x_2 = \frac{f_0 x_1 - f_1 x_0}{f_0 - f_1} \tag{21}$$

which may now be repeated to form a chord iterative method.

For example, from the two values $x_0 = 5 \times 10^{-7}$, $x_1 = 2.5 \times 10^{-12}$ taken from the list in Equation (8) for $f(x) = x + 10x^2$, we compute from Equation (21) $x_2 = 1.25 \times 10^{-17}$.

Starting with $x_0 = 5 \times 10^{-3}$, $x_1 = 2.2 \times 10^{-4}$ we repeatedly compute

$$x_2 = \{5 \times 10^{-3},\ 2.2 \times 10^{-4},\ 1.0 \times 10^{-5},\ 2.3 \times 10^{-8},\ 2.4 \times 10^{-12},\ 5.5 \times 10^{-19},\ 1.3 \times 10^{-29}\} \tag{22}$$

with no need for any further derivative function evaluation.
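The repeated chord computation is sketched below in Python (our own arrangement); it continues the list of Equation (22) from its two starting values using only evaluations of $f$.

```python
# The chord step of Equation (21) iterated for f(x) = x + 10x^2,
# restarted from x0 = 5e-3, x1 = 2.2e-4; no derivatives are needed.
def f(x): return x + 10.0 * x**2

x0, x1 = 5.0e-3, 2.2e-4
for _ in range(5):
    x2 = (f(x0) * x1 - f(x1) * x0) / (f(x0) - f(x1))   # Equation (21)
    print(f"{x2:.1e}")
    x0, x1 = x1, x2
```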

Theoretically, by power series expansion

$$x_2 - a = \frac{B^2}{A^2}(x_0 - a)^3 + O((x_0 - a)^4). \tag{23}$$

According to Equation (23), $x_0 - a$ and $x_2 - a$ are ultimately of the same sign.

8. A Rational Higher Order Method

To have this we start with

$$x_1 = x_0 - \frac{f_0}{f'_0}\,\frac{1}{1 + Qf_0} \tag{24}$$

for open parameter Q. Power series expansion yields

$$x_1 - a = \frac{1}{A}(B + A^2 Q)(x_0 - a)^2 + O((x_0 - a)^3). \tag{25}$$

To have a cubic method we take

$$Q = -\frac{B}{A^2} \tag{26}$$

with A and B as in Equation (2), but evaluated at x 0 rather than at a, to have Halley’s method

$$x_1 = x_0 - \frac{2f_0 f'_0}{2{f'_0}^2 - f_0 f''_0} \tag{27}$$

which is cubic

$$x_1 - a = \frac{B^2 - AC}{A^2}(x_0 - a)^3 + O((x_0 - a)^4). \tag{28}$$
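A brief Python sketch of Halley’s method, Equation (27), on the recurring example $f(x) = x + 10x^2$ (the starting guess is our own choice) shows the expected tripling of correct digits per step:

```python
# Halley's method, Equation (27), on f(x) = x + 10x^2 from x0 = 0.2.
def f(x):   return x + 10.0 * x**2
def fp(x):  return 1.0 + 20.0 * x     # f'(x)
def fpp(x): return 20.0               # f''(x)

x = 0.2
for _ in range(4):
    x = x - 2.0 * f(x) * fp(x) / (2.0 * fp(x) ** 2 - f(x) * fpp(x))
    print(f"{x:.3e}")                 # correct digits roughly triple per step
```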

9. A Polynomial Higher Order Method

Here we start with the quadratic in f

$$x_1 = x_0 - \frac{f_0}{f'_0}(1 + Qf_0) \tag{29}$$

for undetermined parameter Q. Power series expansion of x 1 yields

$$x_1 - a = \frac{1}{A}(B - A^2 Q)(x_0 - a)^2 + O((x_0 - a)^3). \tag{30}$$

To have a cubic method we take

$$Q = \frac{B}{A^2} \tag{31}$$

with A and B evaluated at x 0 rather than at a as in Equation (2), and

$$x_1 = x_0 - \frac{1}{f'_0}f_0 - \frac{f''_0}{2{f'_0}^3}f_0^2, \tag{32}$$

which is verified to be still cubic

$$x_1 - a = \frac{2B^2 - AC}{A^2}(x_0 - a)^3 + O((x_0 - a)^4). \tag{33}$$
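For comparison with the rational method of Section 8, here is a Python sketch of the polynomial method (32) on the same example $f(x) = x + 10x^2$ (the starting guess is ours):

```python
# The third-order polynomial method of Equation (32) on f(x) = x + 10x^2.
def f(x):   return x + 10.0 * x**2
def fp(x):  return 1.0 + 20.0 * x     # f'(x)
def fpp(x): return 20.0               # f''(x)

x = 0.2
for _ in range(4):
    x = x - f(x) / fp(x) - fpp(x) * f(x) ** 2 / (2.0 * fp(x) ** 3)
    print(f"{x:.3e}")                 # cubic approach to the root a = 0
```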

10. An Alternating Super-Linear Method

We take Newton’s method of Equation (6) and parametrically perturb it into

$$x_1 = x_0 - \frac{f_0}{f'_0}(1 + \epsilon) \tag{34}$$

for some parameter ϵ , to have

$$x_1 - a = -\epsilon(x_0 - a) + (1 + \epsilon)\frac{B}{A}(x_0 - a)^2 + O((x_0 - a)^3) \tag{35}$$

which is super-linear if $|\epsilon| \ll 1$. Moreover, for a positive $\epsilon$ convergence here is ultimately alternating: $x_0 - a > 0$ and $x_1 - a < 0$ are of opposite signs.

For example, for $f(x) = x + 10x^2$, $x_0 = 0.2$, $\epsilon = 1/25$, we compute from Equation (34)

$$x_1 = \{2 \times 10^{-1},\ 7.5 \times 10^{-2},\ 2.0 \times 10^{-2},\ 2.2 \times 10^{-3},\ -4.0 \times 10^{-5},\ 1.6 \times 10^{-6},\ -6.4 \times 10^{-8}\} \tag{36}$$
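A short Python sketch of iteration (34) (function names ours) reproduces this behavior; printed with explicit signs, the iterates eventually alternate about the root $a = 0$, bracketing it as in the list of Equation (36).

```python
# The perturbed Newton step of Equation (34), eps = 1/25, on f(x) = x + 10x^2.
def f(x):  return x + 10.0 * x**2
def fp(x): return 1.0 + 20.0 * x      # f'(x)

eps = 1.0 / 25.0
x = 0.2
for _ in range(6):
    x = x - (1.0 + eps) * f(x) / fp(x)
    print(f"{x:+.1e}")                # signs eventually alternate about a = 0
```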

11. An Opposite Quadratic Method

To create a method oppositely converging to Newton’s method of Equations (6) and (7) we start with the perturbed Newton’s method

$$x_1 = x_0 - \frac{f_0}{f'_0}(1 + Qf_0) \tag{37}$$

or in power series form

$$x_1 - a = \frac{1}{A}(B - A^2 Q)(x_0 - a)^2 + \frac{2}{A^2}(AC - B^2)(x_0 - a)^3 + O((x_0 - a)^4). \tag{38}$$

To have a quadratic method opposite to Newton’s we set, in view of Equation (7)

$$\frac{1}{A}(B - A^2 Q) = -\frac{B}{A}, \quad \text{or} \quad Q = \frac{2B}{A^2} \tag{39}$$

resulting in

$$x_1 = x_0 - \frac{f_0}{f'_0}\left(1 + \frac{f_0 f''_0}{{f'_0}^2}\right) \tag{40}$$

which is, indeed, opposite to Newton’s

$$x_1 - a = -\frac{B}{A}(x_0 - a)^2 + \frac{2}{A^2}(3B^2 - 2AC)(x_0 - a)^3 + O((x_0 - a)^4). \tag{41}$$

Compare Equation (41) with Equation (7).
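To make the opposition concrete, the following Python sketch (our own arrangement, with a starting point of our choosing) runs Newton’s step (6) and its counterpart (40) side by side on $f(x) = x + 10x^2$; near the root the two iterates straddle $a = 0$.

```python
# Newton's method, Equation (6), and the opposite method, Equation (40), run
# side by side on f(x) = x + 10x^2; each printed pair brackets the root a = 0.
def f(x):   return x + 10.0 * x**2
def fp(x):  return 1.0 + 20.0 * x     # f'(x)
def fpp(x): return 20.0               # f''(x)

def newton(x):   return x - f(x) / fp(x)
def opposite(x): return x - f(x) / fp(x) * (1.0 + f(x) * fpp(x) / fp(x) ** 2)

xn = xo = 0.05
for _ in range(4):
    xn, xo = newton(xn), opposite(xo)
    print(f"{xn:+.2e}  {xo:+.2e}")    # opposite signs: the root lies between
```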

12. The Average of the Opposites Is a Cubic Method

Method (40) requires $f''$ above the mere $f$ and $f'$ of Newton’s method, but for this, the average of the two opposite methods rises to become the cubic method

$$x_1 = \frac{1}{2}\left(x_0 - \frac{f_0}{f'_0}\left(1 + \frac{f_0 f''_0}{{f'_0}^2}\right)\right) + \frac{1}{2}\left(x_0 - \frac{f_0}{f'_0}\right) = x_0 - \frac{f_0}{f'_0} - \frac{f''_0 f_0^2}{2{f'_0}^3} \tag{42}$$

of Equation (32).
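The identity in Equation (42) can be checked numerically in a few Python lines (helper names ours):

```python
# The average of the Newton step (6) and the opposite step (40) coincides,
# to machine rounding, with the single cubic formula of Equations (32)/(42).
def f(x):   return x + 10.0 * x**2
def fp(x):  return 1.0 + 20.0 * x     # f'(x)
def fpp(x): return 20.0               # f''(x)

def newton(x):   return x - f(x) / fp(x)
def opposite(x): return x - f(x) / fp(x) * (1.0 + f(x) * fpp(x) / fp(x) ** 2)
def averaged(x): return 0.5 * (newton(x) + opposite(x))
def cubic(x):    return x - f(x) / fp(x) - fpp(x) * f(x) ** 2 / (2.0 * fp(x) ** 3)

x = 0.05
print(averaged(x), cubic(x))          # the two values agree to rounding
```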

13. A Quartic Method

Hereby we advance another undetermined coefficients strategy for constructing high order iterative methods [8] [9] [10] [11] [12] to locate root a of f ( x ) , f ( a ) = 0 .

We start with the polynomial iteration function

$$x_1 = x_0 + Pf_0 + Qf_0^2 + Rf_0^3 \tag{43}$$

of undetermined coefficients P , Q , R , and expand x 1 as

$$x_1 - a = (1 + AP)(x_0 - a) + (BP + A^2 Q)(x_0 - a)^2 + (CP + 2ABQ + A^3 R)(x_0 - a)^3 + O((x_0 - a)^4) \tag{44}$$

The coefficients of $(x_0 - a)$ up to $(x_0 - a)^3$ are made zero with

$$P = -\frac{1}{A}, \quad Q = \frac{B}{A^3}, \quad R = \frac{AC - 2B^2}{A^5} \tag{45}$$

to have a quartic method.

However, as root a is unavailable we replace it in P , Q , R by x 0 to have the variable

$$P = -\frac{1}{f'_0}, \quad Q = \frac{f''_0}{2{f'_0}^3}, \quad R = \frac{f'_0 f'''_0 - 3{f''_0}^2}{6{f'_0}^5}. \tag{46}$$

But, with this replacement of a by x 0 method (43) falls back to a mere quadratic

$$x_1 - a = \frac{2B}{A}(x_0 - a)^2 + O((x_0 - a)^3). \tag{47}$$

To repair this retreat in the order of convergence we propose to further correct $x_1$ of Equations (43) and (46) into

$$x_1 = x_0 - \frac{f_0}{f'_0} + Z_1 Qf_0^2 + Z_2 Rf_0^3 \tag{48}$$

for new parameters $Z_1$, $Z_2$. Power series expansion reveals that method (48) is restored to fourth order with $Z_1 = -1$, $Z_2 = 1$, to have

$$x_1 = x_0 - \frac{f_0}{f'_0} - Qf_0^2 + Rf_0^3, \quad Q = \frac{f''_0}{2{f'_0}^3}, \quad R = \frac{f'_0 f'''_0 - 3{f''_0}^2}{6{f'_0}^5} \tag{49}$$

for which

$$x_1 - a = \frac{5B^3 - 5ABC + A^2 D}{A^3}(x_0 - a)^4 + O((x_0 - a)^5). \tag{50}$$
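A Python sketch of the quartic method (49), again on the example $f(x) = x + 10x^2$ with a starting guess of our own:

```python
# The fourth-order method of Equation (49) on f(x) = x + 10x^2 from x0 = 0.2.
def f(x):    return x + 10.0 * x**2
def fp(x):   return 1.0 + 20.0 * x    # f'(x)
def fpp(x):  return 20.0              # f''(x)
def fppp(x): return 0.0               # f'''(x)

def quartic_step(x):
    Q = fpp(x) / (2.0 * fp(x) ** 3)                              # Q of (46)
    R = (fp(x) * fppp(x) - 3.0 * fpp(x) ** 2) / (6.0 * fp(x) ** 5)  # R of (46)
    return x - f(x) / fp(x) - Q * f(x) ** 2 + R * f(x) ** 3      # Equation (49)

x = 0.2
for _ in range(3):
    x = quartic_step(x)
    print(f"{x:.3e}")                 # rapid, fourth-order approach to a = 0
```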

14. The Rational Method Revisited

We start here with

$$x_1 = x_0 + \frac{Pf_0}{1 + Qf_0} \tag{51}$$

and expand it into

$$x_1 - a = (1 + AP)(x_0 - a) + P(B - A^2 Q)(x_0 - a)^2 + O((x_0 - a)^3). \tag{52}$$

To have a cubic method we set

$$P = -\frac{1}{A}, \quad Q = \frac{B}{A^2} \tag{53}$$

or

$$P = -\frac{1}{f'_0}, \quad Q = \frac{f''_0}{2{f'_0}^2}. \tag{54}$$

Inserting these last P and Q values into Equation (51) we have the disappointing

$$x_1 - a = \frac{2B}{A}(x_0 - a)^2 + O((x_0 - a)^3). \tag{55}$$

We retry for a cubic method by writing, still for P and Q of Equation (54)

$$x_1 = x_0 + \frac{Z_1 Pf_0}{1 + Z_2 Qf_0} \tag{56}$$

for which we get by power series expansion

$$x_1 - a = (1 - Z_1)(x_0 - a) + \frac{B}{A}Z_1(1 + Z_2)(x_0 - a)^2 + O((x_0 - a)^3). \tag{57}$$

We take $Z_1 = 1$, $Z_2 = -1$, and have the method

$$x_1 = x_0 + \frac{Pf_0}{1 - Qf_0}, \quad P = -\frac{1}{f'_0}, \quad Q = \frac{f''_0}{2{f'_0}^2} \tag{58}$$

which is now restored to third order of convergence

$$x_1 - a = \frac{B^2 - AC}{A^2}(x_0 - a)^3 + O((x_0 - a)^4). \tag{59}$$
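A small numerical contrast, sketched in Python on the recurring example $f(x) = x + 10x^2$ (the starting point is ours), shows the difference between the naive substitution and the corrected form (58):

```python
# The naive use of Equation (51) with P, Q of Equation (54), versus the
# corrected method of Equation (58): the first is merely quadratic, as in
# Equation (55); the second converges with Halley's cubic speed, Equation (59).
def f(x):   return x + 10.0 * x**2
def fp(x):  return 1.0 + 20.0 * x     # f'(x)
def fpp(x): return 20.0               # f''(x)

def naive(x):       # Equation (51) with Equation (54) inserted as is
    P, Q = -1.0 / fp(x), fpp(x) / (2.0 * fp(x) ** 2)
    return x + P * f(x) / (1.0 + Q * f(x))

def corrected(x):   # Equation (58), i.e. Z1 = 1, Z2 = -1 in Equation (56)
    P, Q = -1.0 / fp(x), fpp(x) / (2.0 * fp(x) ** 2)
    return x + P * f(x) / (1.0 - Q * f(x))

xa = xb = 0.05
for _ in range(3):
    xa, xb = naive(xa), corrected(xb)
    print(f"{xa:.2e}  {xb:.2e}")      # the corrected column races ahead
```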

15. Undetermined Variable Factors

Here we start with

$$x_1 = x_0 + P(x_0)f(x_0), \quad f(a) = 0 \tag{60}$$

and expand x 1 to have

$$x_1 - a = Pf + (1 + Pf' + P'f)(x_0 - a) + \left(\frac{1}{2}Pf'' + P'f' + \frac{1}{2}P''f\right)(x_0 - a)^2 + O((x_0 - a)^3) \tag{61}$$

in which $f = f(a)$, $P = P(a)$, $P' = P'(a)$, $P'' = P''(a)$, with $f'$, $f''$ likewise evaluated at $a$. To have a cubic method, and considering that $f(a) = 0$, we impose the two conditions:

$$\begin{bmatrix} f' & f \\ \frac{1}{2}f'' & f' \end{bmatrix}\begin{bmatrix} P \\ P' \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix} \tag{62}$$

and solve system (62) for P as

$$P = \frac{\det\begin{bmatrix} -1 & f \\ 0 & f' \end{bmatrix}}{\det\begin{bmatrix} f' & f \\ \frac{1}{2}f'' & f' \end{bmatrix}} = -\frac{2f'}{2{f'}^2 - ff''} \tag{63}$$

Since root a is unknown we evaluate P instead at x 0 and have iterative method (60) in the form

$$x_1 = x_0 - \frac{2f_0 f'_0}{2{f'_0}^2 - f_0 f''_0} \tag{64}$$

which we recognize as being the cubic method of Halley

$$x_1 - a = \frac{B^2 - AC}{A^2}(x_0 - a)^3 + O((x_0 - a)^4). \tag{65}$$
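The algebra of Equations (62) and (63) can also be confirmed symbolically; the sympy sketch below (the symbol names f1, f2, standing for $f'$, $f''$, are ours) solves the 2-by-2 system for $P$.

```python
import sympy as sp

# Solving system (62) for P and P'; the result for P is equivalent to
# P = -2 f' / (2 f'^2 - f f''), Equation (63), which turns Equation (60)
# into Halley's method, Equation (64).
f, f1, f2, P, P1 = sp.symbols("f f1 f2 P P1")    # f1 = f', f2 = f'', P1 = P'

sol = sp.solve([f1 * P + f * P1 + 1,                      # first row of (62)
                sp.Rational(1, 2) * f2 * P + f1 * P1],    # second row of (62)
               [P, P1], dict=True)[0]
print(sp.simplify(sol[P]))
```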

16. Conclusions

In Section 6, we have demonstrated that the double-step Newton’s method places the next point on the other side of the root. Passing a line through two such points we create a chord method as that of Section 7.

A slight perturbation of Newton’s method, as in Equation (34), creates a super-linear method of alternating convergence, as in Section 10. By a further modification of Newton’s method we have created a quadratic method opposite to Newton’s. The average of these two opposite methods is a cubic method, as shown in Section 12.

In Section 13, a quartic method is successfully created by repeated undetermined coefficients.

Cite this paper

Fried, I. (2017) Newton’s Method and an Exact Opposite That Average into Halley’s Method. Applied Mathematics, 8, 1427-1436. https://doi.org/10.4236/am.2017.810103

References

1. Ostrowski, A. (1960) Solution of Equations and Systems of Equations. Academic Press, New York.

2. Householder, A.S. (1970) The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill, New York.

3. Traub, J.F. (1977) Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New York.

4. Fried, I. (2009) Oppositely Converging Newton-Raphson Method for Nonlinear Equilibrium Problems. International Journal for Numerical Methods in Engineering, 79, 375-378. https://doi.org/10.1002/nme.2574

5. Fried, I. (2013) High-Order Iterative Bracketing Methods. International Journal for Numerical Methods in Engineering, 94, 708-714. https://doi.org/10.1002/nme.4467

6. Fried, I. (2014) Effective High-Order Iterative Methods via the Asymptotic Form of the Taylor-Lagrange Remainder. Journal of Applied Mathematics, 2014, Article ID: 108976. https://doi.org/10.1155/2014/108976

7. Chun, C. and Neta, B. (2008) Some Modification of Newton’s Method by the Method of Undetermined Coefficients. Computers and Mathematics with Applications, 56, 2528-2538. https://doi.org/10.1016/j.camwa.2008.05.005

8. Steffensen, J.F. (1933) Remarks on Iteration. Scandinavian Actuarial Journal, 1933, 64-72. https://doi.org/10.1080/03461238.1933.10419209

9. King, R.F. (1973) A Family of Fourth-Order Methods for Nonlinear Equations. SIAM Journal on Numerical Analysis, 10, 876-879. https://doi.org/10.1137/0710072

10. Hansen, E. and Patrick, M. (1977) A Family of Root Finding Methods. Numerische Mathematik, 27, 257-269. https://doi.org/10.1007/BF01396176

11. Neta, B. (1979) A Sixth-Order Family of Methods for Nonlinear Equations. International Journal of Computer Mathematics, 7, 157-161. https://doi.org/10.1080/00207167908803166

12. Popovski, D.B. (1981) A Note on Neta’s Family of Sixth-Order Methods for Solving Equations. International Journal of Computer Mathematics, 10, 91-93. https://doi.org/10.1080/00207168108803269