
Chapter 2

Second-order variants of Newton’smethod

2.1 Introduction

The aim of this chapter¹ is to develop new iterative methods for finding simple roots of
scalar nonlinear equations, unconstrained optimization problems and systems of nonlinear
equations. The methods are found to yield better performance than Newton's method and
they overcome its limitations.

¹The main contents of this chapter have been published in [204], [205], [206].

2.2 Iterative methods for solving nonlinear equations

One of the most important and challenging problems in computational mathematics is to
compute efficiently the approximate roots of nonlinear equations (1.1). Generally, iterative
methods are used to solve such equations. All these methods require prior knowledge of one
or more initial guesses for the required root. Once an initial interval is known to contain a
root, several classical procedures such as the Bisection method, the Regula-falsi method, the
Secant method, Steffensen's method (see [1, 7, 13]) and Newton's method [7, 11–16] are
available to refine it further. The Bisection and Regula-falsi methods are globally convergent
and have a linear rate of convergence; the Secant method, on the other hand, is superlinearly
convergent, whereas Steffensen's and Newton's methods are quadratically convergent.

Out of these methods, Newton's method [7, 11–16] is one of the most fundamental tools
in computational mathematics. Newton's method is probably the simplest, most flexible and
most frequently used algorithm in numerical analysis, and is given by

\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. \tag{2.1}
\]

Although, Newton’s method is often very efficient, but still there are many situations

where it performs poorly. There is no general convergence criterion for Newton’s method.
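For reference, iteration (2.1) can be written in a few lines; the following Python sketch is our illustration (not the thesis's code), assuming f and df are callables evaluating f and f':

def newton(f, df, x0, tol=1e-15, max_iter=100):
    # Classical Newton iteration (2.1); breaks down when f'(x_n) = 0.
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0.0:
            raise ZeroDivisionError("f'(x_n) = 0: Newton's method fails")
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x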

2.2.1 Geometrically constructed families of Newton’s method

The purpose of this work is to eliminate the defects of Newton's method by simple
modifications of the iteration processes. Numerical results indicate that the proposed
iterative methods are effective and comparable to the well-known Newton's method.
Furthermore, unlike Newton's method, the presented techniques have guaranteed convergence,
and they are just as simple.

These methods are derived by implementing approximations through a straight line, a
parabola and an elliptical curve in the vicinity of the required root.

Let
\[
y = f(x) \tag{2.2}
\]
represent the graph of the function f(x) in equation (1.1).

(i) Approximation by a straight line

Consider the equation of a straight line having slope equal to α f(x_0) and passing through
the point (x_0, 0), in the form
\[
y = \alpha f(x_0)(x - x_0), \tag{2.3}
\]
where x_0 is an initial guess to the required root r of equation (1.1) and α ∈ ℝ.

Let
\[
x_1 = x_0 + h, \quad |h| \ll 1, \tag{2.4}
\]
be a better approximation to the required root. Assume that the point of intersection of the
straight line (2.3) with the graph (2.2) is the point (x_1, f(x_0 + h)). The straight line (2.3),
passing through the point (x_1, f(x_0 + h)), then takes the form
\[
f(x_0 + h) = h\alpha f(x_0). \tag{2.5}
\]


Expanding the left-hand side by Taylor's series about the point x = x_0 and retaining terms
up to first order in h, one can have
\[
f(x_0) + h f'(x_0) + O(h^2) = h\alpha f(x_0). \tag{2.6}
\]
Rearranging (2.6), one gets
\[
f(x_0) + h f'(x_0) - h\alpha f(x_0) = 0. \tag{2.7}
\]
Solving equation (2.7) for h, one can obtain
\[
h = -\frac{f(x_0)}{f'(x_0) - \alpha f(x_0)}. \tag{2.8}
\]
Putting (2.8) in (2.4), one gets the first approximation to the required root as
\[
x_1 = x_0 - \frac{f(x_0)}{f'(x_0) - \alpha f(x_0)}. \tag{2.9}
\]

Now repeating this process until the straight line (2.3) coincides with the x-axis, a general
formula for successive approximations is given by
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \alpha f(x_n)}. \tag{2.10}
\]

This is a new one-parameter family of Newton's method. In order to obtain quadratic
convergence of the method, the sign of the parameter α in the denominator should be chosen
so that the denominator is largest in magnitude. Unlike Newton's formula, this formula is
well defined even if f'(x_n) is zero.

Further, it can be seen that this family of Newton's method gives a very good approximation
to the required root when |α| is small. This is because, for small values of α, the slope
(angle of inclination) of the straight line (2.3) with the x-axis becomes smaller; i.e., as α → 0,
the straight line tends to the x-axis. This means that the next approximation moves faster
towards the desired root. For large values of α, formula (2.10) still works but takes more
iterations than for small values of α.
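As an illustration only (not part of the thesis), the sign rule for α can be implemented as follows in Python; the function name modified_newton and its arguments are our own notation, with f and df assumed callables for f and f':

import math

def modified_newton(f, df, x0, alpha_mag=1.0, tol=1e-15, max_iter=100):
    # One step: x_{n+1} = x_n - f(x_n) / (f'(x_n) - alpha*f(x_n)), eq. (2.10).
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        # Choose the sign of alpha so that alpha*f(x_n) opposes f'(x_n);
        # then |f' - alpha*f| = |f'| + |alpha*f| is largest in magnitude.
        alpha = -math.copysign(alpha_mag, dfx * fx)
        x_new = x - fx / (dfx - alpha * fx)
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

Setting alpha_mag = 0 recovers the classical Newton iteration (2.1).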

One can also derive the same formula (2.10) by considering an exponentially fitted
osculating straight line [206] of the form
\[
y = e^{-\alpha(x - x_0)}\left[a(x - x_0) + b\right], \tag{2.11}
\]


where ‘x0’ is an initial guess to the required root ‘r’ of an equation (1.1), ‘a’ and ‘b’ are

arbitrary constants to be determined by imposing the tangency conditions on y at x = x0 :

y(x0) = f (x0) and y′(x0) = f ′(x0). (2.12)

Next, Theorem 2.2.1 presents a mathematical proof for the order of convergence of the
proposed iterative formula (2.10).

Theorem 2.2.1. Let f : I ⊆ ℝ → ℝ have at least two continuous derivatives on an open
interval I enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess
x = x_0 is sufficiently close to r, f'(r) ≠ 0 and f'(x_n) − α f(x_n) ≠ 0 in I. Then the iteration
scheme defined by formula (2.10) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = (C_2 - \alpha)e_n^2 + O(e_n^3), \tag{2.13}
\]
where e_n = x_n − r is the error at the nth iteration and C_k = \frac{1}{k!}\,\frac{f^{(k)}(r)}{f'(r)}, k = 2, 3, \ldots

Proof. Using Taylor’s series expansion for f (xn) about x = r and taking into account that

f (r) = 0 and f ′(r) 6= 0, we have

f (xn) = f ′(r)ben +C2e2n +C3e3

n +O(e4n)c, (2.14)

and

f ′(xn) = f ′(r)b1+2C2en +3C3e2n +O(e3

n)c. (2.15)

Making use of equations (2.14) and (2.15), one can have
\[
\frac{f(x_n)}{f'(x_n) - \alpha f(x_n)} = e_n - (C_2 - \alpha)e_n^2 + O(e_n^3). \tag{2.16}
\]
Substituting equation (2.16) in the formula (2.10), one obtains
\[
e_{n+1} = (C_2 - \alpha)e_n^2 + O(e_n^3). \tag{2.17}
\]
This completes the proof of the theorem.


(ii) Approximation by a parabola

Assume that equation (1.1) has a simple root r which is to be found, and let x_0 be an
initial guess to this root. On the same graph of f(x) given by equation (2.2), we sketch a
parabola
\[
y = \alpha^2 f(x_0)(x - x_0)^2, \tag{2.18}
\]
where α ∈ ℝ is a scaling parameter. The parabola (2.18) widens as |α| → 0 and narrows
down as |α| becomes large.

Let
\[
x_1 = x_0 + h, \quad |h| \ll 1, \tag{2.19}
\]
be a better approximation to the required root. Assume that one of the points of intersection
of this graph with the parabola (2.18) is the point (x_1, f(x_0 + h)). The parabola (2.18),
passing through the point (x_1, f(x_0 + h)), then takes the form
\[
f(x_0 + h) = \alpha^2 f(x_0)h^2. \tag{2.20}
\]

Expanding the left-hand side by Taylor's series about the point x = x_0 and ignoring terms
with second- and higher-order derivatives, one can have
\[
f(x_0) + h f'(x_0) + O(h^2) = \alpha^2 f(x_0)h^2. \tag{2.21}
\]
Rearranging (2.21), one ends up with the following quadratic equation:
\[
\alpha^2 f(x_0)h^2 - f'(x_0)h - f(x_0) = 0. \tag{2.22}
\]

Solving (2.22) for h, one can obtain
\[
h = -\frac{2f(x_0)}{f'(x_0) \pm \sqrt{\{f'(x_0)\}^2 + 4\alpha^2\{f(x_0)\}^2}}. \tag{2.23}
\]
Putting these values of h in (2.19), one gets the first approximation to the required root as
\[
x_1 = x_0 - \frac{2f(x_0)}{f'(x_0) \pm \sqrt{\{f'(x_0)\}^2 + 4\alpha^2\{f(x_0)\}^2}}. \tag{2.24}
\]

Now repeating this process until the parabola (2.18) coincides with the x-axis, a general
formula for successive approximations is given by
\[
x_{n+1} = x_n - \frac{2f(x_n)}{f'(x_n) \pm \sqrt{\{f'(x_n)\}^2 + 4\alpha^2\{f(x_n)\}^2}}, \tag{2.25}
\]
in which the sign should be chosen so as to make the denominator largest in magnitude.
This is a new parabolic version of Newton's method [58]. The beauty of this method is that
it converges quadratically and has the same error equation as Newton's method. Moreover,
unlike Newton's method, this method does not fail even if f'(x_n) = 0 in the vicinity of the
required root.

Further, it can be observed that the family of parabolic methods (2.25) gives a very good
approximation to the required root when the scaling parameter |α| is small. This is because,
for small values of α, the parabola widens in the horizontal direction, so the next
approximation moves faster toward the desired root. For large values of α, formula (2.25)
still works but takes more iterations than for small values of α.

A mathematical proof for the order of convergence of the proposed iterative formula
(2.25) is presented in the following theorem:

Theorem 2.2.2. Let f : I ⊆ ℝ → ℝ have at least two continuous derivatives on an open
interval I enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess
x = x_0 is sufficiently close to r and f'(r) ≠ 0 in I. Then the iteration scheme defined by
formula (2.25) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = C_2 e_n^2 + O(e_n^3). \tag{2.26}
\]

Proof. Making use of equations (2.14) and (2.15), one can have
\[
\{f'(x_n)\}^2 + 4\alpha^2\{f(x_n)\}^2 = \{f'(r)\}^2\left[1 + 4C_2 e_n + 4(C_2^2 + \alpha^2)e_n^2 + O(e_n^3)\right]. \tag{2.27}
\]
Further, using equations (2.14), (2.15) and (2.27), one gets
\[
\frac{2f(x_n)}{f'(x_n) \pm \sqrt{\{f'(x_n)\}^2 + 4\alpha^2\{f(x_n)\}^2}} = e_n - C_2 e_n^2 + O(e_n^3). \tag{2.28}
\]
Substituting equation (2.28) in the formula (2.25), one can obtain
\[
e_{n+1} = C_2 e_n^2 + O(e_n^3). \tag{2.29}
\]
This error equation is the same as that of Newton's method, which completes the proof of
the theorem.


(iii) Approximation by an ellipse

let ‘x0’ be an initial guess to a simple root ‘r’ of an equation (1.1). For some α′ ∈ R−{0},

let us sketch an ellipse(x− x0)

2

α′2+{y− f (x0)}2

{ f (x0)}2 = 1, (2.30)

on graph (2.2) of f (x).

Let
\[
x_1 = x_0 + h, \quad |h| \ll 1, \tag{2.31}
\]
be a better approximation to the required root. Assume that one of the points of intersection
of this graph with the ellipse (2.30) is the point (x_1, f(x_0 + h)). The ellipse (2.30), passing
through the point (x_1, f(x_0 + h)), then takes the form
\[
\alpha^2 h^2\{f(x_0)\}^2 + \{f(x_0 + h)\}^2 - 2f(x_0 + h)f(x_0) = 0, \tag{2.32}
\]
where α' = 1/α.

Expanding the left-hand side by Taylor's series about the point x = x_0 and neglecting
second- and higher-order derivatives, one can have
\[
\alpha^2 h^2\{f(x_0)\}^2 + \left\{f(x_0) + h f'(x_0) + O(h^2)\right\}^2 - 2\left\{f(x_0) + h f'(x_0) + O(h^2)\right\}f(x_0) = 0. \tag{2.33}
\]
Simplifying (2.33), one ends up with the quadratic equation
\[
\alpha^2 h^2\{f(x_0)\}^2 + h^2\{f'(x_0)\}^2 - \{f(x_0)\}^2 = 0. \tag{2.34}
\]

Solving this quadratic equation for h, one gets
\[
h = \pm\frac{f(x_0)}{\sqrt{\{f'(x_0)\}^2 + \alpha^2\{f(x_0)\}^2}}. \tag{2.35}
\]
Substituting (2.35) in (2.31), one can get the first approximation to the required root as
\[
x_1 = x_0 \pm \frac{f(x_0)}{\sqrt{\{f'(x_0)\}^2 + \alpha^2\{f(x_0)\}^2}}, \tag{2.36}
\]
where the positive sign is taken if x_0 < r and the negative sign if x_0 > r. Geometrically, if
the slope f'(x_0) of the curve at the point (x_0, f(x_0)) is negative, then we take the positive
sign; otherwise, the negative.


Now repeating this process until the ellipse becomes a "point ellipse" on the x-axis, a
general formula for successive approximations is given by
\[
x_{n+1} = x_n \pm \frac{f(x_n)}{\sqrt{\{f'(x_n)\}^2 + \alpha^2\{f(x_n)\}^2}}. \tag{2.37}
\]

This is an elliptic version of Newton's method [204]. Again, the beauty of this method is
that it is quadratically convergent and has the same error equation as Newton's method.
Moreover, unlike Newton's method, this method does not fail even if f'(x_n) = 0 at or near
the required root.

Further, it is found that the family of ellipse methods (2.37) gives a very good
approximation to the required root when α is small. This is because, for small values of α,
the ellipse shrinks in the vertical direction and extends along the horizontal direction. This
means that the next approximation moves faster towards the desired root. For large values
of α, formula (2.37) still works but needs more iterations than for small values of α.

Special case of the ellipse method: the circle method

At α = 1/f(x_n), equation (2.37) yields
\[
x_{n+1} = x_n \pm \frac{f(x_n)}{\sqrt{1 + \{f'(x_n)\}^2}}. \tag{2.38}
\]

This formula can be named the circle method, as the ellipse given by equation (2.30) now
becomes a circle with centre (x_0, f(x_0)) and radius f(x_0). Therefore, the circle method is
a special case of the ellipse method (2.37). Apparently, formula (2.38) has a scaling
problem; as a result, the efficiency index of the circle method decreases tangibly compared
with Newton's method [7, 11–16], the parabolic method (2.25) and the ellipse method (2.37).
However, the circle method has guaranteed convergence, unlike Newton's method.

The following theorem presents a proof for the order of convergence of the proposed
iterative formula (2.37):


Theorem 2.2.3. Let f : I ⊆ ℝ → ℝ have at least two continuous derivatives on an open
interval I enclosing a simple zero of f(x) (say x = r ∈ I). Assume that the initial guess
x = x_0 is sufficiently close to r and f'(r) ≠ 0 in I. Then the iteration scheme defined by
formula (2.37) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = C_2 e_n^2 + O(e_n^3). \tag{2.39}
\]

Proof. Making use of equations (2.14) and (2.15), one can have
\[
\{f'(x_n)\}^2 + \alpha^2\{f(x_n)\}^2 = \{f'(r)\}^2\left[1 + 4C_2 e_n + (4C_2^2 + \alpha^2)e_n^2 + O(e_n^3)\right]. \tag{2.40}
\]
Further, using equations (2.14) and (2.40), one gets
\[
\frac{f(x_n)}{\sqrt{\{f'(x_n)\}^2 + \alpha^2\{f(x_n)\}^2}} = e_n - C_2 e_n^2 + O(e_n^3). \tag{2.41}
\]
Substituting equation (2.41) in the formula (2.37), one can obtain
\[
e_{n+1} = C_2 e_n^2 + O(e_n^3). \tag{2.42}
\]
Again this error equation is the same as that of Newton's method. This completes the proof
of the theorem.

It is interesting to note that by ignoring the term in α (i.e., setting α = 0), the iterative
methods (2.10), (2.25) and (2.37) reduce to the well-known Newton's method for solving
nonlinear equations numerically.

(iv) Exponential iterative methods

The proposed methods can also be extended to exponentially quadratically convergent
iterative formulae for solving nonlinear equations numerically. Writing each of the methods
(2.10), (2.25) and (2.37) as x_{n+1} = x_n − h and taking x_{n+1} = x_n exp(−h/x_n) as a
better approximation to the required root r, one can derive the following exponential
iterative formulae:

\[
x_{n+1} = x_n \exp\!\left(-\frac{f(x_n)}{x_n\left(f'(x_n) - \alpha f(x_n)\right)}\right), \tag{2.43}
\]
\[
x_{n+1} = x_n \exp\!\left(-\frac{2f(x_n)}{x_n\left(f'(x_n) \pm \sqrt{\{f'(x_n)\}^2 + 4\alpha^2\{f(x_n)\}^2}\right)}\right), \tag{2.44}
\]
and
\[
x_{n+1} = x_n \exp\!\left(\pm\frac{f(x_n)}{x_n\sqrt{\{f'(x_n)\}^2 + \alpha^2\{f(x_n)\}^2}}\right), \tag{2.45}
\]
respectively.

Letting α → 0 in (2.43), (2.44) and (2.45), one can obtain another exponential iterative
method given by
\[
x_{n+1} = x_n \exp\!\left(-\frac{f(x_n)}{x_n f'(x_n)}\right). \tag{2.46}
\]

Note that by taking the first-order Taylor's series expansion of exp(−f(x_n)/(x_n f'(x_n)))
in equation (2.46), Newton's formula is recovered. Chen and Li [210] have also derived some
new classes of exponentially quadratically convergent iterative methods by using a different
approach based on the main idea of Mamta et al. [58].
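A short Python sketch of the limiting formula (2.46) (our illustration; it assumes x_n ≠ 0 and f'(x_n) ≠ 0 throughout, and in particular preserves the sign of the iterates):

import math

def exponential_newton(f, df, x0, tol=1e-15, max_iter=100):
    # Exponential variant (2.46); expanding exp to first order gives (2.1).
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            return x
        x_new = x * math.exp(-fx / (x * dfx))   # requires x != 0, f'(x) != 0
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x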


2.2.2 Worked examples

In this section, an attempt has been made to check the effectiveness of the newly developed
methods. Let us consider some examples to compare the number of iterations, the
computational order of convergence (COC) and the total number of function evaluations
(TNOFE) needed by the traditional Newton's method and its modifications, namely the
modified Newton's method (2.10) (MNM), the parabolic method (2.25) (PM) and the ellipse
method (2.37) (EM), for solving nonlinear equations. For simplicity, the formulae are tested
for |α| = 1. The results are summarized in Table 2.1 and Table 2.2, respectively.
Computations have been performed using MATLAB version 7.5 (R2007b) in double-precision
arithmetic. We use ε = 10^{-15} as the error tolerance. The following stopping criteria are
used in the computer programs:

(i) |x_{n+1} − x_n| < ε,  (ii) |f(x_{n+1})| < ε.
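As an aside (not from the thesis), the experiments can be reproduced with a small driver that counts iterations under these criteria; the step-function interface and the choice to stop as soon as either criterion holds are our assumptions:

import math

def run(step, f, df, x0, eps=1e-15, max_iter=200):
    # `step` advances one iteration of NM/MNM/PM/EM; returns (root, iterations).
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(f, df, x)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 2.2.1 with classical Newton steps: from x0 = 1.5 the iterates
# leave the nearest root 0 and settle near -4*pi, as discussed below.
newton_step = lambda f, df, x: x - f(x) / df(x)
root, iters = run(newton_step, math.sin, math.cos, 1.5)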

Example 2.2.1. sin(x) = 0.

This equation has infinitely many roots. It can be seen that Newton's method does not
necessarily converge to the root nearest to the starting value. For example, Newton's method
with initial guess x_0 = 1.5 converges to −4π, far away from the required root zero.
Similarly, Newton's method with initial guess x_0 = 1.52 converges to −6π, and so on. The
proposed methods do not exhibit this type of behaviour.

[Figure: graph of the function sin(x), with the required root (0, 0) marked.]

Example 2.2.2. e^{−x} − sin(x) = 0.

Again this equation has infinitely many roots, lying close to π, 2π, 3π, .... Newton's
method [211] with starting value x_0 = 5 converges to the root closest to 3π, whereas the
proposed methods converge to the closest root, 6.285049273382587.

[Figure: graph of the function e^{−x} − sin(x) on [5, 7], with the root (6.285049273382587, 0) marked.]

Example 2.2.3. x e^{−x} = 0.

[Figure: graph of the function x e^{−x} on [0, 5], with the root (0, 0) marked.]

Care must be taken when applying either method to approximate the root at x = 0. The
derivative of the function x e^{−x} is zero at x = 1 and negative for x > 1. For any initial
guess x_0 < 0, Newton's method converges to the root very efficiently, but for any initial
guess x_0 > 1, Newton's iterates move away from the zero. For example, with x_0 = 2 one
obtains x_1 = 4, x_2 = 5.3333, and so on. On the other hand, the proposed methods can
deliver the required root if the sign in the proposed methods is chosen suitably.

Example 2.2.4. 4x⁴ − 4x² = 0.

In applying Newton's method to this equation, problems arise if a starting point gives a
horizontal tangent, or if the tangents cycle back and forth from one point to another. The
points ±√2/2 give horizontal tangents, and ±√21/7 cycle, each leading to the other and
back. For more detail one can refer to [212, p. 301]. The proposed methods (2.25) (PM)
and (2.37) (EM) do not exhibit this behaviour.

[Figure: graph of the function 4x⁴ − 4x², with the roots (−1, 0), (0, 0) and (1, 0) marked.]

Example 2.2.5. e^{1−x} − 1 = 0.

Example 2.2.6. e^{x²+7x−30} − 1 = 0.

Example 2.2.7. (x − 1)⁶ − 1 = 0.


Table 2.1
Test examples, intervals (I), their initial guesses (x0), exact root (r) and number of iterations.

Example No.  I            x0      Exact root (r)       NM                   MNM (2.10)          PM (2.25)  EM (2.37)
2.2.1        [-2, 2]      1.50    0.000000000000000    -4π*                 7                   5          4
                          1.51    0.000000000000000    -5π*                 7                   5          4
                          1.52    0.000000000000000    -6π*                 7                   5          4
                          1.53    0.000000000000000    -7π*                 7                   5          4
2.2.2        [5, 7]       5.0     6.285049273382587    9.42467254738522*    7                   5          4
                          6.0     6.285049273382587    3                    5                   4          3
2.2.3        [0, 5]       2.0     0.000000000000000    Divergent            1                   6          6
2.2.4        [-0.8, 0.8]  √2/2    1.000000000000000    -1.000000000000000*  7                   9          8
                          -√2/2   -1.000000000000000   1.000000000000000*   7                   9          8
                          √21/7   1.000000000000000    0.000000000000000*   0.000000000000000*  7          8
                          -√21/7  -1.000000000000000   0.000000000000000*   0.000000000000000*  7          8
2.2.5        [4, 11]      10      1.000000000000000    Divergent            14                  13         13
2.2.6        [2, 3.5]     2.0     3.000000000000000    Divergent            4                   4          2
                          2.8     3.000000000000000    16                   10                  11         13
                          3.5     3.000000000000000    12                   12                  12         12
2.2.7        [1, 3]       1.0     2.000000000000000    Fails                1                   1          1
                          1.5     2.000000000000000    16                   8                   8          8
                          2.5     2.000000000000000    7                    8                   7          7

* Converges to undesired root.


Table 2.2
Computational order of convergence (COC) and total number of function evaluations (TNOFE).

                    COC                         |  TNOFE
Example No.  NM     MNM     PM      EM          |  NM   MNM     PM      EM
                    (2.10)  (2.25)  (2.37)      |       (2.10)  (2.25)  (2.37)
2.2.1        —      2.00    3.00    3.00        |  _    14      10      8
             —      2.00    3.00    3.00        |  _    14      10      8
             —      2.00    3.00    3.00        |  _    14      10      8
             —      2.00    3.00    3.00        |  _    14      10      8
2.2.2        —      2.00    2.40    2.74        |  _    14      10      8
             2.85   2.00    2.20    2.70        |  6    10      8       6
2.2.3        ...    ≡≡      2.00    2.00        |  ~    2       12      12
2.2.4        —      2.00    1.99    2.00        |  _    14      18      16
             —      2.00    1.99    2.00        |  _    14      18      16
             —      —       2.00    2.00        |  _    _       14      16
             —      —       2.00    2.00        |  _    _       14      16
2.2.5        ...    2.00    2.00    2.00        |  ~    28      26      26
2.2.6        ...    1.96    1.98    ≡≡          |  ~    8       8       4
             2.00   2.00    2.00    2.00        |  32   20      22      26
             2.00   2.00    2.00    2.00        |  24   24      24      24
2.2.7        ==     ≡≡      ≡≡      ≡≡          |  =    2       2       2
             1.97   2.00    2.00    2.00        |  32   16      16      16
             2.00   2.00    2.00    2.00        |  14   16      14      14

Here, the symbols
(i) '—' and '_' represent COC and TNOFE, respectively, for a case of convergence to an undesired root;
(ii) '...' and '~' represent COC and TNOFE, respectively, for a case of divergence;
(iii) '==' and '=' represent COC and TNOFE, respectively, for a case of failure;
(iv) '≡≡' represents COC when the number of iterations is not sufficient.

2.3 Iterative methods for unconstrained optimization problems

Optimization problems, with or without constraints, arise in various fields such as science,
engineering, economics (especially management science), engineering design, operations
research, computer science and financial management, wherever numerical information is
processed. In recent years, many problems in business situations and engineering designs
have been modelled as optimization problems for taking optimal decisions. In fact,
numerical optimization techniques play a significant role in almost all branches of
engineering and mathematics.

Several classical methods, such as the golden section search method [18], the Fibonacci
search method [18], the quadratic fitting method [15, 17, 18], the cubic interpolation
method [18] and Newton's method [133, 134], are available in the literature for solving
unconstrained minimization problems (1.2). For a detailed survey of these most important
methods, many excellent textbooks are available [1, 13, 14, 20, 23–26]. All unconstrained
minimization methods are iterative in nature: they start from an initial trial solution and
proceed towards the minimum point in a sequential manner.

Again, Newton’s method (1.39) plays a vital role in operation research, optimization and

control theory. It has many applications in management science, industrial and financial re-

search, chaos and fractals, dynamical systems, variational inequalities and equilibrium-type

problems, stability analysis, data mining and even to random operator equations. Its role in

optimization theory cannot be under estimated as this method is a basis for the most effec-

tive procedures in linear and nonlinear programming. For a more detailed survey, one can

refer to paper by Ployak [133] and the references cited therein. In short, for solving nonlin-

ear, univariate optimization problems, Newton’s method is an important and basic method

which converges quadratically. The idea behind Newton’s method is to approximate an ob-

jective function locally by a quadratic function which at x = xn agrees with function f (x)

up to second-order derivative. The process can be repeated at a point that optimizes an ap-

proximate function. Again the condition f ′′(xn) 6= 0 in the neighbourhood of the root is

required for the success of the Newton’s method. Many new modified Newton type methods

47

Page 17: Chapter 2 Second-order variants of Newton’s methodshodhganga.inflibnet.ac.in/bitstream/10603/5708/9/09... ·  · 2014-05-27Chapter 2 Second-order variants of Newton’s method

for solving unconstrained optimization problems (1.2) have been suggested in the literature

[45, 133–143, 213–221].

The purpose of this study is to develop iterative methods for solving nonlinear, univariate
unconstrained optimization problems, i.e., finding an extremum of a given function f(x)
under no constraints, and to eliminate the defects of Newton's method by simple
modifications of the iteration processes.

2.3.1 Extensions of proposed iterative methods

An attempt has been made to extend the proposed iterative formulae, namely the modified
Newton's method (2.10) (MNM), the parabolic method (2.25) (PM) and the ellipse method
(2.37) (EM), to unconstrained optimization problems.

(i) Extension of the method (2.10)

Assume that f(x) is a sufficiently smooth function having an extremum (maximum or
minimum) at a point x = β. From (2.5), consider an auxiliary function with parameter α:
\[
q(x) = f(x) - \alpha h f(x_n). \tag{2.47}
\]
It is possible to construct from (2.47) a quadratic function q(x) which agrees with f(x) up
to the second-order derivative in the neighbourhood of the point x = x_n, i.e.
\[
q(x) = f(x_n) + (x - x_n)f'(x_n) + \frac{1}{2}(x - x_n)^2 f''(x_n) - \alpha(x - x_n)f(x_n). \tag{2.48}
\]

One may calculate an estimate of f(x) at x = x_{n+1} by finding the point where the
derivative of q(x) vanishes [133, 134], i.e. q'(x_{n+1}) = 0. This gives
\[
f'(x_n) + (x_{n+1} - x_n)f''(x_n) - \alpha(x_{n+1} - x_n)f'(x_n) = 0. \tag{2.49}
\]
Solving this equation for x_{n+1} − x_n, one gets
\[
x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n) - \alpha f'(x_n)}. \tag{2.50}
\]

This is a one-parameter family of Newton's method for solving unconstrained optimization
problems. To obtain quadratic convergence of this method, the sign of the parameter α in
the denominator should be chosen so that the denominator is largest in magnitude. Unlike
Newton's method, this method is well defined even if f''(x_n) is zero or very small in the
vicinity of the required optimum point.

The following is a mathematical proof for the order of convergence of the proposed
iterative method (2.50):

Theorem 2.3.1. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an
open interval I, and let x = β ∈ I be an optimum point of f(x). Assume that the initial
guess x = x_0 is sufficiently close to β and f''(x_n) − α f'(x_n) ≠ 0 in I. Then the iteration
scheme defined by formula (2.50) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = (3A_3 - \alpha)e_n^2 + O(e_n^3), \tag{2.51}
\]
where e_n = x_n − β is the error at the nth iteration and A_k = \frac{1}{k!}\,\frac{f^{(k)}(\beta)}{f''(\beta)}, k = 3, 4, \ldots

Proof. Since x = β is an optimum point of f(x), we have f'(β) = 0 and f''(β) ≠ 0.
Expanding f'(x_n) and f''(x_n) about x = β by Taylor's series, one can obtain
\[
f'(x_n) = f''(\beta)\left[e_n + 3A_3 e_n^2 + O(e_n^3)\right], \tag{2.52}
\]
and
\[
f''(x_n) = f''(\beta)\left[1 + 6A_3 e_n + O(e_n^2)\right]. \tag{2.53}
\]

Using equations (2.52) and (2.53), one can have
\[
\frac{f'(x_n)}{f''(x_n) - \alpha f'(x_n)} = e_n - (3A_3 - \alpha)e_n^2 + O(e_n^3). \tag{2.54}
\]
Substituting equation (2.54) in formula (2.50), one gets
\[
e_{n+1} = (3A_3 - \alpha)e_n^2 + O(e_n^3). \tag{2.55}
\]
This completes the proof of the theorem.


(ii) Extension of the method (2.25)

Assume that f(x) is a sufficiently smooth function having an extremum (maximum or
minimum) at a point x = β. From (2.20), consider an auxiliary function with parameter α:
\[
q(x) = f(x) - \alpha^2 h^2 f(x_n). \tag{2.56}
\]
Again, it is possible to construct from (2.56) a quadratic function q(x) which agrees with
f(x) up to the second-order derivative in the neighbourhood of the point x = x_n, i.e.
\[
q(x) = f(x_n) + (x - x_n)f'(x_n) + \frac{1}{2}(x - x_n)^2 f''(x_n) - \alpha^2(x - x_n)^2 f(x_n). \tag{2.57}
\]

One may calculate an estimate of f(x) at x = x_{n+1} by finding the point where the
derivative of q(x) vanishes [133, 134], i.e. q'(x_{n+1}) = 0. This leads to
\[
\alpha^2(x_{n+1} - x_n)^2 f'(x_n) - (x_{n+1} - x_n)f''(x_n) - f'(x_n) = 0. \tag{2.58}
\]
Solving this quadratic equation for x_{n+1} − x_n, one gets
\[
x_{n+1} = x_n - \frac{2f'(x_n)}{f''(x_n) \pm \sqrt{\{f''(x_n)\}^2 + 4\alpha^2\{f'(x_n)\}^2}}. \tag{2.59}
\]

This is a modification of the method (2.25) for solving unconstrained optimization
problems. In formula (2.59), the sign in the denominator should be chosen so that the
denominator is largest in magnitude. The beauty of this method is that it converges
quadratically and has the same error equation as Newton's method. Moreover, unlike
Newton's method for unconstrained optimization problems, this method does not fail even
if f''(x_n) is zero or very small in the vicinity of the required optimum point.

A mathematical proof for the order of convergence of the proposed iterative method (2.59)
is presented below.

Theorem 2.3.2. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an
open interval I, and let x = β ∈ I be an optimum point of f(x). Assume that the initial
guess x = x_0 is sufficiently close to β in I. Then the iteration scheme defined by formula
(2.59) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = 3A_3 e_n^2 + O(e_n^3). \tag{2.60}
\]


Proof. Making use of equations (2.52) and (2.53), one can have
\[
f''(x_n) \pm \sqrt{\{f''(x_n)\}^2 + 4\alpha^2\{f'(x_n)\}^2} = f''(\beta)\left[2 + 12A_3 e_n + O(e_n^2)\right]. \tag{2.61}
\]
Further, using equations (2.52) and (2.61), one gets
\[
\frac{2f'(x_n)}{f''(x_n) \pm \sqrt{\{f''(x_n)\}^2 + 4\alpha^2\{f'(x_n)\}^2}} = e_n - 3A_3 e_n^2 + O(e_n^3). \tag{2.62}
\]
Substituting equation (2.62) in the formula (2.59), one can obtain
\[
e_{n+1} = 3A_3 e_n^2 + O(e_n^3). \tag{2.63}
\]
This error equation is the same as that of Newton's method for solving unconstrained
optimization problems. This completes the proof of the theorem.

(iii) Extension of the method (2.37)

Consider the function
\[
G(x) = x - \frac{g(x)}{g'(x)}, \quad \text{where } g(x) = f'(x). \tag{2.64}
\]
Here f(x) is the function to be minimized. G'(x) is defined around a critical point β of
f(x) if g'(β) = f''(β) ≠ 0, and is given by G'(x) = g(x)g''(x)/\{g'(x)\}^2. If we assume
that g''(x) ≠ 0, then G'(β) = 0 if and only if g(β) = 0. Consider the equation g(x) = 0,
whose one or more roots are to be found.

Let y = g(x) represent the graph of the function g(x), and assume that an initial estimate
x_0 is known for an optimum point β of f(x), i.e., for a root of the equation g(x) = 0. Let us
sketch the ellipse
\[
\frac{(x - x_0)^2}{\alpha'^2} + \frac{\{y - g(x_0)\}^2}{\{g(x_0)\}^2} = 1, \tag{2.65}
\]
for some α' ∈ ℝ − {0}.

Let
\[
x_1 = x_0 + h, \quad |h| \ll 1, \tag{2.66}
\]
be a better approximation to the required root. Assume that one of the points of intersection
of this graph with the ellipse (2.65) is the point (x_1, g(x_0 + h)). The ellipse (2.65), passing
through the point (x_1, g(x_0 + h)), then takes the form
\[
\alpha^2 h^2\{g(x_0)\}^2 + \{g(x_0 + h)\}^2 - 2g(x_0 + h)g(x_0) = 0, \tag{2.67}
\]
where α' = 1/α.

Expanding the left-hand side by Taylor's series about the point x = x_0 and neglecting
second- and higher-order derivatives, one can have
\[
\alpha^2 h^2\{g(x_0)\}^2 + \left\{g(x_0) + h g'(x_0) + O(h^2)\right\}^2 - 2\left\{g(x_0) + h g'(x_0) + O(h^2)\right\}g(x_0) = 0. \tag{2.68}
\]

Simplifying (2.68), one ends up with the quadratic equation
\[
\alpha^2 h^2\{g(x_0)\}^2 + h^2\{g'(x_0)\}^2 - \{g(x_0)\}^2 = 0. \tag{2.69}
\]
Solving this quadratic equation for h, one gets
\[
h = \pm\frac{g(x_0)}{\sqrt{\{g'(x_0)\}^2 + \alpha^2\{g(x_0)\}^2}}. \tag{2.70}
\]
Substituting (2.70) in (2.66), one gets the first approximation to the required root as
\[
x_1 = x_0 \pm \frac{g(x_0)}{\sqrt{\{g'(x_0)\}^2 + \alpha^2\{g(x_0)\}^2}}, \tag{2.71}
\]

where the positive sign is taken if x_0 < β and the negative sign if x_0 > β. Geometrically,
if the slope g'(x_0) of the curve at the point (x_0, g(x_0)) is negative, then take the positive
sign; otherwise, the negative.

Now repeating this process until the ellipse becomes a point ellipse on the x-axis, a general
successive formula is given by
\[
x_{n+1} = x_n \pm \frac{g(x_n)}{\sqrt{\{g'(x_n)\}^2 + \alpha^2\{g(x_n)\}^2}}. \tag{2.72}
\]
Since g(x_n) = f'(x_n), the successive iterative process becomes
\[
x_{n+1} = x_n \pm \frac{f'(x_n)}{\sqrt{\{f''(x_n)\}^2 + \alpha^2\{f'(x_n)\}^2}}. \tag{2.73}
\]

This is a modification of method (2.37) for solving unconstrained optimization problems.
The beauty of this method is that it converges quadratically and has the same error equation
as Newton's method. Moreover, unlike Newton's method for solving unconstrained
optimization problems, this method does not fail even if f''(x_n) is zero or very small in the
vicinity of the required optimum point.
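In code (our sketch again), (2.73) is exactly the ellipse root-finder applied to g = f':

import math

def ellipse_opt(df, d2f, x0, alpha=1.0, tol=1e-15, max_iter=100):
    # Elliptic variant (2.73): the root-finder (2.37) driven by g = f'.
    x = x0
    for _ in range(max_iter):
        g, h = df(x), d2f(x)
        if abs(g) < tol:
            return x
        s = 1.0 if h < 0 else -1.0            # sign rule of (2.71) with g' = f''
        x_new = x + s * g / math.sqrt(h**2 + alpha**2 * g**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x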

Now, a mathematical proof is presented for the order of convergence of the proposed
iterative method (2.73).

Theorem 2.3.3. Let f : I ⊆ ℝ → ℝ be a sufficiently differentiable function defined on an
open interval I, and let x = β ∈ I be an optimum point of f(x). Assume that the initial
guess x = x_0 is sufficiently close to β in I. Then the iteration scheme defined by formula
(2.73) is quadratically convergent and satisfies the error equation
\[
e_{n+1} = 3A_3 e_n^2 + O(e_n^3). \tag{2.74}
\]

Proof. Making use of equations (2.52) and (2.53), one can have
\[
\sqrt{\{f''(x_n)\}^2 + \alpha^2\{f'(x_n)\}^2} = f''(\beta)\left[1 + 6A_3 e_n + O(e_n^2)\right]. \tag{2.75}
\]
Further, using equations (2.52) and (2.75),
\[
\frac{f'(x_n)}{\sqrt{\{f''(x_n)\}^2 + \alpha^2\{f'(x_n)\}^2}} = e_n - 3A_3 e_n^2 + O(e_n^3). \tag{2.76}
\]
Substituting equation (2.76) in formula (2.73), one gets
\[
e_{n+1} = 3A_3 e_n^2 + O(e_n^3). \tag{2.77}
\]
This error equation is the same as that of Newton's method for solving unconstrained
optimization problems. This completes the proof of the theorem.

Again, it is interesting to note that by ignoring the term in α, the methods (2.50), (2.59)
and (2.73) reduce to Newton's method for solving unconstrained optimization problems
[133].

(iv) Exponential iterative methods

One can also extend the proposed methods to exponentially quadratically convergent
iterative formulas for unconstrained optimization problems. If one considers
x_{n+1} = x_n exp(−h/x_n) to be a better approximation to the exact optimum point β, then
from equations (2.50), (2.59) and (2.73) one can obtain the following exponential iteration
formulae:
\[
x_{n+1} = x_n \exp\!\left(-\frac{f'(x_n)}{x_n\left(f''(x_n) - \alpha f'(x_n)\right)}\right), \tag{2.78}
\]
\[
x_{n+1} = x_n \exp\!\left(-\frac{2f'(x_n)}{x_n\left(f''(x_n) \pm \sqrt{\{f''(x_n)\}^2 + 4\alpha^2\{f'(x_n)\}^2}\right)}\right), \tag{2.79}
\]
and
\[
x_{n+1} = x_n \exp\!\left(\pm\frac{f'(x_n)}{x_n\sqrt{\{f''(x_n)\}^2 + \alpha^2\{f'(x_n)\}^2}}\right), \tag{2.80}
\]
respectively.

Letting α → 0 in these formulae, one can derive another exponential iterative formula
given by
\[
x_{n+1} = x_n \exp\!\left(-\frac{f'(x_n)}{x_n f''(x_n)}\right). \tag{2.81}
\]

Note that by taking the first-order Taylor's series expansion of exp(−f'(x_n)/(x_n f''(x_n)))
in equation (2.81), Newton's method for solving unconstrained optimization problems is
recovered. Recently, Kahya [134] also derived similar methods, namely (2.50) and (2.59),
by using a different approach based on the ideas of Mamta et al. [58].

2.3.2 Worked examples

Here, let us consider some examples to compare the number of iterations, the computational
order of convergence (COC) and the total number of function evaluations (TNOFE) needed
by the traditional Newton's method and its modifications, namely the modified version of
Newton's method (2.50) (MVNM), the modified version of the parabolic method (2.59)
(MVPM) and the modified version of the ellipse method (2.73) (MVEM), for solving
unconstrained optimization problems. For simplicity, the formulae are tested for |α| = 1.
The results are summarized in Table 2.3 and Table 2.4, respectively. Computations have
been performed using MATLAB version 7.5 (R2007b) in double-precision arithmetic. We
use ε = 10^{-15} as the error tolerance. The following stopping criteria are used in the
computer programs:

(i) |x_{n+1} − x_n| < ε,  (ii) |f'(x_{n+1})| < ε.


Table 2.3
Test examples, their initial guesses (x0), optimum point (β), number of iterations and optimal value.

Example No.  Example f(x)                  x0    Optimum point (β)    NM     MVNM    MVPM    MVEM    Optimal value
                                                                             (2.50)  (2.59)  (2.73)
2.3.1        x³ - 6x² + 9x - 8             2     3                    Fails  1       1       1       -8
                                           3.5                        5      6       5       5
2.3.2        x⁴ - x - 10                   0     0.6299605249474366   Fails  8       10      8       -10.47247039371058
                                           1                          6      7       6       6
                                           2                          8      10      8       8
2.3.3        x exp(x) - 1                  -2    -1                   Fails  1       1       1       -1.3678794411714423
                                           -1.5                       7      1       6       6
                                           0                          7      8       7       7
2.3.4        3774.522/x + 2.27x - 181.529  32    40.77726109029923    5      17      14      13      3.5997163030155524
                                           45                         5      11      9       8
2.3.5        (x - 2)² + cos(x)             1     2.354242758222781    5      7       5       5       -0.5802374206231672
                                           3                          4      6       5       5
2.3.6        exp(x) - 3x²                  -1    0.20448144933399151  4      7       6       6       1.101450706667036
                                           1                          5      6       5       5
2.3.7        10.2/x + 6.2x³ (x > 0)        0.5   0.8605414755706750   6      6       6       6       15.80400292848297
                                           2.0                        5      7       6       6


Table 2.4
Computational order of convergence (COC) and total number of function evaluations (TNOFE).

                    COC                         |  TNOFE
Example No.  NM     MVNM    MVPM    MVEM        |  NM   MVNM    MVPM    MVEM
                    (2.50)  (2.59)  (2.73)      |       (2.50)  (2.59)  (2.73)
2.3.1        ==     ≡≡      ≡≡      ≡≡          |  =    2       2       2
             2.00   2.00    2.00    2.00        |  10   12      10      10
2.3.2        ==     2.01    2.00    2.00        |  =    16      20      16
             2.00   2.01    2.00    2.00        |  12   14      12      12
             2.00   2.01    2.00    2.00        |  16   20      16      16
2.3.3        ==     ≡≡      ≡≡      ≡≡          |  =    2       2       2
             2.00   ≡≡      2.01    2.00        |  14   2       12      12
             2.00   2.00    2.00    2.00        |  14   16      14      14
2.3.4        2.00   2.00    2.14    2.05        |  10   34      28      26
             2.00   2.00    1.92    1.81        |  10   22      18      16
2.3.5        2.00   2.00    1.83    1.99        |  10   14      10      10
             2.00   2.00    2.01    2.00        |  8    12      10      10
2.3.6        2.01   2.00    2.00    2.04        |  8    14      12      12
             2.00   2.00    2.00    2.00        |  10   12      10      10
2.3.7        2.00   2.00    2.00    2.00        |  12   12      12      12
             2.01   2.00    2.00    2.00        |  10   14      12      12

Here, the symbols
(i) '==' and '=' represent COC and TNOFE, respectively, for a case of failure;
(ii) '≡≡' represents COC when the number of iterations is not sufficient.

2.4 A modified family of Newton's method for systems of nonlinear equations

Nonlinearity is of interest to physicists and mathematicians, since most physical systems
are inherently nonlinear in nature. One of the most important problems in optimization and
computational mathematics is to solve systems of nonlinear equations (1.3). Systems of
nonlinear equations are difficult to solve in general, and the best way to solve them is by
iterative methods; many robust and efficient methods have been brought forward [13, 14,
16, 20, 23]. One of the classical methods for solving systems of nonlinear equations is
Newton's method [7, 17, 18], which is given by

\[
x^{(n+1)} = x^{(n)} - \left[J_F\left(x^{(n)}\right)\right]^{-1} F\left(x^{(n)}\right), \tag{2.82}
\]
where J_F(x) is the Jacobian matrix of the function F(x) and [J_F(x)]^{-1} is its inverse.

This basic method converges quadratically. Although Newton's method for simple roots
generalizes to systems of nonlinear equations, convergence of the iteration is a serious
problem: unless a good initial guess is known, it is extremely unlikely that the iteration
will converge. Moreover, convergence of Newton's method requires that the Jacobian be
non-singular in a neighbourhood of the root. If this condition is not met, Newton's method
may diverge or even fail. Many attempts [144–164, 4*, 16*] have been made to develop new
techniques for solving systems of nonlinear equations. The purpose of this attempt is to
develop a modified family of Newton's method for solving systems of nonlinear equations.
The developed family is found to yield better performance than Newton's method.

2.4.1 Modified family of Newton’s method

The proposed scheme given by formula (2.10) can be further extended to the case of
systems of nonlinear equations. In general, we can usually find the solutions to a system of
equations when the number of unknowns matches the number of equations. To illustrate the
extension of the proposed scheme (2.10), consider the following system of nonlinear
equations in x and y:

\[
f(x, y) = 0, \qquad g(x, y) = 0. \tag{2.83}
\]

Consider the auxiliary equations
\[
e^{-\alpha(x - x_0)} f(x, y) = 0, \qquad e^{-\alpha(y - y_0)} g(x, y) = 0, \tag{2.84}
\]

where α ∈ ℝ. A root of the system of equations (2.83) is also a root of the system (2.84),
and vice versa. Let (x_0, y_0) be an initial guess to the required root of the system (2.84).
If (x_0 + h, y_0 + s) is a root of the system (2.84), then one must have
\[
e^{-\alpha h} f(x_0 + h, y_0 + s) = 0, \qquad e^{-\alpha s} g(x_0 + h, y_0 + s) = 0. \tag{2.85}
\]

Assuming that f(x, y) and g(x, y) are sufficiently differentiable, one can expand (2.85) by
Taylor's series about the point (x_0, y_0) to obtain
\[
h\left(\frac{\partial f}{\partial x_0} - \alpha f_0\right) + s\,\frac{\partial f}{\partial y_0} + f_0 + \cdots = 0, \qquad
h\,\frac{\partial g}{\partial x_0} + s\left(\frac{\partial g}{\partial y_0} - \alpha g_0\right) + g_0 + \cdots = 0, \tag{2.86}
\]

where
\[
\frac{\partial f}{\partial x_0} = \left[\frac{\partial f}{\partial x}\right]_{(x_0, y_0)}, \quad
\frac{\partial f}{\partial y_0} = \left[\frac{\partial f}{\partial y}\right]_{(x_0, y_0)}, \quad f_0 = f(x_0, y_0),
\]
\[
\frac{\partial g}{\partial x_0} = \left[\frac{\partial g}{\partial x}\right]_{(x_0, y_0)}, \quad
\frac{\partial g}{\partial y_0} = \left[\frac{\partial g}{\partial y}\right]_{(x_0, y_0)}, \quad g_0 = g(x_0, y_0), \quad \text{etc.}
\]

Neglecting second- and higher-order terms in (2.86), one gets the following system of
linear equations:
\[
h\left(\frac{\partial f}{\partial x_0} - \alpha f_0\right) + s\,\frac{\partial f}{\partial y_0} + f_0 = 0, \qquad
h\,\frac{\partial g}{\partial x_0} + s\left(\frac{\partial g}{\partial y_0} - \alpha g_0\right) + g_0 = 0. \tag{2.87}
\]

If the Jacobian given by
\[
J(f, g) = \begin{vmatrix}
\dfrac{\partial f}{\partial x_0} - \alpha f_0 & \dfrac{\partial f}{\partial y_0} \\[2mm]
\dfrac{\partial g}{\partial x_0} & \dfrac{\partial g}{\partial y_0} - \alpha g_0
\end{vmatrix} \tag{2.88}
\]

does not vanish, then the linear system of equations (2.87) possesses a unique solution given by
\[
h = \frac{g_0\,\dfrac{\partial f}{\partial y_0} - f_0\left(\dfrac{\partial g}{\partial y_0} - \alpha g_0\right)}{J(f, g)}, \qquad
s = \frac{f_0\,\dfrac{\partial g}{\partial x_0} - g_0\left(\dfrac{\partial f}{\partial x_0} - \alpha f_0\right)}{J(f, g)}. \tag{2.89}
\]


Therefore, the new approximations are given by
\[
x_1 = x_0 + h, \qquad y_1 = y_0 + s. \tag{2.90}
\]

The process is repeated until one obtains the roots to the desired accuracy. The parameter
α in (2.89) is chosen so as to give the largest value of the denominator. Unlike Newton's
method, method (2.90) will work even if
\(\frac{\partial f}{\partial x_0}\frac{\partial g}{\partial y_0} - \frac{\partial f}{\partial y_0}\frac{\partial g}{\partial x_0} = 0.\)
Again, it is interesting to note that by ignoring the term in α, equation (2.90) reduces to
Newton's method for solving systems of nonlinear equations [7, 17, 18].

Vector notation

Let us consider a general system of nonlinear equations
\[
F(x) = 0, \tag{2.91}
\]
where F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T, f_i : ℝ^n → ℝ, and x = (x_1, x_2, \ldots, x_n)^T
is a vector.

Then the modified Newton’s method (2.90) (MNM) may be written as

x(n+1) = x(n)− [F ′

(x(n))−diag

(α(n)

i fi(x(n)))]−1F

(x(n)), (2.92)

where i = 1, 2, ..., n. In the case when the Jacobian F'(x^{(n)}) is numerically singular, the
parameter α_i^{(n)} is chosen such that [F'(x^{(n)}) − diag(α_i^{(n)} f_i(x^{(n)}))] is
non-singular, so that the modified Newton's method (2.92) can continue the computation.

The order of convergence of the proposed iterative formula (2.92) is established in the
following theorem:


Theorem 2.4.1. Let F : D ⊂ ℝ^n → ℝ^n, with D an open convex set. Assume that F(x)
has a root x^{(*)} ∈ D, where the Jacobian F'(x^{(*)}) is non-singular, and that F(x) is
sufficiently smooth in some neighbourhood S of the root. If, for all x ∈ S,
[F'(x) − diag(α_i f_i(x))] is non-singular, then the proposed method (2.92) has quadratic
order of convergence.

Proof. Let e^{(n)} = x^{(n)} − x^{(*)}. Expanding f_i(x^{(n)}) by Taylor's series about
x^{(*)} (here f_i'(x) denotes the row vector of partial derivatives ∂_j f_i(x) = ∂f_i/∂x_j,
j = 1, 2, ..., n), one can have
\[
f_i\left(x^{(n)}\right) = f_i\left(x^{(*)}\right) + f_i'\left(x^{(*)}\right)e^{(n)} + \frac{1}{2}f_i''\left(x^{(*)}\right)\left(e^{(n)}\right)^2 + O\left(\|e^{(n)}\|^3\right), \tag{2.93}
\]
and, taking into account that f_i(x^{(*)}) = 0 and f_i'(x^{(*)}) ≠ 0, one gets
\[
f_i\left(x^{(n)}\right) = f_i'\left(x^{(*)}\right)e^{(n)} + \frac{1}{2}f_i''\left(x^{(*)}\right)\left(e^{(n)}\right)^2 + O\left(\|e^{(n)}\|^3\right), \tag{2.94}
\]
and
\[
\partial_j f_i\left(x^{(n)}\right) = \partial_j f_i\left(x^{(*)}\right) + O\left(\|e^{(n)}\|\right). \tag{2.95}
\]

In vector notation, one can write (2.94) and (2.95) as
\[
F\left(x^{(n)}\right) = F'\left(x^{(*)}\right)e^{(n)} + \frac{1}{2}F''\left(x^{(*)}\right)\left(e^{(n)}\right)^2 + O\left(\|e^{(n)}\|^3\right), \tag{2.96}
\]
and
\[
F'\left(x^{(n)}\right) = F'\left(x^{(*)}\right) + F''\left(x^{(*)}\right)e^{(n)} + O\left(\|e^{(n)}\|^2\right). \tag{2.97}
\]

Also,
\[
\mathrm{diag}\left(\alpha_i^{(n)} f_i\left(x^{(n)}\right)\right)e^{(n)} = O\left(\|e^{(n)}\|^2\right). \tag{2.98}
\]

Using equations (2.97) and (2.98), one can obtain
\[
\left[F'\left(x^{(n)}\right) - \mathrm{diag}\left(\alpha_i^{(n)} f_i\left(x^{(n)}\right)\right)\right]^{-1} F\left(x^{(n)}\right) = e^{(n)} + O\left(\|e^{(n)}\|^2\right). \tag{2.99}
\]
Substituting equation (2.99) in the formula (2.92), one gets
\[
e^{(n+1)} = O\left(\|e^{(n)}\|^2\right). \tag{2.100}
\]
This completes the proof of the theorem.


2.4.2 Worked examples

Now, some numerical results for method (2.92) are presented in Table 2.5, comparing
Newton's method (NM) and the modified Newton's method (2.92) (MNM) for solving
systems of nonlinear equations. Computations have been performed using MATLAB version
7.5 (R2007b) in double-precision arithmetic. Here, the formulae are tested for |α| = 1. We
use ε = 10^{-15} as the error tolerance. The following stopping criteria are taken for the
computer programs:

(i) ||x^{(n+1)} − x^{(n)}|| < ε,  (ii) ||F(x^{(n)})|| < ε.

The number of iterations needed to converge to the required solution has been analyzed.
Consider the following systems of nonlinear equations.

Example 2.4.1
\[
x_1 - \cos(x_2) = 0, \qquad \sin(x_1) + 0.5x_2 = 0.
\]
Solution: (0.5303886895389944, −1.1011737334182012)^T.

Example 2.4.2
\[
x_1 x_2 - 1 = 0, \qquad \ln(x_1) - x_2 = 0.
\]
Solution: (1.763222834351897, 0.5671432904097838)^T.

Example 2.4.3
\[
x_1^2 - x_2^2 - 1 = 0, \qquad x_1^3 x_2^2 - 1 = 0.
\]
Solution: (1.236505703391499, 0.7272869822289587)^T.

Example 2.4.4
\[
x_1^2 - x_2^2 - 4 = 0, \qquad x_1^2 + x_2^2 - 16 = 0.
\]
Solution: (3.162277660168379, 2.449489742783178)^T.


Example 2.4.5
\[
\exp(x_1) - x_2 - 2 = 0, \qquad \cos(x_1) - x_1 + x_2 - 1 = 0.
\]
Solution: (1.478488895998027, 2.386312496098139)^T.

Example 2.4.6
\[
x_1^2 + 3\log(x_1) - x_2^2 = 0, \qquad 2x_1^2 - x_1 x_2 - 5x_1 + 1 = 0.
\]
Solution: (1.319205803329892, −1.603556555187415)^T.

Example 2.4.7
\[
x_1^2 + x_2^2 + x_3^2 - 1 = 0, \qquad 2x_1^2 + x_2^2 - 4x_3 = 0, \qquad 3x_1^2 - 4x_2^2 + x_3^2 = 0.
\]
Solution: (0.6982886099715139, 0.6285242979602138, 0.3425641896895695)^T.

Example 2.4.8
\[
x_2 x_3 + x_4(x_2 + x_3) = 0, \quad x_1 x_3 + x_4(x_1 + x_3) = 0, \quad x_1 x_2 + x_4(x_1 + x_2) = 0, \quad x_1 x_2 + x_3(x_1 + x_2) = 1.
\]
Solution: (0.5773502691896256, 0.5773502691896257, 0.5773502691896257, −0.2886751345948129)^T.


Table 2.5
Test examples, their initial guesses and number of iterations.

Example No.  Initial guess                              NM                           MNM (2.92)
2.4.1        (x1, x2)^T = (0, 0)^T                      6                            5
             (x1, x2)^T = (2, 2)^T                      Divergent                    10
2.4.2        (x1, x2)^T = (1, 1)^T                      5                            6
             (x1, x2)^T = (6, -1)^T                     Fails                        13
2.4.3        (x1, x2)^T = (0, 0)^T                      Fails                        6
             (x1, x2)^T = (1, 2)^T                      6                            6
2.4.4        (x1, x2)^T = (0, 0)^T                      Fails                        9
             (x1, x2)^T = (3, 2)^T                      5                            6
2.4.5        (x1, x2)^T = (0.45, 0.45)^T                14                           8
             (x1, x2)^T = (0, 0.5)^T                    Fails                        8
2.4.6        (x1, x2)^T = (1.5, -1.5)^T                 4                            5
             (x1, x2)^T = (0.5495, -1)^T                Converges to undesired root  7
2.4.7        (x1, x2, x3)^T = (0.5, 0.5, 0.5)^T         6                            7
             (x1, x2, x3)^T = (1.4, 1.4, -1.4)^T        Fails                        9
2.4.8        (x1, x2, x3, x4)^T = (0.5, 0.5, 0.5, -0.2)^T   6                        6
             (x1, x2, x3, x4)^T = (0.1, 0.1, 0.1, -0.1)^T   Fails                    9

2.5 Discussion and conclusions

This study presents several new second-order methods for solving scalar nonlinear
equations, unconstrained optimization problems and systems of nonlinear equations. The
numerical examples considered in this chapter show that in many cases the proposed
methods are an efficient alternative to Newton's method, which may converge slowly or
even fail. They are simple extensions of Newton's method and have well-known geometric
derivations. These methods remove the severe conditions f'(x_n) ≠ 0 (for nonlinear
equations) and f''(x_n) ≠ 0 (for unconstrained optimization problems) required by Newton's
method. The behaviours of Newton's method and the proposed modifications can be
compared through their correction factors. For example, Newton's correction factor
f(x_n)/f'(x_n) is modified to f(x_n)/(f'(x_n) − α f(x_n)), where the parameter α is chosen
such that the corresponding function values f'(x_n) and α f(x_n) have opposite signs.
However, for α = 0, if the derivative of f(x_n) is zero or almost zero, Newton's method will
either fail or diverge. Therefore, these modifications have two remarkable advantages over
Newton's method: (i) if α ≠ 0, the modified denominator is well defined and never zero,
provided x_n is not already accepted as an approximation to the required root or optimum,
and hence the methods remain well defined even if f'(x_n) = 0 or f''(x_n) = 0 happens;
(ii) the absolute value of the modified denominator is always greater than the denominator
of Newton's method, i.e. |f'(x_n)|, again provided x_n is not already accepted as an
approximation to the required root or optimum. This means that the proposed methods are
numerically more stable than Newton's method. Further, the numerical results demonstrate
that the parabolic and ellipse methods outperform Newton's method and the one-parameter
family of Newton's method for solving nonlinear equations as well as unconstrained
optimization problems.

Furthermore, for systems of nonlinear equations, a new family of Newton's method has
been proposed for finding a zero of a vector function. The beauty of the proposed family is
that it works even if the Jacobian is singular at some points. From the numerical results,
one can see that the proposed method has much better global convergence than Newton's
method and is very efficient when the Jacobian is singular.

However, the proposed methods for solving nonlinear equations, unconstrained optimization
problems and systems of nonlinear equations could not be extended to the case of complex
roots.
