
Applied Mathematics and Computation 219 (2013) 10518–10526


Expanding the applicability of a modified Gauss–Newton method for solving nonlinear ill-posed problems

0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.04.026

⇑ Corresponding author. E-mail addresses: [email protected] (I.K. Argyros), [email protected] (S. George).

Ioannis K. Argyros^a, Santhosh George^b,⇑
a Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
b Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, 757 025, India


Keywords: Gauss–Newton method; Nonlinear ill-posed problems; Tikhonov regularization; Iterative regularization method; Balancing principle

Abstract. We expand the applicability of a modified Gauss–Newton method recently presented in George (2013) [19] for the approximate solution of a nonlinear ill-posed operator equation between two Hilbert spaces. We use a center-type Lipschitz condition in our convergence analysis instead of the Lipschitz-type condition used in earlier studies such as George (2013, 2010) [19,18]. This way a tighter convergence analysis is obtained at less computational cost, since the more precise, and easier to compute, center-Lipschitz constant is used in the convergence analysis instead of the Lipschitz constant. Numerical examples are presented to show that our results apply to equations for which the earlier ones do not.


1. Introduction

Let $X$ and $Y$ be Banach spaces. Let $B_r(x)$ denote the open ball centered at $x \in X$ with radius $r > 0$, and denote by $\overline{B_r(x)}$ the closure of $B_r(x)$. Let also $L(X, Y)$ denote the space of all bounded linear operators from $X$ into $Y$.

In this study, we consider the task of approximately solving the nonlinear ill-posed equation

$$F(x) = y. \tag{1.1}$$

This equation and the task of solving it make sense only when placed in an appropriate framework. Throughout this paper we shall assume that $F : D(F) \subseteq X \to Y$ is a nonlinear operator between Hilbert spaces $X$ and $Y$, with inner product and corresponding norm denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ respectively, and $y \in Y$. We assume that (1.1) has a unique solution $\hat{x}$. For $\delta > 0$, let $y^\delta \in Y$ be available data with

$$\|y - y^\delta\| \le \delta.$$

Since (1.1) is ill-posed [10], regularization methods are used to obtain stable approximate solutions [17,20]. Iterative regularization methods are one such class of regularization methods [7–9,14,15,21].

Motivated by the earlier works [6–9,12,13,15,16,21], George considered in [18,19] a Newton-type iterative method (NTIM), defined for $A_0 = F'(x_0)$ by

$$x^\delta_{n+1,\alpha} = x^\delta_{n,\alpha} - (A_0^*A_0 + \alpha I)^{-1}\left[A_0^*(F(x^\delta_{n,\alpha}) - y^\delta) + \alpha(x^\delta_{n,\alpha} - x_0)\right], \quad x^\delta_{0,\alpha} = x_0,$$

or equivalently,

$$x^\delta_{n+1,\alpha} = x_0 - (A_0^*A_0 + \alpha I)^{-1}A_0^*\left[(F(x^\delta_{n,\alpha}) - y^\delta) - A_0(x^\delta_{n,\alpha} - x_0)\right], \quad x^\delta_{0,\alpha} = x_0, \tag{1.2}$$
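On a discretized problem, each step of (1.2) is a single linear solve with the frozen normal operator $A_0^*A_0 + \alpha I$. The following is a minimal NumPy sketch of the iteration; the toy operator $F(x) = Kx^3$, the matrix $K$, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative finite-dimensional sketch of iteration (1.2).  The toy
# operator F(x) = K x^3, the matrix K, and the parameter values are
# assumptions for demonstration only.
K = np.array([[2.0, 1.0], [1.0, 2.0]]) / 3.0

def F(x):
    return K @ x**3

def F_prime(x):
    # Jacobian of F at x: 3 * K * diag(x^2)
    return 3.0 * K * x**2

x_hat = np.ones(2)                   # "exact" solution of F(x) = y
y_delta = F(x_hat)                   # noise-free data (delta = 0)
x0 = 0.8 * np.ones(2)                # initial guess
alpha = 1e-6                         # regularization parameter
A0 = F_prime(x0)                     # frozen Jacobian A_0 = F'(x_0)
R = A0.T @ A0 + alpha * np.eye(2)    # A_0^* A_0 + alpha I

x = x0.copy()
for _ in range(40):
    # x_{n+1} = x_0 - R^{-1} A_0^* [(F(x_n) - y^delta) - A_0 (x_n - x_0)]
    x = x0 - np.linalg.solve(R, A0.T @ (F(x) - y_delta - A0 @ (x - x0)))

print(np.max(np.abs(x - x_hat)))     # iterate is close to the exact solution
```

Note that only the regularized linear system with the fixed matrix $R$ is solved at each step; the Jacobian is never re-evaluated.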


for approximately solving (1.1). The convergence analysis of (NTIM) was based on the following:

Assumption 1.1. There exists a constant $K > 0$ such that for every $x, u \in D(F)$ and $v \in X$, there exists an element $\Phi(x, u, v) \in X$ satisfying

$$[F'(x) - F'(u)]v = F'(u)\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le K\|v\|\|x - u\|$$

for all $x, u \in D(F)$ and $v \in X$.

The hypotheses of Assumption 1.1 may not hold, or may be very expensive or impossible to verify in general. In particular, as is the case for well-posed nonlinear equations, the computation of the Lipschitz constant $K$, even when this constant exists, is very difficult. Moreover, there are classes of operators for which Assumption 1.1 is not satisfied but (NTIM) converges.

In the present paper, we expand the applicability of (NTIM) under less computational cost. Let us explain how we achieve this goal. Our new convergence analysis is based on the following:

Assumption 1.2. Let $x_0 \in X$ be fixed. There exists a constant $K_0$ such that for every $u \in D(F)$ and $v \in X$, there exists an element $\Phi_0(x_0, u, v) \in X$ satisfying

$$[F'(x_0) - F'(u)]v = F'(x_0)\Phi_0(x_0, u, v), \qquad \|\Phi_0(x_0, u, v)\| \le K_0\|v\|\|x_0 - u\|.$$

Note that

$$K_0 \le K$$

holds in general, and $K/K_0$ can be arbitrarily large. The advantages of the new approach are:

(1) Assumption 1.2 is weaker than Assumption 1.1. Notice that there are classes of operators that satisfy Assumption 1.2 but do not satisfy Assumption 1.1;
(2) The computational cost of constant $K_0$ is less than that of constant $K$, even when $K_0 = K$;
(3) The sufficient convergence criteria are weaker;
(4) The computable error bounds on the distances involved (including $K_0$) are less costly and more precise than the old ones (including $K$);
(5) The information on the location of the solution is more precise; and
(6) The convergence domain of (NTIM) is larger.

These advantages are also very important in computational mathematics, since they provide, under less computational cost, a wider choice of initial guesses for (NTIM) and the computation of fewer iterates to achieve a desired error tolerance. Numerical examples for (1)–(6) are presented in Section 4 (see also Remark 3.1).

We have noticed that all the results presented in [19] hold true if the weaker Assumption 1.2 is used instead of Assumption 1.1. Therefore we list these results without proofs, which can be found in [19] using Assumption 1.1.

These advantages are obtained since we use the more precise center-Lipschitz constant instead of the Lipschitz constant. The regularization parameter $\alpha$ is chosen according to the balancing principle considered by Pereverzev and Schock in [22].

The organization of this paper is as follows: convergence analysis and the parameter choice strategy are discussed in Section 2. Section 3 deals with the implementation of the method. Numerical examples are given in Section 4.

2. Convergence analysis

Let

$$y^\delta_n := x_0 - R_\alpha^{-1}A_0^*\left[F(x^\delta_n) - y^\delta - A_0(x^\delta_n - x_0)\right] \tag{2.1}$$

and

$$x^\delta_{n+1} := x_0 - R_\alpha^{-1}A_0^*\left[F(y^\delta_n) - y^\delta - A_0(y^\delta_n - x_0)\right], \tag{2.2}$$

where $R_\alpha := A_0^*A_0 + \alpha I$ and $x_0$ is the initial guess.

Remark 2.1. It can be seen that $y^\delta_n = x^\delta_{2n-1,\alpha}$ and $x^\delta_{n+1} = x^\delta_{2n,\alpha}$, where $x^\delta_{n,\alpha}$ is defined as in (1.2).

Let

$$e^\delta_n := \|y^\delta_n - x^\delta_n\|, \quad \forall n \ge 0. \tag{2.3}$$

Hereafter, for convenience, we use the notation $x_n$, $y_n$ and $e_n$ for $x^\delta_n$, $y^\delta_n$ and $e^\delta_n$ respectively.

Let $\bar{\delta}_0 < \frac{\sqrt{\alpha_0}}{4K_0}$ and $0 < \bar{q} \le \frac{1}{K_0}\left(\sqrt{\frac{3}{2} - \frac{2K_0\bar{\delta}_0}{\sqrt{\alpha_0}}} - 1\right)$. The parameter $\alpha$ is selected from some finite set

$$D_M := \{\alpha_i : 0 < \alpha_0 < \alpha_1 < \cdots < \alpha_M\}$$


and

$$\bar{c}_q := \frac{\bar{\delta}_0}{\sqrt{\alpha_0}} + \frac{K_0}{2}\bar{q}^2 + \bar{q}.$$

Throughout this paper we assume that the operator $F$ is Fréchet differentiable at all $x \in D(F)$.

Remark 2.2 (cf. [19, Remark 3.2]). Note that if $\|x_0 - \hat{x}\| \le \bar{q}$, then by Assumption 1.2 we have

$$e_0 \le \bar{c}_q. \tag{2.4}$$

Theorem 2.3 (cf. [19, Theorem 2.4]). Let $\|x_0 - \hat{x}\| \le \bar{q}$ and $\bar{q} = K_0r$, where $r \in \left(\frac{1 - \sqrt{1 - 4K_0\bar{c}_q}}{2K_0}, \frac{1 + \sqrt{1 - 4K_0\bar{c}_q}}{2K_0}\right)$. Let $e_n$ be as in (2.3), and let $y_n$ and $x_n$ be as in (2.1) and (2.2) respectively, with $\delta \in [0, \bar{\delta}_0)$ and $\alpha \in D_M$. Then the following hold:

(a) $\|x_n - y_{n-1}\| \le \bar{q}\|y_{n-1} - x_{n-1}\|$;
(b) $\|y_n - x_n\| \le \bar{q}^2\|y_{n-1} - x_{n-1}\|$;
(c) $e_n \le \bar{q}^{2n}\bar{c}_q$;
(d) $x_n, y_n \in B_r(x_0)$.

Theorem 2.4 (cf. [19, Theorem 2.5]). Let $y_n$ and $x_n$ be as in (2.1) and (2.2) respectively, with $\delta \in (0, \bar{\delta}_0]$, and let the assumptions of Theorem 2.3 hold. Then $(x_n)$ is a Cauchy sequence in $B_r(x_0)$ and converges to $x^\delta_\alpha \in \overline{B_r(x_0)}$. Further,

$$A_0^*(F(x^\delta_\alpha) - y^\delta) + \alpha(x^\delta_\alpha - x_0) = 0 \tag{2.5}$$

and

$$\|x_n - x^\delta_\alpha\| \le \frac{\bar{q}^{2n}\bar{c}_q}{1 - \bar{q}}.$$

We will be using the following assumption to obtain an error estimate for $\|x^\delta_\alpha - \hat{x}\|$.

Assumption 2.5. There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(x_0)\|^2$ satisfying:

- $\lim_{\lambda \to 0} \varphi(\lambda) = 0$;
- $\sup_{\lambda \ge 0} \dfrac{\alpha\varphi(\lambda)}{\lambda + \alpha} \le \varphi(\alpha)$ for all $\alpha \in (0, a]$;
- there exists $v \in X$ such that

$$x_0 - \hat{x} = \varphi(A_0^*A_0)v.$$

Theorem 2.6 (cf. [19, Theorem 3.1]). Let $x^\delta_\alpha$ be as in (2.5). Then

$$\|x^\delta_\alpha - \hat{x}\| \le \frac{1}{1 - \bar{q}}\left(\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)\right).$$

2.1. Error bounds under source conditions

Combining the estimates in Theorems 2.4 and 2.6 we obtain the following.

Theorem 2.7. Let $x^\delta_n$ be defined as in (2.2). If all assumptions of Theorems 2.4 and 2.6 are fulfilled, then

$$\|x^\delta_n - \hat{x}\| \le \frac{\bar{q}^{2n}}{1 - \bar{q}}\bar{c}_q + \frac{1}{1 - \bar{q}}\left(\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)\right).$$

Further, if $n_\delta := \min\{n : \bar{q}^{2n} < \frac{\delta}{\sqrt{\alpha}}\}$, then

$$\|x^\delta_{n_\delta} - \hat{x}\| \le \tilde{C}\left(\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)\right),$$

where $\tilde{C} := \frac{1}{1 - \bar{q}}(\bar{c}_q + 1)$.


2.2. A priori choice of the parameter

Observe that the upper bound $\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)$ in Theorem 2.7 is of optimal order for the choice $\alpha := \alpha_\delta$ which satisfies $\frac{\delta}{\sqrt{\alpha_\delta}} = \varphi(\alpha_\delta)$. Now, using the function $\psi(\lambda) := \lambda\sqrt{\varphi^{-1}(\lambda)}$, $0 < \lambda \le a$, we have $\delta = \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta) = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}[\psi^{-1}(\delta)]$. Here $\varphi^{-1}$ denotes the inverse of the function $\varphi$.
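For a concrete $\varphi$, the defining relation $\delta/\sqrt{\alpha} = \varphi(\alpha)$ can be solved numerically. Below is a small sketch using bisection; the choice $\varphi(\lambda) = \lambda$ (the source function appearing later in Example 4.1) and the bracketing interval are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the a priori choice: find alpha_delta solving
# delta / sqrt(alpha) = phi(alpha) by bisection.  g(alpha) below is
# strictly decreasing, so it has a unique sign change in the bracket.
def a_priori_alpha(delta, phi, lo=1e-12, hi=1e3, iters=200):
    g = lambda a: delta / np.sqrt(a) - phi(a)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = 1e-3
alpha_d = a_priori_alpha(delta, phi=lambda lam: lam)
# For phi(lambda) = lambda one has psi(lambda) = lambda**1.5, hence
# alpha_delta = phi^{-1}[psi^{-1}(delta)] = delta**(2/3).
print(alpha_d, delta ** (2.0 / 3.0))
```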

Theorem 2.8. Suppose that all assumptions of Theorems 2.4 and 2.6 are fulfilled. For $\delta > 0$, let $\alpha_\delta = \varphi^{-1}[\psi^{-1}(\delta)]$ and let $n_\delta$ be as in Theorem 2.7 with $\alpha = \alpha_\delta$. Then

$$\|x^\delta_{n_\delta} - \hat{x}\| = O\left(\psi^{-1}(\delta)\right).$$

2.3. Adaptive choice of the parameter

In the balancing principle considered by Pereverzev and Schock in [22], the regularization parameter $\alpha = \alpha_i$ is selected from some finite set

$$D_N := \{\alpha_i : 0 < \alpha_0 < \alpha_1 < \cdots < \alpha_N\}.$$

Let

$$n_i = \min\left\{n : \bar{q}^{2n} \le \frac{\delta}{\sqrt{\alpha_i}}\right\}$$

and let $x^\delta_{n_i,\alpha_i} := x^\delta_{n_i}$, where $x^\delta_{n_i}$ is defined as in (2.2) with $\alpha = \alpha_i$ and $n = n_i$. Then from Theorem 2.7, we have

$$\|x^\delta_{n_i,\alpha_i} - \hat{x}\| \le \tilde{C}\left(\frac{\delta}{\sqrt{\alpha_i}} + \varphi(\alpha_i)\right), \quad \forall i = 1, 2, \ldots, N.$$

Precisely, we choose the regularization parameter $\alpha = \alpha_k$ from the set $D_N$ defined by

$$D_N := \{\alpha_i = \mu^i\alpha_0 : i = 1, 2, \ldots, N\},$$

where $\mu > 1$. To obtain a conclusion from this parameter choice, we consider all possible functions $\varphi$ satisfying Assumption 2.5 and $\varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}}$. Any such function is called admissible for $\hat{x}$, and it can be used as a measure for the convergence of $x^\delta_\alpha \to \hat{x}$ (see [23]).

The main result of this section is the following.

Theorem 2.9. Assume that there exists $i \in \{0, 1, \ldots, N\}$ such that $\varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}}$. Let the assumptions of Theorems 2.4 and 2.6 be fulfilled, and let

$$l := \max\left\{i : \varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}}\right\} < N,$$

$$k := \max\left\{i : \forall j = 1, 2, \ldots, i, \ \|x^\delta_{n_i,\alpha_i} - x^\delta_{n_j,\alpha_j}\| \le 4\tilde{C}\frac{\delta}{\sqrt{\alpha_j}}\right\},$$

where $\tilde{C}$ is as in Theorem 2.7. Then $l \le k$ and

$$\|x^\delta_{n_k,\alpha_k} - \hat{x}\| \le 6\tilde{C}\mu\,\psi^{-1}(\delta).$$

Proof. The proof is analogous to the proof of Theorem 4.4 in [18] and is therefore omitted here.

3. Implementation of the method

Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 2.9 involves the following steps:

- Choose $\alpha_0 > 0$ such that $\bar{\delta}_0 < \frac{\sqrt{\alpha_0}}{4K_0}$, and $\mu > 1$.
- Choose $N$ big enough, but not too large, and set $\alpha_i := \mu^i\alpha_0$, $i = 0, 1, 2, \ldots, N$.
- Choose $\bar{q} \le \frac{1}{K_0}\left(\sqrt{\frac{3}{2} - \frac{2K_0\bar{\delta}_0}{\sqrt{\alpha_0}}} - 1\right)$.


3.1. Algorithm

1. Set $i = 0$.
2. Choose $n_i = \min\{n : \bar{q}^{2n} \le \frac{\delta}{\sqrt{\alpha_i}}\}$.
3. Solve $x^\delta_{n_i,\alpha_i} = x^\delta_{n_i}$ by using the iterations (2.1) and (2.2) with $n = n_i$ and $\alpha = \alpha_i$.
4. If $\|x^\delta_{\alpha_i} - x^\delta_{\alpha_j}\| > 4\tilde{C}\frac{\delta}{\sqrt{\alpha_j}}$ for some $j < i$, then take $k = i - 1$ and return $x^\delta_{\alpha_k}$.
5. Else, set $i = i + 1$ and return to step 2.
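The steps above can be sketched in code as follows. This is only a hedged skeleton: `solve_nim` is a placeholder standing in for actually running iterations (2.1) and (2.2), the iterates are scalars, and all constants in the usage example are illustrative choices.

```python
import math

# Hedged skeleton of the balancing algorithm of Section 3.1.  `solve_nim`
# stands in for running iterations (2.1)-(2.2) with the given alpha and n.
def balancing(solve_nim, delta, q_bar, C_tilde, alpha0, mu, N):
    alphas = [alpha0 * mu**i for i in range(N + 1)]
    xs = []                                    # accepted iterates x^delta_{n_i, alpha_i}
    for i, a in enumerate(alphas):
        # step 2: n_i = min{ n : q_bar^(2n) <= delta / sqrt(alpha_i) }
        n_i = 0
        while q_bar ** (2 * n_i) > delta / math.sqrt(a):
            n_i += 1
        x_i = solve_nim(a, n_i)                # step 3
        # step 4: the first violation of the discrepancy test stops the loop
        for j, x_j in enumerate(xs):
            if abs(x_i - x_j) > 4 * C_tilde * delta / math.sqrt(alphas[j]):
                return xs[-1]                  # k = i - 1
        xs.append(x_i)                         # step 5
    return xs[-1]

# Stand-in solver returning alpha itself, just to exercise the control flow.
result = balancing(lambda a, n: a, delta=1e-6, q_bar=0.5,
                   C_tilde=1.0, alpha0=1e-4, mu=2.0, N=10)
print(result)
```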

Remark 3.1. If $K_0 = K$, then the results obtained in this paper reduce to the ones in [19], which in turn improved the results in [18]. However, if $K_0 < K$, then $\bar{q}$, $\bar{c}_q$ and $\bar{\delta}_0$ are more precise than the old $q$, $c_q$ and $\delta_0$ respectively. For example, we have

$$\delta_0 < \bar{\delta}_0, \quad \bar{q} < q, \quad \text{etc.}$$

Notice, in particular, that for the corresponding ratios of convergence,

$$\frac{\bar{q}}{q} = \frac{K_0}{K} \to 0 \quad \text{as} \quad \frac{K_0}{K} \to 0.$$

4. Numerical example

We present the numerical examples in this section. In the first example, we verify Assumption 1.2 and apply (NTIM) to solve an integral equation.

Example 4.1. We apply the algorithm by choosing a sequence of finite dimensional subspaces $(V_n)$ of $X$ with $\dim V_n = n + 1$. Precisely, we choose $V_n$ as the linear span of $\{v_1, v_2, \ldots, v_{n+1}\}$, where $v_i$, $i = 1, 2, \ldots, n+1$, are the linear splines on a uniform grid of $n + 1$ points in $[0, 1]$.

We consider the same example of a nonlinear integral operator as in [25, Section 4.3]. Let $F : D(F) \subseteq H^1(0,1) \to L^2(0,1)$ be defined by

$$F(u) := \int_0^1 k(t, s)u^3(s)\,ds,$$

where

$$k(t, s) = \begin{cases} (1 - t)s, & 0 \le s \le t \le 1, \\ (1 - s)t, & 0 \le t \le s \le 1. \end{cases}$$

The Fréchet derivative of $F$ is given by

$$F'(u)w = 3\int_0^1 k(t, s)(u(s))^2w(s)\,ds. \tag{4.1}$$

Note that for $v > 0$ and $x_0(t) \ge \kappa > 0$,

$$(F'(v) - F'(x_0))w = 3\int_0^1 k(t, s)x_0^2(s)\left(\frac{v^2(s)}{x_0^2(s)} - 1\right)w(s)\,ds =: F'(x_0)\Phi(v, x_0, w),$$

where $\Phi(v, x_0, w) = \left(\frac{v^2}{x_0^2} - 1\right)w$. So Assumption 1.2 is satisfied (cf. [24, Example 2.7]).

In our computation, we take $y(t) = \frac{t - t^{11}}{110}$ and $y^\delta = y + \delta$. Then the exact solution is

$$\hat{x}(t) = t^3.$$

We use

$$x_0(t) = t^3 + \frac{3}{56}(t - t^8)$$

as our initial guess, so that the function $x_0 - \hat{x}$ satisfies the source condition

$$x_0 - \hat{x} = \varphi(F'(x_0))\left(\frac{\hat{x}}{x_0}\right)^2,$$

where $\varphi(\lambda) = \lambda$.
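As a sanity check of the data above, one can verify numerically that $F(\hat{x}) = y$ for $\hat{x}(t) = t^3$ and $y(t) = (t - t^{11})/110$. The sketch below uses an assumed composite trapezoidal quadrature on a uniform grid (our discretization choice for illustration, not the paper's spline projection).

```python
import numpy as np

# Numerical sanity check for Example 4.1 under trapezoidal quadrature.
def k(t, s):
    # Green-type kernel of the example
    return np.where(s <= t, (1 - t) * s, (1 - s) * t)

m = 2001
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1))          # trapezoid weights
w[0] = w[-1] = 0.5 / (m - 1)

def F(u_vals, t):
    # F(u)(t) = int_0^1 k(t, s) u(s)^3 ds
    return np.sum(k(t, s) * u_vals**3 * w)

x_hat = s**3                            # exact solution x_hat(t) = t^3
t = 0.3
y_exact = (t - t**11) / 110.0           # y(t) = (t - t^11)/110
print(abs(F(x_hat, t) - y_exact))       # small quadrature error
```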


Observe that while performing numerical computations on the finite dimensional subspaces $(V_n)$ of $X$, one has to consider the operator $P_nF'(\cdot)P_n$ instead of $F'(\cdot)$, where $P_n$ is the orthogonal projection onto $V_n$. This incurs an additional error $\|P_nF'(\cdot)P_n - F'(\cdot)\| = O(\|F'(\cdot)(I - P_n)\|)$.

Let $\|F'(\cdot)(I - P_n)\| \le \varepsilon_n$. For the operator $F'(\cdot)$ defined in (4.1), $\varepsilon_n = O(n^{-2})$ (cf. [11]). Thus we expect to obtain the rate of convergence $O\left((\delta + \varepsilon_n)^{1/2}\right)$.

We choose $\alpha_0 = 1.7 \times (\delta + \varepsilon_n)$, $\mu = 1.7$, $K_0 = 1$ and $\bar{q} = 0.81$. The results of the computation are presented in Table 1. Plots of the exact and approximate solutions are given in Figs. 1 and 2.

Table 1
Iterations and corresponding error estimates.

n       k   n_k   δ + ε_n   α_k      ‖x^δ_{n_k,α_k} − x̂‖   ‖x^δ_{n_k,α_k} − x̂‖/(δ + ε_n)^{1/2}
8       2   9     0.0682    0.5699   0.2613                 1.0002
16      2   9     0.0671    0.5601   0.1962                 0.7577
32      2   9     0.0668    0.5576   0.1456                 0.5635
64      2   9     0.0667    0.5570   0.1089                 0.4219
128     2   9     0.0667    0.5569   0.0846                 0.3275
256     2   9     0.0667    0.5568   0.0714                 0.2764
512     2   9     0.0667    0.5568   0.0707                 0.2738
1024    2   9     0.0667    0.5568   0.0661                 0.2558

Fig. 1. Curves of the exact and approximate solutions.

Fig. 2. Curves of the exact and approximate solutions.


In the next two cases, we present examples of nonlinear equations where Assumption 1.2 is satisfied but Assumption 1.1 is not.

Example 4.2. Let $X = Y = \mathbb{R}$, $D = [0, \infty)$, $x_0 = 1$, and define the function $F$ on $D$ by

$$F(x) = \frac{x^{1 + \frac{1}{i}}}{1 + \frac{1}{i}} + c_1x + c_2, \tag{4.2}$$

where $c_1, c_2$ are real parameters and $i > 2$ is an integer. Then $F'(x) = x^{1/i} + c_1$ is not Lipschitz on $D$. Hence, Assumption 1.1 is not satisfied. However, the central Lipschitz condition of Assumption 1.2 holds with $K_0 = 1$.

Indeed, we have

$$\|F'(x) - F'(x_0)\| = |x^{1/i} - x_0^{1/i}| = \frac{|x - x_0|}{x_0^{\frac{i-1}{i}} + \cdots + x^{\frac{i-1}{i}}},$$

so

$$\|F'(x) - F'(x_0)\| \le K_0|x - x_0|.$$
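A quick numerical check of this example, with the illustrative choices $i = 3$ and $c_1 = c_2 = 0$, confirms both halves of the claim: the center-Lipschitz ratio stays below $K_0 = 1$, while difference quotients of $F'$ blow up near $0$.

```python
import numpy as np

# Numerical check of Example 4.2 with the illustrative choices i = 3 and
# c1 = c2 = 0, so that F'(x) = x**(1/3) and x0 = 1.
i = 3
Fp = lambda x: x ** (1.0 / i)
x0 = 1.0

xs = np.linspace(1e-6, 5.0, 100001)
xs = xs[np.abs(xs - x0) > 1e-5]          # avoid the 0/0 point x = x0
ratios = np.abs(Fp(xs) - Fp(x0)) / np.abs(xs - x0)
print(ratios.max())                      # stays below K0 = 1

# F' is not Lipschitz on D: its difference quotients blow up near 0.
for h in (1e-2, 1e-4, 1e-6):
    print((Fp(2 * h) - Fp(h)) / h)       # grows without bound as h -> 0
```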

Example 4.3. We consider the integral equations

$$u(s) = f(s) + \lambda\int_a^b G(s, t)u(t)^{1 + 1/n}\,dt, \quad n \in \mathbb{N}. \tag{4.3}$$


Here, $f$ is a given continuous function satisfying $f(s) > 0$, $s \in [a, b]$, $\lambda$ is a real number, and the kernel $G$ is continuous and positive on $[a, b] \times [a, b]$.

For example, when $G(s, t)$ is the Green kernel, the corresponding integral equation is equivalent to the boundary value problem

$$u'' = \lambda u^{1 + 1/n}, \quad u(a) = f(a), \quad u(b) = f(b).$$

These types of problems have been considered in [1–5]. Equations of the form (4.3) generalize equations of the form

$$u(s) = \int_a^b G(s, t)u(t)^n\,dt, \tag{4.4}$$

studied in [1–5]. Instead of (4.3), we can try to solve the equation $F(u) = 0$, where

$$F : X \subseteq C[a, b] \to C[a, b], \quad X = \{u \in C[a, b] : u(s) \ge 0, \ s \in [a, b]\},$$

and

$$F(u)(s) = u(s) - f(s) - \lambda\int_a^b G(s, t)u(t)^{1 + 1/n}\,dt.$$

The norm we consider is the max-norm. The derivative $F'$ is given by

$$F'(u)v(s) = v(s) - \lambda\left(1 + \frac{1}{n}\right)\int_a^b G(s, t)u(t)^{1/n}v(t)\,dt, \quad v \in X.$$

First of all, we notice that $F'$ does not satisfy a Lipschitz-type condition in $X$. Let us consider, for instance, $[a, b] = [0, 1]$, $G(s, t) = 1$ and $y(t) = 0$. Then $F'(y)v(s) = v(s)$ and

$$\|F'(x) - F'(y)\| = |\lambda|\left(1 + \frac{1}{n}\right)\int_0^1 x(t)^{1/n}\,dt.$$

If $F'$ were a Lipschitz function, then

$$\|F'(x) - F'(y)\| \le L_1\|x - y\|,$$

or, equivalently, the inequality

$$\int_0^1 x(t)^{1/n}\,dt \le L_2\max_{s \in [0, 1]} x(s) \tag{4.5}$$

would hold for all $x \in X$ and for a constant $L_2$. But this is not true. Consider, for example, the functions

$$x_j(t) = \frac{t}{j}, \quad j \ge 1, \ t \in [0, 1].$$

If these are substituted into (4.5), we obtain

$$\frac{1}{j^{1/n}(1 + 1/n)} \le \frac{L_2}{j} \iff j^{1 - 1/n} \le L_2\left(1 + \frac{1}{n}\right), \quad \forall j \ge 1.$$

This inequality fails as $j \to \infty$. Therefore, condition (4.5) is not satisfied in this case, and hence Assumption 1.1 is not satisfied. However, Assumption 1.2 does hold. To show this, let $x_0(t) = f(t)$ and $c = \min_{s \in [a, b]} f(s) > 0$. Then, for $v \in X$,

$$\|[F'(x) - F'(x_0)]v\| = |\lambda|\left(1 + \frac{1}{n}\right)\max_{s \in [a, b]}\left|\int_a^b G(s, t)\big(x(t)^{1/n} - f(t)^{1/n}\big)v(t)\,dt\right| \le |\lambda|\left(1 + \frac{1}{n}\right)\max_{s \in [a, b]}\int_a^b G_n(s, t)\,dt\,\|v\|,$$

where

$$G_n(s, t) = \frac{G(s, t)\,|x(t) - f(t)|}{x(t)^{(n-1)/n} + x(t)^{(n-2)/n}f(t)^{1/n} + \cdots + f(t)^{(n-1)/n}}.$$

Hence,

$$\|[F'(x) - F'(x_0)]v\| \le \frac{|\lambda|(1 + 1/n)}{c^{(n-1)/n}}\max_{s \in [a, b]}\int_a^b G(s, t)\,dt\,\|x - x_0\|\,\|v\| \le K_0\|x - x_0\|\,\|v\|,$$

where $K_0 = \frac{|\lambda|(1 + 1/n)}{c^{(n-1)/n}}N$ and $N = \max_{s \in [a, b]}\int_a^b G(s, t)\,dt$. Then Assumption 1.2 holds for sufficiently small $|\lambda|$.

In the last example, we show that $K/K_0$ can be arbitrarily large for certain nonlinear equations.


Example 4.4. Let $X = D(F) = \mathbb{R}$, $x_0 = 0$, and define the function $F$ on $D(F)$ by

$$F(x) = d_0x + d_1 + d_2\sin(e^{d_3x}), \tag{4.6}$$

where $d_i$, $i = 0, 1, 2, 3$, are given parameters. Then it can easily be seen that, for $d_3$ sufficiently large and $d_2$ sufficiently small, $K/K_0$ can be arbitrarily large.
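A rough numerical experiment illustrates the claim. Here $K$ and $K_0$ are replaced by grid estimates of the full and center Lipschitz constants of $F'$ on $[-1, 1]$ (a simplification of the constants in Assumptions 1.1 and 1.2), and the parameter values $d_0, d_2, d_3$ are illustrative choices.

```python
import numpy as np

# Rough numerical illustration of Example 4.4 on [-1, 1] with x0 = 0.
d0, d2 = 1.0, 1e-3

def Fp(x, d3):
    # F'(x) = d0 + d2 * d3 * exp(d3 x) * cos(exp(d3 x))
    return d0 + d2 * d3 * np.exp(d3 * x) * np.cos(np.exp(d3 * x))

ratios = []
for d3 in (2.0, 5.0, 8.0):
    x = np.linspace(-1.0, 1.0, 400001)
    mask = np.abs(x) > 1e-9
    # center constant estimate: sup_x |F'(x) - F'(0)| / |x|
    K0 = np.max(np.abs(Fp(x[mask], d3) - Fp(0.0, d3)) / np.abs(x[mask]))
    # full constant estimate via neighbouring grid points (roughly sup |F''|)
    K = np.max(np.abs(np.diff(Fp(x, d3))) / np.diff(x))
    ratios.append(K / K0)
    print(d3, K / K0)   # the ratio K / K0 grows rapidly with d3
```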

References

[1] I.K. Argyros, Convergence and Application of Newton-Type Iterations, Springer, 2008.
[2] I.K. Argyros, Approximating solutions of equations using Newton's method with a modified Newton's method iterate as a starting point, Rev. Anal. Numer. Theor. Approx. 36 (2007) 123–138.
[3] I.K. Argyros, A semilocal convergence for directional Newton methods, Math. Comput. (AMS) 80 (2011) 327–343.
[4] I.K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton's method, J. Complexity 28 (2012) 364–387.
[5] I.K. Argyros, Y.J. Cho, S. Hilout, Numerical Methods for Equations and its Applications, CRC Press, Taylor and Francis, New York, 2012.
[6] A. Bakushinsky, A. Smirnova, On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems, Numer. Funct. Anal. Optim. 26 (2005) 35–48.
[7] A.B. Bakushinskii, The problem of convergence of the iteratively regularized Gauss–Newton method, Comput. Math. Math. Phys. 32 (1992) 1353–1359.
[8] A.B. Bakushinskii, Iterative methods without saturation for solving degenerate nonlinear operator equations, Dokl. Akad. Nauk 344 (1995) 7–8.
[9] B. Blaschke, A. Neubauer, O. Scherzer, On convergence rates for the iteratively regularized Gauss–Newton method, IMA J. Numer. Anal. 17 (1997) 421–436.
[10] J. Hadamard, Lectures on Cauchy's Problem in Linear Partial Differential Equations, Dover Publ., New York, 1952.
[11] C.W. Groetsch, J.T. King, D. Murio, Asymptotic analysis of a finite element method for Fredholm equations of the first kind, in: C.T.H. Baker, G.F. Miller (Eds.), Treatment of Integral Equations by Numerical Methods, Academic Press, London, 1982, pp. 1–11.
[12] T. Hohage, Logarithmic convergence rate of the iteratively regularized Gauss–Newton method for an inverse potential and an inverse scattering problem, Inverse Prob. 13 (1997) 1279–1299.
[13] T. Hohage, Regularization of exponentially ill-posed problems, Numer. Funct. Anal. Optim. 21 (2000) 439–464.
[14] B. Kaltenbacher, A posteriori parameter choice strategies for some Newton-type methods for the regularization of nonlinear ill-posed problems, Numer. Math. 79 (1998) 501–528.
[15] B. Kaltenbacher, A note on logarithmic convergence rates for nonlinear Tikhonov regularization, J. Inverse Ill-Posed Prob. 16 (2008) 79–88.
[16] S. Langer, T. Hohage, Convergence analysis of an inexact iteratively regularized Gauss–Newton method under general source conditions, J. Inverse Ill-Posed Prob. 15 (2007) 19–35.
[17] H.W. Engl, K. Kunisch, A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
[18] S. George, On convergence of regularized modified Newton's method for nonlinear ill-posed problems, J. Inverse Ill-Posed Prob. 18 (2010) 133–146.
[19] S. George, Newton type iteration for Tikhonov regularization of nonlinear ill-posed problems, J. Math. 2013 (2013), Article ID 439316, 9 pp., http://dx.doi.org/10.1155/2013/439316.
[20] H.W. Engl, K. Kunisch, A. Neubauer, Convergence rates for Tikhonov regularization of nonlinear ill-posed problems, Inverse Prob. 5 (1989) 523–540.
[21] P. Mahale, M.T. Nair, A simplified generalized Gauss–Newton method for nonlinear ill-posed problems, Math. Comput. 78 (265) (2009) 171–184.
[22] S. Pereverzev, E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM J. Numer. Anal. 43 (5) (2005) 2060–2076.
[23] S. Lu, S.V. Pereverzev, Sparsity reconstruction by the standard Tikhonov method, RICAM-Report No. 2008-17.
[24] O. Scherzer, H.W. Engl, K. Kunisch, Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems, SIAM J. Numer. Anal. 30 (6) (1993) 1796–1838.
[25] E.V. Semenova, Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators, Comput. Methods Appl. Math. 4 (2010) 444–454.

