Fixed Point Iteration - Maplesoft - Technical Computing
Fixed Point Iteration

Univ.-Prof. Dr.-Ing. habil. Josef BETTEN RWTH Aachen University Mathematical Models in Materials Science and Continuum Mechanics Augustinerbach 4-20 D-52056 A a c h e n , Germany

<[email protected]>

Abstract

This worksheet is concerned with finding numerical solutions of non-linear equations in a single unknown. Using MAPLE 12 the fixed-point iteration has been applied to some examples.

Keywords: zero form and fixed point form; LIPSCHITZ constant; a-priori and a-posteriori error estimation; BANACH's fixed-point theorem

Introduction

A value x = p is called a fixed point for a given function g(x) if g(p) = p. In finding the solution x = p of f(x) = 0 one can define functions g(x) with a fixed point at x = p in several ways, for example, as g(x) = x - f(x) or as g(x) = x - h(x)*f(x), where h(x) is a continuous function not equal to zero within the considered interval [a, b]. The iteration process is expressed by > restart:> x[n+1]:=g(x[n]); # n = 0,1,2,...

x[n+1] := g(x[n])

with a selected initial value for n = 0 in the neighbourhood of x = p.
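As an illustration outside MAPLE, the scheme can be sketched in Python (a minimal sketch; the function name, tolerance, and iteration cap are our own choices, not part of the worksheet):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x[n+1] = g(x[n]) until two successive values agree to tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = cos(x) has a fixed point near 0.739 (the first example below):
p, n = fixed_point(math.cos, 0.5)
```

The same loop underlies every example in this worksheet; only g(x) and the starting value change.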

BANACH's Fixed-Point Theorem

Let g(x) be a continuous function on [a, b]. Assume, in addition, that g'(x) exists on (a, b) and that a constant L in [0, 1) exists with > restart:> abs(diff(g(x),x))<=L;


| d/dx g(x) | <= L

for all x in [a, b]. Then, for any selected initial value in [a, b] the sequence defined by > x[n+1]:=g(x[n]); # n = 0,1,2,...

x[n+1] := g(x[n])

converges to the unique fixed point p in [a, b]. The constant L is known as the LIPSCHITZ constant. Based upon the mean value theorem we arrive from the above assumption at > abs(g(x)-g(xi))<=L*abs(x-xi);

| g(x) - g(xi) | <= L * | x - xi |

for all x and xi in [a, b]. The BANACH fixed-point theorem is sometimes called the contraction mapping principle. From the fixed-point theorem one can prove the following error estimates:

a-priori error estimate:

> abs(x[k]-p)<=(L^k/(1-L))*abs(x[1]-x[0]); alpha:=rhs(%);

| x[k] - p | <= L^k * | x[1] - x[0] | / (1 - L)

alpha := L^k * | x[1] - x[0] | / (1 - L)

> The rate of convergence depends on the factor L^k . The smaller the value of L , the faster the convergence, which is very slow if the LIPSCHITZ constant L is close to one. The necessary number of iterations for a given error "epsilon" can be calculated by the following formula [see equation (8.36) in: BETTEN, J.: Finite Elemente für Ingenieure 2, zweite Auflage, 2004, Springer-Verlag, Berlin / Heidelberg / New York]: > iterations[epsilon]>=ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L);

iterations[epsilon] >= ln( (1 - L) * epsilon / | x[1] - x[0] | ) / ln(L)
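This iteration-count formula is easy to evaluate numerically; the following Python sketch (with L, epsilon, and the starting values taken as assumed inputs) rounds it up to the next whole iteration:

```python
import math

def apriori_iterations(L, eps, x0, x1):
    """Smallest integer k with L^k/(1-L)*|x1 - x0| <= eps (a-priori bound)."""
    return math.ceil(math.log((1 - L) * eps / abs(x1 - x0)) / math.log(L))

# For L = 0.5 and |x1 - x0| = 0.5 (the values of the second example below):
k = apriori_iterations(0.5, 1e-3, 0.0, 0.5)
```

With these inputs k = 10, in agreement with the table for epsilon = 1/1000 computed later in the worksheet.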

>

a-posteriori error estimate:

> restart:> abs(x[k]-p)<=(L/(1-L))*abs(x[k]-x[k-1]); beta:=rhs(%);

| x[k] - p | <= L * | x[k] - x[k-1] | / (1 - L)

beta := L * | x[k] - x[k-1] | / (1 - L)

where > restart:


> alpha>=beta;

beta <= alpha
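Both estimates, and the inequality beta <= alpha, can be checked numerically. A Python sketch for g(x) = cos(x) follows; the Lipschitz bound L = 0.78 for cos on the range of the iterates is our own assumption, chosen above max |sin x| there:

```python
import math

L = 0.78                      # assumed Lipschitz bound for cos(x) on [0.5, 0.88]
x = [0.5]
for _ in range(10):
    x.append(math.cos(x[-1]))

k = 10
p_ref = 0.7390851332          # reference fixed point of cos(x)
alpha = L**k / (1 - L) * abs(x[1] - x[0])     # a-priori bound
beta = L / (1 - L) * abs(x[k] - x[k - 1])     # a-posteriori bound
# The true error abs(x[k] - p_ref) lies below beta, which lies below alpha.
```

The a-posteriori bound uses information from the run itself, which is why it is the sharper of the two.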

Examples

In the following some examples of fixed point iterations are discussed. The first example is concerned with finding the solution of f(x) = 0, where > restart:> f(x):=x-cos(x);

f(x) := x - cos(x)

Using the MAPLE command "fsolve" we immediately arrive at the solution: > Digits:=4; p:=fsolve(f(x)=0);

Digits := 4
p := 0.7391

The fixed point form is x = g(x) with g(x) = cos(x). Thus, one can read the fixed point from the following Figure: > alias(H=Heaviside,th=thickness,sc=scaling,co=color):> p[1]:=plot({x,cos(x)},x=0..1,sc=constrained,th=3,co=black):> p[2]:=plot(0.7391*H(x-0.7391),x=0.7390..0.7392,co=black):> p[3]:=plot({1,H(x-1)},x=0..1.01,co=black,

title="Fixed Point at x = 0.7391"):> p[4]:=plot([[0.7391,0.7391]],style=point,symbol=circle,

symbolsize=30):> plots[display](seq(p[k],k=1..4));

> The graph y = g(x) is contained in the square { (x, y) | x in [0, 1], y in [0, 1] }, id est: " g " maps the interval [0, 1] to itself. The fixed point iteration is given as follows:> starting_point:=x[0]=0.5; x[1]:=evalf(subs(x=0.5,cos(x)));

starting_point := (x[0] = 0.5)
x[1] := 0.8776


> for i from 2 to 23 do x[i]:=evalf(subs(x=%,cos(x))) od;

x[2] := 0.6390
x[3] := 0.8027
x[4] := 0.6948
x[5] := 0.7682
x[6] := 0.7192
x[7] := 0.7523
x[8] := 0.7301
x[9] := 0.7451
x[10] := 0.7350
x[11] := 0.7418
x[12] := 0.7373
x[13] := 0.7403
x[14] := 0.7383
x[15] := 0.7396
x[16] := 0.7387
x[17] := 0.7393
x[18] := 0.7389
x[19] := 0.7392
x[20] := 0.7390
x[21] := 0.7391
x[22] := 0.7391
x[23] := 0.7391

> Using the starting point x[0] = 0.5 we need 21 iterations in order to arrive at the solution p = 0.7391. If we choose x[0] = 0 or x[0] = 1 we arrive at the fixed point after 23 or 22 iterations, respectively. The next example is concerned with finding the solution of f(x) = 0, where > restart:> f(x):=x-(1/2)*(sin(x)+cos(x));

f(x) := x - (1/2)*sin(x) - (1/2)*cos(x)

Using the MAPLE command "fsolve" we immediately arrive at the solution: > p:=fsolve(f(x)=0);

p := 0.7048120020

The fixed point form is x = g(x) with g(x) = [sin(x) + cos(x)]/2. Thus, one can read the fixed point from the following Figure: > alias(H=Heaviside,th=thickness,sc=scaling,co=color):


> p[1]:=plot({x,(sin(x)+cos(x))/2},x=0..1, sc=constrained,th=3,co=black):

> p[2]:=plot({1,H(x-1),-H(x-1.001),0.7048*H(x-0.7048), 0.7048*H(x-0.7058)},x=0..1.001,co=black, title="Fixed Point at x = 0.7048"):

> p[3]:=plot([[0.7048,0.7048]],style=point,symbol=circle, symbolsize=30):

> plots[display](seq(p[k],k=1..3));

> The fixed point iteration is given as follows: > g(x):=(sin(x)+cos(x))/2;> x[0]:=0; x[1]:=evalf(subs(x=0,g(x)),2);

g(x) := (1/2)*sin(x) + (1/2)*cos(x)

x[0] := 0
x[1] := 0.50

> for i from 2 to 10 do x[i]:=evalf(subs(x=%,g(x))) od;

x[2] := 0.6785040503
x[3] := 0.7030708012
x[4] := 0.7047118221
x[5] := 0.7048062961
x[6] := 0.7048116773
x[7] := 0.7048119834
x[8] := 0.7048120009
x[9] := 0.7048120019
x[10] := 0.7048120020

We see that the 10th iteration x[10] is identical to the above MAPLE solution. The following steps are concerned with both the a-priori and the a-posteriori error estimate. At first we determine the LIPSCHITZ constant from the absolute derivative of the function g(x): > absolute_derivative:=abs(Diff(g(xi),xi))=abs(diff(g(x),x));

absolute_derivative := ( | d/dxi g(xi) | = | (1/2)*cos(x) - (1/2)*sin(x) | )
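The iteration and the constant L = 0.50 can be cross-checked in Python (an independent sketch in double precision rather than MAPLE's Digits setting):

```python
import math

g = lambda x: (math.sin(x) + math.cos(x)) / 2
dg = lambda x: (math.cos(x) - math.sin(x)) / 2   # g'(x)

x = 0.0
for _ in range(10):
    x = g(x)
# x now matches the fsolve result 0.7048120020; |g'(0)| = 0.5 is the
# largest value of the absolute derivative on [0, 1].
```

Note that |g'| shrinks to about 0.057 near the fixed point itself, which explains why the convergence is much faster than the worst-case bound L = 0.5 suggests.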

> p[1]:=plot(abs(diff(g(x),x)),x=0..1,sc=constrained, th=3,co=black):

> p[2]:=plot({H(x),H(x-1),-H(x-1.001)},x=0..1.001,co=black, title="Absolute Derivative of g(x)"):

> plots[display](p[1],p[2]);

The greatest value of the absolute derivative in the neighbourhood of the expected fixed point, here in the interval x = [0, 1], may be taken as the LIPSCHITZ constant: > L:=evalf(subs(x=0,abs(diff(g(x),x))),2);

L := 0.50

Assuming " i " iterations, the a-priori error estimate is given by: > for i from 1 by 3 to 10 do
a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),5) od;

a_priori_estimate[1] := 0.50000
a_priori_estimate[4] := 0.062500
a_priori_estimate[7] := 0.0078125
a_priori_estimate[10] := 0.00097655

> The necessary number of iterations for a given error " epsilon " can be calculated by the formula derived above: > iterations[epsilon]>=ln((1-L_)*epsilon/abs(X[1]-X[0]))/ln(L_);

iterations[epsilon] >= ln( (1 - L_) * epsilon / | X[1] - X[0] | ) / ln(L_)

> iterations[epsilon]:= evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),3);


iterations[epsilon] := -1.44 ln(1.00 epsilon)

> for i from 1 to 5 do iterations[epsilon=10^(-i)]:=
evalf(ln((1-L)*10^(-i)/abs(x[1]-x[0]))/ln(L),2) od;

iterations[epsilon = 1/10] := 3.3
iterations[epsilon = 1/100] := 6.7
iterations[epsilon = 1/1000] := 10.
iterations[epsilon = 1/10000] := 13.
iterations[epsilon = 1/100000] := 17.

> The a-posteriori error estimate after the 10th iteration is given by: > a_posteriori_estimate[k]:=(L_/(1-L_))*abs(x[k]-x[k-1]);

a_posteriori_estimate[k] := L_ * | x[k] - x[k-1] | / (1 - L_)

> x[k]:=x[10]; x[k-1]:=x[9];

x[k] := 0.7048120020
x[k-1] := 0.7048120019

> a_posteriori_estimate[10]:=
subs({k=10,L_=L},a_posteriori_estimate[k]);

a_posteriori_estimate[10] := 0.1000000000*10^(-9)

> The 10th iteration is identical to the MAPLE solution: x[10] = p. The necessary a-priori iterations with the above a-posteriori error = 0.1*10^(-9) are given by: > a_priori_iterations[epsilon]:=-1.44*ln(1.00*epsilon);

a_priori_iterations[epsilon] := -1.44 ln(1.00 epsilon)

> a_priori_iterations[evalf(0.1*10^(-9),2)]:=
evalf(subs(epsilon=0.1*10^(-9),%),3);

a_priori_iterations[0.10*10^(-9)] := 33.1

> The necessary a-priori iterations can alternatively be expressed as follows: > for k in [0.1,0.01,0.001,0.0001,0.000001,0.0000000001] do
a_priori_iterations[epsilon=k]:=evalf(-1.44*ln(k),2) od;

a_priori_iterations[epsilon = 0.1] := 3.2
a_priori_iterations[epsilon = 0.01] := 6.4
a_priori_iterations[epsilon = 0.001] := 9.7
a_priori_iterations[epsilon = 0.0001] := 13.
a_priori_iterations[epsilon = 0.1*10^(-5)] := 20.
a_priori_iterations[epsilon = 0.1*10^(-9)] := 32.

> Another example is concerned with the zero form f(x) = 0, where


> restart:> f(x):=1+cosh(x)*cos(x);

f(x) := 1 + cosh(x)*cos(x)

The MAPLE command "fsolve" immediately furnishes the solution: > p:=fsolve(f(x)=0);

p := 1.875104069

The fixed point form is x = g(x) with g(x) = x - h(x)*f(x). In this example we have to introduce a continuous function h(x) according to BANACH's theorem. A suitable function is given by: > h(x):=-exp(-x);

h(x) := -exp(-x)

> g(x):=x-h(x)*f(x);

g(x) := x + exp(-x)*(1 + cosh(x)*cos(x))

Thus, one can read the fixed point from the following Figure: > alias(H=Heaviside,sc=scaling,th=thickness,co=color):> p[1]:=plot({x,g(x)},x=1..3,sc=constrained,th=3,

xtickmarks=5,ytickmarks=4,co=black):> p[2]:=plot({3,3*H(x-3)},x=1..3.001,1..3,co=black,

title="Fixed Point at x = 1.8751"):> p[3]:=plot({2+H(x-1.5)-1.5*H(x-1.5)},x=1.49..1.5001,1.5..2,

linestyle=4,th=2,co=black):> p[4]:=plot({2+H(x-2)-1.5*H(x-2)},x=1.99..2.001,1.5..2,

linestyle=4,th=2,co=black):> p[5]:=plot({1.5,2},x=1.5..2,linestyle=4,th=2,co=black):> p[6]:=plot([[1,1],[3,1],[3,3],[1,3],

[1.5,1.5],[2,1.5],[2,2],[1.5,2],[1.8751,1.8751]], style=point,symbol=circle,symbolsize=30):

> plots[display](seq(p[k],k=1..6));
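The construction g(x) = x - h(x)*f(x) with h(x) = -exp(-x) can be verified directly; a Python sketch (iteration count chosen generously, not taken from the worksheet):

```python
import math

f = lambda x: 1 + math.cosh(x) * math.cos(x)
g = lambda x: x + math.exp(-x) * f(x)   # g(x) = x - h(x)*f(x), h(x) = -exp(-x)

x = 2.0
for _ in range(60):
    x = g(x)
# x approaches the fixed point p = 1.875104069, which is a zero of f.
```

Any fixed point of this g is automatically a zero of f, because exp(-x) never vanishes.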


> In this Figure we have drawn two squares, in which the fixed point x = 1.8751 is contained. Both x and y = g(x) are elements of [a, b] = [1, 3] or, alternatively, of [a, b] = [1.5, 2]. That is: the operator " g " maps an interval [a, b] into itself, valid for both squares. In the following the fixed point iteration is discussed based upon the two squares. We will see that the rate of convergence is faster when considering the smaller square. Firstly, let us discuss the iteration based upon the greater square. After that we compare these results with the rate of convergence based upon the smaller square. The LIPSCHITZ constant can be determined from the absolute derivative of the function g(x): > absolute_derivative:=abs(Diff(g(xi),xi))=abs(diff(g(x),x)):> simplify(%):> subs({sinh(x)=(exp(x)-exp(-x))/2,cosh(x)=(exp(x)+exp(-x))/2},%):> simplify(%);

| d/dxi g(xi) | = | -1 + exp(-x) + exp(-2*x)*cos(x) + (1/2)*sin(x) + (1/2)*exp(-2*x)*sin(x) |

> p[1]:=plot(rhs(%),x=1..3,th=3,co=black):> p[2]:=plot({1,H(x-3)},x=1..3.001,co=black,

title="Absolute Derivative of g(x)"): > p[3]:=plot({0.3651,0.3651*H(x-1.8751)},x=1..1.8752,

linestyle=4,co=black):> p[4]:=plot([[1.8751,0.3651]],style=point,

symbol=circle,symbolsize=30):> plots[display](seq(p[k],k=1..4));

> The greatest value of the derivative in this Figure can be considered as the LIPSCHITZ constant: > L:=evalf(subs(x=3,abs(diff(g(x),x))),4);

L := 0.8824

Assuming the starting point x[0] = 2, we arrive at the following fixed point iteration: > x[0]:=2; x[1]:=evalf(subs(x=2,g(x)));

x[0] := 2
x[1] := 1.923450867

> for i from 2 to 21 do x[i]:=evalf(subs(x=%,g(x))) od;

x[2] := 1.893171371
x[3] := 1.881762275


x[4] := 1.877544885
x[5] := 1.875997102
x[6] := 1.875430574
x[7] := 1.875223412
x[8] := 1.875147687
x[9] := 1.875120010
x[10] := 1.875109895
x[11] := 1.875106198
x[12] := 1.875104847
x[13] := 1.875104353
x[14] := 1.875104173
x[15] := 1.875104107
x[16] := 1.875104083
x[17] := 1.875104074
x[18] := 1.875104071
x[19] := 1.875104070
x[20] := 1.875104069
x[21] := 1.875104069

> alpha:=Lambda^k*abs(xi[1]-xi[0])/(1-Lambda);

alpha := Lambda^k * | xi[1] - xi[0] | / (1 - Lambda)

> alpha:=evalf(subs({k=20,Lambda=L,xi[0]=x[0],xi[1]=x[1]},%),4);

alpha := 0.05363

We see that the 20th iteration x[20] is identical to the MAPLE solution p = 1.875104069. The following steps illustrate both the a-priori and the a-posteriori error estimate. Assuming " i " iterations, the a-priori estimate is given by: > for i in [1,5,10,15,20,30,40,50] do
a_priori_estimate[i]:=evalf(L^i*abs(x[1]-x[0])/(1-L),4) od;

a_priori_estimate[1] := 0.5777
a_priori_estimate[5] := 0.3503
a_priori_estimate[10] := 0.1874
a_priori_estimate[15] := 0.1003
a_priori_estimate[20] := 0.05363
a_priori_estimate[30] := 0.01535
a_priori_estimate[40] := 0.004392
a_priori_estimate[50] := 0.001257


> The necessary number of iterations for a given error " epsilon " can be calculated by the formula derived above: > iterations[epsilon]:=
evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);

iterations[epsilon] := -7.994 ln(1.527 epsilon)

> for i in
[0.5777,0.3503,0.1003,0.05363,0.01535,0.004392,0.001257] do iterations[epsilon=i]:= evalf(subs(epsilon=i,iterations[epsilon]),2) od;

iterations[epsilon = 0.5777] := 1.1
iterations[epsilon = 0.3503] := 5.2
iterations[epsilon = 0.1003] := 15.
iterations[epsilon = 0.05363] := 20.
iterations[epsilon = 0.01535] := 30.
iterations[epsilon = 0.004392] := 40.
iterations[epsilon = 0.001257] := 50.

> The a-posteriori error estimate after the 20th iteration is given by: > x[20]:=1.875104069; x[19]:=1.875104070;

x[20] := 1.875104069
x[19] := 1.875104070

> beta:=a_posteriori_estimate[20]=L*abs(x[20]-x[19])/(1-L);

beta := (a_posteriori_estimate[20] = 0.7503401361*10^(-8))

> Inserting this value into the a-priori estimate, we find the following number of necessary iterations: > a_priori_iterations[epsilon=evalf(0.75*10^(-8),2)]:=
evalf(subs(epsilon=0.75*10^(-8),iterations[epsilon]),3);

a_priori_iterations[epsilon = 0.75*10^(-8)] := 146.

> In the above discussion we have considered a range x = [1, 3], where the LIPSCHITZ constant satisfies L in [0, 1). Besides the fixed point p = 1.875104069 the function f(x) = 1 + cosh(x)*cos(x) has the following zeros: > for i from 1 to 8 do ZERO[i-1..i]:=fsolve(f(x)=0,x,i-1..i) od;

ZERO[0..1] := fsolve(1 + cosh(x)*cos(x) = 0, x, 0..1)
ZERO[1..2] := 1.875104069
ZERO[2..3] := fsolve(1 + cosh(x)*cos(x) = 0, x, 2..3)


ZERO[3..4] := fsolve(1 + cosh(x)*cos(x) = 0, x, 3..4)
ZERO[4..5] := 4.694091133
ZERO[5..6] := fsolve(1 + cosh(x)*cos(x) = 0, x, 5..6)
ZERO[6..7] := fsolve(1 + cosh(x)*cos(x) = 0, x, 6..7)
ZERO[7..8] := 7.854757438

> p[1]:=plot(f(x),x=0..5,-20..10,th=3,co=black):> p[2]:=plot({-20,10,10*H(x-5),-20*H(x-5)},x=0..5.001,co=black,
title="f(x) := 1 + cosh(x)*cos(x)"):> plots[display](seq(p[k],k=1..2));
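The scan over unit intervals (an unevaluated fsolve call signals that no root was found there) can be mirrored with a simple sign-change bisection; a stdlib-only Python sketch, where fsolve itself of course uses more sophisticated methods:

```python
import math

f = lambda x: 1 + math.cosh(x) * math.cos(x)

def bisect(func, a, b, tol=1e-12):
    """Return a root of func in [a, b] if it changes sign there, else None."""
    fa, fb = func(a), func(b)
    if fa * fb > 0:
        return None
    while b - a > tol:
        m = (a + b) / 2
        if fa * func(m) <= 0:
            b = m
        else:
            a, fa = m, func(m)
    return (a + b) / 2

# Scan the unit intervals [0,1], ..., [7,8] exactly as the worksheet does:
roots = [r for i in range(8) if (r := bisect(f, i, i + 1)) is not None]
```

Only the intervals [1, 2], [4, 5], and [7, 8] contain a sign change, which reproduces the three zeros found by fsolve.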

> The absolute derivative of the function g(x) in the interval x = [0, 5] is illustrated in the following Figure: > p[1]:=plot(abs(diff(g(x),x)),x=0..5,0..1.5,th=3,co=black):> p[2]:=plot({1,1.5,1.5*H(x-5),1.5*H(x-3.22)},x=0..5.001,co=black,
title="Absolute Derivative of g(x)"):> plots[display](p[1],p[2]);

> x[derivative=1]:=evalf(fsolve(abs(diff(g(x),x))=1,x,-1..5),3);

x[derivative=1] := 3.22

> We see that in the above considered interval x = [1, 3] the absolute derivative of the function g(x) is smaller than one. In this range we find the fixed point p = 1.875104069. Note, instead of the interval x = [1, 3] it would be possible and more convenient to consider the range x = [1.5, 2],


for instance. Then we arrive at the following details, in contrast to the foregoing results: > restart:> f(x):=1+cosh(x)*cos(x);

f(x) := 1 + cosh(x)*cos(x)

> p:=fsolve(f(x)=0);

p := 1.875104069

> h(x):=-exp(-x);

h(x) := -exp(-x)

> g(x):=x-h(x)*f(x);

g(x) := x + exp(-x)*(1 + cosh(x)*cos(x))

> alias(H=Heaviside,sc=scaling,th=thickness,co=color):> p[1]:=plot({x,g(x)},x=1.5..2,1.5..2,

sc=constrained,th=3,co=black, title="Fixed Point at x = 1.8751"):

> p[2]:=plot({2,2*H(x-2)},x=1.5..2.001,1.5..2, xtickmarks=5,ytickmarks=4,co=black):

> p[3]:=plot(1.8751*H(x-1.8751),x=1.8750..1.8752,1.5..1.8751, co=black):

> p[4]:=plot([[1.8751,1.8751]],style=point,symbol=circle, symbolsize=30):

> plots[display](seq(p[k],k=1..4));

> The LIPSCHITZ constant can be determined from the absolute derivative | g'(x) |. > absolute_derivative:=abs(Diff(g(xi),xi))=
simplify(abs(diff(g(x),x))):> simplify(subs({sinh(x)=(exp(x)-exp(-x))/2,
cosh(x)=(exp(x)+exp(-x))/2},%));

| d/dxi g(xi) | = | -1 + exp(-x) + exp(-2*x)*cos(x) + (1/2)*sin(x) + (1/2)*exp(-2*x)*sin(x) |

> p[1]:=plot(rhs(%),x=1.5..2,0..0.5,th=3,co=black):


> p[2]:=plot({0.5,H(x-2)},x=1.5..2.001,co=black, title="Absolute Derivative | g'(x) |"):

> p[3]:=plot(0.3651*H(x-1.8751),x=1.8750..1.8752,0..0.3651, linestyle=4,co=black):

> p[4]:=plot([[1.8751,0.3651]],style=point,symbol=circle, symbolsize=30):

> p[5]:=plot(0.3651,x=1.5..1.8751,linestyle=4,co=black):> plots[display](seq(p[k],k=1..5));

> The greatest value of the derivative in this Figure can be considered as the LIPSCHITZ constant: > L:=evalf(subs(x=2,abs(diff(g(x),x))),4);

L := 0.4090

in contrast to the foregoing value L = 0.8824. Inserting the fixed point p = 1.8751 into the absolute derivative, one arrives at an optimal value: > L[opt]:=evalf(subs(x=1.8751,abs(diff(g(x),x))),4);

L[opt] := 0.3651

> However, the fixed point " p " is not yet known at this stage. Thus, the following results are based upon the value L = 0.4090. From the foregoing fixed point iteration we know the following data: > x[0]:=2; x[1]:=evalf(subs(x=2,g(x)));
x[19]:=1.875104070; x[20]:=1.875104069; L:=0.4090;

x[0] := 2
x[1] := 1.923450867
x[19] := 1.875104070
x[20] := 1.875104069
L := 0.4090

> We see that the 20th iteration x[20] is identical to the MAPLE solution found by using the MAPLE command fsolve. With the improved LIPSCHITZ constant L = 0.4090, in contrast to L = 0.8824, we calculate: > alpha:=Lambda^k*abs(xi[1]-xi[0])/(1-Lambda);


alpha := Lambda^k * | xi[1] - xi[0] | / (1 - Lambda)

> alpha:=evalf(subs({k=20,Lambda=L,xi[0]=x[0],xi[1]=x[1]},%),2);

alpha := 0.30*10^(-8)

in contrast to alpha = 0.05363 of the foregoing calculation based upon L = 0.8824. The following steps illustrate both the a-priori and the a-posteriori error estimate. Assuming " i " iterations, the a-priori estimate is given by: > for i in [1,5,10,15,20] do
a_priori_estimate[i]:=evalf(L^i*abs(x[1]-x[0])/(1-L),2) od;

a_priori_estimate[1] := 0.069
a_priori_estimate[5] := 0.0020
a_priori_estimate[10] := 0.000022
a_priori_estimate[15] := 0.27*10^(-6)
a_priori_estimate[20] := 0.30*10^(-8)

> The last value is identical to alpha. The necessary number of iterations for a tolerated error " epsilon " can be calculated by the formula derived above: > iterations[epsilon]:=
evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);

iterations[epsilon] := -1.119 ln(7.675 epsilon)

> for i in [0.069,0.002,0.000022,evalf(0.27*10^(-6),2),
evalf(0.3*10^(-8),2)] do iterations[epsilon=i]:= evalf(subs(epsilon=i,iterations[epsilon]),2) od;

iterations[epsilon = 0.069] := 0.69
iterations[epsilon = 0.002] := 4.6
iterations[epsilon = 0.000022] := 9.6
iterations[epsilon = 0.27*10^(-6)] := 14.
iterations[epsilon = 0.30*10^(-8)] := 20.

> The a-posteriori error estimate after the 20th iteration is given by: > beta:=Lambda*abs(xi[k]-xi[k-1])/(1-Lambda);

beta := Lambda * | xi[k] - xi[k-1] | / (1 - Lambda)

> beta:=
simplify(subs({k=20,Lambda=L,xi[k]=x[20],xi[k-1]=x[19]},%));

beta := 0.6920473773*10^(-9)

> beta:=evalf(%,2);

beta := 0.69*10^(-9)

> Q:=Alpha/Beta=evalf(alpha/beta,3);


Q := (Alpha/Beta = 4.35)

> Inserting the value of beta = 0.692*10^(-9) into the a-priori estimate, we find again the necessary iterations: > a_priori_iterations[epsilon=evalf(0.69*10^(-9),2)]:=
evalf(subs(epsilon=0.69*10^(-9),iterations[epsilon]),2);

a_priori_iterations[epsilon = 0.69*10^(-9)] := 21.

> Comparing the above results, based upon L = 0.8824 and L = 0.4090, we see that the rate of convergence essentially depends on the factor L^k in the formula for alpha. The smaller the value of L , the faster the convergence, which is very slow if the LIPSCHITZ constant L is close to one. The next example is concerned with the zero form f(x) = x - exp(x^2-2) = 0. > restart:> f(x):=x-exp(x^2-2); g(x):=x-f(x);

f(x) := x - exp(x^2 - 2)
g(x) := exp(x^2 - 2)

The MAPLE command "fsolve" immediately furnishes the solution: > p:=fsolve(f(x)=0);

p := 0.1379348256

The fixed point form is x = g(x) with g(x) = exp(x^2-2). Thus, one can read the fixed point from the following Figure: > alias(H=Heaviside,sc=scaling,th=thickness,co=color):> p[1]:=plot({x,g(x)},x=0.1..0.2,0.1..0.2,
sc=constrained,th=3,co=black):> p[2]:=plot(0.1379*H(x-0.1379),x=0.137..0.138,co=black):> p[3]:=plot({0.2,0.2*H(x-0.2)},x=0.1..0.2001,co=black,
title="Fixed Point at x = 0.1379"):> p[4]:=plot([[0.1379,0.1379]],style=point,symbol=circle,
symbolsize=30):> plots[display](seq(p[k],k=1..4));


> The fixed point iteration is given as follows: > x[0]:=0.12; x[1]:=evalf(subs(x=0.12,g(x)));

x[0] := 0.12
x[1] := 0.1372982105

> for i from 2 to 7 do x[i]:=evalf(subs(x=%,g(x))) od;

x[2] := 0.1379106591
x[3] := 0.1379339061
x[4] := 0.1379347905
x[5] := 0.1379348242
x[6] := 0.1379348256
x[7] := 0.1379348256

We see that the 6th iteration x[6] is identical to the above MAPLE solution. The following steps are concerned with both the a-priori and the a-posteriori error estimate. At first we determine the LIPSCHITZ constant from the absolute derivative | g'(x) |. > absolute_derivative:=abs(Diff(g(xi),xi))=diff(g(x),x);

absolute_derivative := ( | d/dxi g(xi) | = 2*x*exp(x^2 - 2) )
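The very fast convergence of this example can be cross-checked in Python (an independent sketch; the iteration count is our own choice):

```python
import math

g = lambda x: math.exp(x * x - 2)
dg = lambda x: 2 * x * math.exp(x * x - 2)   # g'(x) = 2x*exp(x^2 - 2)

x = 0.12
for _ in range(8):
    x = g(x)
# convergence is very fast because |g'| stays below about 0.056 on [0.1, 0.2]
```

Eight iterations already reproduce the fsolve result to full display accuracy.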

> p[1]:=plot(diff(g(x),x),x=0.1..0.2,0..0.06,th=4,co=black):> p[2]:=plot({0.06,0.06*H(x-0.2)},x=0.1..0.2001,co=black,

title="Absolute Derivative | g'(x) |"):> p[3]:=plot([[0.1379,0.038]],style=point,symbol=circle,

symbolsize=30):> p[4]:=plot(0.038,x=0.1..0.1379,linestyle=4,co=black):> p[5]:=plot(0.038*H(x-0.1379),x=0.137..0.138,linestyle=4,

co=black):> plots[display](seq(p[k],k=1..5));


The greatest value of the absolute derivative in the neighbourhood of the expected fixed point, here in the interval x = [0.1, 0.2], may be taken as the LIPSCHITZ constant: > L:=evalf(subs(x=0.2,diff(g(x),x)),2);

L := 0.056

Assuming " i " iterations, the a-priori error estimate is given by: > for i from 1 to 6 do a_priori_estimate[i]:=
evalf((L^i/(1-L))*abs(x[1]-x[0]),2) od;

a_priori_estimate[1] := 0.0012
a_priori_estimate[2] := 0.000066
a_priori_estimate[3] := 0.38*10^(-5)
a_priori_estimate[4] := 0.20*10^(-6)
a_priori_estimate[5] := 0.12*10^(-7)
a_priori_estimate[6] := 0.66*10^(-9)

> The necessary number of iterations for a given error "epsilon" can be calculated by the formula derived above: > iterations[epsilon]>=
ln((1-Lambda)*epsilon/abs(xi[1]-xi[0]))/ln(Lambda);

iterations[epsilon] >= ln( (1 - Lambda) * epsilon / | xi[1] - xi[0] | ) / ln(Lambda)

> iterations[epsilon]:= evalf(subs({Lambda=L,xi[1]=x[1],xi[0]=x[0]},lhs(%)),4);

iterations[epsilon] := -0.3470 ln(54.56 epsilon)

> for i in [0.0012,0.000066,evalf(0.38*10^(-5),2),
evalf(0.20*10^(-6),2),evalf(0.12*10^(-7),2), evalf(0.66*10^(-9),2)] do iterations[epsilon=i]:= evalf(subs(epsilon=i,-0.347*ln(54.56*epsilon)),2) od;

iterations[epsilon = 0.0012] := 0.94
iterations[epsilon = 0.000066] := 2.0
iterations[epsilon = 0.38*10^(-5)] := 3.0
iterations[epsilon = 0.20*10^(-6)] := 3.8
iterations[epsilon = 0.12*10^(-7)] := 4.9
iterations[epsilon = 0.66*10^(-9)] := 6.0

> The a-posteriori error estimate after the 6th iteration is given by: > x[5]:=0.1379348242; x[6]:=0.1379348256; L:=0.056;

x[5] := 0.1379348242
x[6] := 0.1379348256
L := 0.056

> beta:=a_posteriori_estimate[6]=L*abs(x[6]-x[5])/(1-L);

beta := (a_posteriori_estimate[6] = 0.8305084746*10^(-10))

> alpha:=Lambda^k*abs(xi[1]-xi[0])/(1-Lambda);

alpha := Lambda^k * | xi[1] - xi[0] | / (1 - Lambda)

> x[0]:=0.12; x[1]:=0.1372982105; L:=0.056;

x[0] := 0.12
x[1] := 0.1372982105
L := 0.056

> alpha:=a_priori_error_estimate[6]=L^6*abs(x[1]-x[0])/(1-L);

alpha := (a_priori_error_estimate[6] = 0.5651416893*10^(-9))

> Q:=Alpha/Beta=evalf(0.565142*10^(-9)/(0.83051*10^(-10)),4);

Q := (Alpha/Beta = 6.805)

> Inserting the value of beta = 0.8305*10^(-10) into the a-priori estimate, we find again the necessary iterations: > a_priori_iterations[epsilon=evalf(0.805*10^(-10),2)]:=
evalf(subs(epsilon=0.8305*10^(-10),-0.347*ln(54.56*epsilon)),1);

a_priori_iterations[epsilon = 0.80*10^(-10)] := 6.

> In the last example the convergence is very fast, since the LIPSCHITZ constant L = 0.056 is very small.

LEGENDRE Polynomials

Evaluating integrals numerically, the GAUSS - LEGENDRE quadrature is very convenient. In order to apply this method one needs the so-called GAUSS points, which are identical to the zeros of the LEGENDRE polynomials [BETTEN, J.: Finite Elemente für Ingenieure 2,


zweite Auflage, Springer-Verlag, Berlin / Heidelberg / New York 2004]. Using the MAPLE package orthopoly we immediately find the LEGENDRE polynomials: > restart:> with(orthopoly):> Legendre[n]:=P(n,x);

Legendre[n] := P(n,x)

> for i in [0,1,4,5,8] do Legendre[i]:=P(i,x) od;

Legendre[0] := 1
Legendre[1] := x
Legendre[4] := 3/8 - (15/4)*x^2 + (35/8)*x^4
Legendre[5] := (63/8)*x^5 - (35/4)*x^3 + (15/8)*x
Legendre[8] := 35/128 - (315/32)*x^2 + (3465/64)*x^4 - (3003/32)*x^6 + (6435/128)*x^8

> The zeros of these polynomials in the interval (0, 1) are given as follows: > for i in [1,4,5,8] do ZERO[i][0..1]:=
fsolve(P(i,x)=0,x,0..1) od;

ZERO[1][0..1] := 0.
ZERO[4][0..1] := 0.3399810436, 0.8611363116
ZERO[5][0..1] := 0., 0.5384693101, 0.9061798459
ZERO[8][0..1] := 0.1834346425, 0.5255324099, 0.7966664774, 0.9602898565

> The roots of the LEGENDRE polynomials lie in the interval (-1, 1) and are symmetric with respect to the origin, as illustrated in the following Figure: > alias(H=Heaviside,sc=scaling,th=thickness,co=color):> p[1]:=plot({P(1,x),P(3,x),P(4,x),P(5,x)},x=-1..1,-1..1,
sc=constrained,th=3,co=black,xtickmarks=4,ytickmarks=4, title="LEGENDRE Polynomials P(n,x)"):
> p[2]:=plot({-1,1,H(x-1),-H(x-1)},x=0..1.001,co=black):> p[3]:=plot({H(x+1),-H(x+1)},x=-1.001..0,co=black):> plots[display](seq(p[k],k=1..3));
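The same polynomials can be generated from BONNET's recursion (n+1) P[n+1] = (2n+1) x P[n] - n P[n-1]; a Python sketch evaluating P(n, x) pointwise (not the orthopoly implementation, just a spot check of its output):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P(n, x) via Bonnet's recursion."""
    p_prev, p = 1.0, x                     # P(0, x) and P(1, x)
    if n == 0:
        return p_prev
    for k in range(1, n):
        # (k+1) P(k+1) = (2k+1) x P(k) - k P(k-1)
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Spot check against the closed form of P(4, x) listed above:
val = legendre(4, 0.3)
ref = 3 / 8 - (15 / 4) * 0.3**2 + (35 / 8) * 0.3**4
```

Evaluating `legendre(4, 0.3399810436)` returns a value near zero, consistent with the fsolve root above.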


> Instead of using the MAPLE command fsolve in finding the zeros of the LEGENDRE polynomials we are going to use the fixed point iteration in the following. In the first example we consider the polynomial P(4,x): > restart:> with(orthopoly):> Legendre[P][4]:=P(4,x);

Legendre[P][4] := 3/8 - (15/4)*x^2 + (35/8)*x^4

> f(x):=8*%;

f(x) := 3 + 35*x^4 - 30*x^2

> ZERO[0..1]:=fsolve(f(x)=0,x,0..1);

ZERO[0..1] := 0.3399810436, 0.8611363116

> From the polynomial f(x) we read the following fixed point forms: > g[1](x):=x+f(x);

g[1](x) := x + 3 + 35*x^4 - 30*x^2

> g[2](x):=sqrt(1/10+7*x^4/6);

g[2](x) := sqrt(90 + 1050*x^4)/30

> g[4](x):=(6*x^2/7-3/35)^(1/4);

g[4](x) := (6*x^2/7 - 3/35)^(1/4)

> One can show that only g[2] is compatible with BANACH's fixed-point theorem. > g(x):=g[2](x);

g(x) := sqrt(90 + 1050*x^4)/30

> alias(H=Heaviside,sc=scaling,th=thickness,co=color):> p[1]:=plot({x,g(x)},x=0.3..0.4,0.3..0.4,sc=constrained,


xtickmarks=4,ytickmarks=4,th=3,co=black, title="Fixed Point at x = 0.34"):

> p[2]:=plot({0.4,0.4*H(x-0.4)},x=0.3..0.4001,co=black):> p[3]:=plot(0.34*H(x-0.34),x=0.339..0.3401,co=black):> p[4]:=plot([[0.34,0.34]],style=point,symbol=circle,

symbolsize=30):> plots[display](seq(p[k],k=1..4));
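The claim that only g[2] is compatible with BANACH's theorem can be tested numerically by estimating |g'| at the fixed point with a central difference (a Python sketch; the step size h is an arbitrary choice):

```python
def num_deriv(g, x, h=1e-6):
    """Central-difference estimate of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

p = 0.3399810436                               # zero of 3 + 35x^4 - 30x^2
g1 = lambda x: x + 3 + 35 * x**4 - 30 * x**2
g2 = lambda x: (1 / 10 + 7 * x**4 / 6) ** 0.5
g4 = lambda x: (6 * x**2 / 7 - 3 / 35) ** 0.25

rates = {name: abs(num_deriv(g, p)) for name, g in
         [("g[1]", g1), ("g[2]", g2), ("g[4]", g4)]}
# only g[2] has |g'(p)| < 1, i.e. only g[2] is a contraction near p
```

One finds |g[1]'(p)| and |g[4]'(p)| well above one, while |g[2]'(p)| is roughly 0.27, so only g[2] contracts.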

> This Figure shows that the operator " g " maps the interval x in [0.3, 0.4] into itself: the graph y = g(x) is contained in the square Q = { (x, y) | x in [0.3, 0.4], y in [0.3, 0.4] }. > abs(Diff(g(xi),xi))=abs(diff(g(x),x));

| d/dxi g(xi) | = 70*x^3 / sqrt(90 + 1050*x^4)

> p[1]:=plot(rhs(%),x=0.3..0.4,th=3,co=black, xtickmarks=3,ytickmarks=3, title="Absolute Derivative | g'(x) |"):

> p[2]:=plot({0.6,0.6*H(x-0.4)},x=0.3..0.4001,0..0.6,co=black):> p[3]:=plot([[0.34,subs(x=0.34,abs(diff(g(x),x)))]],

style=point,symbol=circle,symbolsize=30):> p[4]:=plot((subs(x=0.34,abs(diff(g(x),x)))*H(x-0.34),

x=0.3..0.34001,co=black)):> plots[display](seq(p[k],k=1..4));

> We see that in the neighbourhood of the expected fixed point x = p the absolute derivative is less than one. The LIPSCHITZ constant can be taken as the greatest value of | g'(x) | in (p-delta, p+delta):

> L:=(p-delta,p+delta)[max];

L := (p - delta, p + delta)[max]

> L:=evalf(subs(x=0.4,abs(diff(g(x),x))),2);

L := 0.41

> With the starting point x[0] = 0.3 we get the following fixed point iteration: > x[0]:=0.3; x[1]:=subs(x=0.3,g(x));

x[0] := 0.3
x[1] := 0.3308322838

> for i from 2 to 15 do x[i]:=subs(x=%,g(x)) od;

x[2] := 0.3376030997
x[3] := 0.3393458083
x[4] := 0.3398101553
x[5] := 0.3399349860
x[6] := 0.3399686240
x[7] := 0.3399776943
x[8] := 0.3399801403
x[9] := 0.3399808000
x[10] := 0.3399809780
x[11] := 0.3399810260
x[12] := 0.3399810390
x[13] := 0.3399810423
x[14] := 0.3399810433
x[15] := 0.3399810437

> The 15th iteration x[15] is "identical" to the MAPLE solution based upon the command fsolve. Assuming " i " iterations, the a-priori error estimate is given by: > for i from 3 by 4 to 15 do a_priori_estimate[i]:=
evalf((L^i/(1-L))*abs(x[1]-x[0]),2) od;

a_priori_estimate[3] := 0.0036
a_priori_estimate[7] := 0.000096
a_priori_estimate[11] := 0.28*10^(-5)
a_priori_estimate[15] := 0.81*10^(-7)

> The necessary number of iterations for a given error " epsilon " can be calculated by the formula derived above: > iterations[epsilon]:=


evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);

iterations[epsilon] := -1.122 ln(19.16 epsilon)

> for i in [0.0036,0.000096,evalf(0.28*10^(-5),2),
  evalf(0.81*10^(-7),2)] do iterations[epsilon=i]:=
  evalf(subs(epsilon=i,iterations[epsilon]),2) od;

iterations[epsilon = 0.0036] := 3.0

iterations[epsilon = 0.000096] := 6.9

iterations[epsilon = 0.28*10^(-5)] := 11.

iterations[epsilon = 0.81*10^(-7)] := 14.
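The iteration-count formula can be sketched in Python as well (solving L^n/(1-L)*|x1-x0| <= epsilon for n; `math.log` is the natural logarithm, and the numerical values are again the worksheet's):

```python
import math

# Smallest n such that the a-priori bound L^n/(1-L)*|x1-x0| <= eps:
#   n >= ln((1-L)*eps/|x1-x0|) / ln(L)
L = 0.41
x0, x1 = 0.3, 0.3308322838

def required_iterations(eps, L, x0, x1):
    """Iterations needed so the a-priori bound drops below eps."""
    return math.log((1.0 - L) * eps / abs(x1 - x0)) / math.log(L)

print(required_iterations(0.0036, L, x0, x1))
```

For epsilon = 0.0036 the formula returns approximately 3.0, matching the first entry of the table above.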

> beta:=a_posteriori_estimate[15]=L*abs(x[15]-x[14])/(1-L);

beta := a_posteriori_estimate[15] = 0.2779661017*10^(-9)

> beta:=evalf(%,2);

beta := a_posteriori_estimate[15] = 0.28*10^(-9)

> alpha:=a_priori_estimate[15];

alpha := 0.81*10^(-7)

> Q:=alpha/beta=evalf(0.81*10^(-7)/(0.28*10^(-9)),4);

Q := alpha/beta = 289.3

> Inserting the value of beta = 0.28*10^(-9) into the a-priori estimate, we find that i = 21 a-priori iterations are necessary instead of the 15 iterations associated with alpha = 0.81*10^(-7):
> a_priori_iterations[epsilon=evalf(0.28*10^(-9),2)]:=
  evalf(subs(epsilon=0.28*10^(-9),iterations[epsilon]),2);

a_priori_iterations[epsilon = 0.28*10^(-9)] := 21.

> The next example is concerned with finding the zero of the LEGENDRE polynomial P(5, x) in the interval x = [0.5, 0.6]. The results are briefly listed in the following:
> restart:
> with(orthopoly):
> Legendre[p][5]:=P(5,x);

Legendre[p][5] := 63/8 x^5 - 35/4 x^3 + 15/8 x

> f(x):=8*%;

f(x) := 63 x^5 - 70 x^3 + 15 x

> ZERO[0.5..0.6]:=evalf(fsolve(f(x)=0,x,0.5..0.6),8);

ZERO[0.5..0.6] := 0.53846931

> From the polynomial f(x) we read off the following fixed point forms:
> g[0](x):=x+f(x);

g[0](x) := 16 x + 63 x^5 - 70 x^3
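The fsolve result can be cross-checked without Maple. A short Python sketch bisects f(x) = 63 x^5 - 70 x^3 + 15 x on [0.5, 0.6], where f changes sign exactly once:

```python
# Bisection for the zero of f(x) = 63 x^5 - 70 x^3 + 15 x  (= 8*P(5,x))
# on [0.5, 0.6]; f(0.5) > 0 and f(0.6) < 0, so one sign change exists.
def f(x):
    return 63 * x**5 - 70 * x**3 + 15 * x

def bisect(a, b, tol=1e-10):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # sign change in [a, m]
            b = m
        else:                # sign change in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(0.5, 0.6)
print(root)
```

The result agrees with the 8-digit fsolve value 0.53846931.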


> g[1](x):=(70*x^3-63*x^5)/15;

g[1](x) := 14/3 x^3 - 21/5 x^5

> g[3](x):=((15*x+63*x^5)/70)^(1/3);

g[3](x) := (3/14 x + 9/10 x^5)^(1/3)

> g[5](x):=((70*x^3-15*x)/63)^(1/5);

g[5](x) := (10/9 x^3 - 5/21 x)^(1/5)

> One can show that only g[3] is compatible with BANACH's fixed-point theorem, since g[3] maps the interval x = [0.5, 0.6] to itself and the absolute derivative | g'[3] | is smaller than one.
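Both conditions can be checked numerically. A Python sketch samples the interval, tests the self-mapping property of g[3], and bounds | g'[3] | by a central finite difference (the sampling grid and step size h are illustrative choices, not part of the worksheet):

```python
# g3 is the worksheet's fixed point form g[3](x) = ((15x + 63x^5)/70)^(1/3).
def g3(x):
    return ((15 * x + 63 * x**5) / 70) ** (1 / 3)

def dg3(x, h=1e-6):
    """|g3'(x)| approximated by a central finite difference."""
    return abs(g3(x + h) - g3(x - h)) / (2 * h)

xs = [0.5 + 0.001 * k for k in range(101)]        # sample of [0.5, 0.6]
maps_into = all(0.5 <= g3(x) <= 0.6 for x in xs)  # g3([0.5,0.6]) in [0.5,0.6]
contracts = max(dg3(x) for x in xs) < 1.0         # |g3'| < 1 on the interval
print(maps_into, contracts)
```

Both flags come out True, confirming that BANACH's theorem applies to g[3] on [0.5, 0.6].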

> g(x):=g[3](x);

g(x) := (3/14 x + 9/10 x^5)^(1/3)

> alias(H=Heaviside,sc=scaling,th=thickness,co=color):
> p[1]:=plot({x,g(x)},x=0.5..0.6,0.5..0.6,sc=constrained,
  xtickmarks=4,ytickmarks=4,th=3,co=black, title="Fixed Point at x = 0.538"):
> p[2]:=plot({0.6,0.6*H(x-0.6)},x=0.5..0.6001,co=black):
> p[3]:=plot(0.538*H(x-0.538),x=0.537..0.539,co=black):
> p[4]:=plot([[0.538,0.538]],style=point,symbol=circle,
  symbolsize=30):
> plots[display](seq(p[k],k=1..4));

> In this Figure the graph y = g(x) is contained in the square Q = { (x, y) | x in [0.5, 0.6], y in [0.5, 0.6] }. The operator " g " maps the interval x = [0.5, 0.6] into itself.
> abs(Diff(g(xi),xi))=abs(diff(g(x),x));


| d/dxi g(xi) | = 1/3 | 3/14 + 9/2 x^4 | / (3/14 x + 9/10 x^5)^(2/3)
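The closed-form derivative of g(x) = ((15x + 63x^5)/70)^(1/3) can be cross-checked numerically. A small Python sketch compares it with a central finite difference at an interior sample point (the point x = 0.55 and step h are illustrative choices):

```python
# g3 and its closed-form derivative from the worksheet:
#   g3'(x) = (1/3) (3/14 + 9/2 x^4) / (3/14 x + 9/10 x^5)^(2/3)
def g3(x):
    return ((15 * x + 63 * x**5) / 70) ** (1 / 3)

def dg3_closed(x):
    return (3 / 14 + 4.5 * x**4) / (3 * (3 / 14 * x + 0.9 * x**5) ** (2 / 3))

def dg3_fd(x, h=1e-6):
    """Central finite difference approximation of g3'(x)."""
    return (g3(x + h) - g3(x - h)) / (2 * h)

x = 0.55
print(dg3_closed(x), dg3_fd(x))
```

The two values agree to about six decimal places, and the closed form stays below one on the whole interval [0.5, 0.6].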

> p[1]:=plot(rhs(%),x=0.5..0.6,0.6..0.8,th=3,co=black,
  xtickmarks=3,ytickmarks=3, title="Absolute Derivative | g'(x) |"):
> p[2]:=plot({0.8,0.8*H(x-0.6)},x=0.5..0.6001,co=black):
> p[3]:=plot(0.681*H(x-0.538),x=0.537..0.539,co=black):
> p[4]:=plot([[0.538,0.681]],style=point,symbol=circle,
  symbolsize=30):
> plots[display](seq(p[k],k=1..4));

> We see that in the neighbourhood of the expected fixed point x = p the absolute derivative is less than one. The LIPSCHITZ constant can be taken to be the greatest value of | g'(x) | in the range (p-delta, p+delta):
> L:=(p-delta, p+delta)[max];

L := (p - delta, p + delta)[max]

> L:=evalf(subs(x=0.6,abs(diff(g(x),x))),2);

L := 0.76

> With the starting point x[0] = 0.5 we obtain the following fixed point iteration:
> x[0]:=0.5; x[1]:=evalf(subs(x=0.5,g(x)),8);

x[0] := 0.5

x[1] := 0.51333184

> for i from 2 to 25 do x[i]:=evalf(subs(x=%,g(x)),8) od;

x[2] := 0.52180783
x[3] := 0.52732399
x[4] := 0.53096887
x[5] := 0.53340154
x[6] := 0.53503604


x[7] := 0.53613917
x[8] := 0.53688593
x[9] := 0.53739248
x[10] := 0.53773657
x[11] := 0.53797051
x[12] := 0.53812967
x[13] := 0.53823800
x[14] := 0.53831176
x[15] := 0.53836199
x[16] := 0.53839620
x[17] := 0.53841950
x[18] := 0.53843538
x[19] := 0.53844620
x[20] := 0.53845357
x[21] := 0.53845859
x[22] := 0.53846200
x[23] := 0.53846433
x[24] := 0.53846591
x[25] := 0.53846699

> The 25th iterate x[25] is "identical" to the MAPLE solution based on the command fsolve. Assuming " i " iterations, the a-priori error estimate is given by:
> for i from 5 by 5 to 25 do
  a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),4) od;

a_priori_estimate[5] := 0.01406

a_priori_estimate[10] := 0.003563

a_priori_estimate[15] := 0.0009033

a_priori_estimate[20] := 0.0002290

a_priori_estimate[25] := 0.00005808

> The necessary number of iterations for an allowable error " epsilon " can be calculated by the formula on page 2:
> iterations[epsilon]:=
  evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),5);

iterations[epsilon] := -3.6438 ln(18.005 epsilon)


> for i in [0.014,0.003563,0.0009033,0.000229,0.0000581] do
  iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;

iterations[epsilon = 0.014] := 5.0

iterations[epsilon = 0.003563] := 9.7

iterations[epsilon = 0.0009033] := 15.

iterations[epsilon = 0.000229] := 20.

iterations[epsilon = 0.0000581] := 25.

> beta:=a_posteriori_estimate[25]=L*abs(x[25]-x[24])/(1-L);

beta := a_posteriori_estimate[25] = 0.3420000000*10^(-5)

> beta:=evalf(%,3);

beta := a_posteriori_estimate[25] = 0.342*10^(-5)

> alpha:=a_priori_estimate[25];

alpha := 0.00005808

> Q:=Alpha/Beta=evalf(alpha/rhs(beta),4);

Q := Alpha/Beta = 16.98

> Inserting the value of beta = 0.342*10^(-5) into the a-priori estimate, we find that i = 35 a-priori iterations are necessary instead of the 25 iterations associated with alpha = 0.5808*10^(-4):
> a_priori_iterations[epsilon=evalf(0.342*10^(-5),3)]:=
  evalf(subs(epsilon=0.342*10^(-5),iterations[epsilon]),2);

a_priori_iterations[epsilon = 0.342*10^(-5)] := 35.
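The whole second example can be replayed outside Maple. A Python sketch iterates g[3] from x[0] = 0.5 and recomputes both error bounds (the Lipschitz constant L = 0.76 is taken from the worksheet; Python works in full double precision rather than 8-digit arithmetic, so the last digits may differ slightly):

```python
import math

def g3(x):
    return ((15 * x + 63 * x**5) / 70) ** (1 / 3)

L = 0.76                 # Lipschitz constant from the worksheet
x = [0.5]
for _ in range(25):      # fixed point iteration x[n+1] = g3(x[n])
    x.append(g3(x[-1]))

# a-posteriori bound: L*|x25 - x24|/(1 - L); a-priori: L^25/(1-L)*|x1 - x0|
a_posteriori = L * abs(x[25] - x[24]) / (1 - L)
a_priori = L**25 / (1 - L) * abs(x[1] - x[0])

# iterations needed for the a-priori bound to reach the a-posteriori value
n_needed = math.log((1 - L) * a_posteriori / abs(x[1] - x[0])) / math.log(L)
print(x[25], a_posteriori, a_priori, n_needed)
```

The iterate x[25], the a-posteriori bound 0.342*10^(-5), the a-priori bound 0.5808*10^(-4), and the roughly 35 required a-priori iterations all match the worksheet values.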

> In the same way as before one can find the zeros of the CHEBYSHEV polynomials T(n, x), which are also orthogonal polynomials contained in the MAPLE package orthopoly. Applications of the fixed point iteration to systems of nonlinear equations have been discussed in more detail, for instance, by BETTEN, J.: Finite Elemente für Ingenieure 2, zweite Auflage, Springer-Verlag, Berlin / Heidelberg / New York 2004. Instead of the fixed point iteration one can utilize the NEWTON or NEWTON-RAPHSON iteration for solving root-finding problems. NEWTON's method is one of the best-known and most powerful numerical methods; its order of convergence is at least two, whereas the fixed point iteration is only linearly convergent. In general, a sequence with a high order of convergence converges more rapidly than a sequence with a lower order. Some examples have been discussed by BETTEN, J.: "NEWTON's Method in Comparison with the Fixed Point Iteration", published as a worksheet in the MAPLE Application Center.
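As an illustration of that remark on convergence order (a Python sketch, not part of the worksheet), NEWTON's method applied to the same polynomial f(x) = 63 x^5 - 70 x^3 + 15 x reaches the root in a handful of steps, whereas the linearly convergent fixed point iteration above needed 25:

```python
# NEWTON iteration x[n+1] = x[n] - f(x[n])/f'(x[n]) for
# f(x) = 63 x^5 - 70 x^3 + 15 x, started at x = 0.5 like the
# fixed point iteration.
def f(x):
    return 63 * x**5 - 70 * x**3 + 15 * x

def df(x):
    return 315 * x**4 - 210 * x**2 + 15

x = 0.5
for n in range(6):
    x = x - f(x) / df(x)
print(x)
```

After six steps the iterate agrees with the fsolve value 0.53846931 to all displayed digits, reflecting the quadratic convergence of NEWTON's method.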
