Ch 7. Iterative Techniques in Matrix Algebra


Jeong-hwan Gwak


Contents

7.1 Norms of Vectors and Matrices
7.2 Eigenvalues and Eigenvectors
7.3 Iterative Techniques for Solving Linear Systems
7.4 Error Bounds and Iterative Refinement
7.5 The Conjugate Gradient Method


Jacobi Method
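The Jacobi method solves $A\mathbf{x} = \mathbf{b}$ by splitting $A = D + L + U$ into its diagonal, strictly lower, and strictly upper parts and iterating $\mathbf{x}^{(k)} = D^{-1}\big(\mathbf{b} - (L+U)\mathbf{x}^{(k-1)}\big)$. A minimal NumPy sketch of this standard iteration (the function name, tolerance, and stopping rule are illustrative choices, not from the slides):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=1000):
    """Jacobi iteration x^(k) = D^{-1} (b - (L+U) x^(k-1))."""
    D = np.diag(A)                    # diagonal of A as a vector
    R = A - np.diagflat(D)            # off-diagonal part L + U
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D       # componentwise solve with D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

The iteration converges for any starting vector when $A$ is strictly diagonally dominant.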


Gauss-Seidel Method
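Gauss-Seidel differs from Jacobi in using each updated component $x_i^{(k)}$ as soon as it is computed. A minimal sketch under the same illustrative conventions as above:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=1000):
    """x_i^(k) = (b_i - sum_{j<i} a_ij x_j^(k) - sum_{j>i} a_ij x_j^(k-1)) / a_ii."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter
```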


Convergence of the general iterative technique
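A general one-step iteration has the form $\mathbf{x}^{(k)} = T\mathbf{x}^{(k-1)} + \mathbf{c}$; it converges to the unique solution for every starting vector $\mathbf{x}^{(0)}$ if and only if the spectral radius satisfies $\rho(T) < 1$. A small check of this criterion (the matrix values are illustrative):

```python
import numpy as np

def spectral_radius(T):
    """rho(T) = max |eigenvalue|; x^(k) = T x^(k-1) + c converges iff rho(T) < 1."""
    return max(abs(np.linalg.eigvals(T)))

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
D = np.diagflat(np.diag(A))
T_jacobi = np.eye(2) - np.linalg.solve(D, A)   # Jacobi iteration matrix T = I - D^{-1} A
print(spectral_radius(T_jacobi))               # about 0.316 < 1, so Jacobi converges here
```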


SOR Method

If $\omega = 1$, the SOR method reduces to the Gauss-Seidel method.
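SOR (successive over-relaxation) forms each component as a weighted average of its previous value and the Gauss-Seidel update, with relaxation parameter $\omega$. A minimal sketch (same illustrative conventions as above); with `w=1` it reproduces Gauss-Seidel exactly:

```python
import numpy as np

def sor(A, b, x0, w, tol=1e-8, max_iter=1000):
    """x_i^(k) = (1 - w) x_i^(k-1) + w * (Gauss-Seidel value); w = 1 is Gauss-Seidel."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - w) * x_old[i] + w * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter
```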


How is the appropriate value of $\omega$ chosen?
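For positive definite tridiagonal matrices, the classical answer is $\omega = \dfrac{2}{1 + \sqrt{1 - \rho(T_j)^2}}$, where $T_j = I - D^{-1}A$ is the Jacobi iteration matrix. A direct transcription of that formula (the helper name is illustrative):

```python
import numpy as np

def optimal_omega(A):
    """omega = 2 / (1 + sqrt(1 - rho(T_j)^2)) for positive definite tridiagonal A."""
    D = np.diagflat(np.diag(A))
    T_j = np.eye(len(A)) - np.linalg.solve(D, A)   # Jacobi iteration matrix
    rho = max(abs(np.linalg.eigvals(T_j)))
    return 2.0 / (1.0 + np.sqrt(1.0 - rho ** 2))
```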


The Conjugate Gradient Method

• Used to solve an $n \times n$ positive definite linear system $A\mathbf{x} = \mathbf{b}$.

• As a direct method (run for the full $n$ steps), it is more computationally expensive than Gaussian elimination.

• Very useful when employed as an iterative approximation method for solving large sparse systems whose nonzero entries occur in predictable patterns.

• When the matrix has been preconditioned to make the calculations more effective, good results are obtained in only about $\sqrt{n}$ steps.


▪ Assumption: the matrix $A$ is positive definite, i.e. $\langle \mathbf{x}, A\mathbf{x} \rangle = \mathbf{x}^t A \mathbf{x} > 0$ unless $\mathbf{x} = \mathbf{0}$.

▪ Notation (inner product): $\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^t \mathbf{y}$, where $\mathbf{x}$ and $\mathbf{y}$ are $n$-dimensional vectors.

▪ Some properties of the inner product. For any vectors $\mathbf{x}$, $\mathbf{y}$, $\mathbf{z}$ and any real number $\alpha$:
(i) $\langle \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{y}, \mathbf{x} \rangle$
(ii) $\langle \alpha\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, \alpha\mathbf{y} \rangle = \alpha\langle \mathbf{x}, \mathbf{y} \rangle$
(iii) $\langle \mathbf{x}+\mathbf{z}, \mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle + \langle \mathbf{z}, \mathbf{y} \rangle$
(iv) $\langle \mathbf{x}, \mathbf{x} \rangle \ge 0$
(v) $\langle \mathbf{x}, \mathbf{x} \rangle = 0 \Leftrightarrow \mathbf{x} = \mathbf{0}$
(vi) $\langle \mathbf{x}, A\mathbf{y} \rangle = \langle A\mathbf{x}, \mathbf{y} \rangle$ when $A$ is positive definite


Theorem

(i) The vector $\mathbf{x}^*$ is a solution to the positive definite linear system $A\mathbf{x} = \mathbf{b}$ if and only if $\mathbf{x}^*$ minimizes
$$g(\mathbf{x}) = \langle \mathbf{x}, A\mathbf{x} \rangle - 2\langle \mathbf{x}, \mathbf{b} \rangle.$$

(ii) For any $\mathbf{x}$ and any $\mathbf{v} \ne \mathbf{0}$, $g(\mathbf{x} + t\mathbf{v})$ has its minimum when
$$t = \frac{\langle \mathbf{v}, \mathbf{b} - A\mathbf{x} \rangle}{\langle \mathbf{v}, A\mathbf{v} \rangle} = \frac{\langle \mathbf{v}, \mathbf{r} \rangle}{\langle \mathbf{v}, A\mathbf{v} \rangle}, \qquad \text{where } \mathbf{r} = \mathbf{b} - A\mathbf{x}.$$


· $\mathbf{x}^{(0)}$ is an initial approximation to $\mathbf{x}^*$; $\mathbf{v}^{(1)} \ne \mathbf{0}$ is an initial search direction.

· For $k = 1, 2, 3, \ldots$, compute
$$t_k = \frac{\langle \mathbf{v}^{(k)}, \mathbf{b} - A\mathbf{x}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}, \qquad \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)},$$
and choose a new search direction $\mathbf{v}^{(k+1)}$.


· Choice of the search directions (method of steepest descent). For $\mathbf{x} = (x_1, x_2, \ldots, x_n)^t$,
$$g(\mathbf{x}) = g(x_1, x_2, \ldots, x_n) = \langle \mathbf{x}, A\mathbf{x} \rangle - 2\langle \mathbf{x}, \mathbf{b} \rangle = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j - 2\sum_{i=1}^{n} x_i b_i$$
$$\Rightarrow \frac{\partial g}{\partial x_k}(\mathbf{x}) = 2\sum_{i=1}^{n} a_{ki} x_i - 2 b_k$$
$$\Rightarrow \nabla g(\mathbf{x}) = \left( \frac{\partial g}{\partial x_1}(\mathbf{x}), \frac{\partial g}{\partial x_2}(\mathbf{x}), \ldots, \frac{\partial g}{\partial x_n}(\mathbf{x}) \right)^t = 2(A\mathbf{x} - \mathbf{b}) = -2\mathbf{r}$$
$$\Rightarrow \mathbf{v}^{(k+1)} = \mathbf{r}^{(k)} = \mathbf{b} - A\mathbf{x}^{(k)}$$

The direction of greatest decrease in the value of $g(\mathbf{x})$ is the direction given by $-\nabla g(\mathbf{x})$ (i.e., the direction of the residual $\mathbf{r}$).
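A minimal sketch of steepest descent built from these two formulas, taking $\mathbf{v} = \mathbf{r}$ at every step (names and stopping rule are illustrative); it converges for positive definite $A$ but often slowly, which motivates the conjugate directions below:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=10000):
    """Move along the residual r = b - A x with step t = <r, r> / <r, A r>."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        r = b - A @ x                  # search direction v = r
        if np.linalg.norm(r) < tol:
            return x, k
        t = (r @ r) / (r @ (A @ r))    # optimal step from the theorem above
        x = x + t * r
    return x, max_iter
```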


Alternative approach:
$$t_k = \frac{\langle \mathbf{v}^{(k)}, \mathbf{b} - A\mathbf{x}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} = \frac{\langle \mathbf{v}^{(k)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}, \qquad \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)}$$

Def. ($A$-orthogonality condition) A set of nonzero direction vectors $\{\mathbf{v}^{(1)}, \mathbf{v}^{(2)}, \ldots, \mathbf{v}^{(n)}\}$ that satisfy
$$\langle \mathbf{v}^{(i)}, A\mathbf{v}^{(j)} \rangle = 0 \quad \text{if } i \ne j.$$


Theorem

Let $\{\mathbf{v}^{(1)}, \mathbf{v}^{(2)}, \ldots, \mathbf{v}^{(n)}\}$ be an $A$-orthogonal set of nonzero vectors associated with the positive definite matrix $A$, and let $\mathbf{x}^{(0)}$ be arbitrary. Define
$$t_k = \frac{\langle \mathbf{v}^{(k)}, \mathbf{b} - A\mathbf{x}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}, \qquad \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)}$$
for $k = 1, 2, 3, \ldots, n$. Then, assuming exact arithmetic, $A\mathbf{x}^{(n)} = \mathbf{b}$.


Theorem

The residual vectors $\mathbf{r}^{(k)}$, where $k = 1, 2, 3, \ldots, n$, for a conjugate direction method satisfy
$$\langle \mathbf{r}^{(k)}, \mathbf{v}^{(j)} \rangle = 0 \quad \text{for each } j = 1, 2, 3, \ldots, k.$$


Conjugate gradient method

1. Initial approximation $\mathbf{x}^{(0)}$; first search direction $\mathbf{v}^{(1)} = \mathbf{r}^{(0)} = \mathbf{b} - A\mathbf{x}^{(0)}$.

2. $\mathbf{x}^{(k-1)} = \mathbf{x}^{(k-2)} + t_{k-1}\mathbf{v}^{(k-1)}$, with $\langle \mathbf{v}^{(i)}, A\mathbf{v}^{(j)} \rangle = 0$ and $\langle \mathbf{r}^{(i)}, \mathbf{r}^{(j)} \rangle = 0$ for $i \ne j$
$\Rightarrow$ find $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(k-1)}$ and $\mathbf{v}^{(1)}, \ldots, \mathbf{v}^{(k-1)}$.

3. If $\mathbf{x}^{(k-1)}$ is the solution to $A\mathbf{x} = \mathbf{b}$, we are done. Otherwise, $\mathbf{r}^{(k-1)} = \mathbf{b} - A\mathbf{x}^{(k-1)} \ne \mathbf{0}$ and $\langle \mathbf{r}^{(k-1)}, \mathbf{v}^{(i)} \rangle = 0$ for $i = 1, 2, \ldots, k-1$
$\Rightarrow \mathbf{v}^{(k)} = \mathbf{r}^{(k-1)} + s_{k-1}\mathbf{v}^{(k-1)}$, where $s_{k-1}$ is chosen as follows.


4. Choose $s_{k-1}$ so that $\langle \mathbf{v}^{(k-1)}, A\mathbf{v}^{(k)} \rangle = 0$:
$$A\mathbf{v}^{(k)} = A\mathbf{r}^{(k-1)} + s_{k-1} A\mathbf{v}^{(k-1)}$$
$$\Rightarrow \langle \mathbf{v}^{(k-1)}, A\mathbf{v}^{(k)} \rangle = \langle \mathbf{v}^{(k-1)}, A\mathbf{r}^{(k-1)} \rangle + s_{k-1}\langle \mathbf{v}^{(k-1)}, A\mathbf{v}^{(k-1)} \rangle$$
$$\Rightarrow s_{k-1} = -\frac{\langle \mathbf{v}^{(k-1)}, A\mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k-1)}, A\mathbf{v}^{(k-1)} \rangle} \;\Rightarrow\; \langle \mathbf{v}^{(k-1)}, A\mathbf{v}^{(k)} \rangle = 0$$

5. In fact, $\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(i)} \rangle = 0$ for each $i = 1, 2, \ldots, k-2$ as well
$\Rightarrow \{\mathbf{v}^{(1)}, \ldots, \mathbf{v}^{(k)}\}$ is an $A$-orthogonal set.


6. Having chosen $\mathbf{v}^{(k)}$, compute
$$t_k = \frac{\langle \mathbf{v}^{(k)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} = \frac{\langle \mathbf{r}^{(k-1)} + s_{k-1}\mathbf{v}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}$$
$$= \frac{\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} + s_{k-1}\frac{\langle \mathbf{v}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} = \frac{\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} \quad (\text{since } \langle \mathbf{v}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle = 0)$$
$$\Rightarrow \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)}$$


7. Compute $\mathbf{r}^{(k)}$. Since $\mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)}$,
$$A\mathbf{x}^{(k)} - \mathbf{b} = A\mathbf{x}^{(k-1)} - \mathbf{b} + t_k A\mathbf{v}^{(k)} \;\Rightarrow\; \mathbf{r}^{(k)} = \mathbf{r}^{(k-1)} - t_k A\mathbf{v}^{(k)}$$

8. Compute $s_k$. Since $\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k)} \rangle = 0$,
$$\langle \mathbf{r}^{(k)}, \mathbf{r}^{(k)} \rangle = \langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k)} \rangle - t_k \langle A\mathbf{v}^{(k)}, \mathbf{r}^{(k)} \rangle = -t_k \langle \mathbf{r}^{(k)}, A\mathbf{v}^{(k)} \rangle,$$
and since $t_k = \langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle / \langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle$,
$$\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle = t_k \langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle.$$
Therefore
$$s_k = -\frac{\langle \mathbf{v}^{(k)}, A\mathbf{r}^{(k)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} = -\frac{\langle \mathbf{r}^{(k)}, A\mathbf{v}^{(k)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle} = \frac{(1/t_k)\langle \mathbf{r}^{(k)}, \mathbf{r}^{(k)} \rangle}{(1/t_k)\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle} = \frac{\langle \mathbf{r}^{(k)}, \mathbf{r}^{(k)} \rangle}{\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}.$$


Summary

$$\mathbf{r}^{(0)} = \mathbf{b} - A\mathbf{x}^{(0)}; \qquad \mathbf{v}^{(1)} = \mathbf{r}^{(0)}$$
For $k = 1, 2, 3, \ldots, n$:
$$t_k = \frac{\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}, \qquad \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + t_k \mathbf{v}^{(k)}, \qquad \mathbf{r}^{(k)} = \mathbf{r}^{(k-1)} - t_k A\mathbf{v}^{(k)},$$
$$s_k = \frac{\langle \mathbf{r}^{(k)}, \mathbf{r}^{(k)} \rangle}{\langle \mathbf{r}^{(k-1)}, \mathbf{r}^{(k-1)} \rangle}, \qquad \mathbf{v}^{(k+1)} = \mathbf{r}^{(k)} + s_k \mathbf{v}^{(k)}.$$
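The summary translates line for line into code. A minimal NumPy sketch (the function name and tolerance are illustrative; in exact arithmetic the loop terminates in at most $n$ steps):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """CG for symmetric positive definite A, following the summary above."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x                          # r^(0)
    v = r.copy()                           # v^(1)
    for k in range(len(b)):                # at most n steps in exact arithmetic
        Av = A @ v
        t = (r @ r) / (v @ Av)             # t_k = <r, r> / <v, A v>
        x = x + t * v                      # x^(k)
        r_new = r - t * Av                 # r^(k)
        if np.linalg.norm(r_new) < tol:
            return x, k + 1
        s = (r_new @ r_new) / (r @ r)      # s_k
        v = r_new + s * v                  # v^(k+1)
        r = r_new
    return x, len(b)
```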


Extending the Conjugate Gradient Method to include preconditioning

• If the matrix $A$ is ill-conditioned, the conjugate gradient method is highly susceptible to rounding errors, so the exact answer is usually not obtained in $n$ steps.

• If the matrix $A$ is well-conditioned, an acceptable approximate solution is often obtained in about $\sqrt{n}$ steps.


$$\tilde{A} = C^{-1} A (C^{-1})^t$$

1. Choose a nonsingular conditioning matrix $C$.

2. Consider $\tilde{A}\tilde{\mathbf{x}} = \tilde{\mathbf{b}}$, where $\tilde{\mathbf{x}} = C^t \mathbf{x}$ and $\tilde{\mathbf{b}} = C^{-1}\mathbf{b}$ (writing $C^{-t} = (C^{-1})^t = (C^t)^{-1}$):
$$\tilde{A}\tilde{\mathbf{x}} = (C^{-1} A C^{-t})(C^t \mathbf{x}) = C^{-1} A \mathbf{x} = C^{-1}\mathbf{b} = \tilde{\mathbf{b}}$$

3. Solve $\tilde{A}\tilde{\mathbf{x}} = \tilde{\mathbf{b}}$ for $\tilde{\mathbf{x}}$; then $\mathbf{x} = C^{-t}\tilde{\mathbf{x}}$.

4. Preconditioned residual: with $\tilde{\mathbf{x}}^{(k)} = C^t \mathbf{x}^{(k)}$,
$$\tilde{\mathbf{r}}^{(k)} = \tilde{\mathbf{b}} - \tilde{A}\tilde{\mathbf{x}}^{(k)} = C^{-1}\mathbf{b} - C^{-1}AC^{-t}C^t\mathbf{x}^{(k)} = C^{-1}(\mathbf{b} - A\mathbf{x}^{(k)}) = C^{-1}\mathbf{r}^{(k)}$$


5. Let $\tilde{\mathbf{v}}^{(k)} = C^t \mathbf{v}^{(k)}$ and $\mathbf{w}^{(k)} = C^{-1}\mathbf{r}^{(k)}$.

6. Then
$$\tilde{s}_k = \frac{\langle \tilde{\mathbf{r}}^{(k)}, \tilde{\mathbf{r}}^{(k)} \rangle}{\langle \tilde{\mathbf{r}}^{(k-1)}, \tilde{\mathbf{r}}^{(k-1)} \rangle} = \frac{\langle C^{-1}\mathbf{r}^{(k)}, C^{-1}\mathbf{r}^{(k)} \rangle}{\langle C^{-1}\mathbf{r}^{(k-1)}, C^{-1}\mathbf{r}^{(k-1)} \rangle} = \frac{\langle \mathbf{w}^{(k)}, \mathbf{w}^{(k)} \rangle}{\langle \mathbf{w}^{(k-1)}, \mathbf{w}^{(k-1)} \rangle}$$
and
$$\tilde{t}_k = \frac{\langle \tilde{\mathbf{r}}^{(k-1)}, \tilde{\mathbf{r}}^{(k-1)} \rangle}{\langle \tilde{\mathbf{v}}^{(k)}, \tilde{A}\tilde{\mathbf{v}}^{(k)} \rangle} = \frac{\langle C^{-1}\mathbf{r}^{(k-1)}, C^{-1}\mathbf{r}^{(k-1)} \rangle}{\langle C^t\mathbf{v}^{(k)}, C^{-1}AC^{-t}C^t\mathbf{v}^{(k)} \rangle} = \frac{\langle \mathbf{w}^{(k-1)}, \mathbf{w}^{(k-1)} \rangle}{\langle C^t\mathbf{v}^{(k)}, C^{-1}A\mathbf{v}^{(k)} \rangle} = \frac{\langle \mathbf{w}^{(k-1)}, \mathbf{w}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}$$


7. From $\tilde{\mathbf{x}}^{(k)} = \tilde{\mathbf{x}}^{(k-1)} + \tilde{t}_k \tilde{\mathbf{v}}^{(k)}$:
$$C^t\mathbf{x}^{(k)} = C^t\mathbf{x}^{(k-1)} + \tilde{t}_k C^t\mathbf{v}^{(k)} \;\Rightarrow\; \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + \tilde{t}_k \mathbf{v}^{(k)}$$

8. From $\tilde{\mathbf{r}}^{(k)} = \tilde{\mathbf{r}}^{(k-1)} - \tilde{t}_k \tilde{A}\tilde{\mathbf{v}}^{(k)}$:
$$C^{-1}\mathbf{r}^{(k)} = C^{-1}\mathbf{r}^{(k-1)} - \tilde{t}_k C^{-1}AC^{-t}C^t\mathbf{v}^{(k)} \;\Rightarrow\; \mathbf{r}^{(k)} = \mathbf{r}^{(k-1)} - \tilde{t}_k A\mathbf{v}^{(k)}$$

9. From $\tilde{\mathbf{v}}^{(k+1)} = \tilde{\mathbf{r}}^{(k)} + \tilde{s}_k \tilde{\mathbf{v}}^{(k)}$:
$$C^t\mathbf{v}^{(k+1)} = C^{-1}\mathbf{r}^{(k)} + \tilde{s}_k C^t\mathbf{v}^{(k)} \;\Rightarrow\; \mathbf{v}^{(k+1)} = C^{-t}C^{-1}\mathbf{r}^{(k)} + \tilde{s}_k\mathbf{v}^{(k)} = C^{-t}\mathbf{w}^{(k)} + \tilde{s}_k\mathbf{v}^{(k)}$$


Summary

$$\mathbf{r}^{(0)} = \mathbf{b} - A\mathbf{x}^{(0)}; \qquad \mathbf{w}^{(0)} = C^{-1}\mathbf{r}^{(0)}; \qquad \mathbf{v}^{(1)} = C^{-t}\mathbf{w}^{(0)}$$
For $k = 1, 2, 3, \ldots, n$:
$$\tilde{t}_k = \frac{\langle \mathbf{w}^{(k-1)}, \mathbf{w}^{(k-1)} \rangle}{\langle \mathbf{v}^{(k)}, A\mathbf{v}^{(k)} \rangle}, \qquad \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + \tilde{t}_k \mathbf{v}^{(k)}, \qquad \mathbf{r}^{(k)} = \mathbf{r}^{(k-1)} - \tilde{t}_k A\mathbf{v}^{(k)}, \qquad \mathbf{w}^{(k)} = C^{-1}\mathbf{r}^{(k)},$$
$$\tilde{s}_k = \frac{\langle \mathbf{w}^{(k)}, \mathbf{w}^{(k)} \rangle}{\langle \mathbf{w}^{(k-1)}, \mathbf{w}^{(k-1)} \rangle}, \qquad \mathbf{v}^{(k+1)} = C^{-t}\mathbf{w}^{(k)} + \tilde{s}_k \mathbf{v}^{(k)}.$$
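A minimal sketch of the preconditioned iteration (for illustration the conditioning matrix is passed explicitly and inverted densely; a practical code would apply $C^{-1}$ by triangular or diagonal solves; the Jacobi-style choice $C = \operatorname{diag}(\sqrt{a_{ii}})$ in the usage comment is one common option, not from the slides):

```python
import numpy as np

def preconditioned_cg(A, b, x0, C, tol=1e-10):
    """PCG following the summary above, with nonsingular conditioning matrix C."""
    Cinv = np.linalg.inv(C)              # dense inverse: fine for a sketch only
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x                        # r^(0)
    w = Cinv @ r                         # w^(0) = C^{-1} r^(0)
    v = Cinv.T @ w                       # v^(1) = C^{-t} w^(0)
    for k in range(len(b)):
        Av = A @ v
        t = (w @ w) / (v @ Av)           # t_k = <w, w> / <v, A v>
        x = x + t * v                    # x^(k)
        r = r - t * Av                   # r^(k)
        if np.linalg.norm(r) < tol:
            return x, k + 1
        w_new = Cinv @ r                 # w^(k) = C^{-1} r^(k)
        s = (w_new @ w_new) / (w @ w)    # s_k
        v = Cinv.T @ w_new + s * v       # v^(k+1) = C^{-t} w^(k) + s_k v^(k)
        w = w_new
    return x, len(b)

# Example usage with a Jacobi-style preconditioner:
# C = np.diagflat(np.sqrt(np.diag(A)))
# x, steps = preconditioned_cg(A, b, np.zeros(len(b)), C)
```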