Lesson: Numerical Solutions of Algebraic Equations: Iterative Methods
Course Developer: Brijendra Yadav & Chaman Singh
Department/College: Assistant Professor, Department of Mathematics, A.N.D. College, University of Delhi
Institute of Lifelong Learning, University of Delhi
Table of Contents

1: Learning Outcomes
2: Introduction
3: Vector Norms
4: Matrix Norms
o 4.1: Euclidean Norm or Frobenius Norm of a Matrix
o 4.2: Maximum Norm of a Matrix
o 4.3: Hilbert Norm or Spectral Norm of a Matrix
5: Ill-Conditioned Linear Systems
o 5.1: Condition Number of a Matrix
o 5.2: Residual
6: Iterative Refinement Method
7: Solutions of Linear Systems: Iterative Methods
o 7.1: Jacobi Iteration Method
o 7.2: Gauss-Seidel Iterative Method
o 7.3: Successive Over Relaxation (SOR) Method
8: Convergence Analysis of Iterative Methods
Exercises
Summary
References
1. Learning Outcomes:
After studying this chapter, you should be able to understand:
Vector Norms
Matrix Norms
Maximum Norm of a Matrix
Hilbert Norm or Spectral Norm of a Matrix
Ill-Conditioned Linear Systems
Residual
Gauss-Seidel Iterative Method
2. Introduction:
We have studied that the Gauss elimination method and its variants belong to the direct methods for solving a system of linear equations. These methods yield the solution after an amount of computation that can be specified in advance. In contrast, in an indirect or iterative method, we start with an approximate solution and, if the method is convergent, obtain better and better approximations from a computational cycle repeated as often as necessary to achieve a desired accuracy. The amount of arithmetic in an iterative method depends upon the accuracy required and varies from case to case. An iterative method is used when the convergence is rapid (for example, when the matrix has large main diagonal entries).
In general, one should prefer a direct method for the solution of a linear system, but in the case of matrices with a large number of zero elements it is advantageous to use iterative methods, which preserve these zero elements.
3. Vector Norms:
Let X = (x_j), j = 1, 2, . . ., n, be a vector. The distance between a vector and the null vector is a measure of the size or length of the vector. This is also called the norm of the vector.
The norm of a vector X is a real number, denoted by ||X||, which satisfies the following axioms:
(I) ||X|| >= 0, and ||X|| = 0 if and only if X = 0.
(II) ||kX|| = |k| ||X|| for any scalar k.
(III) ||X + Y|| <= ||X|| + ||Y|| (triangle inequality).

3.1. p-norm of a Vector:
The p-norm of a vector X = (x_1, x_2, . . ., x_n) is denoted by ||X||_p and defined as
||X||_p = (|x_1|^p + |x_2|^p + . . . + |x_n|^p)^(1/p),
where p is a fixed number and p >= 1.
3.2. 1-norm of a Vector:
The 1-norm of a vector X = (x_1, x_2, . . ., x_n) is denoted by ||X||_1 and defined as
||X||_1 = |x_1| + |x_2| + . . . + |x_n|.

3.3. 2-norm or Euclidean Norm of a Vector:
The Euclidean or 2-norm of a vector X = (x_1, x_2, . . ., x_n) is denoted by ||X||_2 or ||X||_e and defined as
||X||_2 = (x_1^2 + x_2^2 + . . . + x_n^2)^(1/2).
||X||_2 is called the Euclidean norm because it is just the formula for distance in three-dimensional Euclidean space.

3.4. Infinity-norm or Maximum Norm of a Vector:
The infinity-norm or maximum norm of a vector X = (x_1, x_2, . . ., x_n) is denoted by ||X||_inf and defined as
||X||_inf = maximum{|x_1|, |x_2|, . . ., |x_n|} = max_j |x_j|.
||X||_inf is also called the uniform norm.
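As an illustration, the three vector norms above can be computed directly from their definitions. This is a minimal sketch in Python; the function names are ours:

```python
def p_norm(x, p):
    """p-norm of a vector: (sum |x_j|^p)^(1/p), for p >= 1."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def max_norm(x):
    """Infinity (maximum/uniform) norm: max |x_j|."""
    return max(abs(v) for v in x)

x = [3, -4]
print(p_norm(x, 1))   # 1-norm: |3| + |-4| = 7
print(p_norm(x, 2))   # Euclidean norm: sqrt(9 + 16) = 5
print(max_norm(x))    # maximum norm: 4
```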
4. Matrix Norms:
Let A = (a_ij) be an n x n matrix. The matrix norm, denoted by ||A||, is a non-negative number which satisfies the following axioms:
(I) ||A|| >= 0, and ||A|| = 0 if and only if A = 0.
(II) ||kA|| = |k| ||A|| for any scalar k.
(III) ||A + B|| <= ||A|| + ||B|| (triangle inequality).
(IV) ||AB|| <= ||A|| ||B||.
4.1. Euclidean Norm or Frobenius Norm of a Matrix:
The Frobenius norm of a matrix A is denoted by ||A||_F or ||A||_e and defined as
||A||_F = (sum_i sum_j |a_ij|^2)^(1/2).
The Frobenius norm of a matrix is also called the Euclidean norm of the matrix.
4.2. Maximum Norm of a Matrix:

4.2.1. Maximum Absolute Row Sum Norm of a Matrix:
The maximum absolute row sum norm of a matrix is denoted by ||A||_inf and defined as
||A||_inf = max_i sum_j |a_ij|.

4.2.2. Maximum Absolute Column Sum Norm of a Matrix:
The maximum absolute column sum norm of a matrix is denoted by ||A||_1 and defined as
||A||_1 = max_j sum_i |a_ij|.
4.3. Hilbert Norm or Spectral Norm of a Matrix:
The Hilbert norm of a matrix A is denoted by ||A||_2 and defined as
||A||_2 = (largest eigenvalue in modulus of A*A)^(1/2),
where A* denotes the conjugate transpose of A. If A is Hermitian, or real and symmetric, then
||A||_2 = rho(A),
where rho(A) is the spectral radius of A.
The choice of a particular norm depends mostly on practical considerations. The row norm is, however, most widely used because it is easy to compute and, at the same time, provides a fairly adequate measure of the size of the matrix.
Example 1: Determine the norms ||A||_1, ||A||_F and ||A||_inf of the matrix

A =
[ 1 3 5 ]
[ 2 4 7 ]
[ 6 8 9 ].

Solution: We have
||A||_1 = max[(1 + 2 + 6), (3 + 4 + 8), (5 + 7 + 9)] = max[9, 15, 21] = 21.
||A||_F = (1^2 + 3^2 + 5^2 + 2^2 + 4^2 + 7^2 + 6^2 + 8^2 + 9^2)^(1/2)
        = (1 + 9 + 25 + 4 + 16 + 49 + 36 + 64 + 81)^(1/2)
        = (285)^(1/2) = 16.88 (approximately).
||A||_inf = max[(1 + 3 + 5), (2 + 4 + 7), (6 + 8 + 9)] = max[9, 13, 23] = 23.
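The three matrix norms of Example 1 can be checked numerically from their definitions. This is a sketch; the helper names are ours:

```python
def row_sum_norm(A):
    # maximum absolute row sum, ||A||_inf
    return max(sum(abs(v) for v in row) for row in A)

def col_sum_norm(A):
    # maximum absolute column sum, ||A||_1
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def frobenius_norm(A):
    # square root of the sum of squares of all entries
    return sum(v * v for row in A for v in row) ** 0.5

A = [[1, 3, 5], [2, 4, 7], [6, 8, 9]]
print(col_sum_norm(A))    # 21
print(row_sum_norm(A))    # 23
print(frobenius_norm(A))  # sqrt(285), about 16.88
```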
5. Ill-Conditioned Linear Systems:
In practical applications, if 'small' changes in the input data cause 'large' changes in the solution (the output), then the computational problem is called ill-conditioned. On the other hand, if the corresponding changes in the solution are also small, then the system is called well-conditioned.
Let AX = b be the system under consideration, and let k(A) denote the condition number of the coefficient matrix A, defined below. If k(A) is near to unity, then the system is called well-conditioned; otherwise it is called ill-conditioned.
5.1. Condition Number of a Matrix:
The condition number of a non-singular square matrix A is denoted by k(A) and defined as
k(A) = ||A|| ||A^(-1)||,
or
k(A) = (mu / lambda)^(1/2),
where mu and lambda are the largest and smallest eigenvalues in modulus of A*A. If A is Hermitian, or real and symmetric, then
k(A) = |mu*| / |lambda*|,
where mu* and lambda* are the largest and the smallest eigenvalues in modulus of A.
5.2. Residual:
Let AX = b be a system of equations. Then the residual r of an approximate solution X~ of AX = b is defined as
r = b - AX~.

Value Addition: Note
The residual r is small if X~ has high accuracy, but the converse may not be true.
Theorem 1: A linear system of equations AX = b whose condition number is small is well-conditioned. A large condition number indicates ill-conditioning.

Proof: Consider the system
AX = b. (1)
Taking norms, ||b|| = ||AX|| <= ||A|| ||X||, so that
||X|| >= ||b|| / ||A||. (2)
The residual of an approximate solution X~ of AX = b is
r = b - AX~ = A(X - X~). (3)
Multiplying (3) by A^(-1) from the left and interchanging sides, we have
X - X~ = A^(-1) r,
so that
||X - X~|| <= ||A^(-1)|| ||r||.
Dividing by ||X|| and using (2), we obtain
||X - X~|| / ||X|| <= ||A|| ||A^(-1)|| ||r|| / ||b|| = k(A) ||r|| / ||b||.
Hence, if k(A) is small, a small residual relative to ||b|| implies a small relative error ||X - X~|| / ||X||, so that the system is well-conditioned. However, if k(A) is large, then the system is ill-conditioned.
Example 2: Determine the Euclidean and the maximum absolute row sum norms of the matrix

A =
[ 1  7 4 ]
[ 4  4 9 ]
[ 12 1 3 ].

Solution: We have
||A||_F^2 = 1 + 49 + 16 + 16 + 16 + 81 + 144 + 1 + 9 = 333.
Therefore, ||A||_F = (333)^(1/2) = 18.25 (approximately).
Also,
||A||_inf = max_i sum_j |a_ij| = max[(1 + 7 + 4), (4 + 4 + 9), (12 + 1 + 3)] = max[12, 17, 16] = 17.
Example 3: Show that the matrix

A =
[ 25 24 10 ]
[ 66 78 37 ]
[ 92 73 80 ]

is ill-conditioned.

Solution: Using the formula
k = |smallest eigenvalue of A| / |largest eigenvalue of A|,
one finds that k is very small; therefore the matrix A is ill-conditioned.
Example 4: Determine the condition number of the matrix

A =
[ 1  4  9 ]
[ 4  9 16 ]
[ 9 16 25 ].

Solution: The maximum absolute row sum norm of A is
||A||_inf = max{1 + 4 + 9, 4 + 9 + 16, 9 + 16 + 25} = max{14, 29, 50} = 50.
By finding the inverse of the matrix A, we have

A^(-1) = (1/8)
[  31 -44  17 ]
[ -44  56 -20 ]
[  17 -20   7 ].

The maximum absolute row sum norm of A^(-1) is
||A^(-1)||_inf = (1/8) max{31 + 44 + 17, 44 + 56 + 20, 17 + 20 + 7} = 120/8 = 15.
Therefore,
k(A) = ||A||_inf ||A^(-1)||_inf = 50 x 15 = 750.
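The computation of Example 4 can be verified numerically. This is a sketch assuming NumPy is available; `np.linalg.norm(A, ord=np.inf)` computes the maximum absolute row sum norm:

```python
import numpy as np

A = np.array([[1, 4, 9], [4, 9, 16], [9, 16, 25]], dtype=float)
A_inv = np.linalg.inv(A)

# condition number with respect to the maximum absolute row sum norm
norm_A = np.linalg.norm(A, ord=np.inf)          # 50
norm_A_inv = np.linalg.norm(A_inv, ord=np.inf)  # 15
k = norm_A * norm_A_inv
print(k)  # 750.0 (up to rounding)
```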
6. Iterative Refinement Method:
The accuracy of an approximate solution can be improved by an iterative procedure. Consider the system of equations
a_11 x_1 + a_12 x_2 + . . . + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + . . . + a_2n x_n = b_2
. . .
a_n1 x_1 + a_n2 x_2 + . . . + a_nn x_n = b_n. (1)
Let x_1^(1), x_2^(1), . . ., x_n^(1) be an approximate solution of the system of equations (1). On substituting these values in (1), we have
a_11 x_1^(1) + a_12 x_2^(1) + . . . + a_1n x_n^(1) = b_1^(1)
a_21 x_1^(1) + a_22 x_2^(1) + . . . + a_2n x_n^(1) = b_2^(1)
. . .
a_n1 x_1^(1) + a_n2 x_2^(1) + . . . + a_nn x_n^(1) = b_n^(1). (2)
On subtracting the equations of (2) from the corresponding equations of (1), we have
a_11 e_1 + a_12 e_2 + . . . + a_1n e_n = d_1
a_21 e_1 + a_22 e_2 + . . . + a_2n e_n = d_2
. . .
a_n1 e_1 + a_n2 e_2 + . . . + a_nn e_n = d_n, (3)
where e_i = x_i - x_i^(1) and d_i = b_i - b_i^(1).
On solving the system of equations (3), we get the values of e_1, e_2, . . ., e_n. Since
x_i = x_i^(1) + e_i,
this gives a better approximation for x_i. This process can be repeated to improve upon the accuracy.
Example 5: Given an approximate solution of a system of two equations in two unknowns, substituting the approximate values into the system, solving the resulting error equations (3) for e_1 and e_2, and adding these corrections yields an improved solution of the system.
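The refinement cycle described above can be sketched in code. The 2 x 2 system below is hypothetical, chosen only for illustration; note that when the error equations are solved exactly, a single cycle already reproduces the solution:

```python
import numpy as np

def refine(A, b, x_approx):
    """One cycle of iterative refinement: the residual r = b - A x~
    plays the role of the d_i, and the correction e solves A e = r."""
    r = b - A @ x_approx          # residual of the approximate solution
    e = np.linalg.solve(A, r)     # solve the error equations
    return x_approx + e

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([6.0, 5.0])
x_approx = np.array([0.9, 1.2])   # a deliberately rough approximation
x_better = refine(A, b, x_approx)
print(np.allclose(A @ x_better, b))  # True
```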
7. Solutions of Linear Systems: Iterative Methods:
In iterative methods, we start with an approximate solution and, if the method is convergent, derive a sequence of closer approximations. The cycle of computation is repeated so as to obtain the desired accuracy. This means that in a direct method the amount of computation is fixed, while in an iterative method the amount of computation depends on the accuracy required.

7.1. Jacobi Iteration Method:
Let us consider the system of simultaneous linear equations
a_11 x_1 + a_12 x_2 + a_13 x_3 + . . . + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + . . . + a_2n x_n = b_2
. . .
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + . . . + a_nn x_n = b_n. (1)
Let the diagonal coefficients a_ii in (1) be non-vanishing. If this condition is not satisfied, then rearrange the equations to satisfy this condition.
Now rearrange the equations as
x_1 = (1/a_11)(b_1 - a_12 x_2 - a_13 x_3 - . . . - a_1n x_n)
x_2 = (1/a_22)(b_2 - a_21 x_1 - a_23 x_3 - . . . - a_2n x_n)
. . .
x_n = (1/a_nn)(b_n - a_n1 x_1 - a_n2 x_2 - . . . - a_n,n-1 x_n-1). (2)
Let the first approximations to the unknowns x_1, x_2, x_3, . . ., x_n be x_1^(1), x_2^(1), x_3^(1), . . ., x_n^(1). Putting these in the R.H.S. of (2), a system of second approximations is obtained:
x_1^(2) = (1/a_11)(b_1 - a_12 x_2^(1) - a_13 x_3^(1) - . . . - a_1n x_n^(1))
x_2^(2) = (1/a_22)(b_2 - a_21 x_1^(1) - a_23 x_3^(1) - . . . - a_2n x_n^(1))
. . .
x_n^(2) = (1/a_nn)(b_n - a_n1 x_1^(1) - a_n2 x_2^(1) - . . . - a_n,n-1 x_n-1^(1)). (3)
Continuing in this way, let x_1^(n), x_2^(n), x_3^(n), . . ., x_n^(n) be the nth approximations; then the system of next approximations is given by
x_1^(n+1) = (1/a_11)(b_1 - a_12 x_2^(n) - a_13 x_3^(n) - . . . - a_1n x_n^(n))
x_2^(n+1) = (1/a_22)(b_2 - a_21 x_1^(n) - a_23 x_3^(n) - . . . - a_2n x_n^(n))
. . .
x_n^(n+1) = (1/a_nn)(b_n - a_n1 x_1^(n) - a_n2 x_2^(n) - . . . - a_n,n-1 x_n-1^(n)). (4)
We proceed in this way until we get a result of desired accuracy.
In matrix form, the iteration for the solution of the system of equations can be written as
X^(n+1) = H X^(n) + C, n = 0, 1, 2, . . .,
where H is called the iteration matrix. This method is also called the method of simultaneous displacements.
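The iteration formula (4) can be sketched in code. The test system below is ours, chosen to be diagonally dominant so that the iteration converges:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every component of the new approximation is
    computed from the previous approximation only (simultaneous
    displacements)."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# hypothetical diagonally dominant test system with solution (1, 1, 1)
A = np.array([[10.0, 1.0, 1.0],
              [1.0, 10.0, 1.0],
              [1.0, 1.0, 10.0]])
b = np.array([12.0, 12.0, 12.0])
print(jacobi(A, b, np.zeros(3)))  # close to [1, 1, 1]
```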
Example 6: Solve the system of equations
27x_1 + 6x_2 - x_3 = 85
6x_1 + 15x_2 + 2x_3 = 72
x_1 + x_2 + 54x_3 = 110 (1)
by the Jacobi iteration method.

Solution: Rearranging each equation for the unknown with the largest coefficient in terms of the remaining unknowns, we have
x_1 = (1/27)(85 - 6x_2 + x_3) (2)
x_2 = (1/15)(72 - 6x_1 - 2x_3) (3)
x_3 = (1/54)(110 - x_1 - x_2). (4)
Starting with x_1^(0) = x_2^(0) = x_3^(0) = 0, the first approximations are
x_1^(1) = 85/27 = 3.1481, x_2^(1) = 72/15 = 4.8, x_3^(1) = 110/54 = 2.0370.
Substituting these values of x_1, x_2 and x_3 into the R.H.S. of (2), (3) and (4) gives the second approximations, and the process is repeated in the same way for the third and fourth approximations. Since the values of x_1^(4), x_2^(4) and x_3^(4) are sufficiently close to x_1^(3), x_2^(3) and x_3^(3) respectively, the values
x_1 = 2.4257, x_2 = 3.5728 and x_3 = 1.9259
can be considered as the solution of the given system.
Example 7: Solve a system of three equations in three unknowns by the Jacobi iteration method in matrix form.

Solution: Rearranging the equations for the unknown with the largest coefficient in terms of the remaining unknowns, the system can be written in the matrix form
X^(n+1) = C + H X^(n).
Starting with the initial vector X^(0) = (0, 0, 0)^T, the successive approximations X^(1), X^(2), . . . are computed, giving
X^(10) = (2.9991, 1.0012, 1.0010)^T.
Since the values of X^(10) are sufficiently close to those of X^(9), the values
x_1 = 2.999 = 3 (approximately), x_2 = 1.001 = 1 (approximately) and x_3 = 1.001 = 1 (approximately)
can be considered as the solution of the given system.
Example 8: Solve a system of four equations in four unknowns by the Jacobi iteration method in matrix form.

Solution: Rearranging the equations for the unknown with the largest coefficient in terms of the remaining unknowns, the system can be written in the matrix form
X^(n+1) = C + H X^(n).
Starting with the initial vector X^(0) = (0, 0, 0, 0)^T, the successive approximations are computed. By the twelfth approximation the iterates have settled down. Hence the required solution is
x_1 = 1, x_2 = 2, x_3 = 3 and x_4 = 0.
7.2. Gauss-Seidel Iterative Method:
This method is a modification of the Jacobi iteration method. Let us consider the system of simultaneous linear equations
a_11 x_1 + a_12 x_2 + a_13 x_3 + . . . + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + . . . + a_2n x_n = b_2
. . .
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + . . . + a_nn x_n = b_n. (1)
Let the diagonal coefficients a_ii in (1) be non-vanishing. If this condition is not satisfied, then rearrange the equations to satisfy this condition. Now rearrange the equations as
x_1 = (1/a_11)(b_1 - a_12 x_2 - a_13 x_3 - . . . - a_1n x_n)
x_2 = (1/a_22)(b_2 - a_21 x_1 - a_23 x_3 - . . . - a_2n x_n)
. . .
x_n = (1/a_nn)(b_n - a_n1 x_1 - a_n2 x_2 - . . . - a_n,n-1 x_n-1). (2)
Let the first approximations to the unknowns x_1, x_2, x_3, . . ., x_n be x_1^(1), x_2^(1), x_3^(1), . . ., x_n^(1). In the Gauss-Seidel method, each newly computed component is used immediately in the subsequent equations, so the second approximations are given by
x_1^(2) = (1/a_11)(b_1 - a_12 x_2^(1) - a_13 x_3^(1) - . . . - a_1n x_n^(1))
x_2^(2) = (1/a_22)(b_2 - a_21 x_1^(2) - a_23 x_3^(1) - . . . - a_2n x_n^(1))
x_3^(2) = (1/a_33)(b_3 - a_31 x_1^(2) - a_32 x_2^(2) - . . . - a_3n x_n^(1))
. . .
x_n^(2) = (1/a_nn)(b_n - a_n1 x_1^(2) - a_n2 x_2^(2) - . . . - a_n,n-1 x_n-1^(2)). (3)
Continuing in this way, let x_1^(n), x_2^(n), x_3^(n), . . ., x_n^(n) be the nth approximations; then the system of (n+1)th approximations is given by
x_1^(n+1) = (1/a_11)(b_1 - a_12 x_2^(n) - a_13 x_3^(n) - . . . - a_1n x_n^(n))
x_2^(n+1) = (1/a_22)(b_2 - a_21 x_1^(n+1) - a_23 x_3^(n) - . . . - a_2n x_n^(n))
x_3^(n+1) = (1/a_33)(b_3 - a_31 x_1^(n+1) - a_32 x_2^(n+1) - a_34 x_4^(n) - . . . - a_3n x_n^(n))
. . .
x_n^(n+1) = (1/a_nn)(b_n - a_n1 x_1^(n+1) - a_n2 x_2^(n+1) - . . . - a_n,n-1 x_n-1^(n+1)). (5)
In matrix notation, with the splitting A = D + L + U of A into its diagonal, strictly lower triangular and strictly upper triangular parts, the Gauss-Seidel iteration can be written as
(D + L) X^(n+1) = -U X^(n) + b,
or
X^(n+1) = -(D + L)^(-1) U X^(n) + (D + L)^(-1) b = H X^(n) + C, n = 0, 1, 2, . . ., (6)
where H = -(D + L)^(-1) U and C = (D + L)^(-1) b.
This solution can also be written as
X^(n+1) = X^(n) + (D + L)^(-1) b - (D + L)^(-1)(D + L + U) X^(n)
        = X^(n) + (D + L)^(-1)(b - A X^(n)),
that is,
(D + L)(X^(n+1) - X^(n)) = b - A X^(n),
or we may write it as
(D + L) V^(n) = r^(n), (7)
where V^(n) = X^(n+1) - X^(n) and r^(n) = b - A X^(n).
We solve equation (7) for V^(n) by forward substitution. The solution is then found from
X^(n+1) = X^(n) + V^(n).
This gives the final solution of the Gauss-Seidel method. We proceed in this way until we get a result of the desired accuracy. The Gauss-Seidel method is also called the method of successive displacements.
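The Gauss-Seidel sweep, in which each updated component is used immediately within the same cycle, can be sketched as follows. The diagonally dominant test system below is assumed for illustration:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each updated component is used
    immediately in the remaining updates of the same sweep
    (successive displacements)."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]   # uses already-updated entries
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# a diagonally dominant test system
A = np.array([[27.0, 6.0, -1.0],
              [6.0, 15.0, 2.0],
              [1.0, 1.0, 54.0]])
b = np.array([85.0, 72.0, 110.0])
print(gauss_seidel(A, b, np.zeros(3)))  # approx. [2.4255, 3.5730, 1.9260]
```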
Example 9: Solve the system of equations
27x_1 + 6x_2 - x_3 = 85
6x_1 + 15x_2 + 2x_3 = 72
x_1 + x_2 + 54x_3 = 110 (1)
by the Gauss-Seidel iterative method.

Solution: Rearranging each equation for the unknown with the largest coefficient in terms of the remaining unknowns, and letting x_1^(n), x_2^(n) and x_3^(n) be the nth approximations, the (n+1)th approximations are given by
x_1^(n+1) = (1/27)(85 - 6x_2^(n) + x_3^(n)) (3)
x_2^(n+1) = (1/15)(72 - 6x_1^(n+1) - 2x_3^(n)) (4)
x_3^(n+1) = (1/54)(110 - x_1^(n+1) - x_2^(n+1)). (5)
Start with x_1^(0) = x_2^(0) = x_3^(0) = 0.
For the first approximation to x_1, i.e. x_1^(1), using equation (3) we have
x_1^(1) = (1/27)(85 - 0 + 0) = 85/27 = 3.1481.
For the first approximation to x_2, i.e. x_2^(1), using equation (4) we have
x_2^(1) = (1/15)(72 - 6(3.1481) - 0) = 3.5407.
For the first approximation to x_3, i.e. x_3^(1), using equation (5) we have
x_3^(1) = (1/54)(110 - 3.1481 - 3.5407) = 1.9132.
For the second approximations, using equations (3), (4) and (5) again,
x_1^(2) = (1/27)(85 - 6(3.5407) + 1.9132) = 2.4322
x_2^(2) = (1/15)(72 - 6(2.4322) - 2(1.9132)) = 3.5720
x_3^(2) = (1/54)(110 - 2.4322 - 3.5720) = 1.9259.
Similarly, the third approximation to the solution is
x_1^(3) = 2.4257, x_2^(3) = 3.5729, x_3^(3) = 1.9260.
Since the values of x_1^(3), x_2^(3) and x_3^(3) are sufficiently close to x_1^(2), x_2^(2) and x_3^(2) respectively, the values
x_1 = 2.426, x_2 = 3.572 and x_3 = 1.926
can be considered as the solution of the given system.
Example 10: Solve a system of three equations in three unknowns by the Gauss-Seidel iterative method.

Solution: Rearranging the equations for the unknown with the largest coefficient in terms of the remaining unknowns, and letting x_1^(n), x_2^(n) and x_3^(n) be the nth approximations, the (n+1)th approximations are computed from equations of the form
x_1^(n+1) = (1/6)(. . .), x_2^(n+1) = (1/4)(. . .), x_3^(n+1) = (1/5)(. . .),
using each newly computed value immediately in the remaining equations. Starting with x_1^(0) = x_2^(0) = x_3^(0) = 0, the first, second, third and fourth approximations are computed in turn. Since the values of x_1^(4), x_2^(4) and x_3^(4) are sufficiently close to x_1^(3), x_2^(3) and x_3^(3) respectively, the values
x_1 = 3.0024 = 3 (approximately), x_2 = 0.9982 = 1 (approximately) and x_3 = 0.9991 = 1 (approximately)
can be considered as the solution of the given system.
7.3. Successive Over Relaxation (SOR) Method:
This method is a generalization of the Gauss-Seidel method. The SOR method is generally used when the coefficient matrix of the system of equations is symmetric and has property A.
Now, let us define an auxiliary vector X' by the Gauss-Seidel step
X'^(n+1) = -D^(-1) L X^(n+1) - D^(-1) U X^(n) + D^(-1) b. (1)
The final solution from the Gauss-Seidel method is now written as
X^(n+1) = X^(n) + w (X'^(n+1) - X^(n))
        = (1 - w) X^(n) + w X'^(n+1). (2)
Replacing the value from (1) in (2), we have
X^(n+1) = (D + wL)^(-1) [(1 - w) D - wU] X^(n) + w (D + wL)^(-1) b,
that is,
X^(n+1) = H X^(n) + C, n = 0, 1, 2, . . ., (3)
where H = (D + wL)^(-1) [(1 - w) D - wU] and C = w (D + wL)^(-1) b.
Equation (3) can also be written as
X^(n+1) = X^(n) + w (D + wL)^(-1) r^(n),
where r^(n) = b - A X^(n) is the residual. This can also be written as
(D + wL)(X^(n+1) - X^(n)) = w r^(n). (4)
This equation describes the SOR method in its error format.
When w = 1, equation (4) reduces to the Gauss-Seidel method. The parameter w is called the relaxation parameter, and X^(n+1) is a weighted mean of X'^(n+1) and X^(n).
7.3.1. Over Relaxation and Under Relaxation Method:
If w > 1, then the SOR method is called the over relaxation method, and if w < 1, then the SOR method is called the under relaxation method.

7.3.2. Optimal Relaxation Parameter for the SOR Method:
The optimal relaxation parameter for the SOR method is denoted by w_opt and defined as
w_opt = 2 / (1 + (1 - mu^2)^(1/2)),
where mu is the largest eigenvalue in modulus of the Jacobi iteration matrix.
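The weighted-mean update of equation (2) can be sketched in code; with w = 1 the sweep reduces to Gauss-Seidel exactly. The 2 x 2 test system below is assumed for illustration:

```python
import numpy as np

def sor(A, b, x0, w, tol=1e-6, max_iter=200):
    """SOR sweep: each component's Gauss-Seidel value is blended with
    its current value using the relaxation parameter w."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i, i]         # Gauss-Seidel value
            x[i] = (1 - w) * x[i] + w * gs    # weighted mean, eq. (2)
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# symmetric positive definite test system with solution (1, 1)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 4.0])
print(sor(A, b, np.zeros(2), w=1.1))  # close to [1, 1]
```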
Example 11: For the system of equations
x_1 + a x_2 = b_1
2a x_1 + x_2 = b_2,
with a = 0.25, determine the optimal relaxation factor if the system is to be solved with the relaxation method.

Solution: Writing the Jacobi iteration for the system, we have
x_1^(n+1) = b_1 - a x_2^(n)
x_2^(n+1) = b_2 - 2a x_1^(n).
Thus, the Jacobi iteration matrix is

H =
[ 0   -a ]
[ -2a  0 ].

For the eigenvalues of this matrix, lambda^2 = 2a^2, so that lambda = +/- a(2)^(1/2) and mu = a(2)^(1/2).
Thus, the optimal relaxation factor is
w_opt = 2 / (1 + (1 - 2a^2)^(1/2)) = 2 / (1 + (1 - 0.125)^(1/2)) = 1.0334 (approximately).
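Assuming the system of Example 11 reads x_1 + a x_2 = b_1, 2a x_1 + x_2 = b_2 (as reconstructed here), the optimal relaxation factor can be checked numerically via the spectral radius of the Jacobi iteration matrix:

```python
import numpy as np

a = 0.25
# Jacobi iteration matrix for x1 + a*x2 = b1, 2a*x1 + x2 = b2
H = np.array([[0.0, -a], [-2 * a, 0.0]])
mu = max(abs(np.linalg.eigvals(H)))            # spectral radius = a*sqrt(2)
w_opt = 2.0 / (1.0 + np.sqrt(1.0 - mu ** 2))   # optimal relaxation factor
print(round(w_opt, 4))  # 1.0334
```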
Example 12: For the system of equations
x_1 + k x_2 = b_1
k x_1 + x_2 = b_2,
with k = 0.5, determine the optimal relaxation factor if the system is to be solved with the relaxation method.

Solution: Writing the Jacobi iteration for the system, we have
x_1^(n+1) = b_1 - k x_2^(n)
x_2^(n+1) = b_2 - k x_1^(n).
Thus, the Jacobi iteration matrix is

H =
[ 0  -k ]
[ -k  0 ].

For the eigenvalues of this matrix, lambda^2 = k^2, so that lambda = +/- k and mu = k = 0.5.
Thus, the optimal relaxation factor is
w_opt = 2 / (1 + (1 - k^2)^(1/2)) = 2 / (1 + (1 - 0.25)^(1/2)) = 1.0718 (approximately).
8. Convergence Analysis of Iterative Methods:
To discuss the convergence of an iteration method for solving the system of equations AX = b, we study the behaviour of the difference between the exact solution X and an approximation X^(k). An iterative method is said to converge for an initial solution X^(0) if the corresponding iterative sequence X^(0), X^(1), X^(2), . . . converges to a solution of the given system. Convergence of an iterative method depends on the relation between X^(n) and X^(n+1).
Theorem 2: Let A be a square matrix. Then
lim_{n->infinity} A^n = 0
if and only if rho(A) < 1, where rho(A) is the spectral radius of A. In particular, ||A|| < 1 is a sufficient condition, since
||A^n|| <= ||A||^n.

Proof: For simplicity, assume that all the eigenvalues of A are distinct. Then there exists a similarity transformation P such that
A = P^(-1) D P,
where D is the diagonal matrix having the eigenvalues lambda_1, lambda_2, . . ., lambda_n of A on the diagonal. Therefore
A^n = P^(-1) D^n P,
where D^n is the diagonal matrix with diagonal entries lambda_1^n, lambda_2^n, . . ., lambda_n^n. Hence A^n -> 0 as n -> infinity if and only if the eigenvalues satisfy |lambda_i| < 1 for all i, that is, rho(A) < 1.
Theorem 3: The infinite series
I + A + A^2 + . . .
converges if and only if lim_{n->infinity} A^n = 0, and in that case its sum is (I - A)^(-1).

Proof: We know that lim_{n->infinity} A^n = 0 if and only if rho(A) < 1. Consider the identity
(I + A + A^2 + . . . + A^n)(I - A) = I - A^(n+1).
Multiplying by (I - A)^(-1) on the right, we have
I + A + A^2 + . . . + A^n = (I - A^(n+1))(I - A)^(-1).
As n -> infinity, we get
I + A + A^2 + . . . = (I - A)^(-1).
Theorem 4: The iteration method for the solution of a system of equations converges to the exact solution for an initial vector X^(0) if the norm of the iteration matrix is less than unity, i.e., ||H|| < 1.

Proof: For an iterative method, the (n+1)th approximation is given by
X^(n+1) = H X^(n) + C, (1)
where H is called the iteration matrix. Without loss of generality, we may assume that the initial vector is X^(0) = 0. We have
X^(1) = C
X^(2) = H X^(1) + C = HC + C = (H + I) C
X^(3) = H X^(2) + C = (H^2 + H + I) C
.
.
.
X^(n+1) = H X^(n) + C = (H^n + H^(n-1) + . . . + H + I) C.
Therefore,
lim_{n->infinity} X^(n+1) = lim_{n->infinity} (H^n + H^(n-1) + . . . + H + I) C = (I - H)^(-1) C,
since ||H|| < 1 guarantees, by Theorem 3, that the series converges. The limit X = (I - H)^(-1) C satisfies X = HX + C, i.e., it is the exact solution.
Hence proved.
Theorem 5: If the coefficient matrix A of the system of equations AX = b is a strictly diagonally dominant matrix, then each of the iteration methods converges for any initial starting vector.

Proof: We know that the (n+1)th iteration for the system of equations AX = b by an iteration method is given by
X^(n+1) = H X^(n) + C, (1)
where H is called the iteration matrix and A = D + L + U as before.

(I) Jacobi Iteration Method:
For the Jacobi iteration method, the iteration matrix H is given by
H = -D^(-1)(L + U) = -D^(-1)(A - D) = I - D^(-1) A.
We know that the iteration method converges if
||H|| < 1,
that is, if
max_i sum_{j != i} |a_ij| / |a_ii| < 1,
i.e., sum_{j != i} |a_ij| < |a_ii| for every i, which is true since the matrix A is strictly diagonally dominant.

(II) Gauss-Seidel Iteration Method:
For the Gauss-Seidel iteration method, the iteration matrix H is given by
H = -(D + L)^(-1) U = I - (D + L)^(-1) A.
We know that the iteration method will be convergent if every eigenvalue of H satisfies |lambda| < 1. Let lambda be an eigenvalue of I - (D + L)^(-1) A with eigenvector X. Then
[I - (D + L)^(-1) A] X = lambda X,
or
(D + L) X - A X = lambda (D + L) X,
or
-U X = lambda (D + L) X. (2)
Writing the ith component of (2),
-sum_{j > i} a_ij x_j = lambda (sum_{j < i} a_ij x_j + a_ii x_i). (3)
Since X is an eigenvector, X != 0. Without loss of generality, we assume that
||X||_inf = 1, with |x_i| = 1 for some i and |x_j| <= 1 for all j != i.
From equation (3), for this index i,
|lambda| (|a_ii| - sum_{j < i} |a_ij|) <= |lambda| |a_ii x_i + sum_{j < i} a_ij x_j| = |sum_{j > i} a_ij x_j| <= sum_{j > i} |a_ij|,
so that
|lambda| <= sum_{j > i} |a_ij| / (|a_ii| - sum_{j < i} |a_ij|) < 1.
Which is true, since the matrix A is strictly diagonally dominant. Hence rho(H) < 1 and the Gauss-Seidel method converges.
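Theorem 5 suggests a simple numerical check before applying either method. This is a sketch; the helper names and the test matrix are ours:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off = A.sum(axis=1) - diag   # off-diagonal absolute row sums
    return bool(np.all(diag > off))

def spectral_radius_jacobi(A):
    """Spectral radius of the Jacobi iteration matrix H = I - D^(-1) A."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))
    H = np.eye(len(A)) - D_inv @ A
    return max(abs(np.linalg.eigvals(H)))

A = [[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]]
print(is_strictly_diagonally_dominant(A))  # True
print(spectral_radius_jacobi(A) < 1)       # True, so Jacobi converges
```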
Exercises:
1. Solve the following system of equations using Jacobi Iteration method:
(I) 4 5 7
2. Solve the following system of equations using Gauss-Seidel iteration
method
(I)
(I)
2 3 9 61
(a) Show that the Jacobi iteration method converges, and hence find its rate of convergence.
(c) Starting with X^(0) = 0, iterate three times.
5. For the following system of equations
(I)
(a) Show that the Gauss-Seidel iteration method converges, and hence find its rate of convergence.
(c) Starting with X^(0) = 0, iterate three times.
6. For the following system of equations
(I)
(II)
(a) Determine the convergence factor for the Jacobi and Gauss-Seidel
methods.
(b) Find the optimal relaxation parameter w_opt for the SOR iteration method.
(I)
Show that both (i) the Jacobi method and (ii) the Gauss-Seidel iteration method diverge for solving the system of equations.
8. Find the necessary and sufficient conditions on k so that (i) the Jacobi method and (ii) the Gauss-Seidel method converge for solving the system of equations AX = b, where
Summary:
In this chapter we have studied:
Vector Norms
Matrix Norms
Maximum Norm of a Matrix
Hilbert Norm or Spectral Norm of a Matrix
Ill-Conditioned Linear Systems
Residual
Gauss-Seidel Iterative Method
References:
1. Brian Bradie, A Friendly Introduction to Numerical Analysis, Pearson Education, India, 2007.
2. M. K. Jain, S. R. K. Iyengar and R. K. Jain, Numerical Methods for Scientific and Engineering Computation, New Age International Publishers, India, 6th edition, 2007.