
"msharpmath" revised on 2014.09.01

Numerical Analysis Using M#math

Chapter 3 Numerical Linear Algebra

www.msharpmath.com *

Translator or contributor **

*[101] Msharpmath Inc.

**[your ID] Your affiliation and address.

Please donate your talent to distribute this free textbook to students. You can be either a translator or an author, or both. Your publication as a translator or an author will be posted on our homepage so that many students can download it. Contribution is chapter-based; you may participate chapter by chapter and become a translator or an author of the free textbook.

If you are an instructor for a numerical analysis course, you may join us as an author of this free textbook. The condition is that you translate (into your native language; optional as an author), modify, expand, or rewrite parts of this textbook from your point of view. It is also recommended to include illustrative figures, more helpful examples, or interesting exercise problems.

Please contact us to get the original Word files. In case your contribution as an author is significant, your authorship will permanently be the second default author of each chapter, as follows. Contribution as a translator will by default be posted permanently.

First default author, Msharpmath Inc.*

Second default author, Your Name**

or translator, Your Name**

Instructor’s name***


Chapter 3 Numerical Linear Algebra


3-1 Fundamental Concept

3-2 Basic Matrix Operations

3-3 Elementary Row Operations

3-4 Review of Coupled Linear Equations

3-5 Forward and Backward Substitution

3-6 Elimination Methods

3-7 Iteration Methods

3-8 Numerical Difficulties

3-9 Exercises ==============================

3-10 Tridiagonal Matrix and TDMA (optional)

3-11 LU Decomposition (optional)

3-12 Choleski Decomposition (optional)

3-13 QR Decomposition (optional)

3-14 PLU Decomposition (advanced)

Numerical linear algebra encompasses a wide variety of fundamentals and applications. These include matrix operations (addition/subtraction, multiplication, inverse, determinant, elementary row operations, etc.), coupled linear equations, and matrix eigenvalue problems (treated in the next chapter), among others.

As for coupled linear equations, several key definitions are discussed. The direct solution methods for coupled linear equations are covered, namely the Gauss elimination and the LU decomposition. However, the direct methods require a large memory space and an expensive computational cost, especially for large-scale problems. In such cases, iteration methods such as the Gauss-Seidel method, SIP (Strongly Implicit Procedure), and CGS (Conjugate Gradient Solver) are employed. The basic concept of the Gauss-Seidel method is discussed in this chapter, and an in-depth treatment of the other iterative methods is provided in the Appendix.

//============================================================================

// syntax (a lot more in ‘matrix’ page, www.msharpmath.com)

//----------------------------------------------------------------------------

.e_k(n) // k-th unit vector in the n-dimensional space

.I(m,n=m) // identity matrix of dimension m x n

.ones(m,n=m) // all-ones matrix

.zeros(m,n=m) // all-zeros matrix

.tdma(a,b,c,d, na,nb) // tridiagonal matrix algorithm

~A // ~A = A.conj, conjugate

A’ // A' = A.conjtr, conjugate transpose

A.’ // A.' = A.tr, transpose

x ** y // inner product, returns ‘double’

A | B // augmented matrix, horizontal concatenation

A.add(i,j,s) // R_i = R_i + R_j * s, elementary row operation

A.backsub // backward substitution for upper-triangular matrix

A.chol // Choleski decomposition

A.col(j) // j-th column vector

A.det // determinant

A.diag(k) // diagonal vector

A.diagm(k) // n x n diagonal matrix

A.gausselim // Gauss elimination in lower side, without pivot

A.gaussjord // Gauss-Jordan elimination with pivot

A.gausspivot // Gauss elimination with pivot (row exchange)

A.forsub // forward substitution for lower-triangular matrix

A.issquare // returns 1 if A is square, otherwise returns 0

A.jord(i) // eliminate elements below and above a_ii

A.minor(i,j) // (m-1) x (n-1) matrix by eliminating i-th row, j-th column

A.mul(i,s) // R_i = R_i * s, elementary row operation

A.row(i) // i-th row vector

A.swap(i,j) // R_i <-> R_j, elementary row operation

A.tril(k) // lower-triangular


A.triu(k) // upper-triangular

A.trace // trace

(L,U) = A.lu // LU decomposition

(Q,R) = A.qr // QR decomposition

(P,L,U) = A.plu // PLU decomposition

Section 3-1 Fundamental Concept

■ Definition of matrix. A matrix is defined as a two-dimensional array of numbers (or functions) in a quadrilateral form. In general, an m x n matrix A consists of m rows and n columns such that

A = [ a_11 a_12 ... a_1n ]
    [ a_21 a_22 ... a_2n ]
    [ ...  ...      ...  ]
    [ a_m1 a_m2 ... a_mn ]   (1)

where a_ij designates the matrix element located at the intersection of the i-th row and the j-th column (1 <= i <= m and 1 <= j <= n). In M#math, a list of numbers inside a pair of [] can be used to create a matrix

// separate rows by ';' and separate elements by ','

#> [ 1,2,3; 4,5; 6 ];

ans =

[ 1 2 3 ]

[ 4 5 0 ]

[ 6 0 0 ]

In particular, an m x 1 matrix is called a column vector, and a 1 x n matrix is called a row vector, namely

x = [ x_1 ]        y = [ y_1 y_2 ... y_n ]   (2)
    [ x_2 ]
    [ ... ]
    [ x_m ]

where a column vector x and a row vector y are described. It has been conventional that mentioning simply vectors means, by default, column vectors.

Using a colon ‘:’ is an easy way to create a row vector in a range of numbers.


#> 1:5 ; // colon ':' for a range of numbers

[ 1 2 3 4 5 ]

#> 1 : 0.2: 2 ; // a:h:b = [ a, a+h, a+2h, ... ]

[ 1 1.2 1.4 1.6 1.8 2 ]

■ Minor. When the i-th row and the j-th column are erased from an m x n matrix A, the resultant (m-1) x (n-1) matrix is called a minor M_ij and is used later in evaluating a determinant or an inverse matrix. For example, a minor from a 3 x 3 matrix looks like

#> A.minor(2,1) ; (3)

■ Square matrix. An m x n matrix is called square if its rows and columns have the same dimension (i.e. m = n). For an n x n square matrix A, a principal diagonal vector (or simply a principal diagonal) is defined with the elements a_11, a_22, ..., a_nn along the diagonal direction (cf. A.diag(0) and A.diagm(0) in M#math).

If all the elements below the principal diagonal are zeros, a matrix is called upper-triangular. Similarly, a matrix is called lower-triangular if all the elements above the principal diagonal are zeros. When a matrix is both upper- and lower-triangular, it is called diagonal. In the following, examples of upper-triangular, lower-triangular, and diagonal matrices are indicated for 3 x 3 matrices

(4)

A =

[ 11 12 13 ]

[ 21 22 23 ]

[ 31 32 33 ]

#> A.triu(0);

ans =

[ 11 12 13 ]

[ 0 22 23 ]

[ 0 0 33 ]


#> A.tril(0);

ans =

[ 11 0 0 ]

[ 21 22 0 ]

[ 31 32 33 ]

#> A.diagm(0); // A.diag(0) = [ 11; 22; 33 ] is a 3 x 1 column vector

ans =

[ 11 0 0 ]

[ 0 22 0 ]

[ 0 0 33 ]

In addition, a scalar matrix corresponds to a diagonal matrix whose diagonal elements are all the same. Finally, an identity matrix is defined to be I = [ delta_ij ] where delta_ij is a Kronecker delta [ delta_ij = 1 if i = j, and 0 otherwise ]. Listed below are examples of a scalar and an identity matrix.

#> S = s*.I(3); I = .I(3); (5)

Section 3-2 Basic Matrix Operations

In this section, basic matrix operations including multiplication, transpose

and trace are discussed.

■ Transpose, trace and multiplication. The transpose A^T of an m x n matrix A = [ a_ij ] is the n x m matrix [ a_ji ], obtained in M#math such that

#> A.' = A .tr ; // pure transpose (6)

and the trace of an n x n square matrix is a scalar evaluated as the sum of the diagonal elements, tr A = a_11 + a_22 + ... + a_nn

#> A .trace; // trace (7)

Multiplication of an m x p matrix A and a p x n matrix B results in an m x n matrix. An explicit formula for multiplication is

(AB)_ij = sum_{k=1}^{p} a_ik b_kj


#> A * B; (8)

For example,

(9)

However, it should be noted that AB != BA in general (the product BA may not even be defined).

Also, multiplication of a matrix and a vector is defined in two ways; one is A x and the other is y^T A. It can be easily seen that A x is a vector (or more precisely a column vector), and y^T A is a row vector. For example,

(10)

(11)

Similarly, multiplication of two vectors has two forms, x^T y and x y^T. Note that

x^T y = x_1 y_1 + x_2 y_2 + ... + x_n y_n

is a scalar (called a dot product of two vectors), and x y^T is an n x n square matrix (called an outer product of two vectors). In M#math, a dot product of two vectors is treated by a special binary operator written as

// x ** y , inner product

(Example 1) Given A = [ 2,3; -1,5 ], x = [ 7; -4 ], y = [ -8; 2 ], find the following: A*x, x'*A, x*y', and x**y.

The results can be obtained by hand calculation and by M#math together


#> A = [2,3; -1,5];; x = [7;-4];; y = [-8;2];;

#> A * x ;

ans =

[ 2 ]

[ -27 ]

#> x' * A ;

ans = [ 18 1 ]

#> x * y'; // x y^T

ans =

[ -56 14 ]

[ 32 -8 ]

#> x ** y ; x' * y; // x^T y

ans = -64

ans = [ -64 ]

Here, care should be taken that x^T y is a scalar whereas x y^T is an n x n matrix. In M#math, x**y is a scalar whereas x'*y is a 1 x 1 matrix.

■ Determinant and inverse. For a 2 x 2 matrix A, a determinant is defined by a scalar such that det A = a_11 a_22 - a_12 a_21

#> A .det; (12)

Similarly for a 3 x 3 matrix, a determinant is evaluated to be

det A = a_11 ( a_22 a_33 - a_23 a_32 ) - a_12 ( a_21 a_33 - a_23 a_31 ) + a_13 ( a_21 a_32 - a_22 a_31 )   (13)

For an n x n square matrix, a determinant is defined from the so-called Laplace expansion in such a way that


det A = sum_{j=1}^{n} a_ij C_ij   (for any fixed row i)   (14)

where C_ij is called a cofactor and related to the determinant of a minor M_ij, i.e.

C_ij = (-1)^{i+j} det M_ij   (15)

However, for numerical computation of determinants, a more efficient method will be employed instead of the above purely mathematical formula.

A determinant of a product can be calculated as a product of each determinant, i.e.

det (AB) = det A det B   (16)

Particularly for upper- and lower-triangular matrices, the determinant is simply a product of all diagonal elements, i.e.

det A = a_11 a_22 ... a_nn   (17)

An inverse of a matrix is defined from

A A^{-1} = A^{-1} A = I   (18)

where A^{-1} designates an inverse matrix. If a matrix A has its inverse matrix A^{-1}, it is called nonsingular, and otherwise is called singular. Although practical only for small matrices, the inverse matrix of an n x n matrix can be obtained from the cofactors as

A^{-1} = [ C_ji ] / det A   (19)

if and only if det A != 0. This indicates that det A != 0 is equivalent to a nonsingular matrix.

For transpose and inverse operations, it is known that

(A^T)^T = A   (20a)

(AB)^T = B^T A^T   (20b)

(A^{-1})^{-1} = A   (20c)

(AB)^{-1} = B^{-1} A^{-1}   (20d)

■ Matrix classification based on operations. Matrix operations such as transpose and inverse can be used to classify square matrices as below.

○ If A^T = A , A is a symmetric matrix

○ If A^T = A^{-1} , A is an orthogonal matrix

○ If A = A^T = A^{-1} , A is a Householder matrix

These matrices of special types are substantially important in both numerical and theoretical analyses. Application of these matrices will be treated later.

Section 3-3 Elementary Row Operations

■ Elementary row operations. In linear algebra, the row vectors of an m x n matrix are manipulated most often. The manipulations are called the elementary row operations, which are described below.

① interchange two rows, R_i <-> R_j (frequently called permutation)

A.swap(i,j)

② multiply a non-zero scalar s to a row, R_i <- s R_i

A.mul(i,s)

③ add a scalar multiple of one row to another row, R_i <- R_i + s R_j

A.add(i,j,s)

In the above, each row vector of a matrix is designated by R_i for the sake of convenience. Then, let us practice elementary row operations with a 3 x 4 matrix

(21)

#> A = [ -1,2,5,-4; 3,-7,0,2; -5,3,-4,6 ];

A =


[ -1 2 5 -4 ]

[ 3 -7 0 2 ]

[ -5 3 -4 6 ]

First, exchanging R_1 and R_3 yields

(22)

#> A.swap(1,3) ; // A.swap = A.rowswap

ans =

[ -5 3 -4 6 ]

[ 3 -7 0 2 ]

[ -1 2 5 -4 ]

Next, multiplying -7 to R_1 and multiplying 3 to R_2 result in

(23)

#> A.mul(1, -7).mul(2, 3);

ans =

[ 7 -14 -35 28 ]

[ 9 -21 0 6 ]

[ -5 3 -4 6 ]

In a similar way, the third elementary row operation is also easily performed, e.g. R_3 <- R_3 - 5 R_1

(24)

#> A.add(3,1, -5);

ans =

[ -1 2 5 -4 ]

[ 3 -7 0 2 ]

[ 0 -7 -29 26 ]

■ Gauss elimination. Given an n x n matrix, a diagonal element a_kk is called a pivot coefficient, e.g. the following 4 x 4 matrix

(25)

has four pivot coefficients, a_11, a_22, a_33, a_44. By using pivot coefficients and elementary row operations, it is possible to make all the elements below the pivot coefficients vanish. This procedure is called the Gauss elimination. By referring to the 4 x 4 matrix in equation (25), the Gauss elimination starts with the following elementary row operations

R_i <- R_i - ( a_i1 / a_11 ) R_1 ,   i = 2, 3, 4   (26)

which convert equation (25) into

(27)

In the above, superscript * reflects the fact that elementary row operations have been applied once on that coefficient. Then, the next step of the Gauss elimination reads

R_i <- R_i - ( a*_i2 / a*_22 ) R_2 ,   i = 3, 4   (28)

where superscript ** reflects that elementary row operations have been performed twice on the coefficients. Similarly, after the last step of the Gauss elimination, we have


(29)

The matrix obtained after the Gauss elimination contains only zero elements below all the pivot coefficients.

//----------------------------------------------------------------------
// Gauss-elimination, user code for A.gausselim
//----------------------------------------------------------------------
matrix gausselim(A) { // copy A
    n = A.m;
    for.k(1,n-1) { // for(k = 1; k < n; k++)
        if( |A(k,k)| < _eps ) break; // stop on a vanishing pivot
        prow = A.row(k)/A(k,k); // normalized pivot row
        for.i(k+1,n) { // for(i = k+1; i <= n; i++)
            A.row(i) -= A(i,k)*prow; A(i,k) = 0;
            A;; // display
        }
        cut;;
    }
    return A;
}

■ Determinant. The determinant of a square matrix does not change by adding a scalar multiple of one row to another row (i.e. R_i <- R_i + s R_j). However, exchanging two rows alters the sign of det A. Based on these properties, the determinant of a matrix can be obtained by first applying the Gauss elimination and then taking a product of all the diagonal elements. Last, change the sign of the determinant if an odd number of permutations has been experienced.
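The procedure above is easy to mirror outside M#math. Below is a minimal Python sketch (our own illustration, not from the original text; the function name det_by_elimination is ours) combining the Gauss elimination with row exchanges and the sign-tracking rule just described:

#----------------------------------------------------------------------
# determinant via Gauss elimination with partial pivoting (Python sketch)
#----------------------------------------------------------------------
def det_by_elimination(A):
    A = [row[:] for row in A]           # work on a copy
    n = len(A)
    sign = 1.0
    for k in range(n - 1):
        # pivoting: pick the largest |A[i][k]| for i >= k
        imax = max(range(k, n), key=lambda i: abs(A[i][k]))
        if imax != k:
            A[k], A[imax] = A[imax], A[k]
            sign = -sign                # each row exchange flips the sign
        if A[k][k] == 0.0:
            return 0.0                  # singular matrix
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    det = sign
    for k in range(n):
        det *= A[k][k]                  # product of diagonal elements
    return det

# the 3 x 3 block B used in (Example 2) below: expected determinant 134
print(det_by_elimination([[1, -2, -5], [3, -7, 0], [-5, 3, -4]]))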

(Example 2) Apply the Gauss elimination to a matrix A, and find the determinant of B, the 3 x 3 submatrix consisting of the first three columns of A.

All the elements below the pivot coefficient a_11 can be made zeros by applying the elementary row operations on A


R_2 <- R_2 - 3 R_1 ,   R_3 <- R_3 + 5 R_1   (30)

or

A -> [ 1,-2,-5,6; 0,-1,15,-16; 0,-7,-29,22 ]   (31)

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];

A =

[ 1 -2 -5 6 ]

[ 3 -7 0 2 ]

[ -5 3 -4 -8 ]

#> A = A.add(2,1, -3).add(3,1, 5);

A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 -7 -29 22 ]

Next, by applying the elementary row operation R_3 <- R_3 - 7 R_2 again on A, we obtain

(32)

#> A = A.add(3,2, -7);

A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 0 -134 134 ]

In the above, no permutation has been done, so that the determinant of B is

det B = (1)(-1)(-134) = 134   (33)

A user-defined function 'gausselim' listed above displays the elimination

procedure as follows.

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];

#> gausselim(A); // elimination below a_ii

gausselim.A =

[ 1 -2 -5 6 ]


[ 0 -1 15 -16 ]

[ -5 3 -4 -8 ]

gausselim.A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 -7 -29 22 ]

------------------------------------------------------------

gausselim.A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 0 -134 134 ]

------------------------------------------------------------

or the built-in function can be employed directly to yield

#> A.gausselim;

ans =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 0 -134 134 ]

■ Gauss-Jordan elimination. In the Gauss elimination process, only the elements below the pivot coefficients are made to be zero elements. On the other hand, the Gauss-Jordan elimination converts all the elements both below and above the pivot coefficients to zeros. In addition, all the pivot coefficients are made to be unity.

Figure 3-1 illustrates the difference between the Gauss elimination and the Gauss-Jordan elimination.

Figure 3-1 Gauss and Gauss-Jordan eliminations

//----------------------------------------------------------------------
// Gauss-Jordan elimination without pivot
//----------------------------------------------------------------------
matrix jordelim(A) { // copy A
    n = A.m;
    for.k(1,n) { // for(k = 1; k <= n; k++)
        if( |A(k,k)| < _eps ) break; // _eps = 2.22e-16
        A.row(k) /= A(k,k); // normalize the pivot row
        for.i(1,n) { // for(i = 1; i <= n; i++)
            if( i == k ) continue; // skip if i==k
            A.row(i) -= A(i,k)*A.row(k);
            A(i,k) = 0.;
            A;; // display
        }
        cut;;
    }
    return A;
}

(Example 3) Apply the Gauss-Jordan elimination to A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ].

Since the same matrix has already been considered in (Example 2), the elimination with respect to the first pivot coefficient a_11 is the same (see Figure 3-1). Therefore, we have

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];

A =

[ 1 -2 -5 6 ]

[ 3 -7 0 2 ]

[ -5 3 -4 -8 ]

#> A = A.add(2,1, -3).add(3,1, 5); // equivalent to A = A.jord(1)

A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 -7 -29 22 ]

In addition, the Gauss-Jordan elimination requires the pivot coefficient to be unity. Therefore, the elementary row operation multiplying a scalar is performed (i.e. R_2 <- -R_2) so that we have


(34)

#> A = A.mul(2, -1);

A =

[ 1 -2 -5 6 ]

[ 0 1 -15 16 ]

[ 0 -7 -29 22 ]

Next, the elementary row operations R_1 <- R_1 + 2 R_2 and R_3 <- R_3 + 7 R_2 are applied both below and above the pivot coefficient a_22

(35)

#> A = A.add(1,2, 2).add(3,2, 7);

A =

[ 1 0 -35 38 ]

[ 0 1 -15 16 ]

[ 0 0 -134 134 ]

Again, the elementary row operations R_3 <- -(1/134) R_3 , R_1 <- R_1 + 35 R_3 and R_2 <- R_2 + 15 R_3 are successively applied

(36)

#> A = A.mul(3, -1/134) .add(1,3, 35).add(2,3, 15);

A =

[ 1 0 0 3 ]

[ 0 1 0 1 ]

[ 0 0 1 -1 ]

This procedure can be simulated by the user-code listed above

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];;

#> jordelim(A);; // elimination both below and above a_ii

jordelim.A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ -5 3 -4 -8 ]


jordelim.A =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 -7 -29 22 ]

------------------------------------------------------------

jordelim.A =

[ 1 0 -35 38 ]

[ -0 1 -15 16 ]

[ 0 -7 -29 22 ]

jordelim.A =

[ 1 0 -35 38 ]

[ -0 1 -15 16 ]

[ 0 0 -134 134 ]

------------------------------------------------------------

jordelim.A =

[ 1 0 0 3 ]

[ -0 1 -15 16 ]

[ -0 -0 1 -1 ]

jordelim.A =

[ 1 0 0 3 ]

[ -0 1 0 1 ]

[ -0 -0 1 -1 ]

------------------------------------------------------------

☑ The final result can also be obtained by one of the following

#> A .jord(1). jord(2). jord(3) ;

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];

A =

[ 1 -2 -5 6 ]

[ 3 -7 0 2 ]

[ -5 3 -4 -8 ]

#> A.jord(1);

ans =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 -7 -29 22 ]

#> A.jord(1).jord(2);

ans =

[ 1 0 -35 38 ]

[ 0 1 -15 16 ]

[ 0 0 -134 134 ]

#> A.jord(1).jord(2).jord(3);

ans =

[ 1 0 0 3 ]


[ 0 1 0 1 ]

[ 0 0 1 -1 ]

Observe that this example is equivalent to solving the linear equations x_1 - 2x_2 - 5x_3 = 6, 3x_1 - 7x_2 = 2, -5x_1 + 3x_2 - 4x_3 = -8, whose solution is (x_1, x_2, x_3) = (3, 1, -1), as will be discussed later.

■ Inverse matrix. The Gauss-Jordan elimination is frequently used to find an inverse matrix. For an n x n square matrix A, a subsidiary n x 2n matrix is constructed such that

[ A | I ]   (37)

Applying the Gauss-Jordan elimination would yield

[ I | A^{-1} ]   (38)

the right half of which is then equal to the inverse matrix A^{-1}, i.e.

A^{-1} = ( right half of the eliminated [ A | I ] )   (39)

Below is an example to find an inverse matrix by the Gauss-Jordan elimination.

(Example 4) Find an inverse matrix of A = [ 2,3; 4,5 ].

From equation (37), a subsidiary matrix [ A | I ] is constructed as


Then, the Gauss-Jordan elimination is applied in the following sequence

(40)

Therefore, an inverse matrix is

A^{-1} = (1/2) [ -5, 3; 4, -2 ]   (41)

#> A = [ 2,3; 4,5 ];

A =

[ 2 3 ]

[ 4 5 ]

#> (A | .I(2)) .gaussjord;

ans =

[ 1 0 -2.5 1.5 ]

[ 0 1 2 -1 ]

#> (A | .I(2)) .gaussjord ..()(3,4).ratio;

(1/2) x

[ -5 3 ]

[ 4 -2 ]
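The [ A | I ] procedure is equally simple to reproduce in other languages. The Python sketch below is our own illustration (the helper name invert_gauss_jordan is not from the original text):

#----------------------------------------------------------------------
# inverse by Gauss-Jordan elimination on [ A | I ] (Python sketch)
#----------------------------------------------------------------------
def invert_gauss_jordan(A):
    n = len(A)
    # build the augmented matrix [ A | I ]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: bring the largest |M[i][k]|, i >= k, to row k
        imax = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[imax] = M[imax], M[k]
        if M[k][k] == 0.0:
            raise ValueError("matrix is singular")
        p = M[k][k]
        M[k] = [v / p for v in M[k]]    # make the pivot unity
        for i in range(n):              # clear below AND above the pivot
            if i == k:
                continue
            m = M[i][k]
            M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]       # right half is A^{-1}

print(invert_gauss_jordan([[2.0, 3.0], [4.0, 5.0]]))
# [[-2.5, 1.5], [2.0, -1.0]], i.e. (1/2)[[-5, 3], [4, -2]] as in (41)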

Section 3-4 Review of Coupled Linear Equations

In this section, coupled linear equations are defined, focusing on the non-homogeneous source vector, and a brief discussion on the solution methods for coupled linear equations is presented.

■ Definition of coupled linear equations. Consider a total of n unknowns x_1, x_2, ..., x_n and a total of m equations written as

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m   (42)


The above set of equations is called the coupled linear equations (simply linear equations or matrix equation hereafter). For the sake of conciseness, matrices are introduced as below

A = [ a_ij ] ,  x = [ x_1; x_2; ...; x_n ] ,  b = [ b_1; b_2; ...; b_m ]   (43)

Then, a compact form for linear equations reads

A x = b   (44)

where the matrix A is called a coefficient matrix, x a solution vector, and b a source vector.

The system of linear equations is called homogeneous if b = 0, and otherwise called non-homogeneous. In particular, if b = 0, equation (44) naturally has a trivial solution x = 0. However, under certain circumstances, A x = 0 can have a non-trivial solution, and this will be treated in the next chapter. Therefore, only non-homogeneous cases of b != 0 will be considered in this chapter.

■ Indeterminate and redundant/inconsistent system. Now, consider the case where the number of unknowns differs from that of equations. First, if the number of unknowns exceeds the number of equations (i.e. n > m), the linear system is called indeterminate. This means that there can exist an infinite number of solutions, and a simple example is presented below

(45)

Next, if the number of equations is more than that of the unknowns (i.e. m > n), the linear system is called redundant or inconsistent, depending on the existence of a solution. An inconsistent linear system corresponds to a situation where no solution is possible at all (e.g. two mutually contradictory equations). Listed below is one example


(46)

In a redundant linear system, extra equations are in fact unnecessary since they are verbatim extra (e.g. one equation is simply a scalar multiple of another). In reality, a redundant or inconsistent linear system rarely occurs in practical problems. Therefore, we will confine our interest only to the case of m = n.

■ Cramer's rule. For an n x n square matrix A, a solution to A x = b can be determined directly from Cramer's rule. Although this rule is unfortunately of little use even for a moderate value of n, the special case of n = 2 is reviewed below for later reference. Given a matrix equation

[ a_11 a_12 ] [ x_1 ] = [ b_1 ]
[ a_21 a_22 ] [ x_2 ]   [ b_2 ]

its solution is found to be

x_1 = ( b_1 a_22 - a_12 b_2 ) / D ,   x_2 = ( a_11 b_2 - b_1 a_21 ) / D

where

D = det A = a_11 a_22 - a_12 a_21   (47)
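As a quick check of the n = 2 formulas above, a tiny Python sketch of ours:

#----------------------------------------------------------------------
# Cramer's rule for a 2 x 2 system (Python sketch)
#----------------------------------------------------------------------
def cramer2(a11, a12, a21, a22, b1, b2):
    D = a11 * a22 - a12 * a21           # det A
    if D == 0.0:
        raise ValueError("singular 2 x 2 system")
    x1 = (b1 * a22 - a12 * b2) / D      # replace column 1 by b
    x2 = (a11 * b2 - b1 * a21) / D      # replace column 2 by b
    return x1, x2

# 2x1 + 3x2 = 8, 4x1 + 5x2 = 14  ->  (x1, x2) = (1, 2)
print(cramer2(2.0, 3.0, 4.0, 5.0, 8.0, 14.0))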

■ Classification of numerical methods. Numerical methods to find a solution to the coupled linear equations can be classified into two groups; namely, the direct method and the iteration method.

The well-known direct methods are the Gauss elimination method, the Gauss-Jordan elimination method, and the LU decomposition method, among others. In this approach, the solution is obtained in a manner x = A^{-1} b, even though an inverse matrix may not be explicitly determined. A major drawback of the direct method is a need for large memory space and expensive computational cost.

A key strategy of iteration methods begins with an initially guessed solution x^(0), and this guess is repeatedly updated in a manner x^(k) -> x^(k+1).


The Jacobi method, the Gauss-Seidel method, SIP (Strongly Implicit Procedure) and CGS (Conjugate Gradient Solver) belong to the iteration methods. On the other hand, these methods require a prerequisite, namely satisfaction of the diagonal dominance written as

| a_ii | >= sum_{j != i} | a_ij |   (48)

If the above condition is not met, the iteration method is prone to divergence and thus a specially devised algorithm should be adopted. A simple check of condition (48) is sketched below.
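Condition (48) is cheap to verify before committing to an iterative solver. A minimal Python sketch of ours:

#----------------------------------------------------------------------
# test the diagonal dominance (48) row by row (Python sketch)
#----------------------------------------------------------------------
def is_diagonally_dominant(A):
    n = len(A)
    for i in range(n):
        off = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off:          # (48) violated on row i
            return False
    return True

# the matrix of (Example 8): weakly dominant, with a strict row
print(is_diagonally_dominant([[2, -1, 1], [-1, 3, -2], [1, -2, 5]]))  # True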

In the sections to follow, various solution methods for coupled linear

equations are discussed.

Section 3-5 Forward and Backward Substitution

Especially for upper- and lower-triangular matrices, the solution to A x = b resorts to a simple substitution process. In this section, two substitution methods (forward and backward) are discussed.

■ Forward substitution method. Given a lower-triangular matrix L (for convenience of discussion, a 3 x 3 matrix is considered), the matrix equation

L x = b   (49)

or

l_11 x_1 = b_1
l_21 x_1 + l_22 x_2 = b_2
l_31 x_1 + l_32 x_2 + l_33 x_3 = b_3   (50)

can be solved such that (let l_ii != 0)

x_1 = b_1 / l_11
x_2 = ( b_2 - l_21 x_1 ) / l_22
x_3 = ( b_3 - l_31 x_1 - l_32 x_2 ) / l_33   (51)


In other words, ① find x_1 first, ② using a known x_1, find x_2, ③ with known x_1 and x_2, find x_3. This procedure finds the solution in an increasing order of index, and therefore is called the forward substitution. It should be emphasized that the forward substitution is different from the direct method using an inverse matrix L^{-1}. The forward substitution can be summarized as

x_i = ( b_i - sum_{j=1}^{i-1} l_ij x_j ) / l_ii ,   i = 1, 2, ..., n   (52)

☑ For convenience of discussion, the forward substitution however is symbolically denoted by x = L^{-1} b, although an inverse matrix is not actually used. In the following, the backward substitution will also be symbolically denoted by x = U^{-1} b.

■ Backward substitution method. Given an upper-triangular matrix U, the coupled linear equations U x = b can be solved starting from x_n as follows.

x_n = b_n / u_nn
x_{n-1} = ( b_{n-1} - u_{n-1,n} x_n ) / u_{n-1,n-1}
...   (53)

The above procedure finds the solution in a decreasing order of index, and thus is called the backward substitution. The backward substitution procedure can be summarized as

x_i = ( b_i - sum_{j=i+1}^{n} u_ij x_j ) / u_ii ,   i = n, n-1, ..., 1   (54)
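Formulas (52) and (54) translate almost verbatim into code. The following Python sketch is our own illustration (in M#math the built-ins are A.forsub and A.backsub):

#----------------------------------------------------------------------
# forward substitution (52) and backward substitution (54), Python sketch
#----------------------------------------------------------------------
def forsub(L, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n):                  # increasing order of index
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

def backsub(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # decreasing order of index
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# upper-triangular system of (Example 5) after Gauss elimination:
U = [[1, -2, -5], [0, -1, 15], [0, 0, -134]]
print(backsub(U, [6, -16, 134]))        # [3.0, 1.0, -1.0]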


Section 3-6 Elimination Methods

A combination of the Gauss elimination (elementary row operations) and the backward substitution is called the Gauss elimination method, which is frequently used to solve A x = b. In this section, discussed are additional ingredients such as pivoting and scaling that improve the Gauss elimination method.

■ Gauss elimination to solve A x = b. The Gauss elimination as well as the backward substitution is applied to an augmented matrix [ A | b ] to find a solution vector x.

(Example 5) Using the Gauss elimination method, solve the coupled linear equations written as

x_1 - 2x_2 - 5x_3 = 6
3x_1 - 7x_2 = 2
-5x_1 + 3x_2 - 4x_3 = -8

An augmented matrix corresponding to the given equations is

[ A | b ] = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ]   (55)

which is the same matrix as that already examined in (Examples 2 and 3). Therefore, the result of the Gauss elimination becomes

#> A = [ 1,-2,-5,6; 3,-7,0,2; -5,3,-4,-8 ];

A =

[ 1 -2 -5 6 ]

[ 3 -7 0 2 ]

[ -5 3 -4 -8 ]

#> A .gausselim;

ans =

[ 1 -2 -5 6 ]

[ 0 -1 15 -16 ]

[ 0 0 -134 134 ]


Then, a backward substitution is applied to the above matrix to find

-134 x_3 = 134 ,  -x_2 + 15 x_3 = -16 ,  x_1 - 2x_2 - 5x_3 = 6   (56)

or (x_1, x_2, x_3) = (3, 1, -1) is obtained as a solution.

#> A .gausselim .backsub;

ans =

[ 3 ]

[ 1 ]

[ -1 ]

■ Pivoting. A deadlock of the Gauss elimination occurs when the pivot coefficient vanishes (i.e. a_kk = 0), which results in a zero divisor. Also, if the pivot coefficient a_kk is significantly small, then the multiplier

m_ik = a_ik / a_kk

in the Gauss elimination nearly blows up. This may cause a loss of effective digits in computing the elementary row operation R_i <- R_i - m_ik R_k. A remedy for such a difficulty is simply exchanging two rows after searching the largest absolute coefficient below the pivot coefficient, and this is called the pivoting.

After pivoting, the elementary row operation then works with

| m_ik | = | a_ik / a_kk | <= 1   (57)

which can substantially alleviate the effect of round-off errors.

//----------------------------------------------------------------------
// Gauss elimination with pivot, user code for A.gausspivot
//----------------------------------------------------------------------
matrix gausspivot(A) { // copy A
    n = A.m;
    for.k(1,n-1) { // for(k = 1; k < n; k++)
        // pivoting procedure below
        //imax = k;
        //pmax = |A(k,k)|;
        //for(i = k+1; i <= n; i++) {
        //    if( pmax < |A(i,k)| ) { pmax = |A(i,k)|; imax = i; }
        //}
        imax = A..(k:n)(k).maxentryk +k-1; // maxentryk
        if( imax != k ) { [ k,imax ];; A = A.swap(imax,k);; }
        // elimination procedure
        if( |A(k,k)| < _eps ) continue; // _eps = 2.22e-16
        prow = A.row(k)/A(k,k);
        for.i(k+1,n) { // for(i = k+1; i <= n; i++)
            A.row(i) -= A(i,k)*prow;
            A(i,k) = 0.;
            A;; // display
        }
        cut;;
    }
    return A;
}

//----------------------------------------------------------------------
// Gauss-Jordan elimination with pivot, user code for A.gaussjord
//----------------------------------------------------------------------
matrix gaussjord(A) { // copy A
    n = A.m;
    for.k(1,n) { // for(k = 1; k <= n; k++) up to the last line
        // pivoting procedure below
        //imax = k;
        //pmax = |A(k,k)|;
        //for(i = k+1; i <= n; i++) {
        //    if( pmax < |A(i,k)| ) { pmax = |A(i,k)|; imax = i; }
        //}
        imax = A..(k:n)(k).maxentryk +k-1; // maxentryk
        if( imax != k ) { [ k,imax ];; A = A.swap(imax,k);; }
        // elimination procedure
        if( |A(k,k)| < _eps ) continue; // _eps = 2.22e-16
        A.row(k) /= A(k,k);
        for.i(1,n) { // for(i = 1; i <= n; i++)
            if( i == k ) continue; // skip if i==k
            A.row(i) -= A(i,k)*A.row(k);
            A(i,k) = 0.;
            A;; // display
        }
        cut;;
    }
    return A;
}


(Example 6) Given the linear equations

Solve them by the Gauss elimination method with/without pivoting, using 3 effective digits and chopping.

As a first case, pivoting is not considered. Using the pivot coefficient

from the first pivot equation

(58)

is used to eliminate the coefficient of in such that

(59)

where chopping after 3 digits (discard digits) is applied. This solution is again

inserted into the first equation

(60)

from which

are obtained. It can be easily noticed that

the foregoing solution is significantly different from the exact solution given

below

(61)

The observed error is approximately .

As a next case, pivoting is adopted. The largest coefficient of appears in

equation , and thus two equations are interchanged so that


Then, with , the Gauss elimination yields

from which

is determined. Finally, from equation

(62)

is obtained with an approximate error of . Observe

that pivoting enhances the accuracy of the solution considerably.

★★(Example 7) Solve the coupled linear equations

0.00002 x_1 + 3000000 x_2 = 3000000.00002
40000 x_1 - 0.00002 x_2 = 39999.99998

with/without pivoting.

The Gauss elimination with pivot is the default method of ‘A.solve’. Next, without-pivoting is done by ‘A.gausselim’ followed by ‘A.backsub’.

#> A = [ 0.00002, 3000000, 3000000.00002; 40000,-0.00002,39999.99998 ];

#> A.gausspivot.backsub; // with pivot, A.solve = A.gausspivot.backsub

ans =

[ 1 ]

[ 1 ]

#> A.gausselim.backsub; // without pivot

ans =

[ 0.999984 ]

[ 1 ]

Without pivot, we have a solution (x_1, x_2) = (0.999984, 1). When compared with the exact solution (x_1 = x_2 = 1), it is surprising to find that the loss of effective digits is serious even though double precision (with 16 effective digits) has been used for computation on only a 2 x 2 system. On the other hand, a use of pivoting cures this undesirable situation and gives a solution nearly consistent with the exact solution.


#> A = [ 0.00002, 3000000, 3000000.00002; 40000,-0.00002,39999.99998 ];

A =

[ 2e-005 3e+006 3e+006 ]

[ 40000 -2e-005 40000 ]

#> gausspivot(A);

gausspivot.ans = [ 1 2 ]

gausspivot.A =

[ 40000 -2e-005 40000 ]

[ 2e-005 3e+006 3e+006 ]

gausspivot.A =

[ 40000 -2e-005 40000 ]

[ 0 3e+006 3e+006 ]

------------------------------------------------------------

#> ans.backsub;

ans =

[ 1 ]

[ 1 ]

■ Scaling. Meanwhile, the pivot coefficient a_kk is not necessarily the largest absolute value among the coefficients of its own row. In certain cases, each row is divided by its largest absolute coefficient, and this is called the scaling of the equation. The effect of scaling is to enhance the accuracy by reducing an otherwise serious round-off error. However, when double precision is employed, the effect of scaling is not significant in most cases. In other words, scaling is of secondary importance when compared to pivoting.

■ Solutions to indeterminate system. It is evident that the indeterminate linear system written as

x - y + 3z = 4
x + y - 5z = -6   (63)

has an infinite number of solutions. In fact, solutions to a given indeterminate linear system can be easily found by the Gauss-Jordan elimination. For example, an augmented matrix can be constructed from equation (63), and applying the Gauss-Jordan elimination yields

(64)


#> [ 1,-1,3,4; 1,1,-5,-6 ].gaussjord ;

ans =

[ 1 0 -1 -1 ]

[ 0 1 -4 -5 ]

which can be rearranged to read

x = z - 1 ,   y = 4z - 5   (65)

where z is still a free variable.

Section 3-7 Iteration Methods

When applied to a large-scale problem, the direct methods studied so far require an excessive amount of computational cost. Moreover, in many applications, the coefficient matrix is sparse and is stored as a few large-dimensioned nonzero vectors. In such cases, the iteration methods turn out to be more practical.

■ Principles of iteration. The principle of the iteration method is first illustrated by a simple equation 5x - 2 = 0. Of course, this simple equation has a solution x = 0.4. However, it is not always possible to obtain a solution directly as such.

The iteration method first assumes a proper initial guess and updates this estimate repeatedly by iteration. For example, in solving 5x - 2 = 0, an iteration equation may be set to be

x_new = ( 2 - x_old ) / 4   (66)

where x_old is considered to be known from the previous iteration or from the initial guess at the beginning. If we start from an initial guess x = 0, the iteration given in equation (66) yields

%> diagonal dominant case
x = 0;;
for.i(1,99) { // for(i=1; i<100; i++) {
    [ i, x = (2-x)/4, f = 5*x-2 ]; // display
    if( |f| < 1.e-7 ) break;
}

[ 1 0.5 0.5 ]

[ 2 0.375 -0.125 ]

[ 3 0.40625 0.03125 ]

[ 4 0.398438 -0.0078125 ]

[ 5 0.400391 0.00195313 ]

[ 6 0.399902 -0.000488281 ]

[ 7 0.400024 0.00012207 ]

[ 8 0.399994 -3.05176e-005 ]

[ 9 0.400002 7.62939e-006 ]

[ 10 0.4 -1.90735e-006 ]

[ 11 0.4 4.76837e-007 ]

[ 12 0.4 -1.19209e-007 ]

[ 13 0.4 2.98023e-008 ]

which shows that 13 iterations are required to get a solution with an accuracy up

to 6 decimal places.

However, if we select the following iteration relation

x_new = 2 - 4 x_old   (67)

the results of the iterations are divergent, as follows.

%> divergent case

#> x = 0;;

#> for[10] [ x = 2-4*x, f = 5*x-2 ];

ans = [ 2 8 ]

ans = [ -6 -32 ]

ans = [ 26 128 ]

ans = [ -102 -512 ]

ans = [ 410 2048 ]

ans = [ -1638 -8192 ]

ans = [ 6554 32768 ]

ans = [ -26214 -131072 ]

ans = [ 104858 524288 ]

ans = [ -419430 -2.09715e+006 ]


which is due to the failure of satisfying the diagonal dominance, as will be addressed below.

■ Convergence criterion of iteration method. In order to terminate iterations, it is required to specify a certain criterion for convergence. The most widely-used criterion for convergence is the maximum change in two consecutive solutions, namely

max_i | x_i^(k+1) - x_i^(k) | < epsilon   (68)

where x^(k) stands for the solution corresponding to the k-th step of iteration, and epsilon is an error tolerance pre-selected a priori. Another criterion commonly adopted is ||r|| < epsilon, where r = b - A x is called the residue of the given equation A x = b.

The success of iteration methods strongly depends on the satisfaction of the diagonal dominance

sum_{j != i} | a_ij | / | a_ii | <= 1   (69)

which is frequently called the Scarborough convergence criterion. In the above, a constraint is that at least one equation must hold a strict inequality. However, even if this condition is not met, iteration methods are often successful in obtaining the desired solution.

■ Several iteration methods. The coupled linear equations written as

A x = b   (70)

can be rearranged for x_i by solving the i-th equation

x_i = ( b_i - sum_{j != i} a_ij x_j ) / a_ii   (71)


The right-hand side of the above equation is now treated as known. In treating these as known values, there exist two variations; namely the Jacobi method and the Gauss-Seidel method. By designating with superscript (k) the iteration step, these methods are written as

(Jacobi)  x_i^(k+1) = ( b_i - sum_{j != i} a_ij x_j^(k) ) / a_ii   (72)

(Gauss-Seidel)  x_i^(k+1) = ( b_i - sum_{j < i} a_ij x_j^(k+1) - sum_{j > i} a_ij x_j^(k) ) / a_ii   (73)

It can be easily seen that the most recently updated solution x_j^(k+1) is exploited in the right-hand side of equation (73), whereas it is not in equation (72). Therefore, the Jacobi method is rarely used except when vectorization or parallel processing is taken into consideration, as the sketch below illustrates.
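Below is a minimal Python sketch of the Jacobi update (72), our own illustration; note that each sweep reads only the vector from the previous iteration, which is what makes the method easy to vectorize or parallelize:

#----------------------------------------------------------------------
# Jacobi iteration (72): every update uses only the previous sweep
#----------------------------------------------------------------------
def jacobi(A, b, tol=1e-6, itmax=10000):
    n = len(b)
    x = [0.0] * n
    for it in range(itmax):
        xnew = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            xnew[i] = (b[i] - s) / A[i][i]
        change = max(abs(xnew[i] - x[i]) for i in range(n))
        x = xnew                        # whole vector replaced at once
        if change < tol:                # criterion (68)
            break
    return x

# system of (Example 8) below: solution close to (-2, 33, 28)
A = [[2, -1, 1], [-1, 3, -2], [1, -2, 5]]
print(jacobi(A, [-9, 45, 72]))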

The Gauss-Seidel method can be rearranged for ease of use such that

x_i <- x_i* + ( b_i - sum_j a_ij x_j* ) / a_ii   (74)

where x* stands for the field values currently in memory.

//----------------------------------------------------------------------
// Gauss-Seidel iteration procedure
//----------------------------------------------------------------------
matrix gauss_seidel(&A,b) { // refer A, copy b
    n = A.n;
    x = delx = .zeros(n,1);
    d = A.diag(0);
    for.iter(1,10000) {
        for.j(1,n) {
            delx(j) = ( b(j) - A.row(j)**x ) / d(j);
            x(j) += delx(j);
        }
        sum = delx.norm2;
        if( iter <= 2 ) { x.tr ;; } // display ;;
        if( sum < 1.e-6 ) { iter;; break; } // display ;;
    }
    return x;
}

(Example 8) Solve

2x_1 - x_2 + x_3 = -9
-x_1 + 3x_2 - 2x_3 = 45
x_1 - 2x_2 + 5x_3 = 72

by the Gauss-Seidel iteration method.

Referring to equation (74), the given equations are transformed into

x_1 = x_1* + (1/2) ( -9 - ( 2 x_1* - x_2* + x_3* ) )
x_2 = x_2* + (1/3) ( 45 - ( -x_1 + 3 x_2* - 2 x_3* ) )
x_3 = x_3* + (1/5) ( 72 - ( x_1 - 2 x_2 + 5 x_3* ) )   (75)

where superscript * designates the values stored prior to computation, and unstarred values on the right-hand side are the most recently updated ones.

First by initializing (x_1*, x_2*, x_3*) = (0, 0, 0) and inserting these into equation (75), we have

(x_1, x_2, x_3) = ( -4.5, 13.5, 20.7 )   (76)

The next iteration can then be performed as follows

(x_1, x_2, x_3) = ( -8.1, 26.1, 26.46 )   (77)


And subsequent iterations are performed by

%> Gauss-Seidel by variable approach
#> (x1,x2,x3) = (0,0,0);; k=0;;
for[20] {
    x1 = x1 + (1/2) * ( -9 - ( 2*x1 -x2 +x3) );;
    x2 = x2 + (1/3) * ( 45 - ( -x1 +3*x2 -2*x3) );;
    x3 = x3 + (1/5) * ( 72 - ( x1 -2*x2 +5*x3) );;
    [ ++k, x1,x2,x3 ]; // to display
}

k x1 x2 x3

[ 1 -4.5 13.5 20.7 ]

[ 2 -8.1 26.1 26.46 ]

[ 3 -4.68 31.08 27.768 ]

[ 4 -2.844 32.564 27.9944 ]

[ 5 -2.2152 32.9245 28.0129 ]

[ 6 -2.04416 32.9938 28.0064 ]

[ 7 -2.00626 33.0022 28.0021 ]

[ 8 -1.99998 33.0014 28.0006 ]

[ 9 -1.99957 33.0005 28.0001 ]

[ 10 -1.9998 33.0001 28 ]

[ 11 -1.99994 33 28 ]

[ 12 -1.99998 33 28 ]

[ 13 -2 33 28 ]

[ 14 -2 33 28 ]

%> Gauss-Seidel iteration by user-function

#> A = [ 2,-1,1; -1,3,-2; 1,-2,5 ];;

#> gauss_seidel( A, [ -9; 45; 72 ] );

gauss_seidel.ans = [ -4.5 13.5 20.7 ]

gauss_seidel.ans = [ -8.1 26.1 26.46 ]

gauss_seidel.iter = 15

ans =

[ -2 ]

[ 33 ]

[ 28 ]

■ Relaxation. In order to accelerate the convergence rate, the relaxation method is utilized to solve A x = b. In the relaxation, a relaxation factor omega is defined and used to update the solution in the following manner

x_i <- x_i* + omega ( b_i - sum_j a_ij x_j* ) / a_ii   (78)

where it is called

• under-relaxation if 0 < omega < 1

• over-relaxation if 1 < omega < 2

Since the case of omega >= 2 results in a diverging solution, the commonly used relaxation factor ranges over 0 < omega < 2.
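Update (78) wrapped around the Gauss-Seidel sweep gives the successive over-relaxation (SOR) method. A minimal Python sketch of ours, with the loop structure mirroring the gauss_seidel user code above:

#----------------------------------------------------------------------
# Gauss-Seidel with a relaxation factor omega (SOR), Python sketch
#----------------------------------------------------------------------
def sor(A, b, omega=1.2, tol=1e-6, itmax=10000):
    n = len(b)
    x = [0.0] * n
    for it in range(itmax):
        change = 0.0
        for i in range(n):
            r = b[i] - sum(A[i][j] * x[j] for j in range(n))
            dx = omega * r / A[i][i]    # update (78)
            x[i] += dx
            change = max(change, abs(dx))
        if change < tol:                # criterion (68)
            break
    return x

# system of (Example 8); omega = 1 reduces SOR to plain Gauss-Seidel
A = [[2, -1, 1], [-1, 3, -2], [1, -2, 5]]
print(sor(A, [-9, 45, 72], omega=1.0))  # close to [-2, 33, 28]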

Section 3-8 Numerical Difficulties

Even though not frequently encountered in practical applications, there are problems in which round-off errors play a significant role in obtaining numerical solutions. These are called ill-conditioned problems. In this section, ill-conditioned problems are briefly discussed.

■ Characteristics of ill-conditioned problems. A matrix A having an ill-condition exhibits the difficulties listed below

• a small change in a matrix element affects the solution significantly

• diagonal dominance is frequently not met

• det A deviates significantly from the unity

• A A^{-1} is substantially different from the identity matrix

• (A^{-1})^{-1} differs appreciably from A

• the computed solution differs from the exact one more seriously than the residue indicates

Although it is not easy to predict whether or not the above difficulties may occur for a given matrix, a commonly accepted criterion is helpful to predict the ill-condition. The most famous ill-condition is encountered with the well-known Hilbert matrix

H = [ h_ij ] ,   h_ij = 1 / ( i + j - 1 )   (79)

To find an inverse matrix of the Hilbert matrix and to examine the degree of disparity between A*A.inv and the identity matrix, the following M#math code

#> n = 12;;
#> A = matrix[n,n].i.j (1/(i+j-1));; // matrix initialization
#> (A*A.inv) .endrow .(n-2:n) ; // (A*A.inv).endrow .entry(n-2:n) ;


is executed to yield

ans = [ 2.45703 -1.03271 1.18677 ]

It can be noted that the errors are too serious to regard the above as the elements of an identity matrix. In case single instead of double precision is used, the effect of errors is noticeable even for a small dimension n.

■ Condition number and matrix norm. Introduced below is the matrix norm of a matrix A to estimate the degree of ill-condition. The matrix norm is defined through the effect of the matrix on vectors. The easiest way to define a matrix norm starts from the p-norm of a vector

||x||_p = ( sum_i |x_i|^p )^{1/p}   (80)

which will be extended to the case of matrices. If p -> infinity, the infinity-norm of a vector is just the maximum absolute element of the vector

||x||_inf = max_i |x_i|   (81)

By extending the infinity-norm to a matrix, we have the maximum absolute row sum

||A||_inf = max_i sum_j |a_ij|   (82)

With the above-defined matrix norm, a condition number is defined as

cond(A) = ||A|| ||A^{-1}||   (83)

Note that the condition number of the identity matrix is 1, and the condition numbers of other matrices are always equal to or greater than 1. This implies that a small condition number corresponds to a well-conditioned state, whereas a large condition number to an ill-conditioned state. In general, for a condition number of


10^k, about k digits are lost during matrix operations. In the case of the 4 x 4 Hilbert matrix, a condition number is

#> A = matrix.hilb(4); // 4 x 4 Hilbert matrix

A =

[ 1 0.5 0.333333 0.25 ]

[ 0.5 0.333333 0.25 0.2 ]

[ 0.333333 0.25 0.2 0.166667 ]

[ 0.25 0.2 0.166667 0.142857 ]

#> A.cond; // A.norminf*A.inv.norminf;

ans = 28375

In other words, cond(A) = 28375 ~ 10^4.5 corresponds to a loss of about 4 effective digits even for a small size of 4 x 4. Therefore, the reason for the failure in finding the inverse of the afore-mentioned 12 x 12 Hilbert matrix can be understood from the fact that

#> matrix.hilb(12).cond; // 12 x 12 Hilbert matrix

ans = 3.8727742e+016

This result implies that about 16 digits are lost during the course of finding the inverse matrix, and therefore the so-obtained inverse is totally useless. Further description of the condition number is beyond the scope of this book and thus is omitted.
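The same experiment can be repeated outside M#math. A short NumPy sketch of ours; numpy.linalg.cond(H, p=np.inf) evaluates exactly the infinity-norm definition (83):

#----------------------------------------------------------------------
# condition number of Hilbert matrices in the infinity-norm (NumPy sketch)
#----------------------------------------------------------------------
import numpy as np

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)            # h_ij = 1/(i+j-1) with 1-based i, j

for n in (4, 12):
    H = hilbert(n)
    # cond(H) = ||H||_inf * ||H^{-1}||_inf, cf. equation (83)
    print(n, np.linalg.cond(H, p=np.inf))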

(Example 9) Find a condition number for A = [ k,1; 1,1 ] where k > 1, and find an ill-conditioned situation.

To find a condition number, it is demanded to find an inverse matrix

A^{-1} = ( 1/(k-1) ) [ 1,-1; -1,k ]

from which

cond(A) = ||A||_inf ||A^{-1}||_inf = (k+1)^2 / (k-1)   (84)

is obtained. If k = 1001 is used, the condition number takes a large number as


In reality, the given matrix is associated with the following linear equations

k x + y = b_1
x + y = b_2

If k approaches the unity, the above two lines tend to be parallel to each other. This means that even a small change in b_1 or b_2 would result in a significant variation in the solution. This situation is a geometrical interpretation of a large condition number. From the above example, it can be seen that the behavior of an ill-conditioned matrix can be estimated to a certain quantitative degree.

#> k = 1001;

#> A = [ k,1; 1,1 ];

#> A.cond ; // A.norminf * A.inv.norminf;

k = 1001

A =

[ 1001 1 ]

[ 1 1 ]

ans = 1004.004

☑ However, due to the cost and difficulty of obtaining inverse matrices, several different definitions are employed in practice to estimate the condition number.

Section 3-9 Exercises ===========================

【1】Solve the coupled linear equations by the Gauss elimination method.

(a)

(b)

【2】Find the determinants of the following matrices.

(a)

(b)

(c)

【3】Find the inverse matrices.


(a)

(b)

(c)

【4】Find the LU decompositions by the Crout method.

(a)

(b)

(c)

【5】Find the LU decompositions by the Crout method.

(a)

(b)

(c)

(d)

【6】Find the LU decompositions by the Crout method.

(a)

(b)

(c)

【7】Given the Hilbert matrix H = [ h_ij ] where h_ij = 1/(i+j-1), find the following for the matrix.

(a) (b)

(c)

【8】Given coupled linear equations where

find a solution for the case of .


【9】Solve the following linear equations by the Gauss-Seidel iteration method

(first, check the diagonal dominance).

【10】Solve the following indeterminate system of linear equations.

(a)

(b)

【11】The Vandermonde determinant is defined such that

V_n = det [ 1         1         ...  1             ]
          [ x_0       x_1       ...  x_{n-1}       ]
          [ x_0^2     x_1^2     ...  x_{n-1}^2     ]
          [ ...       ...            ...           ]
          [ x_0^{n-1} x_1^{n-1} ...  x_{n-1}^{n-1} ]

    = prod_{0 <= j < k <= n-1} ( x_k - x_j )

For example, we have

【12】Consider a particular tridiagonal matrix equation A x = d, written row-wise as

b_i x_{i-1} + a_i x_i + c_i x_{i+1} = d_i ,   with   a_i + b_i + c_i = 0

Note that this linear system is indeterminate, since a constant can be added to any solution vector without altering the system, namely A ( x + c ) = A x = d for any constant vector c.


In such a case, the associated matrix has a zero determinant, det A = 0. An example is

A = [  1 -1  0  0  0  0 ]
    [ -1  2 -1  0  0  0 ]
    [  0 -1  2 -1  0  0 ]
    [  0  0 -1  2 -1  0 ]
    [  0  0  0 -1  2 -1 ]
    [  0  0  0  0 -1  1 ] ,   det A = 0

This indeterminate system is made determinate by fixing one of the unknowns to a prescribed value. Then, the linear system is solved again.

//----------------------------------------------------------------------

// end of Lectures

//----------------------------------------------------------------------

Section 3-10 Tridiagonal Matrix and TDMA (optional)

Tridiagonal matrices occur in many applications, such as spline interpolation and solutions to ordinary/partial differential equations. In this section, the so-called TDMA (Tri-Diagonal Matrix Algorithm) is discussed.

■ Tridiagonal matrix equation. Consider the following coupled linear equations

a_1 x_1 + c_1 x_2 = d_1
b_2 x_1 + a_2 x_2 + c_2 x_3 = d_2
b_3 x_2 + a_3 x_3 + c_3 x_4 = d_3
...
b_{n-1} x_{n-2} + a_{n-1} x_{n-1} + c_{n-1} x_n = d_{n-1}
b_n x_{n-1} + a_n x_n = d_n   (85)


which can be re-written in a matrix form

[ a_1 c_1                     0 ] [ x_1     ]   [ d_1     ]
[ b_2 a_2 c_2                   ] [ x_2     ]   [ d_2     ]
[     b_3 a_3 c_3               ] [ x_3     ] = [ d_3     ]
[         ...  ...  ...         ] [ ...     ]   [ ...     ]
[          b_{n-1} a_{n-1} c_{n-1} ] [ x_{n-1} ]   [ d_{n-1} ]
[ 0                 b_n    a_n  ] [ x_n     ]   [ d_n     ]   (86)

It is evident that the associated matrix is of a tridiagonal form. Although the LU decomposition can be used to solve the above tridiagonal matrix equation, it has been accepted that TDMA is more efficient than the LU decomposition as long as only one source vector is taken into consideration.

■ TDMA. Equation (86) can be written in a representative manner such that

b_i x_{i-1} + a_i x_i + c_i x_{i+1} = d_i ,   b_1 = c_n = 0   (87)

Now, suppose that the backward substitution takes the following form

x_i = P_i x_{i+1} + Q_i   (88)

Then, by setting i -> i-1, we have

x_{i-1} = P_{i-1} x_i + Q_{i-1}   (89)

which is then inserted into equation (87) to yield

( a_i + b_i P_{i-1} ) x_i + c_i x_{i+1} = d_i - b_i Q_{i-1}   (90)


Comparing the above with equation (88), the following recurrence relations

$$P_i = \frac{-c_i}{a_i + b_i P_{i-1}}, \qquad Q_i = \frac{d_i - b_i Q_{i-1}}{a_i + b_i P_{i-1}} \tag{91}$$

are derived. It can be noted that these are by nature similar to the forward substitution, since the index is increasing. Especially for $i = 1$, the condition $b_1 = 0$ yields

$$P_1 = -\frac{c_1}{a_1}, \qquad Q_1 = \frac{d_1}{a_1} \tag{92}$$

as starting values. Note that the above results are alternatively obtained by using $P_0 = 0$ and $Q_0 = 0$ in equation (91). Also for $i = n$, using $c_n = 0$ gives $P_n = 0$, so that

$$x_n = Q_n \tag{93}$$

is obtained as a solution from equation (88). The backward substitution then follows to find, in sequence, $x_{n-1}, x_{n-2}, \ldots, x_1$. Summarized below are the major steps of TDMA.

(Tri-Diagonal Matrix Algorithm)

$$P_1 = -\frac{c_1}{a_1}, \qquad Q_1 = \frac{d_1}{a_1} \tag{94}$$

$$P_i = \frac{-c_i}{a_i + b_i P_{i-1}}, \qquad Q_i = \frac{d_i - b_i Q_{i-1}}{a_i + b_i P_{i-1}}, \qquad i = 2, 3, \ldots, n \tag{95}$$

$$x_n = Q_n \tag{96}$$

$$x_i = P_i x_{i+1} + Q_i, \qquad i = n-1, n-2, \ldots, 1 \tag{97}$$

//----------------------------------------------------------------------


// tridiagonal matrix algorithm (TDMA)

// a(i)*x(i) + b(i)*x(i-1) + c(i)*x(i+1) = d(i), b(1) = c(n) = 0

//----------------------------------------------------------------------

matrix user_tdma(a,b,c,d, double na,nb) {

n = a.mn;

P = Q = x = .zeros(1,n);

P(na) = -c(na)/a(na);

Q(na) = d(na)/a(na);

for.i(na+1,nb) { // for(i = na+1; i <= nb; i++)

den = 1/( a(i) + b(i)*P(i-1) + _eps ); // _eps guards against a zero pivot

P(i) = -c(i) * den;

Q(i) = (d(i) - b(i) * Q(i-1))*den;

}

x(nb) = Q(nb);

for.i(nb-1,na,-1) x(i) = P(i)*x(i+1)+Q(i); // for(i = nb-1; i >= na; i--)

return x;

}

//----------------------------------------------------------------------

// a(i)*x(i) + b(i)*x(i-1) + c(i)*x(i+1) = d(i), b(1) = c(n) = 0
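// returns the signed sum of all row residuals (near zero for a correct solution)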

//----------------------------------------------------------------------

double user_tdmares(matrix x, &a,&b,&c,&d) {

n = a.mn;

res = d(1)-a(1)*x(1)-c(1)*x(2)

+ d(n)-a(n)*x(n)-b(n)*x(n-1);

for.i(2,n-1) { // for(i = 2; i < n; i++)

res += d(i) - a(i)*x(i) - b(i)*x(i-1) - c(i)*x(i+1);

}

return res;

}

// An example is shown below

#> n = 6;

#> a = b = c = d = .zeros(1,n);; // create matrices

#> a = 2;

#> b = 1;

#> c = 1;

#> for.i(1,n) d(i) = i;;

#> x = user_tdma(a,b,c,d, 1,n).trun12;

n = 6

a = [ 2 2 2 2 2 2 ]

b = [ 1 1 1 1 1 1 ]

c = [ 1 1 1 1 1 1 ]

ans = [ 0 1 0 2 0 3 ]

#> user_tdmares(x, a,b,c,d);

ans = 1.2878587e-014
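This result can also be traced by hand with equations (94)-(97). The forward sweep gives

$$P = \left[\, -\tfrac{1}{2},\ -\tfrac{2}{3},\ -\tfrac{3}{4},\ -\tfrac{4}{5},\ -\tfrac{5}{6},\ \cdot\ \right], \qquad Q = \left[\, \tfrac{1}{2},\ 1,\ \tfrac{3}{2},\ 2,\ \tfrac{5}{2},\ 3 \,\right]$$

($P_6$ is not needed since $x_6 = Q_6$), and the backward sweep $x_i = P_i x_{i+1} + Q_i$ reproduces $x = [\,0\ \ 1\ \ 0\ \ 2\ \ 0\ \ 3\,]$.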


It is recommended to utilize a built-in dot function as follows.

#> .tdma(a,b,c,d, 1,n).trun12; // built-in dot function

ans = [ 0 1 0 2 0 3 ]

Section 3-11 LU Decomposition (optional)

The LU decomposition method for solving coupled linear equations is discussed in this section. Although LU decomposition is a direct method, it can also be adopted by iteration methods in the form of ILU (incomplete LU decomposition).

■ LU decomposition for linear system. The LU decomposition first converts a given square matrix $A$ into a product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$, namely $A = LU$.

Given an $n \times n$ square matrix $A$, LU decomposition is applied to a linear system $Ax = b$. Then we have

$$Ax = (LU)x = L(Ux) = b \tag{98}$$

which can be solved by the forward substitution as

$$Ux = L^{-1} b \tag{99}$$

In the above, $L^{-1}$ is regarded as a symbolic representation of forward substitution rather than as multiplying an inverse matrix. Subsequently, applying the backward substitution gives the final solution vector

$$x = U^{-1}\left(L^{-1} b\right) \tag{100}$$
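As a minimal sketch of this two-stage solve (assuming, as in the examples below, that the built-in `.lu` returns the Crout factors and that the backslash operator performs the triangular solves, as it does in the PLU example of Section 3-14; the source vector here is arbitrary):

#> A = [ 2,-1,1; -1,3,-2; 1,-2,5 ];;
#> b = [ 1, 2, 3 ].tr;;
#> (L,U) = A.lu;; // A = L*U
#> y = L \ b;; // forward substitution, equation (99)
#> x = U \ y; // backward substitution, equation (100)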

■ Procedure of LU decomposition $A = LU$. Consider an LU decomposition of an $n \times n$ matrix $A$ written below

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} l_{11} & & & \\ l_{21} & l_{22} & & \\ \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{bmatrix} \tag{101}$$

It can be noted that a total of $n^2$ terms are given on the left-hand side, whereas a total of $n^2 + n$ unknowns are recognized on the right-hand side. Therefore, in order to uniquely determine the matrix elements, the $n$ principal diagonal elements are a priori prescribed as known values. Depending on the way of prescribing the diagonal elements, LU decomposition is classified into three groups:

• Crout method, $u_{ii} = 1$

• Doolittle method, $l_{ii} = 1$

• Choleski method, $l_{ii} = u_{ii}$ (equivalently $U = L^T$ for symmetric $A$)

Let us first confine our interest to the Crout method, where $u_{ii} = 1$ is pre-assigned. By expanding equation (101) explicitly, we have

$$\begin{bmatrix} a_{11} & a_{12} & \cdots \\ a_{21} & a_{22} & \cdots \\ a_{31} & a_{32} & \cdots \\ \vdots & \vdots & \end{bmatrix} = \begin{bmatrix} l_{11} & l_{11} u_{12} & \cdots \\ l_{21} & l_{21} u_{12} + l_{22} & \cdots \\ l_{31} & l_{31} u_{12} + l_{32} & \cdots \\ \vdots & \vdots & \end{bmatrix} \tag{102}$$

where only a few leftmost columns are displayed. Noting that $u_{11} = 1$, all the elements in the first column can be determined as

$$l_{i1} = a_{i1}, \qquad i = 1, 2, \ldots, n$$

Next, the elements $u_{1j}$ in the first row can be found with the known $l_{11}$, i.e.

$$u_{1j} = \frac{a_{1j}}{l_{11}}, \qquad j = 2, 3, \ldots, n$$


This procedure also continues to find $l_{22}, l_{32}, \ldots, l_{n2}$ (in the vertical direction) from

$$l_{i2} = a_{i2} - l_{i1} u_{12}, \qquad i = 2, 3, \ldots, n$$

Then, the next step in turn moves to the horizontal direction. Focusing only on the unknowns to be determined in sequence, this procedure can be schematically envisioned as below

$$\begin{bmatrix} l_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ l_{21} & l_{22} & u_{23} & \cdots & u_{2n} \\ l_{31} & l_{32} & l_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{bmatrix} \tag{103}$$

where the horizontal and vertical blocks designate the sequence of operations: stage $s$ first fills the $s$-th column of $L$ downward and then the $s$-th row of $U$ to the right.

Another interesting feature is that both the lower- and upper-triangular matrices can be stored in a single matrix, as in equation (103). This is because the diagonal elements of the upper-triangular matrix are already specified to be unity in the Crout method. A similar treatment can be adopted to find an LU decomposition by the Doolittle method (i.e. $l_{ii} = 1$).

An explicit algorithm to develop a function can be summarized below. First, $a_{ij} = \sum_{k=1}^{n} l_{ik} u_{kj}$ is expanded depending on $i \ge j$ or $i < j$.

For $i \ge j$ (i.e. lower-triangular and diagonal elements),

$$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj} \tag{104}$$

where $u_{kj} = 0$ for $k > j$ and $u_{jj} = 1$ (i.e. the Crout convention) were inserted in the above. An example is $l_{i1} = a_{i1}$ for $j = 1$.


For $i < j$ (i.e. upper-triangular elements),

$$u_{ij} = \frac{1}{l_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj} \right) \tag{105}$$

where $l_{ik} = 0$ for $k > i$ (i.e. the lower-triangular structure of $L$) was used in the above.

☑ If a zero divisor occurs in equation (105), an alternative PLU decomposition

should be employed (see Appendix).

//----------------------------------------------------------------------

// LU decomposition

//----------------------------------------------------------------------

void user_lu(matrix& A) {

n = A.m;

L = U = .zeros(n);

for.s(1,n) { // for(s=1; s<=n; s++)

for.i(s,n) L(i,s) = A(i,s) - L..(i)(1:s-1) ** U..(1:s-1)(s); // vertical direction, eq.(104)

U(s,s) = 1; // horizontal direction

for.j(s+1,n) U(s,j) = (A(s,j) - L..(s)(1:s-1) ** U..(1:s-1)(j))/L(s,s); // eq.(105)

[ L, U ] ;; // display L..(j:n)(j), U..(i)(i:n)

}

}

(Example 10) Given $A = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 3 & -2 \\ 1 & -2 & 5 \end{bmatrix}$, find an LU decomposition by the Crout method.

Referring to equation (103), the matrix elements are determined in sequence, one column of $L$ downward and then one row of $U$ to the right at each stage. This can be done by calling the user function defined above.
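Carrying out (104) and (105) by hand reproduces the staged output shown below:

$$l_{11} = 2, \quad l_{21} = -1, \quad l_{31} = 1, \qquad u_{12} = -\tfrac{1}{2}, \quad u_{13} = \tfrac{1}{2}$$

$$l_{22} = 3 - (-1)\left(-\tfrac{1}{2}\right) = 2.5, \quad l_{32} = -2 - (1)\left(-\tfrac{1}{2}\right) = -1.5, \qquad u_{23} = \frac{-2 - (-1)\left(\tfrac{1}{2}\right)}{2.5} = -0.6$$

$$l_{33} = 5 - (1)\left(\tfrac{1}{2}\right) - (-1.5)(-0.6) = 3.6$$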

#> A = [ 2,-1,1; -1,3,-2; 1,-2,5 ];


A =

[ 2 -1 1 ]

[ -1 3 -2 ]

[ 1 -2 5 ]

#> user_lu(A);

ans =

[ 2 0 0 1 -0.5 0.5 ]

[ -1 0 0 0 0 0 ]

[ 1 0 0 0 0 0 ]

ans =

[ 2 0 0 1 -0.5 0.5 ]

[ -1 2.5 0 0 1 -0.6 ]

[ 1 -1.5 0 0 0 0 ]

ans =

[ 2 0 0 1 -0.5 0.5 ]

[ -1 2.5 0 0 1 -0.6 ]

[ 1 -1.5 3.6 0 0 1 ]

Using a built-in function is more convenient as follows.

#> (L,U) = A.lu; // tuple assignment by LU decomposition

L =

[ 2 0 0 ]

[ -1 2.5 0 ]

[ 1 -1.5 3.6 ]

U =

[ 1 -0.5 0.5 ]

[ 0 1 -0.6 ]

[ 0 0 1 ]

Section 3-12 Choleski Decomposition (optional)

■ Choleski decomposition. A large number of matrices arising from applications are symmetric and positive-definite. A symmetric and positive-definite matrix can be uniquely decomposed by the Choleski method into

$$A = U^T U \tag{106}$$

where $L = U^T$ is incorporated. In the Choleski method, the diagonal elements are first determined by modifying equation (104) with $l_{ij} = u_{ji}$:


$$u_{ii} = \sqrt{\,a_{ii} - \sum_{k=1}^{i-1} u_{ki}^2\,} \tag{107}$$

Subsequently, for the case of $i < j$, equation (105) is used to give

$$u_{ij} = \frac{1}{u_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} u_{ki} u_{kj} \right) \tag{108}$$

where $l_{ik} = u_{ki}$ has been substituted.
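For the matrix of Example 10, relations (107) and (108) give, by hand,

$$u_{11} = \sqrt{2} \approx 1.41421, \quad u_{12} = -\tfrac{1}{\sqrt{2}}, \quad u_{13} = \tfrac{1}{\sqrt{2}}, \qquad u_{22} = \sqrt{3 - \tfrac{1}{2}} \approx 1.58114$$

$$u_{23} = \frac{-2 + \tfrac{1}{2}}{\sqrt{2.5}} \approx -0.948683, \qquad u_{33} = \sqrt{5 - \tfrac{1}{2} - 0.9} = \sqrt{3.6} \approx 1.89737$$

in agreement with the output of Example 11 below.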

//----------------------------------------------------------------------

// Choleski decomposition

//----------------------------------------------------------------------

void user_chol(matrix& A) {

n = A.m;

U = .zeros(n);

for.i(1,n) {

sum = A(i,i) - U..(1:i-1)(i).norm22; // radicand of equation (107)

if( sum < 0 ) { "negative diagonal in Choleski";; break; }

U(i,i) = sqrt(sum);

for.j(i+1,n) U(i,j) = (A(i,j) - U..(1:i-1)(i) ** U..(1:i-1)(j))/U(i,i);

U ;; // display U..(i)(i:n)

}

}

(Example 11) Find a Choleski decomposition of $A = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 3 & -2 \\ 1 & -2 & 5 \end{bmatrix}$.

For symmetric matrices, there is no need to consider the elements in the lower-triangular region. Therefore, the Choleski decomposition is performed only for the upper-triangular matrix $U$.

#> A = [ 2,-1,1; -1,3,-2; 1,-2,5 ];

A =

[ 2 -1 1 ]

[ -1 3 -2 ]

[ 1 -2 5 ]


#> user_chol(A); // user function listed above

ans =

[ 1.41421 -0.707107 0.707107 ]

[ 0 0 0 ]

[ 0 0 0 ]

ans =

[ 1.41421 -0.707107 0.707107 ]

[ 0 1.58114 -0.948683 ]

[ 0 0 0 ]

ans =

[ 1.41421 -0.707107 0.707107 ]

[ 0 1.58114 -0.948683 ]

[ 0 0 1.89737 ]

#> A.chol; // built-in function

ans =

[ 1.41421 -0.707107 0.707107 ]

[ 0 1.58114 -0.948683 ]

[ 0 0 1.89737 ]
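The factor can be checked by reassembling the product $U^T U$; a minimal check, reusing only operators already shown in this chapter, is

#> U = A.chol;;
#> (U.tr * U).trun10; // should reproduce A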

Section 3-13 QR Decomposition (optional)

The QR decomposition is a prerequisite for understanding the QR method. In this section, the Gram-Schmidt orthonormalization process is used to find the QR decomposition.

■ Gram-Schmidt method and QR decomposition. If three independent unit vectors $\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3$ satisfy

$$\mathbf{q}_i \cdot \mathbf{q}_j = \delta_{ij} \tag{109}$$

then they are called an orthonormal basis. Also, a projection vector of $\mathbf{a}$ onto a unit vector $\mathbf{q}$ is defined as

$$\operatorname{proj}_{\mathbf{q}} \mathbf{a} = (\mathbf{a} \cdot \mathbf{q})\, \mathbf{q} \tag{110}$$

With the use of projection vectors, any three independent vectors $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$ can be transformed into an orthonormal basis. This is called the Gram-Schmidt method, and it begins with normalizing the first vector

$$\mathbf{q}_1 = \frac{\mathbf{a}_1}{\|\mathbf{a}_1\|}$$


Next, using the given vector $\mathbf{a}_2$ and the unit vector $\mathbf{q}_1$, a new vector

$$\mathbf{v}_2 = \mathbf{a}_2 - (\mathbf{a}_2 \cdot \mathbf{q}_1)\, \mathbf{q}_1$$

can be constructed, which is orthogonal to the unit vector $\mathbf{q}_1$ since

$$\mathbf{v}_2 \cdot \mathbf{q}_1 = \mathbf{a}_2 \cdot \mathbf{q}_1 - (\mathbf{a}_2 \cdot \mathbf{q}_1)(\mathbf{q}_1 \cdot \mathbf{q}_1) = 0 \tag{111}$$

Therefore, the vector $\mathbf{v}_2$ is again normalized as

$$\mathbf{q}_2 = \frac{\mathbf{v}_2}{\|\mathbf{v}_2\|}$$

Up to now, two unit vectors orthogonal to each other are obtained. The last vector is constructed as

$$\mathbf{v}_3 = \mathbf{a}_3 - (\mathbf{a}_3 \cdot \mathbf{q}_1)\, \mathbf{q}_1 - (\mathbf{a}_3 \cdot \mathbf{q}_2)\, \mathbf{q}_2, \qquad \mathbf{q}_3 = \frac{\mathbf{v}_3}{\|\mathbf{v}_3\|} \tag{112}$$

which also satisfies $\mathbf{v}_3 \cdot \mathbf{q}_1 = \mathbf{v}_3 \cdot \mathbf{q}_2 = 0$, i.e. it is orthogonal to the foregoing two unit vectors. This procedure can be easily extended to an $n$-dimensional space.

//----------------------------------------------------------------------

// QR decomposition

//----------------------------------------------------------------------

matrix user_qr(matrix& A) {

n = A.m;

Q = A.unit;

S = .I(n);

for.i(2,n) { // for(i=2; i<=n; i++) {

x = Q.col(i-1);

S -= x * x.tr; // S becomes the projector onto the complement of the columns found so far

Q.col(i) = ( S * Q.col(i) ).unit.trun12; }

return Q;

}


(Example 12) Given $\mathbf{a}_1 = (1, 1, 0)^T$, $\mathbf{a}_2 = (-1, -1, 1)^T$, $\mathbf{a}_3 = (3, -2, 1)^T$, find an orthonormal basis by the Gram-Schmidt method.

First construct a matrix having the three vectors as column vectors

$$A = [\, \mathbf{a}_1\ \ \mathbf{a}_2\ \ \mathbf{a}_3 \,] = \begin{bmatrix} 1 & -1 & 3 \\ 1 & -1 & -2 \\ 0 & 1 & 1 \end{bmatrix} \tag{113}$$

A first unit vector is determined from the first column as

$$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\,(1, 1, 0)^T$$

Next, calculating

$$\mathbf{v}_2 = \mathbf{a}_2 - (\mathbf{a}_2 \cdot \mathbf{q}_1)\, \mathbf{q}_1 = (-1, -1, 1)^T + \sqrt{2}\cdot\frac{1}{\sqrt{2}}(1, 1, 0)^T = (0, 0, 1)^T$$

and normalizing yields the second unit vector

$$\mathbf{q}_2 = (0, 0, 1)^T \tag{114}$$

Finally

$$\mathbf{v}_3 = \mathbf{a}_3 - (\mathbf{a}_3 \cdot \mathbf{q}_1)\, \mathbf{q}_1 - (\mathbf{a}_3 \cdot \mathbf{q}_2)\, \mathbf{q}_2 = (3, -2, 1)^T - \tfrac{1}{2}(1, 1, 0)^T - (0, 0, 1)^T = \left(\tfrac{5}{2}, -\tfrac{5}{2}, 0\right)^T$$

or

$$\mathbf{q}_3 = \frac{1}{\sqrt{2}}\,(1, -1, 0)^T$$

is obtained. With the foregoing unit vectors, an orthogonal matrix


$$Q = [\, \mathbf{q}_1\ \ \mathbf{q}_2\ \ \mathbf{q}_3 \,] = \begin{bmatrix} 1/\sqrt{2} & 0 & 1/\sqrt{2} \\ 1/\sqrt{2} & 0 & -1/\sqrt{2} \\ 0 & 1 & 0 \end{bmatrix} \tag{115}$$

is defined. Also, the user function 'user_qr' defined above can be used as follows.

#> A = [ 1,-1,3; 1,-1,-2; 0,1,1 ];

A =

[ 1 -1 3 ]

[ 1 -1 -2 ]

[ 0 1 1 ]

#> sqrt(2)*user_qr(A); // user function listed above

ans =

[ 1 0 1 ]

[ 1 0 -1 ]

[ 0 1.41421 0 ]

Then, since $Q^{-1} = Q^T$ for an orthogonal matrix, the product

$$Q^T A = \begin{bmatrix} \sqrt{2} & -\sqrt{2} & 1/\sqrt{2} \\ 0 & 1 & 1 \\ 0 & 0 & 5/\sqrt{2} \end{bmatrix}$$

can be determined. It is noteworthy that the above matrix is upper-triangular (as a matter of fact, this factor plays the same role as $U$ in the LU decomposition). By defining an upper-triangular matrix such that

$$R = Q^T A \tag{116}$$

we have the following QR decomposition,

$$A = QR \tag{117}$$

As was done in the above, the Gram-Schmidt method is applied to a given matrix $A$ to find an orthogonal matrix $Q$. Then, the QR decomposition can be


determined. By the way, it has been conventional to denote $R$ instead of $U$ for the upper-triangular matrix in the QR decomposition.

Using a built-in function is more convenient as follows.

// tuple assignment by QR decomposition

#> (Q,R) = A.qr;; Q.trun10; R.trun10;

ans =

[ 0.707107 0 0.707107 ]

[ 0.707107 0 -0.707107 ]

[ 0 1 0 ]

ans =

[ 1.41421 -1.41421 0.707107 ]

[ 0 1 1 ]

[ 0 0 3.53553 ]
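The factors can be verified directly; since $Q$ is orthogonal, a minimal check using only operators already shown above is

#> (Q.tr * Q).trun10; // should give the identity matrix
#> (Q * R - A).trun10; // should give the zero matrix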

Section 3-14 PLU Decomposition (advanced)

In this section, a theoretical approach to the PLU decomposition is discussed, where $P$ represents a permutation matrix. In particular, an appropriate interchanging of rows allows for a decomposition $A = PLU$ as long as the given matrix $A$ is nonsingular.

■ Permutation. For the theoretical treatment henceforth, let us represent the elementary row operations in terms of matrix operations. As a beginning step, a simple $3 \times 3$ matrix is considered to define a permutation matrix $P$, for example

$$P_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{118}$$

which interchanges the first and second rows of any matrix it multiplies from the left. Let $P_{jk}$ be a permutation matrix which interchanges the $j$-th and $k$-th rows. Then the permutation matrix appears as


$$P_{jk} = \begin{bmatrix} \ddots & & & & \\ & 0 & \cdots & 1 & \\ & \vdots & \ddots & \vdots & \\ & 1 & \cdots & 0 & \\ & & & & \ddots \end{bmatrix} \tag{119}$$

where all other unmarked elements are zeros: the diagonal carries 1 except at positions $(j,j)$ and $(k,k)$, and the off-diagonal 1's sit at positions $(j,k)$ and $(k,j)$. Note that multiplying a matrix twice by the same permutation matrix sends it back to the original matrix. Therefore, it can be easily expected that

$$P_{jk} P_{jk} = I, \qquad P_{jk}^{-1} = P_{jk} \tag{120}$$
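For instance, the $3 \times 3$ interchange of rows 1 and 3 used in Example 14 below satisfies

$$P_{13} P_{13} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} = I$$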

(Example 13) A matrix $A$ is permuted as below

Find the corresponding permutation matrix $P$.

Transformation from the original matrix to the permuted one involves two row interchanges; the row of smaller index is interchanged first. Hence the final permutation matrix becomes the product of the two elementary permutations

$$(121)$$

where the earlier permutation is located on the rightmost side (permutation matrices act on $A$ from the left, so the operation performed first is written closest to $A$). This can be reaffirmed by performing the actual multiplication.

■ Gauss elimination. The elementary row operations which make all the elements below the pivot zeros can be represented equivalently by matrix operations.


For example, for the case of $3 \times 3$ matrices, the Gauss elimination can be re-written by employing a simple Householder matrix $M_1$ such that

$$M_1 A = \begin{bmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ -m_{31} & 0 & 1 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & a'_{32} & a'_{33} \end{bmatrix}, \qquad m_{i1} = \frac{a_{i1}}{a_{11}} \tag{122}$$

where the subscript 1 in the simple Householder matrix reflects the fact that it is applied to the first column. Subsequently, a simple Householder matrix $M_2$ zeros all the elements below $a'_{22}$, as shown below

$$M_2 (M_1 A) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -m_{32} & 1 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & a'_{32} & a'_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & 0 & a''_{33} \end{bmatrix}, \qquad m_{32} = \frac{a'_{32}}{a'_{22}}$$

Meanwhile, the inverses of simple Householder matrices are

$$M_1^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & 0 & 1 \end{bmatrix}, \qquad M_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & m_{32} & 1 \end{bmatrix} \tag{123}$$

where only the signs have been changed from the original matrices. Also, the product of all the simple Householder matrices can be obtained just by superposing all the nonzero column elements in the lower-triangular region, namely


$$M_1^{-1} M_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & m_{32} & 1 \end{bmatrix} = L \tag{124}$$
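For instance, with the multipliers that arise in Example 14 below ($m_{21} = 0.75$, $m_{31} = 0$, $m_{32} = -0.5$), equation (124) gives exactly the factor $L$ printed there:

$$L = \begin{bmatrix} 1 & 0 & 0 \\ 0.75 & 1 & 0 \\ 0 & -0.5 & 1 \end{bmatrix}$$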

In fact, the matrix representation of Gauss elimination is a convenient tool for theoretical treatment.

■ Gauss elimination and PLU decomposition. At this point, let us discuss again the Gauss elimination process from the standpoint of matrix operations. The elementary row operations acting on a matrix $A$ during the Gauss elimination can be represented by pairs of operations (i.e. a permutation followed by a simple Householder transformation). Therefore, the Gauss elimination can be stated as

$$M_{n-1} P_{n-1} \cdots M_2 P_2 M_1 P_1 A = U \tag{125}$$

where $P_i$ denotes a permutation matrix (i.e. pivoting). Of course, the final result of the Gauss elimination is an upper-triangular matrix $U$, as appears on the right-hand side.

In general, commuting $M_i$ and $P_j$ is admitted in linear algebra (without proof), so that equation (125) can be re-written as

$$\left( \tilde{M}_{n-1} \cdots \tilde{M}_2 \tilde{M}_1 \right) \left( P_{n-1} \cdots P_2 P_1 \right) A = U$$

Since a simple Householder matrix is a lower-triangular matrix, their product is also lower-triangular. Therefore, the above equation becomes

$$\tilde{L}^{-1} \tilde{P} A = U, \qquad \tilde{L}^{-1} = \tilde{M}_{n-1} \cdots \tilde{M}_1, \quad \tilde{P} = P_{n-1} \cdots P_1 \tag{126}$$

At a last step, by defining

$$P = \tilde{P}^{-1} = \tilde{P}^T, \qquad L = \tilde{L} \tag{127}$$

the original matrix is found to have a PLU decomposition shown below

$$A = P L U \tag{128}$$


where the product of permutation matrices $\tilde{P}$ is orthogonal, i.e. $\tilde{P}^{-1} = \tilde{P}^T$. Note that the principal diagonal elements of the lower-triangular matrix $L$ are all unity, i.e. $l_{ii} = 1$.

(Example 14) Given a matrix $A = \begin{bmatrix} 0 & 2 & 1 \\ 3 & 2 & 0 \\ 4 & 8 & -16 \end{bmatrix}$, find a PLU decomposition, and find a solution of $Ax = b$ with a source vector $b = (13, 16, 0)^T$.

Since $a_{11} = 0$, LU decomposition is impossible as it is. Therefore, PLU decomposition is in demand. So, by interchanging the first and third rows,

$$P_1 A = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} A = \begin{bmatrix} 4 & 8 & -16 \\ 3 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix}$$

is obtained. And multiplying a simple Householder matrix $M_1$ yields

$$M_1 P_1 A = \begin{bmatrix} 1 & 0 & 0 \\ -0.75 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 4 & 8 & -16 \\ 3 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 4 & 8 & -16 \\ 0 & -4 & 12 \\ 0 & 2 & 1 \end{bmatrix}$$

In the above, $-4$ is the greatest absolute element in its column, so that pivoting is not necessary. Subsequently, a simple Householder matrix $M_2$ is multiplied to get an upper-triangular matrix

$$M_2 M_1 P_1 A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0.5 & 1 \end{bmatrix} \begin{bmatrix} 4 & 8 & -16 \\ 0 & -4 & 12 \\ 0 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 4 & 8 & -16 \\ 0 & -4 & 12 \\ 0 & 0 & 7 \end{bmatrix} = U$$

From the foregoing computation, all the matrices


$$P = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad L = M_1^{-1} M_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0.75 & 1 & 0 \\ 0 & -0.5 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 4 & 8 & -16 \\ 0 & -4 & 12 \\ 0 & 0 & 7 \end{bmatrix}$$

and the PLU decomposition

$$A = PLU \tag{129}$$

#> A = [ 0,2,1; 3,2,0; 4,8,-16 ]; (P,L,U) = A.plu;

A =

[ 0 2 1 ]

[ 3 2 0 ]

[ 4 8 -16 ]

P =

[ 0 0 1 ]

[ 0 1 0 ]

[ 1 0 0 ]

L =

[ 1 0 0 ]

[ 0.75 1 0 ]

[ 0 -0.5 1 ]

U =

[ 4 8 -16 ]

[ 0 -4 12 ]

[ 0 0 7 ]

#> b = [ 13, 16, 0 ].tr; x = U \ ( L \ (P.tr * b) );

b =

[ 13 ]

[ 16 ]

[ 0 ]

x =

[ 2 ]

[ 5 ]

[ 3 ]

is obtained, in agreement with the hand computation above. Meanwhile, solving $LUx = P^T b$ with the PLU decomposition gives the solution $x = (2, 5, 3)^T$.

■ Comparison between PLU and LU decomposition. The efficiency of LU decomposition can be substantially enhanced by performing PLU decomposition instead. For certain matrices, a direct LU decomposition produces dense triangular factors with many unknowns, whereas a suitable row permutation of the same matrix yields factors with only a few nonzero unknowns; the difference in computational cost between the two decompositions becomes steeply wider as $n$ increases. On the other hand, using $A = PLU$, the linear system $Ax = b$ can be written as

$$L U x = P^T b \tag{130}$$

from which a solution can be easily determined by one forward and one backward substitution.

//----------------------------------------------------------------------

// end of file

//----------------------------------------------------------------------

