Practical Numerical Training UKNum - Lecture 7 (mordasini/UKNUM/Lecture_07.pdf)
Page 1 (2012-02-28)

Practical Numerical Training UKNum7: Systems of linear equations

C. Mordasini

Max Planck Institute for Astronomy, Heidelberg

Program:1) Introduction2) Gauss Elimination3) Gauss with Pivoting4) Determinants5) LU Decomposition6) Inverse of a matrix with LU

Page 2

1 Introduction

Page 3

Task

• One of the most important numerical tasks: solve a system of linear equations.

• N unknowns:

[Sample page from NUMERICAL RECIPES IN FORTRAN 77: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43064-X), Copyright (C) 1986-1992 by Cambridge University Press. Programs Copyright (C) 1986-1992 by Numerical Recipes Software. http://www.nr.com]

Chapter 2. Solution of Linear Algebraic Equations

2.0 Introduction

A set of linear algebraic equations looks like this:

a11x1 + a12x2 + a13x3 + · · · + a1NxN = b1

a21x1 + a22x2 + a23x3 + · · · + a2NxN = b2

a31x1 + a32x2 + a33x3 + · · · + a3NxN = b3

· · · · · ·

aM1x1 + aM2x2 + aM3x3 + · · · + aMNxN = bM

(2.0.1)

Here the N unknowns xj, j = 1, 2, . . . , N are related by M equations. The coefficients aij with i = 1, 2, . . . , M and j = 1, 2, . . . , N are known numbers, as are the right-hand side quantities bi, i = 1, 2, . . . , M.

Nonsingular versus Singular Sets of Equations

If N = M then there are as many equations as unknowns, and there is a good chance of solving for a unique solution set of xj's. Analytically, there can fail to be a unique solution if one or more of the M equations is a linear combination of the others, a condition called row degeneracy, or if all equations contain certain variables only in exactly the same linear combination, called column degeneracy. (For square matrices, a row degeneracy implies a column degeneracy, and vice versa.) A set of equations that is degenerate is called singular. We will consider singular matrices in some detail in §2.6.

Numerically, at least two additional things can go wrong:
• While not exact linear combinations of each other, some of the equations may be so close to linearly dependent that roundoff errors in the machine render them linearly dependent at some stage in the solution process. In this case your numerical procedure will fail, and it can tell you that it has failed.



• M equations, with the known coefficients


• And known right-hand side


Page 4

(Non-)Singular systems

• If N = M: same number of equations and unknowns.

• Chance to find a unique solution.

• But not always:
  • One or several of the M equations is a linear combination of the other ones (row degeneracy).
  • All equations contain certain variables only in exactly the same linear combination (column degeneracy).
  • For square matrices, one implies the other.

• Such matrices are called singular.

• This is an analytical problem. Additional conditions arise from the numerical treatment of the task.
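As a concrete numerical illustration of row degeneracy (my own example, not from the slides): the second equation below is exactly twice the first, so the matrix is singular and a library solver refuses it.

```python
import numpy as np

# A row-degenerate (singular) system: the second equation is exactly
# twice the first, so there is no unique solution.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

print("det(A) =", np.linalg.det(A))  # determinant is 0 for a singular matrix

try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print("solver refuses:", err)
```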

Page 5

Numerical issues

• While not exact linear combinations of each other, some of the equations may be so close to linearly dependent that roundoff errors in the machine render them linearly dependent at some stage in the solution process. In this case your numerical procedure will fail, even if an analytical solution exists.

•Accumulated roundoff errors in the solution process can swamp the true solution. This problem particularly emerges if N is too large. The numerical procedure does not fail algorithmically. However, it returns a set of x’s that are wrong, as can be discovered by direct substitution back into the original equations. The closer a set of equations is to being singular, the more likely this is to happen, since increasingly close cancellations will occur during the solution.

• In fact, the preceding item can be viewed as the special case where the loss of significance is unfortunately total.

Press et al.
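The second failure mode can be demonstrated with a Hilbert matrix, a standard textbook example of a nearly singular system (this sketch is my illustration, not from the lecture): the solver reports no error, yet in single precision the accumulated roundoff swamps the true solution.

```python
import numpy as np

# Hilbert matrices are notoriously ill conditioned: their rows are
# nearly linearly dependent, so close cancellations amplify roundoff.
n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true  # right-hand side chosen so the exact solution is all ones

print("condition number:", np.linalg.cond(H))

# Double precision still recovers x reasonably well...
x64 = np.linalg.solve(H, b)
# ...but single precision returns a wrong answer without failing
# algorithmically, exactly the pathology described above.
x32 = np.linalg.solve(H.astype(np.float32), b.astype(np.float32))

print("max error (float64):", np.max(np.abs(x64 - x_true)))
print("max error (float32):", np.max(np.abs(x32 - x_true)))
```

Direct substitution of x32 back into the original equations is how the failure can be discovered in practice.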

Page 6

Matrix form

• Our master equation is usually written in the form of a matrix equation:


• Accumulated roundoff errors in the solution process can swamp the true solution. This problem particularly emerges if N is too large. The numerical procedure does not fail algorithmically. However, it returns a set of x's that are wrong, as can be discovered by direct substitution back into the original equations. The closer a set of equations is to being singular, the more likely this is to happen, since increasingly close cancellations will occur during the solution. In fact, the preceding item can be viewed as the special case where the loss of significance is unfortunately total.

Much of the sophistication of complicated "linear equation-solving packages" is devoted to the detection and/or correction of these two pathologies. As you work with large linear sets of equations, you will develop a feeling for when such sophistication is needed. It is difficult to give any firm guidelines, since there is no such thing as a "typical" linear problem. But here is a rough idea: Linear sets with N as large as 20 or 50 can be routinely solved in single precision (32 bit floating representations) without resorting to sophisticated methods, if the equations are not close to singular. With double precision (60 or 64 bits), this number can readily be extended to N as large as several hundred, after which point the limiting factor is generally machine time, not accuracy.

Even larger linear sets, N in the thousands or greater, can be solved when the coefficients are sparse (that is, mostly zero), by methods that take advantage of the sparseness. We discuss this further in §2.7.

At the other end of the spectrum, one seems just as often to encounter linear problems which, by their underlying nature, are close to singular. In this case, you might need to resort to sophisticated methods even for the case of N = 10 (though rarely for N = 5). Singular value decomposition (§2.6) is a technique that can sometimes turn singular problems into nonsingular ones, in which case additional sophistication becomes unnecessary.

Matrices

Equation (2.0.1) can be written in matrix form as

A · x = b (2.0.2)

Here the raised dot denotes matrix multiplication, A is the matrix of coefficients, and b is the right-hand side written as a column vector,

A = ( a11  a12  ...  a1N )        ( b1 )
    ( a21  a22  ...  a2N )    b = ( b2 )
    ( ...                )        ( ...)
    ( aM1  aM2  ...  aMN )        ( bM )        (2.0.3)

By convention, the first index on an element aij denotes its row, the second index its column. A computer will store the matrix A as a two-dimensional array. However, computer memory is numbered sequentially by its address, and so is intrinsically one-dimensional. Therefore the two-dimensional array A will, at the hardware level, either be stored by columns in the order

a11, a21, . . . , aM1, a12, a22, . . . , aM2, . . . , a1N , a2N , . . . aMN
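This column-major ordering is how Fortran stores arrays; C (and NumPy by default) uses row-major order instead. The two layouts can be inspected directly in NumPy (a small sketch of mine, not from the lecture):

```python
import numpy as np

# The same 2x3 matrix; ravel() exposes the underlying linear ordering.
A = np.array([[11, 12, 13],
              [21, 22, 23]])

print(A.ravel(order='F'))  # column-major (Fortran): [11 21 12 22 13 23]
print(A.ravel(order='C'))  # row-major (C, NumPy default): [11 12 13 21 22 23]
```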

• A is the matrix of coefficients, and b is the right-hand side written as a column vector


• By convention, the first index on an element aij denotes its row, the second index its column.

Page 7

Special matrices
In this context, several special types of matrices are important:

1) Unit matrix
A diagonal matrix with all diagonal elements equal to one is called an identity (unit) matrix.

2) Upper triangular matrix
All the elements below the diagonal entries are zero.

Page 8

Special matrices II

3) Lower triangular matrix
All the elements above the diagonal entries are zero.

4) Tridiagonal matrix
A tridiagonal matrix is a square matrix in which all elements are zero except those on the major diagonal, the diagonal above it, and the diagonal below it.
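The special matrix types defined above can be built directly with NumPy (a minimal sketch of my own, not from the slides):

```python
import numpy as np

# The four special matrix types, built from a generic 4x4 matrix.
I = np.eye(4)                     # unit (identity) matrix
A = np.arange(1.0, 17.0).reshape(4, 4)
U = np.triu(A)                    # upper triangular: zeros below the diagonal
L = np.tril(A)                    # lower triangular: zeros above the diagonal
T = np.triu(np.tril(A, 1), -1)   # tridiagonal: main diagonal plus one band
                                  # above and one band below

print(U)
print(T)
```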

Page 9

Other special matrices

• In many situations (e.g. solutions of partial differential equations in two or more dimensions), one is dealing with so-called sparse linear systems, i.e. matrices where only a small fraction of all matrix elements are nonzero.


Figure 2.7.1. Some standard forms for sparse matrices. (a) Band diagonal; (b) block triangular; (c) block tridiagonal; (d) singly bordered block diagonal; (e) doubly bordered block diagonal; (f) singly bordered block triangular; (g) bordered band-triangular; (h) and (i) singly and doubly bordered band diagonal; (j) and (k) other! (after Tewarson) [1].


• For such matrices, one should use specialized solvers, as they can be much faster than general-purpose solvers.

• For large systems (N ~ 1000-10000), one should consider pre-written specialized libraries, like LAPACK, which is freely available.

Press et al.

Page 10

Methods to solve

• In this lecture, we consider the following methods:
  • Naive Gaussian Elimination
  • Gaussian Elimination with Partial Pivoting
  • LU Decomposition
  • (Gauss-Seidel Iteration)

• For large systems (N ~ 1000-10000), one wants to consider pre-written specialized libraries, like LAPACK, which is freely available. They have also been parallelized and ported onto GPUs.
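For illustration, NumPy's solver is exactly such a pre-written routine: numpy.linalg.solve wraps LAPACK's gesv driver (LU factorization with partial pivoting). A sketch with a random system of the size mentioned above:

```python
import numpy as np

# numpy.linalg.solve calls LAPACK's gesv routine under the hood,
# i.e. a heavily optimized, pre-written library solver.
rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)

# Check the solution by direct substitution into the original equations.
print("max residual:", np.max(np.abs(A @ x - b)))
```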

Page 11

2 Gaussian Elimination

Page 12

Naive Gaussian elimination

• Two-step procedure:

• Forward elimination: in this step, an unknown is eliminated in each equation, starting with the first equation. This reduces the system to upper triangular form, in which the last equation contains only one unknown.

• Back substitution: in this step, starting from the last equation, each of the unknowns is found in turn.
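The two-step procedure can be sketched in Python as follows (a minimal illustration of mine, not code from the lecture; it assumes every pivot element stays nonzero, which is exactly why this version is called "naive"):

```python
import numpy as np

def naive_gauss(A, b):
    """Solve A x = b by naive Gaussian elimination (no pivoting).

    Assumes all pivot elements A[k, k] stay nonzero.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: n-1 steps, producing an upper triangular system.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]   # a_ik / a_kk
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution: solve from the last equation upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Small check on a 3x3 system with known solution x = (2, 3, -1).
A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(naive_gauss(A, b))
```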

Page 13

Forward elimination goal
The goal of forward elimination is to transform the coefficient matrix into an upper triangular matrix:

Page 14

Initial equations
A set of n equations and n unknowns:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn

To bring this to the form of an upper triangular matrix, (n-1) steps of forward elimination are needed.

Page 15

Forward elimination step 1 a
In the first step of forward elimination, the first unknown x1 is eliminated from all rows below the first row. The first equation is selected as the pivot equation for eliminating x1. To eliminate x1 in the second equation, one divides the first equation by a11 (hence called the pivot element) and then multiplies it by a21.

This is the same as multiplying the first equation by a21/a11 to give

a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1

Page 16

Forward elimination step 1 b
Now, this equation can be subtracted from the second equation to give

(a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1

Note that x1 has been eliminated from the new second equation.

Page 17

Forward elimination step 1 c
We can also write this new second equation as

a'22 x2 + a'23 x3 + ... + a'2n xn = b'2

where

a'22 = a22 - (a21/a11) a12
...
a'2n = a2n - (a21/a11) a1n
b'2 = b2 - (a21/a11) b1

Page 18

Forward elimination step 1 d
Repeat this procedure of eliminating x1 for the remaining equations (3 to n) to reduce the set of equations to

a11 x1 + a12 x2 + ... + a1n xn = b1
         a'22 x2 + ... + a'2n xn = b'2
         a'32 x2 + ... + a'3n xn = b'3
         ...
         a'n2 x2 + ... + a'nn xn = b'n

This is the end of step 1.

Page 19

Forward elimination step 2
For the second step of forward elimination, we start with the second equation as the pivot equation and a'22 as the pivot element. To eliminate x2 in the third equation, one thus divides the second equation by a'22 (the pivot element) and then multiplies it by a'32. Then we subtract this from the third equation. This makes the coefficient of x2 zero in the third equation. The same procedure is now repeated from the fourth equation to the nth equation, to give

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                  a''33 x3 + ... + a''3n xn = b''3
                  ...
                  a''n3 x3 + ... + a''nn xn = b''n

Page 20

Forward elimination: further steps
The next steps of forward elimination are conducted by using the third equation as pivot equation, and so on. Thus, there will be a total of (n-1) steps of forward elimination. At the end of (n-1) steps, we get a set of equations that looks like

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                  a''33 x3 + ... + a''3n xn = b''3
                  ...
                           ann^(n-1) xn = bn^(n-1)

Page 21

Back substitution
Example of a system of 3 equations:

( 25    5      1    ) ( x1 )   ( 106.8  )
(  0   -4.8   -1.56 ) ( x2 ) = ( -96.21 )
(  0    0      0.7  ) ( x3 )   (  0.735 )

Solve each equation starting (obviously) from the last equation, as it has only one unknown:

x3 = 0.735 / 0.7 = 1.05

We have now determined a first unknown quantity.

Page 22

Back substitution IIThen the second last equation, that is the (n-1)th equation, has two unknowns: xn and xn-1, but xn is already known from just before. This reduces the (n-1)th equation also to one unknown. We now iteratively work up to the first equation. Back substitution hence can be represented for all equations by the formula

$$x_i = \frac{b_i^{(i-1)} - a_{i,i+1}^{(i-1)}x_{i+1} - a_{i,i+2}^{(i-1)}x_{i+2} - \dots - a_{i,n}^{(i-1)}x_n}{a_{ii}^{(i-1)}} \qquad \text{for } i = n-1,\dots,1$$

or, more compactly,

$$x_i = \frac{b_i^{(i-1)} - \displaystyle\sum_{j=i+1}^{n} a_{ij}^{(i-1)}x_j}{a_{ii}^{(i-1)}} \qquad \text{for } i = n-1,\dots,1$$
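The back substitution formula translates directly into Python (a sketch; the function name is mine). Here `U` and `c` are the triangular matrix and right-hand side produced by forward elimination:

```python
def back_substitute(U, c):
    """Solve U x = c for an upper-triangular U, working from the last
    equation (one unknown) upward, as in the formula above."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-known unknowns x_{i+1} ... x_n
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x
```

On the rocket system's triangular form, [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]] with c = [106.8, -96.208, 0.76], this returns x roughly [0.29048, 19.690, 1.0857].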


Example I
The upward velocity of a rocket is given at three different times.

The velocity data is approximated by a second-order polynomial, $v(t) = a_1 t^2 + a_2 t + a_3$.

Find the velocity at t=6 seconds.


Example II

We need to determine the unknown coefficients $a_i$. This results in a system with a matrix of the form (a Vandermonde matrix)

$$\begin{bmatrix} t_1^2 & t_1 & 1 \\ t_2^2 & t_2 & 1 \\ t_3^2 & t_3 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

Using the data from the table, the system becomes

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$


Example III
We can also write this as:

1) Forward elimination

Number of steps of forward elimination is (n−1)=(3−1)=2


Example IV
Forward elimination step 1
Divide Equation 1 by 25 and multiply it by 64, i.e. multiply it by $64/25 = 2.56$:

$$[\,25 \;\; 5 \;\; 1 \;\; 106.8\,] \times 2.56 = [\,64 \;\; 12.8 \;\; 2.56 \;\; 273.408\,]$$

Subtract the result from Equation 2 and substitute the new equation for Equation 2.


Example V
Forward elimination step 1, continued
Divide Equation 1 by 25 and multiply it by 144, i.e. multiply it by $144/25 = 5.76$:

$$[\,25 \;\; 5 \;\; 1 \;\; 106.8\,] \times 5.76 = [\,144 \;\; 28.8 \;\; 5.76 \;\; 615.168\,]$$

Subtract the result from Equation 3 and substitute the new equation for Equation 3.


Example VI
Forward elimination step 2
Divide Equation 2 by -4.8 and multiply it by -16.8, i.e. multiply it by $-16.8/-4.8 = 3.5$:

$$[\,0 \;\; {-4.8} \;\; {-1.56} \;\; {-96.208}\,] \times 3.5 = [\,0 \;\; {-16.8} \;\; {-5.46} \;\; {-336.728}\,]$$

Subtract the result from Equation 3 and substitute the new equation for Equation 3.


Example VII
Back substitution

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$

Solving for $a_3$: $a_3 = 0.76/0.7 = 1.0857$. Solving for $a_2$: $a_2 = \dfrac{-96.208 + 1.56\,a_3}{-4.8} = 19.690$.


Example VIII
Back substitution, continued

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$

Solving for $a_1$: $a_1 = \dfrac{106.8 - 5\,a_2 - a_3}{25} = 0.29048$.


Example IX
Solution
The solution vector is $[a_1 \;\; a_2 \;\; a_3]^T = [0.29048 \;\; 19.690 \;\; 1.0857]^T$. The polynomial that passes through the three data points is then $v(t) = 0.29048\,t^2 + 19.690\,t + 1.0857$, giving $v(6) = 0.29048 \times 36 + 19.690 \times 6 + 1.0857 \approx 129.69$ m/s.
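As a cross-check (mine, not part of the lecture), NumPy's linear solver reproduces these coefficients and the velocity at t = 6 s:

```python
import numpy as np

# Vandermonde system for v(t) = a1*t^2 + a2*t + a3 at t = 5, 8, 12 s
A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])

a1, a2, a3 = np.linalg.solve(A, b)   # coefficients of the polynomial
v6 = a1 * 6**2 + a2 * 6 + a3         # velocity at t = 6 s
```

This agrees with the hand calculation: a1 is about 0.29048, a2 about 19.690, a3 about 1.0857, and v(6) about 129.69 m/s.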


Possible issues
1) Division by zero
There are two pitfalls of the Naive Gauss elimination method. Here, we cannot even start the algorithm, as we would have to divide by zero. This problem can also occur later on:


Possible issues II
Here we can start the algorithm. However, after the first step of forward elimination, we get the following equations in matrix form:

Now at the beginning of the 2nd step of forward elimination, the coefficient of x2 in Equation 2 would be used as the pivot element. That element is zero and hence would create the division by zero problem.

Division by zero is a possibility at any step of forward elimination.


Possible issues III
2) Large round-off errors
The Naive Gauss elimination method is prone to round-off errors. This is especially true when there are large numbers of equations, as errors propagate. Subtraction of nearly equal numbers can also create large errors. Example:

Exact Solution


Possible issues IV
Solve it on a computer using 6 significant digits with chopping. One finds

Solve it on a computer using 5 significant digits with chopping

Is there a way to reduce the round-off error? Obviously, increase the number of significant digits (always use double precision). This
• Decreases round-off error
• Does not avoid division by zero


3 Gaussian Elimination with Partial Pivoting


Partial pivoting

This method is an improvement on the algorithm we just saw.

Gaussian Elimination with Partial Pivoting
• Avoids division by zero
• Reduces round-off error

At the beginning of the kth step of forward elimination, find the maximum of

$$\left|a_{kk}^{(k-1)}\right|,\; \left|a_{k+1,k}^{(k-1)}\right|,\; \dots,\; \left|a_{nk}^{(k-1)}\right|$$

If the maximum of these values is in the pth row, then switch rows p and k.

What is Different About Partial Pivoting?

Otherwise, exactly the same algorithm as naive Gauss elimination except that we switch rows before each of the (n-1) steps of forward elimination.
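A minimal Python sketch of the scheme (names are mine): the only change relative to naive elimination is the row search and swap before each elimination step.

```python
def gauss_solve_pivot(A, b):
    """Gauss elimination with partial pivoting, followed by back substitution."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # find the row p >= k with the largest |A[p][k]| and swap it into row k
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution on the triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Unlike the naive version, this handles systems whose first pivot is zero, e.g. the system 0·x1 + x2 = 2, x1 + 0·x2 = 3.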


Switching rows
Matrix form at the beginning of the 2nd step of forward elimination

Switched Rows


Example
We solve the following set of equations by Gaussian elimination with partial pivoting:

Number of steps of forward elimination is again (n−1)=(3−1)=2.


Example II
Forward Elimination: Step 1
• Examine the absolute values of the first column, first row and below.
• The largest absolute value is 144 and is in row 3. Switch rows 1 and 3.
• Begin forward elimination: divide Equation 1 by 144 and multiply it by 64, i.e. multiply it by $64/144 = 0.4444$.


Example III

$$[\,144 \;\; 12 \;\; 1 \;\; 279.2\,] \times 0.4444 = [\,63.99 \;\; 5.333 \;\; 0.4444 \;\; 124.1\,]$$

Subtract the result from Equation 2 and substitute the new equation for Equation 2.

Next, for Equation 3: divide Equation 1 by 144 and multiply it by 25, i.e. multiply it by $25/144 = 0.1736$:

$$[\,144 \;\; 12 \;\; 1 \;\; 279.2\,] \times 0.1736 = [\,25.00 \;\; 2.083 \;\; 0.1736 \;\; 48.47\,]$$


Example IV
Subtract the result from Equation 3 and substitute the new equation for Equation 3.

Forward Elimination: Step 2
Examine the absolute values of the second column, second row and below.
• The largest absolute value is 2.917 and is in row 3. Switch rows 2 and 3.


Example V
Divide Equation 2 by 2.917 and multiply it by 2.667, i.e. multiply it by $2.667/2.917 = 0.9143$:

$$[\,0 \;\; 2.917 \;\; 0.8264 \;\; 58.33\,] \times 0.9143 = [\,0 \;\; 2.667 \;\; 0.7556 \;\; 53.33\,]$$

Subtract the result from Equation 3 and substitute the new equation for Equation 3.


Example VI
Back Substitution
Solve exactly as before for $a_3$, then for $a_2$, and finally for $a_1$.


Round-off errors
Comparison with/without pivoting

Exact Solution

We studied earlier the system

We found for Gaussian Elimination without pivoting, and 5 significant digits

With pivoting and 5 digits, one finds in contrast the exact solution. That the round-off error is fully removed here is a coincidence; the general trend of a smaller error, however, is not.


4 Determinants of a Square Matrix


Determinants
Using Naive Gauss elimination to find the determinant of a square matrix
One of the more efficient ways to find the determinant of a square matrix is to take advantage of the following two theorems on determinants, coupled with Naive Gauss elimination.

Theorem 1
If a multiple of one row of [A]nxn is added to or subtracted from another row of [A]nxn to give [B]nxn, then det(A) = det(B). The same is true for column operations.

Theorem 2
The determinant of an upper triangular, lower triangular or diagonal matrix [A]nxn is the product of its diagonal elements:

$$\det(A) = \prod_{i=1}^{n} a_{ii}$$


Determinants II

This implies that if we apply the forward elimination steps of the Naive Gauss elimination method, the determinant of the matrix stays the same according to Theorem 1. Then since at the end of the forward elimination steps, the resulting matrix is upper triangular, the determinant will be given by Theorem 2.
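Combining the two theorems gives a simple determinant routine (a Python sketch, assuming no zero pivots are encountered; the function name is mine):

```python
def det_by_elimination(A):
    """det(A) via naive forward elimination: row operations leave the
    determinant unchanged (Theorem 1); the triangular result has
    determinant equal to the product of its diagonal (Theorem 2)."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
    det = 1.0
    for i in range(n):
        det *= A[i][i]                 # product of the diagonal
    return det
```

For the rocket matrix below this gives 25 × (-4.8) × 0.7 = -84.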


Example
Find the determinant of (rocket example)

$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

For a 3 × 3 matrix we of course know directly that

$$\det(A) = a_{11}(a_{22}a_{33}-a_{23}a_{32}) - a_{12}(a_{21}a_{33}-a_{23}a_{31}) + a_{13}(a_{21}a_{32}-a_{22}a_{31})$$

But we can also use (in particular for higher dimensions) that Naive Gaussian elimination yields the upper triangular matrix. In this example, we found earlier

$$[U] = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$


Example II
Therefore, according to Theorems 1 and 2, the determinant is

$$\det(A) = 25 \times (-4.8) \times 0.7 = -84.00$$


Determinants and pivoting
What if one cannot find the determinant of the matrix using the Naive Gauss elimination method, for example because one encounters division by zero during the elimination?

In this case, one can apply Gaussian elimination with partial pivoting. However, the determinant of the resulting upper triangular matrix may then differ by a sign. The following theorem applies in addition to the previous two:

Theorem 3
Let [A]nxn be an n × n matrix. If [B]nxn is a matrix that results from switching one row with another row, then det(A) = -det(B).


5 LU Decomposition


LU Decomposition
LU Decomposition is another method to solve a set of simultaneous linear equations.

Which is better, Gauss Elimination or LU Decomposition?

To answer this, a closer look at LU decomposition is needed.


Method
For most non-singular matrices [A] on which one can conduct the forward elimination steps of Naive Gauss elimination, one can write

[A] = [L][U]

where [L] = lower triangular matrix, [U] = upper triangular matrix.


LU Decomposition idea
With [A] = [L][U], the system [A][X] = [C] becomes [L][U][X] = [C]. Defining [U][X] = [Z], we first solve [L][Z] = [C] and then [U][X] = [Z].
Note: [Z] is an n x 1 matrix, i.e. a vector.


LU Decomposition algorithm
How can this be used?

Given [A][X] = [C]

1. Decompose [A] into [L] and [U]

2. Solve [L][Z] = [C] for [Z]

3. Solve [U][X] = [Z] for [X]

As we will see below, solving both steps 2 and 3 is very simple: step 2 is solved using forward substitution, and then step 3 yields the solution vector [X] by back substitution.
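The three steps can be sketched in Python (a minimal Doolittle-style implementation without pivoting; the function names are mine):

```python
def lu_decompose(A):
    """Step 1: factor A = L U by naive Gauss elimination.
    U is the eliminated matrix; L holds the multipliers, with unit diagonal."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]     # the elimination multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, c):
    """Step 2: forward substitution L z = c; step 3: back substitution U x = z."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(L[i][j] * z[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

On the rocket system this recovers the multipliers 2.56, 5.76, 3.5 in [L] and the same solution vector as Gauss elimination.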


Computational time
To solve [A][X] = [B] (T = clock cycle time, n = size of the matrix), both Gaussian Elimination and LU Decomposition require a number of clock cycles proportional to n³ to leading order, so both methods are equally efficient. However, for calculating the inverse of a matrix, LU can be much faster (cf. below).


Method: Decompose [A] to [L] and [U]

[U] is the same as the coefficient matrix at the end of the forward elimination step during Naive Gauss elimination. [L] is obtained using the multipliers that were used in the forward elimination process

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$


Example: Find the [U] matrix I
Using the forward elimination procedure, as for Gauss elimination

Step 1:


Example: Find the [U] matrix II
Matrix after Step 1:

Step 2:


Example: Find the [L] matrix I
Using the multipliers from the forward elimination procedure. From the first step of forward elimination:

$$\ell_{21} = \frac{64}{25} = 2.56, \qquad \ell_{31} = \frac{144}{25} = 5.76$$


Example: Find the [L] matrix II
From the second step of forward elimination:

$$\ell_{32} = \frac{-16.8}{-4.8} = 3.5$$

The [L] matrix is thus obtained for "free" when doing Naive Gaussian Elimination.


Does [L][U] = [A]?

$$[L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} \stackrel{?}{=} [A]$$

This can always be checked by simple matrix multiplication.
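With NumPy the multiplication check is a one-liner (my verification, not part of the slides):

```python
import numpy as np

L = np.array([[1.0,  0.0, 0.0],
              [2.56, 1.0, 0.0],
              [5.76, 3.5, 1.0]])
U = np.array([[25.0,  5.0,  1.0],
              [ 0.0, -4.8, -1.56],
              [ 0.0,  0.0,  0.7]])

A = L @ U   # should recover the original coefficient matrix
```

Indeed the product comes back as [[25, 5, 1], [64, 8, 1], [144, 12, 1]], the rocket matrix.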


LU Decomposition to solve SLEs I
Solve the following set of linear equations using LU Decomposition:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Step 1
Use the algorithm we just saw for finding the [L] and [U] matrices:

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$


LU Decomposition to solve SLEs II
Step 2
Set [L][Z] = [C]:

$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

i.e.

$$z_1 = 106.8, \qquad 2.56 z_1 + z_2 = 177.2, \qquad 5.76 z_1 + 3.5 z_2 + z_3 = 279.2$$

and then solve for [Z]. Due to the particular shape of [L], this is very simple by forward substitution, starting from the first equation.


LU Decomposition to solve SLEs III
Step 2, continued
Complete the forward substitution to solve for [Z]:

$$z_1 = 106.8, \qquad z_2 = 177.2 - 2.56 \times 106.8 = -96.208, \qquad z_3 = 279.2 - 5.76 \times 106.8 - 3.5 \times (-96.208) = 0.76$$


LU Decomposition to solve SLEs IV
Step 3
Set [U][X] = [Z] and solve for [X]. The 3 equations become

$$25 a_1 + 5 a_2 + a_3 = 106.8, \qquad -4.8 a_2 - 1.56 a_3 = -96.208, \qquad 0.7 a_3 = 0.76$$

Again, due to the particular shape of [U], this is very simple by backward substitution, starting from the last equation.


LU Decomposition to solve SLEs V
Step 3, continued
From the 3rd equation, $a_3 = 0.76/0.7 = 1.0857$. Substituting $a_3$ into the second equation gives $a_2 = 19.690$, and substituting $a_3$ and $a_2$ into the first equation gives $a_1 = 0.29048$. Hence the solution vector [X] is

$$[X] = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.690 \\ 1.0857 \end{bmatrix}$$


6 Inverse of a Matrix with LU Decomposition


Finding the inverse of a square matrix
The inverse [B] of a square matrix [A] is defined by

[A][B] = [I] = [B][A]

How can LU Decomposition be used to find the inverse? Take the first column of [B] to be $[b_{11}\; b_{21}\; \dots\; b_{n1}]^T$. By the definition of matrix multiplication,

first column of [B]: $[A][b_{11}\; b_{21}\; \dots\; b_{n1}]^T = [1\; 0\; \dots\; 0]^T$; second column of [B]: $[A][b_{12}\; b_{22}\; \dots\; b_{n2}]^T = [0\; 1\; \dots\; 0]^T$.

The remaining columns of [B] can be found in the same manner. Thus, we determine [B] one column after another, where each column takes the role of the [X] vector in the previous examples.
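The column-by-column procedure can be sketched as follows (plain Python, names mine; it factors once and then reuses [L] and [U] for every unit-vector right-hand side):

```python
def invert_via_lu(A):
    """Invert A column by column: factor A = L U once, then for each unit
    vector e_k solve L z = e_k (forward) and U x = z (backward); x is
    column k of the inverse."""
    n = len(A)
    # LU factorization by naive Gauss elimination (multipliers stored in L)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    B = [[0.0] * n for _ in range(n)]
    for k in range(n):                      # one column of the inverse per pass
        c = [1.0 if i == k else 0.0 for i in range(n)]
        z = [0.0] * n
        for i in range(n):                  # forward substitution
            z[i] = c[i] - sum(L[i][j] * z[j] for j in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):      # back substitution
            x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        for i in range(n):
            B[i][k] = x[i]
    return B
```

Applied to the rocket matrix, this reproduces the inverse worked out in the following slides.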


Example: Inverse of a Matrix I
Find the inverse of the square matrix

$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Step 1: Using the decomposition procedure, the [L] and [U] matrices are found to be

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

These two matrices will now be used for all columns of [B].


Example: Inverse of a Matrix II
As before, solving for each column of [B] requires two further steps:
2) Solve [L][Z] = [C] for [Z]
3) Solve [U][X] = [Z] for [X]

Step 2:

$$[L][Z] = [C] \;\rightarrow\; \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$

This generates the equations $z_1 = 1$, $2.56 z_1 + z_2 = 0$, $5.76 z_1 + 3.5 z_2 + z_3 = 0$, with solution $z_1 = 1$, $z_2 = -2.56$, $z_3 = 3.2$.


Example: Inverse of a Matrix III
Step 3: Solving [U][X] = [Z] for [X] generates the equations

$$25 b_{11} + 5 b_{21} + b_{31} = 1, \qquad -4.8 b_{21} - 1.56 b_{31} = -2.56, \qquad 0.7 b_{31} = 3.2$$

So the first column of the inverse of [A] is

$$\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} = \begin{bmatrix} 0.04762 \\ -0.9524 \\ 4.571 \end{bmatrix}$$


Example: Inverse of a Matrix IV
Repeating for the second and third columns of the inverse, with $[C] = [0\;1\;0]^T$ and $[C] = [0\;0\;1]^T$:

Second column: $[{-0.08333}\;\; 1.417 \;\; {-5.000}]^T$. Third column: $[0.03571 \;\; {-0.4643} \;\; 1.429]^T$.

Final result: the inverse of [A] is

$$[A]^{-1} = \begin{bmatrix} 0.04762 & -0.08333 & 0.03571 \\ -0.9524 & 1.417 & -0.4643 \\ 4.571 & -5.000 & 1.429 \end{bmatrix}$$

To check: [A][A]^{-1} = [I] = [A]^{-1}[A]
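With NumPy this check is immediate (my verification, not from the slides):

```python
import numpy as np

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
Ainv = np.linalg.inv(A)

# both products must equal the 3x3 identity matrix
check1 = A @ Ainv
check2 = Ainv @ A
```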


Inverse of a Matrix with LU
For calculating the columns of the inverse of a matrix, the LU decomposition needs to be done only once, then the forward substitution n times, and the back substitution n times. In comparison, if the Gaussian elimination method were used to find the inverse, the forward elimination as well as the back substitution would have to be done n times, so for large n its n⁴ term becomes dominant. Comparing the computational times of finding the inverse of a matrix using Gaussian elimination and LU decomposition:

n:                              10      100     1000    10000
CT inverse GE / CT inverse LU:  3.28    25.83   250.8   2501


References

• This script is based on http://numericalmethods.eng.usf.edu by Autar Kaw and Jai Paul, and on Numerical Recipes (2nd/3rd Edition) by Press et al., Cambridge University Press
• http://www.nr.com/oldverswitcher.html

