Prof. A. Richard Newton, University of California at Berkeley
Copyright © 1997, A. Richard Newton
EE219A Fall 1998

EE219A: Computer Analysis of Electrical Circuits

Outline, Lecture 2.2
- Solving Linear Equations: Indirect Methods
Disadvantages of Direct Methods
- For large systems of equations:
  - O(n^3) time complexity is too prohibitive (even O(n^1.4) is too much in the sparse-matrix case)
  - Space complexity may also be an issue (excessive fill-ins)
  - The error grows linearly with the size of the problem
  - Some large matrices may inherently be dense
General Approach
- Guess an initial solution and successively refine it until some error criterion is met
- Always inferior to direct methods on infinite-precision computers
- Rate of convergence depends on the spectral properties of the coefficient matrix
- Use a second matrix (generally) to transform the coefficient matrix into one with more favorable spectral properties
- Main computation: the matrix-vector product
- Stationary and non-stationary methods
Stationary Methods
- Can be expressed in the form
      x(k) = B x(k-1) + c
  where neither B nor c depends on the iteration count k.
- Four methods:
  - Jacobi (J)
  - Gauss-Seidel (GS)
  - Successive Overrelaxation (SOR)
  - Symmetric Successive Overrelaxation (SSOR)
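As a concrete instance of this form (the standard textbook splitting, not spelled out on the slide): writing A = D + L + U, with D the diagonal and L, U the strictly lower and upper triangular parts of A, the Jacobi iteration uses

```latex
B = -D^{-1}(L + U), \qquad c = D^{-1} b,
\qquad\text{so}\qquad
x^{(k)} = D^{-1}\!\left(b - (L + U)\,x^{(k-1)}\right).
```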
Jacobi Method
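A minimal dense-matrix sketch of the Jacobi iteration, for reference (the function name and NumPy formulation are illustrative, not from the slides):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: split A = D + R (diagonal + off-diagonal) and iterate
    x(k) = D^{-1} (b - R x(k-1)), i.e. the stationary form with
    B = -D^{-1} R and c = D^{-1} b."""
    d = np.diag(A)                    # diagonal entries of A (must be nonzero)
    R = A - np.diag(d)                # off-diagonal remainder
    x = np.zeros_like(b, float) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d       # every component uses only the OLD iterate
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

The iteration converges, for example, when A is strictly diagonally dominant.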
Gauss-Seidel
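A matching sketch of Gauss-Seidel (same illustrative conventions as the Jacobi sketch above; not code from the slides):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: like Jacobi, but each component update immediately
    reuses the components already updated during the current sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds NEW values, x[i+1:] still holds OLD ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```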
SOR and SSOR
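SOR is a one-parameter generalisation of the Gauss-Seidel sweep; a hedged sketch (the function name and the choice omega = 1.5 are illustrative assumptions):

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=500):
    """SOR: blend each Gauss-Seidel component update with the previous
    value using a relaxation factor omega in (0, 2); omega = 1 recovers
    Gauss-Seidel exactly. (SSOR performs a forward sweep followed by a
    backward sweep.)"""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - s) / A[i, i]              # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs # relaxed update
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```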
Nonstationary Methods
- Computations involve information that changes at every iteration
- Most common:
  - Conjugate Gradient (CG)
  - MINRES
  - Generalised Minimal Residual (GMRES)
  - BiConjugate Gradient (BiCG)
  - Quasi-Minimal Residual (QMR)
  - Conjugate Gradient Squared (CGS)
  - BiConjugate Gradient Stabilised (BiCGSTAB)
  - ...
Preliminaries: Krylov Subspace
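For reference, the Krylov subspace of dimension m generated by A and a vector r_0 (typically the initial residual b - Ax^{(0)}) is standardly defined as:

```latex
\mathcal{K}_m(A, r_0) \;=\; \operatorname{span}\{\, r_0,\ A r_0,\ A^2 r_0,\ \dots,\ A^{m-1} r_0 \,\}.
```

The nonstationary methods below all construct their iterates from subspaces of this form.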
General Approach
The Conjugate Gradient Method
Conjugate Gradient Algorithm
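A compact sketch of the CG algorithm for symmetric positive-definite A (illustrative NumPy code, not transcribed from the slides):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """CG for symmetric positive-definite A: search directions are kept
    A-conjugate, and step k minimizes the A-norm of the error over the
    Krylov subspace K_k(A, r0). In exact arithmetic it terminates in at
    most n steps."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                  # residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x
```

Note the main per-iteration cost is a single matrix-vector product, as the General Approach slide anticipated.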
Properties of Conjugate Gradient
Where is Conjugate Gradient Applicable?
MINRES
General A: GMRES
- The successive residuals are still orthogonal.
- Problem: complexity and storage space increase linearly with each iteration!
- Restart after a few iterations, taking the solution at the last iteration of the previous restart as the new guess.
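The restart idea can be sketched as GMRES(m): run m Arnoldi steps, minimize the residual over the resulting Krylov basis, then restart from the improved iterate. This is a simplified illustration (names and tolerances are assumptions; production codes update the least-squares solve incrementally with Givens rotations rather than calling a dense solver):

```python
import numpy as np

def gmres_restarted(A, b, x0=None, m=5, tol=1e-8, max_restarts=50):
    """GMRES(m): Arnoldi builds an orthonormal Krylov basis Q and a small
    Hessenberg matrix H; the iterate minimizing the residual over
    x + K_m(A, r) is recovered from a (k+1) x k least-squares problem."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:           # lucky breakdown: solution found
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y                  # restart from the improved iterate
    return x
```

Storage is now bounded by m + 1 basis vectors per cycle instead of growing with the total iteration count.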
Biconjugate Gradient Method
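A hedged sketch of BiCG for general nonsymmetric A (illustrative code; real implementations must guard against the breakdowns that the fixes below address):

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-10, max_iter=None):
    """BiCG: two coupled CG-like recurrences, one driven by A and a
    'shadow' one driven by A^T, chosen so that residuals and shadow
    residuals stay bi-orthogonal. Requires products with both A and A^T,
    and can break down when rho or pt^T A p vanishes."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    rt = r.copy()                  # shadow residual (a common choice: rt = r)
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(max_iter or 2 * n):
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rt = rt - alpha * (A.T @ pt)
        if np.linalg.norm(r) < tol:
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x
```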
Fixes for BiCG