Page 1

Lecture 8: Fast Linear Solvers (Part 5)

Page 2

Conjugate Gradient (CG) Method

• Solve 𝐴𝒙 = 𝒃 with 𝐴 being an 𝑛 × 𝑛 symmetric positive definite matrix.

• Define the quadratic function

𝜙(𝒙) = (1/2) 𝒙ᵀ𝐴𝒙 − 𝒙ᵀ𝒃.

If 𝒙 minimizes 𝜙(𝒙), then 𝒙 is the solution to 𝐴𝒙 = 𝒃.

• 𝛻𝜙(𝒙) = (𝜕𝜙/𝜕𝑥1, … , 𝜕𝜙/𝜕𝑥𝑛)ᵀ = 𝐴𝒙 − 𝒃 (using the symmetry of 𝐴).

• The iteration takes the form 𝒙(𝑘+1) = 𝒙(𝑘) + 𝛼𝑘𝒗(𝑘), where 𝒗(𝑘) is the search direction and 𝛼𝑘 is the step size.

• Define 𝒓(𝑘) = 𝒃 − 𝐴𝒙(𝑘) to be the residual vector.

Page 3

• Let 𝒙 and 𝒗 ≠ 𝟎 be fixed vectors and let 𝛼 be a real variable. Define

ℎ(𝛼) = 𝜙(𝒙 + 𝛼𝒗) = 𝜙(𝒙) + 𝛼⟨𝒗, 𝐴𝒙 − 𝒃⟩ + (𝛼²/2)⟨𝒗, 𝐴𝒗⟩.

ℎ(𝛼) has a minimum where ℎ′(𝛼) = 0. This occurs when

𝛼 = 𝒗ᵀ(𝒃 − 𝐴𝒙) / 𝒗ᵀ𝐴𝒗,

and at this 𝛼,

ℎ(𝛼) = 𝜙(𝒙) − (𝒗ᵀ(𝒃 − 𝐴𝒙))² / (2 𝒗ᵀ𝐴𝒗).

Suppose 𝒙∗ is a vector that minimizes 𝜙(𝒙). Then 𝜙(𝒙∗ + 𝛼𝒗) ≥ 𝜙(𝒙∗) for every 𝛼 and every 𝒗, which forces 𝒗ᵀ(𝒃 − 𝐴𝒙∗) = 0 for all 𝒗. Therefore 𝒃 − 𝐴𝒙∗ = 𝟎.
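A quick numerical check of this line-search formula (a sketch of my own; the 3×3 matrix and vectors are made-up test data, not from the lecture):

import numpy as np

# Hypothetical small SPD test problem (not from the lecture).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)                      # current iterate
v = b - A @ x                        # any nonzero direction; here the residual

phi = lambda y: 0.5 * y @ A @ y - y @ b

# Optimal step from the formula above.
alpha_star = v @ (b - A @ x) / (v @ A @ v)

# phi(x + alpha*v) over a grid should be smallest near alpha_star.
alphas = np.linspace(alpha_star - 1.0, alpha_star + 1.0, 201)
values = [phi(x + a * v) for a in alphas]
assert abs(alphas[int(np.argmin(values))] - alpha_star) < 0.02
print("alpha* =", alpha_star, " phi(x + alpha* v) =", phi(x + alpha_star * v))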

Page 4

• For any 𝒗 ≠ 𝟎, 𝜙(𝒙 + 𝛼𝒗) < 𝜙(𝒙) unless 𝒗ᵀ(𝒃 − 𝐴𝒙) = 0, where 𝛼 = 𝒗ᵀ(𝒃 − 𝐴𝒙) / 𝒗ᵀ𝐴𝒗.

• How to choose the search direction 𝒗?
  – Method of steepest descent: 𝒗 = −𝛻𝜙(𝒙) = 𝒃 − 𝐴𝒙.

• Remark: convergence can be slow, especially for ill-conditioned linear systems (see the next slide).

Algorithm (steepest descent). Let 𝒙(0) be the initial guess.
for 𝑘 = 1, 2, …
    𝒗(𝑘) = 𝒃 − 𝐴𝒙(𝑘−1)
    𝛼𝑘 = ⟨𝒗(𝑘), 𝒃 − 𝐴𝒙(𝑘−1)⟩ / ⟨𝒗(𝑘), 𝐴𝒗(𝑘)⟩
    𝒙(𝑘) = 𝒙(𝑘−1) + 𝛼𝑘𝒗(𝑘)
end
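A minimal NumPy sketch of this steepest-descent loop (the function name, stopping tolerance on ||𝒃 − 𝐴𝒙||2, and iteration cap are my own choices, not part of the slide):

import numpy as np

def steepest_descent(A, b, x0, tol=1e-4, max_iter=10_000):
    """Steepest descent for SPD A, following the loop above."""
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        v = b - A @ x                     # search direction = residual
        if np.linalg.norm(v) < tol:       # stop when ||b - Ax||_2 is small
            return x, k - 1
        alpha = (v @ v) / (v @ (A @ v))   # optimal step along v
        x = x + alpha * v
    return x, max_iter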

Page 5

Steepest descent method when 𝜆𝑚𝑎𝑥/𝜆𝑚𝑖𝑛 is large

• Consider solving 𝐴𝒙 = 𝒃 with

𝐴 = diag(𝜆1, 𝜆2),  𝒃 = (𝜆1, 𝜆2)ᵀ,

and the start vector (−9, −1)ᵀ. Iterate until ||𝐴𝒙(𝑘) − 𝒃||2 < 10⁻⁴.

– With 𝜆1 = 1, 𝜆2 = 2, it takes about 10 iterations.

– With 𝜆1 = 1, 𝜆2 = 10, it takes about 40 iterations.
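The experiment is easy to reproduce with the same update written inline (I am assuming the vector (−9, −1)ᵀ is the initial guess 𝒙(0); the exact iteration counts can differ slightly from the slide's):

import numpy as np

for lam2 in (2.0, 10.0):
    A = np.diag([1.0, lam2])
    b = np.array([1.0, lam2])        # exact solution is (1, 1)^T
    x = np.array([-9.0, -1.0])       # assumed initial guess
    k = 0
    while np.linalg.norm(A @ x - b) >= 1e-4:
        v = b - A @ x                # steepest-descent direction
        x = x + (v @ v) / (v @ A @ v) * v
        k += 1
    print(f"lambda2 = {lam2:g}: {k} iterations")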

Page 6

• A second approach to choosing the search direction 𝒗:

– A-orthogonal approach: use a set of nonzero direction vectors {𝒗(1), …, 𝒗(𝑛)} that satisfy ⟨𝒗(𝑖), 𝐴𝒗(𝑗)⟩ = 0 if 𝑖 ≠ 𝑗. Such a set {𝒗(1), …, 𝒗(𝑛)} is called A-orthogonal.

• Theorem. Let {𝒗(1), …, 𝒗(𝑛)} be an A-orthogonal set of nonzero vectors associated with the symmetric, positive definite matrix 𝐴, and let 𝒙(0) be arbitrary. Define

𝛼𝑘 = ⟨𝒗(𝑘), 𝒃 − 𝐴𝒙(𝑘−1)⟩ / ⟨𝒗(𝑘), 𝐴𝒗(𝑘)⟩ and 𝒙(𝑘) = 𝒙(𝑘−1) + 𝛼𝑘𝒗(𝑘) for 𝑘 = 1, 2, …, 𝑛.

Then 𝐴𝒙(𝑛) = 𝒃 when arithmetic is exact.
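The theorem is easy to verify numerically. The sketch below builds an A-orthogonal set by Gram-Schmidt in the A-inner product (one standard construction; the slide does not prescribe how the set is obtained) on a made-up SPD matrix, runs the update for 𝑛 steps, and checks that 𝐴𝒙(𝑛) ≈ 𝒃:

import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)              # random SPD matrix (illustrative)
b = rng.standard_normal(n)

# A-orthogonalize the standard basis via Gram-Schmidt in <u, A w>.
V = []
for e in np.eye(n):
    v = e.copy()
    for w in V:
        v -= (w @ A @ e) / (w @ A @ w) * w
    V.append(v)

# Pairwise A-orthogonality.
for i in range(n):
    for j in range(i):
        assert abs(V[i] @ A @ V[j]) < 1e-8

# n steps of x(k) = x(k-1) + alpha_k v(k).
x = np.zeros(n)
for v in V:
    alpha = v @ (b - A @ x) / (v @ A @ v)
    x = x + alpha * v

print("||Ax - b|| =", np.linalg.norm(A @ x - b))   # ~ machine precision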

Page 7

Conjugate Gradient Method

• The conjugate gradient method is due to Hestenes and Stiefel.

• Main idea: construct {𝒗(1), 𝒗(2), …} during the iteration so that the residual vectors 𝒓(𝑘) are mutually orthogonal.

Page 8

Algorithm of CG Method

Let 𝒙(0) be the initial guess. Set 𝒓(0) = 𝒃 − 𝐴𝒙(0); 𝒗(1) = 𝒓(0).
for 𝑘 = 1, 2, …
    𝛼𝑘 = ⟨𝒓(𝑘−1), 𝒓(𝑘−1)⟩ / ⟨𝒗(𝑘), 𝐴𝒗(𝑘)⟩
    𝒙(𝑘) = 𝒙(𝑘−1) + 𝛼𝑘𝒗(𝑘)
    𝒓(𝑘) = 𝒓(𝑘−1) − 𝛼𝑘𝐴𝒗(𝑘)                  // construct residual
    𝜌𝑘 = ⟨𝒓(𝑘), 𝒓(𝑘)⟩
    if 𝜌𝑘 < 𝜀 exit                           // convergence test
    𝑠𝑘 = ⟨𝒓(𝑘), 𝒓(𝑘)⟩ / ⟨𝒓(𝑘−1), 𝒓(𝑘−1)⟩
    𝒗(𝑘+1) = 𝒓(𝑘) + 𝑠𝑘𝒗(𝑘)                   // construct new search direction
end
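A direct NumPy transcription of this listing (the function name and defaults are mine; the tolerance 𝜀 is applied to ⟨𝒓(𝑘), 𝒓(𝑘)⟩ exactly as above):

import numpy as np

def conjugate_gradient(A, b, x0, eps=1e-10, max_iter=None):
    """CG for SPD A, following the listing above."""
    n = len(b)
    max_iter = max_iter or 2 * n       # allow extra iterations (round-off)
    x = x0.astype(float).copy()
    r = b - A @ x
    v = r.copy()
    for k in range(1, max_iter + 1):
        Av = A @ v
        alpha = (r @ r) / (v @ Av)
        x = x + alpha * v
        r_new = r - alpha * Av         # construct residual
        rho = r_new @ r_new
        if rho < eps:                  # convergence test
            return x, k
        s = rho / (r @ r)
        v = r_new + s * v              # construct new search direction
        r = r_new
    return x, max_iter

For example, x, iters = conjugate_gradient(A, b, np.zeros(len(b))) returns the computed solution and the number of iterations used.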

Page 9

Remarks

• The constructed directions {𝒗(1), 𝒗(2), …} are pairwise A-orthogonal.

• Each iteration requires one matrix-vector multiplication, two dot products, and three scalar-vector updates.

• Due to round-off error, in practice more than 𝑛 iterations may be needed to reach the solution.

• If the matrix 𝐴 is ill-conditioned, the CG method is sensitive to round-off errors (as a direct solver, CG is not as robust as Gaussian elimination with pivoting).

• The main use of CG is as an iterative method applied to a better-conditioned (e.g., preconditioned) system.

Page 10

CG as Krylov Subspace Method

Theorem. The iterate 𝒙(𝑘) of the CG method minimizes the function 𝜙(𝒙) over 𝒙(0) + Κ𝑘(𝐴, 𝒓(0)), where

Κ𝑘(𝐴, 𝒓(0)) = span{𝒓(0), 𝐴𝒓(0), 𝐴²𝒓(0), …, 𝐴^(𝑘−1)𝒓(0)}.

That is,

𝜙(𝒙(𝑘)) = min over 𝑐0, …, 𝑐𝑘−1 of 𝜙(𝒙(0) + ∑ 𝑐𝑖 𝐴^𝑖 𝒓(0)), the sum running over 𝑖 = 0, …, 𝑘−1.

The subspace Κ𝑘(𝐴, 𝒓(0)) is called a Krylov subspace.
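One way to check the theorem numerically (a sketch with made-up data): run 𝑘 CG steps, then minimize 𝜙 over 𝒙(0) + Κ𝑘(𝐴, 𝒓(0)) directly by solving the small projected system (𝑉ᵀ𝐴𝑉)𝒄 = 𝑉ᵀ𝒓(0), where the columns of 𝑉 span the Krylov subspace, and compare the two results:

import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)            # random SPD matrix (illustrative)
b = rng.standard_normal(n)
x0 = np.zeros(n)
r0 = b - A @ x0

# k CG iterations (same loop as on the CG slide).
x, r, v = x0.copy(), r0.copy(), r0.copy()
for _ in range(k):
    Av = A @ v
    alpha = (r @ r) / (v @ Av)
    x = x + alpha * v
    r_new = r - alpha * Av
    v = r_new + (r_new @ r_new) / (r @ r) * v
    r = r_new

# Direct minimizer of phi over x0 + K_k(A, r0).
W, w = [], r0.copy()
for _ in range(k):
    W.append(w / np.linalg.norm(w))    # normalized Krylov basis vector
    w = A @ w
V = np.column_stack(W)
c = np.linalg.solve(V.T @ A @ V, V.T @ r0)

print(np.allclose(x, x0 + V @ c))      # True: the CG iterate is the minimizer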

Page 11

Error Estimate

• Define the energy norm || ∙ ||𝐴 of a vector 𝒖 with respect to the matrix 𝐴: ||𝒖||𝐴 = (𝒖ᵀ𝐴𝒖)^(1/2).

• Define the error 𝒆(𝑘) = 𝒙(𝑘) − 𝒙∗ where 𝒙∗ is the exact solution.

• Theorem.

||𝒙(𝑘) − 𝒙∗||𝐴 ≤ 2 ((√𝜅(𝐴) − 1)/(√𝜅(𝐴) + 1))^𝑘 ||𝒙(0) − 𝒙∗||𝐴, with

𝜅(𝐴) = cond(𝐴) = 𝜆𝑚𝑎𝑥(𝐴)/𝜆𝑚𝑖𝑛(𝐴) ≥ 1.

Remark: Convergence is fast if matrix 𝐴 is well-conditioned.
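As a rough illustration of the bound (my own numbers): if 𝜅(𝐴) = 100, then √𝜅(𝐴) = 10 and the contraction factor is (10 − 1)/(10 + 1) = 9/11 ≈ 0.82, so after 𝑘 = 50 iterations the bound gives 2·(9/11)^50 ≈ 9 × 10⁻⁵, i.e. roughly a four-orders-of-magnitude reduction of the A-norm error.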

Page 12

Preconditioning

Let the symmetric positive definite matrix 𝑀 be a preconditioner for 𝐴, chosen so that 𝑀⁻¹𝐴 is better conditioned than 𝐴, and let 𝑀 = 𝐿𝐿ᵀ be its Cholesky factorization. The preconditioned system of equations is

𝑀⁻¹𝐴𝒙 = 𝑀⁻¹𝒃, i.e. 𝐿⁻ᵀ𝐿⁻¹𝐴𝒙 = 𝐿⁻ᵀ𝐿⁻¹𝒃, where 𝐿⁻ᵀ = (𝐿ᵀ)⁻¹.

Multiply by 𝐿ᵀ and insert 𝐿⁻ᵀ𝐿ᵀ = 𝐼 to obtain

𝐿⁻¹𝐴𝐿⁻ᵀ(𝐿ᵀ𝒙) = 𝐿⁻¹𝒃.

Define Â = 𝐿⁻¹𝐴𝐿⁻ᵀ, x̂ = 𝐿ᵀ𝒙, b̂ = 𝐿⁻¹𝒃.

Now apply CG to Â x̂ = b̂.
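A small check of this construction (a sketch with made-up data; I use a Jacobi preconditioner only to have a concrete 𝑀): with 𝑀 = 𝐿𝐿ᵀ, the matrix Â = 𝐿⁻¹𝐴𝐿⁻ᵀ is symmetric positive definite, has the same eigenvalues as 𝑀⁻¹𝐴, and solving Â x̂ = b̂ followed by 𝒙 = 𝐿⁻ᵀ x̂ recovers the solution of 𝐴𝒙 = 𝒃:

import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)           # SPD matrix (illustrative)
b = rng.standard_normal(n)

M = np.diag(np.diag(A))               # Jacobi preconditioner as an example
L = np.linalg.cholesky(M)             # M = L L^T
Linv = np.linalg.inv(L)

A_hat = Linv @ A @ Linv.T             # symmetric positive definite
b_hat = Linv @ b
x_hat = np.linalg.solve(A_hat, b_hat)
x = np.linalg.solve(L.T, x_hat)       # x = L^{-T} x_hat

print(np.allclose(A @ x, b))                                # True
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.solve(M, A)).real),
                  np.sort(np.linalg.eigvalsh(A_hat))))      # same spectrum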

Page 13

Preconditioned CG Method

• Define 𝒛(𝑘) = 𝑀⁻¹𝒓(𝑘) to be the preconditioned residual.

Let 𝒙(0) be the initial guess. Set 𝒓(0) = 𝒃 − 𝐴𝒙(0); solve 𝑀𝒛(0) = 𝒓(0) for 𝒛(0); set 𝒗(1) = 𝒛(0).
for 𝑘 = 1, 2, …
    𝛼𝑘 = ⟨𝒛(𝑘−1), 𝒓(𝑘−1)⟩ / ⟨𝒗(𝑘), 𝐴𝒗(𝑘)⟩
    𝒙(𝑘) = 𝒙(𝑘−1) + 𝛼𝑘𝒗(𝑘)
    𝒓(𝑘) = 𝒓(𝑘−1) − 𝛼𝑘𝐴𝒗(𝑘)
    solve 𝑀𝒛(𝑘) = 𝒓(𝑘) for 𝒛(𝑘)
    𝜌𝑘 = ⟨𝒓(𝑘), 𝒓(𝑘)⟩
    if 𝜌𝑘 < 𝜀 exit                           // convergence test
    𝑠𝑘 = ⟨𝒛(𝑘), 𝒓(𝑘)⟩ / ⟨𝒛(𝑘−1), 𝒓(𝑘−1)⟩
    𝒗(𝑘+1) = 𝒛(𝑘) + 𝑠𝑘𝒗(𝑘)                  // construct new search direction
end
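A NumPy sketch of this preconditioned CG loop. The preconditioner is passed as a callback M_solve(r) that returns 𝒛 with 𝑀𝒛 = 𝒓 (the function name and interface are my own, not from the slide):

import numpy as np

def preconditioned_cg(A, b, x0, M_solve, eps=1e-10, max_iter=1000):
    """PCG following the listing above; M_solve(r) returns z with M z = r."""
    x = x0.astype(float).copy()
    r = b - A @ x
    z = M_solve(r)
    v = z.copy()
    for k in range(1, max_iter + 1):
        Av = A @ v
        alpha = (z @ r) / (v @ Av)
        x = x + alpha * v
        r_new = r - alpha * Av
        z_new = M_solve(r_new)
        if r_new @ r_new < eps:            # convergence test
            return x, k
        s = (z_new @ r_new) / (z @ r)
        v = z_new + s * v                  # new search direction
        r, z = r_new, z_new
    return x, max_iter

# Example call with Jacobi preconditioning, M = diag(A):
# x, iters = preconditioned_cg(A, b, np.zeros(len(b)), lambda r: r / np.diag(A))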

Page 14

Incomplete Cholesky Factorization

• Assume 𝐴 is symmetric positive definite and sparse.

• Factor 𝐴 = 𝐿𝐿ᵀ + 𝑅 with 𝑅 ≠ 𝟎, where 𝐿 has a sparsity structure similar to that of 𝐴.

for 𝑘 = 1, …, 𝑛
    𝑙𝑘𝑘 = sqrt(𝑎𝑘𝑘)
    for 𝑖 = 𝑘 + 1, …, 𝑛
        𝑙𝑖𝑘 = 𝑎𝑖𝑘 / 𝑙𝑘𝑘
        for 𝑗 = 𝑘 + 1, …, 𝑖
            if 𝑎𝑖𝑗 = 0 then
                𝑙𝑖𝑗 = 0
            else
                𝑎𝑖𝑗 = 𝑎𝑖𝑗 − 𝑙𝑖𝑘𝑙𝑗𝑘
            endif
        endfor
    endfor
endfor
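A dense-matrix NumPy transcription of the listing above (illustrative only: a practical implementation would keep 𝐴 and 𝐿 in a sparse format, and for some matrices the incomplete factorization can break down with a nonpositive pivot, which shifted/modified variants address):

import numpy as np

def incomplete_cholesky(A):
    """IC(0): A ~= L L^T, where L is kept zero wherever A is zero."""
    n = A.shape[0]
    a = A.astype(float).copy()
    L = np.zeros_like(a)
    for k in range(n):
        L[k, k] = np.sqrt(a[k, k])
        for i in range(k + 1, n):
            L[i, k] = a[i, k] / L[k, k]        # zero whenever a[i, k] == 0
            for j in range(k + 1, i + 1):
                if a[i, j] != 0.0:             # update only within the pattern
                    a[i, j] -= L[i, k] * L[j, k]
    return L

For a dense (fully nonzero) 𝐴 this reduces to the exact Cholesky factorization, so 𝑅 measures what the sparsity restriction throws away.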

Page 15

Jacobi Preconditioning

• In diagonal (Jacobi) preconditioning, 𝑀 = diag(𝐴).

• Jacobi preconditioning is cheap if it works, i.e. solving 𝑀𝒛(𝑘) = 𝒓(𝑘) for 𝒛(𝑘) costs almost nothing (see the short sketch after the references).

References

• C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations.

• T. F. Chan and H. A. van der Vorst, Approximate and incomplete factorizations, in D. E. Keyes, A. Sameh, and V. Venkatakrishnan, eds., Parallel Numerical Algorithms, pp. 167-202, Kluwer, 1997.

• M. J. Grote and T. Huckle, Parallel preconditioning with sparse approximate inverses, SIAM J. Sci. Comput. 18:838-853, 1997.

• Y. Saad, Highly parallel preconditioners for general sparse matrices, in G. Golub, A. Greenbaum, and M. Luskin, eds., Recent Advances in Iterative Methods, pp. 165-199, Springer-Verlag, 1994.

• H. A. van der Vorst, High performance preconditioning, SIAM J. Sci. Stat. Comput. 10:1174-1185, 1989.
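As mentioned above, the Jacobi M-solve is a single elementwise division (a trivial sketch, assuming the diagonal of 𝐴 is available):

import numpy as np

def jacobi_solve(A, r):
    """Solve M z = r with M = diag(A): elementwise division by the diagonal."""
    return r / np.diag(A)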

Page 16

Row-wise Block Striped Decomposition of a Symmetrically Banded Matrix

Row decomposition (figure omitted).

Page 17

Parallel CG Algorithm

• Assume a row-wise block-striped decomposition of the matrix 𝐴 and partition all vectors uniformly among tasks.

Let 𝒙(0) be the initial guess. Set 𝒓(0) = 𝒃 − 𝐴𝒙(0); solve 𝑀𝒛(0) = 𝒓(0) for 𝒛(0); set 𝒗(1) = 𝒛(0).
for 𝑘 = 1, 2, …
    𝒈 = 𝐴𝒗(𝑘)                               // parallel matrix-vector multiplication
    𝑧𝑟 = ⟨𝒛(𝑘−1), 𝒓(𝑘−1)⟩                    // parallel dot product by MPI_Allreduce
    𝛼𝑘 = 𝑧𝑟 / ⟨𝒗(𝑘), 𝒈⟩                      // parallel dot product by MPI_Allreduce
    𝒙(𝑘) = 𝒙(𝑘−1) + 𝛼𝑘𝒗(𝑘)
    𝒓(𝑘) = 𝒓(𝑘−1) − 𝛼𝑘𝒈
    solve 𝑀𝒛(𝑘) = 𝒓(𝑘) for 𝒛(𝑘)              // preconditioner solve; can involve additional complexity
    𝜌𝑘 = ⟨𝒓(𝑘), 𝒓(𝑘)⟩                        // MPI_Allreduce
    if 𝜌𝑘 < 𝜀 exit                           // convergence test
    𝑧𝑟_𝑛 = ⟨𝒛(𝑘), 𝒓(𝑘)⟩                      // parallel dot product
    𝑠𝑘 = 𝑧𝑟_𝑛 / 𝑧𝑟
    𝒗(𝑘+1) = 𝒛(𝑘) + 𝑠𝑘𝒗(𝑘)                  // construct new search direction
end
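A compact mpi4py sketch of this structure (illustrative only, not the course's reference code): it uses Jacobi preconditioning for the M-solve, gathers the full search direction with allgather instead of exploiting the banded structure, fixes 𝒙(0) = 𝟎, and assumes 𝑛 is divisible by the number of processes:

import numpy as np
from mpi4py import MPI

def parallel_pcg(A_loc, b_loc, comm, eps=1e-10, max_iter=1000):
    """PCG with a row-wise block-striped A. Each rank owns A_loc (n_loc x n)
    and the matching slices of all vectors; Jacobi preconditioner assumed."""
    rank = comm.Get_rank()
    n_loc = b_loc.shape[0]
    # Local part of diag(A): the diagonal of this rank's diagonal block.
    diag_loc = np.diag(A_loc[:, rank * n_loc:(rank + 1) * n_loc])

    x = np.zeros(n_loc)                   # x(0) = 0 for brevity
    r = b_loc.copy()                      # r(0) = b - A x(0)
    z = r / diag_loc                      # solve M z = r with M = diag(A)
    v = z.copy()

    def pdot(u_loc, w_loc):               # parallel dot product (MPI_Allreduce)
        return comm.allreduce(u_loc @ w_loc, op=MPI.SUM)

    for k in range(1, max_iter + 1):
        v_full = np.concatenate(comm.allgather(v))   # assemble full v(k)
        g = A_loc @ v_full                            # local rows of A times v(k)
        zr = pdot(z, r)
        alpha = zr / pdot(v, g)
        x = x + alpha * v
        r = r - alpha * g
        z = r / diag_loc                  # M-solve (trivial for Jacobi)
        if pdot(r, r) < eps:              # convergence test
            return x, k
        s = pdot(z, r) / zr
        v = z + s * v                     # new search direction
    return x, max_iter

# Usage sketch: each rank builds its block of rows A_loc and slice b_loc, then
# x_loc, iters = parallel_pcg(A_loc, b_loc, MPI.COMM_WORLD)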
