Solving Poisson Equation using Conjugate Gradient Method and its implementation


Solving Poisson Equation using Conjugate Gradient Method

and its implementation

Jongsu Kim

Theoretical

From the Basics, Ax=b

Linear Systems

๐ด๐‘ฅ = ๐‘

Goal of this presentation

What have you learned?

• Direct Method
  • Gauss Elimination
  • Thomas Algorithm (TDMA) (for tridiagonal matrices only)

• Iterative Method
  • Jacobi method
  • SOR method
  • Conjugate Gradient method
  • Red-Black Jacobi method

Iterative Method

Start with decomposition

๐ด = ๐ท โˆ’ ๐ธ โˆ’ ๐น

Jacobi Method

๐‘ฅ๐‘˜+1 = ๐ทโˆ’1 ๐ธ + ๐น ๐‘ฅ๐‘˜ + ๐ทโˆ’1๐‘

Gauss-Seidel Method

๐‘ฅ๐‘˜+1 = (๐ท โˆ’ ๐ธ)โˆ’1๐น๐‘ฅ๐‘˜ + ๐ท โˆ’ ๐ธ โˆ’1๐‘

๐ด๐‘ฅ = ๐‘

Backward Gauss-Seidel Iteration

๐ท โˆ’ ๐น ๐‘ฅ๐‘˜+1 = ๐ธ๐‘ฅ๐‘˜ + ๐‘

(๐‘– = 1,โ€ฆ , ๐‘› โˆ’ 1, ๐‘›)

(๐‘– = ๐‘›, ๐‘› โˆ’ 1โ€ฆ , )

Splitting of A matrix

The previous methods share a common form:

A = D - E - F

x^{k+1} = D^{-1}(E + F) x^k + D^{-1} b

x^{k+1} = (D - E)^{-1} F x^k + (D - E)^{-1} b

Ax = b

(D - F) x^{k+1} = E x^k + b

M x^{k+1} = N x^k + b = (M - A) x^k + b

A = M - N

Introducing SOR (Successive Over Relaxation) method

๐Ž๐‘จ = ๐‘ซ โˆ’๐Ž๐‘ฌ โˆ’ (๐Ž๐‘ญ + ๐Ÿ โˆ’๐Ž ๐‘ซ)

๐‘ซ โˆ’๐Ž๐‘ฌ ๐’™๐’Œ+๐Ÿ = ๐Ž๐‘ญ + ๐Ÿ โˆ’๐Ž ๐‘ซ ๐’™๐’Œ +๐Ž๐’ƒ

SOR to SSOR

Gauss-Seidel method

(D - E) x^{k+1} = F x^k + b

x^{k+1} = (D - E)^{-1} F x^k + (D - E)^{-1} b

SOR (Successive Over Relaxation) method

(D - ωE) x^{k+1} = (ωF + (1 - ω)D) x^k + ωb

Backward Gauss-Seidel method

(D - F) x^{k+1} = E x^k + b

Backward SOR method

(D - ωF) x^{k+1} = (ωE + (1 - ω)D) x^k + ωb

SSOR method

SSOR (Symmetric Successive Over Relaxation) method

An SOR step followed by a backward SOR step, for a symmetric matrix:

(D - ωE) x^{k+1/2} = (ωF + (1 - ω)D) x^k + ωb

(D - ωF) x^{k+1} = (ωE + (1 - ω)D) x^{k+1/2} + ωb

x^{k+1} = G_ω x^k + f_ω

G_ω = (D - ωF)^{-1} (ωE + (1 - ω)D) × (D - ωE)^{-1} (ωF + (1 - ω)D)

f_ω = ω (D - ωF)^{-1} [I + (ωE + (1 - ω)D)(D - ωE)^{-1}] b

Observing that

(ωE + (1 - ω)D)(D - ωE)^{-1} = [-(D - ωE) + (2 - ω)D](D - ωE)^{-1}

= -I + (2 - ω) D (D - ωE)^{-1}

we get

f_ω = ω(2 - ω) (D - ωF)^{-1} D (D - ωE)^{-1} b

Used as a preconditioner (explained later)

Preconditioned System

๐’™๐’Œ+๐Ÿ = ๐‘ฎ๐Ž๐’™๐’Œ + ๐’‡๐Ž

๐บ๐บ๐‘† ๐ด = ๐ผ โˆ’ (๐ท โˆ’ ๐ธ)โˆ’1๐ด๐บ๐ฝ๐ด ๐ด = ๐ผ โˆ’ ๐ทโˆ’1๐ด,

๐’™๐’Œ+๐Ÿ = ๐‘ดโˆ’๐Ÿ๐‘ต๐’™๐’Œ +๐‘ดโˆ’๐Ÿ๐’ƒ

We have two forms for iterative method

Ex)

๐บ = ๐‘€โˆ’1๐‘ = ๐‘€โˆ’1 ๐‘€ โˆ’ ๐ด = ๐ผ โˆ’๐‘€โˆ’1๐ด ๐‘“ = ๐‘€โˆ’1๐‘

๐ผ โˆ’ ๐บ ๐‘ฅ = ๐‘“

Another viewโ€ฆ

[๐ผ โˆ’ (๐ผ โˆ’ ๐‘€โˆ’1๐ด)]๐‘ฅ = ๐‘“

๐‘€โˆ’1๐ด๐‘ฅ = ๐‘“

๐‘ดโˆ’๐Ÿ๐‘จ๐’™ = ๐‘ดโˆ’๐Ÿ๐’ƒ Preconditioner ๐‘€

Preconditioned System

๐‘ดโˆ’๐Ÿ๐‘จ๐’™ = ๐‘ดโˆ’๐Ÿ๐’ƒ With Preconditioner ๐‘€

๐‘€๐บ๐‘† = ๐ท โˆ’ ๐ธGauss-Seidel

๐‘€๐‘†๐‘†๐‘‚๐‘… =1

๐œ” 2 โˆ’ ๐œ”๐ท โˆ’ ๐œ”๐ธ ๐ทโˆ’1(๐ท โˆ’ ๐œ”๐น)SSOR

๐‘€๐ฝ๐ด = ๐ทJacobi

๐‘€๐ฝ๐ด =1

๐œ”(๐ท โˆ’ ๐œ”๐ธ)SOR

It may not be โ€œSPARSEโ€ due to inverse (๐‘€โˆ’1 )

How to compute this?

w = M^{-1} A v:    compute r = A v, then solve M w = r

Computing A v might be expensive. Can we do better?

w = M^{-1} A v = M^{-1}(M - N) v = (I - M^{-1} N) v

r = N v

w = M^{-1} r

w := v - w

N may be sparser than A, so forming N v can be cheaper than A v.
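A small sketch of those three steps (my own example, not from the slides), using the Jacobi splitting M = D, N = E + F for the 1D Poisson stencil, so that w = M^{-1} A v is formed without ever building M^{-1}:

/* w = M^{-1} A v with M = D = 2I and N = E + F for the 1D Poisson stencil. */
void apply_preconditioned_operator(int n, const double *v, double *w)
{
    for (int i = 0; i < n; i++) {
        double left  = (i > 0)     ? v[i - 1] : 0.0;
        double right = (i < n - 1) ? v[i + 1] : 0.0;
        double r = left + right;   /* step 1: r = N v (off-diagonal part)       */
        w[i] = r / 2.0;            /* step 2: solve M w = r with M = D = 2I     */
        w[i] = v[i] - w[i];        /* step 3: w := v - w, i.e. (I - M^{-1}N) v  */
    }
}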

Minimization Problem

Forget about Ax = b temporarily and consider a quadratic function f.

Scalar function:    f(x) = (1/2) A x^2 - b x + c

Matrix form:        f(x) = (1/2) x^T A x - b^T x + c

Scalar derivative:  f'(x) = A x - b

Matrix gradient:    f'(x) = (1/2) A^T x + (1/2) A x - b

If the matrix A is symmetric, A^T = A, then

f'(x) = A x - b

Setting the gradient to zero, we get the linear system we wish to solve.

Our original GOAL!!

[Figure: quadratic form f(x) for four types of matrix A]
(a) Quadratic form for a positive definite matrix
(b) Quadratic form for a negative definite matrix
(c) Singular (and positive-indefinite) matrix; a line that runs through the bottom of the valley is the set of solutions
(d) Indefinite matrix; the solution is a saddle point

For a Symmetric Positive Definite matrix, minimizing

f(x) = (1/2) x^T A x - b^T x + c

reduces to solving Ax = b, our original problem.
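A tiny worked check (mine, not from the slides), written in LaTeX, for a 2 × 2 SPD example:

% minimizing f for a small SPD example recovers the solution of Ax = b
A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
b = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \qquad
f(x) = x_1^2 + \tfrac{1}{2}x_2^2 - 2x_1 - x_2 + c

\nabla f(x) = \begin{pmatrix} 2x_1 - 2 \\ x_2 - 1 \end{pmatrix} = 0
\quad\Rightarrow\quad x = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\quad\text{and indeed } Ax = b.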

Minimization Problem

Steepest Descent Method

Choose the direction in which f decreases most quickly, which is the direction opposite to f'(x_(i)):

r_(i) = b - A x_(i)

-f'(x_(i)) = r_(i) = b - A x_(i)

x_(1) = x_(0) + α r_(0)

To find α, set d/dα f(x_(1)) = 0:

d/dα f(x_(1)) = f'(x_(1))^T d/dα x_(1) = f'(x_(1))^T r_(0)

so f'(x_(1)) and r_(0) are orthogonal!

More generally, -f'(x_(i+1)) = r_(i+1), so

f'(x_(i+1))^T r_(i) = 0

r_(i+1)^T r_(i) = 0

Expanding r_(i+1) = r_(i) - α A r_(i) and solving for α:

α = (r_(i)^T r_(i)) / (r_(i)^T A r_(i))
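A minimal steepest-descent loop in C (my own sketch, not from the slides), reusing the 1D Poisson stencil as the matrix; the grid size, right-hand side, and iteration limit are illustrative:

#include <stdio.h>

#define N 64

/* y = A x for the 1D Poisson stencil (diagonal 2, off-diagonals -1). */
void matvec(const double *x, double *y)
{
    for (int i = 0; i < N; i++) {
        double left  = (i > 0)     ? x[i - 1] : 0.0;
        double right = (i < N - 1) ? x[i + 1] : 0.0;
        y[i] = 2.0 * x[i] - left - right;
    }
}

double dot(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += a[i] * b[i];
    return s;
}

int main(void)
{
    double x[N] = {0}, b[N], r[N], Ar[N];
    for (int i = 0; i < N; i++) b[i] = 1.0;               /* illustrative right-hand side */

    for (int k = 0; k < 10000; k++) {
        matvec(x, Ar);
        for (int i = 0; i < N; i++) r[i] = b[i] - Ar[i];  /* r = b - A x                  */
        double rr = dot(r, r);
        if (rr < 1.0e-20) break;                          /* converged                    */
        matvec(r, Ar);                                    /* Ar = A r                     */
        double alpha = rr / dot(r, Ar);                   /* alpha = r^T r / (r^T A r)    */
        for (int i = 0; i < N; i++) x[i] += alpha * r[i]; /* x = x + alpha r              */
    }
    printf("x[N/2] = %f\n", x[N / 2]);
    return 0;
}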

Conjugate Gradient Method

The steepest descent method does not always converge well.

Worst case of the steepest descent method:

• Solid lines: worst-case convergence
• Dashed line: steps toward convergence

Why doesn't it head straight along the line toward the solution for fast convergence? → related to the eigenvalues of A

Introducing the Conjugate Gradient method

Conjugate Gradient Method

What is the meaning of "conjugate"?
• Definition: a binomial formed by negating the second term of a binomial
• x + y ← conjugate → x - y

Then, what is the meaning of "conjugate gradient"?
• The steepest descent method often finds itself taking steps in the same direction
• Wouldn't it be better if we got it right at every step?
• Here is a step:

• Error e_(i) = x_(i) - x, residual r_(i) = b - A x_(i), and d_(i) a set of orthogonal search directions
• For each step, we choose a point x_(i+1) = x_(i) + α_(i) d_(i)
• To find α, e_(i+1) should be orthogonal to d_(i)    (e_(i+1) = e_(i) + α_(i) d_(i))

d_(i)^T e_(i+1) = 0

d_(i)^T (e_(i) + α_(i) d_(i)) = 0

α_(i) = -(d_(i)^T e_(i)) / (d_(i)^T d_(i))

We don't know anything about e_(i); if we knew e_(i), we would already know the answer.

Conjugate Gradient Method

Instead of orthogonality, introduce A-orthogonality:

d_(i)^T A d_(j) = 0    if d_(i) and d_(j) are A-orthogonal, or conjugate

Require that e_(i+1) is A-orthogonal to d_(i); this condition is equivalent to finding the minimum point along the search direction d_(i), as in the steepest descent method.

๐‘‘

๐‘‘๐›ผ๐‘“ ๐‘ฅ ๐‘–+1 = 0

๐›ผ minimize ๐‘“ when directional derivative is equal to zero

๐‘“โ€ฒ ๐‘ฅ ๐‘–+1๐‘‡ ๐‘‘

๐‘‘๐›ผ๐‘ฅ ๐‘–+1 = 0

โˆ’๐‘Ÿ ๐‘–+1๐‘‡ ๐‘‘(๐‘–) = 0

Chain rule

๐‘“โ€ฒ ๐‘ฅ(๐‘–+1) = ๐ด๐‘ฅ(๐‘–+1) โˆ’ ๐‘

๐‘Ÿ(๐‘–) = ๐‘ โˆ’ ๐ด๐‘ฅ(๐‘–)๐‘ฅ(๐‘–+1) = ๐‘ฅ(๐‘–) + ๐›ผ(๐‘–)๐‘‘(๐‘–)

๐‘‘(๐‘–)๐‘‡ ๐ด๐‘’(๐‘–+1) = 0 ๐‘ฅ(๐‘–+1)

๐‘‡ ๐ด๐‘‡๐‘‘(๐‘–) โˆ’ ๐‘๐‘‡๐‘‘ ๐‘– = 0

๐‘ฅ(๐‘–+1)๐‘‡ ๐ด๐‘‡๐‘‘(๐‘–) โˆ’ ๐‘ฅ๐‘‡๐ด๐‘‡๐‘‘ ๐‘– = 0

๐‘’ ๐‘–+1๐‘‡ ๐ด๐‘‡๐‘‘(๐‘–) = 0 Transpose again

How it can be same as orthogonality used in steep descent method?

๐‘’(๐‘–+1) = ๐‘ฅ(๐‘–+1) โˆ’ ๐‘ฅ

๐œถ(๐’Š) = โˆ’๐’… ๐’Š๐‘ป ๐’“ ๐’Š

๐’…(๐’Š)๐‘ป ๐‘จ๐’…(๐’Š)

Conjugate Gradient Method

๐‘‘(๐‘–)๐‘‡ ๐ด๐‘’(๐‘–+1) = 0

๐‘ฅ(๐‘–+1) = ๐‘ฅ(๐‘–) + ๐›ผ๐‘‘(๐‘–)๐‘’(๐‘–+1) = (๐‘ฅ(๐‘–) + ๐›ผ๐‘Ÿ(๐‘–)) โˆ’ ๐‘ฅ

๐‘‘(๐‘–)๐‘‡ ๐ด๐‘’(๐‘–+1) = ๐‘‘ ๐‘–

๐‘‡ ๐ด((๐‘ฅ ๐‘– + ๐›ผ๐‘‘ ๐‘– ) โˆ’ ๐‘ฅ)

๐‘‘ ๐‘–๐‘‡ ๐ด๐‘ฅ(๐‘–) + ๐›ผ๐‘‘ ๐‘–

๐‘‡ ๐ด๐‘‘(๐‘–) โˆ’ ๐‘‘ ๐‘–๐‘‡ ๐ด๐‘ฅ = 0

๐‘‘ ๐‘–๐‘‡ ๐ด๐‘ฅ ๐‘– โˆ’ ๐‘ = โˆ’๐›ผ๐‘‘ ๐‘–

๐‘‡ ๐ด๐‘‘(๐‘–)

How to find d_(i)?

Gram-Schmidt Process

d_(i) = u_(i) + Σ_{k=0}^{i-1} β_{ik} d_(k)

Find a set of A-orthogonal vectors:

β_{ik} = -(u_(i)^T A d_(k)) / (d_(k)^T A d_(k))

for a set of linearly independent vectors u_(i), using d_(i)^T A d_(j) = 0 for i > j.

Conjugate Gradient Method

Overall Algorithm

Initialization (i = 0):
  r = b - Ax
  d = r
  δ_new = r^T r
  δ_0 = δ_new
  ε = 1.0e-6

Iteration check: while i < i_max && δ_new > ε^2 δ_0

Inside the loop:
  q = A d
  α = δ_new / (d^T q)
  x = x + α d
  if i is divisible by 50:  r = b - Ax    (recompute the residual to remove accumulated floating-point error)
  else:                     r = r - α q
  δ_old = δ_new
  δ_new = r^T r
  β = δ_new / δ_old
  d = r + β d
  i = i + 1
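A self-contained C sketch of this algorithm (my translation of the pseudocode, not code from the original slides), again using the 1D Poisson stencil as the test matrix; N, imax, and eps are illustrative choices:

#include <stdio.h>

#define N 64

/* q = A d for the 1D Poisson stencil (diagonal 2, off-diagonals -1). */
void matvec(const double *d, double *q)
{
    for (int i = 0; i < N; i++) {
        double left  = (i > 0)     ? d[i - 1] : 0.0;
        double right = (i < N - 1) ? d[i + 1] : 0.0;
        q[i] = 2.0 * d[i] - left - right;
    }
}

double dot(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += a[i] * b[i];
    return s;
}

int main(void)
{
    double x[N] = {0}, b[N], r[N], d[N], q[N];
    int i = 0, imax = 1000;
    double eps = 1.0e-6;

    for (int j = 0; j < N; j++) b[j] = 1.0;             /* illustrative right-hand side */

    /* Initialization: r = b - A x, d = r, delta_new = r^T r */
    matvec(x, q);
    for (int j = 0; j < N; j++) { r[j] = b[j] - q[j]; d[j] = r[j]; }
    double delta_new = dot(r, r);
    double delta0 = delta_new;

    while (i < imax && delta_new > eps * eps * delta0) {
        matvec(d, q);                                   /* q = A d                       */
        double alpha = delta_new / dot(d, q);           /* alpha = delta_new / (d^T q)   */
        for (int j = 0; j < N; j++) x[j] += alpha * d[j];

        if (i % 50 == 0) {                              /* periodically recompute r      */
            matvec(x, q);
            for (int j = 0; j < N; j++) r[j] = b[j] - q[j];
        } else {
            for (int j = 0; j < N; j++) r[j] -= alpha * q[j];
        }

        double delta_old = delta_new;
        delta_new = dot(r, r);
        double beta = delta_new / delta_old;
        for (int j = 0; j < N; j++) d[j] = r[j] + beta * d[j];
        i++;
    }
    printf("stopped after %d iterations, ||r||^2 = %e\n", i, delta_new);
    return 0;
}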

Preconditioner Again

๐‘ดโˆ’๐Ÿ๐‘จ๐’™ = ๐‘ดโˆ’๐Ÿ๐’ƒ With Preconditioner ๐‘€

๐‘€๐บ๐‘† = ๐ท โˆ’ ๐ธGauss-Seidel

๐‘€๐‘†๐‘†๐‘‚๐‘… =1

๐œ” 2 โˆ’ ๐œ”๐ท โˆ’ ๐œ”๐ธ ๐ทโˆ’1(๐ท โˆ’ ๐œ”๐น)SSOR

๐‘€๐ฝ๐ด = ๐ทJacobi

๐‘€๐ฝ๐ด =1

๐œ”(๐ท โˆ’ ๐œ”๐ธ)SOR

Incomplete LU Decomposition ๐ด = ๐ฟ๐‘ˆ โˆ’ ๐‘… ๐‘… : residual error

Incomplete Cholesky Decomposition ๐ด = ๐ฟ๐ฟ๐‘‡ โˆ’ ๐‘…

If A is SPD (Symmetric Positive Definite), above two decomposition are same

To make sparse system, used incomplete Factorization

Implementation

Implementation Issue

• For the 3D case, the matrix A would be huge: for a 128 × 128 × 128 grid, a dense A has (128 × 128 × 128) × (128 × 128 × 128) entries, about 32 TB in double precision (for 2D it takes only about 2 GB).

• However, almost all entries of A are zero for the Poisson equation ⇒ Sparse Matrix!

How to represent Sparse Matrix?

• Simplest approach: store each nonzero value with its row and column index (Coordinate format, COO); a minimal layout is sketched below.

Too much duplication: the same row index is stored once per nonzero.
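A minimal COO layout in C (my own sketch; the struct and field names are illustrative):

/* Coordinate (COO) format: one (row, col, value) triple per nonzero entry.
   The row index is repeated for every nonzero in the same row, hence the duplication. */
typedef struct {
    int     nnz;     /* number of nonzeros                       */
    int    *row;     /* row index of each nonzero, length nnz    */
    int    *col;     /* column index of each nonzero, length nnz */
    double *val;     /* value of each nonzero, length nnz        */
} coo_matrix;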

Sparse Matrix Format

Compressed Sparse Row (CSR)

• Store only the non-zero values
• Uses three or four arrays (values, column indices, row pointers); a minimal layout is sketched below
• Not easy to construct algorithms such as the ILU or IC preconditioner on top of it
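A minimal CSR layout and matrix-vector product in C (my own sketch; field names are illustrative). Three arrays are enough: val and col store the nonzeros row by row, and row_ptr of length n + 1 marks where each row starts:

/* Compressed Sparse Row (CSR): values and column indices stored row by row,
   row_ptr[i] .. row_ptr[i+1]-1 gives the nonzero range of row i. */
typedef struct {
    int     n;        /* number of rows  */
    int    *row_ptr;  /* length n + 1    */
    int    *col;      /* length nnz      */
    double *val;      /* length nnz      */
} csr_matrix;

/* y = A x for a CSR matrix. */
void csr_matvec(const csr_matrix *A, const double *x, double *y)
{
    for (int i = 0; i < A->n; i++) {
        double s = 0.0;
        for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
            s += A->val[k] * x[A->col[k]];
        y[i] = s;
    }
}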

Use MKL (Intel Math Kernel Library)

MKL?

• A library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math. The routines in MKL are hand-optimized specifically for Intel processors.

• For my problem, I usually use BLAS and the fast Fourier transforms (for the Poisson equation solver with Neumann, periodic, and Dirichlet BCs).

BLAS?

• A specified set of low-level subroutines that perform common linear algebra operations; widely used, even in MATLAB!

• Usually used for vector or matrix multiplication and dot-product-like operations (a small usage sketch follows below)
• Level 1: vector-vector operations
• Level 2: matrix-vector operations
• Level 3: matrix-matrix operations
• Parallelized internally by Intel; just turn on the option
• Reference manual: https://software.intel.com/en-us/mkl_11.1_ref
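For example, the dot products and vector updates inside CG map directly onto Level 1 BLAS; a small sketch using MKL's CBLAS interface (cblas_ddot and cblas_daxpy are standard CBLAS routines exposed through mkl.h; this assumes MKL is installed and linked as described on the following slides):

#include <stdio.h>
#include <mkl.h>   /* MKL's CBLAS interface; assumes MKL is installed */

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    double d = cblas_ddot(3, x, 1, y, 1);   /* Level 1: d = x^T y       */
    cblas_daxpy(3, 2.0, x, 1, y, 1);        /* Level 1: y = 2.0 * x + y */

    printf("x^T y = %f, y[0] = %f\n", d, y[0]);
    return 0;
}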

How to use Library

For MKL

• For compiling (when compiling the .c files in your makefile):
  • -i8 -openmp -I$(MKLROOT)/include

• For linking (when creating the executable with the -o option):
  • -L$(MKLROOT)/lib/intel64 -lmkl_core -lmkl_intel_thread -lpthread -lm

• https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor

Library Linking Process

• Compile
  • The -I option indicates where the header files (.h) are, specifying the include path

• Linking
  • The -L option indicates where the library files (.lib, .dll, .a, .so) are, specifying the linking path
  • The -l option indicates the library name

Reference

• Shewchuk, Jonathan Richard. "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain." (1994).

• Deepak Chandan, "Using Sparse Matrix and Solver Routines from Intel MKL", SciNet User Group Meeting, 2013.

• Saad, Yousef. Iterative Methods for Sparse Linear Systems. SIAM, 2003.

• Akhunov, R. R., et al. "Optimization of the ILU(0) factorization algorithm with the use of compressed sparse row format." Zapiski Nauchnykh Seminarov POMI 405 (2012): 40-53.