Iterative methods

Page 1: Iterative methods

Iterative methods

Page 2: Iterative methods

Iterative methods for Ax=b

Iterative methods produce a sequence of approximations x_1, x_2, x_3, …

So, they also produce a sequence of errors e_i = x − x_i, where x is the exact solution

We say that a method converges if e_i → 0, that is, if ‖e_i‖ → 0

Page 3: Iterative methods

What is computationally appealing about iterative methods?

Iterative methods produce their approximations using a very basic primitive: matrix-vector multiplication

If a matrix A has m non-zero entries, then A*x can be computed in time O(m): essentially, every non-zero entry is read from RAM only once

In some applications we don't even know the matrix A explicitly. For example, we may only have a function f(x) that returns A*x for some unknown matrix A.
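
A minimal sketch of both access models in Python (the helper name apply_A is illustrative, not from the slides):

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Explicit sparse matrix with m non-zero entries: A @ x reads each entry once, O(m).
n = 1000
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)
x = np.ones(n)
y = A @ x  # O(m) sparse matrix-vector product

# Matrix-free setting: A is only available through a black-box function f(x) = A*x.
def apply_A(v):
    return A @ v  # could be any linear operator, e.g. an FFT or a simulation

assert np.allclose(apply_A(x), y)
```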

Page 4: Iterative methods

What is computationally appealing about iterative methods?

The memory requirement is also O(m): if the matrix fits into RAM, the method can run. This is a HUGE IMPROVEMENT over the O(n²) memory that Gaussian elimination needs due to fill-in, where n is the dimension.

In most applications Gaussian elimination is impossible because memory is limited. Iterative methods are the only approach that can potentially work.

Page 5: Iterative methods

What might be problematic about iterative methods?

Do they always converge?

If they converge, how fast do they converge?

Speed of convergence: how many steps are required so that we gain one decimal place in the error? In other words, find the smallest i such that ‖e_i‖ ≤ ‖e_0‖/10

Page 6: Iterative methods

Richardson’s iteration

Perhaps the simplest iteration: x_{i+1} = x_i + (b − A x_i)

It can be seen as a fixed-point iteration for x = (I − A)x + b, which has the same solution as Ax = b

The error reduction equation is simple too: e_{i+1} = (I − A) e_i, so e_i = (I − A)^i e_0
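
A minimal sketch of the iteration, assuming a dense NumPy matrix (the function name richardson is mine):

```python
import numpy as np

def richardson(A, b, iters=100):
    """Richardson's iteration: x_{i+1} = x_i + (b - A @ x_i)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)  # equivalently x = (I - A) @ x + b
    return x

A = np.diag([0.5, 1.0, 1.5])        # eigenvalues of I - A are 0.5, 0, -0.5
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ richardson(A, b), b))  # True
```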

Page 7: Iterative methods

Richardson’s iteration for diagonal matrices

Let's study convergence for a positive diagonal matrix D = diag(d_1, …, d_n). Assume the initial error e_0 is the all-ones vector

Then the j-th coordinate of the error after i steps is (e_i)_j = (1 − d_j)^i

Let's play with numbers…
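
For instance, a quick script in the spirit of the slide:

```python
# The j-th error coordinate after i steps is (1 - d_j)^i.
for d in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(d, [round((1 - d) ** i, 3) for i in range(1, 6)])
# Shrinks for 0 < d < 2, oscillates forever for d = 2, blows up for d = 2.5.
```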

Page 8: Iterative methods

Richardson’s iteration for diagonal matrices

So it seems that we must have |1 − d_j| < 1 for every j, i.e. 0 < d_j < 2

…and Richardson's won't converge for all matrices

ARE WE DOOMED?

Page 9: Iterative methods

Maybe we’re not (yet) doomed

Suppose we know a d such that d_j ≤ d for all j

We can then try to solve the equivalent system (A/d) x = b/d

The new matrix A' = A/d is a diagonal matrix with elements d'_i = d_i/d such that 0 < d'_i ≤ 1
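
A small sketch of this scaling, taking d to be the largest diagonal entry:

```python
import numpy as np

D = np.diag([0.5, 10.0, 100.0])   # plain Richardson diverges: some d_j >= 2
b = np.array([1.0, 2.0, 3.0])
d = D.diagonal().max()            # any upper bound on the d_j works

A2, b2 = D / d, b / d             # equivalent system, diagonal entries in (0, 1]
x = np.zeros_like(b)
for _ in range(10000):
    x = x + (b2 - A2 @ x)
print(np.allclose(D @ x, b))      # True, but slowly: the smallest entry is 0.005
```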

Page 10: Iterative methods

Richardson's iteration converges

for an equivalent “scaled” system

Page 11: Iterative methods

You're cheating us: diagonal matrices are easy anyway!

Spectral decomposition theorem

Every symmetric matrix has the eigenvalue decomposition A = Q D Q^T, where Q is orthonormal and D is the diagonal matrix of eigenvalues

So what is (I − A)^i ? Since I − A = Q(I − D)Q^T, we get (I − A)^i = Q (I − D)^i Q^T

So if (I − D)^i goes to zero then (I − A)^i goes to zero too
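
A quick numerical check of this reduction, with a randomly chosen orthonormal Q and eigenvalues in (0, 2):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthonormal Q
D = np.diag([0.3, 0.8, 1.2, 1.9])                 # eigenvalues in (0, 2)
A = Q @ D @ Q.T                                   # symmetric, not diagonal

# (I - A)^i = Q (I - D)^i Q^T, so it vanishes exactly when (I - D)^i does.
P = np.linalg.matrix_power(np.eye(4) - A, 200)
print(np.abs(P).max())  # tiny, on the order of 0.9^200
```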

Page 12: Iterative methods

Can we scale any positive system?

Gershgorin's Theorem: every eigenvalue of A lies in a disc centered at some diagonal entry a_ii with radius r_i = Σ_{j≠i} |a_ij|

Recall the definition of the infinity norm for matrices: ‖A‖_∞ = max_i Σ_j |a_ij|. By Gershgorin, every eigenvalue of A therefore has magnitude at most ‖A‖_∞
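
A small check that the infinity norm bounds the spectrum, as Gershgorin promises (the helper inf_norm is mine):

```python
import numpy as np

def inf_norm(A):
    """Matrix infinity norm: the maximum absolute row sum."""
    return np.abs(A).sum(axis=1).max()

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
# Every Gershgorin disc sits inside [-||A||_inf, ||A||_inf], so the scaled
# matrix A / ||A||_inf has all its eigenvalues in [-1, 1].
print(inf_norm(A), np.abs(np.linalg.eigvalsh(A)).max() <= inf_norm(A))  # 6.0 True
```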

Page 13: Iterative methods

Richardson's iteration converges

for an equivalent "scaled" system, for all positive matrices

Page 14: Iterative methods

How about the rate of convergence?

We showed that the j-th coordinate of the error e_i is (1 − d'_j)^i

How many steps do we need to take to make this coordinate smaller than 1/10?

…and the answer is i ≥ ln(10) / ln(1/(1 − d'_j)), which is roughly ln(10)/d'_j when d'_j is small
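
A quick script comparing the exact step count with the ln(10)/d estimate:

```python
import numpy as np

# Smallest i with (1 - d)^i <= 1/10, versus the ln(10)/d estimate.
for d in [0.5, 0.1, 0.01, 0.001]:
    exact = int(np.ceil(np.log(0.1) / np.log(1 - d)))
    print(f"d = {d}: exact = {exact}, ln(10)/d ~ {np.log(10) / d:.1f}")
```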

Page 15: Iterative methods

How about the rate of convergence?

So in the worst case the number of iterations we need per decimal digit is roughly ln(10)/d'_min, which after scaling by d = d_max is about d_max/d_min: the ratio of the extreme diagonal entries, i.e. the condition number

ARE WE DOOMED, version 2?

Page 16: Iterative methods

We may have some more information about the matrix A

Suppose we know numbers d'_i that approximate the diagonal entries, say d_i ≤ d'_i ≤ 2 d_i

We can then try to solve the equivalent system D'^{-1}A x = D'^{-1}b, where D' = diag(d'_1, …, d'_n)

The new matrix A' is a diagonal matrix with elements d_i/d'_i, all lying in [1/2, 1], so convergence is fast no matter how spread out the d_i are
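
A toy illustration under the assumed factor-2 model above (the numbers are made up):

```python
import numpy as np

d = np.array([0.001, 1.0, 2015.0])          # wildly spread diagonal entries
d_approx = d * np.array([1.0, 1.5, 2.0])    # entrywise estimates within a factor 2

scaled = d / d_approx                       # new diagonal entries, all in [1/2, 1]
print(scaled, np.abs(1 - scaled).max())     # max |1 - d_i/d'_i| = 0.5
# Error shrinks by at least a factor 2 per step, regardless of the spread of d_i.
```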

Page 17: Iterative methods

Preconditioning

In general, changing the system to B^{-1}A x = B^{-1}b

is called preconditioning

Why? Because we try to make a new matrix with a better condition number.

Do we need to have the inverse of B? No: each iteration only applies B^{-1} to a single vector, so it is enough that B is a matrix such that By = z can be solved easily.
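
A sketch of Richardson's iteration with a Jacobi (diagonal) preconditioner; the helper solve_B and the example matrix are illustrative, not from the slides:

```python
import numpy as np

def preconditioned_richardson(A, b, solve_B, iters=100):
    """Richardson's iteration on B^{-1} A x = B^{-1} b.

    solve_B(z) returns the y with B y = z; B^{-1} is never formed explicitly.
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + solve_B(b - A @ x)
    return x

# Jacobi preconditioner B = diag(A): solving B y = z is one division per entry.
A = np.array([[100.0, 1.0], [1.0, 0.5]])  # badly scaled; plain Richardson diverges
b = np.array([1.0, 1.0])
x = preconditioned_richardson(A, b, solve_B=lambda z: z / A.diagonal())
print(np.allclose(A @ x, b))  # True
```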

Page 18: Iterative methods

Richardson's iteration converges fast

for an equivalent system, when we have some more info in the form of a preconditioner

Page 19: Iterative methods

Laplacians of weighted graphs

Symmetric, negative off-diagonals, zero row-sums.

Symmetric diagonally dominant matrix: add a positive diagonal to a Laplacian and flip off-diagonal signs, keeping symmetry.

[Figure: a small weighted graph with edge weights 1, 2, 2015, 30, 1.]

Page 20: Iterative methods

Preconditioning for Laplacians

The star graph: a simple diagonal scaling makes the condition number good

The line graph: the condition number stays bad even with scaling, as the sketch below illustrates
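
A sketch comparing the two graphs numerically (the helper names are mine). Since a Laplacian has λ_1 = 0, the relevant condition number is λ_max/λ_2:

```python
import numpy as np

def kappa(L):
    """lambda_max / lambda_2 of a graph Laplacian (the zero eigenvalue is skipped)."""
    e = np.sort(np.linalg.eigvalsh(L))
    return e[-1] / e[1]

def scale(L):
    """Diagonal scaling D^{-1/2} L D^{-1/2}."""
    s = 1.0 / np.sqrt(L.diagonal())
    return L * np.outer(s, s)

n = 50
star = np.diag([n - 1.0] + [1.0] * (n - 1))   # center vertex joined to all others
star[0, 1:] = star[1:, 0] = -1.0
line = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
line[0, 0] = line[-1, -1] = 1.0               # path graph: endpoints have degree 1

for name, L in [("star", star), ("line", line)]:
    print(name, kappa(L), kappa(scale(L)))
# star: ~n unscaled but ~2 after scaling; line: ~n^2 with or without scaling.
```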

ARE WE DOOMED, version 3?
