
Lecture 010

Date post: 03-Apr-2018
Upload: mahmoud-el-mahdy

Transcript
  • 7/29/2019 Lecture 010

    1/33

    Numerical Integration

    1.1 Gaussian Numerical Integration Continued

    The numerical methods studied in the last section were based on integrating

    linear and quadratic interpolating polynomials, and the resulting formulas were

    applied on subdivisions of ever smaller subintervals. In this section, we

    consider a numerical method that is based on the exact integration of

    polynomials of increasing degree; no subdivision of the integration interval is

    used. To motivate this approach, recall from Section 2.4 of Chapter 2 thematerial on approximation of functions.

    Let f(x) be continuous on [a, b]. Then ρn(f) denotes the smallest error bound that can be attained in approximating f(x) with a polynomial pn(x) of degree ≤ n on the given interval a ≤ x ≤ b. The polynomial pn(x) that yields this approximation is called the minimax approximation of degree n for f(x),

        max_{a ≤ x ≤ b} |f(x) - pn(x)| = ρn(f),

    and ρn(f) is called the minimax error. From Theorem 3.1 of Chapter 3, it can be seen that ρn(f) will often converge to zero quite rapidly. If we have a numerical integration formula to integrate low- to moderate-degree

    polynomials exactly, then the hope is that the same formula will integrate other

    functions f(x) almost exactly, if f(x) is well approximated by such polynomials. To illustrate the derivation of such integration formulas, we restrict our attention to the integral

        I(f) = ∫_{-1}^{1} f(x) dx.

    Its relation to integrals over other intervals [a, b] will be discussed later. The integration formula is to have the general form

    2012 G. Baumann


        In(f) = ∑_{j=1}^{n} wj f(xj)

    and we require that the nodes x1, …, xn and weights w1, …, wn be so chosen that In(f) = I(f) for all polynomials f(x) of as large a degree as possible.

    Case n = 1. The integration formula has the form

        ∫_{-1}^{1} f(x) dx ≈ w1 f(x1).

    It is to be exact for polynomials of as large a degree as possible.

    Using f(x) = 1 and forcing equality in (0.0) gives us 2 = w1. Now use f(x) = x and again force equality in (0.0). Then 0 = w1 x1, which implies x1 = 0. Thus (0.0) becomes

        ∫_{-1}^{1} f(x) dx ≈ 2 f(0) ≡ I1(f).

    This is the midpoint formula. The formula (0.0) is exact for all linear polynomials.

    To see that (0.0) is not exact for quadratics, let f(x) = x^2. Then the error in (0.0) is given by

        ∫_{-1}^{1} x^2 dx - 2 (0)^2 = 2/3 ≠ 0.
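    This error is easy to check directly. The following Python sketch (not part of the original notes) applies the one-point rule to the monomials discussed above:

    ```python
    def I1(f):
        # one-point (midpoint) rule on [-1, 1]: I1(f) = 2 f(0)
        return 2 * f(0)

    # exact for constants and for linear terms ...
    print(I1(lambda x: 1.0))            # matches the exact integral 2
    print(I1(lambda x: x))              # matches the exact integral 0
    # ... but for f(x) = x**2 the rule gives 0 while the exact value is 2/3
    print(2 / 3 - I1(lambda x: x ** 2))
    ```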

    Case n = 2. The integration formula is

        ∫_{-1}^{1} f(x) dx ≈ w1 f(x1) + w2 f(x2),

    and it has four unspecified quantities: x1, x2, w1, and w2. To determine these, we require it to be exact for the four monomials


        f(x) = 1, x, x^2, x^3.

    This leads to the four equations

        2 = w1 + w2
        0 = w1 x1 + w2 x2
        2/3 = w1 x1^2 + w2 x2^2
        0 = w1 x1^3 + w2 x2^3.

    This is a nonlinear system in four unknowns;

    equations = {2 == w1 + w2, 0 == w1 x1 + w2 x2,
       2/3 == w1 x1^2 + w2 x2^2, 0 == w1 x1^3 + w2 x2^3};
    equations // TableForm

    2 == w1 + w2
    0 == w1 x1 + w2 x2
    2/3 == w1 x1^2 + w2 x2^2
    0 == w1 x1^3 + w2 x2^3

    its solution can be shown to be

    solution = Solve[equations, {x1, x2, w1, w2}]

    {{w1 -> 1, w2 -> 1, x2 -> -1/Sqrt[3], x1 -> 1/Sqrt[3]},
     {w1 -> 1, w2 -> 1, x2 -> 1/Sqrt[3], x1 -> -1/Sqrt[3]}}

    This yields the integration formula

    Clear[f]


    interpol = w1 f[x1] + w2 f[x2] /. solution

    {f[1/Sqrt[3]] + f[-1/Sqrt[3]], f[-1/Sqrt[3]] + f[1/Sqrt[3]]}

    Thus the second-order integration formula becomes

        I2(f) = f(-1/√3) + f(1/√3) ≈ ∫_{-1}^{1} f(x) dx.

    From being exact for the monomials in (0.0), one can show that this formula is exact for all polynomials of degree ≤ 3. It can also be shown by direct calculation not to be exact for the degree-4 polynomial f(x) = x^4. Thus I2(f) has degree of precision 3.
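    A quick numerical check of the degree of precision (a Python sketch, not part of the original notes):

    ```python
    import math

    def I2(f):
        # two-point Gauss rule on [-1, 1]: nodes -1/sqrt(3), 1/sqrt(3), weights 1, 1
        r = 1 / math.sqrt(3)
        return f(-r) + f(r)

    # exact for the monomials 1, x, x^2, x^3 ...
    for k, exact in [(0, 2.0), (1, 0.0), (2, 2 / 3), (3, 0.0)]:
        assert abs(I2(lambda x: x ** k) - exact) < 1e-14
    # ... but not for x^4: the rule gives 2/9 while the exact integral is 2/5
    print(I2(lambda x: x ** 4), 2 / 5)
    ```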

    For the cases n ≥ 3 a problem occurs in the solution for the weights and the interpolation points, because the determining system of equations becomes nonlinear. The following function generates the determining equations for a Gauss integration.

    gaussIntegration[f_, x_, a_, b_, n_] := Block[{},
      varsX = Table[ToExpression[StringJoin["x", ToString[i]]], {i, 1, n}];
      varsW = Table[ToExpression[StringJoin["w", ToString[i]]], {i, 1, n}];
      vec1 = Table[varsX^i, {i, 0, 2 n - 1}];
      vecB = Table[If[EvenQ[i], 2/(i + 1), 0], {i, 0, 2 n - 1}];
      equations = Thread[Map[varsW.# &, vec1] == vecB]
     ]


    soli = gaussIntegration[f, x, a, b, 3] // TableForm

    w1 + w2 + w3 == 2
    w1 x1 + w2 x2 + w3 x3 == 0
    w1 x1^2 + w2 x2^2 + w3 x3^2 == 2/3
    w1 x1^3 + w2 x2^3 + w3 x3^3 == 0
    w1 x1^4 + w2 x2^4 + w3 x3^4 == 2/5
    w1 x1^5 + w2 x2^5 + w3 x3^5 == 0

    We clearly observe that the equations are nonlinear, because the xi are not yet specified. The problem here is that there exists no reliable general procedure to find the solutions of nonlinear algebraic equations.
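    For n = 3 the solution of this nonlinear system is nevertheless known in closed form: nodes 0 and ±√(3/5) with weights 8/9 and 5/9 (the three-point Gauss-Legendre rule). A Python check (not part of the original notes) confirms that these values satisfy all six determining equations:

    ```python
    import math

    # known three-point Gauss-Legendre data on [-1, 1]
    nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
    weights = [5 / 9, 8 / 9, 5 / 9]

    # determining (moment) equations: sum_i w_i x_i^k == integral of x^k over [-1, 1]
    for k in range(6):
        lhs = sum(w * x ** k for w, x in zip(weights, nodes))
        rhs = 2 / (k + 1) if k % 2 == 0 else 0.0
        assert abs(lhs - rhs) < 1e-14, (k, lhs, rhs)
    print("all six moment equations are satisfied")
    ```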

    1.2 Sinc Quadrature

    The standard approach uses shifted Sinc functions as a basis and inverse conformal maps to set up the approximation points. A function f defined on x ∈ (a, b) is thus approximated by

        f(x) ≈ ∑_{k=-M}^{N} f(xk) S(k, h)(x),

    where φ(x) = log((x - a)/(b - x)) is the conformal map and xk = φ^(-1)(k h) are the Sinc points based on the equidistant discrete values k h of step length h (a typical choice, used below, is h = π/√N). The shifted Sinc function S(k, h)(x) = Sinc((φ(x) - k h)/h) is used as basis for the approximation. In fact this relation is an approximation of the cardinal function C(f) defined for N, M → ∞. A definite integral on a finite interval x ∈ (a, b) is given by the following approximation formula

        ∫_a^b f(x) dx ≈ h ∑_{k=-M}^{N} f(xk)/φ'(xk) = h Vm(f) . Vm(1/φ'),

    where Vm(g) denotes the vector of values g(xk).


    This relation follows from

        ∫_a^b f(x) dx ≈ ∫_a^b ∑_{k=-M}^{N} f(xk) S(k, h)(x) dx
                      = ∑_{k=-M}^{N} f(xk) ∫_a^b S(k, h)(x) dx
                      = ∑_{k=-M}^{N} f(xk) ∫_{φ(a)}^{φ(b)} Sinc((u - k h)/h) (1/φ'(φ^(-1)(u))) du
                      ≈ h ∑_{k=-M}^{N} f(xk)/φ'(xk),

    since ∫_{-∞}^{∞} Sinc((u - k h)/h) du = h.

    Example

    Given the function f(x) = e^x, find the integral for x ∈ [-1, 1] using Sinc methods with different M.

    First define the function

    f[x_] := Exp[x]

    The inverse conformal map is

    ψ[x_, a_, b_] := (a + b Exp[x])/(1 + Exp[x])

    The step length is given by

    h = π/Sqrt[M]

    π/Sqrt[M]

    The conformal map is

    φ[x_, a_, b_] := Log[(x - a)/(b - x)]

    For M = 2 we find


    M = 2

    2

    integral2 = N[h Sum[f[ψ[h k, -1, 1]]/(D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1]), {k, -M, M}]]

    2.31753

    For M = 4 we find

    M = 4

    4

    integral4 = N[h Sum[f[ψ[h k, -1, 1]]/(D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1]), {k, -M, M}]]

    2.34454

    For M = 8 we find

    M = 8

    8

    integral8 = N[h Sum[f[ψ[h k, -1, 1]]/(D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1]), {k, -M, M}]]

    2.34992


    For M = 16 we find

    M = 16

    16

    integral16 = N[h Sum[f[ψ[h k, -1, 1]]/(D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1]), {k, -M, M}]]

    2.35039

    The exact integral is

    exact = N[Integrate[f[x], {x, -1, 1}]]

    2.3504

    Comparing the exact value with the different approximations shows an

    exponential decay of the error


    ListLogLogPlot[{{2, Abs[exact - integral2]}, {4, Abs[exact - integral4]},
       {8, Abs[exact - integral8]}, {16, Abs[exact - integral16]}},
      Frame -> True, FrameLabel -> {"M", "ε"}]

    (log-log plot of the quadrature error ε against M = 2, 4, 8, 16, falling from about 3×10^-2 to about 10^-5)
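    The same computation can be reproduced outside Mathematica. The following Python sketch (not part of the original notes; it hard-codes a = -1, b = 1 and f(x) = e^x) matches the printed values to display precision:

    ```python
    import math

    def sinc_quad_exp(M):
        """Sinc quadrature of the integral of exp(x) over (-1, 1), with h = pi/sqrt(M)."""
        h = math.pi / math.sqrt(M)
        total = 0.0
        for k in range(-M, M + 1):
            x = math.tanh(k * h / 2)     # Sinc point psi(k h) = (e^{k h} - 1)/(e^{k h} + 1)
            w = (1 - x * x) / 2          # 1/phi'(x) for phi(x) = log((x + 1)/(1 - x))
            total += math.exp(x) * w
        return h * total

    exact = math.e - 1 / math.e          # exact integral, 2.3504...
    for M in (2, 4, 8, 16):
        print(M, sinc_quad_exp(M), abs(exact - sinc_quad_exp(M)))
    ```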

    Solutions of Equations

    Systems of simultaneous linear equations occur in solving problems in a wide variety of disciplines, including mathematics, statistics, the physical, biological, and social sciences, engineering, and business. They arise directly in solving real-world problems, and they also occur as part of the solution process for other problems, for example, solving systems of simultaneous nonlinear equations. Numerical solutions of boundary value problems and initial boundary value problems for differential equations are a rich source of linear systems, especially large ones. In this chapter we will examine some classical methods for solving linear systems, including direct methods such as the Gaussian elimination method, and iterative methods such as the Jacobi method and the Gauss-Seidel method.


    2.1 Systems of Linear Equations

    One of the topics studied in elementary algebra is the solution of pairs of linear

    equations such as

        a x + b y = c
        d x + e y = f

    The coefficients a, b, …, f are given constants, and the task is to find the

    unknown values x, y. In this chapter, we examine the problem of finding

    solutions to longer systems of linear equations, containing more equations and

    unknowns.

    To write the most general system of linear equations that we will study, we must

    change the notation used in (0.0) to something more convenient. Let n be a positive integer. The general form for a system of n linear equations in the n unknowns x1, x2, x3, …, xn is

        a11 x1 + a12 x2 + ... + a1n xn = b1
        a21 x1 + a22 x2 + ... + a2n xn = b2
        ...
        an1 x1 + an2 x2 + ... + ann xn = bn.

    The coefficients are given symbolically by aij, with i the number of the equation and j the number of the associated unknown component. On some occasions, to avoid possible confusion, we also use the symbol ai,j. The right-hand sides b1, b2, …, bn are given numbers, and the problem is to calculate the unknowns x1, x2, …, xn. The linear system is said to be of order n.

    A solution of a linear equation a1 x1 + a2 x2 + … + an xn = b is a sequence of n numbers s1, s2, s3, …, sn such that the equation is satisfied when we substitute x1 = s1, x2 = s2, …, xn = sn. The set of all solutions of the equation is called its solution set or sometimes the general solution of the equation.

    A finite set of linear equations in the variables x1, x2, …, xn is called a system of linear equations or a linear system. The sequence of numbers s1, s2, s3, …, sn is called a solution of the system if x1 = s1, x2 = s2, …, xn = sn is a solution of every equation in the system.


    A system of equations that has no solutions is said to be inconsistent; if there is at least one solution of the system, it is called consistent. To illustrate the possibilities that can occur in solving systems of linear equations, consider a general system of two linear equations in the unknowns x1 = x and x2 = y:

        a1 x + b1 y = c1   (a1, b1 not both zero)
        a2 x + b2 y = c2   (a2, b2 not both zero).

    The graphs of these equations are lines, say l1 and l2. Since a point (x, y) lies on a line if and only if the numbers x and y satisfy the equation of the line, the solutions of the system of equations correspond to points of intersection of line l1 and line l2. There are three possibilities, illustrated in Figure 0.0.


    (Figure: three panels, each showing two lines l1 and l2 in the (x, y) plane.)

    Figure 0.0. The three possible scenarios for a system of two linear equations. Top: a single, unique solution exists; middle: no solution exists; bottom: an infinite number of solutions exists.

    The lines l1 and l2 may intersect at only one point, in which case the system has exactly one solution.

    The lines l1 and l2 may be parallel, in which case there is no intersection and consequently no solution to the system.

    The lines l1 and l2 may coincide, in which case there are infinitely many points


    of intersection and consequently infinitely many solutions to the system.

    Although we have considered only two equations with two unknowns here, we

    will show that the same three possibilities hold for arbitrary linear systems:

    Remark 0.0. Every system of linear equations has no solutions, or has exactly one solution, or has infinitely many solutions.

    For linear systems of small order, such as the system (0.0), it is possible to solve them by paper-and-pencil calculation or with the help of a calculator, with methods learned in elementary algebra. For systems arising in most applications, however, it is common to have larger orders, from several dozen to millions. Evidently, there is no hope of solving such large systems by hand. We need to employ numerical methods for their solution. For this purpose, it is most convenient to use matrix/vector notation to represent linear systems and to use the corresponding matrix/vector arithmetic for their numerical treatment.

    The linear system of equations (0.0) is completely specified by knowing the coefficients aij and the right-hand constants bi. These coefficients are arranged as the elements of a matrix:

        A = ( a11 a12 ... a1n
              a21 a22 ... a2n
              ...
              an1 an2 ... ann )

    We say aij is the (i, j) entry of the matrix A. Similarly, the right-hand constants bi are arranged in the form of a vector

        b = ( b1
              b2
              ...
              bn )

    The letters A and b are the names given to the matrix and the vector. The indices of aij now give the numbers of the row and column of A that contain aij. The solution x1, x2, …, xn is written similarly:


        x = ( x1
              x2
              ...
              xn )

    With this notation the linear system (0.0) is then written in the compact form

        A.x = b.

    The reader with some knowledge of linear algebra will immediately recognize that the left-hand side of (0.0) is the matrix A multiplied by the vector x, and (0.0) expresses the equality between the two vectors A.x and b.

    The basic method for solving a system of linear equations is to replace the given system by a new system that has the same solution set but is easier to solve. This new system is generally obtained in a series of steps by applying the following three types of operations to eliminate unknowns systematically:

    1. Multiply an equation through by a nonzero constant.

    2. Interchange two equations.

    3. Add a multiple of one equation to another.

    Since the rows (horizontal lines) of an augmented matrix correspond to the

    equations in the associated system, these three operations correspond to the

    following operations on the rows of the augmented matrix:

    4. Multiply a row through by a nonzero constant.

    5. Interchange two rows.

    6. Add a multiple of one row to another row.

    These are called elementary row operations. The following example illustrates

    how these operations can be used to solve systems of linear equations. Since a systematic procedure for finding solutions will be derived in the next section, it

    is not necessary to worry about how the steps in this example were selected.

    The main effort at this time should be devoted to understanding the

    computations and the discussion.

    Example 0.0. Using Elementary Row Operations


    In the example below we solve a system of linear equations by operating on the equations in the system and, in parallel, we solve the same system by operating on the rows of the augmented matrix.

    Solution 0.0.

        x + y + 2z = 9
        2x + 4y - 3z = 1
        3x + 6y - 5z = 0

        ( 1 1  2 |  9
          2 4 -3 |  1
          3 6 -5 |  0 )

    Add -2 times the first equation (row) to the second to obtain

        x + y + 2z = 9
            2y - 7z = -17
        3x + 6y - 5z = 0

        ( 1 1  2 |   9
          0 2 -7 | -17
          3 6 -5 |   0 )

    Add -3 times the first equation (row) to the third to obtain

        x + y + 2z = 9
            2y - 7z = -17
            3y - 11z = -27

        ( 1 1   2 |   9
          0 2  -7 | -17
          0 3 -11 | -27 )

    Multiply the second equation (row) by 1/2 to obtain

        x + y + 2z = 9
            y - (7/2) z = -17/2
            3y - 11z = -27

        ( 1 1    2 |     9
          0 1 -7/2 | -17/2
          0 3  -11 |   -27 )

    Add -3 times the second equation (row) to the third to obtain

        x + y + 2z = 9
            y - (7/2) z = -17/2
                -(1/2) z = -3/2

        ( 1 1    2 |     9
          0 1 -7/2 | -17/2
          0 0 -1/2 |  -3/2 )

    Multiply the third equation (row) by -2 to obtain

        x + y + 2z = 9
            y - (7/2) z = -17/2
                      z = 3

        ( 1 1    2 |     9
          0 1 -7/2 | -17/2
          0 0    1 |     3 )


    Add -1 times the second equation (row) to the first to obtain

        x + (11/2) z = 35/2
            y - (7/2) z = -17/2
                      z = 3

        ( 1 0 11/2 |  35/2
          0 1 -7/2 | -17/2
          0 0    1 |     3 )

    Add -11/2 times the third equation (row) to the first and 7/2 times the third equation (row) to the second to obtain

        x = 1
        y = 2
        z = 3

        ( 1 0 0 | 1
          0 1 0 | 2
          0 0 1 | 3 )

    The solution thus is x = 1, y = 2, z = 3.
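    The same sequence of elementary row operations can be replayed with exact rational arithmetic. The following Python sketch (not part of the original notes) reproduces the reduction step by step:

    ```python
    from fractions import Fraction

    # augmented matrix [A | b] of the example system
    M = [[Fraction(v) for v in row] for row in
         [[1, 1, 2, 9],
          [2, 4, -3, 1],
          [3, 6, -5, 0]]]

    def add_multiple(M, src, dst, c):
        # elementary row operation: row dst <- row dst + c * row src
        M[dst] = [a + c * b for a, b in zip(M[dst], M[src])]

    def scale(M, row, c):
        # elementary row operation: row <- c * row
        M[row] = [c * a for a in M[row]]

    # the same operations, in the same order, as in the worked example
    add_multiple(M, 0, 1, Fraction(-2))
    add_multiple(M, 0, 2, Fraction(-3))
    scale(M, 1, Fraction(1, 2))
    add_multiple(M, 1, 2, Fraction(-3))
    scale(M, 2, Fraction(-2))
    add_multiple(M, 1, 0, Fraction(-1))
    add_multiple(M, 2, 0, Fraction(-11, 2))
    add_multiple(M, 2, 1, Fraction(7, 2))
    print(M)   # identity augmented with the solution column (1, 2, 3)
    ```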

    2.1.1 Gauss Elimination Method

    We have just seen how easy it is to solve a system of linear equations once its augmented matrix is in reduced row-echelon form. Now we shall give a step-by-step elimination procedure that can be used to reduce any matrix to reduced row-echelon form. As we state each step in the procedure, we shall illustrate the idea by reducing the following augmented matrix to reduced row-echelon form:

        ( 0 0  -2 0   7  12
          2 4 -10 6  12  28
          2 4  -5 6  -5  -1 )

    Step 1: Locate the leftmost column that does not consist entirely of zeros.

        ( 0 0  -2 0   7  12
          2 4 -10 6  12  28
          2 4  -5 6  -5  -1 )

    Step 2: Interchange the top row with another row, if necessary, to bring a

    nonzero entry to the top of the column found in step 1.


        ( 2 4 -10 6  12  28
          0 0  -2 0   7  12
          2 4  -5 6  -5  -1 )

    Step 3: If the entry that is now at the top of the column found in step 1 is a, multiply the first row by 1/a in order to introduce a leading 1.

        ( 1 2 -5 3   6  14
          0 0 -2 0   7  12
          2 4 -5 6  -5  -1 )

    Step 4: Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros.

        ( 1 2 -5 3    6  14
          0 0 -2 0    7  12
          0 0  5 0  -17 -29 )

    Step 5: Now cover the top row in the matrix and begin again with step 1 applied

    to the submatrix that remains. Continue in this way until the entire matrix is in

    row-echelon form.

        ( 1 2 -5 3     6  14
          0 0  1 0  -7/2  -6
          0 0  5 0   -17 -29 )

        ( 1 2 -5 3     6  14
          0 0  1 0  -7/2  -6
          0 0  0 0   1/2   1 )

        ( 1 2 -5 3     6  14
          0 0  1 0  -7/2  -6
          0 0  0 0     1   2 )

    The entire matrix is now in row-echelon form. To find the reduced row-echelon form we need the following additional step.

    Step 6: Beginning with the last nonzero row and working upward, add suitable

    multiples of each row to the rows above to introduce zeros above the leading


    1's.

        ( 1 2 -5 3     6  14
          0 0  1 0  -7/2  -6
          0 0  0 0     1   2 )

        ( 1 2 -5 3 6 14
          0 0  1 0 0  1
          0 0  0 0 1  2 )

        ( 1 2 -5 3 0 2
          0 0  1 0 0 1
          0 0  0 0 1 2 )

        ( 1 2 0 3 0 7
          0 0 1 0 0 1
          0 0 0 0 1 2 )

    The last matrix is in reduced row-echelon form.

    If we use only the first five steps, the above procedure creates a row-echelon form and is called Gaussian elimination. Carrying out step 6 in addition to the first five steps, which generates the reduced row-echelon form, is called Gauss-Jordan elimination.

    Remark 0.0. It can be shown that every matrix has a unique reduced row-echelon form; that is, one will arrive at the same reduced row-echelon form for a given matrix no matter how the row operations are varied. In contrast, a row-echelon form of a given matrix is not unique; different sequences of row operations can produce different row-echelon forms.

    In Mathematica there exists a function which generates the reduced row-echelon form of a given matrix. For the example above the calculation is done by


    MatrixForm[RowReduce[{{0, 0, -2, 0, 7, 12}, {2, 4, -10, 6, 12, 28}, {2, 4, -5, 6, -5, -1}}]]

        ( 1 2 0 3 0 7
          0 0 1 0 0 1
          0 0 0 0 1 2 )
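    The same reduction is easy to script. The following Python sketch of Gauss-Jordan elimination with exact arithmetic (not part of the original notes) reproduces the RowReduce result for this matrix:

    ```python
    from fractions import Fraction

    def rref(rows):
        """Reduced row-echelon form via Gauss-Jordan elimination with exact arithmetic."""
        M = [[Fraction(v) for v in r] for r in rows]
        nrows, ncols = len(M), len(M[0])
        pivot_row = 0
        for col in range(ncols):
            # step 1/2: find a row with a nonzero entry in this column and bring it up
            pr = next((r for r in range(pivot_row, nrows) if M[r][col] != 0), None)
            if pr is None:
                continue
            M[pivot_row], M[pr] = M[pr], M[pivot_row]       # interchange rows
            piv = M[pivot_row][col]
            M[pivot_row] = [v / piv for v in M[pivot_row]]  # step 3: leading 1
            for r in range(nrows):                          # steps 4/6: clear the column
                if r != pivot_row and M[r][col] != 0:
                    c = M[r][col]
                    M[r] = [a - c * b for a, b in zip(M[r], M[pivot_row])]
            pivot_row += 1
            if pivot_row == nrows:
                break
        return M

    A = [[0, 0, -2, 0, 7, 12],
         [2, 4, -10, 6, 12, 28],
         [2, 4, -5, 6, -5, -1]]
    print(rref(A))
    ```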

    Example 0.0. Gauss-Jordan Elimination

    Solve by Gauss-Jordan elimination the following system of linear equations:

        x1 + 5x2 + 3x3 + 5x5 = 1
        3x1 + 7x2 + 4x3 + x4 + 3x5 + 2x6 = 3
        2x1 + 9x2 - 9x4 + 3x5 - 12x6 = 7

    Solution 0.0. The augmented matrix of this system is

    am

    1 5 3 0 5 0 1

    3 7 4 1 3 2 3

    2 9 0 9 3 12 7

    ;

    The reduced-echelon form is gained by adding 3 times the first row to the

    second row

    am

    1 5 3 0 5 0 1

    0 8 5 1 12 2 0

    2 9 0 9 3 12 7

    ;

    Adding in addition 2 times the first row to the third one gives

    am

    1 5 3 0 5 0 1

    0 8 5 1 12 2 0

    0 1 6 9 7 12 5

    ;

    Interchanging the second with the third row and multiplying the third by 1 will

    give


    am = {{1, 5, 3, 0, 5, 0, 1},
          {0, 1, 6, 9, 7, 12, -5},
          {0, -8, -5, 1, -12, 2, 0}};

    Adding 8 times the second row to the third will give

    am = {{1, 5, 3, 0, 5, 0, 1},
          {0, 1, 6, 9, 7, 12, -5},
          {0, 0, 43, 73, 44, 98, -40}};

    Division of the last row by 43 gives

    am = {{1, 5, 3, 0, 5, 0, 1},
          {0, 1, 6, 9, 7, 12, -5},
          {0, 0, 1, 73/43, 44/43, 98/43, -40/43}};

    Adding -6 times the third row to the second row produces

    am = {{1, 5, 3, 0, 5, 0, 1},
          {0, 1, 0, -51/43, 37/43, -72/43, 25/43},
          {0, 0, 1, 73/43, 44/43, 98/43, -40/43}};

    Adding -3 times the last row to the first row generates

    am = {{1, 5, 0, -219/43, 83/43, -294/43, 163/43},
          {0, 1, 0, -51/43, 37/43, -72/43, 25/43},
          {0, 0, 1, 73/43, 44/43, 98/43, -40/43}};

    Adding -5 times the second row to the first row gives


    am = {{1, 0, 0, 36/43, -102/43, 66/43, 38/43},
          {0, 1, 0, -51/43, 37/43, -72/43, 25/43},
          {0, 0, 1, 73/43, 44/43, 98/43, -40/43}};

    The same result is generated by

    MatrixForm[RowReduce[am]]

        ( 1 0 0  36/43 -102/43  66/43  38/43
          0 1 0 -51/43   37/43 -72/43  25/43
          0 0 1  73/43   44/43  98/43 -40/43 )
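    Because x4, x5, x6 are free, the final matrix encodes the general solution of the system. A Python check with exact rational arithmetic (not part of the original notes; the free parameters x4 = s, x5 = t, x6 = u are hypothetical names) confirms that this general solution satisfies the original equations:

    ```python
    from fractions import Fraction as F

    # general solution read off the final reduced matrix (x4 = s, x5 = t, x6 = u free)
    def solution(s, t, u):
        x1 = F(38, 43) - F(36, 43) * s + F(102, 43) * t - F(66, 43) * u
        x2 = F(25, 43) + F(51, 43) * s - F(37, 43) * t + F(72, 43) * u
        x3 = F(-40, 43) - F(73, 43) * s - F(44, 43) * t - F(98, 43) * u
        return x1, x2, x3, s, t, u

    # original augmented rows: coefficients of x1..x6 and the right-hand side
    rows = [([1, 5, 3, 0, 5, 0], 1),
            ([3, 7, 4, 1, 3, 2], 3),
            ([2, 9, 0, -9, 3, -12], 7)]

    for s, t, u in [(F(0), F(0), F(0)), (F(1), F(-2), F(3)), (F(5, 7), F(1, 3), F(-4))]:
        x = solution(s, t, u)
        for coeffs, rhs in rows:
            assert sum(c * xi for c, xi in zip(coeffs, x)) == rhs
    print("general solution satisfies the original system")
    ```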

    The same augmented matrix am can be treated by a function which prints the intermediate steps.

    MatrixForm[am]

        ( 1 5 3  0 5   0 1
          3 7 4  1 3   2 3
          2 9 0 -9 3 -12 7 )

    The application of the function GaussJordanForm to the matrix am is shown in the following lines. To the left of each matrix the operations carried out on the rows are stated.

    GaussJordanForm[am] // MatrixForm

    Forward pass

    (Row 2) - (3)*(Row 1)
        ( 1  5  3  0   5   0 1
          0 -8 -5  1 -12   2 0
          2  9  0 -9   3 -12 7 )

    (Row 3) - (2)*(Row 1)
        ( 1  5  3  0   5   0 1
          0 -8 -5  1 -12   2 0
          0 -1 -6 -9  -7 -12 5 )


    2.1.2 Operations Count

    It is important to know the length of a computation and, for that reason, we count the number of arithmetic operations involved in Gaussian elimination. For reasons that will be apparent in later sections, the count will be divided into three parts.

    Table 0.0. The following table contains the number of operations for each of the steps in the elimination.

        Step    Additions         Multiplications   Divisions
        1       (n-1)^2           (n-1)^2           n-1
        2       (n-2)^2           (n-2)^2           n-2
        ...
        n-1     1                 1                 1
        Total   n(n-1)(2n-1)/6    n(n-1)(2n-1)/6    n(n-1)/2

    The elimination step. We count the additions/subtractions (AS), the multiplications (M), and the divisions (D) in going from the original system to the triangular system. We consider only the operations for the coefficients of A and not for the right-hand side b. Generally the divisions and multiplications are counted together, since they are about the same in operation time. Doing this gives us

        AS = n(n-1)(2n-1)/6
        MD = n(n-1)(2n-1)/6 + n(n-1)/2 = (1/3) n (n^2 - 1)

    AS denotes the number of additions and subtractions, and MD denotes that of multiplications and divisions.

    Modification of the right side b. Proceeding as before, we get

        AS = (n-1) + (n-2) + ... + 1 = n(n-1)/2
        MD = (n-1) + (n-2) + ... + 1 = n(n-1)/2

    The back substitution step. As before,

        AS = 0 + 1 + ... + (n-1) = n(n-1)/2
        MD = 1 + 2 + ... + n = n(n+1)/2

    Combining these results, we observe that the total number of operations to obtain x is

        AS = n(n-1)(2n-1)/6 + n(n-1)/2 + n(n-1)/2 = n(n-1)(2n+5)/6
        MD = n(n^2 + 3n - 1)/3

    Since AS and MD are almost the same in all of these counts, only MD is discussed. These operations are also slightly more expensive in running time. For large values of n, the operation count for Gaussian elimination is about n^3/3. This means that as n is doubled, the cost of solving the linear system goes up by a factor of 8. In addition, most of the cost of Gaussian elimination is in the elimination step, since the remaining steps need only

        MD = n(n-1)/2 + n(n+1)/2 = n^2.

    Thus, once the elimination step has been completed, it is much less expensive to solve the linear system.
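    The count can be verified empirically by instrumenting an elimination code. The following Python sketch (not part of the original notes) counts the multiplications and divisions, including the right-hand side work and back substitution, and compares with n(n^2 + 3n - 1)/3:

    ```python
    from fractions import Fraction
    import random

    def gauss_solve_count(A, b):
        """Solve A.x = b by Gaussian elimination with back substitution,
        counting the multiplications and divisions (MD)."""
        n = len(A)
        A = [row[:] for row in A]
        b = b[:]
        md = 0
        # elimination step (coefficients of A and right-hand side b)
        for k in range(n - 1):
            for i in range(k + 1, n):
                m = A[i][k] / A[k][k]; md += 1       # one division per eliminated row
                for j in range(k + 1, n):
                    A[i][j] -= m * A[k][j]; md += 1  # one multiplication per entry
                b[i] -= m * b[k]; md += 1            # right-hand side update
        # back substitution step
        x = [Fraction(0)] * n
        for i in range(n - 1, -1, -1):
            s = b[i]
            for j in range(i + 1, n):
                s -= A[i][j] * x[j]; md += 1
            x[i] = s / A[i][i]; md += 1
        return x, md

    n = 10
    random.seed(1)
    A = [[Fraction(random.randint(1, 9) + (20 if i == j else 0)) for j in range(n)]
         for i in range(n)]                          # diagonally dominant, no pivoting needed
    b = [Fraction(random.randint(-9, 9)) for _ in range(n)]
    x, md = gauss_solve_count(A, b)
    assert all(sum(A[i][j] * x[j] for j in range(n)) == b[i] for i in range(n))
    print(md, n * (n * n + 3 * n - 1) // 3)          # both equal 430 for n = 10
    ```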

    Consider solving the linear system

        A.x = b

    where A has order n and is nonsingular. Then A^(-1) exists and

        A^(-1).(A.x) = A^(-1).b


        x = A^(-1).b.

    Thus, if A^(-1) is known, then x can be found by matrix multiplication. For this, it might at first seem reasonable to find A^(-1) and to solve for x. But this is not an efficient procedure, because of the great cost needed to find A^(-1). The operations cost for finding A^(-1) can be shown to be

        MD ≈ n^3.

    This is about three times the cost of finding x by the Gaussian elimination method; actually there is no saving in using A^(-1). The chief value of A^(-1) is as a theoretical tool for examining the solution of nonsingular systems of linear equations. With a few exceptions, one seldom needs to calculate A^(-1) explicitly.

    2.1.3 Iterative Solutions

    The linear systems A.x = b that occur in many applications can have a very large order. For such systems, the Gaussian elimination method is often too expensive in either computation time or computer memory requirements, or possibly both. Moreover, the accumulation of round-off errors can sometimes prevent the numerical solution from being accurate. As an alternative, such linear systems are usually solved with iteration methods, and that is the subject of this section.

    In an iterative method, a sequence of progressively accurate iterates is produced to approximate the solution. Thus, in general, we do not expect to get the exact solution in a finite number of iteration steps, even if the round-off error effect is not taken into account. In contrast, if round-off errors are ignored, the Gaussian elimination method produces the exact solution after (n - 1) steps of elimination and backward substitution for the resulting upper triangular system. The Gaussian elimination method and its variants are usually called direct methods.

    In the study of iteration methods, the most important issue is the convergence property. We provide a framework for the convergence analysis of a general iteration method. For the two classical iteration methods, the Jacobi and Gauss-Seidel methods, studied in this section, a sufficient condition for convergence is stated.


    We begin with some numerical examples that illustrate two popular iteration methods. Following that, we give a more general discussion of iteration methods. Consider the linear system

        9x1 + x2 + x3 = b1
        2x1 + 10x2 + 3x3 = b2
        3x1 + 4x2 + 11x3 = b3

    One class of iteration methods for solving (0.0) proceeds as follows. In the equation numbered k, solve for xk in terms of the remaining unknowns. In the above case,

        x1 = (1/9) (b1 - x2 - x3)
        x2 = (1/10) (b2 - 2x1 - 3x3)
        x3 = (1/11) (b3 - 3x1 - 4x2)

    Let x^(0) = (x1^(0), x2^(0), x3^(0))^T be an initial guess of the true solution x. Then define an iteration sequence:

        x1^(k+1) = (1/9) (b1 - x2^(k) - x3^(k))
        x2^(k+1) = (1/10) (b2 - 2x1^(k) - 3x3^(k))
        x3^(k+1) = (1/11) (b3 - 3x1^(k) - 4x2^(k))

    for k = 0, 1, 2, …. This is called the Jacobi iteration method or the method of simultaneous replacements.
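    In Python the same simultaneous-replacement update reads as follows (a sketch, not part of the original notes; the tuple assignment evaluates all three right-hand sides from the previous iterate before updating):

    ```python
    b = (10, 19, 0)
    x1 = x2 = x3 = 0.0      # initial guess x^(0) = (0, 0, 0)
    history = []
    for k in range(30):
        # all three components are computed from the old values simultaneously
        x1, x2, x3 = ((b[0] - x2 - x3) / 9,
                      (b[1] - 2 * x1 - 3 * x3) / 10,
                      (b[2] - 3 * x1 - 4 * x2) / 11)
        history.append((x1, x2, x3))
    print(history[0])    # first iterate, approximately (1.111, 1.9, 0.0)
    print(history[-1])   # close to the true solution (1, 2, -1)
    ```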

    The system from above is solved for

    b = {10, 19, 0}

    {10, 19, 0}

    and the initial guess for the solution


    {x1, x2, x3} = {0, 0, 0}

    {0, 0, 0}

    The following steps show interactively how an iterative method works step by step. The first step uses the initial guess of the solution:

    {x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3), (1/10) (b[[2]] - 2 x1 - 3 x3), (1/11) (b[[3]] - 3 x1 - 4 x2)}]

    {1.11111, 1.9, 0.}

    The second step, which uses the values from the first iteration step, generates

    {x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3), (1/10) (b[[2]] - 2 x1 - 3 x3), (1/11) (b[[3]] - 3 x1 - 4 x2)}]

    {0.9, 1.67778, -0.993939}

    The values derived are used again in the next iteration step

    {x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3), (1/10) (b[[2]] - 2 x1 - 3 x3), (1/11) (b[[3]] - 3 x1 - 4 x2)}]

    {1.03513, 2.01818, -0.855556}

    again the resulting values are used in the next step


    {x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3), (1/10) (b[[2]] - 2 x1 - 3 x3), (1/11) (b[[3]] - 3 x1 - 4 x2)}]

    {0.98193, 1.94964, -1.01619}

    and so on:

    {x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3), (1/10) (b[[2]] - 2 x1 - 3 x3), (1/11) (b[[3]] - 3 x1 - 4 x2)}]

    {1.00739, 2.00847, -0.97676}

    The final result after these few iterations is an approximation of the true solution vector x = {1, 2, -1}. To measure the accuracy or the error of the solution we use the norm of vectors. Thus the error of the solution is estimated to be

        ε = ||x^(k+1) - x^(k)||,

    which gives a crude estimate of the total error in the calculation. The error of the individual components may be different for each component, because the norm estimates an aggregate error of the iterates.

    To understand the behavior of the iteration method it is best to put the iteration formula into a vector-matrix format. Rewrite the linear system A.x = b as

        N.x = b + P.x

    where A = N - P is a splitting of A. The matrix N must be nonsingular; usually it is chosen so that the linear systems

        N.z = f

    are relatively easy to solve for general vectors f. For example, N could be diagonal, triangular, or tridiagonal. The iteration method is defined by

        N.x^(k+1) = b + P.x^(k),   k = 0, 1, 2, …


The Jacobi iteration method is based on the diagonal choice of N. The following function implements the steps of a simple Jacobi iteration:

jacobiMethod[A_, b_, x0_, eps___] :=
 Block[{ε = 10^-2, εa = 100, P, x0in, xk1, diag, idiag},
  (* determine the diagonal matrix N and its inverse *)
  diag = DiagonalMatrix[Tr[A, List]];
  idiag = DiagonalMatrix[1/Tr[A, List]];
  P = diag - A;
  Print["||N^(-1).P||_1 = ", N[Norm[idiag.P, 1]]];
  Print["||N^(-1).P||_2 = ", N[Norm[idiag.P, 2]]];
  (* set the initial guess for the solution *)
  x0in = x0;
  (* iterate as long as the error is larger than the specified value *)
  While[εa > ε,
   xk1 = idiag.(P.x0in) + idiag.b;
   εa = N[Sqrt[(xk1 - x0in).(xk1 - x0in)]];
   Print["xk = ", PaddedForm[N[xk1], {8, 6}], "  ε = ", PaddedForm[εa, {8, 6}]];
   x0in = N[xk1]];
  xk1]

The application of the function to the example discussed above prints the successive iterates together with the estimated error, and returns the final approximation as a list of numbers.

jacobiMethod[{{9, 1, 1}, {2, 10, 3}, {3, 4, 11}}, {10, 19, 0}, {0, 0, 0}]


||N^(-1).P||_1 = 0.474747
||N^(-1).P||_2 = 0.496804
xk = {1.111111, 1.900000, 0.000000}   ε = 2.201038
xk = {0.900000, 1.677778, -0.993939}   ε = 1.040128
xk = {1.035129, 2.018182, -0.855556}   ε = 0.391516
xk = {0.981930, 1.949641, -1.016192}   ε = 0.182571
xk = {1.007395, 2.008472, -0.976760}   ε = 0.075262
xk = {0.996476, 1.991549, -1.005097}   ε = 0.034765
xk = {1.001505, 2.002234, -0.995966}   ε = 0.014928
xk = {0.999304, 1.998489, -1.001223}   ε = 0.006820
{0.999304, 1.99849, -1.00122}

The printed values demonstrate that the error decreases and the solution approaches the result x = {1, 2, -1}, as expected from the calculations above.

For a general matrix A = (aij) of order n, the Jacobi method is defined with

    N = ( a11   0   ⋯   0  )
        (  0   a22  ⋯   0  )
        (  ⋮    ⋮   ⋱   ⋮  )
        (  0    0   ⋯  ann )

and P = N - A. For the Gauss-Seidel method, let

    N = ( a11   0   ⋯   0  )
        ( a21  a22  ⋯   0  )
        (  ⋮    ⋮   ⋱   ⋮  )
        ( an1  an2  ⋯  ann )

and P = N - A. The linear system N.z = f is easily solved because N is diagonal for the Jacobi iteration method and lower triangular for the Gauss-Seidel method. For systems A.x = b to which the Jacobi and Gauss-Seidel methods are typically applied, the above matrices N are nonsingular.
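The claim that the Gauss-Seidel N is easy to invert rests on forward substitution: a lower triangular system N.z = f is solved in O(n^2) operations. A short Python sketch (the helper name `forward_sub` is introduced here for illustration, not taken from the notebook):

```python
# Forward substitution: solve N.z = f for lower triangular N in O(n^2).
# Illustrative sketch; not part of the original Mathematica notebook.

def forward_sub(N, f):
    n = len(N)
    z = [0.0] * n
    for i in range(n):
        # all z[j] with j < i are already known
        z[i] = (f[i] - sum(N[i][j] * z[j] for j in range(i))) / N[i][i]
    return z

# lower triangular part L + D of the example matrix A
N = [[9, 0, 0], [2, 10, 0], [3, 4, 11]]
print(forward_sub(N, [10, 19, 0]))
```

For the example matrix this reproduces the first Gauss-Seidel iterate, (1.111111, 1.677778, -0.913131), since that step is exactly (L + D)^(-1).b when the initial guess is zero.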

To analyze the convergence of the iteration, subtract the iteration formula N.x^(k+1) = b + P.x^(k) from the exact relation N.x = b + P.x, and let e^(k) = x - x^(k), obtaining

    N.e^(k+1) = P.e^(k)
    e^(k+1) = M.e^(k)

where M = N^(-1).P. Using compatible vector and matrix norms, we get

    || e^(k+1) || ≤ || M || || e^(k) ||.

By induction on k, this implies

    || e^(k) || ≤ || M ||^k || e^(0) ||.

Thus, the error converges to zero if

    || M || < 1.

We attempt to choose the splitting A = N - P so that this condition holds, while also keeping the system N.z = f easy to solve. The bound also says that the error decreases at each step by at least a factor of || M ||, and that convergence occurs for any initial guess x^(0).
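For the example system the condition || M || < 1 can be checked directly. The following Python computation (a sketch, not notebook code) evaluates ||M||_1 for the Jacobi splitting, using the maximum-absolute-column-sum definition of the matrix 1-norm:

```python
# ||M||_1 for the Jacobi splitting M = N^(-1).P of the example matrix.
# Off the diagonal M[i][j] = -A[i][j]/A[i][i]; the diagonal of M is zero.

A = [[9, 1, 1], [2, 10, 3], [3, 4, 11]]
n = len(A)
M = [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)] for i in range(n)]

# matrix 1-norm: maximum absolute column sum
norm1 = max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))
print(round(norm1, 6))   # 0.474747 — below 1, so the Jacobi iteration converges
```

This reproduces the value 0.474747 printed by jacobiMethod for the same matrix.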

The following lines implement the Gauss-Seidel method. In matrix form, with D the diagonal part of A, L its strictly lower triangular part, and U its strictly upper triangular part, the Gauss-Seidel method reads

    (L + D).x^(k+1) = -U.x^(k) + b,

which is equivalent to

    x^(k+1) = (L + D)^(-1).(b - U.x^(k))

and so can be viewed as a linear system of equations for x^(k+1) with lower triangular coefficient matrix L + D.
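The update above, in which each new component is used as soon as it is available, can be sketched in Python as follows (an illustrative counterpart to the Mathematica implementation, with the name `gauss_seidel` and the tolerances chosen here, not taken from the notebook):

```python
# Gauss-Seidel iteration: x^(k+1) = (L + D)^(-1).(b - U.x^(k)),
# realized as an in-place sweep that reuses fresh components immediately.
# Illustrative Python sketch, not the original Mathematica notebook code.

def gauss_seidel(A, b, x0, eps=1e-6, max_iter=100):
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            # x[j] for j < i already holds the new iterate's values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        err = sum((p - q) ** 2 for p, q in zip(x, x_old)) ** 0.5
        if err < eps:
            break
    return x

A = [[9, 1, 1], [2, 10, 3], [3, 4, 11]]
b = [10, 19, 0]
print(gauss_seidel(A, b, [0, 0, 0]))   # converges to {1, 2, -1} in a few sweeps
```

The in-place sweep is why Gauss-Seidel typically needs fewer iterations than Jacobi on the same system, as the session below confirms.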


seidelMethod[A_, b_, x0_, eps___] :=
 Block[{ε = 10^-6, εa = 100, x0in, xk1, diag, off, l, u, ld, ip},
  (* determine the lower triangular matrix N = L + D and its inverse *)
  diag = DiagonalMatrix[Tr[A, List]];
  off = A - diag;
  l = Table[If[i > j, A[[i, j]], 0], {i, 1, Length[A]}, {j, 1, Length[A]}];
  u = off - l;
  ld = diag + l;
  ip = Inverse[ld];
  Print["||N^(-1).P||_1 = ", N[Norm[ip.u, 1]]];
  Print["||N^(-1).P||_2 = ", N[Norm[ip.u, 2]]];
  (* set the initial guess for the solution *)
  x0in = x0;
  (* iterate as long as the error is larger than the default value *)
  While[εa > ε,
   xk1 = ip.(b - u.x0in);
   εa = N[Sqrt[(xk1 - x0in).(xk1 - x0in)]];
   Print["xk = ", PaddedForm[N[xk1], {8, 6}], "  ε = ", PaddedForm[εa, {8, 6}]];
   x0in = N[xk1]];
  xk1]

The same example as used for the Jacobi method is applied to demonstrate that the Gauss-Seidel method converges quite rapidly.

seidelMethod[{{9, 1, 1}, {2, 10, 3}, {3, 4, 11}}, {10, 19, 0}, {0, 0, 0}]


||N^(-1).P||_1 = 0.520202
||N^(-1).P||_2 = 0.328064
xk = {1.111111, 1.677778, -0.913131}   ε = 2.209822
xk = {1.026150, 1.968709, -0.995753}   ε = 0.314143
xk = {1.003005, 1.998125, -1.000138}   ε = 0.037686
xk = {1.000224, 1.999997, -1.000060}   ε = 0.003353
xk = {1.000007, 2.000017, -1.000008}   ε = 0.000224
xk = {0.999999, 2.000003, -1.000001}   ε = 0.000018
xk = {1.000000, 2.000000, -1.000000}   ε = 2.523111×10^-6
xk = {1.000000, 2.000000, -1.000000}   ε = 2.980622×10^-7
{1., 2., -1.}
