10  Numerical Methods

10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods

Car Crash Simulations (p. 513)

Laboratory Experiment Probabilities (p. 512)

Graphic Design (p. 500)

Pharmacokinetics (p. 493)

Analyzing Hyperlinks (p. 507)

Clockwise from top left, Evgeny Murtola, 2009/Used under license from Shutterstock.com; lculig/Shutterstock.com; Leah-Anne Thompson, 2009/Used under license from Shutterstock.com; Perov Stanislav/Shutterstock.com; Perov Stanislav/Shutterstock.com

9781133110873_1001.qxp 3/10/12 7:04 AM Page 487


488 Chapter 10 Numerical Methods

10.1 Gaussian Elimination with Partial Pivoting

Express a real number in floating point form, and determine the stored value of a real number rounded to a specified number of significant digits.

Compare exact solutions with rounded solutions obtained by Gaussian elimination with partial pivoting, and observe rounding error with ill-conditioned systems.

STORED VALUE AND ROUNDING ERROR

In Chapter 1, two methods of solving a system of n linear equations in n variables were discussed. When either of these methods (Gaussian elimination and Gauss-Jordan elimination) is used with a digital computer, the computer introduces a problem that has not yet been discussed: rounding error.

Digital computers store real numbers in floating point form,

    ±M × 10^k

where k is an integer and the mantissa M satisfies the inequality 0.1 ≤ M < 1. For instance, the floating point forms of some real numbers are as follows.

Real Number     Floating Point Form
527             0.527 × 10^3
-3.81623        -0.381623 × 10^1
0.00045         0.45 × 10^-3

The number of decimal places that can be stored in the mantissa depends on the computer. If n places are stored, then it is said that the computer stores n significant digits. Additional digits are either truncated or rounded off. When a number is truncated to n significant digits, all digits after the first n significant digits are simply omitted. For instance, truncated to two significant digits, the number 0.1251 becomes 0.12.

When a number is rounded to n significant digits, the last retained digit is increased by 1 if the discarded portion is greater than half a digit, and the last retained digit is not changed if the discarded portion is less than half a digit. For the special case in which the discarded portion is precisely half a digit, round so that the last retained digit is even. For instance, the following numbers have been rounded to two significant digits.

Number     Rounded Number
0.1249     0.12
0.125      0.12
0.1251     0.13
0.1349     0.13
0.135      0.14
0.1351     0.14

Most computers store numbers in binary form (base 2) rather than decimal form (base 10). Although rounding occurs in both systems, this discussion will be restricted to the more familiar base 10. Whenever a computer truncates or rounds a number, a rounding error that can affect subsequent calculations is introduced. The result after rounding or truncating is called the stored value.
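The round-half-to-even rule described above can be sketched in a few lines of Python. The helper name `stored_value` is hypothetical, and the input is taken as a string so that the decimal digits are exact rather than already-rounded binary floats:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def stored_value(x: str, sig: int) -> Decimal:
    """Round x to `sig` significant digits, rounding a discarded
    portion of exactly half a digit to the even retained digit."""
    d = Decimal(x)
    if d == 0:
        return d
    # adjusted() is the exponent of the leading digit, so the last
    # retained digit has exponent adjusted() - (sig - 1)
    exponent = d.adjusted() - (sig - 1)
    return d.quantize(Decimal(1).scaleb(exponent), rounding=ROUND_HALF_EVEN)
```

Applied to the table above, `stored_value("0.125", 2)` gives 0.12 while `stored_value("0.1251", 2)` gives 0.13, matching the half-even rule.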


Finding the Stored Values of Numbers

Determine the stored value of each of the real numbers listed below in a computer that rounds to three significant digits.

a. 54.7    b. 0.1134    c. -8.2256    d. 0.08335    e. 0.08345

SOLUTION

Number        Floating Point Form    Stored Value
a. 54.7       0.547 × 10^2           0.547 × 10^2
b. 0.1134     0.1134 × 10^0          0.113 × 10^0
c. -8.2256    -0.82256 × 10^1        -0.823 × 10^1
d. 0.08335    0.8335 × 10^-1         0.834 × 10^-1
e. 0.08345    0.8345 × 10^-1         0.834 × 10^-1

Note in parts (d) and (e) that when the discarded portion of a decimal is precisely half a digit, the number is rounded so that the stored value ends in an even digit.

Rounding error tends to propagate as the number of arithmetic operations increases. This phenomenon is illustrated in the next example.

Propagation of Rounding Error

Evaluate the determinant of the matrix

A = [ 0.12   0.23 ]
    [ 0.12   0.12 ]

rounding each intermediate calculation to two significant digits. Then find the exact solution and compare the two results.

SOLUTION

Rounding each intermediate calculation to two significant digits produces

|A| = (0.12)(0.12) - (0.12)(0.23)
    = 0.0144 - 0.0276
    = 0.014 - 0.028        Round to two significant digits.
    = -0.014.

The exact solution is |A| = 0.0144 - 0.0276 = -0.0132. So, to two significant digits, the correct solution is -0.013. Note that the rounded solution is not correct to two significant digits, even though each arithmetic operation was performed with two significant digits of accuracy. This is what is meant when it is said that arithmetic operations tend to propagate rounding error.

In Example 2, rounding at the intermediate steps introduced a rounding error of

-0.0132 - (-0.014) = 0.0008.        Rounding error

Although this error may seem slight, it represents a percentage error of

0.0008/0.0132 ≈ 0.061 = 6.1%.       Percentage error

In most practical applications, a percentage error of this magnitude would be intolerable. Keep in mind that this particular percentage error arose with only a few arithmetic steps. When the number of arithmetic steps increases, the likelihood of a large percentage error also increases.
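The two-significant-digit arithmetic of Example 2 is easy to replay in Python. The helper `round_sig` is a hypothetical name, and note that `round` on binary floats only approximates the decimal rounding done by hand:

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant digits (float approximation)."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# A = [[0.12, 0.23], [0.12, 0.12]] from Example 2
ad = round_sig(0.12 * 0.12, 2)           # 0.0144 rounded to 0.014
bc = round_sig(0.12 * 0.23, 2)           # 0.0276 rounded to 0.028
det_rounded = round_sig(ad - bc, 2)      # -0.014
det_exact = 0.12 * 0.12 - 0.12 * 0.23    # -0.0132 (up to float representation)
error = det_exact - det_rounded          # about 0.0008, a 6.1% error
```

Even though each product is correct to two significant digits, the final determinant is not, which is exactly the propagation effect described in the text.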


GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

For large systems of linear equations, Gaussian elimination can involve hundreds of arithmetic computations, each of which can produce rounding error. The following straightforward example illustrates the potential magnitude of this problem.

Gaussian Elimination and Rounding Error

Use Gaussian elimination to solve the following system.

 0.143x1 + 0.357x2 + 2.01x3  = -5.173
 -1.31x1 + 0.911x2 + 1.99x3  = -5.458
  11.2x1 - 4.30x2 - 0.605x3  =  4.415

After each intermediate calculation, round the result to three significant digits.

SOLUTION

Applying Gaussian elimination to the augmented matrix for this system produces

[ 0.143    0.357    2.01    -5.17 ]
[ -1.31    0.911    1.99    -5.46 ]
[ 11.2    -4.30    -0.605    4.42 ]

Dividing the first row by 0.143 produces a new first row.

[ 1.00     2.50    14.1    -36.2 ]
[ -1.31    0.911    1.99    -5.46 ]
[ 11.2    -4.30    -0.605    4.42 ]

Adding 1.31 times the first row to the second row produces a new second row.

[ 1.00     2.50    14.1    -36.2 ]
[ 0.00     4.19    20.5    -52.9 ]
[ 11.2    -4.30    -0.605    4.42 ]

Adding -11.2 times the first row to the third row produces a new third row.

[ 1.00     2.50    14.1    -36.2 ]
[ 0.00     4.19    20.5    -52.9 ]
[ 0.00   -32.3   -159       409  ]

Dividing the second row by 4.19 produces a new second row.

[ 1.00     2.50    14.1    -36.2 ]
[ 0.00     1.00     4.89   -12.6 ]
[ 0.00   -32.3   -159       409  ]

Adding 32.3 times the second row to the third row produces a new third row.

[ 1.00     2.50    14.1    -36.2 ]
[ 0.00     1.00     4.89   -12.6 ]
[ 0.00     0.00    -1.00     2.00 ]

Multiplying the third row by -1 produces a new third row.

[ 1.00     2.50    14.1    -36.2 ]
[ 0.00     1.00     4.89   -12.6 ]
[ 0.00     0.00     1.00    -2.00 ]

So x3 = -2.00, and using back-substitution, you can obtain x2 = -2.82 and x1 = -0.900. Try checking this "solution" in the original system of equations to see that it is not correct. The correct solution is x1 = 1, x2 = 2, and x3 = -3.


What went wrong with the Gaussian elimination procedure used in Example 3? Clearly, rounding error propagated to such an extent that the final "solution" became hopelessly inaccurate.

Part of the problem is that the original augmented matrix contains entries that differ in orders of magnitude. For instance, the first column of the matrix

[ 0.143    0.357    2.01    -5.17 ]
[ -1.31    0.911    1.99    -5.46 ]
[ 11.2    -4.30    -0.605    4.42 ]

has entries that increase roughly by powers of 10 as one moves down the column. In subsequent elementary row operations, the first row was multiplied by 1.31 and -11.2, and the second row was multiplied by 32.3. When floating point arithmetic is used, such large row multipliers tend to propagate rounding error. For instance, notice what happened during Gaussian elimination when the first row was multiplied by -11.2 and added to the third row:

[ 1.00     2.50    14.1    -36.2 ]        [ 1.00     2.50    14.1    -36.2 ]
[ 0.00     4.19    20.5    -52.9 ]   →    [ 0.00     4.19    20.5    -52.9 ]
[ 11.2    -4.30    -0.605    4.42 ]       [ 0.00   -32.3   -159       409  ]

The third entry in the third row changed from -0.605 to -159; it lost three decimal places of accuracy.

This type of error propagation can be lessened by appropriate row interchanges that produce smaller multipliers. One method of restricting the size of the multipliers is called Gaussian elimination with partial pivoting.

Example 4 shows what happens when this partial pivoting technique is used on the system of linear equations from Example 3.


Gaussian Elimination with Partial Pivoting

1. Find the entry in the first column with the largest absolute value. This entry is called the pivot.
2. Perform a row interchange, if necessary, so that the pivot is in the first row.
3. Divide the first row by the pivot. (This step is unnecessary if the pivot is 1.)
4. Use elementary row operations to reduce the remaining entries in the first column to 0.

The completion of these four steps is called a pass. After performing the first pass, ignore the first row and first column and repeat the four steps on the remaining submatrix. Continue this process until the matrix is in row-echelon form.
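The four-step pass above, together with the per-operation rounding of Example 3, can be sketched in Python. The function names and the exact points at which rounding is applied are assumptions for illustration, so the intermediate digits can differ slightly from the hand computation, but the qualitative outcome is the same: without pivoting, the small pivot 0.143 wrecks the solution, while partial pivoting keeps the multipliers small and the computed solution close to (1, 2, -3).

```python
import math

def rnd(x: float, sig: int = 3) -> float:
    """Round to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

def solve(A, b, sig=3, partial_pivoting=False):
    """Gaussian elimination with back-substitution, rounding every
    intermediate result to `sig` significant digits."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        if partial_pivoting:
            # choose the remaining row whose k-th entry has the largest absolute value
            p = max(range(k, n), key=lambda i: abs(M[i][k]))
            M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        for j in range(k, n + 1):                  # divide the pivot row by the pivot
            M[k][j] = rnd(M[k][j] / piv, sig)
        for i in range(k + 1, n):                  # reduce entries below the pivot to 0
            m = M[i][k]
            for j in range(k, n + 1):
                M[i][j] = rnd(M[i][j] - rnd(m * M[k][j], sig), sig)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back-substitution
        s = M[i][n]
        for j in range(i + 1, n):
            s = rnd(s - rnd(M[i][j] * x[j], sig), sig)
        x[i] = s
    return x

A = [[0.143, 0.357, 2.01], [-1.31, 0.911, 1.99], [11.2, -4.30, -0.605]]
b = [-5.17, -5.46, 4.42]
naive = solve(A, b)                           # badly wrong, as in Example 3
pivoted = solve(A, b, partial_pivoting=True)  # close to the exact (1, 2, -3)
```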


Gaussian Elimination with Partial Pivoting

Use Gaussian elimination with partial pivoting to solve the system of linear equations from Example 3. After each intermediate calculation, round the result to three significant digits.

SOLUTION

As in Example 3, the augmented matrix for this system is

[ 0.143    0.357    2.01    -5.17 ]
[ -1.31    0.911    1.99    -5.46 ]
[ 11.2    -4.30    -0.605    4.42 ]   <- Pivot

In the first column, 11.2 is the pivot because it has the largest absolute value. So, interchange the first and third rows and apply elementary row operations, as follows.

Interchanging the first and third rows:

[ 11.2    -4.30    -0.605    4.42 ]
[ -1.31    0.911    1.99    -5.46 ]
[ 0.143    0.357    2.01    -5.17 ]

Dividing the first row by 11.2 produces a new first row.

[ 1.00    -0.384   -0.0540   0.395 ]
[ -1.31    0.911    1.99    -5.46  ]
[ 0.143    0.357    2.01    -5.17  ]

Adding 1.31 times the first row to the second row produces a new second row.

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     0.408    1.92    -4.94  ]
[ 0.143    0.357    2.01    -5.17  ]

Adding -0.143 times the first row to the third row produces a new third row.

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     0.408    1.92    -4.94  ]
[ 0.00     0.412    2.02    -5.23  ]   <- Pivot

This completes the first pass. For the second pass, consider the submatrix formed by deleting the first row and first column. In this matrix the pivot is 0.412, so interchange the second and third rows and proceed with Gaussian elimination, as follows.

Interchanging the second and third rows:

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     0.412    2.02    -5.23  ]
[ 0.00     0.408    1.92    -4.94  ]

Dividing the second row by 0.412 produces a new second row.

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     1.00     4.90    -12.7  ]
[ 0.00     0.408    1.92    -4.94  ]

Adding -0.408 times the second row to the third row produces a new third row.

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     1.00     4.90    -12.7  ]
[ 0.00     0.00    -0.0800   0.240 ]

This completes the second pass, and the entire procedure can be completed by dividing the third row by -0.0800, as follows.

[ 1.00    -0.384   -0.0540   0.395 ]
[ 0.00     1.00     4.90    -12.7  ]
[ 0.00     0.00     1.00    -3.00  ]

So x3 = -3.00, and back-substitution produces x2 = 2.00 and x1 = 1.00, which agrees with the exact solution of x1 = 1, x2 = 2, and x3 = -3.

REMARK

Note that the row multipliers used in Example 4 are 1.31, -0.143, and -0.408, as contrasted with the multipliers of 1.31, -11.2, and 32.3 encountered in Example 3.


The term partial in partial pivoting refers to the fact that in each pivot search, only entries in the first column of the matrix or submatrix are considered. This search can be extended to include every entry in the coefficient matrix or submatrix; the resulting technique is called Gaussian elimination with complete pivoting. Unfortunately, neither complete pivoting nor partial pivoting solves all problems of rounding error. Some systems of linear equations, called ill-conditioned systems, are extremely sensitive to numerical errors. For such systems, pivoting is not much help. A common type of system of linear equations that tends to be ill-conditioned is one for which the determinant of the coefficient matrix is nearly zero. The next example illustrates this problem.

An Ill-Conditioned System of Linear Equations

Use Gaussian elimination to solve the system of linear equations.

x +           y =  0
x + (401/400)y = 20

Round each intermediate calculation to four significant digits.

SOLUTION

Gaussian elimination with rational arithmetic produces the exact solution y = 8000 and x = -8000. But rounding 401/400 = 1.0025 to four significant digits introduces a large rounding error, as follows.

[ 1    1        0  ]   →   [ 1    1       0  ]   →   [ 1    1       0      ]
[ 1    1.002   20  ]       [ 0    0.002  20  ]       [ 0    1.00   10,000  ]

So, y = 10,000 and back-substitution produces

x = -y = -10,000.

This "solution" represents a percentage error of 25% for both the x-value and the y-value. Note that this error was caused by a rounding error of only 0.0005 (when 1.0025 was rounded to 1.002).
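Example 5 can be replayed by comparing rational arithmetic against the rounded coefficient. `Fraction` keeps 401/400 exact, while the rounded version uses 1.002 as in the text (the variable names are illustrative):

```python
from fractions import Fraction

# Exact elimination on  x + y = 0,  x + (401/400)y = 20:
# subtracting the first equation leaves (401/400 - 1)y = 20.
y_exact = Fraction(20) / (Fraction(401, 400) - 1)   # 8000
x_exact = -y_exact                                  # -8000

# Rounding 401/400 = 1.0025 to four significant digits gives 1.002,
# so the eliminated equation becomes 0.002y = 20.
y_rounded = 20 / (1.002 - 1)                        # about 10,000
x_rounded = -y_rounded

percentage_error = abs(y_rounded - y_exact) / y_exact   # about 0.25, i.e. 25%
```

A rounding error of 0.0005 in one coefficient moves the solution by 25%, which is what makes the system ill-conditioned.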

LINEAR ALGEBRA APPLIED

Pharmacokinetics researchers study the absorption, distribution, and elimination of a drug from the body. They use mathematical formulas to model data collected, analyze the credibility of the data, account for environmental error factors, and detect subpopulations of subjects who share certain characteristics.

The mathematical models are used to develop dosage regimens for a patient to achieve a target goal at a target time, so it is vital that the computations are not skewed by rounding errors. Researchers rely heavily on pharmacokinetic and statistical software packages that use numerical methods like the one shown in this chapter.

luchschen/Shutterstock.com


10.1 Exercises

Floating Point Form  In Exercises 1-8, express the real number in floating point form.

1. 4281        2. 321.61      3. -2.62       4. -21.001
5. -0.00121    6. 0.00026     7. 1/8         8. 16 1/2

Finding the Stored Value  In Exercises 9-16, determine the stored value of the real number in a computer that rounds to (a) three significant digits and (b) four significant digits.

9. 331         10. 21.4       11. -92.646    12. 216.964
13. 7/16       14. 5/32       15. 1/7        16. 1/6

Rounding Error  In Exercises 17 and 18, evaluate the determinant of the matrix, rounding each intermediate calculation to three significant digits. Then compare the rounded value with the exact solution.

17. [ 1.24    56.00 ]       18. [ 2.12    4.22 ]
    [ 66.00    1.02 ]           [ 1.07    2.12 ]

Gaussian Elimination  In Exercises 19 and 20, use Gaussian elimination to solve the system of linear equations. After each intermediate calculation, round the result to three significant digits. Then compare this solution with the exact solution.

19. 1.21x + 16.7y = 28.8        20. 14.4x + 17.1y = 31.5
    4.66x + 64.4y = 111.0           81.6x + 97.4y = 179.0

Partial Pivoting  In Exercises 21-24, (a) use Gaussian elimination without partial pivoting to solve the system, rounding to three significant digits after each intermediate calculation, (b) use partial pivoting to solve the same system, again rounding to three significant digits after each intermediate calculation, and (c) compare both solutions with the exact solution provided.

21. 6x + 6.20y = 12.20
    6x + 1.04y = 12.04
    (Exact: x = 1, y = 1)

22. 99.00x - 449.0y = 541.00
    0.51x + 992.6y = 997.7
    (Exact: x = 10, y = 1)

23. 2x - 4.05y + 0.05000z = -0.385
    x + 4.00y - 0.00600z = -0.21
    x + 4.01y + 0.00445z = 0.00
    (Exact: x = -0.49, y = 0.1, z = 20)

24. 21.400x + 61.12y + 1.180z = 83.7
    4.810x - 5.92y + 1.110z = 0.0
    0.007x + 61.20y + 0.093z = 61.3
    (Exact: x = 1, y = 1, z = 1)

Solving an Ill-Conditioned System  In Exercises 25 and 26, use Gaussian elimination to solve the ill-conditioned system of linear equations, rounding each intermediate calculation to three significant digits. Then compare this solution with the exact solution provided.

25. x - y = 2
    x - (600/601)y = 20
    (Exact: x = 10,820, y = 10,818)

26. x - (800/801)y = 10
    -x + y = 50
    (Exact: x = 48,010, y = 48,060)

27. Comparing Ill-Conditioned Systems  Solve the following ill-conditioned systems and compare their solutions.

(a) x + y = 2               (b) x + y = 2
    x + 1.0001y = 2             x + 1.0001y = 2.0001

28. By inspection, determine whether the following system is more likely to be solvable by Gaussian elimination with partial pivoting, or is more likely to be ill-conditioned. Explain your reasoning.

0.216x1 + 0.044x2 + 8.15x3 = 7.762
2.05x1 + 1.02x2 + 0.937x3 = 4.183
23.1x1 + 6.05x2 + 0.021x3 = 52.271

29. The Hilbert matrix of size n × n is the symmetric matrix Hn = [aij], where aij = 1/(i + j - 1). As n increases, the Hilbert matrix becomes more and more ill-conditioned. Use Gaussian elimination to solve the system of linear equations shown below, rounding to two significant digits after each intermediate calculation. Compare this solution with the exact solution (x1 = 3, x2 = -24, and x3 = 30).

x1 + (1/2)x2 + (1/3)x3 = 1
(1/2)x1 + (1/3)x2 + (1/4)x3 = 1
(1/3)x1 + (1/4)x2 + (1/5)x3 = 1

30. Repeat Exercise 29 for H4x = b, where b = [1 1 1 1]^T, rounding to four significant digits. Compare this solution with the exact solution (x1 = -4, x2 = 60, x3 = -180, and x4 = 140).

31. The inverse of the n × n Hilbert matrix Hn has integer entries. Use a software program or graphing utility to calculate the inverses of the Hilbert matrices Hn for n = 4, 5, 6, and 7. For what values of n do the inverses appear to be accurate?



10.2 Iterative Methods for Solving Linear Systems

Use the Jacobi iterative method to solve a system of linear equations.

Use the Gauss-Seidel iterative method to solve a system of linear equations.

Determine whether an iterative method converges or diverges, and use strictly diagonally dominant coefficient matrices to obtain convergence.

THE JACOBI METHOD

As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after a single application of Gaussian elimination. Once a "solution" has been obtained, Gaussian elimination offers no method of refinement. The lack of refinement can be a problem because, as the preceding section shows, Gaussian elimination is sensitive to rounding error.

Numerical techniques more commonly involve an iterative method. For instance, in calculus, Newton's iterative method is used to approximate the zeros of a differentiable function. This section introduces two iterative methods for approximating the solution of a system of n linear equations in n variables.

The first iterative technique is called the Jacobi method, after Carl Gustav Jacob Jacobi (1804-1851). This method makes two assumptions: (1) that the system

a11x1 + a12x2 + . . . + a1nxn = b1
a21x1 + a22x2 + . . . + a2nxn = b2
  .
  .
  .
an1x1 + an2x2 + . . . + annxn = bn

has a unique solution and (2) that the coefficient matrix A has no zeros on its main diagonal. If any of the diagonal entries a11, a22, . . . , ann are zero, then rows or columns must be interchanged to obtain a coefficient matrix that has all nonzero entries on the main diagonal.

To begin the Jacobi method, solve the first equation for x1, the second equation for x2, and so on, as follows.

x1 = (1/a11)(b1 - a12x2 - a13x3 - . . . - a1nxn)
x2 = (1/a22)(b2 - a21x1 - a23x3 - . . . - a2nxn)
  .
  .
  .
xn = (1/ann)(bn - an1x1 - an2x2 - . . . - an,n-1xn-1)

Then make an initial approximation of the solution

(x1, x2, x3, . . . , xn)     Initial approximation

and substitute these values of xi on the right-hand sides of the rewritten equations to obtain the first approximation. After this procedure has been completed, one iteration has been performed. In the same way, the second approximation is formed by substituting the first approximation's x-values on the right-hand sides of the rewritten equations. By repeated iterations, you will form a sequence of approximations that often converges to the actual solution. This procedure is illustrated in Example 1.

9781133110873_1002.qxp 3/10/12 7:01 AM Page 495


Applying the Jacobi Method

Use the Jacobi method to approximate the solution of the following system of linear equations.

 5x1 - 2x2 + 3x3 = -1
-3x1 + 9x2 +  x3 =  2
 2x1 -  x2 - 7x3 =  3

Continue the iterations until two successive approximations are identical when rounded to three significant digits.

SOLUTION

To begin, write the system in the form

x1 = -1/5 + (2/5)x2 - (3/5)x3
x2 =  2/9 + (3/9)x1 - (1/9)x3
x3 = -3/7 + (2/7)x1 - (1/7)x2.

Because you do not know the actual solution, choose

x1 = 0, x2 = 0, x3 = 0     Initial approximation

as a convenient initial approximation. So, the first approximation is

x1 = -1/5 + (2/5)(0) - (3/5)(0) = -0.200
x2 =  2/9 + (3/9)(0) - (1/9)(0) =  0.222
x3 = -3/7 + (2/7)(0) - (1/7)(0) = -0.429.

Now substitute these x-values into the system to obtain the second approximation.

x1 = -1/5 + (2/5)(0.222) - (3/5)(-0.429) =  0.146
x2 =  2/9 + (3/9)(-0.200) - (1/9)(-0.429) =  0.203
x3 = -3/7 + (2/7)(-0.200) - (1/7)(0.222) = -0.517

Continuing this procedure, you obtain the sequence of approximations shown in the table.

n     0       1        2        3        4        5        6        7
x1    0.000   -0.200   0.146    0.191    0.181    0.185    0.186    0.186
x2    0.000   0.222    0.203    0.328    0.332    0.329    0.331    0.331
x3    0.000   -0.429   -0.517   -0.416   -0.421   -0.424   -0.423   -0.423

Because the last two columns in the table are identical, you can conclude that to three significant digits the solution is

x1 = 0.186, x2 = 0.331, x3 = -0.423.

For the system of linear equations in Example 1, the Jacobi method is said to converge. That is, repeated iterations succeed in producing an approximation that is correct to three significant digits. As is generally true for iterative methods, greater accuracy would require more iterations.
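The Jacobi iteration can be sketched compactly in Python and applied to the system of Example 1. The function name and loop structure are illustrative, not from the text:

```python
def jacobi(A, b, x0, iterations):
    """Jacobi method: every component of the new approximation is
    computed from the previous approximation only."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# the system from Example 1
A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
approx = jacobi(A, b, [0, 0, 0], 20)   # near (0.186, 0.331, -0.423)
```

Because the whole list `x` is rebuilt in one expression, every new component uses only the previous sweep's values, which is exactly what distinguishes Jacobi from Gauss-Seidel.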


THE GAUSS-SEIDEL METHOD

You will now look at a modification of the Jacobi method called the Gauss-Seidel method, named after Carl Friedrich Gauss (1777-1855) and Philipp L. Seidel (1821-1896). This modification is no more difficult to use than the Jacobi method, and it often requires fewer iterations to produce the same degree of accuracy.

With the Jacobi method, the values of xi obtained in the nth approximation remain unchanged until the entire (n + 1)th approximation has been calculated. On the other hand, with the Gauss-Seidel method you use the new values of each xi as soon as they are known. That is, once you have determined x1 from the first equation, its value is then used in the second equation to obtain the new x2. Similarly, the new x1 and x2 are used in the third equation to obtain the new x3, and so on. This procedure is demonstrated in Example 2.

Applying the Gauss-Seidel Method

Use the Gauss-Seidel iteration method to approximate the solution to the system of equations in Example 1.

SOLUTION

The first computation is identical to that in Example 1. That is, using (x1, x2, x3) = (0, 0, 0) as the initial approximation, you obtain the new value of x1.

x1 = -1/5 + (2/5)(0) - (3/5)(0) = -0.200

Now that you have a new value of x1, use it to compute a new value of x2. That is,

x2 = 2/9 + (3/9)(-0.200) - (1/9)(0) = 0.156.

Similarly, use x1 = -0.200 and x2 = 0.156 to compute a new value of x3. That is,

x3 = -3/7 + (2/7)(-0.200) - (1/7)(0.156) = -0.508.

So, the first approximation is x1 = -0.200, x2 = 0.156, and x3 = -0.508. Now performing the next iteration produces

x1 = -1/5 + (2/5)(0.156) - (3/5)(-0.508) =  0.167
x2 =  2/9 + (3/9)(0.167) - (1/9)(-0.508) =  0.334
x3 = -3/7 + (2/7)(0.167) - (1/7)(0.334) = -0.429.

Continued iterations produce the sequence of approximations shown in the table.

n     0       1        2        3        4        5        6
x1    0.000   -0.200   0.167    0.191    0.187    0.186    0.186
x2    0.000   0.156    0.334    0.334    0.331    0.331    0.331
x3    0.000   -0.508   -0.429   -0.422   -0.422   -0.423   -0.423

Note that after only six iterations of the Gauss-Seidel method, you achieved the same accuracy as was obtained with seven iterations of the Jacobi method in Example 1.
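The only change from the Jacobi sketch is that each new component is stored, and therefore used, immediately within the same sweep (again an illustrative sketch, not code from the text):

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel method: each newly computed component is used
    immediately within the same sweep."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            # x[j] for j < i already holds this sweep's new value
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# the system from Example 1
A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
approx = gauss_seidel(A, b, [0, 0, 0], 12)   # near (0.186, 0.331, -0.423)
```

Updating `x[i]` in place is the whole difference from the Jacobi version, and it is why Gauss-Seidel typically needs fewer sweeps for the same accuracy.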


CONVERGENCE AND DIVERGENCE

Neither of the iterative methods presented in this section always converges. That is, it is possible to apply the Jacobi method or the Gauss-Seidel method to a system of linear equations and obtain a divergent sequence of approximations. In such cases, it is said that the method diverges.

An Example of Divergence

Apply the Jacobi method to the system

 x1 - 5x2 = -4
7x1 -  x2 =  6

using the initial approximation (x1, x2) = (0, 0), and show that the method diverges.

SOLUTION

As usual, begin by rewriting the system in the form

x1 = -4 + 5x2
x2 = -6 + 7x1.

Then the initial approximation (0, 0) produces the first approximation

x1 = -4 + 5(0) = -4
x2 = -6 + 7(0) = -6.

Repeated iterations produce the sequence of approximations shown in the table.

n     0   1    2     3      4       5       6         7
x1    0   -4   -34   -174   -1224   -6124   -42,874   -214,374
x2    0   -6   -34   -244   -1224   -8574   -42,874   -300,124

For this particular system of linear equations you can determine that the actual solution is x1 = 1 and x2 = 1. So you can see from the table that the approximations provided by the Jacobi method become progressively worse instead of better, and you can conclude that the method diverges.

The problem of divergence in Example 3 is not resolved by using the Gauss-Seidel method rather than the Jacobi method. In fact, for this particular system the Gauss-Seidel method diverges even more rapidly, as shown in the table.

n     0   1     2       3         4            5
x1    0   -4    -174    -6124     -214,374     -7,503,124
x2    0   -34   -1224   -42,874   -1,500,624   -52,521,874

You will now look at a special type of coefficient matrix A, called a strictly diagonally dominant matrix, for which it is guaranteed that both methods will converge.
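Example 3's divergence is visible in a few lines of Python; the tuple assignment updates both components simultaneously, as the Jacobi method requires (an illustrative sketch):

```python
# Jacobi sweeps for  x1 - 5*x2 = -4,  7*x1 - x2 = 6,
# rewritten as  x1 = -4 + 5*x2,  x2 = -6 + 7*x1
x1, x2 = 0.0, 0.0
for _ in range(5):
    x1, x2 = -4 + 5 * x2, -6 + 7 * x1   # simultaneous (Jacobi) update

# after five sweeps the iterates have blown up far from the
# actual solution (1, 1), reproducing the table above
```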


Definition of Strictly Diagonally Dominant Matrix

An n × n matrix A is strictly diagonally dominant when the absolute value of each entry on the main diagonal is greater than the sum of the absolute values of the other entries in the same row. That is,

|a11| > |a12| + |a13| + . . . + |a1n|
|a22| > |a21| + |a23| + . . . + |a2n|
  .
  .
  .
|ann| > |an1| + |an2| + . . . + |an,n-1|.

Strictly Diagonally Dominant Matrices

Which of the systems of linear equations shown below has a strictly diagonally dominant coefficient matrix?

a. 3x1 -  x2 = -4
   2x1 + 5x2 =  2

b. 4x1 + 2x2 -  x3 = -1
    x1       + 2x3 = -4
   3x1 - 5x2 +  x3 =  3

SOLUTION

a. The coefficient matrix

A = [ 3   -1 ]
    [ 2    5 ]

is strictly diagonally dominant because |3| > |-1| and |5| > |2|.

b. The coefficient matrix

A = [ 4    2   -1 ]
    [ 1    0    2 ]
    [ 3   -5    1 ]

is not strictly diagonally dominant because the entries in the second and third rows do not conform to the definition. For instance, in the second row, a21 = 1, a22 = 0, a23 = 2, and it is not true that |a22| > |a21| + |a23|. Interchanging the second and third rows in the original system of linear equations, however, produces the coefficient matrix

A' = [ 4    2   -1 ]
     [ 3   -5    1 ]
     [ 1    0    2 ],

which is strictly diagonally dominant.

The next theorem, which is listed without proof, states that strict diagonal dominance is sufficient for the convergence of either the Jacobi method or the Gauss-Seidel method.

THEOREM 10.1  Convergence of the Jacobi and Gauss-Seidel Methods

If A is strictly diagonally dominant, then the system of linear equations given by Ax = b has a unique solution to which the Jacobi method and the Gauss-Seidel method will converge for any initial approximation.
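The row condition in the definition translates directly into code; `is_strictly_diagonally_dominant` is a hypothetical helper name:

```python
def is_strictly_diagonally_dominant(A):
    """True when |a_ii| exceeds the sum of |a_ij|, j != i, in every row."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

# the matrices from Example 4
assert is_strictly_diagonally_dominant([[3, -1], [2, 5]])
assert not is_strictly_diagonally_dominant([[4, 2, -1], [1, 0, 2], [3, -5, 1]])
# after interchanging the second and third rows:
assert is_strictly_diagonally_dominant([[4, 2, -1], [3, -5, 1], [1, 0, 2]])
```

Such a check is a cheap way to decide, before iterating, whether Theorem 10.1 guarantees convergence.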


In Example 3, you looked at a system of linear equations for which the Jacobi and Gauss-Seidel methods diverged. In the next example, you can see that by interchanging the rows of the system in Example 3, you can obtain a coefficient matrix that is strictly diagonally dominant. After this interchange, convergence is assured.

Interchanging Rows to Obtain Convergence

Interchange the rows of the system in Example 3 to obtain one with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to four significant digits.

SOLUTION

Begin by interchanging the two rows of the system to obtain

7x1 - x2 = 6
 x1 - 5x2 = -4.

Note that the coefficient matrix of this system is strictly diagonally dominant. Then solve for x1 and x2, as shown below.

x1 = 6/7 + (1/7)x2
x2 = 4/5 + (1/5)x1

Using the initial approximation (x1, x2) = (0, 0), obtain the sequence of approximations shown in the table.

n    0       1       2       3       4       5
x1   0.0000  0.8571  0.9959  0.9999  1.000   1.000
x2   0.0000  0.9714  0.9992  1.000   1.000   1.000

So, the solution is x1 = 1 and x2 = 1.

REMARK

Do not conclude from Theorem 10.1 that strict diagonal dominance is a necessary condition for convergence of the Jacobi or Gauss-Seidel methods (see Exercise 21).
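The Gauss-Seidel sweeps for the reordered system 7x1 - x2 = 6, x1 - 5x2 = -4 can be sketched in a few lines of Python (not from the text; the function name is illustrative). The key point is that each sweep reuses components already updated in that sweep:

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel: solve the i-th equation for x[i], reusing fresh values."""
    x = list(x0)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Strictly diagonally dominant system from the example above
x = gauss_seidel([[7, -1], [1, -5]], [6, -4], [0, 0], 5)
print([round(v, 4) for v in x])   # converges to [1.0, 1.0]
```

Five sweeps already agree with the exact solution (1, 1) to four significant digits, consistent with the table in the example.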

LINEAR ALGEBRA APPLIED

The Jacobi method can be applied to boundary value problems to analyze a region having a quantity distributed evenly across it. A grid, or mesh, is drawn over the region and values are estimated at the grid points.

Some illustration software programs have a gradient mesh feature that evenly distributes color in a region between and around user-defined grid points. In the diagram described below, a grid has been superimposed over a triangular region, and the exterior points of the grid have been specified by the user as percentages of blue. The value of color at each interior grid point is equal to the average of the colors at adjacent grid points. For the three interior points c1, c2, and c3, this gives

c1 = 0.25(80 + 80 + c3 + 60)
c2 = 0.25(60 + c3 + 40 + 40)
c3 = 0.25(c1 + 70 + 50 + c2)

As the mesh spacing decreases, the linear system of equations becomes larger. Numerical methods allow the illustration software to determine colors at each individual point along the mesh, resulting in a smooth distribution of color.

[Figure: a triangular region with exterior grid values 100; 90, 80; 80, 60; 70, 40; and 60, 50, 40, 30, 20 along the base, and interior grid points c1, c2, c3.]
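Moving the unknowns in the interior-color equations to the left side gives a strictly diagonally dominant system, so the Jacobi method applies. A Python sketch (not from the text; the function name is illustrative) using the grid values above:

```python
def jacobi(A, b, x0, iterations):
    """Jacobi: each sweep uses only the values from the previous sweep."""
    x = list(x0)
    n = len(b)
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# c1 - 0.25 c3 = 55,  c2 - 0.25 c3 = 35,  c3 - 0.25 c1 - 0.25 c2 = 30
A = [[1, 0, -0.25], [0, 1, -0.25], [-0.25, -0.25, 1]]
b = [55, 35, 30]
c = jacobi(A, b, [0, 0, 0], 30)
print([round(v, 2) for v in c])   # [70.0, 50.0, 60.0]
```

The iteration settles on c1 = 70, c2 = 50, c3 = 60, which you can verify satisfies all three averaging equations exactly.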


10.2 Exercises

The Jacobi Method In Exercises 1-4, apply the Jacobi method to the system of linear equations, using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0). Continue performing iterations until two successive approximations are identical when rounded to three significant digits.

1. 3x1 - x2 = 2
   x1 + 4x2 = 5

2. -4x1 + 2x2 = -6
   3x1 - 5x2 = 1

3. 2x1 - x2 = 2
   x1 + 3x2 - x3 = -2
   -x1 + x2 + 3x3 = -6

4. 4x1 + x2 + x3 = 7
   x1 + 7x2 + 2x3 = -2
   3x1 + 4x3 = 11

The Gauss-Seidel Method In Exercises 5-8, apply the Gauss-Seidel method to the system of linear equations in the given exercise.

5. Exercise 1    6. Exercise 2
7. Exercise 3    8. Exercise 4

Showing Divergence In Exercises 9-12, show that the Gauss-Seidel method diverges for the system using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0).

9. x1 - 2x2 = -1
   2x1 + x2 = 3

10. -x1 + 4x2 = 1
    3x1 - 2x2 = 2

11. 2x1 - 3x2 = -7
    x1 + 3x2 - 10x3 = 9
    3x1 + x3 = 13

12. x1 + 3x2 - x3 = 5
    3x1 - x2 = 5
    x2 + 2x3 = 1

Strictly Diagonally Dominant Matrices In Exercises 13-16, determine whether the matrix is strictly diagonally dominant.

13. A = [2 1; 3 5]
14. A = [-1 -2; 0 1]
15. A = [12 6 0; 2 -3 2; 0 6 13]
16. A = [7 5 -1; 1 -4 1; 0 2 -3]

Interchanging Rows In Exercises 17-20, interchange the rows of the system of linear equations in the given exercise to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits.

17. Exercise 9     18. Exercise 10
19. Exercise 11    20. Exercise 12

Showing Convergence In Exercises 21 and 22, the coefficient matrix of the system of linear equations is not strictly diagonally dominant. Show that the Jacobi and Gauss-Seidel methods converge using an initial approximation of (x1, x2, . . . , xn) = (0, 0, . . . , 0).

21. -4x1 + 5x2 = 1
    x1 + 2x2 = 3

22. 4x1 + 2x2 - 2x3 = 0
    x1 - 3x2 - x3 = 7
    3x1 - x2 + 4x3 = 5

True or False? In Exercises 23-25, determine whether each statement is true or false. If a statement is true, give a reason or cite an appropriate statement from the text. If a statement is false, provide an example that shows the statement is not true in all cases or cite an appropriate statement from the text.

23. The Jacobi method is said to converge when it produces a sequence of repeated iterations accurate to within a specific number of decimal places.

24. If the Jacobi method or the Gauss-Seidel method diverges, then the system has no solution.

25. If a matrix A is strictly diagonally dominant, then the system of linear equations represented by Ax = b has no unique solution.

26. Consider the system of linear equations

ax1 + 5x2 + 2x3 = -1
3x1 + bx2 + x3 = 9
-2x1 + x2 + cx3 = 19.

(a) Describe all the values of a, b, and c that will allow you to use the Jacobi method to approximate the solution of this system.
(b) Describe all the values of a, b, and c that will guarantee that the Jacobi and Gauss-Seidel methods converge.
(c) Let a = 8, b = 1, and c = 4. What, if anything, can you determine about the convergence or divergence of the Jacobi and Gauss-Seidel methods? Explain.


10.3 Power Method for Approximating Eigenvalues

Determine the existence of a dominant eigenvalue, and find the corresponding dominant eigenvector.

Use the power method with scaling to approximate a dominant eigenvector of a matrix, and use the Rayleigh quotient to compute its dominant eigenvalue.

DOMINANT EIGENVALUES AND DOMINANT EIGENVECTORS

In Chapter 7 the eigenvalues of an n x n matrix A were obtained by solving its characteristic equation

λ^n + c_{n-1}λ^{n-1} + c_{n-2}λ^{n-2} + . . . + c0 = 0.

For large values of n, polynomial equations such as this one are difficult and time consuming to solve. Moreover, numerical techniques for approximating roots of polynomial equations of high degree are sensitive to rounding errors. This section introduces an alternative method for approximating eigenvalues. As presented here, the method can be used only to find the eigenvalue of A that is largest in absolute value; this eigenvalue is called the dominant eigenvalue of A. Although this restriction may seem severe, dominant eigenvalues are of primary interest in many physical applications.

Definitions of Dominant Eigenvalue and Dominant Eigenvector

Let λ1, λ2, . . . , and λn be the eigenvalues of an n x n matrix A. λi is called the dominant eigenvalue of A when

|λi| > |λj|,  i ≠ j.

The eigenvectors corresponding to λi are called dominant eigenvectors of A.

Not every matrix has a dominant eigenvalue. For instance, the matrices

A = [1   0]        B = [2  0  0]
    [0  -1]   and      [0  2  0]
                       [0  0  1]

have no dominant eigenvalues.

REMARK

Note that the eigenvalues of A are λ1 = 1 and λ2 = -1, and the eigenvalues of B are λ1 = 2, λ2 = 2, and λ3 = 1.

Finding a Dominant Eigenvalue

Find the dominant eigenvalue and corresponding eigenvectors of the matrix

A = [2  -12]
    [1   -5].

SOLUTION

From Example 4 in Section 7.1, the characteristic polynomial of A is λ² + 3λ + 2 = (λ + 1)(λ + 2). So, the eigenvalues of A are λ1 = -1 and λ2 = -2, of which the dominant one is λ2 = -2 because |-2| > |-1|. From the same example, the dominant eigenvectors of A (those corresponding to λ2 = -2) are of the form

x = t(3, 1),  t ≠ 0.


THE POWER METHOD

Like the Jacobi and Gauss-Seidel methods, the power method for approximating eigenvalues is iterative. First assume that the matrix A has a dominant eigenvalue with corresponding dominant eigenvectors. Then choose an initial approximation x0 of one of the dominant eigenvectors of A. This initial approximation must be a nonzero vector in R^n. Finally, form the sequence

x1 = Ax0
x2 = Ax1 = A(Ax0) = A²x0
x3 = Ax2 = A(A²x0) = A³x0
  .
  .
  .
xk = Axk-1 = A(A^{k-1}x0) = A^k x0.

For large powers of k, and by properly scaling this sequence, you obtain a good approximation of the dominant eigenvector of A. This is illustrated in Example 2.

Approximating a Dominant Eigenvector by the Power Method

Complete six iterations of the power method to approximate a dominant eigenvector of

A = [2  -12]
    [1   -5].

SOLUTION

Begin with an initial nonzero approximation of x0 = (1, 1). Then obtain the approximations shown below.

Iteration                 "Scaled" Approximation
x1 = Ax0 = (-10, -4)      -4(2.50, 1.00)
x2 = Ax1 = (28, 10)       10(2.80, 1.00)
x3 = Ax2 = (-64, -22)     -22(2.91, 1.00)
x4 = Ax3 = (136, 46)      46(2.96, 1.00)
x5 = Ax4 = (-280, -94)    -94(2.98, 1.00)
x6 = Ax5 = (568, 190)     190(2.99, 1.00)

Note that the approximations in Example 2 are approaching scalar multiples of (3, 1), which is a dominant eigenvector of the matrix A from Example 1. In this case the dominant eigenvalue of A is known to be λ = -2. For the sake of demonstration, however, assume that the dominant eigenvalue of A is unknown. The next theorem provides a formula for determining the eigenvalue corresponding to a given eigenvector. This theorem is credited to the English physicist John William Rayleigh (1842–1919).
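The unscaled iteration xk = A xk-1 of Example 2 can be reproduced in a few lines of Python (a sketch, not part of the text; the function name is illustrative):

```python
def power_iterate(A, x, k):
    """Apply x <- A x, k times, for a 2 x 2 matrix A as in Example 2."""
    for _ in range(k):
        x = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
    return x

A = [[2, -12], [1, -5]]
x6 = power_iterate(A, [1, 1], 6)
print(x6)              # [568, 190]
print(x6[0] / x6[1])   # ratio near 3, approaching the eigenvector (3, 1)
```

The sixth iterate matches the text's x6 = (568, 190), and the ratio of its components approaches 3, the direction of the dominant eigenvector.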


THEOREM 10.2 Determining an Eigenvalue from an Eigenvector

If x is an eigenvector of a matrix A, then its corresponding eigenvalue is given by

λ = (Ax · x)/(x · x).

This quotient is called the Rayleigh quotient.

PROOF

Because x is an eigenvector of A, it follows that Ax = λx and you can write

(Ax · x)/(x · x) = (λx · x)/(x · x) = λ(x · x)/(x · x) = λ.

In cases for which the power method generates a good approximation of a dominant eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the dominant eigenvalue. The use of the Rayleigh quotient is demonstrated in Example 3.

Approximating a Dominant Eigenvalue

Use the result of Example 2 to approximate the dominant eigenvalue of the matrix

A = [2  -12]
    [1   -5].

SOLUTION

In Example 2, the sixth iteration of the power method produced

x6 = (568, 190) = 190(2.99, 1.00).

With x = (2.99, 1) as the approximation of a dominant eigenvector of A, use the Rayleigh quotient to obtain an approximation of the dominant eigenvalue of A. First compute the product

Ax = [2  -12][2.99]   [-6.02]
     [1   -5][1.00] = [-2.01].

Then, because

Ax · x = (-6.02)(2.99) + (-2.01)(1) = -20.0

and

x · x = (2.99)(2.99) + (1)(1) = 9.94

you can compute the Rayleigh quotient to be

λ = (Ax · x)/(x · x) = -20.0/9.94 ≈ -2.01

which is a good approximation of the dominant eigenvalue λ = -2.

As shown in Example 2, the power method tends to produce approximations with large entries. In practice it is best to "scale down" each approximation before proceeding to the next iteration. One way to accomplish this scaling is to determine the component of Axi that has the largest absolute value and multiply the vector Axi by the reciprocal of this component. The resulting vector will then have components whose absolute values are less than or equal to 1.

REMARK

Other scaling techniques are possible. For examples, see Exercises 23 and 24.


The Power Method with Scaling

Calculate six iterations of the power method with scaling to approximate a dominant eigenvector of the matrix

A = [ 1  2  0]
    [-2  1  2]
    [ 1  3  1].

Use x0 = (1, 1, 1) as the initial approximation.

SOLUTION

One iteration of the power method produces

Ax0 = (3, 1, 5)

and by scaling you obtain the approximation

x1 = (1/5)(3, 1, 5) = (0.60, 0.20, 1.00).

A second iteration yields

Ax1 = (1.00, 1.00, 2.20)

and

x2 = (1/2.20)(1.00, 1.00, 2.20) = (0.45, 0.45, 1.00).

Continuing this process produces the sequence of approximations shown in the table.

     x0      x1      x2      x3      x4      x5      x6
    (1.00,  (0.60,  (0.45,  (0.48,  (0.50,  (0.50,  (0.50,
     1.00,   0.20,   0.45,   0.55,   0.51,   0.50,   0.50,
     1.00)   1.00)   1.00)   1.00)   1.00)   1.00)   1.00)

From the table, the approximate dominant eigenvector of A is

x = (0.50, 0.50, 1.00).

Then use the Rayleigh quotient to approximate the dominant eigenvalue of A to be λ = 3. (For this example, you can check that the approximations of x and λ are exact.)

REMARK

Note that the scaling factors used to obtain the vectors in the table,

x1: 5.00,  x2: 2.20,  x3: 2.80,  x4: 3.13,  x5: 3.03,  x6: 3.00

are approaching the dominant eigenvalue λ = 3.
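Example 4's scaled iteration can be sketched in Python (not from the text; the function name is illustrative). Dividing by the largest-magnitude component after each multiplication mirrors the scaling used in the table:

```python
def power_method_scaled(A, x, iterations):
    """Power method: multiply by A, then scale by the largest component."""
    n = len(x)
    for _ in range(iterations):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(x, key=abs)          # the scaling factor approaches lambda
        x = [v / m for v in x]
    return x

A = [[1, 2, 0], [-2, 1, 2], [1, 3, 1]]
x = power_method_scaled(A, [1.0, 1.0, 1.0], 6)
print([round(v, 2) for v in x])   # [0.5, 0.5, 1.0]
```

Six iterations already agree with the exact dominant eigenvector (0.5, 0.5, 1) to two decimals, matching the x6 column of the table.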


In Example 4, the power method with scaling converges to a dominant eigenvector. The next theorem states that a sufficient condition for convergence of the power method is that the matrix be diagonalizable (and have a dominant eigenvalue).

THEOREM 10.3 Convergence of the Power Method

If A is an n x n diagonalizable matrix with a dominant eigenvalue, then there exists a nonzero vector x0 such that the sequence of vectors given by

Ax0, A²x0, A³x0, A⁴x0, . . . , A^k x0, . . .

approaches a multiple of the dominant eigenvector of A.

PROOF

Because A is diagonalizable, from Theorem 7.5 it has n linearly independent eigenvectors x1, x2, . . . , xn with corresponding eigenvalues λ1, λ2, . . . , λn. Assume that these eigenvalues are ordered so that λ1 is the dominant eigenvalue (with a corresponding eigenvector of x1). Because the n eigenvectors x1, x2, . . . , xn are linearly independent, they must form a basis for R^n. For the initial approximation x0, choose a nonzero vector such that the linear combination

x0 = c1x1 + c2x2 + . . . + cnxn

has a nonzero leading coefficient. (If c1 = 0, the power method may not converge, and a different x0 must be used as the initial approximation. See Exercises 19 and 20.) Now, multiplying both sides of this equation by A produces

Ax0 = A(c1x1 + c2x2 + . . . + cnxn)
    = c1(Ax1) + c2(Ax2) + . . . + cn(Axn)
    = c1(λ1x1) + c2(λ2x2) + . . . + cn(λnxn).

Repeated multiplication of both sides of this equation by A produces

A^k x0 = c1(λ1^k x1) + c2(λ2^k x2) + . . . + cn(λn^k xn)

which implies that

A^k x0 = λ1^k [c1x1 + c2(λ2/λ1)^k x2 + . . . + cn(λn/λ1)^k xn].

Now, from the original assumption that λ1 is larger in absolute value than the other eigenvalues, it follows that each of the fractions

λ2/λ1, λ3/λ1, . . . , λn/λ1

is less than 1 in absolute value. So, as k approaches infinity, each of the factors

(λ2/λ1)^k, (λ3/λ1)^k, . . . , (λn/λ1)^k

must approach 0. This implies that the approximation A^k x0 ≈ λ1^k c1x1, c1 ≠ 0, improves as k increases. Because x1 is a dominant eigenvector, it follows that any scalar multiple of x1 is also a dominant eigenvector, which shows that A^k x0 approaches a multiple of the dominant eigenvector of A.

The proof of Theorem 10.3 provides some insight into the rate of convergence of the power method. That is, if the eigenvalues of A are ordered so that

|λ1| > |λ2| ≥ |λ3| ≥ . . . ≥ |λn|

then the power method will converge quickly if |λ2|/|λ1| is small, and slowly if |λ2|/|λ1| is close to 1. This principle is illustrated in Example 5.


The Rate of Convergence of the Power Method

a. The matrix

A = [4  5]
    [6  5]

has eigenvalues of λ1 = 10 and λ2 = -1. So the ratio |λ2|/|λ1| is 0.1. For this matrix, only four iterations are required to obtain successive approximations that agree when rounded to three significant digits.

     x0       x1       x2       x3       x4
    (1.000,  (0.818,  (0.835,  (0.833,  (0.833,
     1.000)   1.000)   1.000)   1.000)   1.000)

b. The matrix

A = [-4  10]
    [ 7   5]

has eigenvalues of λ1 = 10 and λ2 = -9. For this matrix, the ratio |λ2|/|λ1| is 0.9, and the power method does not produce successive approximations that agree to three significant digits until 68 iterations have been performed.

     x0       x1       x2      . . .    x66      x67      x68
    (1.000,  (0.500,  (0.941,  . . .   (0.715,  (0.714,  (0.714,
     1.000)   1.000)   1.000)           1.000)   1.000)   1.000)

In this section the power method was used to approximate the dominant eigenvalue of a matrix. This method can be modified to approximate other eigenvalues through use of a procedure called deflation. Moreover, the power method is only one of several techniques that can be used to approximate the eigenvalues of a matrix. Another popular method is called the QR algorithm. This is the method used in most programs and calculators for finding eigenvalues and eigenvectors. The algorithm uses the QR-factorization of the matrix, as presented in Chapter 5. Discussions of the deflation method and the QR algorithm can be found in most texts on numerical methods.

REMARK

The approximations in this example are rounded to three significant digits. The intermediate calculations, however, are performed with a higher degree of accuracy to minimize rounding error. You may get slightly different results, depending on your rounding method or the method used by your technology.

LINEAR ALGEBRA APPLIED

Internet search engines use web crawlers to continually traverse the Internet, maintaining a master index of content and hyperlinks on billions of web pages. By analyzing how much each web page is referenced by other sites, search results can be ordered based on relevance. Some algorithms do this using matrices and eigenvectors.

Suppose a search set contains n web sites. Define the n x n matrix A such that

aij = 1 if site i references site j, and
aij = 0 if site i does not reference site j, or i = j.

Then scale each row vector of A to sum to 1, and define the n x n matrix M as the collection of scaled row vectors. Using the power method on M produces a dominant eigenvector whose entries are used to determine the page ranks of the sites in order of importance.
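A toy version of this idea can be sketched in Python. The three-site link structure below is hypothetical, the damping factor used by real ranking algorithms is omitted, and the iteration applies the power method to the transpose of the scaled link matrix M (that is, it computes the dominant left eigenvector of M, the stationary ranking vector):

```python
def page_ranks(links, n, iterations=50):
    """Hypothetical mini ranking: links[i] lists the sites that site i references."""
    # Row-stochastic M: M[i][j] = 1/len(links[i]) if site i links to site j.
    M = [[(1 / len(links[i]) if j in links[i] else 0.0) for j in range(n)]
         for i in range(n)]
    x = [1.0 / n] * n
    for _ in range(iterations):
        x = [sum(M[i][j] * x[i] for i in range(n)) for j in range(n)]  # x <- M^T x
        s = sum(x)
        x = [v / s for v in x]   # keep the entries summing to 1
    return x

# Site 0 links to sites 1 and 2, site 1 links to site 2, site 2 links to site 0.
ranks = page_ranks({0: [1, 2], 1: [2], 2: [0]}, 3)
print([round(r, 2) for r in ranks])   # [0.4, 0.2, 0.4]
```

Sites 0 and 2 tie for the top rank: site 2 is referenced by everyone, and site 0 inherits importance by being site 2's only reference.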


10.3 Exercises

Finding a Dominant Eigenvector In Exercises 1-4, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a corresponding dominant eigenvector.

1. A = [1 -5; -3 -1]
2. A = [-2 0; 3 -6]
3. A = [1 3 0; 2 1 2; 1 1 0]
4. A = [-5 0 0; 3 5 0; 4 -2 3]

Using the Rayleigh Quotient In Exercises 5 and 6, use the Rayleigh quotient to compute the eigenvalue λ of A corresponding to the eigenvector x.

5. A = [4 -5; 2 -3],  x = (5, 2)
6. A = [-1 2; 0 8],  x = (2, 9)

Eigenvectors and Eigenvalues In Exercises 7-10, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1 1]^T and calculate five iterations. Then use x5 to approximate the dominant eigenvalue of A.

7. A = [2 1; 0 -7]
8. A = [-1 0; 1 6]
9. A = [1 -4; -2 8]
10. A = [6 -3; -2 1]

Eigenvectors and Eigenvalues In Exercises 11-14, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1 1 1]^T and calculate four iterations. Then use x4 to approximate the dominant eigenvalue of A.

11. A = [3 0 0; 1 -1 0; 0 2 8]
12. A = [1 2 0; 0 -7 1; 0 0 0]
13. A = [-1 -6 0; 2 7 0; 1 2 -1]
14. A = [0 6 0; 0 -4 0; 2 1 1]

Power Method with Scaling In Exercises 15 and 16, the matrix A does not have a dominant eigenvalue. Apply the power method with scaling, starting with x0 = [1 1 1]^T, and observe the results of the first four iterations.

15. A = [1 1 0; 3 -1 0; 0 0 -2]
16. A = [1 2 -2; -2 5 -2; -6 6 -3]

17. Rates of Convergence

(a) Compute the eigenvalues of

A = [2 1; 1 2]  and  B = [2 3; 1 4].

(b) Apply four iterations of the power method with scaling to each matrix in part (a), starting with x0 = [-1 2]^T.
(c) Compute the ratios λ2/λ1 for A and B. For which matrix do you expect faster convergence?

18. Rework Example 2 using the power method with scaling. Compare your answer with the one found in Example 2.

Writing In Exercises 19 and 20, (a) find the eigenvalues and corresponding eigenvectors of A, (b) use the initial approximation x0 to calculate two iterations of the power method with scaling, and (c) explain why the method does not seem to converge to a dominant eigenvector.

19. A = [3 -1; -2 4],  x0 = [1 1]^T
20. A = [-3 0 2; 0 -1 0; 0 1 -2],  x0 = [1 1 1]^T

Other Eigenvalues In Exercises 21 and 22, observe that Ax = λx implies that A⁻¹x = (1/λ)x. Apply five iterations of the power method (with scaling) on A⁻¹ to compute the eigenvalue of A with the smallest magnitude.

21. A = [2 -12; 1 -5]
22. A = [2 3 1; 0 -1 2; 0 0 3]

Another Scaling Technique In Exercises 23 and 24, apply four iterations of the power method (with scaling) to approximate the dominant eigenvalue of the matrix. After each iteration, scale the approximation by dividing by its length so that the resulting approximation will be a unit vector.

23. A = [5 6; 4 3]
24. A = [7 -4 2; 16 -9 6; 8 -4 5]

25. Use the proof of Theorem 10.3 to show that

A(A^k x0) ≈ λ1(A^k x0)

for large values of k. That is, show that the scale factors obtained by the power method approach the dominant eigenvalue.


10.4 Applications of Numerical Methods

Find a second-degree least squares regression polynomial or a third-degree least squares regression polynomial for a set of data.

Use finite element analysis to determine the probability of an event.

Model population growth using an age transition matrix, and use the power method to find a stable age distribution vector.

APPLICATIONS OF GAUSSIAN ELIMINATION WITH PIVOTING

In Section 2.5, least squares regression analysis was used to find linear mathematical models that best fit a set of n points in the plane. This procedure can be extended to cover polynomial models of any degree, as follows.

Regression Analysis for Polynomials

The least squares regression polynomial of degree m for the points {(x1, y1), (x2, y2), . . . , (xn, yn)} is given by

y = am x^m + am-1 x^{m-1} + . . . + a2 x² + a1 x + a0

where the coefficients are determined by the following system of m + 1 linear equations.

n a0 + (Σxi)a1 + (Σxi²)a2 + . . . + (Σxi^m)am = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 + . . . + (Σxi^{m+1})am = Σxi yi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 + . . . + (Σxi^{m+2})am = Σxi² yi
  .
  .
  .
(Σxi^m)a0 + (Σxi^{m+1})a1 + (Σxi^{m+2})a2 + . . . + (Σxi^{2m})am = Σxi^m yi

Note that if m = 1, this system of equations reduces to

n a0 + (Σxi)a1 = Σyi
(Σxi)a0 + (Σxi²)a1 = Σxi yi

which has a solution of

a1 = (n Σxi yi - (Σxi)(Σyi)) / (n Σxi² - (Σxi)²)   and   a0 = (Σyi)/n - a1 (Σxi)/n.

Exercise 18 asks you to show that this formula is equivalent to the matrix formula for linear regression that was presented in Section 2.5.

Example 1 illustrates the use of regression analysis to find a second-degree polynomial model.


Least Squares Regression Analysis

The world populations in billions for selected years from 1980 to 2010 are shown below. (Source: U.S. Census Bureau)

Year        1980  1985  1990  1995  2000  2005  2010
Population  4.45  4.84  5.27  5.68  6.07  6.45  6.87

Find the second-degree least squares regression polynomial for these data and use the resulting model to predict the world populations in 2015 and 2020.

SOLUTION

Begin by letting x = 0 represent 1980, x = 1 represent 1985, and so on. So, the collection of points is represented by

{(0, 4.45), (1, 4.84), (2, 5.27), (3, 5.68), (4, 6.07), (5, 6.45), (6, 6.87)}

which yields

n = 7,  Σxi = 21,  Σxi² = 91,  Σxi³ = 441,  Σxi⁴ = 2275,
Σyi = 39.63,  Σxi yi = 130.17,  Σxi² yi = 582.73.

So, the system of linear equations giving the coefficients of the quadratic model y = a2x² + a1x + a0 is

 7a0 +  21a1 +   91a2 =  39.63
21a0 +  91a1 +  441a2 = 130.17
91a0 + 441a1 + 2275a2 = 582.73.

Gaussian elimination with partial pivoting on the matrix

[ 7    21    91   39.63]
[21    91   441  130.17]
[91   441  2275  582.73]

produces

[1  4.8462  25    6.4036]
[0  1        6.5  0.4020]
[0  0        1   -0.0017].

So, back-substitution produces the solution

a2 = -0.0017,  a1 = 0.4129,  a0 = 4.4445

and the regression quadratic is

y = -0.0017x² + 0.4129x + 4.4445.

Figure 10.1 compares this model with the given points. To predict the world population in 2015, let x = 7 and obtain

y = -0.0017(7²) + 0.4129(7) + 4.4445 ≈ 7.25 billion.

Similarly, the prediction for 2020 (x = 8) is

y = -0.0017(8²) + 0.4129(8) + 4.4445 ≈ 7.64 billion.

[Figure 10.1: the data points, the regression quadratic, and the predicted points for 2015 and 2020.]
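Example 1's normal equations can be solved programmatically. The sketch below (Python, not from the text; function name illustrative) implements Gaussian elimination with partial pivoting, the method named in the example, and applies it to the same augmented matrix:

```python
def solve_gauss(aug):
    """Gaussian elimination with partial pivoting on an augmented matrix."""
    n = len(aug)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(aug[r][k]))   # partial pivoting
        aug[k], aug[p] = aug[p], aug[k]
        for r in range(k + 1, n):
            f = aug[r][k] / aug[k][k]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                           # back-substitution
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

# Normal equations from Example 1 (world population data)
a0, a1, a2 = solve_gauss([[7, 21, 91, 39.63],
                          [21, 91, 441, 130.17],
                          [91, 441, 2275, 582.73]])
print(round(a0, 4), round(a1, 4), round(a2, 4))   # 4.4445 0.4129 -0.0017
print(round(a0 + 7 * a1 + 49 * a2, 2))            # 7.25 (prediction for 2015)
```

The computed coefficients match the example's model, and evaluating the quadratic at x = 7 reproduces the 7.25 billion prediction.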


Least Squares Regression Analysis

Find the third-degree least squares regression polynomial

y = a3x³ + a2x² + a1x + a0

for the points

{(0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), (6, 4)}.

SOLUTION

For this set of points, the linear system

n a0 + (Σxi)a1 + (Σxi²)a2 + (Σxi³)a3 = Σyi
(Σxi)a0 + (Σxi²)a1 + (Σxi³)a2 + (Σxi⁴)a3 = Σxi yi
(Σxi²)a0 + (Σxi³)a1 + (Σxi⁴)a2 + (Σxi⁵)a3 = Σxi² yi
(Σxi³)a0 + (Σxi⁴)a1 + (Σxi⁵)a2 + (Σxi⁶)a3 = Σxi³ yi

becomes

  7a0 +   21a1 +     91a2 +    441a3 =   14
 21a0 +   91a1 +    441a2 +   2275a3 =   52
 91a0 +  441a1 +   2275a2 + 12,201a3 =  242
441a0 + 2275a1 + 12,201a2 + 67,171a3 = 1258.

Using Gaussian elimination with pivoting on the matrix

[  7     21      91     441    14]
[ 21     91     441    2275    52]
[ 91    441    2275  12,201   242]
[441   2275  12,201  67,171  1258]

produces

[1  5.1587  27.6667  152.3152  2.8526]
[0  1        8.5313   58.3482  0.6183]
[0  0        1         9.7714  0.1286]
[0  0        0         1       0.1667]

which implies

a3 = 0.1667,  a2 = -1.5003,  a1 = 3.6912,  a0 = -0.0718.

So, the cubic model is

y = 0.1667x³ - 1.5003x² + 3.6912x - 0.0718.

Figure 10.2 compares this model with the given points.

[Figure 10.2: the points (0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), (6, 4) and the cubic model.]


APPLICATIONS OF THE GAUSS-SEIDEL METHOD

An Application to Probability

Figure 10.3 is a diagram of a maze used in a laboratory experiment. The experiment begins by placing a mouse at one of the ten interior intersections of the maze. Once the mouse emerges in the outer corridor, it cannot return to the maze. When the mouse is at an interior intersection, its choice of paths is assumed to be random. What is the probability that the mouse will emerge in the "food corridor" when it begins at the ith intersection?

[Figure 10.3: the maze, with interior intersections numbered 1 through 10 and the food corridor along the bottom.]

SOLUTION

Let the probability of winning (getting food) when starting at the ith intersection be represented by pi. Then form a linear equation involving pi and the probabilities associated with the intersections bordering the ith intersection. For instance, at the first intersection the mouse has a probability of 1/4 of choosing the upper right path and losing, a probability of 1/4 of choosing the upper left path and losing, a probability of 1/4 of choosing the lower right path (at which point it has a probability of p3 of winning), and a probability of 1/4 of choosing the lower left path (at which point it has a probability of p2 of winning). So

p1 = (1/4)(0) + (1/4)(0) + (1/4)p3 + (1/4)p2.

Using similar reasoning, the other nine probabilities can be represented by the following equations.

p2 = (1/5)(0) + (1/5)p1 + (1/5)p3 + (1/5)p4 + (1/5)p5
p3 = (1/5)(0) + (1/5)p1 + (1/5)p2 + (1/5)p5 + (1/5)p6
p4 = (1/5)(0) + (1/5)p2 + (1/5)p5 + (1/5)p7 + (1/5)p8
p5 = (1/6)p2 + (1/6)p3 + (1/6)p4 + (1/6)p6 + (1/6)p8 + (1/6)p9
p6 = (1/5)(0) + (1/5)p3 + (1/5)p5 + (1/5)p9 + (1/5)p10
p7 = (1/4)(0) + (1/4)(1) + (1/4)p4 + (1/4)p8
p8 = (1/5)(1) + (1/5)p4 + (1/5)p5 + (1/5)p7 + (1/5)p9
p9 = (1/5)(1) + (1/5)p5 + (1/5)p6 + (1/5)p8 + (1/5)p10
p10 = (1/4)(0) + (1/4)(1) + (1/4)p6 + (1/4)p9

Rewriting these equations in standard form produces the following system of ten linear equations in ten variables.

4p1 - p2 - p3 = 0
-p1 + 5p2 - p3 - p4 - p5 = 0
-p1 - p2 + 5p3 - p5 - p6 = 0
-p2 + 5p4 - p5 - p7 - p8 = 0
-p2 - p3 - p4 + 6p5 - p6 - p8 - p9 = 0
-p3 - p5 + 5p6 - p9 - p10 = 0
-p4 + 4p7 - p8 = 1
-p4 - p5 - p7 + 5p8 - p9 = 1
-p5 - p6 - p8 + 5p9 - p10 = 1
-p6 - p9 + 4p10 = 1


The augmented matrix for this system is

[ 4  -1  -1   0   0   0   0   0   0   0    0]
[-1   5  -1  -1  -1   0   0   0   0   0    0]
[-1  -1   5   0  -1  -1   0   0   0   0    0]
[ 0  -1   0   5  -1   0  -1  -1   0   0    0]
[ 0  -1  -1  -1   6  -1   0  -1  -1   0    0]
[ 0   0  -1   0  -1   5   0   0  -1  -1    0]
[ 0   0   0  -1   0   0   4  -1   0   0    1]
[ 0   0   0  -1  -1   0  -1   5  -1   0    1]
[ 0   0   0   0  -1  -1   0  -1   5  -1    1]
[ 0   0   0   0   0  -1   0   0  -1   4    1].

Using the Gauss-Seidel method with an initial approximation of p1 = p2 = . . . = p10 = 0 produces (after 18 iterations) an approximation of

p1 = 0.090,  p2 = 0.180,  p3 = 0.180,  p4 = 0.298,  p5 = 0.333,
p6 = 0.298,  p7 = 0.455,  p8 = 0.522,  p9 = 0.522,  p10 = 0.455.

The structure of the probability problem described in Example 3 is related to a technique called finite element analysis, which is used in many engineering problems.

Note that the matrix developed in Example 3 has mostly zero entries. Such matrices are called sparse. For solving systems of equations with sparse coefficient matrices, the Jacobi and Gauss-Seidel methods are much more efficient than Gaussian elimination.
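The maze system from Example 3 can be solved with a short Gauss-Seidel routine. A Python sketch (not from the text; the iteration count is chosen generously rather than using the text's 18 sweeps, to be safe about rounding):

```python
# Coefficient rows of the maze system: 4p1 - p2 - p3 = 0, and so on.
A = [
    [ 4, -1, -1,  0,  0,  0,  0,  0,  0,  0],
    [-1,  5, -1, -1, -1,  0,  0,  0,  0,  0],
    [-1, -1,  5,  0, -1, -1,  0,  0,  0,  0],
    [ 0, -1,  0,  5, -1,  0, -1, -1,  0,  0],
    [ 0, -1, -1, -1,  6, -1,  0, -1, -1,  0],
    [ 0,  0, -1,  0, -1,  5,  0,  0, -1, -1],
    [ 0,  0,  0, -1,  0,  0,  4, -1,  0,  0],
    [ 0,  0,  0, -1, -1,  0, -1,  5, -1,  0],
    [ 0,  0,  0,  0, -1, -1,  0, -1,  5, -1],
    [ 0,  0,  0,  0,  0, -1,  0,  0, -1,  4],
]
b = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]

def gauss_seidel(A, b, iterations):
    """Gauss-Seidel sweeps starting from the zero vector."""
    x = [0.0] * len(b)
    for _ in range(iterations):
        for i in range(len(b)):
            s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

p = gauss_seidel(A, b, 50)
print([round(v, 3) for v in p])   # close to the values reported in Example 3
```

The result agrees with the probabilities listed above to three decimals; note the symmetry of the maze shows up in the solution (p2 = p3, p7 = p10, and so on).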

LINEAR ALGEBRA APPLIED

Finite element analysis is used in engineering to visualize how a car will deform in a crash. It allows engineers to identify potential problem areas in a car design prior to manufacturing. The car is modeled by a complex mesh of nodes in order to analyze the distribution of stresses and displacements. Consider a simple bar with two nodes. The nodes defined at the ends of the bar can have displacements in the x- and y-directions, denoted by u1, u2, u3, and u4, and the stresses imposed on the bar during displacement result in internal forces F1, F2, F3, and F4. Each Fi is related to uj by a stiffness coefficient kij. These coefficients form a 4 x 4 stiffness matrix for the bar. The matrix is used to determine the displacements of the nodes given a force P.

For complex three-dimensional structures, the mesh includes the material and structural properties that define how the structure will react to certain conditions.

9781133110873_1004.qxp 3/10/12 7:02 AM Page 513

Page 28: 10 Numerical Methods Power Method for Approximating Eigenvalues 10.4 Applications of Numerical Methods 10 487 Numerical Methods Car Crash …

APPLICATIONS OF THE POWER METHOD

Section 7.4 introduced the idea of an age transition matrix as a model for population growth. Recall that this model was developed by grouping the population into age classes of equal duration. So, for a maximum life span of L years, the n age classes are represented by the intervals listed below.

First age class: [0, L/n),  second age class: [L/n, 2L/n),  . . . ,  nth age class: [(n - 1)L/n, L]

The number of population members in each age class is then represented by the age distribution vector

x = [x1  x2  . . .  xn]^T

where x1 is the number in the first age class, x2 the number in the second age class, and so on. Over a period of L/n years, the probability that a member of the ith age class will survive to become a member of the (i + 1)th age class is given by pi, where 0 ≤ pi ≤ 1, i = 1, 2, . . . , n - 1. The average number of offspring produced by a member of the ith age class is given by bi, where 0 ≤ bi, i = 1, 2, . . . , n. These numbers can be written in matrix form as follows.

A = [ b1   b2   b3   . . .  b(n-1)   bn
      p1   0    0    . . .  0        0
      0    p2   0    . . .  0        0
      .    .    .           .        .
      0    0    0    . . .  p(n-1)   0 ]

Multiplying this age transition matrix by the age distribution vector for a specific time period produces the age distribution vector for the next time period. That is, Axi = xi+1. In Section 7.4 you saw that the growth pattern for a population is stable when the same percentage of the total population is in each age class each year. That is, Axi = xi+1 = λxi. For populations with many age classes, the solution to this eigenvalue problem can be found with the power method, as illustrated in Example 4.
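As a concrete illustration of the multiplication Axi = xi+1 (with hypothetical numbers: two age classes, birth rates b1 = 1 and b2 = 2, and survival probability p1 = 0.5):

```python
# One step of the age transition model: x_{i+1} = A x_i.
# Hypothetical two-class population, not an example from the text.
A = [[1.0, 2.0],    # row 1: birth rates b1, b2
     [0.5, 0.0]]    # row 2: survival probability p1
x0 = [10.0, 10.0]   # current age distribution

x1 = [sum(A[i][j] * x0[j] for j in range(2)) for i in range(2)]
# x1[0] counts newborns; x1[1] counts survivors of the first age class
```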


A Population Growth Model

Assume that a population of human females has the characteristics listed in the table below. The table shows the age class (in years), the average number of female children born to a member of that class during a 10-year period, and the probability of surviving to the next age class. Find a stable age distribution vector for this population.

Age Class (in years)   Female Children   Probability
[0, 10)                0.000             0.985
[10, 20)               0.174             0.996
[20, 30)               0.782             0.994
[30, 40)               0.263             0.990
[40, 50)               0.022             0.975
[50, 60)               0.000             0.940
[60, 70)               0.000             0.866
[70, 80)               0.000             0.680
[80, 90)               0.000             0.361
[90, 100]              0.000             0.000

SOLUTION

The age transition matrix for this population is

A = [ 0.000  0.174  0.782  0.263  0.022  0.000  0.000  0.000  0.000  0.000
      0.985  0      0      0      0      0      0      0      0      0
      0      0.996  0      0      0      0      0      0      0      0
      0      0      0.994  0      0      0      0      0      0      0
      0      0      0      0.990  0      0      0      0      0      0
      0      0      0      0      0.975  0      0      0      0      0
      0      0      0      0      0      0.940  0      0      0      0
      0      0      0      0      0      0      0.866  0      0      0
      0      0      0      0      0      0      0      0.680  0      0
      0      0      0      0      0      0      0      0      0.361  0 ].

To apply the power method with scaling to find an eigenvector for this matrix, use an initial approximation of x0 = [1  1  1  1  1  1  1  1  1  1]^T. An approximation for an eigenvector x of A, with the percentage of each age class in the total population, is shown below.

Age Class     Eigenvector   Percentage in Age Class
[0, 10)       1.000         15.27
[10, 20)      0.925         14.13
[20, 30)      0.864         13.20
[30, 40)      0.806         12.31
[40, 50)      0.749         11.44
[50, 60)      0.686         10.48
[60, 70)      0.605          9.24
[70, 80)      0.492          7.51
[80, 90)      0.314          4.80
[90, 100]     0.106          1.62

The eigenvalue corresponding to the eigenvector x in Example 4 is λ ≈ 1.065. That is,

Ax = A[1.000  0.925  0.864  0.806  0.749  0.686  0.605  0.492  0.314  0.106]^T
   ≈ [1.065  0.985  0.921  0.859  0.798  0.730  0.645  0.524  0.335  0.113]^T
   ≈ 1.065x.

This means that the population in Example 4 increases by 6.5% every 10 years.

REMARK
Try duplicating the results of Example 4. Notice that the convergence of the power method for this problem is very slow. The reason is that the dominant eigenvalue λ ≈ 1.065 of A is only slightly larger in absolute value than the next largest eigenvalue.
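Example 4 can be duplicated in a few lines by exploiting the sparsity of A: multiplying by the age transition matrix amounts to one fertility sum and nine survival products. The sketch below runs many scaled iterations (the Remark notes that convergence is slow) and recovers λ ≈ 1.065:

```python
# Power method with scaling for the age transition matrix of Example 4.
fertility = [0.000, 0.174, 0.782, 0.263, 0.022, 0.0, 0.0, 0.0, 0.0, 0.0]
survival = [0.985, 0.996, 0.994, 0.990, 0.975, 0.940, 0.866, 0.680, 0.361]

def transition(x):
    """Apply the 10 x 10 age transition matrix to an age distribution x."""
    newborns = sum(b * xi for b, xi in zip(fertility, x))
    survivors = [p * xi for p, xi in zip(survival, x)]   # classes 1..9 survive
    return [newborns] + survivors

x = [1.0] * 10                      # initial approximation x0
for _ in range(1000):               # convergence is slow; see the Remark
    ax = transition(x)
    scale = max(ax)                 # scale so the largest entry is 1
    x = [v / scale for v in ax]

dominant_eigenvalue = max(transition(x))
```

After scaling, x agrees with the eigenvector tabulated above, and the dominant eigenvalue comes out approximately 1.065.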


10.4 Exercises

Least Squares Regression Analysis  In Exercises 1–4, find the second-degree least squares regression polynomial for the given data. Then graphically compare the model with the given points.

1. (-2, 1), (-1, 0), (0, 0), (1, 1), (3, 2)
2. (0, 4), (1, 2), (2, -1), (3, 0), (4, 1), (5, 4)
3. (-2, 1), (-1, 2), (0, 6), (1, 3), (2, 0), (3, -1)
4. (1, 1), (2, 1), (3, 0), (4, -1), (5, -4)

Least Squares Regression Analysis  In Exercises 5–10, find the third-degree least squares regression polynomial for the given data. Then graphically compare the model with the given points.

5. (0, 0), (1, 2), (2, 4), (3, 1), (4, 0), (5, 1)
6. (1, 1), (2, 4), (3, 4), (5, 1), (6, 2)
7. (-3, 4), (-1, 1), (0, 0), (1, 2), (2, 5)
8. (-7, 2), (-3, 0), (1, -1), (2, 3), (4, 6)
9. (-1, 0), (0, 3), (1, 2), (2, 0), (3, -2), (4, -3)
10. (0, 0), (2, 10), (4, 12), (6, 0), (8, -8)

11. Decompression Sickness  The numbers of minutes a scuba diver can stay at particular depths without acquiring decompression sickness are shown in the table. (Source: United States Navy's Standard Air Decompression Tables)

Depth (in feet)      35   40   50   60   70   80   90   100   110
Time (in minutes)   310  200  100   60   50   40   30    25    20

(a) Find the least squares regression line for these data.
(b) Find the second-degree least squares regression polynomial for these data.
(c) Sketch the graphs of the models found in parts (a) and (b).
(d) Use the models found in parts (a) and (b) to approximate the maximum number of minutes a diver should stay at a depth of 120 feet. (The value from the Navy's tables is 15 minutes.)

12. Health Expenditures  The total national health expenditures (in trillions of dollars) in the United States from 2002 to 2009 are shown in the table. (Source: U.S. Census Bureau)

Year                   2002   2003   2004   2005   2006   2007   2008   2009
Health Expenditures   1.637  1.772  1.895  2.021  2.152  2.284  2.391  2.486

(a) Find the second-degree least squares regression polynomial for these data. Let x = 2 correspond to 2002.
(b) Use the result of part (a) to predict the expenditures for the years 2010 through 2012.

13. Population  The total numbers of people (in millions) in the United States 65 years of age or older in selected years are shown in the table. (Source: U.S. Census Bureau)

Year                1990   1995   2000   2005   2010
Number of People    29.6   31.7   32.6   35.2   38.6

(a) Find the second-degree least squares regression polynomial for the data. Let x = 0 correspond to 1990, x = 1 correspond to 1995, and so on.
(b) Use the result of part (a) to predict the total numbers of people in the United States 65 years of age or older in 2015, 2020, and 2025.

14. Mail-Order Prescriptions  The total amounts of money (in billions of dollars) spent on mail-order prescription sales in the United States from 2005 to 2010 are shown in the table. (Source: U.S. Census Bureau)

Year            2005   2006   2007   2008   2009   2010
Amount Spent    45.5   50.9   52.5   55.4   61.3   62.6

(a) Find the second-degree least squares regression polynomial for the data. Let x = 5 correspond to 2005.
(b) Use the result of part (a) to predict the total mail-order sales in 2015 and 2020.


15. Trigonometry  Find the second-degree least squares regression polynomial for the points

(-π/2, 0), (-π/3, 1/2), (0, 1), (π/3, 1/2), (π/2, 0).

Then use the result to approximate cos(π/4). Compare the approximation with the exact value.

16. Trigonometry  Find the third-degree least squares regression polynomial for the points

(-π/4, -1), (-π/3, -√3), (0, 0), (π/3, √3), (π/4, 1).

Then use the result to approximate tan(π/6). Compare the approximation with the exact value.

17. Least Squares Regression Line  Find the least squares regression line for the population data from Example 1. Then use the model to predict the world populations in 2015 and 2020, and compare the results with the predictions obtained in Example 1.

18. Least Squares Regression Line  Show that the formula for the least squares regression line presented in Section 2.5 is equivalent to the formula presented in this section. That is, if

Y = [y1  y2  . . .  yn]^T,   X = [ 1  x1
                                   1  x2
                                   .  .
                                   1  xn ],   A = [a0  a1]^T

then the matrix equation A = (X^T X)^(-1) X^T Y is equivalent to

a1 = (n Σxiyi - (Σxi)(Σyi)) / (n Σxi² - (Σxi)²)   and   a0 = (Σyi)/n - a1 (Σxi)/n.

19. Probability  Suppose that the experiment in Example 3 is performed with the maze shown in the figure. Find the probability that the mouse will emerge in the food corridor when it begins at the ith intersection. (The figure shows a maze with nine numbered intersections and a food corridor.)

20. Probability  Suppose that the experiment in Example 3 is performed with the maze shown in the figure. Find the probability that the mouse will emerge in the food corridor when it begins at the ith intersection. (The figure shows a maze with six numbered intersections and a food corridor.)

21. Temperature  A square metal plate has a constant temperature on each of its four boundaries, as shown in the figure. Use a 4 × 4 grid to approximate the temperature distribution in the interior of the plate. Assume that the temperature at each interior point is the average of the temperatures at the four closest neighboring points. (In the figure, the interior points are numbered 1 through 9; the boundary temperatures shown include 100°, 100°, and 0°.)

22. Temperature  A rectangular metal plate has a constant temperature on each of its four boundaries, as shown in the figure. Use a 4 × 5 grid to approximate the temperature distribution in the interior of the plate. Assume that the temperature at each interior point is the average of the temperatures at the four closest neighboring points. (In the figure, the interior points are numbered 1 through 12; the boundary temperatures shown include 120°, 80°, and 40°.)
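The equivalence claimed in Exercise 18 is easy to check numerically: for any sample data (the points below are arbitrary), the 2 × 2 matrix formula A = (XᵀX)⁻¹XᵀY and the summation formulas produce the same intercept and slope.

```python
# Least squares line two ways: normal equations vs. summation formulas.
pts = [(1.0, 2.0), (2.0, 2.5), (3.0, 4.0), (4.0, 4.5)]   # arbitrary sample data
n = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(y for _, y in pts)
sxy = sum(x * y for x, y in pts)
sxx = sum(x * x for x, _ in pts)

# Summation formulas from Exercise 18
a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a0 = sy / n - a1 * sx / n

# Matrix formula A = (X^T X)^{-1} (X^T Y); for a line, X^T X is 2 x 2
m00, m01, m11 = float(n), sx, sxx                 # entries of X^T X
v0, v1 = sy, sxy                                  # entries of X^T Y
det = m00 * m11 - m01 * m01
b0 = (m11 * v0 - m01 * v1) / det                  # intercept
b1 = (m00 * v1 - m01 * v0) / det                  # slope
```

For these points both routes give the line y = 1.0 + 0.9x.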


Stable Age Distribution  In Exercises 23–28, the matrix represents the age transition matrix for a population. Use the power method with scaling to find a stable age distribution vector.

23. A = [ 1    4
          1/2  0 ]

24. A = [ 1    2
          1/4  0 ]

25. A = [ 0    1    2
          1/2  0    0
          0    1/4  0 ]

26. A = [ 1    2    2
          1/3  0    0
          0    1/3  0 ]

27. A = [ 1    4    2
          3/4  0    0
          0    1/4  0 ]

28. A = [ 0    2    2
          1/2  0    0
          0    1    0 ]

29. Population Growth Model  The age transition matrix for a laboratory population of rabbits is

A = [ 0    7    10
      0.5  0    0
      0    0.5  0 ].

Find a stable age distribution vector for this population.

30. Population Growth Model  A population has the following characteristics.
(a) A total of 75% of the population survives its first year. Of that 75%, 25% survives its second year. The maximum life span is 3 years.
(b) The average numbers of offspring for each member of the population are 2 the first year, 4 the second year, and 2 the third year.
Find a stable age distribution vector for this population. (See Exercise 13, Section 7.4.)

31. Population Growth Model  A population has the following characteristics.
(a) A total of 80% of the population survives the first year. Of that 80%, 25% survives the second year. The maximum life span is 3 years.
(b) The average numbers of offspring for each member of the population are 3 the first year, 6 the second year, and 3 the third year.
Find a stable age distribution vector for this population. (See Exercise 14, Section 7.4.)

32. Fibonacci Sequence  Apply the power method to the matrix

A = [ 1  1
      1  0 ]

discussed in Chapter 7 (Fibonacci sequence). Use the power method to approximate the dominant eigenvalue of A.

33. Writing  In Example 2 in Section 2.5, the stochastic matrix

P = [ 0.70  0.15  0.15
      0.20  0.80  0.15
      0.10  0.05  0.70 ]

represents the transition probabilities for a consumer preference model. Use the power method to approximate a dominant eigenvector for this matrix. How does the approximation relate to the steady state matrix described in the discussion following Example 3 in Section 2.5?

34. Smokers and Nonsmokers  In Exercise 7 in Section 2.5, a population of 10,000 is divided into nonsmokers, smokers of one pack or less per day, and smokers of more than one pack per day. Use the power method to approximate a dominant eigenvector for this matrix.

35. Television Watching  In Exercise 8 in Section 2.5, a college dormitory of 200 students is divided into two groups according to how much television they watch on a given day. Students in the first group watch an hour or more, and students in the second group watch less than an hour. Use the power method to approximate a dominant eigenvector for this matrix.

36. Explain each of the following.
(a) How to use Gaussian elimination with pivoting to find a least squares regression polynomial
(b) How to use finite element analysis to determine the probability of an event
(c) How to model population growth with an age distribution matrix and use the power method to find a stable age distribution vector


10 Review Exercises

Floating Point Form  In Exercises 1–6, express the real number in floating point form.

1. 528.6    2. 475.2    3. -22.5    4. -4.85    5. 47/8    6. 31/2

Finding the Stored Value  In Exercises 7–12, determine the stored value of the real number in a computer that (a) rounds to three significant digits, and (b) rounds to four significant digits.

7. 25.2    8. -41.2    9. -250.231    10. 628.742    11. 3/16    12. 5/12

Rounding Error  In Exercises 13 and 14, evaluate the determinant of the matrix, rounding each intermediate step to three significant digits. Compare the rounded solution with the exact solution.

13. [ 12.5   2.5
      20.24  6.25 ]

14. [ 8.5    3.2
      10.25  8.1 ]

Gaussian Elimination  In Exercises 15 and 16, use Gaussian elimination to solve the system of linear equations. After each intermediate step, round the result to three significant digits. Compare the rounded solution with the exact solution.

15. 2.53x + 8.5y = 29.65
    2.33x + 16.1y = 43.85

16. 12.5x + 18.2y = 56.8
    3.2x + 15.1y = 4.1

Partial Pivoting  In Exercises 17 and 18, use Gaussian elimination without partial pivoting to solve the system of linear equations, rounding each intermediate step to three significant digits. Then use Gaussian elimination with partial pivoting to solve the same system, again rounding each intermediate step to three significant digits. Finally, compare both solutions with the exact solution provided.

17. 2.15x + 7.25y = 13.7
    3.12x + 6.3y = 15.66
    Exact solution: x = 3, y = 1

18. 4.25x + 6.3y = 16.85
    6.32x + 2.14y = 10.6
    Exact solution: x = 1, y = 2

Ill-Conditioned Systems  In Exercises 19 and 20, use Gaussian elimination to solve the ill-conditioned system of linear equations, rounding each intermediate calculation to three significant digits. Compare the solution with the exact solution provided.

19. x + y = -1
    x + (999/1000)y = 4001/1000
    Exact solution: x = 5000, y = -5001

20. x - y = -1
    (99/100)x - y = -20,101/100
    Exact solution: x = 20,001, y = 20,002

The Jacobi Method  In Exercises 21 and 22, apply the Jacobi method to the system of linear equations, using the initial approximation (x1, x2, x3, . . . , xn) = (0, 0, 0, . . . , 0). Continue performing iterations until two successive approximations are identical when rounded to three significant digits.

21. 2x1 + x2 = -1
    x1 + 4x2 = -3

22. 2x1 + x2 = 1
    x1 + 2x2 = 7

The Gauss-Seidel Method  In Exercises 23 and 24, apply the Gauss-Seidel method to the system from the indicated exercise.

23. Exercise 21    24. Exercise 22

Strictly Diagonally Dominant Matrices  In Exercises 25–28, determine whether the matrix is strictly diagonally dominant.

25. [ 4   2
      0  -3 ]

26. [ 1   -2
      -1  -3 ]

27. [ 4  1   2
      1  0  -2
      0  1   2 ]

28. [ -4   2  -1
       0  -2  -1
       1   1  -1 ]

Interchanging Rows  In Exercises 29–32, interchange the rows of the system of linear equations to obtain a system with a strictly diagonally dominant matrix. Then apply the Gauss-Seidel method to approximate the solution to four significant digits.

29. x1 + 2x2 = -5
    5x1 + x2 = 8

30. x1 + 4x2 = -4
    2x1 + x2 = 6

31. 2x1 + 4x2 + x3 = -2
    4x1 + x2 + x3 = 1
    x1 + x2 + 4x3 = 2

32. x1 + 3x2 + x3 = 2
    x1 + x2 + 3x3 = -1
    3x1 + x2 + x3 = 1


Finding Eigenvalues  In Exercises 33–36, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a corresponding dominant eigenvector.

33. A = [ 1  1
          1  1 ]

34. A = [ 2  1
          0  4 ]

35. A = [ -2   2  -3
           2   1  -6
          -1  -2   0 ]

36. A = [ 1  2  -3
          0  5   1
          0  0   4 ]

Using the Rayleigh Quotient  In Exercises 37–42, use the Rayleigh quotient to compute the eigenvalue λ of the matrix A corresponding to the eigenvector x.

37. A = [ 2  -12
          1   -5 ],   x = [3  1]^T

38. A = [ 6  -3
         -2   1 ],   x = [3  -1]^T

39. A = [ 2  0  1
          0  3  4
          0  0  1 ],   x = [0  1  0]^T

40. A = [ 1  2  -2
         -2  5  -2
         -6  6  -3 ],   x = [1  1  3]^T

41. A = [ 0  -1  1
          2   4  2
          1   1  0 ],   x = [-1  5  1]^T

42. A = [ 3   2  -3
         -3  -4   9
         -1  -2   5 ],   x = [3  0  1]^T

Approximating Eigenvectors and Eigenvalues  In Exercises 43–46, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1  1]^T and calculate four iterations. Then use x4 to approximate the dominant eigenvalue of A.

43. A = [ 7  2
          2  4 ]

44. A = [ -3  10
           5   2 ]

45. A = [ 2   1
          0  -4 ]

46. A = [ 6   0
          3  -3 ]

Least Squares Regression Analysis  In Exercises 47 and 48, find the second-degree least squares regression polynomial for the data. Then use a software program or a graphing utility with regression features to find a second-degree regression polynomial. Compare the results.

47. (-2, 0), (-1, 2), (0, 3), (1, 2), and (3, 0)
48. (-2, 2), (-1, 1), (0, -1), (1, -1), and (3, 0)

49. Hospital Care  The table shows the amounts spent for hospital care (in billions of dollars) in the United States from 2005 through 2009. (Source: U.S. Centers for Medicare and Medicaid Services)

Year            2005    2006    2007    2008    2009
Amount Spent    606.5   648.3   686.8   722.1   759.1

Find the second-degree least squares regression polynomial for the data. Let x = 5 correspond to 2005. Then use a software program or a graphing utility with regression features to find a second-degree regression polynomial. Compare the results.

50. Revenue  The table shows the total yearly revenues (in billions of dollars) for golf courses and country clubs in the United States from 2005 through 2009. (Source: U.S. Census Bureau)

Year       2005   2006   2007   2008   2009
Revenue    19.4   20.5   21.2   21.0   20.36

Find the second-degree least squares regression polynomial for the data. Let x = 5 correspond to 2005. Then use a software program or a graphing utility with regression features to find a second-degree regression polynomial. Compare the results.

51. Baseball Salaries  The table shows the average salaries (in millions of dollars) of Major League Baseball players on opening day of baseball season from 2006 through 2011. (Source: Major League Baseball)

Year              2006    2007    2008    2009    2010    2011
Average Salary    2.867   2.945   3.155   3.240   3.298   3.305

Find the third-degree least squares regression polynomial for the data. Let x = 6 correspond to 2006. Then use a software program or a graphing utility with regression features to find a third-degree regression polynomial. Compare the results.

52. Dormitory Costs  The table shows the average costs (in dollars) of a college dormitory room from 2006 through 2010. (Source: Digest of Education Statistics)

Year    2006   2007   2008   2009   2010
Cost    3804   4019   4214   4446   4657

Find the third-degree least squares regression polynomial for the data. Let x = 6 correspond to 2006. Then use a software program or a graphing utility with regression features to find a third-degree regression polynomial. Compare the results.
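The Rayleigh quotient used in Exercises 37–42 computes the eigenvalue as λ = (Ax · x)/(x · x). A quick sketch (the matrix A and eigenvector x below are an illustrative pair, not taken from the exercises):

```python
# Rayleigh quotient: if x is an eigenvector of A, (Ax . x)/(x . x) is its eigenvalue.
A = [[2.0, 1.0],
     [1.0, 2.0]]
x = [1.0, 1.0]          # eigenvector of A corresponding to eigenvalue 3

ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
lam = sum(p * q for p, q in zip(ax, x)) / sum(q * q for q in x)
```

Here Ax = [3, 3], so the quotient evaluates to λ = 6/2 = 3.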


10 Project

The populations (in millions) of the United States by decade from 1900 to 2010 are shown in the table. (Source: U.S. Census Bureau)

Year          1900   1910   1920    1930    1940    1950
Population    76.2   92.2   106.0   123.2   132.2   151.3

Year          1960    1970    1980    1990    2000    2010
Population    179.3   203.3   226.5   248.7   281.4   308.7

Use a software program or a graphing utility to create a scatter plot of the data. Let x = 0 correspond to 1900, x = 1 correspond to 1910, and so on.

Using the scatter plot, describe any patterns in the data. Do the data appear to be linear, quadratic, or cubic in nature? Explain.

Use the techniques presented in this chapter to find
(a) a linear least squares regression equation.
(b) a second-degree least squares regression equation.
(c) a cubic least squares regression equation.

Graph each equation with the data. Briefly describe which of the regression equations best fits the data.

Use each model to predict the populations of the United States for the years 2020, 2030, 2040, and 2050. Which, if any, of the regression equations appears to be the best model for predicting future populations? Explain your reasoning.

The 2012 Statistical Abstract projected the populations of the United States for the same years, as shown in the table below. Do any of your models produce the same projections? Explain any possible differences between your projections and the Statistical Abstract projections.

Year          2020    2030    2040    2050
Population    341.4   373.5   405.7   439.0
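Part (b) of the project reduces to the 3 × 3 normal equations (XᵀX)a = Xᵀy for the model y = a0 + a1x + a2x², which can be solved with the chapter's Gaussian elimination with partial pivoting. A sketch using the table's data (the 2020 value computed here is just this model's output, not an official projection):

```python
# Second-degree least squares fit to the decade population data (x = 0 for 1900).
years = list(range(12))                                   # 0, 1, ..., 11
pops = [76.2, 92.2, 106.0, 123.2, 132.2, 151.3,
        179.3, 203.3, 226.5, 248.7, 281.4, 308.7]

# Normal equations (X^T X) a = X^T y for the model y = a0 + a1*x + a2*x^2
S = [float(sum(x ** k for x in years)) for k in range(5)]  # power sums
M = [[S[0], S[1], S[2]],
     [S[1], S[2], S[3]],
     [S[2], S[3], S[4]]]
t = [sum(y * x ** k for x, y in zip(years, pops)) for k in range(3)]

def solve3(M, t):
    """Gaussian elimination with partial pivoting on a 3 x 3 system."""
    a = [row[:] + [ti] for row, ti in zip(M, t)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(a[r][c]))   # partial pivoting
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, 3):
            f = a[r][c] / a[c][c]
            a[r] = [u - f * v for u, v in zip(a[r], a[c])]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                    # back-substitution
        sol[r] = (a[r][3] - sum(a[r][j] * sol[j] for j in range(r + 1, 3))) / a[r][r]
    return sol

a0, a1, a2 = solve3(M, t)
predict_2020 = a0 + a1 * 12 + a2 * 12 ** 2
```

The quadratic model's 2020 prediction lands close to the Statistical Abstract's 341.4, which is one way to judge the fit.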


