CONTENT

I. NUMERICAL METHODS FOR NONLINEAR EQUATIONS AND SYSTEMS OF EQUATIONS

I.1. NEWTON'S METHOD FOR EQUATIONS

I.2. NEWTON'S METHOD FOR SYSTEMS OF EQUATIONS

II. NEWTON SEARCH FOR A MINIMUM

III. THE GOLDEN RATIO SEARCH FOR A MINIMUM

IV. NELDER-MEAD SEARCH FOR A MINIMUM

V. NUMERICAL METHODS FOR DIFFERENTIAL EQUATIONS AND SYSTEMS OF EQUATIONS

V.1. RUNGE-KUTTA METHODS

V.2. FINITE DIFFERENCE METHODS

VI. MATHEMATICAL NOTIONS


I. NUMERICAL METHODS FOR NONLINEAR EQUATIONS AND SYSTEMS OF EQUATIONS

I.1. NEWTON'S METHOD FOR EQUATIONS

Here we try to approximate a root z of the equation f(x) = 0 in such a way that we use the tangent line of f at successive points. Starting from an initial point x_0, the left figure shows a successful trial and the right figure shows an unsuccessful one:

So, an approximation of z can be obtained from the Taylor series of the function f about the current point x_k, keeping only its constant and linear terms. Hence the next approximation x_{k+1} is the solution of the linear equation

f(x_k) + f'(x_k)·(x − x_k) = 0.

If this equation has a unique solution for x, i.e. f'(x_k) ≠ 0, then

x_{k+1} = x_k − f(x_k) / f'(x_k),   k = 0, 1, 2, …,

and it is called the formula of Newton's method for the equation f(x) = 0.
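
A minimal Python sketch of this iteration may help to fix the idea (the function f, its derivative df, the starting value x0 and the tolerance are illustrative placeholders to be supplied by the user):

    def newton(f, df, x0, tol=1e-8, max_iter=50):
        """Newton's method for f(x) = 0: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            dfx = df(x)
            if dfx == 0.0:
                raise ZeroDivisionError("f'(x_k) vanished: the linear equation has no unique solution")
            x_new = x - f(x) / dfx
            if abs(x_new - x) < tol:          # simple stopping criterion
                return x_new
            x = x_new
        return x

    # Hypothetical usage for the equation x**2 - 2 = 0 started from x0 = 1:
    root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)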

THEOREM. Let the function f be twice continuously differentiable on [a, b]. Assume that there is a root z of the equation f(x) = 0 in the interval [a, b], and that f'(x) ≠ 0 and f''(x) does not change sign on [a, b]. Then Newton's iteration converges (starting from x_0 = a, if f(a)·f''(a) > 0, and from x_0 = b, if f(b)·f''(b) > 0) to the

unique solution z on [a, b] and

|x_k − z| ≤ (M / (2m)) · (x_k − x_{k−1})²,   k = 1, 2, …,

where m = min_{[a,b]} |f'(x)|, M = max_{[a,b]} |f''(x)|.

NUMERICAL EXAMPLE

Determine the root of the equation

by Newton's method with a given error bound.

(1) We can get a suitable interval (for which the conditions of our theorem are satisfied) graphically and/or by tabulation. If we write our equation in the form

and plot the functions given by its left and right sides (roughly), then we get a (rough) estimate of the root:

We can see that the equation has a unique root z and that it lies in the interval (1/3, 2). Moreover, by the table,

z lies in the interval (2/3, 1), too.

(2) Now let us examine the conditions concerning the convergence of the method:

a)


b) Since and , on a small neighbourhood of z the graph of f is the following:

Hence this is the right choice of the starting point.

c) Since and is strictly monotone decreasing on ,

Furthermore, and it is strictly monotone increasing on (because of f'''(x) > 0). Therefore, is strictly monotone decreasing and

(3) The computation of the iterations and the estimation of the errors:

,

,

,

As is fulfilled, can be considered a suitable approximation of the root. (If we used rougher bounds for m and M, then the value M/(2m) would be larger and we might need more iterations to guarantee the same accuracy.)

I.2. NEWTON'S METHOD FOR SYSTEMS OF EQUATIONS

Let the nonlinear system of equations

f_1(x_1, …, x_n) = 0,  …,  f_n(x_1, …, x_n) = 0

be given. In short form: F(x) = 0.

Starting from the initial vector x^(0), we proceed in such a way that the functions f_i are made "linear" by the first (n + 1) terms of their Taylor series at the current approximation x^(k). It means that the surfaces are replaced by their tangent planes at x^(k). Hence, we can get x^(k+1) from the system of linear equations

if there exists a unique solution of it. Using matrices we can write

[ ∂f_i(x^(k)) / ∂x_j ] · (x^(k+1) − x^(k)) = −F(x^(k)),   i, j = 1, …, n.

Shortly,

J(x^(k)) · (x^(k+1) − x^(k)) = −F(x^(k)),

where J(x) is called the Jacobian matrix of the function system F. If there exists a unique solution (det J(x^(k)) ≠ 0), then we can express x^(k+1) from this equation by using the inverse matrix of J(x^(k)).

So, the formula of Newton's method is

x^(k+1) = x^(k) − J(x^(k))^(−1) · F(x^(k)),   k = 0, 1, 2, …

The following theorem gives a fundamental characterization of the method.

THEOREM


Suppose that each of the partial derivatives ∂f_i/∂x_j is continuous on a neighbourhood of a solution z and the matrix J(z) is invertible. Then Newton's method is convergent when started sufficiently close to z. And if the second order partial derivatives

are also continuous on the above mentioned neighbourhood, then the error decreases at least quadratically, that is,

‖x^(k+1) − z‖ ≤ c · ‖x^(k) − z‖²,

where c is a positive constant.

REMARK

It is not practical to compute x^(k+1) by the formula

x^(k+1) = x^(k) − J(x^(k))^(−1) · F(x^(k))

(here we would have to compute the inverse of a matrix of order n in every step); instead it is practical to solve the linear system J(x^(k)) · d^(k) = −F(x^(k)) for the unknown correction d^(k) = x^(k+1) − x^(k), from which we get the values x^(k+1) = x^(k) + d^(k) immediately. Namely, solving a system of linear equations requires less work than computing the inverse matrix.
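
The remark can be written as a short sketch in Python (NumPy assumed; F and J are placeholders for the user's function system and its Jacobian matrix, and the tolerance is illustrative):

    import numpy as np

    def newton_system(F, J, x0, tol=1e-8, max_iter=50):
        """Newton's method for F(x) = 0: solve J(x_k) d_k = -F(x_k), then x_{k+1} = x_k + d_k."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = np.asarray(F(x), dtype=float)
            if np.max(np.abs(Fx)) < tol:          # stop-criterion of type a): small residual
                break
            d = np.linalg.solve(np.asarray(J(x), dtype=float), -Fx)   # cheaper than forming the inverse
            x = x + d                             # criterion b) would instead test the size of d
        return x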

NUMERICAL EXAMPLE.

Let a circle and a paraboloid be given by the formulas

and

respectively. Determine the distance of the given curve and surface.

Geometrically we have to find the shortest segment starting from a point of the circle and ending at a point of the paraboloid:


The square of the distance

For the extremum points it is a necessary condition that

This way we have a nonlinear system of equations. Now introduce the notations , where , and try to solve the nonlinear system by Newton's method starting from . We determine the vector e from the linear system of equations

again and again. For k = 0 the solution of the linear system is

e = {0.895051, 0.613877, 0.700131} and

x^(1) = x^(0) + e = {2.895051, 1.386123, 1.299869}.

The next approximations are

x^(2) = {2.954787, 1.112591, 0.742890},   x^(3) = {3.066840, 1.047144, 0.295795},

x^(4) = {3.131045, 1.019094, 0.042924},   x^(5) = {3.141308, 1.000788, 0.001324},   x^(6) = {3.141592, 1.000001, 0.000002}.

In practice the following stopping criteria are usually used:

a) the iteration is stopped if ‖F(x^(k))‖ ≤ ε, where ε is a given positive number;

b) the iteration is stopped if the distance of the last two approximating vectors (in some norm) is small enough.

If we use criterion a) with a suitable ε, then x^(6) is the last iteration, since

, , .


II. NEWTON SEARCH FOR A MINIMUM

Definition (Gradient). Assume that f(x, y) is a function of two variables and has partial derivatives f_x and f_y. The gradient of f, denoted by ∇f(x, y), is the vector function

∇f(x, y) = ( ∂f/∂x (x, y), ∂f/∂y (x, y) ).

Definition (Jacobian Matrix). Assume that f_1(x, y) and f_2(x, y) are functions of two variables; their Jacobian matrix J(x, y) is

J(x, y) = [ ∂f_1/∂x   ∂f_1/∂y
            ∂f_2/∂x   ∂f_2/∂y ].

Definition (Hessian Matrix). Assume that f(x, y) is a function of two variables and has partial derivatives up to order two. The Hessian matrix Hf(x, y) is

defined as follows:

Hf(x, y) = [ ∂²f/∂x²    ∂²f/∂x∂y
             ∂²f/∂y∂x   ∂²f/∂y²  ].

Lemma 1. For f(x, y) the Hessian matrix is the Jacobian matrix of the two functions f_x(x, y) and f_y(x, y), i.e.

Hf(x, y) = J(f_x, f_y)(x, y).

Lemma 2. If the second order partial derivatives of are continuous then the Hessian matrix is symmetric.

Outline of the Newton Method for Finding a Minimum

Start with an approximation p_0 to the minimum point p. Set k = 0.

(i) Evaluate the gradient vector ∇f(p_k) and the Hessian matrix Hf(p_k).

(ii) Compute the next point p_{k+1} = p_k − ( Hf(p_k) )^(−1) ∇f(p_k).

(iii) Perform the termination test for minimization. Set k = k + 1.

Repeat the process.
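
The outline can be summarized in a short Python sketch (grad and hess are placeholders returning the gradient vector and the Hessian matrix of the function to be minimized):

    import numpy as np

    def newton_minimize(grad, hess, p0, tol=1e-8, max_iter=50):
        """Newton search for a minimum: p_{k+1} = p_k - (Hf(p_k))^{-1} grad f(p_k)."""
        p = np.asarray(p0, dtype=float)
        for _ in range(max_iter):
            g = np.asarray(grad(p), dtype=float)   # step (i): gradient
            if np.linalg.norm(g) < tol:            # step (iii): termination test
                break
            H = np.asarray(hess(p), dtype=float)   # step (i): Hessian matrix
            p = p - np.linalg.solve(H, g)          # step (ii): next point
        return p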


Example 3. Use the Newton search method to find the minimum of .

Solution 3.

III. THE GOLDEN RATIO SEARCH FOR A MINIMUM

Definition (Unimodal Function). The function f(x) is unimodal on I = [a, b] if there exists a unique number p in I such that f(x) is decreasing on [a, p] and f(x) is increasing on [p, b].

Golden Ratio Search

If f(x) is known to be unimodal on [a, b], then it is possible to replace the interval with a subinterval on which f(x) takes on its minimum value. One approach is to select two interior points c < d. This results in a < c < d < b. The condition that f(x) is unimodal guarantees that the function values f(c) and f(d) are less than max(f(a), f(b)).

If f(c) ≤ f(d), then the minimum must occur in the subinterval [a, d], and we replace b with d and continue the search in the new subinterval [a, d]. If f(d) < f(c), then the minimum must occur in the subinterval [c, b], and we replace a with c and continue the search in the new subinterval [c, b]. These choices are shown in Figure 1 below.

    

If f(c) ≤ f(d), then squeeze from the right and use the new interval [a, d] and the four points a, e, c, d. If f(d) < f(c), then squeeze from the left and use the new interval [c, b] and the four points c, d, e, b.

Figure 1. The decision process for the golden ratio search.

 


The interior points c and d of the original interval [a, b] must be constructed so that the resulting subintervals [a, d] and [c, b] are symmetric in [a, b]. This requires that b − c = d − a, and produces the two equations

(1) c = a + (1 − r)(b − a) = b − r(b − a),   and
(2) d = b − (1 − r)(b − a) = a + r(b − a),

where 1/2 < r < 1 (to preserve the ordering c < d).

We want the value of r to remain constant on each subinterval. If r is chosen judiciously, then only one new point e (shown in green in Figure 1) needs to be constructed for the next iteration. Additionally, one of the old interior points (either c or d) will be used as an interior point of the next subinterval, while the other interior point (d or c) will become an endpoint of the next subinterval in the iteration process. Thus, for each iteration only one new point e will have to be constructed and only one new function evaluation f(e) will have to be made. As a consequence, the value r must be chosen carefully to split the interval

into subintervals which preserve this ratio from one iteration to the next.

Suppose, for example, that f(c) ≤ f(d), so that the search continues on [a, d], whose length is d − a = r(b − a), and the old interior point c is kept. If only one new function evaluation is to be made in the interval [a, d], then c must be the right-hand interior point of [a, d], that is,

c = a + r(d − a).

Use the facts in (1) and (2) to rewrite this equation and then simplify:

a + (1 − r)(b − a) = a + r·r(b − a),

1 − r = r²,

r² + r − 1 = 0.

Now the quadratic formula can be applied and we get

r = (−1 + √5) / 2.


The value we seek is r = (−1 + √5)/2 ≈ 0.618034, and it is often referred to as the "golden ratio." Similarly,

if f(d) < f(c), then the search continues on [c, b] and a symmetric argument shows that the old interior point d can be reused, so that the same value of r works.
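
A minimal Python sketch of the resulting search (f is assumed to be unimodal on [a, b]; the names and the tolerance are illustrative):

    import math

    def golden_search(f, a, b, tol=1e-6):
        """Golden ratio search for the minimum of a unimodal function on [a, b]."""
        r = (math.sqrt(5.0) - 1.0) / 2.0          # the golden ratio, about 0.618034
        c = a + (1.0 - r) * (b - a)
        d = a + r * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc <= fd:                          # squeeze from the right: new interval [a, d]
                b, d, fd = d, c, fc
                c = a + (1.0 - r) * (b - a)       # the only new function evaluation of the step
                fc = f(c)
            else:                                 # squeeze from the left: new interval [c, b]
                a, c, fc = c, d, fd
                d = a + r * (b - a)
                fd = f(d)
        return (a + b) / 2.0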

 

Example 1. Find the minimum of the unimodal function on the interval . Solution 1.

 

IV. NELDER-MEAD SEARCH FOR A MINIMUM

Nelder-Mead Method

The Nelder-Mead method is a simplex method for finding a local minimum of a function of several variables. Its discovery is attributed to J. A. Nelder and R. Mead. For two variables, a simplex is a triangle, and the method is a pattern search that compares function values at the three vertices of a triangle. The worst vertex, where f is largest, is rejected and replaced with a new vertex. A new triangle is formed and the search is continued. The process generates a sequence of triangles (which might have different shapes), for which the function values at the vertices get smaller and smaller. The size of the triangles is reduced and the coordinates of the minimum point are found. The algorithm is stated using the term simplex (a generalized triangle in n dimensions) and will find the minimum of a function of n variables. It is effective and computationally compact.

Initial Triangle

Let f(x, y) be the function that is to be minimized. To start, we are given three vertices of a triangle: V_k = (x_k, y_k), k = 1, 2, 3. The function is then evaluated at each of the three points: z_k = f(x_k, y_k), k = 1, 2, 3. The subscripts are then reordered so that z_1 ≤ z_2 ≤ z_3. We use the notation

(1) B = (x_1, y_1), G = (x_2, y_2), and W = (x_3, y_3)

to help remember that B is the best vertex, G is good (next to best), and W is the worst vertex.

Midpoint of the Good Side

The construction process uses the midpoint M of the line segment joining B and G. It is found by averaging the coordinates:

(2) M = (B + G)/2 = ( (x_1 + x_2)/2, (y_1 + y_2)/2 ).

Reflection Using the Point R

The function decreases as we move along the side of the triangle from W to B, and it decreases as we move along the side from W to G. Hence it is feasible that f takes on smaller values at points that lie away from W on the opposite side of the line between B and G. We choose a test point R that is obtained by "reflecting" the triangle through the side BG. To determine R, we first find the midpoint M of the side BG. Then draw the line segment from W to M and call its length d. This last segment is extended a distance d through M to locate the point R. The vector formula for R is

(3) R = 2M − W.

Expansion Using the Point E

If the function value at R is smaller than the function value at W, then we have moved in the correct direction toward the minimum. Perhaps the minimum is just a bit farther than the point R. So we extend the line segment through M and R to the point E. This forms an expanded triangle

BGE. The point E is found by moving an additional distance d along the line joining M and R. If the function value at E is less than the function value at

R, then we have found a better vertex than R. The vector formula for E is

(4) E = 2R − M.

Contraction Using the Point C

If the function values at R and W are the same, another point must be tested. Perhaps the function is smaller at M, but we cannot replace W with M

because we must have a triangle. Consider the two midpoints C_1 and C_2 of the line segments WM and MR, respectively. The point with the smaller function value is called C, and the new triangle is BGC. Note: The choice between C_1 and C_2 might seem inappropriate for the two-dimensional case, but it is important in higher dimensions.

Shrink Toward B

If the function value at C is not less than the value at W, the points G and W must be shrunk toward B. The point G is replaced with M, and W is replaced with S, which is the midpoint of the line segment joining B with W.

Logical Decisions for Each Step

A computationally efficient algorithm should perform function evaluations only if needed. In each step, a new vertex is found, which replaces . As soon as it is found, further investigation is not needed, and the iteration step is completed. The logical details for two-dimensional cases are given in the proof.
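
In practice one rarely codes this bookkeeping by hand; the method is available, for example, in SciPy. A hedged sketch (the objective function below is only a hypothetical test function):

    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        x, y = p
        return (x - 1.0) ** 2 + (y + 2.0) ** 2    # hypothetical function of two variables

    result = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8})
    print(result.x, result.fun)                   # approximate minimum point and value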

Example 1. Use the Nelder-Mead method to find the minimum of .

Solution 1.

 

V. NUMERICAL METHODS FOR DIFFERENTIAL EQUATIONS AND SYSTEMS OF EQUATIONS

V.1. RUNGE-KUTTA METHODS

Let the initial value problem

y'(x) = f(x, y(x)),   y(x_0) = y_0

be given and suppose that we would like to determine (approximate) values of the solution function y(x) in a finite interval [x_0, b]. (It is well known that this problem is correctly posed if

f and ∂f/∂y are continuous in a large enough neighbourhood of the point (x_0, y_0).)

The basic idea of the so-called one-step methods (including Runge-Kutta methods) is the following: if we have a formula for determining a good approximating value to y(u + h) by using a value v ≈ y(u), then starting from (x_0, y_0) and using the previous (approximate) value at every step, we can give approximate values to the solution function at the points x_1 = x_0 + h, x_2 = x_0 + 2h, … .


From the equality

it seems reasonable to try a formula of the form

y(u + h) = v + h·D(u, v, h) + O(h^(p+1)),   where v = y(u),

which is the general form of the so-called one-step explicit formulas. (Of course we would like to choose a function D for which p is sufficiently large.) The speciality of Runge-Kutta methods is that the function value D(u, v, h) is given by several values of f,

generated recursively.

In detail, we determine the members

k_1 = h·f(u, v),   k_i = h·f( u + a_i·h, v + b_{i,1}·k_1 + … + b_{i,i−1}·k_{i−1} ),   i = 2, …, s,

and using these values we give the approximating value in the form

y(u + h) ≈ v + c_1·k_1 + c_2·k_2 + … + c_s·k_s.

We have a concrete formula after choosing the number of members s and determining the coefficients a_i, b_{ij}, c_i. When determining the coefficients, we harmonize the Taylor series (with respect to h) of the accurate value on the left side and the Taylor series of the approximating value on the right side up to the largest possible power of the step length h.

Look at three different cases:


a) Let s = 1. Then

k_1 = h·f(u, v),

that is,

y(u + h) ≈ v + c_1·k_1 = v + c_1·h·f(u, v).

The Taylor series of the left side with respect to h is

y(u + h) = y(u) + h·y'(u) + O(h²) = v + h·f(u, v) + O(h²);

the Taylor series of the right side is itself, so the Taylor series are harmonized up to power 1 if c_1 = 1. The obtained formula

k_1 = h·f(u, v),
y(u + h) ≈ v + k_1

is called Euler's formula. The so-called local error of this formula (which gives the error after one step starting from an accurate value) is (h²/2)·y''(ξ) (where

ξ ∈ (u, u + h)), or in short form this local error is O(h²).
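
Euler's formula written as code is a one-line step (f is the right-hand side of the differential equation; the names are illustrative):

    def euler_step(f, u, v, h):
        """One Euler step: k1 = h*f(u, v), y(u + h) is approximated by v + k1."""
        k1 = h * f(u, v)
        return v + k1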

b) Let s = 2. Then

k_1 = h·f(u, v),   k_2 = h·f(u + a_2·h, v + b_21·k_1),

that is,

y(u + h) ≈ v + c_1·k_1 + c_2·k_2.

The Taylor series of the left side with respect to h is

y(u + h) = v + h·f(u, v) + (h²/2)·( f_x(u, v) + f_y(u, v)·f(u, v) ) + O(h³),

where it was used that y''(x) = f_x(x, y(x)) + f_y(x, y(x))·f(x, y(x)).

The Taylor series of the right side is


v + (c_1 + c_2)·h·f(u, v) + c_2·h²·( a_2·f_x(u, v) + b_21·f_y(u, v)·f(u, v) ) + O(h³).

The two Taylor series are harmonized up to power 2 if

c_1 + c_2 = 1,   c_2·a_2 = 1/2,   c_2·b_21 = 1/2.

So, we have three equations for four unknowns, and we can get infinitely many solutions. E.g. c_1 = c_2 = 1/2, a_2 = b_21 = 1 is a possible choice and the formula is

k_1 = h·f(u, v),
k_2 = h·f(u + h, v + k_1),
y(u + h) ≈ v + (1/2)·(k_1 + k_2),

which is called Heun's formula (or trapezoidal formula). The local error is characterized by O(h³) here.

c) Let s = 4. Then we can derive 11 equations for the 13 unknowns by harmonizing the Taylor series up to power 4. (For harmonizing up to power 5 the 13 parameters are not enough.) A possible choice is

k_1 = h·f(u, v),
k_2 = h·f(u + h/2, v + k_1/2),
k_3 = h·f(u + h/2, v + k_2/2),
k_4 = h·f(u + h, v + k_3),

and the formula is

y(u + h) ≈ v + (1/6)·(k_1 + 2·k_2 + 2·k_3 + k_4),

which is called the classical Runge-Kutta formula. The local error can be characterized by O(h⁵) now.
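
A sketch of the classical Runge-Kutta step and of the step-by-step marching process described in the sequel (f, the initial values and the step length are to be supplied by the user; the names are illustrative):

    def rk4_step(f, u, v, h):
        """One classical Runge-Kutta step for y' = f(x, y)."""
        k1 = h * f(u, v)
        k2 = h * f(u + h / 2.0, v + k1 / 2.0)
        k3 = h * f(u + h / 2.0, v + k2 / 2.0)
        k4 = h * f(u + h, v + k3)
        return v + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

    def solve_ivp_rk4(f, x0, y0, b, h):
        """March from x0 to b with constant step length h and return the mesh values."""
        xs, ys = [x0], [y0]
        x, y = x0, y0
        while x < b - 1e-12:
            y = rk4_step(f, x, y, h)
            x = x + h
            xs.append(x)
            ys.append(y)
        return xs, ys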

We have already mentioned that each Runge-Kutta type formula has the form

y(u + h) ≈ v + h·D(u, v, h)

after substituting the expressions k_1, …, k_s into the last "equation" (see the general form of one-step explicit methods). Hence, using such a formula, first we determine the

approximating value y_1 ≈ y(x_1)

concerning x_1 = x_0 + h, starting from the accurate initial couple (x_0, y_0). Then we determine the approximating value y_2 ≈ y(x_2)

concerning x_2 = x_1 + h, starting from the approximating couple (x_1, y_1), etc.

So, we can give the process by the formula

y_{i+1} = y_i + h·D(x_i, y_i, h),   i = 0, 1, 2, …

DEFINITION

The quantity

l_{i+1} = y(x_{i+1}) − ( y(x_i) + h·D(x_i, y(x_i), h) ),

where y denotes the exact solution, is called the local error belonging to the place x_{i+1}. The quantity

e_{i+1} = y(x_{i+1}) − y_{i+1}

(where y_{i+1} is originated by the above mentioned formula) is called the accumulated error belonging to the place x_{i+1}.

THEOREM

If f is smooth enough in a suitable neighbourhood of the point (x_0, y_0) in the case of the problem y' = f(x, y), y(x_0) = y_0, then for the Euler, Heun and classical Runge-Kutta methods (which have local errors O(h²), O(h³) and O(h⁵), respectively) the accumulated error can be given as

e_i = O(h),   e_i = O(h²)   and   e_i = O(h⁴),   respectively,

in the finite interval [x_0, b].

NUMERICAL EXAMPLE

1. Given the differential equation


Determine the approximating values of that particular solution which passes through the point , using the classical Runge-Kutta formula on the interval , if .

The results:

For

V.2. FINITE DIFFERENCE METHODS

The basic idea of these methods is the following: we substitute the derivatives in the (ordinary or partial) differential equation and in the initial and/or boundary conditions by expressions coming from approximate differentiation formulas. In this way we can get an "approximating system of equations" for the required function values. We illustrate the possibilities on two well-known problems.

a) Let the boundary value problem

be given, where , and are at least continuous on [a, b]. Take an equidistant partition of the interval [a, b] given

by the mesh points x_i = a + i·h, i = 0, 1, …, n, where h = (b − a)/n.

If the solution function y is four times continuously differentiable on [a, b], then by using the values belonging to the places x_{i−1}, x_i, x_{i+1} we can get

y''(x_i) = ( y(x_{i−1}) − 2·y(x_i) + y(x_{i+1}) ) / h² − (h²/12)·y''''(ξ_i),

where ξ_i ∈ (x_{i−1}, x_{i+1}). Introducing the notation Y_i ≈ y(x_i), substituting the previous expression into the differential equation for i = 1, …, n − 1 and arranging the equations with respect to Y_{i−1}, Y_i, Y_{i+1}, we can obtain the system of linear equations

for the required function values Y_1, …, Y_{n−1}. Introducing the notations

our linear system can be written in the form

A·Y = b + τ(h).

During the practical computation we cannot determine the elements of the truncation vector τ(h), and therefore we omit this member (the linear system is truncated) and we determine the solution vector Ỹ of the linear system

A·Ỹ = b,

hoping that Ỹ lies near enough to Y. (Both linear systems have a unique solution because the matrix A is diagonally dominant due to the assumptions on the coefficient functions.)
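
As an illustration, the assembly and solution of the truncated linear system can be sketched in Python for an assumed model problem −y''(x) + q(x)·y(x) = r(x), y(a) = alpha, y(b) = beta (this concrete form and all names are only illustrative; the text above applies to more general boundary value problems):

    import numpy as np

    def fd_bvp(q, r, a, b, alpha, beta, n):
        """Finite difference sketch for -y'' + q(x)y = r(x), y(a)=alpha, y(b)=beta (assumed model form)."""
        h = (b - a) / n
        x = a + h * np.arange(1, n)            # inner mesh points x_1, ..., x_{n-1}
        m = n - 1
        A = np.zeros((m, m))
        rhs = np.zeros(m)
        for i in range(m):
            A[i, i] = 2.0 + h * h * q(x[i])    # central difference for -y'' plus the q-term
            if i > 0:
                A[i, i - 1] = -1.0
            if i < m - 1:
                A[i, i + 1] = -1.0
            rhs[i] = h * h * r(x[i])
        rhs[0] += alpha                         # boundary values moved to the right-hand side
        rhs[-1] += beta
        return x, np.linalg.solve(A, rhs)       # A is diagonally dominant when q(x) >= 0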

THEOREM

If then

NUMERICAL EXAMPLES

1. Give approximating values to the solution function of the boundary value problem

If we use the former approximating formula concerning y'' at the inner mesh points, then

Respecting the boundary values and arranging the equations, we get the linear system

This has the unique solution

So, the solution is:

Since the solution function is a polynomial of degree at most three (so that its fourth derivative vanishes), the used approximating formula is exact, i.e. Y_i = y(x_i) in this problem.

 

VI. MATHEMATICAL NOTIONS

1. DEFINITION. The set L is called a linear vector space if the addition of its elements and their multiplication by real numbers are defined on the set and these operations have the following properties:

The elements of such a set are called vectors.

Examples. a) Let

x = (x_1, x_2, …, x_n),

where x_1, …, x_n are real numbers, that is, consider the set of such "objects" which have n "coordinates". If the addition and the multiplication by a number are defined by the traditional rules, that is,

x + y = (x_1 + y_1, …, x_n + y_n),   λ·x = (λ·x_1, …, λ·x_n),

then the mentioned properties can be proved easily. This linear space is usually denoted by R^n.

b) Let the interval [a, b] be given and consider the set of the continuous functions on [a, b].

If the addition and the multiplication by a number are defined by the traditional rules, then the required properties can be proved easily. This linear space is usually denoted by C[a, b].

2. DEFINITION. The elements a_1, …, a_k (where L is a vector space and the number of the given vectors is finite) form a linearly independent system if the zero vector can be obtained as a linear combination λ_1·a_1 + … + λ_k·a_k = 0 only with λ_1 = … = λ_k = 0.

Examples. a) In the vector space the vectors and form a linearly independent system, since the zero element can only be obtained in the form of their trivial linear combination.

b) In the vector space the vectors form a linearly independent system, since the zero element (the identically zero function) can only be obtained in the form of their trivial linear combination.

3. DEFINITION. The linearly independent system formed by the vectors a_1, …, a_n is a basis of L if an arbitrary element x of L can be obtained as a linear combination of these elements, that is, in the form

x = λ_1·a_1 + λ_2·a_2 + … + λ_n·a_n.

Examples


a) In the linear space R^n the unit vectors e_1 = (1, 0, …, 0), …, e_n = (0, 0, …, 1) form a well-known basis.

Another possible basis is given e.g. by the vectors .

b) Denote by P_2 the set of the polynomials of degree at most 2. The traditional addition and multiplication by a number do not lead out from this set, so P_2 is a vector space.

The elements 1, x, x² (functions, vectors) form a basis of P_2, because they form a linearly independent system and an arbitrary at most quadratic polynomial can be obtained in the form

a_0·1 + a_1·x + a_2·x².

Another basis is given e.g. by elements .

4. DEFINITION. If we can assign a real number ‖x‖ to every element x of the linear space L in such a way that

‖x‖ ≥ 0, and ‖x‖ = 0 only for x = 0;   ‖λ·x‖ = |λ|·‖x‖;   ‖x + y‖ ≤ ‖x‖ + ‖y‖,

then the vector space is called a normed space and the real number ‖x‖ is called the norm of the vector x.

Examples. a) Define the quantities

‖x‖_1 = |x_1| + … + |x_n|,   ‖x‖_2 = ( x_1² + … + x_n² )^(1/2),   ‖x‖_∞ = max_i |x_i|

for each element x of the linear space R^n. These quantities define three different norms, which are called norm number 1, norm number 2 (or Euclidean norm) and norm number infinity.

b) Denote by M_n the set of the real (square) matrices of order n. They form a linear space with the traditional addition and multiplication by a number. The quantities

‖A‖_1 = max_j Σ_i |a_ij|,   ‖A‖_2 = ( λ_max(AᵀA) )^(1/2),   ‖A‖_∞ = max_i Σ_j |a_ij|,

where λ_max(AᵀA) is the largest eigenvalue of the matrix AᵀA,

define three different norms. The norms belonging to the same notation (the same name) in a) and b) are called compatible, because such pairs also have the property

‖A·x‖ ≤ ‖A‖·‖x‖

for any x, besides the properties appearing in the definition. By the latter property we could have defined the above mentioned norms of the matrix A as the smallest upper bound of the suitable vector norm ‖A·x‖ under the condition ‖x‖ = 1. In these lecture notes compatible norms are assumed whenever matrix and vector norms are used together.

c) In the vector space C[a, b] we can define norms e.g. by the equalities

‖f‖_∞ = max_{x∈[a,b]} |f(x)|,   ‖f‖_2 = ( ∫_a^b f²(x) dx )^(1/2).

Here we mention that the distance of the elements of a normed space (just like in the traditional case) can be measured by the quantity

‖x − y‖.
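
These norms can be evaluated directly with NumPy (the vector and the matrix below are arbitrary illustrative examples):

    import numpy as np

    x = np.array([1.0, -2.0, 3.0])
    A = np.array([[2.0, -1.0], [-1.0, 2.0]])

    n1   = np.linalg.norm(x, 1)        # norm number 1: sum of absolute values
    n2   = np.linalg.norm(x, 2)        # norm number 2 (Euclidean norm)
    ninf = np.linalg.norm(x, np.inf)   # norm number infinity: largest absolute value

    m1   = np.linalg.norm(A, 1)        # maximum absolute column sum
    m2   = np.linalg.norm(A, 2)        # square root of the largest eigenvalue of A^T A
    minf = np.linalg.norm(A, np.inf)   # maximum absolute row sum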

5. DEFINITION. If we can assign a real number (x, y) to any two elements x, y of the linear space L in such a way that the usual scalar product properties (symmetry, bilinearity, and (x, x) > 0 for x ≠ 0) are fulfilled,

then the vector space is called a Euclidean space and the real number (x, y) is called the scalar product of the elements (vectors) x and y.

Examples. a) Let x = (x_1, …, x_n) and y = (y_1, …, y_n) be two arbitrary elements of R^n. Then the quantity

(x, y) = x_1·y_1 + x_2·y_2 + … + x_n·y_n

defines the traditional scalar product on R^n.

b) If f and g are two arbitrary elements of C[a, b], then the quantity

(f, g) = ∫_a^b f(x)·g(x) dx

defines a scalar product.
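
Both scalar products can also be evaluated numerically (NumPy and SciPy assumed; the vectors and functions below are arbitrary illustrative examples):

    import numpy as np
    from scipy.integrate import quad

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])
    dot_xy = np.dot(x, y)                               # traditional scalar product on R^n

    f = lambda t: np.sin(t)
    g = lambda t: np.cos(t)
    dot_fg, _ = quad(lambda t: f(t) * g(t), 0.0, 1.0)   # scalar product on C[0, 1]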

6. DEFINITION. The matrix A of order n is called a positive definite matrix if

(A·x, x) > 0 for every vector x ≠ 0.

7. DEFINITION. The matrix A of order n is called a negative definite matrix if

(A·x, x) < 0 for every vector x ≠ 0.

8. DEFINITION. The matrix A of order n is called a positive semidefinite matrix if

(A·x, x) ≥ 0 for every vector x.

9. DEFINITION. The matrix A of order n is called a negative semidefinite matrix if

(A·x, x) ≤ 0 for every vector x.

10. DEFINITION. Denote by a the limit of the convergent sequence {x_k} of real numbers. If there exist positive numbers c_1 ≤ c_2 and a number p such that for arbitrary k

c_1·|x_k − a|^p ≤ |x_{k+1} − a| ≤ c_2·|x_k − a|^p,

where p ≥ 1, then we say: the sequence converges to a in order p. If only the left (right) side inequality is fulfilled, then we say: the order of convergence is at most (at least) p.

Examples. a) For p = 2 (quadratic convergence), which often appears in numerical methods, we illustrate the rapidity of convergence on a simple example. Suppose that c_2 = 1 and the distance of x_0 and a is 0.1. Then (by using the right side inequality):

|x_1 − a| ≤ 10⁻²,   |x_2 − a| ≤ 10⁻⁴,   |x_3 − a| ≤ 10⁻⁸,   |x_4 − a| ≤ 10⁻¹⁶,

which means a surprising rapidity compared with traditional sequences.
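
The bound can be reproduced with a few lines (assuming, as above, that the constant equals one and the initial distance is 0.1):

    err = 0.1
    for k in range(1, 5):
        err = err ** 2        # quadratic convergence: the error bound is squared in every step
        print(k, err)         # prints roughly 1e-2, 1e-4, 1e-8, 1e-16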

b) If we use a vector sequence {x^(k)} instead of the number sequence and a norm instead of the absolute value, then the previous

definition can be repeated with the inequality

c_1·‖x^(k) − a‖^p ≤ ‖x^(k+1) − a‖ ≤ c_2·‖x^(k) − a‖^p.

11. DEFINITION. Let the sequences {a_k} and {b_k} be given. Then the equality

a_k = O(b_k)

(where O can be read as "capital ordo") means that there exists a positive constant c such that

|a_k| ≤ c·|b_k|,   k = 1, 2, …

Examples. a) Let a be the limit of the sequence {x_k}. Then an equality of the form

x_k − a = O(b_k)

means that the error of x_k is at most c·|b_k|, where c is independent of k. In other words, it means that "the error is proportional to b_k". Such a type of decrease of the error can be seen e.g. at the power method for eigenvalues.

b) Let {x_k} be the previous sequence. Then the corresponding equality written with a power of the step length (supposing it is true) means that "the error is proportional to that power of the step length". Such a type of decrease of the error can be seen e.g. at Runge-Kutta methods.

c) Sometimes we use the O sign for the truncation of a polynomial. E.g. an equality of the form

p(n) = c_3·n³ + O(n²)

means that

|p(n) − c_3·n³| ≤ c·n²,

where c is independent of n. Such an application of the O sign can be found e.g. at elimination methods.

