
# Ordinary Differential Equations Notes

Date post: 10-Apr-2018
• 8/8/2019 Ordinary Differential Equations Notes


2.1 General Theory

In order to discuss the structure of solutions, we need the following definition.

Definition. Functions $\varphi_1(x), \dots, \varphi_k(x)$ are linearly dependent on $(a,b)$ if there exist constants $c_1, \dots, c_k$, not all zero, such that

$$c_1\varphi_1(x) + \dots + c_k\varphi_k(x) = 0 \quad \text{for all } x \in (a,b).$$

A set of functions is linearly independent on $(a,b)$ if it is not linearly dependent on $(a,b)$.

Lemma 2.4. Functions $\varphi_1(x), \dots, \varphi_k(x)$ are linearly dependent on $(a,b)$ if and only if the following vector-valued functions

$$\begin{pmatrix}\varphi_1(x)\\ \varphi_1'(x)\\ \vdots\\ \varphi_1^{(n-1)}(x)\end{pmatrix},\ \dots,\ \begin{pmatrix}\varphi_k(x)\\ \varphi_k'(x)\\ \vdots\\ \varphi_k^{(n-1)}(x)\end{pmatrix} \qquad (2.1.6)$$

are linearly dependent on $(a,b)$.

Proof. ($\Leftarrow$) is obvious. To show ($\Rightarrow$), assume that $\varphi_1, \dots, \varphi_k$ are linearly dependent on $(a,b)$. There exist constants $c_1, \dots, c_k$, not all zero, such that, for all $x\in(a,b)$,

$$c_1\varphi_1(x) + \dots + c_k\varphi_k(x) = 0.$$

Differentiating this equality successively we find that

$$c_1\varphi_1'(x) + \dots + c_k\varphi_k'(x) = 0,\ \ \dots,\ \ c_1\varphi_1^{(n-1)}(x) + \dots + c_k\varphi_k^{(n-1)}(x) = 0.$$

Thus

$$c_1\begin{pmatrix}\varphi_1(x)\\ \varphi_1'(x)\\ \vdots\\ \varphi_1^{(n-1)}(x)\end{pmatrix} + \dots + c_k\begin{pmatrix}\varphi_k(x)\\ \varphi_k'(x)\\ \vdots\\ \varphi_k^{(n-1)}(x)\end{pmatrix} = \mathbf{0}$$

for all $x\in(a,b)$. Hence the $k$ vector-valued functions are linearly dependent on $(a,b)$. $\square$

Recall that $n$ vectors in $\mathbb{R}^n$ are linearly dependent if and only if the determinant of the matrix formed by these vectors is zero.

Definition. The Wronskian of $n$ functions $\varphi_1(x), \dots, \varphi_n(x)$ is defined by

$$W(\varphi_1, \dots, \varphi_n)(x) = \begin{vmatrix}\varphi_1(x) & \cdots & \varphi_n(x)\\ \varphi_1'(x) & \cdots & \varphi_n'(x)\\ \vdots & & \vdots\\ \varphi_1^{(n-1)}(x) & \cdots & \varphi_n^{(n-1)}(x)\end{vmatrix}. \qquad (2.1.7)$$


Note that the Wronskian of $\varphi_1, \dots, \varphi_n$ is the determinant of the matrix formed by the vector-valued functions given in (2.1.6).

Theorem 2.5. Let $y_1(x), \dots, y_n(x)$ be $n$ solutions of (2.1.2) on $(a,b)$ and let $W(x)$ be their Wronskian.
(1) $y_1(x), \dots, y_n(x)$ are linearly dependent on $(a,b)$ if and only if $W(x)\equiv 0$ on $(a,b)$.
(2) $y_1(x), \dots, y_n(x)$ are linearly independent on $(a,b)$ if and only if $W(x)$ does not vanish on $(a,b)$.

Corollary 2.6. (1) The Wronskian of $n$ solutions of (2.1.2) is either identically zero or nowhere zero.

(2) $n$ solutions $y_1, \dots, y_n$ of (2.1.2) are linearly independent on $(a,b)$ if and only if the vectors

$$\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix},\ \dots,\ \begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix}$$

are linearly independent for some $x_0\in(a,b)$.

Proof of Theorem 2.5. Let $y_1, \dots, y_n$ be solutions of (2.1.2) on $(a,b)$, and let $W(x)$ be their Wronskian.

Step 1. We first show that if $y_1, \dots, y_n$ are linearly dependent on $(a,b)$, then $W(x)\equiv 0$. Since these solutions are linearly dependent, by Lemma 2.4 the $n$ vector-valued functions

$$\begin{pmatrix}y_1(x)\\ y_1'(x)\\ \vdots\\ y_1^{(n-1)}(x)\end{pmatrix},\ \dots,\ \begin{pmatrix}y_n(x)\\ y_n'(x)\\ \vdots\\ y_n^{(n-1)}(x)\end{pmatrix}$$

are linearly dependent on $(a,b)$. Thus for all $x\in(a,b)$, the determinant of the matrix formed by these vectors, namely the Wronskian of $y_1, \dots, y_n$, is zero.

Step 2. Now assume that the Wronskian $W(x)$ of the $n$ solutions $y_1, \dots, y_n$ vanishes at some $x_0\in(a,b)$. We shall show that $y_1, \dots, y_n$ are linearly dependent on $(a,b)$. Since $W(x_0)=0$, the $n$ vectors

$$\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix},\ \dots,\ \begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix}$$

are linearly dependent. Thus there exist $n$ constants $c_1, \dots, c_n$, not all zero, such that

$$c_1\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix} + \dots + c_n\begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix} = \mathbf{0}. \qquad (2.1.8)$$


Define

$$y_0(x) = c_1y_1(x) + \dots + c_ny_n(x).$$

From Theorem 2.3, $y_0$ is a solution of (2.1.2). From (2.1.8), $y_0$ satisfies the initial conditions

$$y(x_0) = 0,\ y'(x_0) = 0,\ \dots,\ y^{(n-1)}(x_0) = 0. \qquad (2.1.9)$$

From Corollary 2.2, we have $y_0\equiv 0$, namely,

$$c_1y_1(x) + \dots + c_ny_n(x) = 0 \quad\text{for all } x\in(a,b).$$

Thus $y_1, \dots, y_n$ are linearly dependent on $(a,b)$. $\square$

Example. Consider the differential equation $y'' - \frac{1}{x}y' = 0$ on the interval $(0,\infty)$. Both $\varphi_1(x)=1$ and $\varphi_2(x)=x^2$ are solutions of the differential equation. Their Wronskian is

$$W(\varphi_1,\varphi_2)(x) = \begin{vmatrix}1 & x^2\\ 0 & 2x\end{vmatrix} = 2x \neq 0$$

for $x>0$. Thus $\varphi_1$ and $\varphi_2$ are linearly independent solutions.
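As a quick check, this computation can be reproduced with sympy (a sketch; the names `phi1`, `phi2` are ours, not the notes'):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
phi1, phi2 = sp.Integer(1), x**2

# Both functions solve y'' - (1/x) y' = 0 on (0, oo)
ode = lambda f: sp.diff(f, x, 2) - sp.diff(f, x)/x
assert sp.simplify(ode(phi1)) == 0
assert sp.simplify(ode(phi2)) == 0

# Their Wronskian is 2x, nonzero for x > 0, so they are independent
W = sp.Matrix([[phi1, phi2],
               [sp.diff(phi1, x), sp.diff(phi2, x)]]).det()
assert sp.simplify(W - 2*x) == 0
```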

Theorem 2.7. Let $a_1(x), \dots, a_n(x)$ and $f(x)$ be continuous on the interval $(a,b)$. The homogeneous equation (2.1.2) has $n$ linearly independent solutions on $(a,b)$. Let $y_1, \dots, y_n$ be $n$ linearly independent solutions of (2.1.2) defined on $(a,b)$. The general solution of (2.1.2) is given by

$$y(x) = c_1y_1(x) + \dots + c_ny_n(x), \qquad (2.1.10)$$

where $c_1, \dots, c_n$ are arbitrary constants.

Proof. (1) Fix $x_0\in(a,b)$. For $k = 1, 2, \dots, n$, let $y_k$ be the solution of (2.1.2) satisfying the initial conditions

$$y_k^{(j)}(x_0) = \begin{cases}0 & \text{if } j \neq k-1,\\ 1 & \text{if } j = k-1.\end{cases}$$

The $n$ vectors

$$\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix},\ \dots,\ \begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix}$$

are linearly independent since together they form the identity matrix. From Corollary 2.6, $y_1, \dots, y_n$ are linearly independent on $(a,b)$. From Theorem 2.3, for any constants $c_1, \dots, c_n$, $y = c_1y_1 + \dots + c_ny_n$ is a solution of (2.1.2).

(2) Now let $y_1, \dots, y_n$ be $n$ linearly independent solutions of (2.1.2) on $(a,b)$. We shall show that the general solution of (2.1.2) is given by

$$y = c_1y_1 + \dots + c_ny_n. \qquad (2.1.11)$$


Given a solution $y$ of (2.1.2), fix $x_0\in(a,b)$. Since $y_1, \dots, y_n$ are linearly independent on $(a,b)$, the vectors

$$\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix},\ \dots,\ \begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix}$$

are linearly independent, and hence form a basis for $\mathbb{R}^n$. Thus the vector

$$\begin{pmatrix}y(x_0)\\ y'(x_0)\\ \vdots\\ y^{(n-1)}(x_0)\end{pmatrix}$$

can be represented as a linear combination of these $n$ vectors; namely, there exist $n$ constants $c_1, \dots, c_n$ such that

$$\begin{pmatrix}y(x_0)\\ y'(x_0)\\ \vdots\\ y^{(n-1)}(x_0)\end{pmatrix} = c_1\begin{pmatrix}y_1(x_0)\\ y_1'(x_0)\\ \vdots\\ y_1^{(n-1)}(x_0)\end{pmatrix} + \dots + c_n\begin{pmatrix}y_n(x_0)\\ y_n'(x_0)\\ \vdots\\ y_n^{(n-1)}(x_0)\end{pmatrix}.$$

Let

$$\varphi(x) = y(x) - [c_1y_1(x) + \dots + c_ny_n(x)].$$

Then $\varphi(x)$ is a solution of (2.1.2) and satisfies the initial conditions (2.1.4) at $x = x_0$ with zero initial data. By Corollary 2.2, $\varphi(x)\equiv 0$ on $(a,b)$. Thus

$$y(x) = c_1y_1(x) + \dots + c_ny_n(x).$$

So (2.1.11) gives the general solution of (2.1.2). $\square$

Any set of $n$ linearly independent solutions is called a fundamental set of solutions.

Now we consider the non-homogeneous equation (2.1.1). We have

Theorem 2.8. Let $y_p$ be a particular solution of (2.1.1), and let $y_1, \dots, y_n$ be a fundamental set of solutions of the associated homogeneous equation (2.1.2). The general solution of (2.1.1) is given by

$$y(x) = c_1y_1(x) + \dots + c_ny_n(x) + y_p(x). \qquad (2.1.12)$$

Proof. Let $y$ be a solution of the non-homogeneous equation. Then $y - y_p$ is a solution of the homogeneous equation. Thus $y(x) - y_p(x) = c_1y_1(x) + \dots + c_ny_n(x)$. $\square$

2.2 Linear Equations with Constant Coefficients

Let us begin with the second order linear equation with constant coefficients

$$y'' + ay' + by = 0, \qquad (2.2.1)$$


where $a$ and $b$ are constants. We look for a solution of the form $y = e^{\lambda x}$. Plugging into (2.2.1), we find that $e^{\lambda x}$ is a solution of (2.2.1) if and only if

$$\lambda^2 + a\lambda + b = 0. \qquad (2.2.2)$$

(2.2.2) is called the auxiliary equation or characteristic equation of (2.2.1). The roots of (2.2.2) are called characteristic values (or eigenvalues):

$$\lambda_1 = \tfrac12\left(-a + \sqrt{a^2 - 4b}\right),\qquad \lambda_2 = \tfrac12\left(-a - \sqrt{a^2 - 4b}\right).$$

1. If $a^2 - 4b > 0$, (2.2.2) has two distinct real roots $\lambda_1, \lambda_2$, and the general solution of (2.2.1) is

$$y = c_1e^{\lambda_1x} + c_2e^{\lambda_2x}.$$

2. If $a^2 - 4b = 0$, (2.2.2) has one real root $\lambda$ (we may say that (2.2.2) has two equal roots $\lambda_1 = \lambda_2$). The general solution of (2.2.1) is

$$y = c_1e^{\lambda x} + c_2xe^{\lambda x}.$$

3. If $a^2 - 4b < 0$, (2.2.2) has a pair of complex conjugate roots $\lambda_1 = \alpha + i\beta$, $\lambda_2 = \alpha - i\beta$. The general solution of (2.2.1) is

$$y = c_1e^{\alpha x}\cos(\beta x) + c_2e^{\alpha x}\sin(\beta x).$$

Example. Solve $y'' + y' - 2y = 0$, $y(0) = 4$, $y'(0) = -5$.
Ans: $\lambda_1 = 1$, $\lambda_2 = -2$, $y = e^x + 3e^{-2x}$.

Example. Solve $y'' - 4y' + 4y = 0$, $y(0) = 3$, $y'(0) = 1$.
Ans: $\lambda_1 = \lambda_2 = 2$, $y = (3 - 5x)e^{2x}$.

Example. Solve $y'' - 2y' + 10y = 0$.
Ans: $\lambda_1 = 1 + 3i$, $\lambda_2 = 1 - 3i$, $y = e^x(c_1\cos 3x + c_2\sin 3x)$.
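The three answers above can be verified with sympy's `dsolve` (a sketch, with the initial conditions as reconstructed in the examples):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Distinct real roots: y'' + y' - 2y = 0, y(0)=4, y'(0)=-5
sol1 = sp.dsolve(y(x).diff(x, 2) + y(x).diff(x) - 2*y(x), y(x),
                 ics={y(0): 4, y(x).diff(x).subs(x, 0): -5})
assert sp.simplify(sol1.rhs - sp.exp(x) - 3*sp.exp(-2*x)) == 0

# Repeated root: y'' - 4y' + 4y = 0, y(0)=3, y'(0)=1
sol2 = sp.dsolve(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), y(x),
                 ics={y(0): 3, y(x).diff(x).subs(x, 0): 1})
assert sp.simplify(sol2.rhs - (3 - 5*x)*sp.exp(2*x)) == 0

# Complex roots: y'' - 2y' + 10y = 0 has solutions e^x cos 3x, e^x sin 3x
ode3 = y(x).diff(x, 2) - 2*y(x).diff(x) + 10*y(x)
for cand in (sp.exp(x)*sp.cos(3*x), sp.exp(x)*sp.sin(3*x)):
    assert sp.simplify(ode3.subs(y(x), cand).doit()) == 0
```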

Now we consider $n$-th order homogeneous linear equations with constant coefficients

$$y^{(n)} + a_1y^{(n-1)} + \dots + a_{n-1}y' + a_ny = 0, \qquad (2.2.3)$$

where $a_1, \dots, a_n$ are real constants. $y = e^{\lambda x}$ is a solution of (2.2.3) if and only if $\lambda$ satisfies

$$\lambda^n + a_1\lambda^{n-1} + \dots + a_{n-1}\lambda + a_n = 0. \qquad (2.2.4)$$

The solutions of (2.2.4) are called characteristic values or eigenvalues of the equation (2.2.3).


Let $\lambda_1, \dots, \lambda_s$ be the distinct eigenvalues of (2.2.3). Then we can write

$$\lambda^n + a_1\lambda^{n-1} + \dots + a_{n-1}\lambda + a_n = (\lambda - \lambda_1)^{m_1}(\lambda - \lambda_2)^{m_2}\cdots(\lambda - \lambda_s)^{m_s}, \qquad (2.2.5)$$

where $m_1, \dots, m_s$ are positive integers and $m_1 + \dots + m_s = n$. We call them the multiplicities of the eigenvalues $\lambda_1, \dots, \lambda_s$ respectively.

Lemma 2.9. Assume $\lambda$ is an eigenvalue of (2.2.3) of multiplicity $m$.
(i) $e^{\lambda x}$ is a solution of (2.2.3).
(ii) If $m > 1$, then for any positive integer $1 \le k \le m-1$, $x^ke^{\lambda x}$ is a solution of (2.2.3).
(iii) If $\lambda = \alpha + i\beta$, then $x^ke^{\alpha x}\cos(\beta x)$, $x^ke^{\alpha x}\sin(\beta x)$ are solutions of (2.2.3), where $0 \le k \le m-1$.

Theorem 2.10. Let $\lambda_1, \dots, \lambda_s$ be the distinct eigenvalues of (2.2.3), with multiplicities $m_1, \dots, m_s$ respectively. Then (2.2.3) has a fundamental set of solutions

$$e^{\lambda_1x}, xe^{\lambda_1x}, \dots, x^{m_1-1}e^{\lambda_1x};\ \dots;\ e^{\lambda_sx}, xe^{\lambda_sx}, \dots, x^{m_s-1}e^{\lambda_sx}. \qquad (2.2.6)$$

Proof of Lemma 2.9 and Theorem 2.10.

Consider the $n$-th order linear equation with constant coefficients

$$y^{(n)} + a_1y^{(n-1)} + \dots + a_{n-1}y' + a_ny = 0, \qquad (A1)$$

where $y^{(k)} = \frac{d^ky}{dx^k}$. Let $L(y) = y^{(n)} + a_1y^{(n-1)} + \dots + a_{n-1}y' + a_ny$ and $p(z) = z^n + a_1z^{n-1} + \dots + a_{n-1}z + a_n$. Note that $p$ is a polynomial in $z$ of degree $n$. Then we have

$$L(e^{zx}) = p(z)e^{zx}. \qquad (A2)$$

Before we begin the proof, let us observe that $\frac{\partial^2}{\partial z\,\partial x}e^{zx} = \frac{\partial^2}{\partial x\,\partial z}e^{zx}$ by Clairaut's theorem, because $e^{zx}$ is differentiable in $(x,z)$ as a function of two variables and all of its higher order partial derivatives exist and are continuous. That means we can interchange the order of differentiation with respect to $x$ and $z$ as we wish. Therefore $\frac{d}{dz}L(e^{zx}) = L\!\left(\frac{d}{dz}e^{zx}\right)$. For instance, one may verify directly that

$$\frac{d}{dz}\frac{d^k}{dx^k}(e^{zx}) = xz^ke^{zx} + kz^{k-1}e^{zx} = \frac{d^k}{dx^k}\!\left(\frac{d}{dz}e^{zx}\right).$$

Here one may need to use Leibniz's rule for the $k$-th derivative of a product of two functions:

$$(uv)^{(k)} = \sum_{i=0}^{k}\binom{k}{i}u^{(i)}v^{(k-i)}. \qquad (A3)$$


More generally, $\frac{d^k}{dz^k}L(e^{zx}) = L\!\left(\frac{d^k}{dz^k}e^{zx}\right)$. (Strictly speaking, partial derivative notation should be used.) Now let us prove our results.

(1) If $\lambda$ is a root of $p$, then $L(e^{\lambda x}) = 0$ by (A2), so that $e^{\lambda x}$ is a solution of (A1).

(2) If $\lambda$ is a root of $p$ of multiplicity $m$, then $p(\lambda) = 0$, $p'(\lambda) = 0$, $p''(\lambda) = 0$, ..., $p^{(m-1)}(\lambda) = 0$. Now for $k = 1, \dots, m-1$, differentiating (A2) $k$ times with respect to $z$, we have

$$L(x^ke^{zx}) = L\!\left(\frac{d^k}{dz^k}e^{zx}\right) = \frac{d^k}{dz^k}L(e^{zx}) = \frac{d^k}{dz^k}(p(z)e^{zx}) = \sum_{i=0}^{k}\binom{k}{i}p^{(i)}(z)\,x^{k-i}e^{zx}.$$

Evaluating at $z = \lambda$, every term vanishes since $i \le k \le m-1$. Thus $L(x^ke^{\lambda x}) = 0$ and $x^ke^{\lambda x}$ is a solution of (A1).

(3) Let $\lambda_1, \dots, \lambda_s$ be the distinct roots of $p$, with multiplicities $m_1, \dots, m_s$ respectively. Then we wish to prove that

$$e^{\lambda_1x}, xe^{\lambda_1x}, \dots, x^{m_1-1}e^{\lambda_1x};\ \dots;\ e^{\lambda_sx}, xe^{\lambda_sx}, \dots, x^{m_s-1}e^{\lambda_sx} \qquad (A4)$$

are linearly independent over $\mathbb{R}$. To prove this, suppose for all $x$ in $\mathbb{R}$

$$c_{11}e^{\lambda_1x} + c_{12}xe^{\lambda_1x} + \dots + c_{1m_1}x^{m_1-1}e^{\lambda_1x} + \dots + c_{s1}e^{\lambda_sx} + c_{s2}xe^{\lambda_sx} + \dots + c_{sm_s}x^{m_s-1}e^{\lambda_sx} = 0.$$

Let us write this as

$$P_1(x)e^{\lambda_1x} + P_2(x)e^{\lambda_2x} + \dots + P_s(x)e^{\lambda_sx} = 0,$$

for all $x$ in $\mathbb{R}$, where $P_i(x) = c_{i1} + c_{i2}x + \dots + c_{im_i}x^{m_i-1}$. We need to prove $P_i(x)\equiv 0$ for all $i$. Assume that one of the $P_i$'s is not identically zero. By re-labelling the $P_i$'s, we may assume that $P_s(x)\not\equiv 0$. Dividing the above equation by $e^{\lambda_1x}$, we have

$$P_1(x) + P_2(x)e^{(\lambda_2-\lambda_1)x} + \dots + P_s(x)e^{(\lambda_s-\lambda_1)x} = 0,$$

for all $x$ in $\mathbb{R}$. Upon differentiating this equation sufficiently many times (at most $m_1$ times, since $P_1(x)$ is a polynomial of degree at most $m_1-1$), we can reduce $P_1(x)$ to 0. Note that in this process the degree of the polynomial multiplying each $e^{(\lambda_i-\lambda_1)x}$ remains unchanged. Therefore we get

$$Q_2(x)e^{(\lambda_2-\lambda_1)x} + \dots + Q_s(x)e^{(\lambda_s-\lambda_1)x} = 0,$$

where $\deg Q_i = \deg P_i$. Multiplying through by $e^{\lambda_1x}$, we have

$$Q_2(x)e^{\lambda_2x} + \dots + Q_s(x)e^{\lambda_sx} = 0.$$

Repeating this procedure, we arrive at

$$R_s(x)e^{\lambda_sx} = 0,$$

where $\deg R_s = \deg P_s$. Hence $R_s(x)\equiv 0$ on $\mathbb{R}$, contradicting the fact that $\deg R_s = \deg P_s$ and $P_s$ is not identically zero. Thus all the $P_i(x)$ are identically zero. That means all the $c_{ij}$'s are zero and the functions in (A4) are linearly independent. $\square$


Remark. If (2.2.3) has a complex eigenvalue $\lambda = \alpha + i\beta$, then $\bar\lambda = \alpha - i\beta$ is also an eigenvalue. Thus both $x^ke^{(\alpha+i\beta)x}$ and $x^ke^{(\alpha-i\beta)x}$ appear in (2.2.6), where $0 \le k \le m-1$. In order to obtain a fundamental set of real solutions, each pair of solutions $x^ke^{(\alpha+i\beta)x}$ and $x^ke^{(\alpha-i\beta)x}$ in (2.2.6) should be replaced by $x^ke^{\alpha x}\cos(\beta x)$ and $x^ke^{\alpha x}\sin(\beta x)$.
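Lemma 2.9 and Theorem 2.10 can be spot-checked symbolically on a small instance, say $p(\lambda) = (\lambda-1)^3$, i.e. $y''' - 3y'' + 3y' - y = 0$ (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')

# For p(lambda) = (lambda - 1)^3, Lemma 2.9 predicts the
# fundamental set {e^x, x e^x, x^2 e^x}.
L = lambda f: (sp.diff(f, x, 3) - 3*sp.diff(f, x, 2)
               + 3*sp.diff(f, x) - f)
basis = [sp.exp(x), x*sp.exp(x), x**2*sp.exp(x)]
for f in basis:
    assert sp.simplify(L(f)) == 0

# Their Wronskian is 2 e^(3x), nowhere zero, so they are independent
W = sp.Matrix([[sp.diff(f, x, k) for f in basis]
               for k in range(3)]).det()
assert sp.simplify(W - 2*sp.exp(3*x)) == 0
```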

In the following we discuss solutions of non-homogeneous equations. For simplicity we consider

$$y'' + by' + cy = f(x), \qquad (2.2.7)$$

where $b$ and $c$ are real constants. The associated homogeneous equation is (2.2.1), and the characteristic equation of (2.2.1) is (2.2.2). We shall look for a particular solution of (2.2.7). The first method below works in general even if $b$ and $c$ are functions of $x$, and it also applies to higher order equations.

1. Method of variation of parameters.

Let $y_1$ and $y_2$ be two linearly independent solutions of the associated homogeneous equation (2.2.1), and let $W(x)$ be their Wronskian. Look for a particular solution of (2.2.7) in the form

$$y_p = u_1y_1 + u_2y_2,$$

where $u_1$ and $u_2$ are functions to be determined. Suppose

$$u_1'y_1 + u_2'y_2 = 0.$$

Plugging $y_p$ into (2.2.7), we get

$$u_1'y_1' + u_2'y_2' = f.$$

Hence $u_1$ and $u_2$ satisfy

$$u_1'y_1 + u_2'y_2 = 0,\qquad u_1'y_1' + u_2'y_2' = f. \qquad (2.2.8)$$

Solving this system, we find that

$$u_1' = -\frac{y_2}{W}f,\qquad u_2' = \frac{y_1}{W}f.$$

Integrating yields

$$u_1(x) = -\int_{x_0}^{x}\frac{y_2(t)}{W(t)}f(t)\,dt,\qquad u_2(x) = \int_{x_0}^{x}\frac{y_1(t)}{W(t)}f(t)\,dt. \qquad (2.2.9)$$

Example. Solve the differential equation $y'' + y = \sec x$.
Solution. A basis for the solutions of the homogeneous equation consists of $y_1 = \cos x$ and $y_2 = \sin x$. Now $W(y_1, y_2) = \cos x\cdot\cos x - (-\sin x)\sin x = 1$. Thus $u_1 = -\int\sin x\sec x\,dx = \ln|\cos x| + c_1$ and $u_2 = \int\cos x\sec x\,dx = x + c_2$. From this, a particular solution is given by $y_p = \cos x\ln|\cos x| + x\sin x$. Therefore the general solution is

$$y = c_1\cos x + c_2\sin x + \cos x\ln|\cos x| + x\sin x.$$
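This particular solution can be verified with sympy (a sketch, working on an interval where $\cos x > 0$ so the absolute value can be dropped):

```python
import sympy as sp

x = sp.symbols('x')

# Particular solution from variation of parameters for y'' + y = sec x
y_p = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)
residual = sp.diff(y_p, x, 2) + y_p - sp.sec(x)
assert sp.simplify(residual) == 0
```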


The method of variation of parameters can also be used to find another solution of a second order homogeneous linear differential equation when one solution is given. Suppose $z$ is a known solution of the equation

$$y'' + P(x)y' + Q(x)y = 0.$$

We assume $y = vz$ is a solution, so that

$$0 = (vz)'' + P(vz)' + Q(vz) = (v''z + 2v'z' + vz'') + P(v'z + vz') + Qvz = (v''z + 2v'z' + Pv'z) + v(z'' + Pz' + Qz) = v''z + v'(2z' + Pz).$$

That is,

$$\frac{v''}{v'} = -2\frac{z'}{z} - P.$$

An integration gives $v' = z^{-2}e^{-\int P\,dx}$ and $v = \int z^{-2}e^{-\int P\,dx}\,dx$. We leave it as an exercise to show that $z$ and $vz$ are linearly independent solutions by computing their Wronskian.
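As a sketch, the formula $v' = z^{-2}e^{-\int P\,dx}$ can be exercised in sympy on the equation of the next example, $x^2y'' + xy' - y = 0$, whose known solution is $z = x$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Known solution z = x of x^2 y'' + x y' - y = 0; normalised P(x) = 1/x
z = x
P = 1/x
vprime = sp.simplify(z**-2 * sp.exp(-sp.integrate(P, x)))  # reduces to x**(-3)
v = sp.integrate(vprime, x)
y2 = sp.simplify(v*z)  # second solution, up to a constant factor

ode = lambda f: x**2*sp.diff(f, x, 2) + x*sp.diff(f, x) - f
assert sp.simplify(ode(y2)) == 0
```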

Example. Given that $y_1 = x$ is a solution of $x^2y'' + xy' - y = 0$, find another solution.
Solution. Let us write the DE in the form $y'' + \frac1x y' - \frac1{x^2}y = 0$. Then $P(x) = 1/x$. Thus a second linearly independent solution is given by $y = vx$, where

$$v = \int x^{-2}e^{-\int(1/x)\,dx}\,dx = \int x^{-2}\cdot x^{-1}\,dx = -\tfrac12x^{-2}.$$

Therefore, up to a constant factor, the second solution is $y = x^{-1}$ and the general solution is $y = c_1x + c_2x^{-1}$.

2. Method of undetermined coefficients.

Case 1. $f(x) = P_n(x)e^{\alpha x}$, where $P_n(x)$ is a polynomial of degree $n \ge 0$.
We look for a particular solution in the form

$$y = Q(x)e^{\alpha x},$$

where $Q(x)$ is a polynomial. Plugging it into (2.2.7), we find

$$Q'' + (2\alpha + b)Q' + (\alpha^2 + b\alpha + c)Q = P_n(x). \qquad (2.2.10)$$

Subcase 1.1. If $\alpha^2 + b\alpha + c \neq 0$, namely $\alpha$ is not a root of (2.2.2), we choose $Q = R_n$, a polynomial of degree $n$, and

$$y = R_n(x)e^{\alpha x}.$$

The coefficients of $R_n$ can be determined by comparing terms of the same power on the two sides of (2.2.10). Note that in this case both sides of (2.2.10) are polynomials of degree $n$.

Subcase 1.2. If $\alpha^2 + b\alpha + c = 0$ but $2\alpha + b \neq 0$, namely $\alpha$ is a simple root of (2.2.2), then (2.2.10) reduces to

$$Q'' + (2\alpha + b)Q' = P_n. \qquad (2.2.11)$$


We choose $Q$ to be a polynomial of degree $n+1$. Since the constant term of $Q$ does not appear in (2.2.11), we may choose $Q(x) = xR_n(x)$, where $R_n(x)$ is a polynomial of degree $n$. Then

$$y = xR_n(x)e^{\alpha x}.$$

Subcase 1.3. If $\alpha^2 + b\alpha + c = 0$ and $2\alpha + b = 0$, namely $\alpha$ is a root of (2.2.2) with multiplicity 2, then (2.2.10) reduces to

$$Q'' = P_n. \qquad (2.2.12)$$

We choose $Q(x) = x^2R_n(x)$, where $R_n(x)$ is a polynomial of degree $n$. Then

$$y = x^2R_n(x)e^{\alpha x}.$$

Example. Find the general solution of $y'' - y' - 2y = 4x^2$.
Ans: $y = c_1e^{2x} + c_2e^{-x} - 3 + 2x - 2x^2$.

Example. Find a particular solution of $y''' + 2y'' - y' = 3x^2 - 2x + 1$.
Ans: $y = -27x - 5x^2 - x^3$.

Example. Solve $y'' - 2y' + y = xe^x$.
Ans: $y = c_1e^x + c_2xe^x + \frac16x^3e^x$.
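Each of the three answers can be checked by direct substitution (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')

# Pairs of (particular solution, residual of the corresponding equation)
checks = [
    # y'' - y' - 2y = 4x^2
    (-3 + 2*x - 2*x**2,
     lambda f: sp.diff(f, x, 2) - sp.diff(f, x) - 2*f - 4*x**2),
    # y''' + 2y'' - y' = 3x^2 - 2x + 1
    (-27*x - 5*x**2 - x**3,
     lambda f: sp.diff(f, x, 3) + 2*sp.diff(f, x, 2) - sp.diff(f, x)
               - (3*x**2 - 2*x + 1)),
    # y'' - 2y' + y = x e^x
    (x**3*sp.exp(x)/6,
     lambda f: sp.diff(f, x, 2) - 2*sp.diff(f, x) + f - x*sp.exp(x)),
]
for y_p, residual in checks:
    assert sp.simplify(residual(y_p)) == 0
```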

Case 2. $f(x) = P_n(x)e^{\alpha x}\cos(\beta x)$ or $f(x) = P_n(x)e^{\alpha x}\sin(\beta x)$, where $P_n(x)$ is a polynomial of degree $n \ge 0$.
We first look for a solution of

$$y'' + by' + cy = P_n(x)e^{(\alpha+i\beta)x}. \qquad (2.2.13)$$

Using the method in Case 1, we obtain a complex-valued solution

$$z(x) = u(x) + iv(x),$$

where $u(x) = \Re(z(x))$, $v(x) = \Im(z(x))$. Substituting $z(x) = u(x) + iv(x)$ into (2.2.13) and taking the real and imaginary parts, we can show that $u(x) = \Re(z(x))$ is a solution of

$$y'' + by' + cy = P_n(x)e^{\alpha x}\cos(\beta x), \qquad (2.2.14)$$

and $v(x) = \Im(z(x))$ is a solution of

$$y'' + by' + cy = P_n(x)e^{\alpha x}\sin(\beta x). \qquad (2.2.15)$$

Example. Solve $y'' - 2y' + 2y = e^x\cos x$.
Ans: $y = c_1e^x\cos x + c_2e^x\sin x + \frac12xe^x\sin x$.

The following conclusions will be useful.


Theorem 2.11. Let $y_1$ and $y_2$ be particular solutions of the equations

$$y'' + ay' + by = f_1(x)$$

and

$$y'' + ay' + by = f_2(x)$$

respectively. Then $y_p = y_1 + y_2$ is a particular solution of

$$y'' + ay' + by = f_1(x) + f_2(x).$$

Proof. Exercise.

Example. Solve $y'' - y = e^x + \sin x$.
Solution. A particular solution of $y'' - y = e^x$ is given by $y_1 = \frac12xe^x$. Also, a particular solution of $y'' - y = \sin x$ is given by $y_2 = -\frac12\sin x$. Thus $\frac12(xe^x - \sin x)$ is a particular solution of the given differential equation. The general solution of the corresponding homogeneous differential equation is $c_1e^x + c_2e^{-x}$. Hence the general solution of the given differential equation is

$$y = c_1e^x + c_2e^{-x} + \tfrac12(xe^x - \sin x).$$
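A direct substitution check of the superposition (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')

# Superposition for y'' - y = e^x + sin x
y1 = x*sp.exp(x)/2    # particular solution for RHS e^x
y2 = -sp.sin(x)/2     # particular solution for RHS sin x
res = lambda f, rhs: sp.diff(f, x, 2) - f - rhs
assert sp.simplify(res(y1, sp.exp(x))) == 0
assert sp.simplify(res(y2, sp.sin(x))) == 0
assert sp.simplify(res(y1 + y2, sp.exp(x) + sp.sin(x))) == 0
```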

2.3 Operator Methods

Let $x$ denote the independent variable and $y$ the dependent variable. Introduce

$$Dy = \frac{d}{dx}y,\qquad D^ny = \frac{d^n}{dx^n}y = y^{(n)}.$$

We define $D^0y = y$. Given a polynomial $L(x) = \sum_{j=0}^{n}a_jx^j$, where the $a_j$'s are constants, we define a differential operator $L(D)$ by

$$L(D)y = \sum_{j=0}^{n}a_jD^jy.$$

Then the equation

$$\sum_{j=0}^{n}a_jy^{(j)} = f(x) \qquad (2.3.1)$$

can be written as

$$L(D)y = f(x). \qquad (2.3.2)$$

Let $L(D)^{-1}f$ denote any solution of (2.3.2). We have

$$D^{-1}D = DD^{-1} = D^0,\qquad L(D)^{-1}L(D) = L(D)L(D)^{-1} = D^0.$$

However, $L(D)^{-1}f$ is not unique.

To see the above properties, first recall that $D^{-1}f$ means a solution of $y' = f$. Thus $D^{-1}f = \int f$. Hence it follows that $D^{-1}D = DD^{-1} =$ the identity operator $D^0$.


For the second equality, note that a solution of $L(D)y = L(D)f$ is simply $f$. Thus, by definition of $L(D)^{-1}$, we have $L(D)^{-1}(L(D)f) = f$. This means $L(D)^{-1}L(D) = D^0$. Lastly, since $L(D)^{-1}f$ is a solution of $L(D)y = f(x)$, it is clear that $L(D)(L(D)^{-1}f) = f$. In other words, $L(D)L(D)^{-1} = D^0$.

More generally, we have:

1. D1f (x) = f (x)dx + C,2. (D a)1f (x) = Ce ax + eax eax f (x)dx,3. L(D )(eax f (x)) = eax L(D + a)f (x),

4. L(D )1(eax f (x)) = eax L(D + a)1f (x).

(2.3.3)

Proof. Property 2 is just the solution of the first order linear ODE. To prove Property 3, first observe that $(D-r)(e^{ax}f(x)) = e^{ax}Df(x) + ae^{ax}f(x) - re^{ax}f(x) = e^{ax}(D+a-r)f(x)$. Thus $(D-s)(D-r)(e^{ax}f(x)) = (D-s)[e^{ax}(D+a-r)f(x)] = e^{ax}(D+a-s)(D+a-r)f(x)$. Now we may write $L(D) = (D-r_1)\cdots(D-r_n)$. Then $L(D)(e^{ax}f(x)) = e^{ax}L(D+a)f(x)$. This says that we can move the factor $e^{ax}$ to the left of the operator $L(D)$ if we replace $L(D)$ by $L(D+a)$.

To prove Property 4, apply $L(D)$ to the right-hand side. We have

$$L(D)[e^{ax}L(D+a)^{-1}f(x)] = e^{ax}[L(D+a)(L(D+a)^{-1}f(x))] = e^{ax}f(x).$$

Thus $L(D)^{-1}(e^{ax}f(x)) = e^{ax}L(D+a)^{-1}f(x)$. $\square$

Let $L(x) = (x-r_1)\cdots(x-r_n)$. The solution of (2.3.2) is given by

$$y = L(D)^{-1}f(x) = (D-r_1)^{-1}\cdots(D-r_n)^{-1}f(x). \qquad (2.3.4)$$

Then we obtain the solution by successive integration. Moreover, if the $r_j$'s are distinct, we can write

$$\frac{1}{L(x)} = \frac{A_1}{x-r_1} + \dots + \frac{A_n}{x-r_n},$$

where the $A_j$'s can be found by the method of partial fractions. Then the solution is given by

$$y = [A_1(D-r_1)^{-1} + \dots + A_n(D-r_n)^{-1}]f(x). \qquad (2.3.5)$$

Next consider the case of repeated roots. Let the multiple root be equal to $m$ and the equation to be solved be

$$(D-m)^ny = f(x). \qquad (2.3.6)$$

To solve this equation, let us assume a solution of the form $y = e^{mx}v(x)$, where $v(x)$ is a function of $x$ to be determined. One can easily verify that $(D-m)^n(e^{mx}v) = e^{mx}D^nv$. Thus equation (2.3.6) reduces to

$$D^nv = e^{-mx}f(x). \qquad (2.3.7)$$


If we integrate (2.3.7) $n$ times, we obtain

$$v = \int\!\cdots\!\int e^{-mx}f(x)\,dx\cdots dx + c_0 + c_1x + \dots + c_{n-1}x^{n-1}. \qquad (2.3.8)$$

Thus we see that

$$(D-m)^{-n}f(x) = e^{mx}\left[\int\!\cdots\!\int e^{-mx}f(x)\,dx\cdots dx + c_0 + c_1x + \dots + c_{n-1}x^{n-1}\right]. \qquad (2.3.9)$$

Example. Solve $(D^2 - 3D + 2)y = xe^x$.
Solution. First

$$\frac{1}{D^2-3D+2} = \frac{1}{D-2} - \frac{1}{D-1}.$$

Therefore

$$\begin{aligned}
y &= (D^2-3D+2)^{-1}(xe^x)\\
&= (D-2)^{-1}(xe^x) - (D-1)^{-1}(xe^x)\\
&= e^{2x}D^{-1}(e^{-2x}\cdot xe^x) - e^xD^{-1}(e^{-x}\cdot xe^x)\\
&= e^{2x}D^{-1}(e^{-x}x) - e^xD^{-1}(x)\\
&= e^{2x}\left(-xe^{-x} - e^{-x} + c_1\right) - e^x\left(\tfrac12x^2 + c_2\right)\\
&= -e^x\left(\tfrac12x^2 + x + 1\right) + c_1e^{2x} + c_2e^x.
\end{aligned}$$
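The particular solution produced by the operator computation can be confirmed by substitution (a sympy sketch; the arbitrary constants are dropped):

```python
import sympy as sp

x = sp.symbols('x')

# Operator-method answer for (D^2 - 3D + 2) y = x e^x
y_p = -sp.exp(x)*(x**2/2 + x + 1)
residual = sp.diff(y_p, x, 2) - 3*sp.diff(y_p, x) + 2*y_p - x*sp.exp(x)
assert sp.simplify(residual) == 0
```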

Example. Solve $(D^3 - 3D^2 + 3D - 1)y = e^x$.
Solution. The DE is equivalent to $(D-1)^3y = e^x$. Therefore,

$$y = (D-1)^{-3}e^x = e^x\left[\iiint e^{-x}e^x\,dx\,dx\,dx + c_0 + c_1x + c_2x^2\right] = e^x\left[\tfrac16x^3 + c_0 + c_1x + c_2x^2\right].$$

If $f(x)$ is a polynomial in $x$, then $(1-D)(1+D+D^2+D^3+\cdots)f = f$. Thus $(1-D)^{-1}f = (1+D+D^2+D^3+\cdots)f$. Therefore, if $f$ is a polynomial, we may formally expand $(D-r)^{-1}$ into a power series in $D$ and apply it to $f$. If the degree of $f$ is $n$, then it is only necessary to expand $(D-r)^{-1}$ up to $D^n$.

Example. Solve $(D^4 - 2D^3 + D^2)y = x^3$.
Solution. We have

$$\begin{aligned}
y &= (D^4-2D^3+D^2)^{-1}x^3 = \frac{1}{D^2(1-D)^2}\,x^3\\
&= D^{-2}(1 + 2D + 3D^2 + 4D^3 + 5D^4 + 6D^5 + \cdots)x^3\\
&= D^{-2}(x^3 + 6x^2 + 18x + 24)\\
&= D^{-1}\left(\frac{x^4}{4} + 2x^3 + 9x^2 + 24x\right)\\
&= \frac{x^5}{20} + \frac{x^4}{2} + 3x^3 + 12x^2.
\end{aligned}$$

Therefore, the general solution is

$$y = (A + Bx)e^x + (C + Dx) + \frac{x^5}{20} + \frac{x^4}{2} + 3x^3 + 12x^2.$$


2.4 Exact 2nd Order Equations

The general 2nd order linear differential equation is of the form

$$p_0(x)y'' + p_1(x)y' + p_2(x)y = f(x). \qquad (A1)$$

The equation can be written as

$$(p_0y' - p_0'y)' + (p_1y)' + (p_0'' - p_1' + p_2)y = f(x). \qquad (A2)$$

It is said to be exact if

$$p_0'' - p_1' + p_2 \equiv 0. \qquad (A3)$$

In the event that the equation is exact, a first integral of (A1) is

$$p_0(x)y' - p_0'(x)y + p_1(x)y = \int f(x)\,dx + C_1.$$

Example. Find the general solution of the DE

$$\frac1x\,y'' + \left(\frac1x - \frac2{x^2}\right)y' - \left(\frac1{x^2} - \frac2{x^3}\right)y = e^x.$$

Solution. Condition (A3) is fulfilled. The first integral is

$$\frac1x\,y' + \frac1{x^2}\,y + \left(\frac1x - \frac2{x^2}\right)y = e^x + C_1.$$

That is,

$$y' + \left(1 - \frac1x\right)y = xe^x + C_1x.$$

From the last equation, the general solution is found to be

$$y = \frac12\,xe^x + C_1x + C_2xe^{-x}.$$
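The general solution can be checked against the original equation for arbitrary $C_1, C_2$ (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')

# General solution found above for the exact-equation example
y = x*sp.exp(x)/2 + C1*x + C2*x*sp.exp(-x)
lhs = (sp.diff(y, x, 2)/x
       + (1/x - 2/x**2)*sp.diff(y, x)
       - (1/x**2 - 2/x**3)*y)
assert sp.simplify(lhs - sp.exp(x)) == 0
```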

2.5 The Adjoint Differential Equation and Integrating Factor

If (A1) is multiplied by a function $v(x)$ so that the resulting equation is exact, then $v(x)$ is called an integrating factor of (A1). That is,

$$(p_0v)'' - (p_1v)' + p_2v = 0. \qquad (A4)$$

This is a differential equation for $v$, which is, more explicitly,

$$p_0(x)v'' + (2p_0'(x) - p_1(x))v' + (p_0''(x) - p_1'(x) + p_2(x))v = 0. \qquad (A5)$$

Equation (A5) is called the adjoint of the given differential equation (A1). A function $v(x)$ is thus an integrating factor for a given differential equation if and only if it is a solution of the adjoint


equation. Note that the adjoint of (A5) is in turn found to be the associated homogeneous equation of (A1); thus each is the adjoint of the other. In this case, a first integral of (A1) is

$$v(x)p_0(x)y' - (v(x)p_0(x))'y + v(x)p_1(x)y = \int v(x)f(x)\,dx + C_1.$$

Example. Find the general solution of the DE

$$(x^2-x)y'' + (2x^2+4x-3)y' + 8xy = 1.$$

Solution. The adjoint of this equation is

$$(x^2-x)v'' - (2x^2-1)v' + (4x-2)v = 0.$$

By trying $x^m$, this equation is found to have $x^2$ as a solution. Thus $x^2$ is an integrating factor of the given differential equation. Multiplying the original equation by $x^2$, we obtain

$$(x^4-x^3)y'' + (2x^4+4x^3-3x^2)y' + 8x^3y = x^2.$$

Thus a first integral of it is

$$(x^4-x^3)y' - (4x^3-3x^2)y + (2x^4+4x^3-3x^2)y = \int x^2\,dx + C.$$

After simplification, we have

$$y' + \frac{2x}{x-1}\,y = \frac{1}{3(x-1)} + \frac{C}{x^3(x-1)}.$$

An integrating factor for this first order linear equation is $e^{2x}(x-1)^2$. Thus the above equation becomes

$$e^{2x}(x-1)^2y = \frac13\int(x-1)e^{2x}\,dx + C\int\frac{e^{2x}(x-1)}{x^3}\,dx + C_2.$$

That is,

$$e^{2x}(x-1)^2y = \frac13\left(\frac{x}{2} - \frac34\right)e^{2x} + C\,\frac{e^{2x}}{2x^2} + C_2.$$

Thus the general solution is

$$y = \frac{1}{(x-1)^2}\left(\frac{x}{6} - \frac14 + \frac{C_1}{x^2}\right) + \frac{C_2\,e^{-2x}}{(x-1)^2},$$

where $C_1 = C/2$.

Exercise. Solve the following differential equation by finding an integrating factor of it:

$$y'' + \frac{4x}{2x-1}\,y' + \frac{8x-8}{(2x-1)^2}\,y = 0.$$

[Answer: $y = \dfrac{C_1}{2x-1} + \dfrac{C_2x}{2x-1}\,e^{-2x}$.]

Solution. The adjoint equation is

$$v'' - \frac{4x}{2x-1}\,v' + \frac{4}{2x-1}\,v = 0,$$


or equivalently

$$(2x-1)v'' - 4xv' + 4v = 0.$$

An obvious solution is $v = x$. Therefore $v = x$ is an integrating factor of the original differential equation. Thus

$$xy'' + \frac{4x^2}{2x-1}\,y' + \frac{(8x-8)x}{(2x-1)^2}\,y = 0$$

is exact. The first integral is

$$xy' - y + \frac{4x^2}{2x-1}\,y = C_1,$$

or equivalently,

$$y' + \frac{4x^2 - 2x + 1}{x(2x-1)}\,y = \frac{C_1}{x}.$$

That is,

$$y' + \left(2 - \frac1x + \frac{2}{2x-1}\right)y = \frac{C_1}{x}.$$

Thus $e^{\int\left(2 - \frac1x + \frac{2}{2x-1}\right)dx} = e^{2x}\,\dfrac{2x-1}{x}$ is an integrating factor of this first order equation. Multiplying by this factor, we have

$$\frac{e^{2x}(2x-1)}{x}\,y = C_1\int\frac{e^{2x}(2x-1)}{x^2}\,dx + C_2.$$

That is,

$$\frac{e^{2x}(2x-1)}{x}\,y = \frac{C_1e^{2x}}{x} + C_2,$$

or equivalently,

$$y = \frac{C_1}{2x-1} + \frac{C_2x}{2x-1}\,e^{-2x}.$$
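The stated answer can be verified by substitution for arbitrary $C_1, C_2$ (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')

# Answer to the exercise: y = C1/(2x-1) + C2 x e^(-2x)/(2x-1)
y = C1/(2*x - 1) + C2*x*sp.exp(-2*x)/(2*x - 1)
lhs = (sp.diff(y, x, 2)
       + 4*x/(2*x - 1)*sp.diff(y, x)
       + (8*x - 8)/(2*x - 1)**2 * y)
assert sp.simplify(lhs) == 0
```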


Chapter 3

Linear Differential Systems

3.1 Linear Systems

The following system is called a linear differential system of first order in normal form:

$$\begin{aligned}
x_1' &= a_{11}(t)x_1 + \dots + a_{1n}(t)x_n + g_1(t),\\
&\ \ \vdots\\
x_n' &= a_{n1}(t)x_1 + \dots + a_{nn}(t)x_n + g_n(t),
\end{aligned}$$

where the $a_{ij}(t)$ and $g_j(t)$ are continuous functions of $t$ and $'$ denotes differentiation with respect to $t$. Denote

$$\mathbf{x}(t) = \begin{pmatrix}x_1(t)\\ \vdots\\ x_n(t)\end{pmatrix},\quad \mathbf{g}(t) = \begin{pmatrix}g_1(t)\\ \vdots\\ g_n(t)\end{pmatrix},\quad \mathbf{A}(t) = \begin{pmatrix}a_{11}(t) & \cdots & a_{1n}(t)\\ \vdots & & \vdots\\ a_{n1}(t) & \cdots & a_{nn}(t)\end{pmatrix}.$$

We call $\mathbf{g}(t)$ a continuous vector field, or a continuous vector-valued function, if all its components are continuous functions. We call $\mathbf{A}(t)$ a continuous matrix, or a continuous matrix-valued function, if all its entries are continuous functions. Define

$$\mathbf{x}' = \begin{pmatrix}x_1'(t)\\ \vdots\\ x_n'(t)\end{pmatrix},\qquad \int\mathbf{x}(t)\,dt = \begin{pmatrix}\int x_1(t)\,dt\\ \vdots\\ \int x_n(t)\,dt\end{pmatrix}.$$

Then the linear system can be written in the matrix form:

$$\mathbf{x}' = \mathbf{A}(t)\mathbf{x} + \mathbf{g}(t). \qquad (3.1.1)$$

When $\mathbf{g}\equiv\mathbf{0}$, (3.1.1) reduces to

$$\mathbf{x}' = \mathbf{A}(t)\mathbf{x}. \qquad (3.1.2)$$

(3.1.2) is called a homogeneous differential system, and (3.1.1) is called a non-homogeneous system if $\mathbf{g}(t)\not\equiv\mathbf{0}$. We shall also call (3.1.2) the homogeneous system associated with (3.1.1), or the associated homogeneous system.


Example. The system

$$x_1' = 2x_1 + 3x_2 + 3t,\qquad x_2' = x_1 + x_2 - 7\sin t$$

is equivalent to

$$\begin{pmatrix}x_1\\ x_2\end{pmatrix}' = \begin{pmatrix}2 & 3\\ 1 & 1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix} + \begin{pmatrix}3t\\ -7\sin t\end{pmatrix}.$$

Example. Given a second order system

$$\frac{d^2x}{dt^2} = x + 2y + 3t,\qquad \frac{d^2y}{dt^2} = 4x + 5y + 6t,$$

it can be expressed as an equivalent first order differential system by introducing more variables. For this example, let $u = x'$ and $v = y'$. Then we have

$$x' = u,\quad u' = x + 2y + 3t,\quad y' = v,\quad v' = 4x + 5y + 6t.$$

Next, we begin with the initial value problem

$$\mathbf{x}' = \mathbf{A}(t)\mathbf{x} + \mathbf{g}(t),\qquad \mathbf{x}(t_0) = \mathbf{x}_0, \qquad (3.1.3)$$

where $\mathbf{x}_0$ is a constant vector. Similar to Theorem 2.1, we can show the following theorem.

Theorem 3.1. Assume that $\mathbf{A}(t)$ and $\mathbf{g}(t)$ are continuous on an open interval $a < t < b$ containing $t_0$. Then, for any constant vector $\mathbf{x}_0$, (3.1.3) has a solution $\mathbf{x}(t)$ defined on this interval. This solution is unique.

In particular, if $\mathbf{A}(t)$ and $\mathbf{g}(t)$ are continuous on $\mathbb{R}$, then for any $t_0\in\mathbb{R}$ and $\mathbf{x}_0\in\mathbb{R}^n$, (3.1.3) has a unique solution $\mathbf{x}(t)$ defined on $\mathbb{R}$.

3.2 Homogeneous Linear Systems

In this section we assume $\mathbf{A} = (a_{ij}(t))$ is a continuous $n\times n$ matrix-valued function defined on the interval $(a,b)$. We shall discuss the structure of the set of all solutions of (3.1.2).

Lemma 3.2. Let $\mathbf{x}(t)$ and $\mathbf{y}(t)$ be two solutions of (3.1.2) on $(a,b)$. Then for any numbers $c_1, c_2$, $\mathbf{z}(t) = c_1\mathbf{x}(t) + c_2\mathbf{y}(t)$ is also a solution of (3.1.2) on $(a,b)$.


Definition. $\mathbf{x}_1(t), \dots, \mathbf{x}_r(t)$ are linearly dependent on $(a,b)$ if there exist numbers $c_1, \dots, c_r$, not all zero, such that

$$c_1\mathbf{x}_1(t) + \dots + c_r\mathbf{x}_r(t) = \mathbf{0} \quad\text{for all } t\in(a,b).$$

$\mathbf{x}_1(t), \dots, \mathbf{x}_r(t)$ are linearly independent on $(a,b)$ if they are not linearly dependent.

Lemma 3.3. A set of solutions $\mathbf{x}_1(t), \dots, \mathbf{x}_r(t)$ of (3.1.2) is linearly dependent on $(a,b)$ if and only if $\mathbf{x}_1(t_0), \dots, \mathbf{x}_r(t_0)$ are linearly dependent vectors for any fixed $t_0\in(a,b)$.

Proof. ($\Rightarrow$) is obviously true. We show ($\Leftarrow$). Suppose that, for some $t_0\in(a,b)$, $\mathbf{x}_1(t_0), \dots, \mathbf{x}_r(t_0)$ are linearly dependent. Then there exist constants $c_1, \dots, c_r$, not all zero, such that

$$c_1\mathbf{x}_1(t_0) + \dots + c_r\mathbf{x}_r(t_0) = \mathbf{0}.$$

Let $\mathbf{y}(t) = c_1\mathbf{x}_1(t) + \dots + c_r\mathbf{x}_r(t)$. Then $\mathbf{y}(t)$ is the solution of the initial value problem

$$\mathbf{x}' = \mathbf{A}(t)\mathbf{x},\qquad \mathbf{x}(t_0) = \mathbf{0}.$$

Since $\mathbf{x}(t)\equiv\mathbf{0}$ is also a solution of this initial value problem, by uniqueness we have $\mathbf{y}(t)\equiv\mathbf{0}$ on $(a,b)$, i.e.

$$c_1\mathbf{x}_1(t) + \dots + c_r\mathbf{x}_r(t) \equiv \mathbf{0}$$

on $(a,b)$. Since the $c_j$'s are not all zero, $\mathbf{x}_1(t), \dots, \mathbf{x}_r(t)$ are linearly dependent on $(a,b)$. $\square$

Theorem 3.4. (i) (3.1.2) has $n$ linearly independent solutions.
(ii) Let $\mathbf{x}_1, \dots, \mathbf{x}_n$ be any set of $n$ linearly independent solutions of (3.1.2) on $(a,b)$. Then the general solution of (3.1.2) is given by

$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + \dots + c_n\mathbf{x}_n(t), \qquad (3.2.1)$$

where the $c_j$'s are arbitrary constants.

Proof. (i) Let $\mathbf{e}_1, \dots, \mathbf{e}_n$ be a set of linearly independent vectors in $\mathbb{R}^n$. Fix $t_0\in(a,b)$. For each $j$ from 1 to $n$, consider the initial value problem

$$\mathbf{x}' = \mathbf{A}(t)\mathbf{x},\qquad \mathbf{x}(t_0) = \mathbf{e}_j.$$

From Theorem 3.1, there exists a unique solution $\mathbf{x}_j(t)$ defined on $(a,b)$. From Lemma 3.3, $\mathbf{x}_1(t), \dots, \mathbf{x}_n(t)$ are linearly independent on $(a,b)$.

(ii) Now let $\mathbf{x}_1(t), \dots, \mathbf{x}_n(t)$ be any set of $n$ linearly independent solutions of (3.1.2) on $(a,b)$. Fix $t_0\in(a,b)$. From Lemma 3.3, $\mathbf{x}_1(t_0), \dots, \mathbf{x}_n(t_0)$ are linearly independent vectors. Let $\mathbf{x}(t)$ be any solution of (3.1.2). Then $\mathbf{x}(t_0)$ can be represented as a linear combination of $\mathbf{x}_1(t_0), \dots, \mathbf{x}_n(t_0)$; namely, there exist $n$ constants $c_1, \dots, c_n$ such that

$$\mathbf{x}(t_0) = c_1\mathbf{x}_1(t_0) + \dots + c_n\mathbf{x}_n(t_0).$$

As in the proof of Lemma 3.3, we can show that

$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + \dots + c_n\mathbf{x}_n(t).$$


Thus $c_1\mathbf{x}_1(t) + \dots + c_n\mathbf{x}_n(t)$ is the general solution of (3.1.2). $\square$

Recall that $n$ vectors

$$\mathbf{a}_1 = \begin{pmatrix}a_{11}\\ \vdots\\ a_{n1}\end{pmatrix},\ \dots,\ \mathbf{a}_n = \begin{pmatrix}a_{1n}\\ \vdots\\ a_{nn}\end{pmatrix}$$

are linearly dependent if and only if the determinant

$$\begin{vmatrix}a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix} = 0.$$

In order to check whether $n$ solutions are linearly independent, we need the following notation.

Definition. The Wronskian of $n$ vector-valued functions

$$\mathbf{x}_1(t) = \begin{pmatrix}x_{11}(t)\\ \vdots\\ x_{n1}(t)\end{pmatrix},\ \dots,\ \mathbf{x}_n(t) = \begin{pmatrix}x_{1n}(t)\\ \vdots\\ x_{nn}(t)\end{pmatrix}$$

is the determinant

$$W(t) \equiv W(\mathbf{x}_1, \dots, \mathbf{x}_n)(t) = \begin{vmatrix}x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t)\\ x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t)\\ \vdots & \vdots & & \vdots\\ x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t)\end{vmatrix}.$$

Using Lemma 3.3, we can show:

Theorem 3.5. (i) The Wronskian of $n$ solutions of (3.1.2) is either identically zero or nowhere zero on $(a,b)$.
(ii) $n$ solutions of (3.1.2) are linearly dependent on $(a,b)$ if and only if their Wronskian is identically zero on $(a,b)$.

Definition. A set of n linearly independent solutions of (3.1.2) is called a fundamental set of solutions, or a basis of solutions. Let

x1(t) = (x11(t), …, xn1(t))^T, …, xn(t) = (x1n(t), …, xnn(t))^T

be a fundamental set of solutions of (3.1.2) on (a, b). The matrix-valued function

Φ(t) =
| x11(t) x12(t) ··· x1n(t) |
| x21(t) x22(t) ··· x2n(t) |
| ⋮                     ⋮ |
| xn1(t) xn2(t) ··· xnn(t) |


is called a fundamental matrix of (3.1.2) on (a, b).
Remark. (i) From Theorem 3.5, a fundamental matrix is non-singular for all t ∈ (a, b).
(ii) A fundamental matrix Φ(t) satisfies the matrix equation

Φ′ = A(t)Φ. (3.2.2)

(iii) Let Φ(t) and Ψ(t) be two fundamental matrices defined on (a, b). Then there exists a constant, non-singular matrix C such that

Ψ(t) = Φ(t)C.

Theorem 3.6 Let Φ(t) be a fundamental matrix of (3.1.2) on (a, b). Then the general solution of (3.1.2) is given by

x(t) = Φ(t)c, (3.2.3)

where c = (c1, …, cn)^T is an arbitrary constant vector.

3.3 Non-Homogeneous Linear Systems

In this section we consider the solutions of the non-homogeneous system (3.1.1), where A(t) = (aij(t)) is a continuous n × n matrix-valued function and g(t) is a continuous vector-valued function, both defined on the interval (a, b).

Theorem 3.7 Let xp(t) be a particular solution of (3.1.1), and Φ(t) be a fundamental matrix of the associated homogeneous system (3.1.2). Then the general solution of (3.1.1) is given by

x(t) = Φ(t)c + xp(t), (3.3.1)

where c is an arbitrary constant vector.

Proof. For any constant vector c, x(t) = Φ(t)c + xp(t) is a solution of (3.1.1). On the other hand, let x(t) be a solution of (3.1.1) and set y(t) = x(t) − xp(t). Then y′ = A(t)y. From (3.2.3), there exists a constant vector c̃ such that y(t) = Φ(t)c̃. So x(t) = Φ(t)c̃ + xp(t). Thus (3.3.1) gives the general solution of (3.1.1).

Method of variation of parameters.
Let Φ be a fundamental matrix of (3.1.2). We look for a particular solution of (3.1.1) in the form

x(t) = Φ(t)u(t), u(t) = (u1(t), …, un(t))^T.

Plugging into (3.1.1) we get Φ′u + Φu′ = AΦu + g.

From (3.2.2), Φ′ = AΦ. So Φu′ = g, and thus u′ = Φ⁻¹g. Hence

u(t) = ∫_{t0}^{t} Φ⁻¹(s)g(s) ds + c. (3.3.2)

Choosing c = 0, we get a particular solution:

xp(t) = Φ(t) ∫_{t0}^{t} Φ⁻¹(s)g(s) ds.

So we obtain the following:

Theorem 3.8 The general solution of (3.1.1) is given by

x(t) = Φ(t)c + Φ(t) ∫_{t0}^{t} Φ⁻¹(s)g(s) ds, (3.3.3)

where Φ(t) is a fundamental matrix of the associated homogeneous system (3.1.2).

Example. Solve

x1′ = 3x1 − x2 + t,
x2′ = −2x1 + t.

3.4 Homogeneous Linear Systems with Constant Coefficients

Consider a homogeneous linear system

x′ = Ax, (3.4.1)

where A = (aij) is a constant n × n matrix.
Let us try to find a solution of (3.4.1) in the form x(t) = e^{λt}k, where k is a constant vector, k ≠ 0. Plugging it into (3.4.1) we find

Ak = λk. (3.4.2)

Definition. Assume that a number λ and a vector k ≠ 0 satisfy (3.4.2); then we call λ an eigenvalue of A, and k an eigenvector associated with λ.

Lemma 3.9 λ is an eigenvalue of A if and only if

det(A − λI) = 0 (3.4.3)

(where I is the n × n unit matrix), namely,

| a11 − λ   a12    ···  a1n    |
| a21    a22 − λ   ···  a2n    |
| ⋮                          ⋮ |
| an1       an2    ···  ann − λ |

= 0.


Remark. Let A be an n × n matrix and λ1, λ2, …, λk be the distinct roots of (3.4.3). Then there exist positive integers m1, m2, …, mk such that

det(A − λI) = (−1)ⁿ(λ − λ1)^{m1}(λ − λ2)^{m2} ··· (λ − λk)^{mk},

and

m1 + m2 + ··· + mk = n.

mj is called the algebraic multiplicity (or simply multiplicity) of the eigenvalue λj. The number of linearly independent eigenvectors of A associated with λj is called the geometric multiplicity of the eigenvalue λj and is denoted by γ(λj). We always have

γ(λj) ≤ mj.

If γ(λj) = mj then we say that the eigenvalue λj is quasi-simple. In particular, if mj = 1 we say that λj is a simple eigenvalue. Note that in this case λj is a simple root of (3.4.3).

Theorem 3.10 If λ is an eigenvalue of A and k is an associated eigenvector, then

x(t) = e^{λt}k

is a solution of (3.4.1).

Let A be a real matrix. If λ is a complex eigenvalue of A, and k is an eigenvector associated with λ, then

x1 = Re(e^{λt}k), x2 = Im(e^{λt}k)

are two linearly independent real solutions of (3.4.1).

In the following we always assume that A is a real matrix.

Theorem 3.11 If A has n linearly independent eigenvectors k1, …, kn associated with eigenvalues λ1, …, λn respectively, then

Φ(t) = (e^{λ1 t}k1, …, e^{λn t}kn)

is a fundamental matrix of (3.4.1), and the general solution is given by

x(t) = Φ(t)c = c1e^{λ1 t}k1 + ··· + cne^{λn t}kn, (3.4.4)

where c = (c1, …, cn)^T is an arbitrary constant vector.

Proof. We only need to show det Φ(t) ≠ 0. Since k1, …, kn are linearly independent, det Φ(0) ≠ 0. From Theorem 3.5 we see that det Φ(t) ≠ 0 for any t. Hence Φ(t) is a fundamental matrix.


Remark. Under the conditions of Theorem 3.11, the eigenvalues λ1, …, λn of A need not be distinct. In fact we only assume that all the eigenvalues of A are quasi-simple.
If A has n distinct eigenvalues λ1, …, λn, and k1, …, kn are the associated eigenvectors, then the eigenvectors are linearly independent. Hence the general solution is given by (3.4.4).

Example. x′ = [3 −1; −1 3] x.

A = [3 −1; −1 3] has eigenvalues λ1 = 2 and λ2 = 4.

For λ1 = 2 we find an eigenvector k1 = (1, 1)^T.

For λ2 = 4 we find an eigenvector k2 = (1, −1)^T.

The general solution is given by

x(t) = c1e^{2t}(1, 1)^T + c2e^{4t}(1, −1)^T.

Example. Solve the system

x′ = [3 −1; −1 3] x + e^{2t}(6, −2)^T.

We first solve the associated homogeneous system

x′ = [3 −1; −1 3] x

and find two linearly independent solutions x1(t) = (e^{2t}, e^{2t})^T, x2(t) = (e^{4t}, −e^{4t})^T. The fundamental matrix is

Φ(t) = (x1(t), x2(t)) = [e^{2t} e^{4t}; e^{2t} −e^{4t}],

Φ⁻¹(t) = 1/(−2e^{6t}) [−e^{4t} −e^{4t}; −e^{2t} e^{2t}] = (1/2) [e^{−2t} e^{−2t}; e^{−4t} −e^{−4t}].

Let g(t) = e^{2t}(6, −2)^T. Then

Φ⁻¹(t)g(t) = (2, 4e^{−2t})^T,

u(t) = ∫₀ᵗ Φ⁻¹(s)g(s) ds = (2t, −2e^{−2t} + 2)^T,

Φ(t)u(t) = 2e^{2t}(t − 1, t + 1)^T + 2e^{4t}(1, −1)^T.

The general solution is

x = c1e^{2t}(1, 1)^T + c2e^{4t}(1, −1)^T + 2te^{2t}(1, 1)^T + 2e^{2t}(−1, 1)^T

(the term 2e^{4t}(1, −1)^T from Φ(t)u(t) has been absorbed into the constant c2).

Example. Solve x′ = [0 1; −4 0] x.

A = [0 1; −4 0] has eigenvalues ±2i.

For λ = 2i we find an eigenvector k = (1, 2i)^T.

e^{2it}(1, 2i)^T = (cos 2t + i sin 2t)(1, 2i)^T = (cos 2t, −2 sin 2t)^T + i(sin 2t, 2 cos 2t)^T.

The general solution is given by

x(t) = c1(cos 2t, −2 sin 2t)^T + c2(sin 2t, 2 cos 2t)^T.
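The real and imaginary parts extracted above can be checked directly (NumPy assumed):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, 0.0]])

def x_real(t):
    # Real part of e^{2it}(1, 2i)^T: (cos 2t, -2 sin 2t)^T.
    return np.array([np.cos(2 * t), -2 * np.sin(2 * t)])

def x_imag(t):
    # Imaginary part: (sin 2t, 2 cos 2t)^T.
    return np.array([np.sin(2 * t), 2 * np.cos(2 * t)])

# Both real solutions satisfy x' = Ax (central-difference check).
h = 1e-6
for t in [0.0, 0.3, 1.1]:
    for x in (x_real, x_imag):
        dx = (x(t + h) - x(t - h)) / (2 * h)
        assert np.allclose(dx, A @ x(t), atol=1e-6)
```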

Example. Solve

x′ = −3x + 4y − 2z,
y′ = x + z,
z′ = 6x − 6y + 5z.

A = [−3 4 −2; 1 0 1; 6 −6 5] has eigenvalues λ1 = 2, λ2 = 1, λ3 = −1.

For λ1 = 2 we find an eigenvector k1 = (0, 1, 2)^T.

For λ2 = 1 we find an eigenvector k2 = (1, 1, 0)^T.

For λ3 = −1 we find an eigenvector k3 = (1, 0, −1)^T.

The general solution is given by

(x, y, z)^T = c1e^{2t}(0, 1, 2)^T + c2e^{t}(1, 1, 0)^T + c3e^{−t}(1, 0, −1)^T,

namely

x(t) = c2e^{t} + c3e^{−t},
y(t) = c1e^{2t} + c2e^{t},
z(t) = 2c1e^{2t} − c3e^{−t}.

Example. Solve x′ = Ax, where

A = [2 1 0; 1 3 −1; −1 2 3].

A has eigenvalues λ1 = 2, λ2,3 = 3 ± i.

For λ1 = 2 we find an eigenvector k1 = (1, 0, 1)^T.

For λ2 = 3 + i we find an eigenvector k2 = (1, 1 + i, 2 − i)^T.

We have

e^{(3+i)t}k2 = e^{3t}(cos t + i sin t)(1, 1 + i, 2 − i)^T
= e^{3t}(cos t, cos t − sin t, 2 cos t + sin t)^T + i e^{3t}(sin t, cos t + sin t, 2 sin t − cos t)^T,

so

Re(e^{(3+i)t}k2) = e^{3t}(cos t, cos t − sin t, 2 cos t + sin t)^T,
Im(e^{(3+i)t}k2) = e^{3t}(sin t, cos t + sin t, 2 sin t − cos t)^T.

The general solution is

x(t) = c1e^{2t}(1, 0, 1)^T + c2e^{3t}(cos t, cos t − sin t, 2 cos t + sin t)^T + c3e^{3t}(sin t, cos t + sin t, 2 sin t − cos t)^T.

Example. Solve x′ = Ax, where

A = [1 −2 2; −2 1 2; 2 2 1].

We have

det(A − λI) = −(λ − 3)²(λ + 3).

A has eigenvalues λ1 = λ2 = 3, λ3 = −3 (we may say that λ = 3 is an eigenvalue of algebraic multiplicity 2, and λ = −3 is a simple eigenvalue).
For λ = 3 we solve the equation Ak = 3k, namely

[−2 −2 2; −2 −2 2; 2 2 −2] (k1, k2, k3)^T = 0.

The solution is k = (v − u, u, v)^T. So we find two eigenvectors k1 = (1, 0, 1)^T and k2 = (−1, 1, 0)^T.

For λ3 = −3 we find an eigenvector k3 = (1, 1, −1)^T.

The general solution is given by

x(t) = c1e^{3t}(1, 0, 1)^T + c2e^{3t}(−1, 1, 0)^T + c3e^{−3t}(1, 1, −1)^T.

Now we consider the solutions of (3.4.1) associated with a multiple eigenvalue λ whose geometric multiplicity γ(λ) is less than its algebraic multiplicity.

Lemma 3.12 Assume λ is an eigenvalue of A with algebraic multiplicity m > 1. Then the system

(A − λI)^m v = 0 (3.4.5)

has exactly m linearly independent solutions.

By direct computation we can prove the following theorem.

Theorem 3.13 Assume that λ is an eigenvalue of A with algebraic multiplicity m > 1. Let v0 ≠ 0 be a solution of (3.4.5). Define

v_l = (A − λI)v_{l−1}, l = 1, 2, …, m − 1, (3.4.6)

and let

x(t) = e^{λt} [ v0 + t v1 + (t²/2!) v2 + ··· + (t^{m−1}/(m−1)!) v_{m−1} ]. (3.4.7)

Then x(t) is a solution of (3.4.1). Let v0^{(1)}, …, v0^{(m)} be m linearly independent solutions of (3.4.5). They generate m linearly independent solutions of (3.4.1) via (3.4.6) and (3.4.7).
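Theorem 3.13 can be illustrated on a small defective matrix. The sketch below (NumPy assumed; the 2 × 2 matrix is an illustrative choice, not taken from the notes) builds a solution via (3.4.5)–(3.4.7) and verifies that it solves x′ = Ax:

```python
import numpy as np

# A = [[3, 1], [0, 3]]: eigenvalue 3 with algebraic multiplicity 2,
# but geometric multiplicity 1, so generalized eigenvectors are needed.
A = np.array([[3.0, 1.0], [0.0, 3.0]])
lam, m = 3.0, 2
B = A - lam * np.eye(2)

assert np.allclose(np.linalg.matrix_power(B, m), 0)  # (3.4.5) holds for every v

v0 = np.array([0.0, 1.0])      # a solution of (A - lam I)^m v = 0
v1 = B @ v0                    # (3.4.6)

def x(t):
    # (3.4.7): x(t) = e^{lam t} (v0 + t v1)
    return np.exp(lam * t) * (v0 + t * v1)

# Central-difference check that x' = Ax.
h = 1e-6
for t in [0.0, 0.5]:
    dx = (x(t + h) - x(t - h)) / (2 * h)
    assert np.allclose(dx, A @ x(t), atol=1e-4)
```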


Remark. In (3.4.6), we always have

(A − λI)v_{m−1} = 0.

If v_{m−1} ≠ 0 then v_{m−1} is an eigenvector of A associated with the eigenvalue λ.

In practice, to find the solutions of (3.4.1) associated with an eigenvalue λ of multiplicity m, we first solve (3.4.5) and find m linearly independent solutions

v0^{(1)}, v0^{(2)}, …, v0^{(m)}.

For each of these vectors, say v0^{(k)}, we compute the iteration sequence

v_l^{(k)} = (A − λI)v_{l−1}^{(k)}, l = 1, 2, …

There is an integer 0 ≤ j ≤ m − 1 (j depends on the choice of v0^{(k)}) such that

v_j^{(k)} ≠ 0, (A − λI)v_j^{(k)} = 0.

Thus v_j^{(k)} is an eigenvector of A associated with the eigenvalue λ. The iteration then stops and yields a solution

x^{(k)}(t) = e^{λt} [ v0^{(k)} + t v1^{(k)} + (t²/2!) v2^{(k)} + ··· + (t^j/j!) v_j^{(k)} ]. (3.4.8)

Example. Solve x′ = Ax, where

A = [−1 1 0; 0 −1 4; 1 0 −4].

From det(A − λI) = −λ(λ + 3)² = 0 we find the eigenvalues λ1 = −3 with multiplicity 2, and λ2 = 0 simple.
For the double eigenvalue λ1 = −3 we solve

(A + 3I)²v = [4 4 4; 4 4 4; 1 1 1] v = 0,

and find two linearly independent solutions v0^{(1)} = (1, 0, −1)^T, v0^{(2)} = (0, 1, −1)^T. Plugging v0^{(1)}, v0^{(2)} into (3.4.6), (3.4.7) we get

v1^{(1)} = (A + 3I)v0^{(1)} = (2, −4, 2)^T,

x^{(1)} = e^{−3t}(v0^{(1)} + t v1^{(1)}) = e^{−3t} [ (1, 0, −1)^T + t(2, −4, 2)^T ],

v1^{(2)} = (A + 3I)v0^{(2)} = (1, −2, 1)^T,

x^{(2)} = e^{−3t}(v0^{(2)} + t v1^{(2)}) = e^{−3t} [ (0, 1, −1)^T + t(1, −2, 1)^T ].

For the simple eigenvalue λ2 = 0 we find an eigenvector k3 = (4, 4, 1)^T.

So the general solution is

x(t) = c1x^{(1)} + c2x^{(2)} + c3k3
= c1e^{−3t} [ (1, 0, −1)^T + t(2, −4, 2)^T ] + c2e^{−3t} [ (0, 1, −1)^T + t(1, −2, 1)^T ] + c3(4, 4, 1)^T.

Example. Solve the system

x′ = 2x + y + 2z,
y′ = −x + 4y + 2z,
z′ = 3z.

A = [2 1 2; −1 4 2; 0 0 3].

The eigenvalue is λ = 3 with multiplicity 3. Solving

(A − 3I)³v = [0 0 0; 0 0 0; 0 0 0] v = 0,

we obtain 3 obvious linearly independent solutions

v0^{(1)} = (1, 0, 0)^T, v0^{(2)} = (0, 1, 0)^T, v0^{(3)} = (0, 0, 1)^T.

Plugging v0^{(j)} into (3.4.6), (3.4.7) we get

v1^{(1)} = (A − 3I)v0^{(1)} = (−1, −1, 0)^T,
v2^{(1)} = (A − 3I)v1^{(1)} = (0, 0, 0)^T,
x^{(1)} = e^{3t}(v0^{(1)} + t v1^{(1)}) = e^{3t} [ (1, 0, 0)^T + t(−1, −1, 0)^T ];

v1^{(2)} = (A − 3I)v0^{(2)} = (1, 1, 0)^T,
v2^{(2)} = (A − 3I)v1^{(2)} = (0, 0, 0)^T,
x^{(2)} = e^{3t}(v0^{(2)} + t v1^{(2)}) = e^{3t} [ (0, 1, 0)^T + t(1, 1, 0)^T ];

v1^{(3)} = (A − 3I)v0^{(3)} = (2, 2, 0)^T,
v2^{(3)} = (A − 3I)v1^{(3)} = (0, 0, 0)^T,
x^{(3)} = e^{3t}(v0^{(3)} + t v1^{(3)}) = e^{3t} [ (0, 0, 1)^T + t(2, 2, 0)^T ].

The general solution is

x(t) = c1x^{(1)} + c2x^{(2)} + c3x^{(3)}
= c1e^{3t} [ (1, 0, 0)^T + t(−1, −1, 0)^T ] + c2e^{3t} [ (0, 1, 0)^T + t(1, 1, 0)^T ] + c3e^{3t} [ (0, 0, 1)^T + t(2, 2, 0)^T ].

Remark. It is possible to reduce the number of constant vectors in the general solution of x′ = Ax by using a basis for the Jordan canonical form of A. We will not go into the details of the Jordan canonical form. However, the following algorithm usually works well if the size of A is small.


Consider an eigenvalue λ of A with algebraic multiplicity m.
Start with r = m. Let v be a vector such that (A − λI)^r v = 0 but (A − λI)^{r−1}v ≠ 0. [v is called a generalized eigenvector of rank r associated with the eigenvalue λ. If no such v exists, reduce r by 1.] Then

u_r = v, u_{r−1} = (A − λI)v, u_{r−2} = (A − λI)²v, …, u_1 = (A − λI)^{r−1}v

form a chain of linearly independent solutions of (3.4.5), with u_1 being the base eigenvector corresponding to the eigenvalue λ. This gives r independent solutions of x′ = Ax:

x1(t) = u1e^{λt},
x2(t) = (u1t + u2)e^{λt},
x3(t) = ((1/2!)u1t² + u2t + u3)e^{λt},
…
x_r(t) = ((1/(r−1)!)u1t^{r−1} + ··· + (1/2!)u_{r−2}t² + u_{r−1}t + u_r)e^{λt}.

Repeat this procedure by finding another v which is not in the span of the previous chains of vectors. Also do this for each eigenvalue of A. Results of linear algebra show that
(1) Any chain of generalized eigenvectors constitutes a linearly independent set of vectors.
(2) If two chains of generalized eigenvectors are based on linearly independent eigenvectors, then the union of these vectors is a linearly independent set of vectors (whether the two base eigenvectors are associated with different eigenvalues or with the same eigenvalue).
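The chain construction above can be sketched in code for a single 3 × 3 Jordan block (an illustrative choice, not taken from the notes; NumPy assumed):

```python
import numpy as np

# A = [[3,1,0],[0,3,1],[0,0,3]]: eigenvalue 3, algebraic multiplicity 3,
# a single chain of length r = 3.
A = np.array([[3.0, 1.0, 0.0], [0.0, 3.0, 1.0], [0.0, 0.0, 3.0]])
lam, r = 3.0, 3
B = A - lam * np.eye(3)

v = np.array([0.0, 0.0, 1.0])  # generalized eigenvector of rank 3
assert np.allclose(np.linalg.matrix_power(B, r) @ v, 0)
assert not np.allclose(np.linalg.matrix_power(B, r - 1) @ v, 0)

# Chain u_r = v, u_{r-1} = B v, ..., u_1 = B^{r-1} v; u_1 is an eigenvector.
u = [v]
for _ in range(r - 1):
    u.insert(0, B @ u[0])
assert np.allclose(B @ u[0], 0)

def x3(t):
    # Longest solution in the chain: ((1/2!) u1 t^2 + u2 t + u3) e^{lam t}.
    return (u[0] * t**2 / 2 + u[1] * t + u[2]) * np.exp(lam * t)

# Central-difference check that x3' = A x3.
h = 1e-6
for t in [0.0, 0.4]:
    dx = (x3(t + h) - x3(t - h)) / (2 * h)
    assert np.allclose(dx, A @ x3(t), atol=1e-4)
```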

Example. Solve x′ = Ax, where A = [3 0 0 0; 0 3 0 0; 0 1 3 0; 0 0 1 3].

A has an eigenvalue λ = 3 of multiplicity 4. Direct calculation gives

(A − 3I) = [0 0 0 0; 0 0 0 0; 0 1 0 0; 0 0 1 0],
(A − 3I)² = [0 0 0 0; 0 0 0 0; 0 0 0 0; 0 1 0 0],
(A − 3I)³ = 0, and (A − 3I)⁴ = 0.

It can be seen that v1 = (1, 0, 0, 0)^T and v4 = (0, 0, 0, 1)^T are two linearly independent eigenvectors of A.

Together with v2 = (0, 1, 0, 0)^T and v3 = (0, 0, 1, 0)^T, they form a basis of {v | (A − 3I)⁴v = 0} = R⁴.

Note that (A − 3I)v2 = v3 and (A − 3I)v3 = v4. Hence {v4, v3, v2} forms a chain of generalized eigenvectors associated with the eigenvalue 3. {v1} alone is another chain. Therefore the general solution is

x(t) = e^{3t} [ c1v1 + c2(v2 + tv3 + (t²/2)v4) + c3(v3 + tv4) + c4v4 ].

That is

x(t) = ( c1e^{3t}, c2e^{3t}, (c2t + c3)e^{3t}, ((t²/2)c2 + c3t + c4)e^{3t} )^T.

Exercise. Solve x′ = Ax, where

A = [7 5 3 −2; 0 1 0 0; −12 −10 −5 4; 4 4 2 −1].

Ans: x(t) = c1e^{−t}(1, 0, −2, 1)^T + c2e^{t}(−1, 2, 0, 2)^T + c3e^{t}(1, 0, −2, 0)^T + c4e^{t}(t, 0, 1 − 2t, 1)^T.

3.5 Higher Order Linear Equations

Consider the n-th order linear equation

y^{(n)} + a1(t)y^{(n−1)} + ··· + a_{n−1}(t)y′ + a_n(t)y = f(t), (3.5.1)

where y^{(k)} = d^k y/dt^k. Throughout this section we assume that the aj(t)'s and f(t) are continuous functions defined on the interval (a, b). When f(t) ≢ 0, (3.5.1) is called a non-homogeneous equation. The associated homogeneous equation is

y^{(n)} + a1(t)y^{(n−1)} + ··· + a_{n−1}(t)y′ + a_n(t)y = 0. (3.5.2)

The general theory of solutions of (3.5.1) and (3.5.2) can be established by applying the results in the previous sections to the equivalent systems.
We begin with the initial value problem

y^{(n)} + a1(t)y^{(n−1)} + ··· + a_n(t)y = f(t),
y(t0) = y0, y′(t0) = y1, …, y^{(n−1)}(t0) = y_{n−1}. (3.5.3)


Theorem 3.14 Assume that a1(t), …, an(t) and f(t) are continuous functions defined on the interval (a, b). Then for any t0 ∈ (a, b) and any numbers y0, …, y_{n−1}, the initial value problem (3.5.3) has a unique solution defined on (a, b). In particular, if the aj(t)'s and f(t) are continuous on R, then for any t0 and y0, …, y_{n−1}, the initial value problem (3.5.3) has a unique solution defined on R.

Next we consider the structure of solutions of the homogeneous equation (3.5.2).

Definition. Functions φ1(t), …, φr(t) are linearly dependent on (a, b) if there exist constants c1, …, cr, not all zero, such that

c1φ1(t) + ··· + crφr(t) = 0

for all t ∈ (a, b). A set of functions is linearly independent on (a, b) if it is not linearly dependent on (a, b).

Lemma 3.15 Functions φ1(t), …, φr(t) are linearly dependent on (a, b) if and only if the vector-valued functions

(φ1(t), φ1′(t), …, φ1^{(n−1)}(t))^T, …, (φr(t), φr′(t), …, φr^{(n−1)}(t))^T

are linearly dependent on (a, b).

Proof. "⇐" is obvious. To show "⇒", assume that φ1, …, φr are linearly dependent on (a, b). There exist constants c1, …, cr, not all zero, such that

c1φ1(t) + ··· + crφr(t) = 0

for all t ∈ (a, b). Differentiating this equality successively we find

c1φ1′(t) + ··· + crφr′(t) = 0, …, c1φ1^{(n−1)}(t) + ··· + crφr^{(n−1)}(t) = 0.

Thus

c1(φ1(t), φ1′(t), …, φ1^{(n−1)}(t))^T + ··· + cr(φr(t), φr′(t), …, φr^{(n−1)}(t))^T = 0

for all t ∈ (a, b). Hence the r vector-valued functions are linearly dependent on (a, b).

The Wronskian of n functions φ1(t), …, φn(t) is defined by

W(φ1, …, φn)(t) =
| φ1(t)           ···  φn(t)           |
| φ1′(t)          ···  φn′(t)          |
| ⋮                                  ⋮ |
| φ1^{(n−1)}(t)   ···  φn^{(n−1)}(t)   |. (3.5.4)


From Lemma 3.15 and Theorem 3.5 we get:

Proposition 3.5.1 Let y1(t), …, yn(t) be n solutions of (3.5.2) on (a, b). They are linearly independent on (a, b) if and only if their Wronskian W(t) ≡ W(y1, …, yn)(t) does not vanish on (a, b).

Theorem 3.16 Let a1(t), …, an(t) be continuous on the interval (a, b). The homogeneous equation (3.5.2) has n linearly independent solutions on (a, b). Let y1, …, yn be n linearly independent solutions of (3.5.2) defined on (a, b). The general solution of (3.5.2) is given by

y(t) = c1y1(t) + ··· + cnyn(t), (3.5.5)

where c1, …, cn are arbitrary constants.

Any set of n linearly independent solutions is called a fundamental set of solutions.
Now we consider the non-homogeneous equation (3.5.1). We have:

Theorem 3.17 Let yp be a particular solution of (3.5.1), and y1, …, yn be a fundamental set of solutions of the associated homogeneous equation (3.5.2). The general solution of (3.5.1) is given by

y(t) = c1y1(t) + ··· + cnyn(t) + yp(t). (3.5.6)

From (3.3.3) we can derive the variation of parameters formula for higher order equations. Consider a second order equation

y″ + p(t)y′ + q(t)y = f(t). (3.5.7)

Let x1 = y, x2 = y′, x = (x1, x2)^T. Then (3.5.7) is written as

x′ = [0 1; −q −p] x + (0, f)^T. (3.5.8)

Assume y1(t) and y2(t) are two linearly independent solutions of the associated homogeneous equation

y″ + py′ + qy = 0.

We look for a solution of (3.5.7) in the form

y = u1y1 + u2y2.

Choose the fundamental matrix Φ(t) = [y1 y2; y1′ y2′]. The corresponding solution of (3.5.8) is of the form

x = (y, y′)^T = [y1 y2; y1′ y2′] (u1, u2)^T = (y1u1 + y2u2, y1′u1 + y2′u2)^T. (3.5.9)


Recall that, if ad − bc ≠ 0, then

[a b; c d]⁻¹ = 1/(ad − bc) [d −b; −c a].

Thus

Φ(t)⁻¹ = (1/W(t)) [y2′ −y2; −y1′ y1],

where W(t) is the Wronskian of y1(t), y2(t). Using (3.3.3) we can derive

u1(t) = −∫ (y2(t)f(t)/W(t)) dt, u2(t) = ∫ (y1(t)f(t)/W(t)) dt. (3.5.10)

Note that (3.5.9) implies

y1′u1 + y2′u2 = y′ = y1′u1 + y2′u2 + y1u1′ + y2u2′.

Hence

y1u1′ + y2u2′ = 0. (i)

Plugging y = y1u1 + y2u2 into (3.5.7) we find

y1′u1′ + y2′u2′ = f. (ii)

Solving (i), (ii) we find

u1′ = −y2f/W, u2′ = y1f/W.

Again we get (3.5.10).

Again we get (3.5.10).Now we consider linear equations with constant coefcients

y(n ) + a1y(n 1) + + an 1y + an y = f (t), (3.5.11)and the associated homogeneous equation

y(n ) + a1y(n 1) + + an 1y + an y = 0 , (3.5.12)

where a1 , , an are real constants. Recall that (3.5.12) is equivalent to a systemx = Ax ,

where

A =

0 1 0 00 0 0 0 0 0 0 1

an an 1 a2 a1

.

The equation for the eigenvalues of A is

det( I A ) = n + a1n 1 + + an 1 + an = 0 . (3.5.13)


The solutions of (3.5.13) are called characteristic values or eigenvalues of the equation (3.5.12).

Let λ1, …, λs be the distinct eigenvalues of (3.5.12). Then we can write

λⁿ + a1λ^{n−1} + ··· + a_{n−1}λ + a_n = (λ − λ1)^{m1}(λ − λ2)^{m2} ··· (λ − λs)^{ms},

where m1, …, ms are positive integers and

m1 + ··· + ms = n.

We call them the multiplicities of the eigenvalues λ1, …, λs respectively.

Lemma 3.18 Assume λ is an eigenvalue of (3.5.12) of multiplicity m.
(i) e^{λt} is a solution of (3.5.12).
(ii) If m > 1, then for any positive integer 1 ≤ k ≤ m − 1, t^k e^{λt} is a solution of (3.5.12).
(iii) If λ = α + iβ, then t^k e^{αt} cos(βt), t^k e^{αt} sin(βt) are solutions of (3.5.12), where 0 ≤ k ≤ m − 1.

Theorem 3.19 Let λ1, …, λs be the distinct eigenvalues of (3.5.12), with multiplicities m1, …, ms respectively. Then (3.5.12) has a fundamental set of solutions

e^{λ1 t}, te^{λ1 t}, …, t^{m1−1}e^{λ1 t}; …; e^{λs t}, te^{λs t}, …, t^{ms−1}e^{λs t}. (3.5.14)

Proof. One way to prove Theorem 3.19 is to find a fundamental matrix of the equivalent system such that the first row is given by the functions in (3.5.14). Another way is to show that each function in (3.5.14) is a solution of (3.5.12), and that they are linearly independent.

Remark. If (3.5.12) has a complex eigenvalue λ = α + iβ, then λ̄ = α − iβ is also an eigenvalue. Thus both t^k e^{(α+iβ)t} and t^k e^{(α−iβ)t} appear in (3.5.14), where 0 ≤ k ≤ m − 1. In order to obtain a fundamental set of real solutions, the pair of solutions t^k e^{(α+iβ)t} and t^k e^{(α−iβ)t} in (3.5.14) should be replaced by t^k e^{αt} cos(βt) and t^k e^{αt} sin(βt) respectively.
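Theorem 3.19 can be illustrated on y‴ − 3y″ + 3y′ − y = 0, whose characteristic polynomial is (λ − 1)³ (an illustrative equation, not from the notes; NumPy assumed):

```python
import numpy as np

# Characteristic equation r^3 - 3r^2 + 3r - 1 = (r - 1)^3: a triple root at 1.
roots = np.roots([1.0, -3.0, 3.0, -1.0])
assert np.allclose(roots, 1.0, atol=1e-3)  # loose tol: multiple roots are ill-conditioned

# The fundamental set from Theorem 3.19 is e^t, t e^t, t^2 e^t; check the
# "hardest" member numerically with finite differences.
def y(t):
    return t**2 * np.exp(t)

h = 1e-3
for t in [0.3, 1.0]:
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    y3 = (y(t + 2*h) - 2*y(t + h) + 2*y(t - h) - y(t - 2*h)) / (2 * h**3)
    assert abs(y3 - 3*y2 + 3*y1 - y(t)) < 1e-3
```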

3.6 Appendix 1: Proof of Lemma 3.12

Lemma 3.12 Let A be an n × n complex matrix and λ an eigenvalue of A with algebraic multiplicity m. Then

dim {x ∈ Cⁿ | (λI − A)^m x = 0} = m.

Proof. The proof consists of several steps. Let T = {x ∈ Cⁿ | (λI − A)^m x = 0}. The space T is called the generalized eigenspace corresponding to the eigenvalue λ.


Step 1. T is a subspace of Cⁿ. This is just direct verification.
Step 2. T is invariant under A, meaning A[T] ⊆ T. This is because if we take a vector x in T, then (λI − A)^m x = 0, so that A(λI − A)^m x = 0, which is the same as (λI − A)^m (Ax) = 0. Therefore Ax is also in T.
Step 3. By Step 2, which says that A[T] ⊆ T, we may consider A as a linear transformation on the subspace T. In other words, A : T → T. Let μ be an eigenvalue of A : T → T, that is, Av = μv for some v ≠ 0. Then v ∈ T implies that (λI − A)^m v = 0. Since Av = μv, this simplifies to (λ − μ)^m v = 0. Being an eigenvector, v ≠ 0, so that μ = λ. Therefore, all eigenvalues of A : T → T are equal to λ.
Step 4. Let dim T = r. Certainly r ≤ n. By Step 3, the characteristic polynomial of A : T → T is (λ − z)^r. Since T is an invariant subspace of A : Cⁿ → Cⁿ, one can choose a basis of T and then extend it to a basis of Cⁿ so that, with respect to this new basis of Cⁿ, the matrix A is similar to a matrix whose upper left r × r submatrix represents A on T and whose lower left (n − r) × r submatrix is the zero matrix. From this, we see that (λ − z)^r is a factor of the characteristic polynomial of A : Cⁿ → Cⁿ. Hence r ≤ m.
We also need the Cayley–Hamilton Theorem in the last step of the proof.
Cayley–Hamilton Theorem. Let p(z) be the characteristic polynomial of an n × n matrix A. Then p(A) = 0.
Step 5. Let p(z) = (λ1 − z)^{m1} ··· (λk − z)^{mk} be the characteristic polynomial of A, where λ1, λ2, …, λk are the distinct eigenvalues of A : Cⁿ → Cⁿ. For each i from 1 to k, let

T_i = {x ∈ Cⁿ | (λ_i I − A)^{m_i} x = 0}.

By Step 4, we have dim T_i ≤ m_i.
Let f_i(z) = (λ_i − z)^{m_i} and g_i(z) = f_1(z) ··· f̂_i(z) ··· f_k(z), where f̂_i(z) means the factor f_i(z) is omitted. Note that f_i(z)g_i(z) = p(z) for all i.
Resolving 1/((λ1 − z)^{m1} ··· (λk − z)^{mk}) into partial fractions, we have the identity

1/((λ1 − z)^{m1} ··· (λk − z)^{mk}) ≡ h1(z)/(λ1 − z)^{m1} + h2(z)/(λ2 − z)^{m2} + ··· + hk(z)/(λk − z)^{mk},

where h1(z), …, hk(z) are polynomials in z. Taking the common denominator of the right hand side and equating the numerators on both sides, we have 1 ≡ g1(z)h1(z) + ··· + gk(z)hk(z). Substituting the matrix A into this polynomial identity, we have

g1(A)h1(A) + ··· + gk(A)hk(A) = I,

where I is the identity n × n matrix.
Now for any x ∈ Cⁿ, we have

g1(A)h1(A)x + ··· + gk(A)hk(A)x = x.

Note that each g_i(A)h_i(A)x is in T_i, because f_i(A)[g_i(A)h_i(A)x] = p(A)h_i(A)x = 0 by the Cayley–Hamilton Theorem. This shows that any vector in Cⁿ can be expressed as a sum of vectors where the i-th summand is in T_i. In other words,

Cⁿ = T_1 + T_2 + ··· + T_k.


Consequently, m1 + ··· + mk = n = dim Cⁿ ≤ dim T_1 + ··· + dim T_k ≤ m1 + ··· + mk, so that dim T_i = m_i.

Remarks.
1. In fact Cⁿ = T_1 ⊕ ··· ⊕ T_k.
2. If A is a real matrix and λ is a real eigenvalue of A of algebraic multiplicity m, then T is a real vector space and the real dimension of T is m. This is because a set of real vectors in Rⁿ is linearly independent over R if and only if it is linearly independent over C.
3. If A is a real matrix and λ is a complex eigenvalue of A of algebraic multiplicity m, then λ̄ is also an eigenvalue of A of algebraic multiplicity m. In this case, if {v1, …, vm} is a basis over C of T_λ, where T_λ is the generalized eigenspace corresponding to λ, then {v̄1, …, v̄m} is a basis over C of T_λ̄. It can be shown that the 2m real vectors Re v1, …, Re vm, Im v1, …, Im vm are linearly independent over R and form a basis of (T_λ ⊕ T_λ̄) ∩ Rⁿ.
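Lemma 3.12 can be checked numerically on a small defective matrix (an illustrative choice; NumPy assumed): the eigenvalue below has geometric multiplicity 1, yet its generalized eigenspace has the full algebraic dimension 2.

```python
import numpy as np

# A = [[5,1,0],[0,5,0],[0,0,7]]: eigenvalue 5 with algebraic multiplicity 2
# but only one independent eigenvector.
A = np.array([[5.0, 1.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 7.0]])
B = 5.0 * np.eye(3) - A

def nullity(M):
    # dim ker M = number of columns minus rank.
    return M.shape[1] - np.linalg.matrix_rank(M)

assert nullity(B) == 1                               # geometric multiplicity
assert nullity(np.linalg.matrix_power(B, 2)) == 2    # equals algebraic multiplicity, as in Lemma 3.12
```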


Chapter 4

Power Series Solutions

4.1 Power Series

An infinite series of the form

Σ_{n=0}^{∞} a_n(x − x0)ⁿ = a0 + a1(x − x0) + a2(x − x0)² + ··· (4.1.1)

is a power series in x − x0. In what follows, we will focus mostly on the point x0 = 0; that is,

Σ_{n=0}^{∞} a_n xⁿ = a0 + a1x + a2x² + ···. (4.1.2)

(4.1.2) is said to converge at a point x if the limit lim_{m→∞} Σ_{n=0}^{m} a_n xⁿ exists, and in this case the sum of the series is the value of this limit. It is obvious that (4.1.2) always converges at x = 0. It can be shown that each power series like (4.1.2) corresponds to a number R, 0 ≤ R ≤ ∞, called the radius of convergence, with the property that the series converges if |x| < R and diverges if |x| > R. It is customary to put R equal to 0 when the series converges only at x = 0, and equal to ∞ when it converges for all x. In many important cases, R can be found by the ratio test as follows.
If each a_n ≠ 0 in (4.1.2), and if for a fixed point x ≠ 0 we have

lim_{n→∞} |a_{n+1}x^{n+1} / (a_n xⁿ)| = lim_{n→∞} |a_{n+1}/a_n| |x| = L,

then (4.1.2) converges if L < 1 and diverges if L > 1. It follows from this that

R = lim_{n→∞} |a_n / a_{n+1}|

if this limit exists (we put R = ∞ if |a_n/a_{n+1}| → ∞).
The interval (−R, R) is called the interval of convergence, in the sense that inside the interval the series converges and outside the interval the series diverges.

Consider the following power series:

Σ_{n=0}^{∞} n! xⁿ = 1 + x + 2!x² + 3!x³ + ··· (4.1.3)


Σ_{n=0}^{∞} xⁿ/n! = 1 + x + x²/2! + x³/3! + ··· (4.1.4)

Σ_{n=0}^{∞} xⁿ = 1 + x + x² + x³ + ··· (4.1.5)

It is easy to verify that (4.1.3) converges only at x = 0, so R = 0. (4.1.4) converges for all x, so R = ∞. (4.1.5) converges for |x| < 1, so R = 1.
Suppose that (4.1.2) converges for |x| < R with R > 0, and denote its sum by f(x). That is,

f(x) = Σ_{n=0}^{∞} a_n xⁿ = a0 + a1x + a2x² + ···. (4.1.6)

Then one can prove that f is continuous and has derivatives of all orders for |x| < R. Also the series can be differentiated termwise, in the sense that

f′(x) = Σ_{n=1}^{∞} n a_n x^{n−1} = a1 + 2a2x + 3a3x² + ···,

f″(x) = Σ_{n=2}^{∞} n(n−1) a_n x^{n−2} = 2a2 + 3·2a3x + ···,

and so on. Furthermore, the resulting series still converge for |x| < R. These successive differentiated series yield the following basic formula relating a_n to f(x) and its derivatives:

a_n = f^{(n)}(0)/n!. (4.1.7)

Moreover, (4.1.6) can be integrated termwise provided the limits of integration lie inside the interval of convergence.

If

g(x) = Σ_{n=0}^{∞} b_n xⁿ = b0 + b1x + b2x² + ··· (4.1.8)

is another power series with interval of convergence |x| < R, then (4.1.6) and (4.1.8) can be added or subtracted termwise:

f(x) ± g(x) = Σ_{n=0}^{∞} (a_n ± b_n)xⁿ = (a0 ± b0) + (a1 ± b1)x + (a2 ± b2)x² + ···. (4.1.9)

They can also be multiplied like polynomials, in the sense that

f(x)g(x) = Σ_{n=0}^{∞} c_n xⁿ, where c_n = a0b_n + a1b_{n−1} + ··· + a_nb0.

Suppose two power series (4.1.6) and (4.1.8) converge to the same function, so that f(x) = g(x) for |x| < R; then (4.1.7) implies that they have the same coefficients: a_n = b_n for all n. In particular, if f(x) = 0 for all |x| < R, then a_n = 0 for all n.


Let f(x) be a continuous function that has derivatives of all orders for |x| < R. Can it be represented by a power series? In view of (4.1.7), it is natural to expect

f(x) = Σ_{n=0}^{∞} (f^{(n)}(0)/n!) xⁿ = f(0) + f′(0)x + (f″(0)/2!)x² + ··· (4.1.10)

to hold for all |x| < R. Unfortunately, this is not always true. Instead, one can use Taylor's expansion of f(x):

f(x) = Σ_{k=0}^{n} (f^{(k)}(0)/k!) x^k + R_n(x),

where the remainder R_n(x) is given by

R_n(x) = f^{(n+1)}(ξ) x^{n+1} / (n+1)!

for some point ξ between 0 and x. To verify (4.1.10), it suffices to show that R_n(x) → 0 as n → ∞.

Example. The following familiar expansions are valid for all x.

e^x = Σ_{n=0}^{∞} xⁿ/n! = 1 + x + x²/2! + x³/3! + ···

sin x = Σ_{n=0}^{∞} (−1)ⁿ x^{2n+1}/(2n+1)! = x − x³/3! + x⁵/5! − ···

cos x = Σ_{n=0}^{∞} (−1)ⁿ x^{2n}/(2n)! = 1 − x²/2! + x⁴/4! − ···

A function f(x) with the property that a power series expansion of the form

f(x) = Σ_{n=0}^{∞} a_n(x − x0)ⁿ (4.1.11)

is valid in some interval containing the point x0 is said to be analytic at x0. In this case, a_n is necessarily given by

a_n = f^{(n)}(x0)/n!,

and (4.1.11) is called the Taylor series of f(x) at x0.
Thus e^x, sin x, cos x are analytic at all points. Concerning analytic functions, we have the following basic results.

1. Polynomials, e^x, sin x, and cos x are analytic at all points.

2. If f(x) and g(x) are analytic at x0, then f(x) ± g(x), f(x)g(x), and f(x)/g(x) [provided g(x0) ≠ 0] are also analytic at x0.

3. If f(x) is analytic at x0 and f⁻¹(x) is a continuous inverse, then f⁻¹(x) is analytic at f(x0) if f′(x0) ≠ 0.

4. If g(x) is analytic at x0 and f(x) is analytic at g(x0), then f(g(x)) is analytic at x0.

5. The sum of a power series is analytic at all points inside the interval of convergence.


4.2 Series Solutions of First Order Equations

A first order differential equation y′ = f(x, y) can be solved by assuming that it has a power series solution. Let us illustrate this with two familiar examples.

Example. Consider the differential equation y′ = y. We assume it has a power series solution of the form

y = a0 + a1x + a2x² + ··· + a_nxⁿ + ··· (4.2.1)

that converges for |x| < R; that is, the equation y′ = y has a solution which is analytic at the origin. Then

y′ = a1 + 2a2x + ··· + n a_n x^{n−1} + ··· (4.2.2)

has the same interval of convergence. Since y′ = y, the series (4.2.1) and (4.2.2) have the same coefficients. That is,

(n + 1)a_{n+1} = a_n, for all n = 0, 1, 2, …

Thus a_n = (1/n)a_{n−1} = (1/(n(n−1)))a_{n−2} = ··· = (1/n!)a0. Therefore

y = a0 (1 + x + x²/2! + x³/3! + ··· + xⁿ/n! + ···),

where a0 is an arbitrary constant. In this case, we recognize this as the power series of e^x. Thus the general solution is y = a0e^x.

Example. The function y = (1 + x)^p, where p is a real constant, satisfies the differential equation

(1 + x)y′ = py, y(0) = 1. (4.2.3)

As before, we assume it has a power series solution of the form

y = a0 + a1x + a2x² + ··· + a_nxⁿ + ···

with positive radius of convergence. Then

y′ = a1 + 2a2x + 3a3x² + ··· + (n+1)a_{n+1}xⁿ + ···,
xy′ = a1x + 2a2x² + ··· + n a_n xⁿ + ···,
py = pa0 + pa1x + pa2x² + ··· + pa_nxⁿ + ···.

Using (4.2.3) and equating coefficients, we have

(n + 1)a_{n+1} + n a_n = p a_n, for all n = 0, 1, 2, …

That is,

a_{n+1} = ((p − n)/(n + 1)) a_n,

so that (with a0 = y(0) = 1)

a1 = p, a2 = p(p−1)/2, a3 = p(p−1)(p−2)/(2·3), …, a_n = p(p−1)···(p−n+1)/n!.

In other words,

y = 1 + px + (p(p−1)/2)x² + (p(p−1)(p−2)/(2·3))x³ + ··· + (p(p−1)···(p−n+1)/n!)xⁿ + ···.

By the ratio test, this series converges for |x| < 1. Since (4.2.3) has a unique solution, we conclude that

(1 + x)^p = 1 + px + (p(p−1)/2)x² + (p(p−1)(p−2)/(2·3))x³ + ··· + (p(p−1)···(p−n+1)/n!)xⁿ + ···

for |x| < 1. This is just the binomial series of (1 + x)^p.
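Similarly, iterating a_{n+1} = ((p − n)/(n + 1))a_n and summing reproduces (1 + x)^p numerically for |x| < 1 (Python sketch, with illustrative values p = 0.5, x = 0.3):

```python
# Partial sums of the binomial series via the recurrence
# a_{n+1} = (p - n)/(n + 1) * a_n with a_0 = 1, compared against (1 + x)^p.
p, x = 0.5, 0.3
a, s, x_pow = 1.0, 1.0, 1.0
for n in range(60):
    a = (p - n) / (n + 1) * a   # a is now a_{n+1}
    x_pow *= x                  # x_pow is now x^{n+1}
    s += a * x_pow

assert abs(s - (1 + x) ** p) < 1e-12
```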

4.3 Second Order Linear Equations and Ordinary Points

Consider the homogeneous second order linear differential equation

y'' + P(x) y' + Q(x) y = 0        (4.3.1)

Definition. The point x_0 is said to be an ordinary point of (4.3.1) if P(x) and Q(x) are analytic at x_0. If at x = x_0, P(x) and/or Q(x) is not analytic, then x_0 is said to be a singular point of (4.3.1). A singular point x_0 at which the functions (x - x_0) P(x) and (x - x_0)^2 Q(x) are analytic is called a regular singular point of (4.3.1). If a singular point x_0 is not a regular singular point, then it is called an irregular singular point.

Example. If P(x) and Q(x) are constant, then every point is an ordinary point of (4.3.1).

Example. Consider the equation y'' + xy = 0. Since the function Q(x) = x is analytic at every point, every point is an ordinary point.

Example. In the Cauchy-Euler equation y'' + (a_1/x) y' + (a_2/x^2) y = 0, where a_1 and a_2 are constants, the point x = 0 is a singular point, but every other point is an ordinary point.

Example. Consider the differential equation

y'' + (1/(x - 1)^2) y' + (8/(x(x - 1))) y = 0.

The singular points are 0 and 1. At the point 0, x P(x) = x/(x - 1)^2 and x^2 Q(x) = 8x/(x - 1), which are analytic at x = 0, and hence the point 0 is a regular singular point. At the point 1, we have (x - 1) P(x) = 1/(x - 1), which is not analytic at x = 1, and hence the point 1 is an irregular singular point.

To discuss the behavior of the singularities at infinity, we use the transformation x = 1/t, which converts the problem to the behavior of the transformed equation near the origin. Using the substitution x = 1/t, (4.3.1) becomes

d^2y/dt^2 + [2/t - (1/t^2) P(1/t)] dy/dt + (1/t^4) Q(1/t) y = 0        (4.3.2)


We dene the point at innity to be an ordinary point, a regular singular point, or an irregular singularpoint of (4.3.1) according as the origin of (4.3.2) is an ordinary point, a regular singular point, or anirregular singular point.

Example. Consider the differential equation

d^2y/dx^2 + (1/2)(1/x^2 + 1/x) dy/dx + (1/(2x^3)) y = 0.

The substitution x = 1/t transforms the equation into

d^2y/dt^2 + ((3 - t)/(2t)) dy/dt + (1/(2t)) y = 0.

Since t · (3 - t)/(2t) = (3 - t)/2 and t^2 · 1/(2t) = t/2 are analytic at t = 0, the origin is a regular singular point of the transformed equation. Hence the point at infinity is a regular singular point of the original differential equation.

Theorem 4.1 Let x_0 be an ordinary point of the differential equation

y'' + P(x) y' + Q(x) y = 0,

and let a_0 and a_1 be arbitrary constants. Then there exists a unique function y(x) that is analytic at x_0, is a solution of the differential equation in an interval containing x_0, and satisfies the initial conditions y(x_0) = a_0, y'(x_0) = a_1. Furthermore, if the power series expansions of P(x) and Q(x) are valid on an interval |x - x_0| < R, R > 0, then the power series expansion of this solution is also valid on the same interval.

Example. Find two linearly independent solutions of y'' - x y' - x^2 y = 0.
Ans. y_1(x) = 1 + (1/12) x^4 + (1/90) x^6 + (3/1120) x^8 + ··· and y_2(x) = x + (1/6) x^3 + (3/40) x^5 + (13/1008) x^7 + ···.

Example. Using the power series method, solve the initial value problem (1 + x^2) y'' + 2x y' - 2y = 0, y(0) = 0, y'(0) = 1.
Ans. y = x.
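The first answer can be reproduced mechanically: writing y = Σ a_n x^n in y'' = x y' + x^2 y and equating coefficients gives (n + 2)(n + 1) a_{n+2} = n a_n + a_{n-2}. The sketch below (an illustration added here, not part of the notes) runs this recursion with exact rational arithmetic:

```python
from fractions import Fraction

def coeffs(a0, a1, N):
    # a_{n+2} = (n a_n + a_{n-2}) / ((n+1)(n+2)) from y'' - x y' - x^2 y = 0,
    # with the convention a_{-1} = a_{-2} = 0.
    a = [Fraction(a0), Fraction(a1)] + [Fraction(0)] * N
    for n in range(N):
        prev = a[n - 2] if n >= 2 else Fraction(0)
        a[n + 2] = (n * a[n] + prev) / ((n + 1) * (n + 2))
    return a

y1 = coeffs(1, 0, 8)   # solution with y(0) = 1, y'(0) = 0
y2 = coeffs(0, 1, 8)   # solution with y(0) = 0, y'(0) = 1
print(y1[4], y1[6], y1[8])   # 1/12 1/90 3/1120
print(y2[3], y2[5], y2[7])   # 1/6 3/40 13/1008
```

The printed fractions match the coefficients quoted in the answer above.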

Legendre's equation

(1 - x^2) y'' - 2x y' + p(p + 1) y = 0,

where p is a constant called the order of Legendre's equation.

That is, P(x) = -2x/(1 - x^2) and Q(x) = p(p + 1)/(1 - x^2). The origin is an ordinary point, and we expect a solution of the form y = Σ a_n x^n. Thus the left hand side of the equation becomes

(1 - x^2) Σ_{n=0}^∞ (n + 1)(n + 2) a_{n+2} x^n - 2x Σ_{n=0}^∞ (n + 1) a_{n+1} x^n + p(p + 1) Σ_{n=0}^∞ a_n x^n,

or

Σ_{n=0}^∞ (n + 1)(n + 2) a_{n+2} x^n - Σ_{n=2}^∞ (n - 1) n a_n x^n - Σ_{n=1}^∞ 2n a_n x^n + Σ_{n=0}^∞ p(p + 1) a_n x^n.


The sum of these series is required to be zero, so the coefficient of x^n must be zero for every n. This gives

(n + 1)(n + 2) a_{n+2} - (n - 1) n a_n - 2n a_n + p(p + 1) a_n = 0

for n = 0, 1, 2, . . . (the second and third terms vanish automatically for n = 0 and n = 1). In other words,

a_{n+2} = -((p - n)(p + n + 1)/((n + 1)(n + 2))) a_n.

This recursion formula enables us to express a_n in terms of a_0 or a_1 according as n is even or odd. In fact, for m > 0, we have

a_{2m} = (-1)^m (p(p - 2)(p - 4) ··· (p - 2m + 2)(p + 1)(p + 3) ··· (p + 2m - 1)/(2m)!) a_0,

a_{2m+1} = (-1)^m ((p - 1)(p - 3) ··· (p - 2m + 1)(p + 2)(p + 4) ··· (p + 2m)/(2m + 1)!) a_1.

With that, we get two linearly independent solutions

y_1(x) = Σ_{m=0}^∞ a_{2m} x^{2m}  and  y_2(x) = Σ_{m=0}^∞ a_{2m+1} x^{2m+1},

and the general solution is given by

y = a_0 [1 - (p(p + 1)/2!) x^2 + (p(p - 2)(p + 1)(p + 3)/4!) x^4 - (p(p - 2)(p - 4)(p + 1)(p + 3)(p + 5)/6!) x^6 + ···]
  + a_1 [x - ((p - 1)(p + 2)/3!) x^3 + ((p - 1)(p - 3)(p + 2)(p + 4)/5!) x^5 - ((p - 1)(p - 3)(p - 5)(p + 2)(p + 4)(p + 6)/7!) x^7 + ···].

When p is not an integer, the series representing y_1 and y_2 have radius of convergence R = 1. For example,

|a_{2n+2} x^{2n+2}| / |a_{2n} x^{2n}| = (|(p - 2n)(p + 2n + 1)|/((2n + 1)(2n + 2))) |x|^2 → |x|^2

as n → ∞, and similarly for the second series. In fact, by Theorem 4.1 and the familiar expansion

1/(1 - x^2) = 1 + x^2 + x^4 + ···,  R = 1,

we know that R = 1 for both P(x) and Q(x). Thus any solution of the form y = Σ a_n x^n must be valid at least for |x| < 1.

The functions defined in the series solution of Legendre's equation are called Legendre functions. When p is a nonnegative integer, one of these series terminates and becomes a polynomial in x.
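The termination for integer p can be seen directly from the recursion, since the factor (p - n) kills every coefficient past n = p in the series of matching parity. The sketch below (illustrative, not part of the notes) runs the recursion with exact arithmetic for p = 2:

```python
from fractions import Fraction

def legendre_coeffs(p, a0, a1, N):
    # a_{n+2} = -((p - n)(p + n + 1)) / ((n + 1)(n + 2)) * a_n  (integer p here)
    a = [Fraction(a0), Fraction(a1)] + [Fraction(0)] * N
    for n in range(N):
        a[n + 2] = Fraction(-(p - n) * (p + n + 1), (n + 1) * (n + 2)) * a[n]
    return a

# p = 2: the even series stops after x^2, giving the polynomial 1 - 3x^2,
# a constant multiple of the Legendre polynomial P_2(x) = (3x^2 - 1)/2.
a = legendre_coeffs(2, 1, 0, 10)
print(a[0], a[2], a[4], a[6])   # 1 -3 0 0
```

All even coefficients beyond a_2 vanish, so this solution is the polynomial 1 - 3x^2; the odd-index solution for p = 2 remains a genuine infinite series.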


4.4. REGULAR SINGULAR POINTS AND THE METHOD OF FROBENIUS 65

4.4 Regular singular points and the method of FrobeniusConsider the second order linear homogeneous differential equation

x2y + xp(x)y + q(x)y = 0 , (4.4.1)

where p(x) and q(x) are analytic at x = 0 . In other words, 0 is a regular singular point of (4.4.1).Let p(x) = p0 + p1x + p2x2 + p3x3 + , and q(x) = q0 + q1x + q2x2 + q3x3 + . Suppose(4.4.1) has a series solution of the form

y = xr

n =0an xn =

n =0an xn + r (4.4.2)

An innite series of the form (4.4.2) is called a Frobenius series, and the method that we are goingto describe is called the method of Frobenius. We may assume a0 = 0 because the series must havea rst nonzero term. Termwise differentiation gives

y =

n =0an (n + r )xn + r 1 , (4.4.3)

and

y =

n =0an (n + r )(n + r 1)xn + r 2 . (4.4.4)

Substituting the series of y, y and y into (4.4.1) yields

[r (r 1)a0xr + ( r + 1) ra 1xr +1 + ] + [ p0x + p1x2 + ] [ra 0xr 1 + ( r + 1) a1xr + ]+[ q0 + q1x + ] [a0xr + a1xr +1 + ] = 0 . (4.4.5)

The lowest power of x in (4.4.5) is xr . If (4.4.5) is to be satised identically, the coefcient r (r 1)a0 + p0ra 0 + q0a0 of xr must vanish. As a0 = 0 , it follows that r satises the quadratic equation

r (r 1) + p0r + q0 = 0 . (4.4.6)This is the same equation obtained with the Cauchy-Euler equation. Equation (4.4.6) is called theindicial equation of (4.4.1) and its two roots (possibly equal) are the exponents of the differentialequation at the regular singular point x = 0 .

Let r 1 and r 2 be the roots of the indicial equation. If r 1 = r 2 , then there are two possible Frobeniussolutions and they are linearly independent. Whereas r 1 = r 2 , there is only one possible Frobeniusseries solution. The second one cannot be a Frobenius series and can only be found by other means.

Example. Find the exponents in the possible Frobenius series solutions of the equation

2x^2 (1 + x) y'' + 3x(1 + x)^3 y' - (1 - x^2) y = 0.

Solution. Clearly x = 0 is a regular singular point since p(x) = (3/2)(1 + x)^2 and q(x) = -(1/2)(1 - x) are polynomials. Rewrite the equation in the standard form:

y'' + (3(1 + 2x + x^2)/(2x)) y' - ((1 - x)/(2x^2)) y = 0.

We see that p_0 = 3/2 and q_0 = -1/2. Hence the indicial equation is

r(r - 1) + (3/2) r - 1/2 = r^2 + (1/2) r - 1/2 = (r + 1)(r - 1/2) = 0,

with roots r_1 = 1/2 and r_2 = -1. The two possible Frobenius series solutions are of the forms

y_1(x) = x^{1/2} Σ_{n=0}^∞ a_n x^n  and  y_2(x) = x^{-1} Σ_{n=0}^∞ a_n x^n.
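Finding the exponents is just solving the quadratic r(r - 1) + p_0 r + q_0 = 0. A small helper (illustrative only; it assumes the roots are real, as in the example above):

```python
import math

def indicial_roots(p0, q0):
    # Solve r(r - 1) + p0 r + q0 = r^2 + (p0 - 1) r + q0 = 0.
    b, c = p0 - 1.0, q0
    d = math.sqrt(b * b - 4.0 * c)   # assumes a nonnegative discriminant
    return (-b + d) / 2.0, (-b - d) / 2.0

print(indicial_roots(1.5, -0.5))   # exponents of the example: (0.5, -1.0)
```

For p_0 = 3/2 and q_0 = -1/2 this returns 1/2 and -1, agreeing with the factorization (r + 1)(r - 1/2) = 0.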

Once the exponents r_1 and r_2 are known, the coefficients in a Frobenius series solution can be found by substituting the series (4.4.2), (4.4.3) and (4.4.4) into the differential equation (4.4.1). If r_1 and r_2 are complex conjugates, we always get two linearly independent solutions. We shall restrict our attention to real roots of the indicial equation and seek solutions only for x > 0. The solutions on the interval x < 0 can be studied by changing the variable to t = -x and solving the resulting equation for t > 0.

Let's work out the recursion relations for the coefficients. By (4.4.3), we have

(1/x) p(x) y' = (1/x) (Σ_{n=0}^∞ p_n x^n) (Σ_{n=0}^∞ a_n (n + r) x^{n+r-1})
             = x^{r-2} (Σ_{n=0}^∞ p_n x^n) (Σ_{n=0}^∞ a_n (n + r) x^n)
             = x^{r-2} Σ_{n=0}^∞ [Σ_{k=0}^{n} p_{n-k} a_k (r + k)] x^n
             = x^{r-2} Σ_{n=0}^∞ [Σ_{k=0}^{n-1} p_{n-k} a_k (r + k) + p_0 a_n (r + n)] x^n.

Also we have

(1/x^2) q(x) y = (1/x^2) (Σ_{n=0}^∞ q_n x^n) (Σ_{n=0}^∞ a_n x^{r+n})
              = x^{r-2} (Σ_{n=0}^∞ q_n x^n) (Σ_{n=0}^∞ a_n x^n)
              = x^{r-2} Σ_{n=0}^∞ [Σ_{k=0}^{n} q_{n-k} a_k] x^n
              = x^{r-2} Σ_{n=0}^∞ [Σ_{k=0}^{n-1} q_{n-k} a_k + q_0 a_n] x^n.

Substituting these into the differential equation (4.4.1) and cancelling the factor x^{r-2}, we have

Σ_{n=0}^∞ { a_n [(r + n)(r + n - 1) + (r + n) p_0 + q_0] + Σ_{k=0}^{n-1} a_k [(r + k) p_{n-k} + q_{n-k}] } x^n = 0.


Thus, equating the coefficients to zero, we have for n ≥ 0,

a_n [(r + n)(r + n - 1) + (r + n) p_0 + q_0] + Σ_{k=0}^{n-1} a_k [(r + k) p_{n-k} + q_{n-k}] = 0.        (4.4.7)

When n = 0, we get r(r - 1) + r p_0 + q_0 = 0, which is true because r is a root of the indicial equation. Then a_n can be determined by (4.4.7) recursively provided

(r + n)(r + n - 1) + (r + n) p_0 + q_0 ≠ 0.

This would be the case if the two roots of the indicial equation do not differ by an integer. Suppose r_1 > r_2 are the two roots of the indicial equation with r_1 = r_2 + N for some positive integer N. If we start with the Frobenius series with the smaller exponent r_2, then at the N-th step the process may break off, because the coefficient multiplying a_N in (4.4.7) is zero. In this case, only the Frobenius series solution with the larger exponent is guaranteed to exist; the other solution may fail to be a Frobenius series.

Theorem 4.2 Assume that x = 0 is a regular singular point of the differential equation (4.4.1) and that the power series expansions of p(x) and q(x) are valid on an interval |x| < R with R > 0. Let the indicial equation (4.4.6) have real roots r_1 and r_2 with r_1 ≥ r_2. Then (4.4.1) has at least one solution

y_1 = x^{r_1} Σ_{n=0}^∞ a_n x^n,  (a_0 ≠ 0)        (4.4.8)

on the interval 0 < x < R, where the a_n are determined in terms of a_0 by the recursion formula (4.4.7) with r replaced by r_1, and the series Σ_{n=0}^∞ a_n x^n converges for |x| < R. Furthermore, if r_1 - r_2 is not zero or a positive integer, then equation (4.4.1) has a second independent solution

y_2 = x^{r_2} Σ_{n=0}^∞ a_n x^n,  (a_0 ≠ 0)        (4.4.9)

on the same interval, where the a_n are determined in terms of a_0 by the recursion formula (4.4.7) with r replaced by r_2, and again the series Σ_{n=0}^∞ a_n x^n converges for |x| < R.

Remark. (1) If r_1 = r_2, then there cannot be a second Frobenius series solution. (2) If r_1 - r_2 = n is a positive integer and the summation in (4.4.7) is nonzero, then there cannot be a second Frobenius series solution. (3) If r_1 - r_2 = n is a positive integer and the summation in (4.4.7) is zero, then a_n is unrestricted and can be assigned any value whatever. In particular, we can put a_n = 0 and continue to compute the coefficients without difficulty. Hence, in this case, there does exist a second Frobenius series solution. In many cases of (1) and (2), it is possible to determine a second solution by the method of variation of parameters. For instance, a second solution of the Cauchy-Euler equation in the case where its indicial equation has equal roots r is given by x^r ln x.

Example. Find two linearly independent Frobenius series solutions of the differential equation

2x^2 y'' + x(2x + 1) y' - y = 0.

Ans. y_1 = x(1 - (2/5) x + (4/35) x^2 + ···), y_2 = x^{-1/2} (1 - x + (1/2) x^2 + ···).
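These coefficients can be generated mechanically from the recursion (4.4.7). The sketch below (an illustration added here, not part of the notes) implements it with exact rational arithmetic; for this example p(x) = 1/2 + x and q(x) = -1/2:

```python
from fractions import Fraction

def frobenius(r, p, q, N):
    # Recursion (4.4.7): for n >= 1,
    #   a_n F(r + n) = - sum_{k=0}^{n-1} a_k [(r + k) p_{n-k} + q_{n-k}],
    # where F(s) = s(s - 1) + p_0 s + q_0 and p, q list the series coefficients.
    def P(i): return p[i] if i < len(p) else Fraction(0)
    def Q(i): return q[i] if i < len(q) else Fraction(0)
    a = [Fraction(1)]                     # normalize a_0 = 1
    for n in range(1, N + 1):
        s = sum(a[k] * ((r + k) * P(n - k) + Q(n - k)) for k in range(n))
        F = (r + n) * (r + n - 1) + (r + n) * P(0) + Q(0)
        a.append(-s / F)
    return a

# 2x^2 y'' + x(2x + 1) y' - y = 0, i.e. p(x) = 1/2 + x, q(x) = -1/2
p = [Fraction(1, 2), Fraction(1)]
q = [Fraction(-1, 2)]
print(frobenius(Fraction(1), p, q, 2))        # coefficients 1, -2/5, 4/35
print(frobenius(Fraction(-1, 2), p, q, 2))    # coefficients 1, -1, 1/2
```

The two runs use the exponents r_1 = 1 and r_2 = -1/2 and reproduce the coefficients quoted in the answer; note the helper assumes F(r + n) ≠ 0, i.e. the roots do not differ by an integer.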

Example. Find the Frobenius series solutions of x y'' + 2 y' + x y = 0.


Solution. Rewrite the equation in the standard form x^2 y'' + 2x y' + x^2 y = 0. We see that p(x) = 2 and q(x) = x^2. Thus p_0 = 2 and q_0 = 0, and the indicial equation is r(r - 1) + 2r = r(r + 1) = 0, so that the exponents of the equation are r_1 = 0 and r_2 = -1. In this case, r_1 - r_2 is an integer and we may not have two Frobenius series solutions. We know there is a Frobenius series solution corresponding to r_1 = 0. Let's consider the possibility of a solution corresponding to the smaller exponent r_2 = -1. Let's begin with y = x^{-1} Σ_{n=0}^∞ c_n x^n = Σ_{n=0}^∞ c_n x^{n-1}. Substituting this into the given equation, we obtain

Σ_{n=0}^∞ (n - 1)(n - 2) c_n x^{n-2} + 2 Σ_{n=0}^∞ (n - 1) c_n x^{n-2} + Σ_{n=0}^∞ c_n x^n = 0,

or equivalently

Σ_{n=0}^∞ n(n - 1) c_n x^{n-2} + Σ_{n=0}^∞ c_n x^n = 0,

or

Σ_{n=0}^∞ n(n - 1) c_n x^{n-2} + Σ_{n=2}^∞ c_{n-2} x^{n-2} = 0.

The cases n = 0 and n = 1 reduce to 0 · c_0 = 0 and 0 · c_1 = 0. Thus c_0 and c_1 are arbitrary, and we can expect to get two linearly independent Frobenius series solutions. Equating coefficients, we obtain the recurrence relation

c_n = -c_{n-2}/(n(n - 1)),  for n ≥ 2.

It follows from this that, for n ≥ 1,

c_{2n} = (-1)^n c_0/(2n)!  and  c_{2n+1} = (-1)^n c_1/(2n + 1)!.

Therefore, we have

y = x^{-1} Σ_{n=0}^∞ c_n x^n = (c_0/x) Σ_{n=0}^∞ ((-1)^n/(2n)!) x^{2n} + (c_1/x) Σ_{n=0}^∞ ((-1)^n/(2n + 1)!) x^{2n+1}.

We recognize this general solution as

y = (1/x)(c_0 cos x + c_1 sin x).

If we begin with the larger exponent, we will get the solution (sin x)/x.
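The closed form is easy to confirm numerically. This sketch (illustrative only) runs the recurrence c_n = -c_{n-2}/(n(n - 1)) with c_0 = 1, c_1 = 0, which should pick out the (cos x)/x solution:

```python
from math import cos

# c_0 = 1, c_1 = 0 selects the (cos x)/x branch of the general solution.
c = [1.0, 0.0]
for n in range(2, 20):
    c.append(-c[n - 2] / (n * (n - 1)))

x = 1.2   # an arbitrary test point with x > 0
y = sum(c[n] * x ** (n - 1) for n in range(20))   # y = x^{-1} * sum c_n x^n
print(y - cos(x) / x)  # negligibly small
```

With c_0 = 0, c_1 = 1 the same loop reproduces (sin x)/x instead.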

4.5 Bessel's equation

The second order linear homogeneous differential equation

x^2 y'' + x y' + (x^2 - p^2) y = 0,        (4.5.1)

where p is a constant, is called Bessel's equation. Its general solution is of the form

y = c_1 J_p(x) + c_2 Y_p(x).        (4.5.2)


If p = 0, this is the only Frobenius series solution. In this case, if we choose a_0 = 1, we get a solution of Bessel's equation of order 0 given by

J_0(x) = Σ_{m=0}^∞ ((-1)^m x^{2m})/(2^{2m} (m!)^2) = 1 - x^2/4 + x^4/64 - x^6/2304 + ···.

This special function J_0(x) is called the Bessel function of order zero of the first kind. A second linearly independent solution can be obtained by other means, but it is not a Frobenius series.
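One can verify the series termwise (a sketch added here, not part of the notes): substituting y = Σ a_n x^n into x^2 y'' + x y' + x^2 y makes the coefficient of x^n equal to n(n - 1) a_n + n a_n + a_{n-2} = n^2 a_n + a_{n-2}, which must vanish for every n ≥ 2:

```python
from fractions import Fraction
from math import factorial

# Coefficients of J_0: a_{2m} = (-1)^m / (2^{2m} (m!)^2), odd coefficients zero.
N = 12
a = [Fraction(0)] * (N + 1)
for m in range(N // 2 + 1):
    a[2 * m] = Fraction((-1) ** m, 4 ** m * factorial(m) ** 2)

# Coefficient of x^n in x^2 y'' + x y' + x^2 y is n^2 a_n + a_{n-2}.
residues = [n * n * a[n] + a[n - 2] for n in range(2, N + 1)]
print(all(r == 0 for r in residues))  # True
```

Every residue vanishes exactly, confirming that the displayed coefficients satisfy Bessel's equation of order 0 up to the truncation order.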

The case r = -p < 0. Our theorem does not guarantee the existence of a Frobenius solution associated with the smaller exponent. However, as we shall see, Bessel's equation does have a second Frobenius series solution so long as p is not an integer. Let's write b_m in place of c_m in (4.5.5). Thus we have b_1 = 0 and, for m ≥ 2,

m(m - 2p) b_m + b_{m-2} = 0.        (4.5.7)

Note that there is a potential problem if it happens that 2p is a positive integer, or equivalently if p is a positive integer or an odd integral multiple of 1/2. Suppose p = k/2 where k is an odd positive integer. Then for m ≥ 2, (4.5.7) becomes

m(m - k) b_m = -b_{m-2}.        (4.5.8)

Recall b_1 = 0, so that b_3 = 0, b_5 = 0, . . . , b_{k-2} = 0 by (4.5.8). Now in order to satisfy (4.5.8) for m = k, we can simply choose b_k = 0. Subsequently b_m = 0 for all odd values of m. [If we let b_k be arbitrary and non-zero, the subsequent solution so obtained just adds a multiple b_k y_1(x); thus no new solution arises in this situation.]

So we only have to work out b_m in terms of b_0 for even values of m. In view of (4.5.8), it is possible to solve for b_m in terms of b_{m-2}, since m(m - k) ≠ 0 as m is always even while k is odd. The result is the same as before except that p is replaced by -p. Thus in this case, we have a second solution

y_2 = b_0 Σ_{m=0}^∞ ((-1)^m)/(2^{2m} m! (-p + 1)(-p + 2) ··· (-p + m)) x^{2m-p}.

Since p(x) = 1 and q(x) = x^2 - p^2 are just polynomials, the series representing y_1 and y_2 converge for all x > 0. If p > 0, then the first term in y_1 is a_0 x^p, whereas the first term in y_2 is b_0 x^{-p}. Hence
