Paper XII : Ordinary Differential Equation

Lecture notes for the Post-graduate, Sem 3 Course

Department of Mathematics
Ramakrishna Mission Vidyamandira

Belur Math, INDIA
Course Instructor : Dr. Arnab Jyoti Das Gupta

August, 2020 to January, 2021


Syllabus

1. Preliminaries – Initial Value problem and the equivalent integral equation, m-th order equation in d-dimensions as a first order system, concepts of local existence, existence in the large and uniqueness of solutions with examples.

2. Basic Theorems – Ascoli-Arzelà Theorem. A Theorem on convergence of solutions of a family of initial-value problems.

3. Picard-Lindelöf Theorem – Peano's existence Theorem and corollary. Maximal intervals of existence. Extension Theorem and corollaries. Kamke's convergence Theorem. Kneser's Theorem (Statement only).

4. Differential inequalities and Uniqueness – Gronwall's inequality. Maximal and minimal solutions. Differential inequalities. A Theorem of Wintner. Uniqueness Theorems. Nagumo's and Osgood's criteria.

5. Egress points and Lyapunov functions. Successive approximations.

6. Variation of constants, reduction to smaller systems. Basic inequalities, constant coefficients. Floquet Theory. Adjoint systems, Higher order equations.

7. Linear second order equations – Preliminaries. Basic facts. Theorems of Sturm. Sturm-Liouville Boundary value Problems.

References

1. P. Hartman, Ordinary Differential Equations, John Wiley (1964).

2. E.A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, NY (1955).

3. G.F. Simmons, Differential Equations.

4. W.E. Boyce and R.C. DiPrima, Elementary Differential Equations and Boundary Value Problems.

5. S.L. Ross, Differential Equations.


Contents

1 Existence and Uniqueness of solutions
  1.1 Notations
  1.2 Initial Value problem
  1.3 Uniqueness of solutions
    1.3.1 Lipschitz condition
  1.4 Method of successive approximations
  1.5 Continuation of solutions
  1.6 System of differential equations of first order
  1.7 Higher order ODEs as system of first order ODEs
  1.8 Dependency on Initial conditions

2 System of first order ordinary differential equations
  2.1 System of First order ODEs
  2.2 Systems of linear ODEs
  2.3 Uniqueness of the solution of the system of differential equations
    2.3.1 Existence of Fundamental set of solutions
    2.3.2 Linear Differential operators (constant coefficients)
    2.3.3 Linear Differential operators (variable coefficients)
    2.3.4 Existence and uniqueness theorem
  2.4 Inhomogeneous system of first order linear ODEs
    2.4.1 n-th order linear ODE as a system of first order linear ODEs
    2.4.2 n-th order linear ODE with constant coefficients
  2.5 Phase portrait

3 Differential Inequalities
  3.1 Gronwall's Inequality
  3.2 Solution of a differential inequality

4 Some more Existence and Uniqueness results
  4.1 Maximal and Minimal solutions
  4.2 Uniqueness results

5 Sturm-Liouville Theory
  5.1 Adjoint of a second order linear ODE
  5.2 Self-adjoint 2nd order linear ODE
  5.3 Basic results of Sturm theory
  5.4 Sturm-Liouville Problems

6 Variation of parameters
  6.1 General theory for second order linear ODEs

7 Liapunov functions
  7.1 Stability of non-linear ODEs
    7.1.1 Liapunov's direct method
  7.2 Instability theorems


Chapter 1

Existence and Uniqueness of solutions

Lecture 1

1.1 Notations

Throughout our discussion we will be using the following notations.

• I = (a, b) will denote an open interval in R.

• Ck(I) will denote the set of all complex valued functions having k continuous derivatives on I.

• When I is an interval other than an open interval, we extend the above definition as follows:

  – If f has a right hand k-th derivative at a which is continuous from the right at a, then we will say f ∈ Ck([a, b)).

  – If f has a left hand k-th derivative at b which is continuous from the left at b, then we will say f ∈ Ck((a, b]).

  – Analogously, we have the condition for f ∈ Ck([a, b]).

• D will denote the domain, meaning an open connected set in the real (t, x) plane, where t is the independent variable and x will be a solution or the dependent variable.

• Ck(D) will denote the set of all complex valued functions on D such that all k-th order partial derivatives ∂^k f / ∂t^p ∂x^q, p + q = k, exist and are continuous on D.

• C0(I) or C(I) will denote the set of all continuous functions on I.

• If D is such that it has multiple boundary points, which are also limit points, then one may look at the continuity of the left-hand and/or right-hand derivatives at each such point to define Ck(D) accordingly.


Our Aim : To solve the following problem: find a differentiable function ϕ defined on I such that

1. ∀t ∈ I, (t, ϕ(t)) ∈ D and

2. ϕ′(t) = f(t, ϕ(t)), ∀t ∈ I,

where f ∈ C(D) and D is a domain.

Remark. 1. Such a problem is called an ordinary differential equation of the first order.

2. It is also represented as

(E) x′(t) = f(t, x), t ∈ I

3. If such a differentiable function ϕ exists, then ϕ is called a solution of the differential equation (E) on I.

4. Since f ∈ C(D) and ϕ′(t) = f(t, ϕ(t)), we have ϕ′ ∈ C(I), i.e. ϕ ∈ C1(I).

5. From the geometrical point of view, the above problem can be rephrased as finding a function ϕ ∈ C1(I) whose graph (t, ϕ(t)) has slope f(t, ϕ(t)) at each point (t, ϕ(t)).

1.2 Initial Value problem

Given (τ, ξ) ∈ D, the problem is to find an interval I containing τ and a solution ϕ of (E) on I satisfying ϕ(τ) = ξ, i.e. satisfying

x′(t) = f(t, x(t)), x(τ) = ξ

Remark. 1. ODEs like x′(t) = 1 have infinitely many solutions x(t) = t + c, where c is a constant.

2. To avoid such situations we impose conditions on the solutions to obtain either a unique solution or a smaller class of solutions.

Coming back to our initial value problem

(IVP) x′(t) = f(t, x(t)), x(τ) = ξ


If ϕ is a solution to the above problem, then we can integrate both sides and obtain

ϕ(t) − ϕ(τ) = ∫_τ^t f(s, ϕ(s)) ds

=⇒ ϕ(t) = ϕ(τ) + ∫_τ^t f(s, ϕ(s)) ds, ∀t ∈ I

On the other hand, if we start with a function implicitly defined as

(∗) Ψ(t) = ξ + ∫_τ^t f(s, Ψ(s)) ds, ∀t ∈ I

then Ψ ∈ C1(I) as f is continuous. Taking derivatives with respect to t we have

Ψ′(t) = f(t, Ψ(t)), ∀t ∈ I

Additionally, we have Ψ(τ) = ξ. Thus, Ψ is a solution of the IVP.

This shows we have a one-to-one correspondence between the solutions of the IVP and the C1(I) functions of the form (∗). Hence, the above IVP is equivalent to finding the solutions of the integral equation (∗).

Remark. 1. Though we have obtained an equivalent form of the IVP, we have not yet solved it.

2. In fact, we still have not answered the question of whether such a solution exists or not.

3. Even if the solution exists on one particular interval I, will it exist on the whole of R or on certain other intervals I′?

Example. Consider dy/dt = y² with the initial condition y(1) = −1. Then, clearly y(t) = −1/t is a solution. But note that 1/t is undefined at t = 0. Thus, it will be a solution only on those intervals I that do not contain the point t = 0.

Remark. The above example shows that the interval plays an important role in answering the question of existence of solutions.
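The example can be checked numerically; the sketch below (plain Python, an illustrative check not part of the notes) compares a central-difference derivative of y(t) = −1/t with y(t)² at sample points away from t = 0.

```python
# Numerically check that y(t) = -1/t satisfies y' = y^2 on intervals avoiding t = 0.

def y(t):
    return -1.0 / t

def central_diff(f, t, h=1e-5):
    # Central difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.5, 1.0, 2.0, -1.0]:
    defect = abs(central_diff(y, t) - y(t) ** 2)
    print(f"t = {t:5.2f}, |y'(t) - y(t)^2| = {defect:.2e}")
# The defect is tiny at every sample point, but y itself is undefined at t = 0,
# so no solution through (1, -1) can cross t = 0.
```

The defect stays at rounding level on both sides of t = 0, while the formula itself has no value at t = 0, which is exactly the interval restriction in the remark.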


Definition 1.2.1 (ε-approximate solutions). Let f ∈ C(D) be real valued. A function ϕ ∈ C(I) is said to be an ε-approximate solution of (E) on the interval I if it satisfies the following conditions:

1. (t, ϕ(t)) ∈ D, ∀t ∈ I;

2. ϕ ∈ C1(I), except at most for finitely many points, where ϕ′ may have simple discontinuities;

3. |ϕ′(t) − f(t, ϕ(t))| ≤ ε, ∀t ∈ I \ S, where S is the set of all simple discontinuities of the function ϕ′.

Remark. 1. Any function ϕ ∈ C(I) having property (2) above is said to have a piecewise continuous derivative on I, and we write ϕ ∈ C1p(I).

2. Recall that a function f has a simple discontinuity at a point c if the left and right limits of f exist at c but are not equal.

3. If ε = 0, then ϕ ∈ C1(I), in which case S = ∅ and we have an exact solution.

Some notations to be used later :

1. R will denote the rectangular region

|t − τ| ≤ a, |x − ξ| ≤ b, a, b > 0, i.e. R := [τ − a, τ + a] × [ξ − b, ξ + b]

It is a rectangular region with center at (τ, ξ).

2. M = max_{(t,x)∈R} |f(t, x)|; since f is continuous and R is compact, M exists.

3. α = min(a, b/M).

Lecture 2

Theorem 1.2.2 (Existence of solution). Let f ∈ C(R) and ε > 0. Then, there exists an ε-approximate solution ϕ of (E) on the interval [τ − α, τ + α] such that ϕ(τ) = ξ.

This can be rephrased as


Let us consider an initial value problem

(1) x′(t) = f(t, x), t ∈ [τ − a, τ + a], and x(τ) = ξ.

Further, consider b, ε ∈ R+ such that f ∈ C(R), where R = [τ − a, τ + a] × [ξ − b, ξ + b]. Then, there exists an α ∈ R+ and an ε-approximate solution of (1) on the interval [τ − α, τ + α], where α = min(a, b/M) and M = max_{(t,x)∈R} |f(t, x)|.

Proof. Since R is compact and f ∈ C(R), f is uniformly continuous on R. Thus, for the given ε > 0, ∃δε > 0 such that |f(t, x) − f(t̄, x̄)| ≤ ε whenever ||(t, x) − (t̄, x̄)|| ≤ δε. (See figure (1.1).)

Figure 1.1: Diagrammatic representation of the neighborhoods.

Also, we now have the existence of M and α as defined before. Clearly α ≤ a, and hence we won't run into the problem of going outside the domain if we divide the interval [τ, τ + α] into n equal parts t0 = τ < t1 < t2 < · · · < tn = τ + α in such a way that

max_{1≤k≤n} |tk − tk−1| ≤ min(δε, δε/M)

In the interval t ∈ [t0, t1], construct a line segment passing through (t0, ξ) with slope f(t0, ξ). Let this line segment meet the boundary t = t1 at the point (t1, ξ1).


Figure 1.2: Diagrammatic representation of the approximate solution.

Then, construct another line segment passing through (t1, ξ1) with slope f(t1, ξ1) in the interval [t1, t2]. Follow this process till the last interval [tn−1, tn]. (See figure (1.2).)

Note :-

1. ξk = ξk−1 + (tk − tk−1) f(tk−1, ξk−1), ∀k ≥ 1.

2. By the definition of M, the above line segments will remain in the region T := {(t, x) : |t − τ| ≤ α, |x − ξ| ≤ M|t − τ|} ⊆ R.

Thus, if we construct a function ϕ defined as

ϕ(t) = ξ, if t = τ;
ϕ(t) = ϕ(tk−1) + f(tk−1, ϕ(tk−1))(t − tk−1), if t ∈ [tk−1, tk], 1 ≤ k ≤ n,

then ϕ forms the required ε-approximate solution, where ϕ is extended to [τ − α, τ] in the same way as on [τ, τ + α].

Remark. 1. ϕ has exactly the graph that we constructed using the previous line segments.

2. Clearly,

(a) ∀t ∈ [τ, τ + α], (t, ϕ(t)) ∈ T ⊆ R.

(b) ϕ ∈ C1p([τ, τ + α]), as the points t1, t2, · · · , tn may pose problems.

(c) |ϕ′(t) − f(t, ϕ(t))| = |f(tk−1, ϕ(tk−1)) − f(t, ϕ(t))| ≤ ε, ∀t ∈ (tk−1, tk).
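The polygonal construction in the proof is exactly Euler's method. A minimal sketch (the right-hand side f(t, x) = t + x and the rectangle are illustrative choices, not from the notes) builds the polygon on [τ, τ + α] and checks the defect of property (c) above against the bound that uniform continuity provides.

```python
# Euler polygon as an epsilon-approximate solution for x' = f(t, x) = t + x,
# with (tau, xi) = (0, 0), rectangle R = [-1, 1] x [-1, 1].
def f(t, x):
    return t + x

tau, xi, a, b = 0.0, 0.0, 1.0, 1.0
M = 2.0                      # max |t + x| on R
alpha = min(a, b / M)        # interval half-length from the theorem
n = 1000
h = alpha / n                # partition mesh

# Build the polygon nodes (t_k, xi_k).
ts, xs = [tau], [xi]
for k in range(n):
    t_k, x_k = ts[-1], xs[-1]
    ts.append(t_k + h)
    xs.append(x_k + h * f(t_k, x_k))

# On each segment the defect is |f(t_k, xi_k) - f(t, phi(t))|.  Since this f
# is 1-Lipschitz in each variable and slopes are bounded by M, the defect is
# at most h + M*h = 3h here.  Sample it at segment midpoints.
worst = 0.0
for k in range(n):
    slope = f(ts[k], xs[k])
    tm = ts[k] + h / 2
    xm = xs[k] + (h / 2) * slope
    worst = max(worst, abs(slope - f(tm, xm)))

print(f"max sampled defect = {worst:.2e}, bound (1 + M)h = {(1 + M) * h:.2e}")
```

Refining the partition (larger n) drives the defect, and hence ε, to zero, which is the mechanism used in the Cauchy-Peano theorem below.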

Definition 1.2.3 (Equicontinuous collection of functions). Let F = {f : I → R}, where I ⊆ R is an interval. This collection of functions F is said to be equicontinuous on I if for each ε > 0, ∃δ > 0 such that

|f(t) − f(t̄)| < ε whenever f ∈ F; t, t̄ ∈ I; |t − t̄| < δ.
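A concrete illustration of failure (a hypothetical example, not from the notes): the family fn(t) = sin(nt) on [0, 1] is uniformly bounded by 1 but not equicontinuous, because no single δ serves every member of the family at once.

```python
import math

# For eps = 1/2, show that no delta works uniformly for f_n(t) = sin(n t):
# given delta, pick n with pi/(2n) < delta; then t = 0 and t' = pi/(2n)
# satisfy |t - t'| < delta but |f_n(t) - f_n(t')| = |sin 0 - sin(pi/2)| = 1.
eps = 0.5
for m in [10, 100, 1000]:
    delta = 1.0 / m
    n = int(math.pi / (2 * delta)) + 1   # ensures pi/(2n) < delta
    t1, t2 = 0.0, math.pi / (2 * n)
    gap = abs(math.sin(n * t1) - math.sin(n * t2))
    assert t2 - t1 < delta and gap >= eps
    print(f"delta = {delta}: n = {n} violates equicontinuity (jump = {gap:.3f})")
```

Each individual fn is (uniformly) continuous; it is the uniformity across the whole family that fails, which is what the definition adds.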

Figure 1.3: Graphical illustration of equicontinuous collection of functions.

Remark. With respect to the above figure, note the following.

1. Within [a, b], {f1, f2} forms an equicontinuous class of functions, but within [b, β] they don't.

2. Graphically, one may understand a collection of equicontinuous functions as follows: given any horizontal strip of height ε > 0, we will be able to obtain a δ > 0 such that whenever we consider a vertical strip of width δ within the given interval I, we will have the graphs of all the functions of this collection enclosed within a rectangle of horizontal dimension δ and vertical dimension ε. (See figure (1.4).)

3. One of the most important properties of equicontinuous families is given by the following lemma (Ascoli's lemma, often referred to as the Arzelà-Ascoli theorem).


Figure 1.4: Graphical illustration of two equicontinuous functions.

Lemma 1.2.4. Let I ⊆ R be a bounded interval and let F be an infinite collection of uniformly bounded, equicontinuous functions defined on I. Then, F contains a sequence {fn} which is uniformly convergent on I.

Proof. As Q is countable, ∃ a bijection between N and Q ∩ I. Let {rk}, k ∈ N, be the enumerated collection of the rational numbers in I.

Since the collection F is uniformly bounded, the collection of real numbers {f(r1) | f ∈ F} is a bounded set and hence contains a convergent sequence. Thus, ∃ a sequence of functions {fn,1} ⊆ F such that {fn,1(r1)} is convergent.

Similarly, for the collection {fn,1(r2)} we can find a subsequence of functions {fn,2} ⊆ {fn,1} such that {fn,2(r2)} is convergent.

Thus, for each k ∈ N, we obtain a subsequence {fn,k} ⊆ {fn,k−1} ⊆ · · · ⊆ {fn,1} ⊆ F such that {fn,k(rk)} is convergent. [Note, uniform boundedness is required to ensure that {f(rn) | f ∈ F} is bounded for each n ∈ N.]

Note that for each fixed k ∈ N, the sequence of functions {fn,k} is convergent at the points r1, r2, · · · , rk.

Now, construct a new sequence of functions {gn} ⊆ F defined as gn = fn,n.

Claim : {gn} is the required uniformly convergent subsequence on I.


Verification : For any fixed k ∈ N, the tail sequence {gn}n≥k is a subsequence of {fn,k}; hence the sequence of real numbers {gn(rk)} converges.

As k was arbitrarily chosen from N, {gn(rk)} is convergent for every k ∈ N. Since {rk} was an enumeration of the rationals in I, {gn(r)} is convergent for all rationals r ∈ I.

Hence, for every ε > 0 and rk ∈ I ∩ Q, ∃Nε(rk) such that

|gn(rk) − gm(rk)| < ε, ∀n, m ≥ Nε(rk)

Again, as the given collection F is equicontinuous, for the above chosen ε > 0, ∃δ > 0 such that

|f(t) − f(t̄)| < ε, ∀f ∈ F; t, t̄ ∈ I; |t − t̄| < δ

Since I is a bounded interval in R, we can split I into finitely many subintervals Ij, j = 1, 2, · · · , p, such that the length of the largest subinterval is less than δ. (See figure (1.5).)

Figure 1.5: Division of intervals.

Now, as the set of all rationals in I is dense in I, for each of the above subintervals Ij we can find a rational rk(j) ∈ Ij (this rational depends on the enumeration as well as the subinterval Ij).

Finally, to show uniform convergence of the sequence of functions {gn} over the interval I, we need to show that for each ε > 0, ∃Mε ∈ N such that for every t ∈ I, |gn(t) − gm(t)| < 3ε whenever n, m ≥ Mε.

Now, t ∈ I =⇒ ∃j ∈ {1, 2, · · · , p} such that t ∈ Ij. Hence, we have

|gn(t) − gm(t)| ≤ |gn(t) − gn(rk(j))| + |gn(rk(j)) − gm(rk(j))| + |gm(rk(j)) − gm(t)|.


Now, t, rk(j) ∈ Ij =⇒ |t − rk(j)| < δ, which implies |gn(t) − gn(rk(j))| < ε and |gm(rk(j)) − gm(t)| < ε.

Also, convergence of {gn} on I ∩ Q gives |gn(rk(j)) − gm(rk(j))| < ε, ∀n, m ≥ Nε(rk(j)).

Thus, we have |gn(t) − gm(t)| < 3ε, ∀n, m ≥ Nε(rk(j)).

Now, we have finitely many subintervals Ij, and for each subinterval we have fixed a rational number rk(j). Thus, we have chosen only finitely many rationals rk(j), j = 1, 2, · · · , p, representing each of the finitely many intervals Ij.

Let Mε = max_{1≤j≤p} {Nε(rk(j))}. Then, for n, m ≥ Mε we have

|gn(t) − gm(t)| < 3ε, ∀t ∈ I

Hence, {gn} forms a uniformly convergent sequence on I.

Lecture 3

Theorem 1.2.5 (Cauchy-Peano existence theorem). Let R be the rectangular region as defined earlier and let f ∈ C(R). Then ∃ϕ ∈ C1([τ − α, τ + α]) which satisfies the IVP

ϕ′(t) = f(t, ϕ(t)) on [τ − α, τ + α], ϕ(τ) = ξ

Here α = min{a, b/M}, with M and a, b defined as earlier.

Proof. Let εn = 1/n, n ∈ N. Then, by theorem 1.2.2, for each n ∈ N ∃ an εn-approximate solution of the IVP, say ϕn, with ϕn(τ) = ξ ∀n ∈ N, on the interval I = [τ − α, τ + α], where α = min{a, b/M}.

By the construction of each ϕn in theorem 1.2.2, we have

|ϕn(t) − ϕn(t̄)| ≤ M|t − t̄|, ∀t, t̄ ∈ I (i)


Thus, for t̄ = τ we have

|ϕn(t) − ϕn(τ)| ≤ M|t − τ|

=⇒ |ϕn(t) − ξ| ≤ M|t − τ| ≤ Mα ≤ b

=⇒ |ϕn(t)| ≤ |ϕn(t) − ξ| + |ξ| ≤ b + |ξ| (ii)

This is true ∀n ∈ N. Thus, the sequence {ϕn} is uniformly bounded by b + |ξ|. Again, (i) shows that the ϕn are equicontinuous. Hence, by Ascoli's lemma, there exists a subsequence {ϕnk} uniformly convergent on [τ − α, τ + α].

Let ϕnk → ϕ as k → ∞. As the ϕnk are continuous and {ϕnk} converges uniformly to ϕ, the limit ϕ must also be continuous on [τ − α, τ + α].

Now,

ϕn(t) = ξ + ∫_τ^t [f(s, ϕn(s)) + ∆n(s)] ds (iii)

where ∆n(s) = ϕ′n(s) − f(s, ϕn(s)) at those points where ϕ′n exists; we define ∆n(s) = 0 for all s at which ϕ′n does not exist.

Since ϕn is an εn-approximate solution on [τ − α, τ + α], |∆n(s)| ≤ εn. Again, as f is uniformly continuous on R and ϕnk → ϕ uniformly on [τ − α, τ + α], it follows that

lim_{k→∞} f(t, ϕnk(t)) = f(t, lim_{k→∞} ϕnk(t)) = f(t, ϕ(t)) (iv)

and the convergence is uniform. Thus, (iii) and (iv) give

ϕ(t) = lim_{k→∞} ϕnk(t) = lim_{k→∞} [ξ + ∫_τ^t {f(s, ϕnk(s)) + ∆nk(s)} ds] = ξ + ∫_τ^t f(s, ϕ(s)) ds (v)

Now, (v) gives ϕ(τ) = ξ and ϕ′(t) = f(t, ϕ(t)) (as f is a continuous function). Thus, ϕ is a solution of the given IVP.

Question 1.2.6. 1. Is the choice of the subsequence in the above proof necessary? Justify.

2. If uniqueness is assumed, will the choice be unnecessary? [Ref: Theory of ODEs by Coddington and Levinson, Chapter 1.]


Lecture 4

Theorem 1.2.7. Let f ∈ C(D) and (τ, ξ) ∈ D. Then ∃ a solution ϕ of the IVP on some t-interval containing τ in its interior.

Proof. D is a domain and, as per our conventions, D is open. Therefore, there exists r > 0 such that B((τ, ξ), r) ⊆ D, as (τ, ξ) ∈ D.

Let R be a closed rectangle centered at (τ, ξ) and contained in B((τ, ξ), r). Thus, f ∈ C(R). Hence, by the Cauchy-Peano theorem, ∃ϕ ∈ C1(I1) which solves the given IVP. Here I1 = [τ − α, τ + α] ⊆ [τ − a, τ + a].

1.3 Uniqueness of solutions

Example. Consider the ode x′ = ∛x with initial condition x(0) = 0. Let the interval for this problem be [0, 1]. For c ∈ [0, 1] define

ϕc(t) = 0, ∀t ∈ [0, c];
ϕc(t) = {2(t − c)/3}^{3/2}, otherwise.

Then, note that ϕ′c(t) = ∛(ϕc(t)) and ϕc(0) = 0. But ϕc1 ≠ ϕc2 for c1 ≠ c2. (See figure (1.6).)

Figure 1.6: ϕc are different for different values of c.

Thus, uniqueness is not guaranteed even though the right hand side is continuous. Hence, we require something more than continuity of f(t, x). One such sufficient condition is the Lipschitz condition.


1.3.1 Lipschitz condition

Definition 1.3.1. One variable : Let f : D → R, D ⊆ R. Then f is said to be Lipschitz continuous on D if ∃ a constant k > 0 such that

|f(x1) − f(x2)| ≤ k|x1 − x2|, ∀x1, x2 ∈ D

Two variables : Let f : D → R, D ⊆ R². Then f is said to be Lipschitz continuous in the second variable if ∃ a constant k > 0 such that

|f(t, x1) − f(t, x2)| ≤ k|x1 − x2|, ∀(t, x1), (t, x2) ∈ D

(Note, here the first variable is fixed.)

Remark. 1. Similarly, one may define Lipschitz continuity with respect to any particular variable.

2. Note that one may also define Lipschitz continuity with respect to all the variables together: f : D → R, D ⊆ Rm, is said to be Lipschitz continuous if ∃ a constant k > 0 such that

|f(x1) − f(x2)| ≤ k||x1 − x2||, ∀x1, x2 ∈ D

Example. 1. f(x) = sin x is Lipschitz continuous on R.

2. g(x) = √x is not Lipschitz continuous on [0, 1].

3. f(t, x) = (sin x)√t is Lipschitz continuous in x but not in t.

Notation : We will write f ∈ (C, Lip) on D to state that the two-variable function f is continuous on D and Lipschitz continuous in the second variable.
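These examples can be probed numerically by sampling difference quotients; every quotient is bounded by any valid Lipschitz constant, while an unbounded quotient rules Lipschitz continuity out. The sketch below (illustrative sample points, not from the notes) shows the quotient for sin x staying at or below 1 while the quotient for √x blows up near 0.

```python
import math

# Difference quotients |f(x1) - f(x2)| / |x1 - x2| are bounded above by any
# Lipschitz constant of f; unbounded quotients mean f is not Lipschitz.
def quotient(f, x1, x2):
    return abs(f(x1) - f(x2)) / abs(x1 - x2)

# sin is 1-Lipschitz on R (|cos| <= 1): every sampled quotient is <= 1.
pairs = [(0.0, 1.0), (-2.0, 3.0), (0.1, 0.1001)]
q_sin = max(quotient(math.sin, a, b) for a, b in pairs)
print(f"max sampled quotient for sin: {q_sin:.4f}")

# sqrt on [0, 1]: quotient at (0, x) equals 1/sqrt(x), which is unbounded
# as x -> 0+, so no single constant k can work on [0, 1].
for x in [1e-2, 1e-4, 1e-8]:
    print(f"quotient for sqrt at (0, {x}): {quotient(math.sqrt, 0.0, x):.1e}")
```

This also matches the earlier non-uniqueness example: x ↦ ∛x likewise fails to be Lipschitz near 0, which is exactly where uniqueness broke down.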

Theorem 1.3.2. Let us consider the IVP

x′(t) = f(t, x(t)) on I, x(τ) = ξ


Suppose f ∈ (C, Lip) on D with Lipschitz constant k. Let ϕ1, ϕ2 ∈ C1p on some interval (a, b) containing τ be ε1- and ε2-approximate solutions of the above IVP. Further assume that |ϕ1(τ) − ϕ2(τ)| ≤ δ for some fixed τ ∈ (a, b) and a fixed δ ≥ 0. If ε = ε1 + ε2, then for all t ∈ (a, b),

|ϕ1(t) − ϕ2(t)| ≤ δ e^{k|t−τ|} + (ε/k)(e^{k|t−τ|} − 1)

Proof. Consider t ∈ [τ, b). Since ϕ1, ϕ2 are ε1-, ε2-approximate solutions, we have

|ϕ′i(s) − f(s, ϕi(s))| ≤ εi (i = 1, 2) (1.1)

at all but finitely many points of [τ, b). Integrating from τ to t we get

|ϕi(t) − ϕi(τ) − ∫_τ^t f(s, ϕi(s)) ds| ≤ ∫_τ^t |ϕ′i(s) − f(s, ϕi(s))| ds ≤ εi(t − τ) (1.2)

Summing up, we get

|[ϕ1(t) − ϕ2(t)] − [ϕ1(τ) − ϕ2(τ)] − ∫_τ^t [f(s, ϕ1(s)) − f(s, ϕ2(s))] ds| ≤ ε(t − τ) (1.3)

where ε = ε1 + ε2. Let r(t) = |ϕ1(t) − ϕ2(t)|. Thus, from (1.2) and (1.3) we have

r(t) ≤ r(τ) + ∫_τ^t |f(s, ϕ1(s)) − f(s, ϕ2(s))| ds + ε(t − τ).

As f is Lipschitz on D,

r(t) ≤ r(τ) + k ∫_τ^t r(s) ds + ε(t − τ) (1.4)

Now, define R(t) = ∫_τ^t r(s) ds, t ∈ [τ, b). Thus, (1.4) becomes

R′(t) − kR(t) ≤ δ + ε(t − τ) (∵ r(τ) ≤ δ)

Multiplying both sides by e^{−k(t−τ)} and integrating from τ to t we get

e^{−k(t−τ)} R(t) ≤ (δ/k)(1 − e^{−k(t−τ)}) − (ε/k²) e^{−k(t−τ)} [1 + k(t − τ)] + ε/k²


Thus, we have

R(t) ≤ (δ/k)[e^{k(t−τ)} − 1] − (ε/k²)[1 + k(t − τ)] + (ε/k²) e^{k(t−τ)} (1.5)

Combining (1.4) and (1.5) we get

r(t) ≤ δ e^{k(t−τ)} + (ε/k)[e^{k(t−τ)} − 1], ∀t ∈ [τ, b)

A similar result is obtained on (a, τ]. Hence, we have

|ϕ1(t) − ϕ2(t)| ≤ δ e^{k|t−τ|} + (ε/k)[e^{k|t−τ|} − 1], ∀t ∈ (a, b)

Remark. 1. If ϕ1 = ϕ (an exact solution), then for any ε2-approximate solution ϕ2^{ε2} we have

|ϕ(t) − ϕ2^{ε2}(t)| ≤ δ e^{k|t−τ|} + (ε2/k)[e^{k|t−τ|} − 1], ∀t ∈ (a, b)

which implies ϕ2^{ε2} → ϕ as ε2 → 0, δ → 0.

2. We can improve this result by taking ϕ2 such that the initial value is satisfied at τ. Then δ = 0, which implies

|ϕ(t) − ϕ2^{ε2}(t)| ≤ (ε2/k)[e^{k|t−τ|} − 1], ∀t ∈ (a, b)

3. If δ = ε = 0, i.e. two exact solutions ϕ1, ϕ2 pass through the same initial point at t = τ, then

|ϕ1(t) − ϕ2(t)| ≤ 0 =⇒ ϕ1 = ϕ2

Thus, we have uniqueness of the solution of the IVP.
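The estimate can be sanity-checked numerically. The sketch below (an illustrative setup, not from the notes) takes f(t, x) = x with Lipschitz constant k = 1, the exact solution ϕ1(t) = e^t (so ε1 = 0 and δ = 0), and an Euler polygon as the ε2-approximate solution ϕ2, and verifies |ϕ1(t) − ϕ2(t)| ≤ (ε2/k)(e^{k(t−τ)} − 1) on [0, 1].

```python
import math

# f(t, x) = x, tau = 0, xi = 1; exact solution phi1(t) = e^t, k = 1.
h = 0.001
N = 1000                      # grid on [0, 1]
ts = [i * h for i in range(N + 1)]

# Euler polygon phi2: x_{j+1} = x_j + h * x_j.
xs = [1.0]
for j in range(N):
    xs.append(xs[j] * (1 + h))

# On each segment the defect |phi2'(t) - phi2(t)| <= h * max phi2 <= h * e^{1+h},
# so eps2 = h * e^{1+h} is a valid approximation level, and delta = 0.
eps2, k = h * math.exp(1 + h), 1.0

worst_ratio = 0.0
for t, x in zip(ts[1:], xs[1:]):
    err = abs(math.exp(t) - x)
    bound = (eps2 / k) * (math.exp(k * t) - 1)
    worst_ratio = max(worst_ratio, err / bound)

print(f"max error/bound ratio on the grid: {worst_ratio:.3f}")
```

The ratio stays below 1 everywhere on the grid, i.e. the actual Euler error sits comfortably inside the theorem's envelope.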

Lecture 5

Theorem 1.3.3. Let f ∈ (C, Lip) in D and (τ, ξ) ∈ D. If ϕ1 and ϕ2 are any two solutions of ϕ′(t) = f(t, ϕ(t)) on (a, b), a < τ < b, such that ϕ1(τ) = ϕ2(τ) = ξ, then ϕ1 = ϕ2.

Proof. Since ϕ1(τ) = ϕ2(τ) = ξ, we can take δ = 0 in theorem (1.3.2).


Also, as ϕ1, ϕ2 are solutions of ϕ′(t) = f(t, ϕ(t)), ε1 = ε2 = 0.

∴ |ϕ1(t) − ϕ2(t)| ≤ 0 =⇒ ϕ1(t) = ϕ2(t), ∀t ∈ (a, b) =⇒ ϕ1 = ϕ2

1.4 Method of successive approximations

Now, we will see a constructive proof of existence and uniqueness of the solution of the IVP.

Theorem 1.4.1 (Picard-Lindelöf). If f ∈ (C, Lip) on a rectangular region R, then the successive approximations ϕk exist on |t − τ| ≤ α as continuous functions and converge uniformly on this interval to the unique solution ϕ of (E) such that ϕ(τ) = ξ.

Proof. Consider the interval I1 = [τ − α, τ]; similar arguments will hold for I2 = [τ, τ + α].

Let us define

ϕ0(t) = ξ and ϕk+1(t) = ξ + ∫_τ^t f(s, ϕk(s)) ds, ∀k ∈ N ∪ {0}, ∀t ∈ [τ − α, τ + α] = I. (1.6)

Then ϕ0 ∈ C1(I1) and |ϕ0(t) − ξ| ≤ M(τ − t), ∀t ∈ I1, where M = max_{(t,x)∈R} |f(t, x)|.

Let us assume that ϕk ∈ C1(I1) and |ϕk(t) − ξ| ≤ M(τ − t), ∀t ∈ I1.

∴ ϕk+1 ∈ C1(I1) (by the definition of ϕk+1 and f ∈ C(R)).

Now, for t ∈ I1 (so t ≤ τ),

|ϕk+1(t) − ξ| = |∫_τ^t f(s, ϕk(s)) ds| ≤ ∫_t^τ |f(s, ϕk(s))| ds ≤ M(τ − t), ∀t ∈ I1

Thus, by the principle of Mathematical Induction, ∀k ∈ N ∪ {0}, ϕk ∈ C1(I1) and

|ϕk(t) − ξ| ≤ M(τ − t), ∀t ∈ I1 (1.7)


Let ∆k(t) = |ϕk+1(t) − ϕk(t)|, t ∈ I1. Then, we have

∆k(t) = |∫_τ^t f(s, ϕk(s)) ds − ∫_τ^t f(s, ϕk−1(s)) ds|
≤ ∫_t^τ |f(s, ϕk(s)) − f(s, ϕk−1(s))| ds
≤ c ∫_t^τ |ϕk(s) − ϕk−1(s)| ds, where c is the Lipschitz constant of f on R,
= c ∫_t^τ ∆k−1(s) ds (1.8)

Again, by (1.7) we have

∆0(t) = |ϕ1(t) − ϕ0(t)| ≤ M(τ − t) (1.9)

Thus, proceeding inductively we have

∆k(t) ≤ c^k ∫_t^τ · · · ∫_t^τ (k times) ∆0(s) ds = c^k M (τ − t)^{k+1}/(k + 1)! = (M/c) · c^{k+1}(τ − t)^{k+1}/(k + 1)!, ∀t ∈ I1 (1.10)

∴ ∑_{k=0}^∞ ∆k(t) ≤ (M/c) ∑_{k=0}^∞ c^{k+1}(τ − t)^{k+1}/(k + 1)! ≤ (M/c) e^{cα}, as |τ − t| ≤ α.

∴ ∑_{k=0}^∞ ∆k(t) is uniformly convergent for t ∈ I1. This implies ϕn(t) = ϕ0(t) + ∑_{k=0}^{n−1} [ϕk+1(t) − ϕk(t)] converges uniformly to a continuous limit function ϕ on I1.

Since (t, ϕk(t)) ∈ R, ∀k ∈ N ∪ {0} and t ∈ I1, we have (t, ϕ(t)) ∈ R, ∀t ∈ I1. Hence, f(s, ϕ(s)) is defined ∀s ∈ I1. Now,

|∫_t^τ [f(s, ϕ(s)) − f(s, ϕk(s))] ds| ≤ ∫_t^τ |f(s, ϕ(s)) − f(s, ϕk(s))| ds ≤ c ∫_t^τ |ϕ(s) − ϕk(s)| ds,

which tends to 0 uniformly as k → ∞. By (1.6), ϕ(t) = ξ + ∫_τ^t f(s, ϕ(s)) ds, t ∈ I1. Similarly, we have the result for I2, and hence we have

ϕ(t) = ξ + ∫_τ^t f(s, ϕ(s)) ds, ∀t ∈ I

Further, uniqueness follows from theorem (1.3.3).
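For f(t, x) = x with ϕ(0) = 1 the iteration (1.6) can be carried out exactly on polynomial coefficients; the sketch below (an illustrative instance, not from the notes) shows that the k-th Picard iterate is the k-th Taylor partial sum of the solution e^t.

```python
from fractions import Fraction

# Picard iteration for x' = x, x(0) = 1:
#   phi_0(t) = 1,  phi_{k+1}(t) = 1 + integral_0^t phi_k(s) ds.
# Represent phi_k as a list of polynomial coefficients [c0, c1, ...].
def picard_step(coeffs):
    # Integrate term by term, then set the constant term to xi = 1.
    integrated = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(coeffs)]
    integrated[0] = Fraction(1)
    return integrated

phi = [Fraction(1)]          # phi_0
for _ in range(5):
    phi = picard_step(phi)

# phi_5 is the 5th partial sum of e^t: coefficients 1/j! for j = 0..5.
print([str(c) for c in phi])  # ['1', '1', '1/2', '1/6', '1/24', '1/120']
```

Using exact rationals makes the convergence structure visible: each iteration appends one more term of the exponential series, mirroring the factorial decay in estimate (1.10).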

Lecture 6

1.5 Continuation of solutions

Recall what we have obtained till now. We have an IVP

ϕ′(t) = f(t, ϕ(t)), ∀t ∈ [τ − a, τ + a], ϕ(τ) = ξ. The solution curve lies within the shaded region

given below.

Picture will comeThus, even though we start with the domain [τ − a, τ + a], we obtain the solution in the interval

[τ − α, τ + α] subseteq[τ − a, τ + a] which is generally a proper subset. Infact in many cases the

solution might exist in a very small neighborhood of the initial point.

Example. Consider the ode

dy/dt = −cosec y =⇒ y = cos⁻¹(t + c),

where c ∈ R is the integration constant. Note that for the solution to be well-defined, |t + c| ≤ 1; i.e. even though the differential equation can be defined on the whole of R, the solution will exist only on the interval t ∈ [−(1 + c), 1 − c].

Thus, if y(0) = π/2 is the initial condition, then c = cos(π/2) = 0, which implies the solution will exist on [−1, 1].

Example. On the other hand, the ode dy/dt = 2ty has the general solution y = c e^{t²}, which exists ∀t ∈ R.

Thus, we need to see when we can extend the region of existence of the solution, and how far we can extend it.


Theorem 1.5.1. Let D be a domain in the (t, x) plane and let f ∈ C(D) be bounded on D. If ϕ is a solution of the ode ϕ′(t) = f(t, ϕ(t)) on an interval (a, b), then the limits ϕ(a + 0) = lim_{h→0+} ϕ(a + h) and ϕ(b − 0) = lim_{h→0+} ϕ(b − h) exist. Further, if (a, ϕ(a + 0)) or (b, ϕ(b − 0)) is in D, then the solution ϕ may be continued to the left of a or to the right of b, respectively.

Proof. Let M ∈ R+ be such that |f(t, x)| ≤ M, ∀(t, x) ∈ D. Also, assume ϕ passes through (τ, ξ) ∈ D with τ ∈ (a, b). Then,

ϕ(t) = ξ + ∫_τ^t f(s, ϕ(s)) ds, ∀t ∈ (a, b)

Thus, ∀t1, t2 ∈ (a, b), |ϕ(t1) − ϕ(t2)| ≤ |∫_{t1}^{t2} |f(s, ϕ(s))| ds| ≤ M|t2 − t1|.

Thus, as t1, t2 → a + 0 we have |ϕ(t1) − ϕ(t2)| → 0. Hence, by the Cauchy criterion of convergence, ϕ(a + 0) exists. Similarly, ϕ(b − 0) exists.

Now, if (a, ϕ(a + 0)) ∈ D, define

ϕ̄(t) = ϕ(t), ∀t ∈ (a, b); ϕ̄(a) = ϕ(a + 0).

Then ϕ̄ is a solution of the given ode of class C1 on [a, b). In fact,

ϕ̄(t) = ξ + ∫_τ^t f(s, ϕ̄(s)) ds, ϕ̄′+(a) = ϕ̄′(a + 0) = f(a, ϕ̄(a))

This ϕ̄ is called a continuation of the solution ϕ to [a, b). Similarly, ϕ can be extended to (a, b] if (b, ϕ(b − 0)) ∈ D.

Now, taking τ = a and ξ = ϕ̄(a), we have by the existence theorem a solution ϕ∗ ∈ C1 on some interval [a − α, a], α > 0, such that ϕ∗′(t) = f(t, ϕ∗(t)), ∀t ∈ [a − α, a], and ϕ∗(a) = ϕ̄(a).

Thus, defining ϕ̂ on [a − α, b) as

ϕ̂(t) = ϕ̄(t), if t ∈ [a, b); ϕ̂(t) = ϕ∗(t), if t ∈ [a − α, a],

we have a solution of the given ode on [a − α, b). Similarly, one can proceed at the right end point. Hence, we can extend the solution continuously to the left of a and to the right of b if (a, ϕ(a + 0)) ∈ D and (b, ϕ(b − 0)) ∈ D respectively.

Remark. 1. In the previous example of dy/dt = −cosec(y), the solution could be extended to the left and right end points of [−1, 1]. But that may not always be the case.


2. For example, dy/dt = y² has the solution ϕ(t) = −(t − 1)^{−1} defined on (−1, 1). But it cannot be extended to its right end point, as ϕ(t) → ∞ when t → 1−, so ϕ does not stay in any region D where f is bounded.
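This failure is visible numerically; the sketch below (plain Python, illustrative) verifies that ϕ(t) = −(t − 1)^{−1} satisfies y′ = y² away from t = 1 and then shows it escaping every bound before reaching t = 1, so no bounded region D can contain its graph up to the endpoint.

```python
# phi(t) = -1/(t - 1) solves y' = y^2 (check: phi' = 1/(t-1)^2 = phi^2)
# and tends to +infinity as t -> 1-, so (t, phi(t)) eventually leaves any
# region where f(t, y) = y^2 is bounded.
def phi(t):
    return -1.0 / (t - 1.0)

def central_diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

# The ODE is satisfied away from t = 1 ...
for t in [0.0, 0.5, 0.9]:
    assert abs(central_diff(phi, t) - phi(t) ** 2) < 1e-3

# ... but the solution exceeds every bound M before t reaches 1.
for M in [10, 100, 1000]:
    t = 1 - 1.0 / M           # phi(1 - 1/M) = M
    print(f"phi({t:.4f}) = {phi(t):.0f}")
```

Contrast this with the cosec example above, where the one-sided limits at the endpoints exist and continuation is possible.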

Lecture 7

1.6 System of differential equations of first order

Consider a first order ordinary differential equation given by

ϕ′1(t) = f1(t, ϕ1(t)), ∀t ∈ I

where I ⊆ R is an interval. Now, if we consider n-number of such first order odes,

ϕ′i(t) = fi(t, ϕ1(t), · · · , ϕn(t)),∀t ∈ I

where fi ∈ C(D), i = 1, 2, · · · , n, where D is a domain in R1+n and each fi is a function of

(t, x1, x2, · · · , xn), then we have a system of n-ordinary differential equations of first order.

Question 1.6.1. Let D ⊆ R^{1+n} be an open connected set and fi ∈ C(D), 1 ≤ i ≤ n. The problem is to find n differentiable functions ϕ1, ϕ2, · · · , ϕn defined on a real t-interval I such that

1. (t, ϕ1(t), · · · , ϕn(t)) ∈ D, ∀t ∈ I.

2. ϕ′i(t) = fi(t, ϕ1(t), · · · , ϕn(t)), ∀t ∈ I, ∀1 ≤ i ≤ n.

Remark. 1. In compact notation this can be written as

(a) ∀t ∈ I, (t, ϕ(t)) ∈ D, where ϕ(t) = (ϕ1(t), · · · , ϕn(t)).

(b) dϕ(t)/dt = F(t, ϕ(t)), where F(t, ϕ(t)) = (f1(t, ϕ(t)), f2(t, ϕ(t)), · · · , fn(t, ϕ(t)))ᵗ.

2. This problem is called a system of n-ordinary differential equations of the first order.

3. If such an interval I and functions (ϕ1(t), · · · , ϕn(t)) exist then the set of functions

(ϕ1(t), · · · , ϕn(t)) is called a solution of the system on I.


4. Let (τ, ξ1, · · · , ξn) ∈ D. The initial value problem consists of finding a solution (ϕ1, · · · , ϕn) of

the system on an interval I containing τ such that ϕi(τ) = ξi,∀1 ≤ i ≤ n.

Example. Let

x1′ = t + x1 + x2^2 + sin x3
x2′ = cos(t x1^2 − x2 x3)
x3′ = t^2

∀t ∈ (−10, 10). Then, these 3 odes represent a system of first order odes. If we introduce the vector notation, then we have X′ = F(t, X), ∀t ∈ (−10, 10), where

X = (x1, x2, x3)ᵗ, X′ = (x1′, x2′, x3′)ᵗ, F(t, X) = (f1(t, X), f2(t, X), f3(t, X))ᵗ = (t + x1 + x2^2 + sin x3, cos(t x1^2 − x2 x3), t^2)ᵗ

Remark. 1. We will use |.| to denote the l1 norm and ||.|| to denote the l2 norm on R^{1+n}.

2. Note, using these two norms or any other equivalent norm one may obtain equivalent definitions of Lipschitz continuity and ε-approximate solutions of a system of odes. Hence, all the theorems already proved for a one variable ode are valid for a system of n equations also.

3. A special case arises in the study of the system of odes when we consider the right hand side

functions as linear functions.

4. We will study this extensively later on.

1.7 Higher order ODEs as system of first order ODEs

One interesting fact is that an m-th order ode can be expressed as a system of m first order odes.

Example. Consider the equation

d^3y/dt^3 + cos t · d^2y/dt^2 + e^{ty} · dy/dt + (sin(ty) + y^2 + t) = 0


This is a third order, first degree ode. Let us introduce a new set of dependent variables. Let

y1 = y
y2 = dy1/dt = dy/dt
y3 = dy2/dt = d^2y/dt^2

⟹ dy3/dt = d^3y/dt^3 = − cos t · d^2y/dt^2 − e^{ty} · dy/dt − (sin(ty) + y^2 + t)

⟹ dy3/dt = − cos t · y3 − e^{t y1} · y2 − (sin(t y1) + y1^2 + t)

Thus, we can express this as the system

d/dt (y1, y2, y3)ᵗ = (y2, y3, − cos t · y3 − e^{t y1} y2 − (sin(t y1) + y1^2 + t))ᵗ
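The reduction above can be checked numerically. The sketch below (assuming SciPy is available; the initial data and interval are arbitrary illustrative choices) integrates the equivalent first order system and confirms that the derivative of the first component matches the second component:

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(t, Y):
    """First order system equivalent to y''' + cos(t)y'' + e^{ty}y' + (sin(ty) + y^2 + t) = 0."""
    y1, y2, y3 = Y
    return [y2,
            y3,
            -np.cos(t) * y3 - np.exp(t * y1) * y2 - (np.sin(t * y1) + y1 ** 2 + t)]

# illustrative initial data: y(0) = 0, y'(0) = 1, y''(0) = 0
sol = solve_ivp(F, (0.0, 1.0), [0.0, 1.0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-11)

# check y1' = y2 at t = 0.5 by a central difference
h = 1e-5
d = (sol.sol(0.5 + h)[0] - sol.sol(0.5 - h)[0]) / (2 * h)
print(d, sol.sol(0.5)[1])
```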

Example. Consider the equation

t^2 d^4y/dt^4 + 2t d^2y/dt^2 + cos t · dy/dt + e^t = 0

Then, introducing the variables

y1 = y, y2 = dy1/dt, y3 = dy2/dt, and y4 = dy3/dt,

we get

dy4/dt = −(1/t^2)[2t y3 + cos t · y2 + e^t]

and the system can be written as

d/dt (y1, y2, y3, y4)ᵗ = (y2, y3, y4, −(1/t^2)[2t y3 + cos t · y2 + e^t])ᵗ,

i.e. a linear system whose coefficient matrix has rows (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) and (0, − cos t/t^2, −2/t, 0), with forcing vector (0, 0, 0, −e^t/t^2)ᵗ.


In vector notation, this becomes

dY/dt (t) = A(t)Y(t) + b(t)

where Y(t) = (y1, y2, y3, y4)ᵗ, A(t) is the 4 × 4 matrix with rows (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (0, − cos t/t^2, −2/t, 0), and b(t) = (0, 0, 0, −e^t/t^2)ᵗ.

Example. Consider another ode

4 d^3y/dt^3 − 6 d^2y/dt^2 + 7 dy/dt + 6y + 8t = 0

Then, dividing by 4, we have the system

dY/dt = AY + b, where Y = (y1, y2, y3)ᵗ, A is the 3 × 3 matrix with rows (0, 1, 0), (0, 0, 1), (−3/2, −7/4, 3/2), and b = (0, 0, −2t)ᵗ.

Remark. 1. This is a linear system of first order odes with constant coefficients. These types of systems are by far the easiest to solve.

2. The previous example was also a linear system, but with variable coefficients.

Thus, we have found a way to relate the solutions of a higher order ode with those of a system of first order odes, the theory of which will be covered in chapter 4 later on.

Lecture 8

Before we move on to the vast theory of linear systems of odes, let's look into the important question of how the initial conditions of an IVP influence the solutions. For linear systems it is easier to visualize these dependencies.


1.8 Dependency on Initial conditions

Consider an ode with Lipschitz continuous (in 2nd variable) f .

ϕ′(t) = f(t, ϕ(t)) on some interval I

with initial condition

ϕ(t0) = ξ

Then, the solution must satisfy

ϕ(t) = ξ + ∫_{t0}^{t} f(s, ϕ(s)) ds

Thus, the solution depends on the initial parameters ξ and t0.

Example. y′ = y with initial condition y(t0) = y0 has the solution y(t) = y0 e^{t−t0}.
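For this example the dependence on the initial value is completely explicit: perturbing ξ by ε changes the solution by exactly ε e^{t−t0}, which saturates the Gronwall-type bound ε e^{k|t−t0|} with Lipschitz constant k = 1. A small check (the numbers are illustrative):

```python
import math

def phi(t, t0, xi):
    """Solution of y' = y, y(t0) = xi."""
    return xi * math.exp(t - t0)

t0, xi0, eps = 0.0, 1.0, 1e-3
for t in (0.5, 1.0, 2.0):
    gap = abs(phi(t, t0, xi0 + eps) - phi(t, t0, xi0))
    bound = eps * math.exp(abs(t - t0))   # Gronwall-type bound with k = 1
    print(t, gap, bound)
```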

Remark. 1. Thus, we will consider the solution not only as a function of t, but also as a

function of two more variables, i.e. t0 and ξ (initial conditions).

2. We are interested in how the solution behaves with respect to all three or any of these

variables, in particular whether the solution varies continuously depending on the variables or

not.

Let D be a domain in R^{1+n} [(t, x) space] and f ∈ (C, Lip) in D. Let Ψ be a solution of the ode

x′(t) = f(t, x(t)) on I

Thus, we have (t, Ψ(t)) ∈ D, ∀t ∈ I. By the uniqueness theorem, ∃ a unique solution passing through any fixed point (τ, ξ) ∈ D close enough to the given solution. Here, ξ0 = Ψ(τ). In Figure 1.7, the blue curve, lying within the butterfly region on both sides of (τ, ξ0), is the solution curve Ψ(t). Now, consider the star (τ, ξ), very close to (τ, ξ0), as a new initial condition.

Claim : We will have a unique solution on the interval I for this initial condition (τ, ξ).

Remark. 1. This is different from the extension of solutions on I.


Figure 1.7: Existence of solutions of an IVP

2. In the previous extension theorems we extended the solutions to the left of (τ − α) and right

of (τ + α), but did not consider outside the butterfly region in [τ − α, τ + α].

Lecture 9

Theorem 1.8.1. Let f ∈ (C, Lip) in a domain D in R^{1+n} [(t, x) space] and let ψ be a solution of the ode

x′(t) = f(t, x(t)) on I = [a, b].

Then, ∃ δ > 0 such that for every fixed (τ, ξ) ∈ U, where U is an open δ-rectangular (l1) neighborhood of (τ, ψ(τ)), ∃ a unique solution ϕ of the above ode on I with ϕ(t = τ, τ, ξ) = ξ. Moreover, ϕ ∈ C on V := {(t, τ, ξ) | t ∈ (a, b), (τ, ξ) ∈ U}, i.e. V := (a, b) × U. [Here ϕ is a function of t, τ and ξ.]

Remark. Refer to the previous picture with ξ0 = ψ(τ). Then, this theorem guarantees the existence of a rectangular neighborhood U = (a, b) × (ψ(τ) − δ, ψ(τ) + δ) such that for every initial condition (τ, ξ) ∈ U, ∃ a unique solution.

Proof. Let δ1 > 0 be such that the region U1 defined by

U1 : t ∈ I, |x − ψ(t)| ≤ δ1, i.e. U1 = {(t, x) | t ∈ I, |x − ψ(t)| ≤ δ1}


is a subset of D. Then, let δ > 0 be such that δ < e^{−k(b−a)}δ1, where k is the Lipschitz constant of f. Let U be the set U = {(τ, ξ) | a < τ < b, |ξ − ψ(τ)| < δ}. Now, for any (τ, ξ) ∈ U, ∃ a solution ϕ of the ode locally, passing through the initial point (τ, ξ). Since locally ϕ is a solution of the ode,

∴ ϕ(t, τ, ξ) = ξ + ∫_τ^t f(s, ϕ(s, τ, ξ)) ds, ∀t for which ϕ exists.

For t ∈ I,

ψ(t) = ψ(τ) + ∫_τ^t f(s, ψ(s)) ds, ∵ ψ is a solution. (1.11)

As proven earlier, two solutions of the same ode satisfy the inequality

|ϕ(t, τ, ξ) − ψ(t)| ≤ |ξ − ψ(τ)|e^{k|t−τ|} < δe^{k|t−τ|} ≤ δe^{k(b−a)} < δ1

This shows that (t, ϕ(t, τ, ξ)) ∈ U1 for all t for which ϕ is defined. Thus, we can extend the solution to the whole interval I, using the idea of the extension theorem. Construct a sequence of functions defined by

ϕ0(t, τ, ξ) = ψ(t) + ξ − ψ(τ) (1.12)

ϕ_{j+1}(t, τ, ξ) = ξ + ∫_τ^t f(s, ϕ_j(s, τ, ξ)) ds, j ∈ N ∪ {0} (1.13)

Then, for (τ, ξ) ∈ U, |ϕ0(t, τ, ξ) − ψ(t)| = |ξ − ψ(τ)| < δ < δ1, which shows that (t, ϕ0(t, τ, ξ)) ∈ U1 for t ∈ I. Clearly, ϕ0 ∈ C(V) and from (1.11), (1.12) and (1.13) we have

|ϕ1(t, τ, ξ) − ϕ0(t, τ, ξ)| = |∫_τ^t {f(s, ϕ0(s, τ, ξ)) − f(s, ψ(s))} ds| ≤ k|ξ − ψ(τ)||t − τ|

∴ |ϕ1(t, τ, ξ) − ψ(t)| ≤ (1 + k|t − τ|)|ξ − ψ(τ)| < e^{k|t−τ|}|ξ − ψ(τ)| < e^{k(b−a)}δ < δ1

provided t ∈ I, (τ, ξ) ∈ U. Thus, (t, ϕ1(t, τ, ξ)) ∈ U1 and ϕ1 ∈ C(V). Using mathematical induction, one can show that if (t, ϕ0), (t, ϕ1), · · · , (t, ϕj) all lie in U1 and ϕ0, · · · , ϕj are continuous on V, then

|ϕ_{j+1}(t, τ, ξ) − ϕ_j(t, τ, ξ)| ≤ (k^{j+1}|t − τ|^{j+1}/(j + 1)!) |ξ − ψ(τ)| if t ∈ I and (τ, ξ) ∈ U. (1.14)

Thus, we have

|ϕ_{j+1}(t, τ, ξ) − ψ(t)| ≤ Σ_{α=0}^{j} |ϕ_{α+1}(t, τ, ξ) − ϕ_α(t, τ, ξ)| + |ϕ0(t, τ, ξ) − ψ(t)|

≤ [1 + Σ_{α=0}^{j} k^{α+1}|t − τ|^{α+1}/(α + 1)!] |ξ − ψ(τ)| < e^{k|t−τ|}|ξ − ψ(τ)| < e^{k(b−a)}δ < δ1

⟹ (t, ϕ_{j+1}(t, τ, ξ)) ∈ U1.

Also, by (1.12) and (1.13), ϕ_{j+1} ∈ C(V). Thus, by mathematical induction, we have ∀j ∈ N ∪ {0}, (t, ϕ_j(t, τ, ξ)) ∈ U1 and ϕ_j ∈ C(V). Hence, by (1.14), the ϕ_j converge uniformly on V to ϕ, which implies ϕ is continuous on V.


Chapter 2

System of first order ordinary differential

equations

Lecture 10

References : This chapter is mainly based on the following two references.

1. Calculus II by Tom Apostol, Chapter : Systems of Differential Equations.

2. Elementary Differential Equations and Boundary Value Problems by Boyce and DiPrima, Chapter 7 : Systems of first order linear equations, Ninth Edition, Wiley Publications.

2.1 System of First order ODEs

A general system of n first order odes is of the form

(S1) dy1/dt = f1(t, y1, · · · , yn)
     dy2/dt = f2(t, y1, · · · , yn)
     ⋮
     dyn/dt = fn(t, y1, · · · , yn)

We will assume fi, i = 1, 2, · · · , n, to be continuous in all the variables, i.e. in t and y1, · · · , yn, and the fis to be Lipschitz continuous with respect to the dependent variables yjs.

Remark. • This does not mean that f1 is Lipschitz in only y1 variable.

• This means that each fi is Lipschitz in all the variables yj, j = 1, · · · , n.

• It is very difficult to solve systems like (S1). Hence, we will restrict ourselves to a very

specific case where fis are linear functions in yjs. Thus, we will be working (for now only)

with systems of n-linear ordinary differential equations of first order.


2.2 Systems of linear odes

We are going to consider the following system

y1′ = p11(t)y1 + p12(t)y2 + · · · + p1n(t)yn + q1(t)
y2′ = p21(t)y1 + p22(t)y2 + · · · + p2n(t)yn + q2(t)
⋮
yn′ = pn1(t)y1 + pn2(t)y2 + · · · + pnn(t)yn + qn(t)

where the yis are the unknown dependent functions, and the pijs and qis are given functions of t, defined on some interval J.

Recall that a linear n-th order ode can be converted to a system of n linear first order odes. Thus,

we can treat n-th order linear odes as special cases of system of linear first order odes. Before we do

that, we would like to introduce some abstract concepts.

Matrix functions

Let J ⊆ R be an interval. We define a function P : J → Mnm(R) as

P(t) = [pij(t)]_{n×m},

the n × m matrix whose (i, j)-th entry is pij(t), where n, m ∈ N and the pij : J → R are functions of t. P is called a matrix function in 1 variable.


Integral of a matrix function

If P(t) = [pij(t)]_{n×m} is a matrix function defined on an interval J, then P is said to be integrable over J iff each pij is integrable over J, i = 1, · · · , n, j = 1, · · · , m. The integral is given by

∫_J P(t) dt = [∫_J pij(t) dt]_{n×m}

Derivatives of a matrix function

We define it in a similar way as

P′(t) = [pij′(t)]_{n×m}

Note :- All basic differentiation rules for sums and products of differentiable functions hold for matrix functions also. Further, if P and Q are two square matrix functions of the same size, then (PQ)′ = P′Q + PQ′ (the order of the factors must be respected, since matrices need not commute).

Exponential of a matrix

For this we will work with square matrices only. Let A = [aij] be an n × n matrix over R or C. We define the exponential of A as

e^A = Σ_{k=0}^{∞} (1/k!) A^k

Note : For this we require that the power series of matrices (rhs) converges.

Norm of a matrix

Let A = [aij] be an n × n matrix over R or C. We will consider the following norm

||A|| = Σ_{j=1}^{n} Σ_{i=1}^{n} |aij|

Remark. This is the l1 norm for matrices. There are different norms for matrices, but the results

that we will prove here will also hold for all the other equivalent norms.


Fundamental properties of norms

1. ||A+B|| ≤ ||A||+ ||B||, triangle inequality.

2. ||AB|| ≤ ||A||||B||, (this is slightly different from scalar norms).

3. ||cA|| = |c| ||A||.

Convergence of series of matrices

Let {Ak} be an infinite sequence of matrices of order m × n. Let a_{ij}^{(k)} be the (i, j)-th entry of Ak. Then we will say the series of matrices

Σ_{k=1}^{∞} Ak

is convergent if all the mn scalar series

Σ_{k=1}^{∞} a_{ij}^{(k)}, 1 ≤ i ≤ m, 1 ≤ j ≤ n,

converge, and we write

Σ_{k=1}^{∞} Ak = [Σ_{k=1}^{∞} a_{ij}^{(k)}]_{m×n}

Note : This is just componentwise convergence of the series.

An easy test for convergence of series of matrices

If Σ_{k=1}^{∞} ||Ak|| converges, then so does Σ_{k=1}^{∞} Ak.

Remark. Defining A^0 = I, we have a well defined series Σ_{k=0}^{∞} A^k/k! for every square matrix A. Further, we have the inequality

||A^k/k!|| ≤ (1/k!)||A||^k, ∀k ∈ N,

so the series Σ_{k=0}^{∞} ||A^k/k!|| converges (to e^{||A||}), and hence by the above test Σ_{k=0}^{∞} A^k/k! converges for every square matrix A.


Differential equation satisfied by etA

Claim : Let E(t) = etA. Then E satisfies the matrix differential equation E ′(t) = E(t)A = AE(t).

Verification : Note E ′(t) = ddt

(etA). As the power series on the rhs is convergent ∀t ∈ R, we can

do term by term differentiation to obtain E ′(t) = E(t)A = AE(t).

Question 2.2.1. 1. Show that A commutes with E(t).

2. Let D be a diagonal matrix. Show that eD is also a diagonal matrix. What about etD?

3. Let A be a diagonalisable matrix, i.e. there exist a diagonal matrix D and an invertible matrix P such that D = PAP^{−1}. Show that e^A = P^{−1}e^D P. What is the relationship between e^{At} and e^{Dt}?

Lecture 11

2.3 Uniqueness of the solution of the system of differential

equations

Statement :- Let A and B be given n × n constant matrices. Then the only n × n matrix function F satisfying the initial value problem

F′(t) = AF(t), F(0) = B, for t ∈ R, is F(t) = e^{tA}B.

Proof :- To prove that e^{tA}B is a solution, note that d/dt (e^{tA}B) = (d/dt e^{tA})B = Ae^{tA}B, since matrix multiplication is associative. Therefore, e^{tA}B is a solution of the given ode.

To prove the uniqueness, let F be any solution of the given ode system and set G(t) = e^{−tA}F(t). Then G′(t) = −Ae^{−tA}F(t) + e^{−tA}F′(t) = e^{−tA}(F′(t) − AF(t)) = 0 ⟹ G(t) = G(0) = B ⟹ F(t) = e^{tA}B.
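A numerical sanity check of the statement (assuming SciPy; A and B are arbitrary test matrices): F(t) = e^{tA}B satisfies F(0) = B, and a central difference approximation of F′ agrees with AF.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -1.0]])
B = np.array([[1.0, 0.0], [3.0, 1.0]])

F = lambda t: expm(t * A) @ B

t, h = 0.7, 1e-6
Fprime = (F(t + h) - F(t - h)) / (2 * h)   # approximates F'(t)
print(np.max(np.abs(F(0.0) - B)))          # initial condition F(0) = B
print(np.max(np.abs(Fprime - A @ F(t))))   # F' = A F
```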

Question 2.3.1. 1. For any two square matrices A,B of same order such that they commute,

show that eA+B = eAeB. Will the relation hold if they don’t commute?


2. Let A be an n × n matrix such that A^{m+1} = 0 for some m ∈ N. Then,

e^A = I + Σ_{k=1}^{∞} (1/k!)A^k = I + Σ_{k=1}^{m} (1/k!)A^k.

3. Let A be an n× n strictly upper triangular matrix, i.e. aij = 0,∀i ≥ j. Then, ∃m ∈ N such

that Am+1 = 0.

4. For a general square matrix A, it is difficult to obtain e^{At}. Using the Cayley-Hamilton theorem, Putzer gave a procedure to obtain e^A. This can be seen in the book Calculus II by Tom Apostol, Chapter 7, pg 206.
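Items 2 and 3 can be verified directly: for a strictly upper triangular (hence nilpotent) matrix the exponential series terminates after finitely many terms. A small check with an arbitrary 3 × 3 example (assuming SciPy for the reference value):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])   # strictly upper triangular, so A^3 = 0

finite_sum = np.eye(3) + A + A @ A / 2.0   # e^A = I + A + A^2/2 since A^3 = 0
print(np.max(np.abs(A @ A @ A)))           # nilpotency
print(np.max(np.abs(finite_sum - expm(A))))
```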

Remark. Solving the system of first order odes F ′(t) = AF (t), F (0) = B directly, using the

exponential form F (t) = eAtB is difficult for a general n× n matrix as calculating the exponential

might be a tough task.

So we explore the properties of the system a bit more and see whether we can use those results.

Theorem 2.3.2 (Principle of superposition). Consider a general homogeneous system of n linear

first order odes given by

X ′(t) = P (t)X(t) (2.1)

where P(t) = [pij(t)]_{n×n} is an n × n matrix function of t. If X and Y are two solutions of (2.1), then any linear combination of X and Y will again be a solution of (2.1).

Proof. Clearly, X ′(t) = P (t)X(t) and Y ′(t) = P (t)Y (t) which implies

(c1X + c2Y )′(t) = P (t)(c1X + c2Y )(t) for c1, c2 ∈ R.

Theorem 2.3.3. If X^{(i)}, i = 1, 2, · · · , n, are linearly independent solutions of (2.1), then any solution Y of (2.1) can be uniquely expressed as a linear combination of the X^{(i)}s. In other words, the set S of all solutions of (2.1) forms a vector space with respect to function addition and scalar multiplication, and the dimension of S is at most n for an n × n system.


Proof. Using the principle of superposition, it is very easy to show that S is a vector space. To prove that its dimension is at most n, we require the concept of the Wronskian. The proof is done in the upcoming sections for linear homogeneous systems with constant and variable coefficients separately.

Definition 2.3.4 (Wronskian). Let's consider the homogeneous system of first order linear ordinary differential equations given by (2.1), and let X^{(1)}, X^{(2)}, · · · , X^{(n)} be n solutions of (2.1). Then we define the Wronskian of the n solutions as the determinant

W[X^{(1)}, X^{(2)}, · · · , X^{(n)}](t) = det[X_i^{(j)}(t)]_{i,j=1,···,n},

the n × n determinant whose (i, j)-th entry is X_i^{(j)}(t), the i-th component of the j-th solution.

Remark. The Wronskian maps n solutions, for each t, to a scalar given by the determinant value. If we fix n solutions and vary t over an interval I, then the Wronskian of these n functions can be considered as a function of t.

Definition 2.3.5 (Linearly independent solutions). If W[X^{(1)}, X^{(2)}, · · · , X^{(n)}](t0) ≠ 0 for some fixed t0, then we say the solutions are linearly independent at t0. If the Wronskian is non-zero for all t ∈ I, then we say the solutions are linearly independent on the whole of I.

Definition 2.3.6 (Fundamental set of solutions). Any collection of n solutions of the n × n system (2.1) which are linearly independent over an interval I is said to be a fundamental set of solutions for the system over the interval I.

Theorem 2.3.7 (Abel’s theorem). If x(i), i = 1, 2, · · · , n are linearly independent solutions of (2.1)

on an interval I = (α, β), then the Wronskian W[X(1), X(2), · · · , X(n)

](t) is either identically zero

or else never vanishes on I.

Proof. Expanding the Wronskian determinant along the first row, W = Σ_{i=1}^{n} X_1^{(i)} Y_i with Y_i the corresponding cofactors, and differentiating, we get

dW/dt = Σ_{i=1}^{n} (dX_1^{(i)}/dt) Y_i + Σ_{j=1}^{n} X_1^{(j)} (dY_j/dt)

Using the equation (2.1) to substitute for the derivatives, we obtain

dW/dt = [Σ_{i=1}^{n} p_{ii}(t)] W(t) ⟹ W(t) = c exp(∫ trace(P(t)) dt)

where c is the constant of integration. If for some t0 ∈ I, W(t0) = 0, then c = 0, i.e. W(t) = 0, ∀t ∈ I.
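Abel's formula is easy to verify numerically. In the sketch below (assuming SciPy; the 2 × 2 coefficient matrix is an illustrative choice with trace P(t) = t), two solutions are integrated from the standard basis vectors and the Wronskian at time T is compared with W(0) exp(∫_0^T trace P(s) ds) = e^{T²/2}:

```python
import numpy as np
from scipy.integrate import solve_ivp

P = lambda t: np.array([[0.0, 1.0], [-1.0, t]])   # trace P(t) = t
rhs = lambda t, x: P(t) @ x

T = 1.0
X1 = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-12).y[:, -1]
X2 = solve_ivp(rhs, (0.0, T), [0.0, 1.0], rtol=1e-10, atol=1e-12).y[:, -1]

W_T = np.linalg.det(np.column_stack([X1, X2]))
predicted = np.exp(T ** 2 / 2.0)   # W(0) = det(I) = 1, and ∫_0^T s ds = T²/2
print(W_T, predicted)
```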

2.3.1 Existence of Fundamental set of solutions

Theorem 2.3.8. If X^{(i)}, i = 1, 2, · · · , n, are the n solutions of (2.1) on an interval I = (α, β) corresponding to the initial conditions X^{(i)}(t0) = e^{(i)}, where {e^{(1)}, e^{(2)}, · · · , e^{(n)}} is the standard basis of R^n, then {X^{(1)}, · · · , X^{(n)}} forms a fundamental set of solutions of (2.1).

Proof. As the Wronskian for a homogeneous linear system either vanishes everywhere or never vanishes on the interval I, the given solutions X^{(i)}, i = 1, 2, · · · , n, are linearly independent on I iff they are linearly independent at t = t0. Now,

W(t0) = det[e^{(1)} e^{(2)} · · · e^{(n)}] = |I_n| = 1 ≠ 0.

Thus, X^{(i)}, i = 1, 2, · · · , n, form a fundamental set of solutions.
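This construction can be carried out numerically: integrate (2.1) once per standard basis vector and stack the results as columns. In the sketch below (assuming SciPy; P(t) is an illustrative trace-free matrix, so by Abel's theorem the Wronskian is identically 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

P = lambda t: np.array([[0.0, 1.0], [-np.sin(t), 0.0]])   # trace P(t) = 0
rhs = lambda t, x: P(t) @ x

# one solution per standard basis vector e^(i)
cols = [solve_ivp(rhs, (0.0, 2.0), e, rtol=1e-10, atol=1e-12, dense_output=True)
        for e in ([1.0, 0.0], [0.0, 1.0])]

def W(t):
    """Wronskian of the two solutions at time t."""
    return np.linalg.det(np.column_stack([c.sol(t) for c in cols]))

print(W(0.0))                                               # det(I) = 1
print(max(abs(W(t) - 1.0) for t in np.linspace(0.0, 2.0, 41)))
```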

Lecture 12

2.3.2 Linear Differential operators (constant coefficients)

Let us define a linear differential operator with constant coefficients as follows

Lc ≡ d^n/dx^n + a1 d^{n−1}/dx^{n−1} + ... + a_{n−1} d/dx + a_n


Then any linear ode

d^n y/dx^n + a1 d^{n−1}y/dx^{n−1} + ... + a_{n−1} dy/dx + a_n y = 0

can be written as Lc(y) = 0.

Question 2.3.9. Does this linear differential operator Lc have any relation with linear transformations on vector spaces?

Definition 2.3.10 (Linear operator). Let V and W be two vector spaces over a field F and

L : V → W be a function. L is said to be a linear operator from V to W if for every v1, v2 ∈ V and

any c ∈ F ,

L(cv1 + v2) = cL(v1) + L(v2).

Coming back to the differential operator

Lc ≡ d^n/dx^n + a1 d^{n−1}/dx^{n−1} + ... + a_{n−1} d/dx + a_n

and considering Lc : C^n → C^0, we have

Lc(cf + g) = cLc(f) + Lc(g), ∀f, g ∈ C^n, ∀c ∈ F

Thus, Lc is a linear operator on Cn. The kernel of Lc is the solution set of the equation Lc(y) = 0,

i.e. the solution set of the homogeneous differential equation

Lc(y) = d^n y/dx^n + a1 d^{n−1}y/dx^{n−1} + ... + a_{n−1} dy/dx + a_n y = 0

Dimension of kernel : Note that V = ker(Lc) is a vector subspace of C^n. Hence, it has a basis. Is the kernel of the linear operator Lc finite dimensional? Here, we need the help of the Wronskian. If ker(Lc) were infinite dimensional, then there would exist linearly independent functions yi ∈ ker(Lc) for i = 1, 2, ..., n + 1. Now, form the Wronskian of these functions, the (n + 1) × (n + 1) determinant W(y1, ..., y_{n+1}) whose k-th row is

R_k = (y1^{(k−1)}, y2^{(k−1)}, ..., y_{n+1}^{(k−1)}), k = 1, 2, ..., n + 1.

Using the row operation R′_{n+1} = R_{n+1} + Σ_{i=1}^{n} a_i R_{n+1−i} (which leaves the value of the determinant unchanged), the last row becomes (Lc(y1), Lc(y2), ..., Lc(y_{n+1})). But, we know Lc(yi) = 0, ∀i = 1, 2, ..., n + 1. Hence, W(y1, ..., y_{n+1}) ≡ 0. Since, by the uniqueness theorem, solutions of Lc(y) = 0 whose Wronskian vanishes identically are linearly dependent, this contradicts the choice of the yi. Thus, ker(Lc) is finite dimensional and dim(ker(Lc)) ≤ n.

Now, taking the hint from dy/dx + cy = 0, we look for solutions of the form y = e^{mx} for our linear homogeneous ode. This gives (m^n + Σ_{i=1}^{n} a_i m^{n−i}) e^{mx} = 0, which in turn gives us the auxiliary equation

m^n + Σ_{i=1}^{n} a_i m^{n−i} = 0

This gives us exactly n linearly independent solutions (after taking care of the repeated roots of the auxiliary equation). Hence, dim(ker(Lc)) = n, and the basis of the kernel of the linear differential operator Lc contributes to the complementary function. In fact, the complementary function is the linear combination of the basis elements of ker(Lc). This takes care of the homogeneous linear ode with constant coefficients.
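The recipe reduces the ode to root finding. For instance (an illustrative equation, not one from the notes), y''' − 6y'' + 11y' − 6y = 0 has auxiliary equation m^3 − 6m^2 + 11m − 6 = 0 with roots 1, 2, 3, so {e^x, e^{2x}, e^{3x}} is a basis of ker(Lc):

```python
import numpy as np

# auxiliary equation of y''' - 6y'' + 11y' - 6y = 0
roots = np.roots([1.0, -6.0, 11.0, -6.0])
print(np.sort(roots.real))   # three distinct real roots

# spot-check that y = e^{2x} lies in ker(Lc):
# substituting y = e^{mx} gives the auxiliary polynomial at m = 2
m = 2.0
print(m ** 3 - 6 * m ** 2 + 11 * m - 6)
```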

Question 2.3.11. What about the non-homogeneous linear odes with constant coefficients?

Answer. For this we have the particular integral. But, to find out where this particular integral comes from, we need some more linear algebra.


Figure 2.1: Diagrammatic representation of quotient spaces

Quotient spaces

Definition 2.3.12 (Cosets). Let W be a subspace of a vector space V . Let α ∈ V , then the set

α +W := {α + w|w ∈ W}

is called a coset of W in V .

Remark. Note that, even if α ≠ β, it is possible that the cosets α + W = β + W. For example, take β = α + w0, where w0 ∈ W is a non-null vector.

Definition 2.3.13 (Quotient spaces). The set of all distinct cosets of W in V , denoted by V/W is

called a Quotient space.

Question 2.3.14. Let W be the kernel of the linear transformation L : V → S. Further, let s ∈ S and w1, w2 ∈ V be two solutions of L(y) = s. Does there exist an α ∈ V such that w1, w2 ∈ α + W?


Answer. Now, L(w1 − w2) = L(w1) − L(w2) = 0 ⟹ (w1 − w2) ∈ W.

If w2 ∈ α + W, then w2 = α + w for some w ∈ W.

But then w1 = w2 + (w1 − w2) = α + [w + (w1 − w2)] ∈ α + W.

In fact, we can express this coset of the solutions to the non-homogeneous linear equation as w1 + W.

Particular integral : Coming back to the non-homogeneous linear ode with constant coefficients

Lc(y) = f(x), where f ∈ Lip(R),

let y1 be one of its solutions. Then note that

∀y ∈ ker(Lc), Lc(y + y1) = Lc(y1) = f.

Hence, Y = y + y1 is a solution of the non-homogeneous problem. Also, by the previous observation, any other solution y2 of the non-homogeneous problem will always lie in the same coset as y1, i.e. y2 ∈ y1 + ker(Lc). Thus, any solution Y of the non-homogeneous equation is given by

Y = y + y1, where y ∈ ker(Lc),

which can be reframed as Y = CF + PI, where CF is the complementary function (belonging to the kernel of Lc) and PI is the particular integral (belonging to the coset y1 + ker(Lc)).

Lecture 13

2.3.3 Linear Differential operators (Variable coefficients)

Now, let's consider a homogeneous linear ode with variable coefficients (the coefficients can be functions of x) given by

Lv(y) = d^n y/dx^n + a1(x) d^{n−1}y/dx^{n−1} + ... + a_{n−1}(x) dy/dx + a_n(x) y = 0


Again, we can easily show that Lv : C^n → C^0 is a linear transformation, and one can similarly show that the solution space of the above homogeneous equation (= ker(Lv)) forms a vector subspace of C^n.

Now, one may think that we can similarly show that ker(Lv) is finite dimensional using the Wronskian argument. But the problem is that here the coefficients are functions of x and not scalars. Hence, the row operations won't help.

2.3.4 Existence and uniqueness theorem

We will be using the following theorem (without proof) in our quest to prove that the kernel of Lv is finite dimensional. This theorem can be derived from the general existence and uniqueness theorem for ordinary differential equations.

Theorem 2.3.15. Let us consider an n-th order linear ordinary differential operator

Lv ≡ d^n/dx^n + a1(x) d^{n−1}/dx^{n−1} + ... + a_{n−1}(x) d/dx + a_n(x),

where the ai, i = 1, 2, ..., n, are continuous functions on some open interval J ⊂ R. If x0 ∈ J and if k0, k1, ..., k_{n−1} are n given real numbers, then there exists a unique solution y = f(x) to the homogeneous linear ode Lv(y) = 0 on J which also satisfies the initial conditions f(x0) = k0, f′(x0) = k1, ..., f^{(n−1)}(x0) = k_{n−1}.

Consider the equation Lv(y) = 0. Then, given any (k0, k1, ..., k_{n−1}) ∈ R^n and x0 ∈ J, there will exist a unique solution y = f(x) of Lv(y) = 0 such that

(f(x0), f′(x0), ..., f^{(n−1)}(x0))ᵗ = (k0, k1, ..., k_{n−1})ᵗ

Consider a general ode F(x, y, y′, ..., y^{(n)}) = g(x) with initial conditions (y, y′, ..., y^{(n)})(x0) = (k0, k1, ..., kn). Then, the existence theorem says that there exists a solution y = f(x) such that F(y) = g and (y, y′, ..., y^{(n)})(x0) = (k0, k1, ..., kn), whereas the uniqueness theorem guarantees that no other such f exists.

Kernel is finite dimensional

Theorem 2.3.16. Let Lv : C^n → C^0 be a linear differential operator of order n given by

Lv ≡ d^n/dx^n + a1(x) d^{n−1}/dx^{n−1} + ... + a_{n−1}(x) d/dx + a_n(x)

Then the solution space of the equation Lv(y) = 0 has dimension n.

Sketch of the proof. • Consider T : ker(Lv) → F^n defined by

T(y) = (y(x0), dy/dx(x0), ..., d^{n−1}y/dx^{n−1}(x0)),

where y ∈ ker(Lv) and x0 ∈ J.

• Now, by the uniqueness theorem of odes we have T (y) = 0 =⇒ y = 0, as we are in the

homogeneous case.

• So, ker(T ) is trivial. Which means T is one-one.

• Also, by the existence theorem, for any n-tuple α ∈ F^n, there exists a y ∈ ker(Lv) with T(y) = α.

• Thus, T is a bijection, which implies dim(ker(Lv)) = dim(F n) = n.

Remark. • Even though we have obtained the result that the kernel of Lv is finite

dimensional, it is very difficult to find a basis of ker(Lv).

Question 2.3.17. Why is it difficult to find a basis of ker(Lv), when it was so easy to find a basis

for ker(Lc)?

Answer. • Lc involved only constant coefficients, that helped us to reduce the problem of ode

to finding the roots of a polynomial.

• But, Lv has functions as coefficients and it is very difficult to manage so many variants of

functions together.


What to do?

• Not only the basis, it is difficult to find the particular integral for the non-homogeneous linear

ode Lv(y) = f .

• But, if we have knowledge of at least one solution explicitly, then it becomes easier to find the

complete primitive.

• We use different methods based on

– the coefficients of Lv,

– the information given along with the problem.

• If we go back to 1st order linear odes with function coefficients, then we know how to solve

them using integrating factors.

• Thus, if we can somehow reduce the order of the equation and involve some 1st order linear

odes, then we can hope for solutions.

• This idea leads us to different methods like

1. Change of dependent variable.

2. Change of independent variable.

3. Factorisation of operators.

4. Power series method if the coefficients are analytic.

Eigen values of a linear transformation

Let V be a vector space over the field F (R or C) and L : V → V be a linear transformation. We say λ ∈ F is an eigen value of L if ∃v ∈ V \ {0} such that L(v) = λv. v is then called an eigen vector of L corresponding to the eigen value λ.

Special Case : Matrices

• Let A be an n× n matrix.

• Then, it can be considered as a linear transformation.


• To obtain an eigen value of A, we must find a non-zero vector v such that Av = λv.

• But, Av = λv iff (A− λI)v = 0.

• Thus, (A − λI) is a singular matrix and hence det(A − λI) = 0, which is known as the characteristic equation of A.

• Once the eigen values are obtained, one can find the eigen vectors v by finding the kernel of (A − λI) or by directly solving Av = λv.

Question 2.3.18. 1. It is repeatedly mentioned that the roots of the auxiliary equation are the eigen values of some linear transformation.

2. Which linear transformation?

3. Similarly, emx are the eigen functions for which linear operator?

4. Are they eigen values and eigen vectors of Lc?

5. Clearly, NO. As Lc(emx) = 0 and not m(emx).

6. To obtain the answers we need to change the set-up. But, before that we need to look into

something else.

Lecture 14

2.4 Inhomogeneous system of first order linear odes

With variable coefficients, there was not much we could do. Let's visit another setting where again we have a lot to do: systems of linear equations of first order with constant coefficients. Consider


the following system of linear odes

dy1/dt = P11 y1 + P12 y2 + ... + P1n yn + q1(t)
dy2/dt = P21 y1 + P22 y2 + ... + P2n yn + q2(t)
⋮
dyn/dt = Pn1 y1 + Pn2 y2 + ... + Pnn yn + qn(t)

where the Pij are constants and the qi(t) are functions of t, for i, j ∈ {1, 2, ..., n}.

Taking Y = (y1, ..., yn)ᵗ and P = (Pij)_{n×n}, the above system can be written as

dY/dt = PY + Q(t), where Q(t) = (q1(t), ..., qn(t))ᵗ

Observations : As in the case of a single linear ode with constant coefficients, this case can be dealt with very smoothly. One can also verify that the solutions of this system form a vector space and that

Lsc ≡ (d/dt I − P) : (C^1)^n → (C^0)^n

is a linear transformation. Thus, again, if Q(t) ≡ 0, the solution set of the homogeneous system Lsc(Y) = 0 is the kernel of the linear operator Lsc.

The advantage of having a constant-coefficient homogeneous system is that one can row-reduce the matrix P to obtain an upper triangular matrix and then easily solve the system. Let's restrict ourselves to a setup where we obtain a very simple matrix: reducing an n-th order linear ode to a system of n first order linear odes.


P is a diagonal matrix : The system L(Y) = 0 becomes an uncoupled system, i.e. each of the dependent variables depends only on itself:

dy1/dt = P11 y1
dy2/dt = P22 y2
....................
dyn/dt = Pnn yn

Each equation can be solved easily, and we have the solutions

yi(t) = Ci e^{Pii t}, i = 1, 2, ..., n.
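As a quick numerical sketch (the diagonal entries 2 and −3 below are hypothetical), the closed-form solution can be checked against the system via a central-difference derivative:

```python
import numpy as np

# Uncoupled system y1' = 2*y1, y2' = -3*y2 with yi(0) = Ci;
# the closed-form solution is yi(t) = Ci * exp(Pii * t).
P = np.diag([2.0, -3.0])
C = np.array([1.0, 4.0])

def y(t):
    return C * np.exp(np.diag(P) * t)

# Verify y'(t) = P y(t) at a sample point via a central difference.
t, h = 0.7, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.allclose(deriv, P @ y(t), rtol=1e-5)
```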

P is diagonalizable : Then there is an invertible matrix A and a diagonal matrix D such that D = APA^{-1}. Using the change of variables Z = AY, we have

Z' = AY' = A(PY) = APA^{-1}(AY) = DZ.

Then we can solve the new system Z' = DZ as in the diagonal case. Finally, transform the solutions back to Y = A^{-1}Z.
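A sketch of this recipe with numpy (the matrix P is a hypothetical example; numpy's `eig` returns the eigenvector matrix V, which plays the role of A^{-1} above):

```python
import numpy as np

# Hypothetical diagonalizable P with eigendecomposition P = V D V^{-1}.
P = np.array([[1.0, 1.0],
              [4.0, 1.0]])
evals, V = np.linalg.eig(P)   # eigenvalues 3 and -1; columns of V are eigenvectors

def y(t, y0):
    # Y(t) = V exp(Dt) V^{-1} Y(0): solve the uncoupled system in the
    # Z-coordinates, then transform back.
    return V @ (np.exp(evals * t) * np.linalg.solve(V, y0))

y0 = np.array([2.0, 1.0])
t, h = 0.3, 1e-6
deriv = (y(t + h, y0) - y(t - h, y0)) / (2 * h)
assert np.allclose(deriv, P @ y(t, y0), rtol=1e-5)
```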

General P matrix : To solve the system Lsc(Y) = 0, we need to do the following :

1. Find the eigen values and their corresponding eigen vectors of the matrix P, i.e. find the roots of

det(P − rI) = 0

2. There are three possibilities for the eigenvalues of P

(a) All eigenvalues are real and different from each other.

• Then associated with each eigenvalue ri is a real eigenvector vi.

• The set of n eigenvectors vi, i = 1, 2, ..., n, is linearly independent.


• The corresponding solutions of the differential system are

yi(t) = vi e^{ri t}

• and the general solution becomes

Y(t) = Σ_{i=1}^{n} ci vi e^{ri t}.

3. Some eigenvalues occur in complex conjugate pairs.

• Then there are still n linearly independent solutions, provided that all the eigenvalues are different.

• And we have the general solution of the same form

Y(t) = Σ_{i=1}^{n} ci vi e^{ri t}.

4. Some eigenvalues are repeated.

• The number of corresponding linearly independent eigenvectors may be smaller than the algebraic multiplicity of the eigenvalue.

• We need to seek additional solutions of another form.

• Why does this look familiar?

• This is similar to the n-th order linear odes with constant coefficients.

• Thus, a repeated eigen value will give rise to solutions of the form e^{ri t}, t e^{ri t}, t² e^{ri t} and so on.

2.4.1 n-th order linear ode as a system of first order linear odes

n-th order linear ode as a system of n equations : Let us go back to the n-th order linear ode with constant coefficients

Lc(y) = 0, i.e. d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + ... + a_{n−1} dy/dt + an y = 0


Define the following for i = 1, 2, ..., n − 1:

y1 = y, y2 = y1', ..., y_{i+1} = yi', ..., yn = y_{n−1}'

Then we have our system d/dt (y1, y2, ..., y_{n−1}, yn)^t = A (y1, y2, ..., y_{n−1}, yn)^t, where

A =
[  0       1         0        ...   0     0  ]
[  0       0         1        ...   0     0  ]
[ ...     ...       ...       ...  ...   ... ]
[  0       0         0        ...   0     1  ]
[ −an   −a_{n−1}   −a_{n−2}   ...  −a2   −a1 ]

Example. Given a linear ode, say

d²y/dt² − 3 dy/dt + 2y = 0

form the auxiliary equation m² − 3m + 2 = 0 and find its solutions m = 1, 2. Converting it to a system, we have

d/dt (y1, y2)^t = [[0, 1], [−2, 3]] (y1, y2)^t

The characteristic equation of the coefficient matrix P is

det [[−r, 1], [−2, 3 − r]] = −r(3 − r) + 2 = r² − 3r + 2 = 0

The characteristic equation of the coefficient matrix of the system arising from the 2nd order linear ode is the same as the auxiliary equation of the original 2nd order equation.
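This agreement can be checked numerically for the example above (a sketch with numpy):

```python
import numpy as np

# Companion matrix of y'' - 3y' + 2y = 0 from the example above.
P = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
eigs = np.sort(np.linalg.eigvals(P).real)
# Roots of the auxiliary equation m^2 - 3m + 2 = 0:
roots = np.sort(np.roots([1.0, -3.0, 2.0]).real)
assert np.allclose(eigs, roots)
assert np.allclose(eigs, [1.0, 2.0])
```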


Lecture 15

2.4.2 n-th order linear ode with constant coefficients

To find the solution of the system, we need to find the eigen values of the coefficient matrix P, i.e. find the roots of

det
[ −r      1         0        ...   0     0      ]
[  0     −r         1        ...   0     0      ]
[ ...    ...       ...       ...  ...   ...     ]
[  0      0         0        ...  −r     1      ]
[ −an   −a_{n−1}   −a_{n−2}  ...  −a2   −a1 − r ]
= 0

which will give rise to the same polynomial equation as the auxiliary equation of the n-th order linear ode. Thus, the auxiliary equation is basically the characteristic equation of the system of linear first order odes arising from the n-th order linear ode.
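The same identity can be sketched in general: building the companion matrix from coefficients a1, ..., an, its characteristic polynomial reproduces the auxiliary polynomial (the coefficients below are the ones used in the next example):

```python
import numpy as np

a = [-7.0, 16.0, -12.0]                        # a1, a2, a3
n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)                     # super-diagonal of ones
A[-1, :] = [-a[n - 1 - j] for j in range(n)]   # last row: -an, ..., -a1
# Monic characteristic polynomial of A equals m^n + a1 m^(n-1) + ... + an.
assert np.allclose(np.poly(A), [1.0] + a)
```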

Consider an n-th order linear homogeneous ode with constant coefficients

Lc(y) = d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + ... + a_{n−1} dy/dt + an y = 0

We can transform it into a system of n first order linear odes with constant coefficients, dY/dt = AY, where

Y = (y, y', y'', ..., y^{(n−1)})^t,   A =
[  0       1         0        ...   0     0  ]
[  0       0         1        ...   0     0  ]
[ ...     ...       ...       ...  ...   ... ]
[  0       0         0        ...   0     1  ]
[ −an   −a_{n−1}   −a_{n−2}   ...  −a2   −a1 ]

Roots of the auxiliary equation of the n-th order linear ode are precisely the eigen values of the matrix A. The complementary function, and hence the basis of the kernel of Lc, is formed by the eigen functions corresponding to these eigen values.


Distinct roots mean distinct eigen values and distinct eigen functions. Problems arise for repeated eigen values: eigen functions cannot be repeated.

Example. Consider the 3rd order linear ode

d³y/dx³ − 7 d²y/dx² + 16 dy/dx − 12y = 0

The corresponding system is dY/dx = AY, where

Y = (y, y', y'')^t,   A =
[  0    1    0 ]
[  0    0    1 ]
[ 12  −16    7 ]

The characteristic equation is m³ − 7m² + 16m − 12 = 0, having the roots 2, 2 and 3. Clearly, e^{2x} and e^{3x} are two linearly independent solutions of the 3rd order ode. Are they solutions of the system? The answer is no: e^{2x} ∈ C³, but A : C¹ × C¹ × C¹ → C⁰ × C⁰ × C⁰, so A cannot act on e^{2x}. We need to find an eigen vector of A.

What about a 3-tuple with all the entries equal to e^{2x}?

A (e^{2x}, e^{2x}, e^{2x})^t = (e^{2x}, e^{2x}, (12 − 16 + 7)e^{2x})^t

Clearly, this idea is not working. Now, observe that, as far as A is concerned, e^{2x} does not have a very important role:

A(ξ e^{2x}) = (Aξ) e^{2x}, ∀ξ ∈ F³

To get A(ξ e^{2x}) = 2(ξ e^{2x}), we thus need

A(ξ e^{2x}) = (Aξ) e^{2x} = 2(ξ e^{2x}) =⇒ Aξ = 2ξ

Thus, we need to find an eigen vector of A itself.

Aξ = 2ξ =⇒ ξ2 = 2ξ1, ξ3 = 4ξ1. Thus, ξ = (1, 2, 4)^t is an eigen vector of A with respect to the eigen value 2, and an eigen function of the system of odes is given by

ξ e^{2x} = (e^{2x}, 2e^{2x}, 4e^{2x})^t
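These eigen vector computations are easy to confirm numerically (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [12.0, -16.0, 7.0]])
xi = np.array([1.0, 2.0, 4.0])
assert np.allclose(A @ xi, 2 * xi)   # eigenvector for eigenvalue 2
v3 = np.array([1.0, 3.0, 9.0])
assert np.allclose(A @ v3, 3 * v3)   # eigenvector for eigenvalue 3
```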

Now, the problem is that we don't have another independent eigen vector of A wrt 2. Wrt the eigen value 3, things are easier: the following is an eigen function corresponding to 3:

(1, 3, 9)^t e^{3x}

To obtain another linearly independent solution corresponding to the eigen value 2, we need to introduce another concept, called generalized eigen vectors.

Generalized eigen vectors : Let's consider the matrices

A = [[5, 0], [0, 5]]   and   B = [[5, 1], [0, 5]]

Then A and B both have characteristic equation (5 − r)² = 0 and the repeated eigen value 5 with multiplicity 2. Now, let's find eigen vectors for A. Then,

Aξ = 5ξ =⇒ (1, 0)^t and (0, 1)^t are linearly independent eigen vectors.

As for the eigen vectors of B,

Bξ = 5ξ =⇒ (5ξ1 + ξ2, 5ξ2)^t = (5ξ1, 5ξ2)^t

which implies ξ2 = 0. Thus, (1, 0)^t is an eigen vector of B. No matter how you try, you cannot find another linearly independent eigen vector of B.


Remark. • The previous two examples show that (A − 5I)² = 0 and (B − 5I)² = 0.

• But, the kernel of (A − 5I) is 2 dimensional, whereas the kernel of (B − 5I) is 1 dimensional.

• Now, (X − 5I)² = 0 implies (X − 5I){(X − 5I)ξ} = 0, ∀ξ ∈ R².

• Also, det(X − 5I) = 0 implies ker(X − 5I) is non-trivial. Now, two cases arise

– (X − 5I) = 0, same as A above.

– (X − 5I) ≠ 0 but (X − 5I)² = 0, same as B above.

• Now, the kernel of (A − 5I) being 2 dimensional implies (A − 5I) = 0.

• But, the kernel of (B − 5I) is 1 dimensional, which implies (B − 5I) ≠ 0.

• As (B − 5I) ≠ 0 but (B − 5I)² ≡ 0, we have at least one vector v s.t.

(B − 5I)v ≠ 0 but (B − 5I)²v = 0

• We need to find such vectors, and we call them the generalized eigen vectors of B wrt the eigen value 5.

• Now, (B − 5I)v ≠ 0 =⇒ v2 ≠ 0, as (B − 5I) = [[0, 1], [0, 0]].

• Thus, (0, 1)^t is a generalized eigen vector wrt the eigen value 5.
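A numerical sketch of the situation for B:

```python
import numpy as np

B = np.array([[5.0, 1.0],
              [0.0, 5.0]])
N = B - 5 * np.eye(2)                 # the nilpotent part B - 5I
v = np.array([0.0, 1.0])              # candidate generalized eigen vector
assert not np.allclose(N @ v, 0)      # (B - 5I)v != 0
assert np.allclose(N @ (N @ v), 0)    # (B - 5I)^2 v = 0
w = N @ v                             # w = (B - 5I)v is an ordinary eigenvector
assert np.allclose(B @ w, 5 * w)
```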

Claim : For 2 × 2 matrices, if r is a repeated eigen value and v is a generalized e-vector, then

1. taking w = (B − rI)v, w is an eigen vector of B wrt r;

2. we may say that the generalized eigen vectors are preimages of the eigen vectors of B under the transformation (B − rI).

Definition 2.4.1 (Generalized eigen vector of an n × n matrix wrt an eigen value r with algebraic multiplicity p). v is called a generalized eigen vector of B if

(B − rI)v ≠ 0, but ∃ 1 < q ≤ p s.t. (B − rI)^q v = 0 even though (B − rI)^{q−1} v ≠ 0

In other words,

∃ 1 < q ≤ p, v ∈ ker((B − rI)^q) \ ker((B − rI)^{q−1})

Example.

A = [[5, 1, 0], [0, 5, 1], [0, 0, 5]],   B = [[5, 0, 0], [0, 5, 1], [0, 0, 5]]

Let e1, e2, e3 be the standard basis vectors of F³. Then e1 is an eigen vector of A. The kernel of (A − 5I) is 1 dimensional, hence A has two linearly independent generalized eigen vectors. Note that

(A − 5I)² e2 = 0 but (A − 5I)² e3 ≠ 0.

On the other hand, e1 and e2 are both eigen vectors of B. The kernel of (B − 5I) is 2 dimensional, hence B has only one linearly independent generalized eigen vector. Note that (B − 5I)² e3 = 0.

Again, consider the matrix

D = [[5, 1, 0, 0], [0, 5, 0, 0], [0, 0, 5, 0], [0, 0, 0, 5]]

It has 3 linearly independent e-vectors and 1 generalized e-vector. Hence, the kernel of (D − 5I) is 3 dimensional. Also, (D − 5I)²v = 0, ∀v.

Example. Consider again the 3rd order linear ode

d³y/dx³ − 7 d²y/dx² + 16 dy/dx − 12y = 0

with the corresponding system dY/dx = AY, where

Y = (y, y', y'')^t,   A =
[  0    1    0 ]
[  0    0    1 ]
[ 12  −16    7 ]

The characteristic equation is m³ − 7m² + 16m − 12 = 0, having the roots 2, 2 and 3. The following are two of its linearly independent solutions:

Y1 = (1, 3, 9)^t e^{3x},   Y2 = (1, 2, 4)^t e^{2x}.

We need to search for a third linearly independent solution, as we already know the kernel has dimension 3.

Generalized e-vector of A corresponding to the e-value 2 : We need to find a v s.t. (A − 2I)²v = 0. This equation is easier to solve if we try to find v s.t. (A − 2I)v = ve, where ve = (1, 2, 4)^t. Solving it, we obtain a generalized eigen vector

v = (1, 3, 8)^t

Verification : Suppose

Y = c1 (1, 3, 9)^t e^{3x} + c2 (1, 2, 4)^t e^{2x} + c3 (1, 3, 8)^t e^{2x}

were the general solution of the ode system dY/dx = AY. Now,

dY/dx = 3c1 (1, 3, 9)^t e^{3x} + 2c2 (1, 2, 4)^t e^{2x} + 2c3 (1, 3, 8)^t e^{2x}

But,

AY = 3c1 (1, 3, 9)^t e^{3x} + 2c2 (1, 2, 4)^t e^{2x} + 2c3 (1, 3, 8)^t e^{2x} + c3 (1, 2, 4)^t e^{2x}, as (A − 2I)v = ve =⇒ Av = 2v + ve.

There is an extra term, and hence this cannot be a solution of the system.

Remark. • Revisit the reduction of the problem of finding eigen functions for the linear system (d/dx − A) to the problem of finding eigen vectors of A.

• Our argument was that (d/dx − A)ξe^{rx} = 0 ⇔ (rI − A)ξe^{rx} = 0.

• Hence, as e^{rx} is never zero, we must have (rI − A)ξ = 0.

• Carefully look at the equivalence in the second point.

• The equivalence was possible as the operator d/dx acting on e^{rx} produced a similar function re^{rx}.

• If we can replace this function by some other function f(x), such that the action of the operator d/dx on it produces functions similar to f(x), then we may be able to obtain a new set of linearly independent solutions.

Important observations : (A − rI)Y = 0 has infinitely many linearly independent solutions of the form Y = vf(x), where v remains the fixed eigen vector and f(x) varies over linearly independent C¹ functions. But we have proved that the linear operator (d/dx − A) has a kernel of dimension 3. So, we won't be able to find more than 3 linearly independent solutions by this process.

Suitable candidate : Looking at the relation dY/dx = AY, we find our suitable choices should be

1. exponentials.

2. polynomials times exponentials.


Exponentials gave us one solution. Hence, we need something of form 2 to get the other linearly independent solution. Let Y = ξxe^{2x} be a solution of the ode system (d/dx − A)Y = 0. This will give

(ξ1 + 2ξ1 x, ξ2 + 2ξ2 x, ξ3 + 2ξ3 x)^t e^{2x} = (ξ2 x, ξ3 x, (12ξ1 − 16ξ2 + 7ξ3)x)^t e^{2x}

which gives ξ1 = ξ2 = ξ3 = 0. Hence, we need to search for a different type of solution. The other option left is to combine the forms 1 and 2: look for solutions of the form Y = ξe^{2x} + ηxe^{2x}.

Solution : Now, (d/dx − A)Y = 0 will give

2ξe^{2x} + ηe^{2x} + 2ηxe^{2x} = Aξe^{2x} + Aηxe^{2x}
=⇒ (2ξ + η − Aξ)e^{2x} = (A − 2I)ηxe^{2x}

As e^{2x} and xe^{2x} are linearly independent, lhs = rhs = 0. Also, as e^{2x} never vanishes, we must have

(2ξ + η − Aξ) = (A − 2I)η = 0 =⇒ (A − 2I)η = 0 and Aξ = 2ξ + η,

i.e. η and ξ are respectively an eigen vector and a generalized eigen vector of A. Thus, taking

η = (1, 2, 4)^t and ξ = (1, 3, 8)^t

we have our 3rd linearly independent solution Y = ξe^{2x} + ηxe^{2x}.
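That Y = ξe^{2x} + ηxe^{2x} really solves dY/dx = AY can be verified numerically (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [12.0, -16.0, 7.0]])
eta = np.array([1.0, 2.0, 4.0])   # eigen vector: A eta = 2 eta
xi = np.array([1.0, 3.0, 8.0])    # generalized eigen vector: (A - 2I) xi = eta

def Y(x):
    return (xi + eta * x) * np.exp(2 * x)

def dY(x):
    # d/dx [(xi + eta x) e^{2x}] = (eta + 2 xi + 2 eta x) e^{2x}
    return (eta + 2 * xi + 2 * eta * x) * np.exp(2 * x)

for x in (0.0, 0.5, 1.3):
    assert np.allclose(dY(x), A @ Y(x))
```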

Remark. 1. Thus, the linearly independent eigen functions are

Y1 = (1, 3, 9)^t e^{3x},   Y2 = (1, 2, 4)^t e^{2x},   Y3 = (1, 3, 8)^t e^{2x} + (1, 2, 4)^t xe^{2x}


2. Thus, the general solution or complementary function is

Y = c1 (1, 3, 9)^t e^{3x} + c2 (1, 2, 4)^t e^{2x} + c3 [(1, 3, 8)^t e^{2x} + (1, 2, 4)^t xe^{2x}]

which can also be written as

Y = c1 (1, 3, 9)^t e^{3x} + [c2 (1, 2, 4)^t + c3 (1, 3, 8)^t] e^{2x} + c3 (1, 2, 4)^t xe^{2x}

Lecture 16

Let’s conclude this chapter with a discussion on the phase potrait for 2× 2 linear systems.

2.5 Phase portrait

Consider the homogeneous system of linear first order odes given by

X'(t) = PX(t)

where P is a 2 × 2 constant matrix.

Remark. Though the idea developed here can be used for a general n × n system, we will restrict ourselves to 2 × 2 systems only, as they are easy to visualise.

Procedure :

1. We will plot the direction/gradient fields given by (x1'(t), x2'(t)) at the points X(t). This 2d plane will be called the phase plane, and the diagram that we will obtain will be called the phase portrait.

2. Evaluating PX for a large collection of values of t ∈ I, we can draw a plot of the direction fields of the tangent vectors in the x1–x2 plane.


Figure 2.2: Phase portrait for x1'(t) = 2x1(t), x2'(t) = −3x2(t)

3. A plot that shows a representative sample of trajectories for a given system is called a phase portrait.

Example.

X'(t) = [[2, 0], [0, −3]] X

Since it is a diagonal matrix (this type of system is called an uncoupled system, as the equations are independent of each other), we can straight away write

x1'(t) = 2x1(t)
x2'(t) = −3x2(t)

The phase portrait for the above system is given by figure (2.2).

Example.

X'(t) = [[1, 1], [4, 1]] X

Plot the direction field and determine the qualitative behaviour of solutions. Then, find the general solution and draw a phase portrait showing several trajectories.

Solution. This is a coupled system, unlike the previous example. Hence, we cannot work with the solutions directly. But, as the direction fields use the derivatives of the solutions, we can evaluate the derivatives at each point and obtain the tangent vectors there.


Figure 2.3: Phase portrait. Pic courtesy : Elementary Differential Equations and Boundary Value Problems, pg 311

For x1 = 1, x2 = 0 we have

(x1', x2')^t = [[1, 1], [4, 1]] (1, 0)^t = (1, 4)^t

Similarly, for x1 = 0, x2 = 1, we have

(x1', x2')^t = (1, 1)^t

This means at (0,1) the tangent vector is (1,1), which makes an equal angle with the x1 and x2 directions. Similarly, at (1,0) the tangent vector is (1,4), which is shifted more towards the x2 direction. So the phase portrait will look like figure (2.3). Similarly, at (−1,0) and (0,−1) the tangent vectors will be parallel to those at (1,0) and (0,1), respectively, but in opposite directions.

Also, note that for x1 = 0 and x2 arbitrary, the tangent vectors are all parallel and make equal angles with the x1 and x2 axes, but they are in opposite directions for x2 > 0 and x2 < 0.

Similarly, for x1 ≠ 0, x2 = 0, we have a set of parallel tangent vectors reversing directions for x1 > 0 and x1 < 0.

Now, for x1 = x2 = 1 we have

(x1', x2')^t = (2, 5)^t

Going on this way, we will be able to plot the phase portrait.
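The tangent-vector computations above can be scripted (a sketch; extending this over a grid of points gives the data for a quiver/direction-field plot):

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [4.0, 1.0]])

def tangent(x1, x2):
    # tangent vector (x1', x2') at the point (x1, x2)
    return P @ np.array([x1, x2])

assert np.allclose(tangent(1, 0), [1, 4])
assert np.allclose(tangent(0, 1), [1, 1])
assert np.allclose(tangent(1, 1), [2, 5])
# By linearity, opposite points carry reversed tangent vectors:
assert np.allclose(tangent(-1, 0), -tangent(1, 0))
```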


Chapter 3

Differential Inequalities

3.1 Gronwall’s Inequality

Theorem 3.1.1. Let λ(t) be a real valued continuous function and µ(t) a non-negative continuous function on I = [a, b]. If a continuous function y(t) satisfies

y(t) ≤ λ(t) + ∫_a^t µ(s)y(s)ds, ∀t ∈ I

then we have

y(t) ≤ λ(t) + ∫_a^t λ(s)µ(s)e^{∫_s^t µ(σ)dσ}ds, ∀t ∈ I.

In particular, if λ is constant, then

y(t) ≤ λe^{∫_a^t µ(σ)dσ}.

Proof. Let z(t) = ∫_a^t µ(s)y(s)ds, t ∈ I. Since µ, y are both continuous, z is differentiable. Also, z(a) = 0.

∴ z(t) − z(a) = ∫_a^t µ(s)y(s)ds
=⇒ z'(t) = µ(t)y(t)
=⇒ z'(t) − µ(t)z(t) = µ(t)[y(t) − ∫_a^t µ(s)y(s)ds]
=⇒ z'(t) − µ(t)z(t) ≤ µ(t)λ(t), by the given inequality
=⇒ [z'(t) − µ(t)z(t)]e^{−∫_a^t µ(σ)dσ} ≤ µ(t)λ(t)e^{−∫_a^t µ(σ)dσ}
=⇒ d/dt [z(t)e^{−∫_a^t µ(σ)dσ}] ≤ µ(t)λ(t)e^{−∫_a^t µ(σ)dσ}

Integrating from a to t,

z(t)e^{−∫_a^t µ(σ)dσ} ≤ ∫_a^t µ(s)λ(s)e^{−∫_a^s µ(σ)dσ}ds
=⇒ z(t) ≤ ∫_a^t µ(s)λ(s)e^{∫_s^t µ(σ)dσ}ds


By the given inequality,

y(t) ≤ λ(t) + ∫_a^t µ(s)y(s)ds = λ(t) + z(t) ≤ λ(t) + ∫_a^t µ(s)λ(s)e^{∫_s^t µ(σ)dσ}ds.

Further, if λ(t) = λ (a constant), we have

z(t) ≤ λ ∫_a^t µ(s)e^{∫_s^t µ(σ)dσ}ds =⇒ z(t) ≤ −λ + λe^{∫_a^t µ(σ)dσ}.

Therefore, we have the required result y(t) ≤ λe^{∫_a^t µ(σ)dσ}.
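A numerical sanity check of the constant-λ case (the choices λ = 1 and µ(t) = t on [0, 2] are hypothetical; y solves the integral equation with equality, the extreme case of the hypothesis):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
mu = t                   # mu(t) = t, so int_0^t mu = t^2/2
y = np.exp(t**2 / 2)     # solves y(t) = 1 + int_0^t mu(s) y(s) ds

def cumtrapz(f):
    # cumulative trapezoidal integral of f over the grid t
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dt)))

# Hypothesis (with lambda = 1), up to quadrature error:
assert np.all(y <= 1 + cumtrapz(mu * y) + 1e-3)
# Conclusion of Gronwall: y(t) <= lambda * exp(int_0^t mu(s) ds).
assert np.all(y <= np.exp(cumtrapz(mu)) + 1e-6)
```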

Lecture 17

3.2 Solution of a differential inequality

We have dealt with the problems of finding solutions for the differential equations dy/dt = f(t, y). What about functions y(t) = ψ(t) that satisfy the inequalities dy/dt ≤ f(t, y), dy/dt ≥ f(t, y), dy/dt < f(t, y) or dy/dt > f(t, y)? We would like to explore the possibilities of finding solutions for these problems.

Definition 3.2.1 (Solution of a differential inequality). Let f(t, x) be continuous on a region D ⊂ R × R. A function x(t) is said to be a solution of the differential inequality

dx/dt > f(t, x), t ∈ I = [t0, t0 + α)

if the following conditions hold

1. x'(t) exists ∀t ∈ I,

2. (t, x(t)) ∈ D, ∀t ∈ I and

3. x'(t) > f(t, x(t)), ∀t ∈ I.

Remark. 1. The interval I can be any type of interval.

2. Equivalent definitions for systems can be made.

3. Analogous definitions hold for x'(t) ≥ f(t, x) [≤ f(t, x) or < f(t, x)].

Example. Consider the differential inequality

dy/dt < −{y(t)}² on (0, π).

Verify that y(t) = cot(t) is a solution. Further, note that

z(t) = −ct, where 0 < c < 1/π² is a constant,

is also a solution.
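Both claims can be checked numerically (a sketch; note that cot'(t) = −csc²(t) = −(1 + cot²t) < −cot²t):

```python
import numpy as np

t = np.linspace(0.01, np.pi - 0.01, 1000)

# y(t) = cot(t): y'(t) = -1/sin^2(t) < -cot^2(t) on (0, pi).
y = 1 / np.tan(t)
dy = -1 / np.sin(t)**2
assert np.all(dy < -y**2)

# z(t) = -c t with 0 < c < 1/pi^2: z' = -c < -(c t)^2 = -z^2 since c t^2 < 1.
c = 0.5 / np.pi**2
z = -c * t
dz = np.full_like(t, -c)
assert np.all(dz < -z**2)
```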

Theorem 3.2.2. Let f(t, x) be continuous on a region D ⊆ R² and y1, y2 be two solutions of the differential inequalities

y1'(t) ≤ f(t, y1(t))
y2'(t) > f(t, y2(t))

over the interval I = [t0, t0 + α). If y1(t0) < y2(t0), then y1(t) < y2(t), ∀t ∈ I.

Proof. Let us assume the set A = {t ∈ I | y1(t) ≥ y2(t)} ≠ φ. Since A ⊆ I, A is bounded below; in fact, t0 is a lower bound.

∴ It has a greatest lower bound, say t*. Hence, t* ≥ t0.

Now, y1, y2 both being solutions of differential inequalities implies they are continuous.

Claim : y1(t*) = y2(t*).

If the claim is false, then either (y1 − y2)(t*) > 0 or (y1 − y2)(t*) < 0.

Case 1 (> 0) : There exists a neighborhood of t* where y1 − y2 > 0, which implies ∃t1 < t* such that y1(t1) > y2(t1) =⇒ t1 ∈ A. This contradicts that t* = inf(A).

Case 2 (< 0) : There exists a neighborhood N of t* where y1 − y2 < 0. Now, choosing t2 from this neighborhood, we should have τ ∈ A such that t* ≤ τ < t2 (by the property of the infimum of a set).


But, then τ ∈ N =⇒ y1(τ) < y2(τ), which contradicts τ ∈ A.

Hence, the only possibility is that our claim y1(t*) = y2(t*) is true.

Also, as y1(t0) < y2(t0) and t* ≥ t0, therefore t* > t0.

Now, t* = inf(A) =⇒ y1(t* − h) < y2(t* − h), ∀h > 0 such that t* − h ∈ I.

∴ y1'(t* − 0) = lim_{h→0+} [y1(t* − h) − y1(t*)]/(−h).

Now, y1(t* − h) < y2(t* − h) and y1(t*) = y2(t*) imply [y1(t* − h) − y1(t*)]/(−h) > [y2(t* − h) − y2(t*)]/(−h). Thus, we have y1'(t* − 0) ≥ y2'(t* − 0).

Since y1'(t*) and y2'(t*) exist, we have

y1'(t* − 0) = y1'(t*), y2'(t* − 0) = y2'(t*).

Therefore, we have y1'(t*) ≥ y2'(t*).

Again, by the given hypothesis, y1'(t) ≤ f(t, y1(t)) and y2'(t) > f(t, y2(t)), ∀t ∈ I. Therefore, we have y1'(t*) ≤ f(t*, y1(t*)) = f(t*, y2(t*)) < y2'(t*), which is a contradiction.

Therefore, the initial assumption that A ≠ φ is not possible. Hence, y1(t) < y2(t), ∀t ∈ I.

Lecture 18

Definition 3.2.3 (Sub-solution). Solutions of differential inequalities of the form dy/dt ≤ f(t, y) are called sub-solutions.

Definition 3.2.4 (Super-solution). Solutions of differential inequalities of the form dy/dt ≥ f(t, y) are called super-solutions.

Theorem 3.2.5. Let f(t, x) be continuous on D ⊆ R². Further assume

1. dx/dt = f(t, x), x(t0) = x0, where (t0, x0) ∈ D, I = [t0, t0 + α).

2. x1(t) and x2(t) are solutions of dx1/dt < f(t, x1) and dx2/dt > f(t, x2) in I.

3. x1(t0) ≤ x0 ≤ x2(t0), where x0 = x(t0).

Then ∀t ∈ I0, we have x1(t) < x(t) < x2(t).


Proof. If x0 = x(t0) < x2(t0), then x(t) and x2(t) satisfy the hypothesis of the previous theorem. Therefore, we have x(t) < x2(t), ∀t ∈ I0.

If x0 = x2(t0), define z(t) = x2(t) − x(t). Therefore, z'(t) = x2'(t) − x'(t) =⇒ z'(t0) > f(t0, x2(t0)) − f(t0, x(t0)) = 0. Hence, z is strictly increasing in a neighborhood of t0 in I. Let N = [t0, t0 + δ] be such a neighborhood of t0. This implies z(t0 + δ) > z(t0) = 0.

∴ In the interval Iδ = [t0 + δ, t0 + α), z(t0 + δ) > 0 =⇒ x2(t0 + δ) > x(t0 + δ).

Then, x and x2 satisfy the hypothesis of the previous theorem on the interval Iδ =⇒ x(t) < x2(t), ∀t ∈ Iδ.

Now, δ can be chosen arbitrarily small. Hence, ∀δ > 0, x(t) < x2(t), ∀t ≥ t0 + δ =⇒ x(t) < x2(t), ∀t > t0, t ∈ I0.

For the sub-solution and exact solution part, one may proceed as follows : take y1(t) = −x1(t), y(t) = −x(t) and proceed as before.

Remark. The above result suggests that if the sub-solution is initially less than or equal to the exact solution, or the exact solution is initially less than or equal to the super-solution, then the inequality is maintained strictly in the interior of the interval.


Chapter 4

Some more Existence and Uniqueness results

Lecture 19

4.1 Maximal And Minimal solutions

Definition 4.1.1 (Maximal Solution). Let D ⊆ R² be open and f : D → R be continuous. A solution r(t) of the IVP

dx/dt = f(t, x), x(t0) = x0

where (t0, x0) ∈ D, is said to be a maximal solution if for any arbitrary solution y(t) of the above IVP, we have y(t) ≤ r(t) for every t in the common domain of existence of r and y.

Definition 4.1.2 (Minimal Solution). Let D ⊆ R² be open and f : D → R be continuous. A solution s(t) of the IVP

dx/dt = f(t, x), x(t0) = x0

where (t0, x0) ∈ D, is said to be a minimal solution if for any arbitrary solution y(t) of the above IVP, we have y(t) ≥ s(t) for every t in the common domain of existence of s and y.

Remark. 1. The maximal solution is the one which dominates all other solutions in their common region of existence.

2. This does not mean that a maximal solution has the maximal interval of existence.

Theorem 4.1.3. Let f be a continuous function on the region S = {(x, y) | x0 ≤ x ≤ x0 + a, |y − y0| ≤ b} contained inside the domain D of f. Then ∃ a maximal and a minimal solution of the IVP in the interval [x0, x0 + α], where α = min{a, b/(2M + b)} and M ≥ max |f(x, y)| on S.

Proof. Existence of maximal solution : Let 0 < ε ≤ b/2. Consider the IVPs

dy/dx = f(x, y), y(x0) = y0 (4.1)

dy/dx = f(x, y) + ε, y(x0) = y0 + ε (4.2)


Now, f is continuous on S, which implies fε = f + ε is continuous on Sε = {(x, y) | x0 ≤ x ≤ x0 + a, |y − (y0 + ε)| ≤ b/2}, and Sε ⊆ S.

Also, we have |fε(x, y)| ≤ |f(x, y)| + ε ≤ M + b/2, ∀(x, y) ∈ Sε.

∴ By Peano's existence theorem, we have the existence of a solution yε(x) of the IVP-ε in the interval [x0, x0 + α], where α = min{a, b/(2M + b)}.

Let 0 < ε2 < ε1 ≤ ε and yε1, yε2 be the solutions of IVP-ε1 and IVP-ε2, respectively.

Note that yε2(x0) = y0 + ε2 < y0 + ε1 = yε1(x0).

Again, yε2'(x) = f(x, yε2(x)) + ε2 and yε1'(x) = f(x, yε1(x)) + ε1 > f(x, yε1(x)) + ε2.

Now, we have the following set-up : yε2'(x) = fε2(x, yε2(x)) and yε1'(x) > fε2(x, yε1(x)).

∴ By the previous corollary, we have yε2(x) < yε1(x), ∀x ∈ [x0, x0 + α].

Varying ε → 0, we have a family of equicontinuous and uniformly bounded functions on [x0, x0 + α].

∴ By the Ascoli-Arzela theorem, ∃ a decreasing sequence {εn} ↓ 0 such that {yεn} converges uniformly on [x0, x0 + α].

Let r(x) = lim_{n→∞} yεn(x). Thus, yεn → r uniformly on [x0, x0 + α].

∴ r(x0) = y0.

Now, f is continuous on S, which is compact. Hence, f is uniformly continuous on S.

∴ yεn(x) = (y0 + εn) + ∫_{x0}^x [f(t, yεn(t)) + εn]dt.

∴ Taking limits as n → ∞,

r(x) = y0 + ∫_{x0}^x f(t, r(t))dt =⇒ r is a solution of the IVP.

To show r(x) is a maximal solution : Let y(x) be any other solution of the IVP in [x0, x0 + α]. Then, for ε > 0, y(x0) = y0 < y0 + ε = yε(x0). Again, y'(x) = f(x, y) < f(x, y) + ε = fε(x, y) and yε'(x) = f(x, yε) + ε = fε(x, yε).

By the previous corollary, y(x) < yε(x), ∀x ∈ [x0, x0 + α]. Since ε > 0 was arbitrary, varying ε over the εn's and taking limits as n → ∞, we have

y(x) ≤ r(x), ∀x ∈ [x0, x0 + α] =⇒ r is a maximal solution.

Uniqueness of the maximal solution over the fixed interval [x0, x0 + α] : The uniqueness is ensured by the fact that we have uniform convergence yεn(x) → r(x), ∀x ∈ [x0, x0 + α].


For the minimal solution : Consider the IVP-ε

dy/dx = f(x, y) − ε, y(x0) = y0 − ε

and proceed in a similar way as in the case of the maximal solution to obtain the existence and uniqueness of the minimal solution.

Lecture 20

4.2 Uniqueness results

Till now, we have seen existence and uniqueness results together. Also, such uniqueness results depended on the conditions satisfied by the right hand side function f of the IVP du/dt = f(t, u). Now, we will explore some more uniqueness results that will depend on the conditions satisfied by u.

Lemma 4.2.1. Let w(z) be an increasing continuous function on [0, α), α ∈ R+, with w(0) = 0, w(z) > 0 for z > 0 and lim_{ε→0+} ∫_ε^α dz/w(z) = +∞. Further, assume u(x) to be a non-negative continuous function on [0, α] satisfying

u(x) ≤ ∫_0^x w(u(t))dt, 0 < x ≤ α.

Then u(x) = 0, ∀x ∈ [0, α].

Proof. Let v(x) = max{u(t) : t ∈ [0, x]}. Since u is a continuous function, it attains its maximum on [0, x], which implies v(x) is well-defined ∀x ∈ [0, α]. Now, assume v(x) > 0, ∀ 0 < x ≤ α.

By the definition of v(x), u(x) ≤ v(x) = max{u(t) : 0 ≤ t ≤ x}, ∀x ∈ [0, α].

Again, as u is continuous, ∃ yx such that u(yx) = v(x) and 0 ≤ yx ≤ x.

Now, as u(t) ≤ v(t) and w is increasing, we have

v(x) = u(yx) ≤ ∫_0^{yx} w(u(t))dt ≤ ∫_0^x w(u(t))dt ≤ ∫_0^x w(v(t))dt


Let V(x) = ∫_0^x w(v(t))dt. Then V(0) = 0, v(x) ≤ V(x) and V'(x) = w(v(x)) ≤ w(V(x)).

Now, for x ≠ 0, v(x) > 0 =⇒ V(x) > 0 =⇒ w(V(x)) > 0.

∴ V'(x)/w(V(x)) ≤ 1

and for 0 < δ < a,

∫_δ^a V'(t)/w(V(t)) dt ≤ ∫_δ^a dt = a − δ.

But, substituting z = V(t), we have

∫_δ^a V'(t)/w(V(t)) dt = ∫_ε^α dz/w(z)

where V(δ) = ε and V(a) = α.

But, ε → 0+ =⇒ RHS → ∞.

∴ ∫_δ^a V'(t)/w(V(t)) dt ≤ a − δ

is not possible.

Thus, our assumption that v(x) > 0 for 0 < x ≤ α is false.

As u is non-negative, v(x) ≥ 0. But v cannot be positive. Hence,

v(x) = 0 on [0, α] =⇒ u(x) = 0 on [0, α].

Theorem 4.2.2 (Osgood's uniqueness theorem). Let f(x, y) be continuous in S = {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b}, and ∀(x, y1), (x, y2) ∈ S let it satisfy

|f(x, y1) − f(x, y2)| ≤ w(|y1 − y2|)

where w(z) is as in lemma (4.2.1). Then

dy/dx = f(x, y), y(x0) = y0

has at most one solution in |x − x0| ≤ a.


Proof. Let y1, y2 be two solutions of the IVP

dy/dx = f(x, y), y(x0) = y0.

Then

y1(x) − y2(x) = ∫_{x0}^x (f(t, y1) − f(t, y2)) dt
=⇒ |y1(x) − y2(x)| ≤ ∫_{x0}^x |f(t, y1) − f(t, y2)| dt ≤ ∫_{x0}^x w(|y1(t) − y2(t)|) dt

Thus, u(x) = |y1(x) − y2(x)| satisfies the hypothesis of lemma (4.2.1), and hence u(x) = 0 on the given interval.

∴ y1(x) = y2(x) in the given interval. Thus, the IVP has at most one solution.

Remark. 1. This is a uniqueness theorem. Hence, existence of solution is not guaranteed.

2. It may happen that f satisfies the hypothesis of Osgood’s uniqueness theorem even though

the IVP has no solution!

Lemma 4.2.3. Let u(x) be a non-negative continuous function in |x − x0| ≤ a with u(x0) = 0. Further, if u is differentiable at x = x0 with u'(x0) = 0, then the inequality

u(x) ≤ |∫_{x0}^x u(t)/(t − x0) dt| =⇒ u(x) ≡ 0 in |x − x0| ≤ a.

Proof. Case 1 (x0 ≤ x ≤ x0 + a) : Define

v(x) = ∫_{x0}^x u(t)/(t − x0) dt

Note, this integral exists as

lim_{x→x0} u(x)/(x − x0) = u'(x0) = 0.

Again,

v'(x) = u(x)/(x − x0) ≤ v(x)/(x − x0)


Now,

d/dx [v(x)/(x − x0)] = v'(x)/(x − x0) − v(x)/(x − x0)² = 1/(x − x0) [v'(x) − v(x)/(x − x0)] ≤ 0, ∵ x ≥ x0.

∴ d/dx (v(x)/(x − x0)) ≤ 0 =⇒ v(x)/(x − x0) is non-increasing.

Also, as v(x0) = 0 and v(x)/(x − x0) is non-increasing, we have v(x) ≤ 0; but v(x) ≥ 0 by its definition, so

v(x) ≡ 0 =⇒ u(x) ≡ 0 on [x0, x0 + a].

Case 2 (x0 − a ≤ x ≤ x0) : This case can be done similarly and is left as homework.

Lecture 21

Theorem 4.2.4 (Nagumo's uniqueness theorem). Let f(x, y) be continuous on the rectangular region S = [x0 − a, x0 + a] × [y0 − b, y0 + b]. Also, assume that for any (x, y1), (x, y2) ∈ S, we have

|f(x, y1) − f(x, y2)| ≤ k |(y1 − y2)/(x − x0)|, where x ≠ x0, 0 < k ≤ 1.

Then, the IVP

dy/dx = f(x, y) on |x − x0| ≤ a, y(x0) = y0

has at most one solution in [x0 − a, x0 + a].

Proof. Let y1 and y2 be two solutions of the given IVP, and define

u(x) = |y1(x) − y2(x)|, |x − x0| ≤ a.

Then u(x) ≥ 0 and u is continuous. Now,

u(x) = |y1(x) − y2(x)| = |∫_{x0}^x f(s, y1)ds − ∫_{x0}^x f(s, y2)ds|
≤ |∫_{x0}^x |f(s, y1) − f(s, y2)| ds| ≤ |∫_{x0}^x k |y1(s) − y2(s)|/|s − x0| ds|
≤ |∫_{x0}^x u(s)/(s − x0) ds|, as 0 < k ≤ 1.


∴ lim_{h→0} [u(x0 + h) − u(x0)]/h = lim_{h→0} (1/h)|y1(x0 + h) − y2(x0 + h)|, ∵ u(x0) = 0
= lim_{h→0} (1/h)|y1(x0) + hy1'(x0 + θ1h) − y2(x0) − hy2'(x0 + θ2h)|

But y1(x0) = y2(x0) = y0. Hence, by the mean value theorem,

lim_{h→0} [u(x0 + h) − u(x0)]/h ≤ lim_{h→0} (1/h)|h| |y1'(x0 + θ1h) − y2'(x0 + θ2h)|
= lim_{h→0} (|h|/h) |f(x0 + θ1h, y1(x0 + θ1h)) − f(x0 + θ2h, y2(x0 + θ2h))|
= 0, as |h|/h is a bounded function and the rest of the expression tends to 0.

∴ lim_{h→0} [u(x0 + h) − u(x0)]/h = 0 =⇒ u is differentiable at x0 and u'(x0) = 0.

∴ By lemma (4.2.3), u(x) ≡ 0 on |x − x0| ≤ a, which means the IVP has at most one solution.


Chapter 5

Sturm-Liouville Theory

Lecture 21 (contd.)

Reference : Differential Equations by S. L. Ross, 3rd edition, chapters 11 and 12.

5.1 Adjoint of a second order linear ODE

Consider the second order linear differential operator L ≡ a0(t) d²/dt² + a1(t) d/dt + a2(t), where the ai(t) are differentiable and a0(t) ≠ 0.

Definition 5.1.1 (Adjoint of a second order linear ODE). Let L(x) = 0, where ai ∈ C^{2−i} on [a, b] and a0(t) ≠ 0 on [a, b]. The adjoint of this 2nd order linear differential equation is

d²/dt² [a0(t)x] − d/dt [a1(t)x] + a2(t)x = 0.

On simplification this gives

a0(t) d²x/dt² + [2a0′(t) − a1(t)] dx/dt + [a0″(t) − a1′(t) + a2(t)] x = 0.

Example. Consider the second order linear ode

t² d²x/dt² + 7t dx/dt + 8x = 0.

Find the adjoint.

Solution. Here, a0(t) = t², a1(t) = 7t and a2(t) = 8, so 2a0′ − a1 = 4t − 7t = −3t and a0″ − a1′ + a2 = 2 − 7 + 8 = 3. Therefore the adjoint equation is given by

t² d²x/dt² − 3t dx/dt + 3x = 0.

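The coefficient bookkeeping above is easy to mechanize. The sketch below (my own illustration with hypothetical helper names, not part of the notes) builds the adjoint's coefficient functions from a0, a1, a2 and the needed derivatives, and reproduces the example:

```python
def adjoint_coeffs(a0, a0p, a0pp, a1, a1p, a2):
    """Return coefficient functions (b0, b1, b2) of the adjoint equation
    b0 x'' + b1 x' + b2 x = 0, where b0 = a0, b1 = 2a0' - a1 and
    b2 = a0'' - a1' + a2 (derivatives supplied as callables)."""
    return (a0,
            lambda t: 2 * a0p(t) - a1(t),
            lambda t: a0pp(t) - a1p(t) + a2(t))

# a0 = t^2, a1 = 7t, a2 = 8 (the worked example)
b0, b1, b2 = adjoint_coeffs(lambda t: t * t, lambda t: 2 * t, lambda t: 2.0,
                            lambda t: 7 * t, lambda t: 7.0, lambda t: 8.0)
print(b0(2.0), b1(2.0), b2(2.0))  # 4.0 -6.0 3.0, i.e. t^2 x'' - 3t x' + 3x = 0 at t = 2
```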


Lecture 22

5.2 Self-adjoint 2nd order linear ode

Definition 5.2.1 (Self-adjoint second order linear ODE). A second order linear ode L(x) = 0, where L ≡ a0(t) d²/dt² + a1(t) d/dt + a2(t) and ai ∈ C^{2−i} on [a, b] with a0(t) ≠ 0, is said to be self-adjoint if its adjoint,

d²/dt² [a0(t)x] − d/dt [a1(t)x] + a2(t)x = 0,

is the same equation as L(x) = 0 itself.

Theorem 5.2.2. Consider a second order linear ode

a0(t) d²x/dt² + a1(t) dx/dt + a2(t)x = 0,

where ai ∈ C^{2−i} on [a, b] with a0(t) ≠ 0. Then a necessary and sufficient condition for this ode to be self-adjoint is

a1(t) = a0′(t), ∀t ∈ [a, b].

Proof. Equating the respective coefficients of the equation and its adjoint gives the result.

Example (Legendre’s Equation).

(1 − t²) d²x/dt² − 2t dx/dt + n(n + 1)x = 0

Here, a0(t) = 1 − t², a1(t) = −2t. Thus, a0′(t) = −2t = a1(t). Hence, this is a self-adjoint ode.

Remark. If a second order linear ode

a0(t) d²x/dt² + a1(t) dx/dt + a2(t)x = 0

is self-adjoint, then it can be written as

d/dt [a0(t) dx/dt] + a2(t)x = 0.

Question 5.2.3. Can we convert a general second order linear ode into a self-adjoint second order linear ode?

Suppose the second order linear ode

a0(t) d²x/dt² + a1(t) dx/dt + a2(t)x = 0

is not self-adjoint. Let v(t) be a non-trivial function such that

v(t) [a0(t) d²x/dt² + a1(t) dx/dt + a2(t)x] = 0

is self-adjoint. That would require

d/dt [v(t)a0(t)] = v(t)a1(t)
⇔ v′(t)a0(t) + v(t)a0′(t) = v(t)a1(t)
⇔ v′(t)/v(t) = (a1(t) − a0′(t))/a0(t)
⇔ ∫ (v′(t)/v(t)) dt + ∫ (a0′(t)/a0(t)) dt = ∫ (a1(t)/a0(t)) dt
⇔ v(t) = (1/(k a0(t))) e^{∫ (a1(t)/a0(t)) dt}, where k is a constant of integration.

Neglecting the arbitrary constant we have v(t) = (1/a0(t)) e^{∫ (a1(t)/a0(t)) dt}, which is the required multiplier.
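As a quick numerical sanity check (my own sketch, not part of the notes): for the earlier example t² x″ + 7t x′ + 8x = 0 the multiplier works out to v(t) = (1/t²) e^{∫(7/t)dt} = t⁵, and the defining identity d/dt[v(t)a0(t)] = v(t)a1(t) can be verified by a central finite difference:

```python
def v(t):   # multiplier for t^2 x'' + 7t x' + 8x = 0: (1/t^2) * exp(∫ 7/t dt) = t^5
    return t ** 5

def a0(t): return t * t
def a1(t): return 7 * t

# check d/dt [v(t) a0(t)] ≈ v(t) a1(t) at a few sample points
h = 1e-6
for t in (0.5, 1.0, 2.0):
    lhs = (v(t + h) * a0(t + h) - v(t - h) * a0(t - h)) / (2 * h)  # central difference
    rhs = v(t) * a1(t)
    assert abs(lhs - rhs) < 1e-3 * max(1.0, abs(rhs))
print("multiplier identity d/dt[v a0] = v a1 verified")
```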

Theorem 5.2.4. Let a0(t) d²x/dt² + a1(t) dx/dt + a2(t)x = 0 be a second order linear differential equation defined on I = [a, b]. Then it can be transformed into the self-adjoint ode

d/dt [P(t) dx/dt] + Q(t)x = 0 on I,

where

P(t) = e^{∫ (a1(t)/a0(t)) dt}, and Q(t) = (a2(t)/a0(t)) P(t).

Proof. (The justification is already given in the discussion preceding the theorem; here only the well-definedness is taken care of.)

Since a0, a1 are continuous on I and a0(t) ≠ 0 ∀t ∈ I, P(t) exists and is differentiable on I, with

P′(t) = e^{∫ (a1(t)/a0(t)) dt} · (a1(t)/a0(t)) = P(t) a1(t)/a0(t).

Hence d/dt [P(t) dx/dt] + Q(t)x = 0 is well-defined and, by Theorem 5.2.2, self-adjoint.

5.3 Basic results of Sturm theory

Theorem 5.3.1. Let f be a solution of d/dt [P(t) dx/dt] + Q(t)x = 0 on I = [a, b]. If f has infinitely many zeros on I, then f ≡ 0 on I.

Proof. f has infinitely many zeros in I. Let S = {t ∈ I | f(t) = 0}. As S ⊂ I is infinite and bounded, there exists a sequence {sn}, with sn ∈ S ∀n ∈ N, converging to some limit s0 (Bolzano-Weierstrass theorem). Also, as I is closed, s0 ∈ I. Since f is a solution of the given self-adjoint ode, f is differentiable, hence continuous, on I. This implies f(s0) = lim_{n→∞} f(sn) = 0.

Again, f′(s0) = lim_{s→s0} (f(s) − f(s0))/(s − s0). Since f′(s0) exists, we have

lim_{n→∞} (f(sn) − f(s0))/(sn − s0) = lim_{s→s0} (f(s) − f(s0))/(s − s0) = f′(s0).

But f(sn) = 0 = f(s0). Hence f′(s0) = 0.

Thus f is a solution of the given self-adjoint 2nd order ode with initial conditions f(s0) = 0 = f′(s0). Since the ode is linear, it has a unique solution satisfying x(s0) = x′(s0) = 0, namely the trivial one.

∴ f has to be the trivial solution on I.


Lecture 23

Theorem 5.3.2 (Abel’s formula). Let f and g be two solutions of d/dt [P(t) dx/dt] + Q(t)x = 0 on I = [a, b]. Then ∀t ∈ I,

P(t) [f(t)g′(t) − f′(t)g(t)] = k,

where k is a constant.

Proof. We have

d/dt [P(t)f′(t)] + Q(t)f(t) = 0   (5.1)
d/dt [P(t)g′(t)] + Q(t)g(t) = 0   (5.2)

Multiplying (5.1) by g(t), (5.2) by f(t) and subtracting, we get

g(t) d/dt [P(t)f′(t)] − f(t) d/dt [P(t)g′(t)] = 0.

Integrating from a to t ∈ I and integrating by parts, we have

∫_a^t g(s) d/ds [P(s)f′(s)] ds = ∫_a^t f(s) d/ds [P(s)g′(s)] ds
⟹ [g(s)P(s)f′(s)]_a^t − ∫_a^t g′(s)P(s)f′(s) ds = [f(s)P(s)g′(s)]_a^t − ∫_a^t f′(s)P(s)g′(s) ds
⟹ P(t)g(t)f′(t) − P(a)g(a)f′(a) = P(t)f(t)g′(t) − P(a)f(a)g′(a)
⟹ P(t)[f(t)g′(t) − g(t)f′(t)] = P(a)[f(a)g′(a) − g(a)f′(a)].

Now, the RHS is a constant; call it k. Hence

P(t)[f(t)g′(t) − g(t)f′(t)] = k, ∀t ∈ I.

Remark. Abel’s formula states that any two solutions f and g of a second order linear self-adjoint ode d/dt [P(t) dx/dt] + Q(t)x = 0 satisfy P(t)W(f, g)(t) = constant, where W(f, g) = fg′ − f′g is the Wronskian.
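A quick numerical illustration (my own sketch, not from the notes): x″ + x = 0 is already self-adjoint with P(t) ≡ 1 and Q(t) ≡ 1, and its solutions f = cos, g = sin give P·W(f, g) = cos²t + sin²t, the constant 1 at every t:

```python
import math

def PW(t):
    """P(t) * W(f, g)(t) for P ≡ 1, f = cos, g = sin:
    W = f g' - f' g = cos(t)cos(t) - (-sin(t))sin(t)."""
    f, fp = math.cos(t), -math.sin(t)
    g, gp = math.sin(t), math.cos(t)
    return 1.0 * (f * gp - fp * g)

for t in (0.0, 0.7, 2.5, 10.0):
    assert abs(PW(t) - 1.0) < 1e-12
print("P * W is constant (= 1), as Abel's formula predicts")
```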


Theorem 5.3.3. Let f and g be two solutions of d/dt [P(t) dx/dt] + Q(t)x = 0 on I = [a, b] such that they have a common zero at t0 ∈ I. Then f and g are linearly dependent on I.

Proof. By Abel’s formula,

P(t)W(f, g)(t) = P(t0)W(f, g)(t0) = 0, ∀t ∈ I.

Now, P(t) ≠ 0 ∀t ∈ I implies

W(f, g)(t) = 0, ∀t ∈ I ⟹ f and g are linearly dependent on I.

Theorem 5.3.4. Let f and g be two non-trivial linearly dependent solutions of d/dt [P(t) dx/dt] + Q(t)x = 0 on I = [a, b]. Then f(t0) = 0 for some t0 ∈ I implies g(t0) = 0.

Proof. ∵ f and g are linearly dependent on I, ∃ constants c1, c2, not both zero, such that c1f(t) + c2g(t) = 0 ∀t ∈ I. As f and g are non-trivial solutions, neither is identically zero on I. Two cases arise.

Case 1 : If c1 = 0, then c2 ≠ 0 ⟹ c2g(t) = 0 ∀t ∈ I ⟹ g ≡ 0, which is a contradiction.
Case 2 : If c2 = 0, then c1 ≠ 0, which implies f ≡ 0 on I, which is again a contradiction.

Thus neither c1 nor c2 is zero. Now, f(t0) = 0 and c1f(t0) + c2g(t0) = 0 imply g(t0) = 0.

Theorem 5.3.5 (Sturm separation theorem). Let f and g be two linearly independent solutions of d/dt [P(t) dx/dt] + Q(t)x = 0 on I = [a, b]. Then, between any two consecutive zeros of f, there is precisely one zero of g.

Proof. Let t0 and t1 (> t0) be consecutive zeros of f in I. As f is continuous and has no zero in (t0, t1), f(t) has the same sign on (t0, t1).

Now, as g is linearly independent of f, by the previous theorems g(t0) ≠ 0 and g(t1) ≠ 0.

If possible, let g(t) ≠ 0 on (t0, t1). Then f/g is a differentiable function on [t0, t1], with (f/g)(t0) = (f/g)(t1) = 0.

∴ By Rolle’s theorem, ∃ t2 ∈ (t0, t1) such that d/dt (f/g)(t2) = 0.


But

d/dt (f/g)(t2) = [g(t2)f′(t2) − f(t2)g′(t2)] / {g(t2)}² = −W(f, g)(t2) / {g(t2)}².

Thus, we have W(f, g)(t2) = 0. But f and g are linearly independent solutions of a linear ode, so

W(f, g)(t) ≠ 0 ∀t ∈ I, in particular on (t0, t1).

Thus, we arrive at a contradiction. Hence, our assumption that g never vanishes on (t0, t1) is false.

∴ g vanishes at least once on (t0, t1). If possible, let t2, t3 ∈ (t0, t1), with t2 < t3, be two consecutive zeros of g. Then, by the first part of the proof (with the roles of f and g interchanged), f would vanish at least once in (t2, t3). But that contradicts that t0, t1 were consecutive zeros of f.

∴ g vanishes exactly once in (t0, t1).

Lecture 24

Theorem 5.3.6 (Sturm’s comparison theorem). Let P(t) be a differentiable function with P′(t) continuous and P(t) > 0 on I = [a, b], and let Q1(t), Q2(t) be continuous functions on I with Q2(t) > Q1(t) on I.

Let φ1 and φ2 be real valued solutions of

d/dt [P(t) dx/dt] + Q1(t)x = 0 and d/dt [P(t) dx/dt] + Q2(t)x = 0,

respectively. If t1 and t2 are consecutive zeros of φ1 in I, then φ2 has at least one zero in (t1, t2).

Proof. If possible, let φ2(t) ≠ 0 on (t1, t2). Without any loss of generality assume φ1(t), φ2(t) > 0 ∀t ∈ (t1, t2). By hypothesis,

d/dt [P(t)φ1′(t)] + Q1(t)φ1(t) = 0   (5.3)
d/dt [P(t)φ2′(t)] + Q2(t)φ2(t) = 0   (5.4)


Multiplying (5.3) by φ2(t) and (5.4) by −φ1(t) and summing up, we get

d/dt [P(t){φ1′(t)φ2(t) − φ1(t)φ2′(t)}] = {Q2(t) − Q1(t)}φ1(t)φ2(t).

Integrating from t1 to t2,

[P(t){φ1′(t)φ2(t) − φ1(t)φ2′(t)}]_{t=t1}^{t2} = ∫_{t1}^{t2} {Q2(t) − Q1(t)}φ1(t)φ2(t) dt.

Now, φ1(t1) = φ1(t2) = 0. This implies

P(t2)φ1′(t2)φ2(t2) − P(t1)φ1′(t1)φ2(t1) = ∫_{t1}^{t2} {Q2(t) − Q1(t)}φ1(t)φ2(t) dt.

But, by hypothesis, P(t2) > 0, φ1(t2) = 0 and φ1(t) > 0 on (t1, t2). This implies φ1′(t2) < 0. Also, φ2(t2) > 0. Therefore P(t2)φ1′(t2)φ2(t2) < 0.

Similarly, P(t1) > 0, φ1′(t1) > 0, φ2(t1) > 0 imply P(t1)φ1′(t1)φ2(t1) > 0.

Thus, P(t2)φ1′(t2)φ2(t2) − P(t1)φ1′(t1)φ2(t1) < 0.

But Q2(t) > Q1(t) on I, in particular on [t1, t2], and φ1(t)φ2(t) > 0 there. This implies

∫_{t1}^{t2} {Q2(t) − Q1(t)}φ1(t)φ2(t) dt > 0,

which is a contradiction.

Thus, our initial assumption that φ2 never vanishes on (t1, t2) is false. Hence, ∃ t3 ∈ (t1, t2) such that φ2(t3) = 0.

Example. Consider

d²x/dt² + A²x = 0   (5.5)
d²x/dt² + B²x = 0   (5.6)

where A, B are fixed real numbers with B > A > 0. Then φ1(t) = sin(At) and φ2(t) = sin(Bt) solve (5.5) and (5.6) respectively.

The consecutive zeros of φ1 are nπ/A and (n + 1)π/A, n ∈ N. The Sturm comparison theorem guarantees that ∃ ξn ∈ (nπ/A, (n + 1)π/A) such that φ2(ξn) = 0.

Remark. Between two consecutive zeros of φ1, the comparison theorem guarantees the existence of a zero of φ2, but not its uniqueness.

5.4 Sturm-Liouville Problems

Definition 5.4.1. Consider a boundary value problem consisting of

1. a second order homogeneous linear ode of the form

d/dx [p(x) dy/dx] + [q(x) + λr(x)]y = 0,

where p, q, r are real valued functions of x on I = [a, b]; p(x) > 0 with continuous derivative, q and r just continuous, and r(x) > 0 on I. Also, λ is a parameter independent of x.

2. two supplementary (boundary) conditions of the form

A1y(a) + A2y′(a) = 0
B1y(b) + B2y′(b) = 0,

where A1, A2, B1, B2 ∈ R, A1, A2 are not both zero and B1, B2 are not both zero.

We are interested in finding the values of λ for which the above system admits non-trivial solutions. Problems of this type are called Sturm-Liouville problems (systems).

Example. Consider the ode d²y/dx² + λy = 0 on [0, π], with y(0) = 0, y(π) = 0. This is an example of a Sturm-Liouville problem.

Lecture 25

Let L : C²(R) → C⁰(R) be given by

L(f)(x) = d/dx [p(x) df/dx] + q(x)f(x).

Then the Sturm-Liouville equation d/dx [p(x) dy/dx] + [q(x) + λr(x)]y = 0 can be re-written as

L(y) = −λr(x)y.


If r(x) = 1, then we say that −λ is an eigen value of L if the equation admits a non-trivial solution. For a general r(x) we say that (−λ) is an eigen value of L with respect to the weight function r(x).

Remark. 1. Following the standard notation for Sturm-Liouville problems, we will call λ, and not (−λ), the eigen values of L.

2. In fact, we will say that λ (for which a solution y ≢ 0 exists) is an eigen value (characteristic value) of the ode.

Example. d²y/dx² + λy = 0, y(0) = 0, y(π) = 0.

Case 1 : (λ = 0) The equation becomes d²y/dx² = 0, whose general solution y = c1 + c2x satisfies the boundary conditions only when c1 = c2 = 0, i.e. only the trivial solution survives.

Case 2 : (λ > 0) The general solution is y(x) = c1 cos(√λ x) + c2 sin(√λ x). Imposing the boundary conditions, we find that non-trivial solutions exist precisely for λ = n², n ∈ N.

Case 3 : (λ < 0) The general solution is y(x) = c1 exp(√(−λ) x) + c2 exp(−√(−λ) x). Imposing the boundary conditions gives that no non-trivial solution exists.

Thus, the given problem has only positive eigen values, given by λ = n², n ∈ N.
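The eigen values can also be located numerically by a shooting method (my own illustrative sketch, not from the notes): integrate y″ = −λy with y(0) = 0, y′(0) = 1 by RK4 and bisect on the λ for which y(π) = 0.

```python
import math

def y_at_pi(lam, steps=2000):
    """Integrate y'' = -lam*y, y(0)=0, y'(0)=1 on [0, pi] with RK4
    (written as the first-order system (y, y')) and return y(pi)."""
    h = math.pi / steps
    y, v = 0.0, 1.0
    f = lambda y, v: (v, -lam * y)
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

def eigenvalue_between(lo, hi):
    """Bisect the boundary function lam -> y(pi; lam) on (lo, hi)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if y_at_pi(lo) * y_at_pi(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

for n in (1, 2, 3):
    lam = eigenvalue_between(n*n - 0.5, n*n + 0.5)
    assert abs(lam - n*n) < 1e-3
print("first eigenvalues found near 1, 4, 9, matching λ_n = n²")
```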

Remark. The boundary / initial conditions play a vital role in determining the eigen values.

Example. Consider the previous problem with boundary conditions y(0) = 0, y(π/2) = 0.

Case 1 : (λ = 0) Only the trivial solution exists.

Case 2 : (λ > 0) The general solution is y(x) = c1 cos(√λ x) + c2 sin(√λ x). Imposing the boundary conditions, we find that non-trivial solutions exist for λ = 4n², n ∈ N.

Case 3 : (λ < 0) The general solution is y(x) = c1 exp(√(−λ) x) + c2 exp(−√(−λ) x). Imposing the boundary conditions gives that no non-trivial solution exists.

Question 5.4.2. 1. Is it possible that, for some boundary conditions, the previous problem has no eigen values?

2. Does every Sturm-Liouville problem have infinitely many eigen values?

3. Will every Sturm-Liouville problem have isolated eigen values?


Theorem 5.4.3. Consider a general Sturm-Liouville problem

d/dx [p(x) dy/dx] + [q(x) + λr(x)]y = 0   (5.7)

with boundary conditions

A1y(α) + A2y′(α) = 0
B1y(β) + B2y′(β) = 0,

where A1, A2 are not both 0 and B1, B2 are not both 0. Then,

1. There exist infinitely many eigen values λn, n ∈ N, of the given problem. They can be arranged in increasing order λ1 < λ2 < · · · < λn < λn+1 < · · ·, with λn → ∞ as n → +∞.

2. For each eigen value λn, ∃ a one parameter family of eigen functions φn. [∵ constant multiples of eigen functions are again eigen functions.]

3. Each eigen space is one dimensional: two eigen functions φn and φn′ corresponding to the same eigen value λn differ only by a constant multiple.

4. Each eigen function φn corresponding to the eigen value λn has exactly (n − 1) zeros in the open interval (α, β).

The proof of this theorem is omitted.

Example. Consider d²y/dx² + λy = 0, y(0) = y(π) = 0. Then, the eigen values are λn = n², n ∈ N. Hence, we have

λ1 = 1 < 4 < 9 < · · · < n² < (n + 1)² < · · ·

For λ1 = 1 we have the eigen function φ1(x) = sin x.
For λ2 = 4 we have the eigen function φ2(x) = sin(2x).
For λ5 = 25 we have the eigen function φ5(x) = sin(5x).
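Property 4 of Theorem 5.4.3 is easy to verify here (a sketch of mine): sin(nx) vanishes in (0, π) exactly at kπ/n, k = 1, …, n − 1, i.e. at n − 1 interior points.

```python
import math

def interior_zeros_of_sin_n(n, steps=9973):
    """Count sign changes of sin(n x) on a fine grid over (0, pi);
    steps is prime so grid points never land exactly on a zero."""
    pts = [math.pi * k / steps for k in range(1, steps)]
    vals = [math.sin(n * t) for t in pts]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

for n in (1, 2, 5):
    assert interior_zeros_of_sin_n(n) == n - 1
print("φ_n = sin(nx) has exactly n - 1 zeros in (0, π)")
```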


Remark. Recall from linear algebra that eigen vectors corresponding to different eigen values are linearly independent.

Definition 5.4.4 (Orthogonal functions). Let f and g be two continuous functions on [a, b] and let r(x) be continuous. Then f and g are said to be orthogonal with respect to r(x) iff

∫_a^b f(x)g(x)r(x) dx = 0.

Observation 5.4.5. Cⁿ([a, b]) is an inner product space with respect to the inner product

⟨f, g⟩ = ∫_a^b f(x)g(x)r(x) dx.

Definition 5.4.6. Let {fn}n∈N be an infinite collection of functions on [a, b]. {fn} is said to be an orthogonal system with respect to the weight function r(x) on [a, b] if for m ≠ n, fm is orthogonal to fn, i.e.

∫_a^b fm(x)fn(x)r(x) dx = 0, for m ≠ n.

Theorem 5.4.7. Consider the Sturm-Liouville problem d/dx [p(x) dy/dx] + [q(x) + λr(x)]y = 0 with the boundary conditions above. Let {λn}, n ∈ N, be the eigen values, with φn the corresponding eigen functions. Then the set {φn}n∈N is an orthogonal set of functions with respect to r(x) over [a, b].

[The proof can be found in the book of S. L. Ross (given in the reference).]
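For the model problem y″ + λy = 0, y(0) = y(π) = 0 (weight r ≡ 1), the theorem predicts ∫₀^π sin(mx)sin(nx) dx = 0 for m ≠ n. A quick check by composite Simpson's rule (an illustrative sketch of mine):

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * fn(a + k * h)
    return s * h / 3

def inner(m, n):
    return simpson(lambda x: math.sin(m * x) * math.sin(n * x), 0.0, math.pi)

assert abs(inner(1, 2)) < 1e-8
assert abs(inner(2, 5)) < 1e-8
assert abs(inner(3, 3) - math.pi / 2) < 1e-8   # ||sin(3x)||^2 = pi/2
print("distinct eigenfunctions are orthogonal on [0, π]")
```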


Chapter 6

Variation of parameters

Lecture 26

Reference : Differential Equations by S. L. Ross, 3rd edition.

Consider the general n-th order linear inhomogeneous ode with constant coefficients, given by

dⁿy/dxⁿ + c1 d^{n−1}y/dx^{n−1} + · · · + c_{n−1} dy/dx + c_n y = F(x).

Reduced equation:

dⁿy/dxⁿ + c1 d^{n−1}y/dx^{n−1} + · · · + c_{n−1} dy/dx + c_n y = 0.

Auxiliary equation:

mⁿ + c1 m^{n−1} + · · · + c_{n−1} m + c_n = 0.

Let m1, m2, · · ·, mn be the n roots of the auxiliary equation. Then the complementary function is built from terms of the form e^{m_i x}, x e^{m_i x}, · · ·, etc. (repeated roots contributing the powers of x).

Particular integral:

P(D)y = F(x) ⟹ y = F(x)/P(D),

where P(D) is a polynomial in D ≡ d/dx.

General solution:

y = C.F. + P.I.

Remark. This method of finding the particular integral is helpful only if F(x) has one of the following forms:

1. polynomial in x (including constants),

2. exponentials,

3. trigonometric functions,

4. combinations of the above three types of functions.

We look for other methods to find the particular integral for a broader class of F(x). One such method is variation of parameters.

Example. Let us start with the second order linear ode d²y/dx² + y = tan x.

The complementary function is y_c = c1 cos x + c2 sin x.

We now replace the arbitrary constants in the complementary function by arbitrary (twice differentiable) functions. Thus, we consider

f(x) = v1(x) cos x + v2(x) sin x.

If f solves the given ode then f″(x) + f(x) = tan x. Now,

f′(x) = v1′(x) cos x − v1(x) sin x + v2′(x) sin x + v2(x) cos x.

We impose the condition v1′(x) cos x + v2′(x) sin x = 0. Thus, we get

f′(x) = −v1(x) sin x + v2(x) cos x
⟹ f″(x) = −(v1′(x) + v2(x)) sin x + (v2′(x) − v1(x)) cos x.

Since we want f to be a solution of the given ode, we have

f″(x) + f(x) = tan x ⟹ −v1′(x) sin x + v2′(x) cos x = tan x.

Thus, we have the following 2 × 2 system:

(v1′(x), v2′(x))ᵀ = [cos x  sin x; −sin x  cos x]⁻¹ (0, tan x)ᵀ = (−sin x tan x, sin x)ᵀ.

∴ v1(x) = sin x − ln(sec x + tan x) + c4 and v2(x) = −cos x + c3.

Thus, the particular integral is given by yp = c3 sin x + c4 cos x − cos x ln(sec x + tan x). Choosing c3 = c4 = 0 we get the particular integral yp = −cos x ln(sec x + tan x).

Remark. Since c3 sin x + c4 cos x is part of the complementary function, it contributes nothing new to the general solution; hence we can neglect c3, c4 in the particular integral itself.

6.1 General theory for second order linear odes

Let a0(x) d²y/dx² + a1(x) dy/dx + a2(x)y = F(x) be the general second order linear ode, where a0(x) ≠ 0. Let y_c = c1y1(x) + c2y2(x) be the known complementary function. Replace c1, c2 in the complementary function by two arbitrary C¹ functions v1(x), v2(x) respectively, and assume yp(x) = v1(x)y1(x) + v2(x)y2(x) is a particular integral. Then,

yp′(x) = {v1′(x)y1(x) + v2′(x)y2(x)} + {v1(x)y1′(x) + v2(x)y2′(x)} = v1(x)y1′(x) + v2(x)y2′(x),

if we impose the condition v1′(x)y1(x) + v2′(x)y2(x) = 0. Then, we obtain

yp″(x) = {v1′(x)y1′(x) + v2′(x)y2′(x)} + {v1(x)y1″(x) + v2(x)y2″(x)}.

Since yp(x) is a particular integral, it must satisfy the given ode. Hence,

a0(x)yp″(x) + a1(x)yp′(x) + a2(x)yp(x) = F(x)
⟹ {a0(x)y1″(x) + a1(x)y1′(x) + a2(x)y1(x)}v1(x) + {a0(x)y2″(x) + a1(x)y2′(x) + a2(x)y2(x)}v2(x) + a0(x){v1′(x)y1′(x) + v2′(x)y2′(x)} = F(x).

Since y1, y2 are solutions of the homogeneous ode, the first two braces vanish. Thus, we are left with

v1′(x)y1′(x) + v2′(x)y2′(x) = F(x)/a0(x).


Thus, we have the two conditions

v1′(x)y1(x) + v2′(x)y2(x) = 0
v1′(x)y1′(x) + v2′(x)y2′(x) = F(x)/a0(x).

As the coefficient matrix is the Wronskian matrix of the linearly independent solutions y1, y2 of the reduced equation, it is invertible. Hence, by Cramer's rule, we have the unique solutions

v1′(x) = −F(x)y2(x) / (a0(x)W[y1, y2](x))
v2′(x) = F(x)y1(x) / (a0(x)W[y1, y2](x)).

(One can check the signs on the earlier example: y1 = cos x, y2 = sin x, a0 = 1, W = 1 and F = tan x give v1′ = −sin x tan x and v2′ = sin x, as before.)

Thus, the particular integral of the given problem is yp(x) = v1(x)y1(x) + v2(x)y2(x), where

v1(x) = −∫_a^x F(t)y2(t) / (a0(t)W[y1, y2](t)) dt
v2(x) = ∫_a^x F(t)y1(t) / (a0(t)W[y1, y2](t)) dt,

where a is the left end point of the interval I = [a, b] and x ∈ I.

Remark. This approach is equally valid for n-th order linear odes.

Example.

d³y/dx³ − 6 d²y/dx² + 11 dy/dx − 6y = e^x

Auxiliary equation : m³ − 6m² + 11m − 6 = 0, whose roots are m = 1, 2, 3.

Complementary function : y_c = c1e^x + c2e^{2x} + c3e^{3x}.

Let yp = v1(x)e^x + v2(x)e^{2x} + v3(x)e^{3x} be a particular integral of the given ode, where v1(x), v2(x), v3(x) are C¹ functions of x. Then

yp′(x) = (v1′(x)e^x + v2′(x)e^{2x} + v3′(x)e^{3x}) + (v1e^x + 2v2e^{2x} + 3v3e^{3x}).

Condition 1 : v1′(x)e^x + v2′(x)e^{2x} + v3′(x)e^{3x} = 0, so yp′(x) = v1e^x + 2v2e^{2x} + 3v3e^{3x}. Similarly,

yp″(x) = (v1′(x)e^x + 2v2′(x)e^{2x} + 3v3′(x)e^{3x}) + (v1e^x + 4v2e^{2x} + 9v3e^{3x}).

Condition 2 : v1′(x)e^x + 2v2′(x)e^{2x} + 3v3′(x)e^{3x} = 0, so yp″(x) = v1(x)e^x + 4v2(x)e^{2x} + 9v3(x)e^{3x}. Therefore

yp‴(x) = (v1′(x)e^x + 4v2′(x)e^{2x} + 9v3′(x)e^{3x}) + (v1(x)e^x + 8v2e^{2x} + 27v3e^{3x}).

Thus, substituting into yp‴(x) − 6yp″(x) + 11yp′(x) − 6yp(x) = e^x, the non-derivative terms cancel and we are left with

v1′(x)e^x + 4v2′(x)e^{2x} + 9v3′(x)e^{3x} = e^x.

Thus, we have the 3 × 3 system

[e^x  e^{2x}  e^{3x}
 e^x  2e^{2x}  3e^{3x}
 e^x  4e^{2x}  9e^{3x}] (v1′(x), v2′(x), v3′(x))ᵀ = (0, 0, e^x)ᵀ.

Since W[e^x, e^{2x}, e^{3x}] ≠ 0, this has a unique solution; integrating (and dropping the constants of integration),

v1(x) = x/2, v2(x) = e^{−x}, v3(x) = −(1/4)e^{−2x}.

∴ yp(x) = (1/2)xe^x + (3/4)e^x, and the general solution is given by

y(x) = c1e^x + c2e^{2x} + c3e^{3x} + (1/2)xe^x.

Note, we have neglected the term (3/4)e^x, as it can be absorbed into the complementary function itself.
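A direct check (an illustrative sketch of mine) that yp = ½ x e^x satisfies y‴ − 6y″ + 11y′ − 6y = e^x, using the analytic derivatives yp⁽ᵏ⁾ = ½(x + k)e^x:

```python
import math

def d(k, x):
    """k-th derivative of y_p = x e^x / 2, which is (x + k) e^x / 2."""
    return 0.5 * (x + k) * math.exp(x)

for x in (-1.0, 0.0, 2.0):
    lhs = d(3, x) - 6 * d(2, x) + 11 * d(1, x) - 6 * d(0, x)
    assert abs(lhs - math.exp(x)) < 1e-12
print("y_p = x e^x / 2 solves the inhomogeneous equation")
```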


Chapter 7

Liapunov functions

Lecture 27

Reference : Differential Equations by S. L. Ross, 3rd edition.

Let us consider the linear system

dx1/dt = 2x1 + x2
dx2/dt = x1 + x2,

i.e.

d/dt (x1, x2)ᵀ = [2  1; 1  1] (x1, x2)ᵀ.

The eigen values of the coefficient matrix are λ = (3 ± √5)/2, both positive. Writing λ1, λ2 for them and v1, v2 for the corresponding eigenvectors, the general solution is

(x1(t), x2(t))ᵀ = c1e^{λ1t}v1 + c2e^{λ2t}v2.

Further, the only equilibrium point of the system is x1 = 0 = x2.

Now, as t increases, ||(x1(t), x2(t))|| increases for every non-trivial solution (both eigen values are positive), i.e. the solution curve moves away from the equilibrium point (0, 0) as t increases. Hence, the origin is an unstable equilibrium point.
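The eigen values can be checked directly (a small sketch of mine): for a 2 × 2 matrix they are the roots of λ² − (tr A)λ + det A = 0, and here tr A = 3, det A = 1.

```python
import math

# coefficient matrix A = [[2, 1], [1, 1]]
tr, det = 2 + 1, 2 * 1 - 1 * 1
disc = math.sqrt(tr * tr - 4 * det)            # sqrt(9 - 4) = sqrt(5)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2  # (3 ± √5)/2
assert abs(lam1 - (3 + math.sqrt(5)) / 2) < 1e-12
assert lam1 > 0 and lam2 > 0                   # both positive → origin unstable
print(lam1, lam2)
```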

Remark. We know how to check the stability of the equilibrium points of a linear system. What about nonlinear systems?

7.1 Stability of non-linear odes

Consider a nonlinear system given by

dx1/dt = P(x1, x2)
dx2/dt = Q(x1, x2).

Equilibrium points of this system satisfy P(x1, x2) = Q(x1, x2) = 0.


Example. Consider the nonlinear system

dx1/dt = P(x1, x2)
dx2/dt = Q(x1, x2)

where P(x1, x2) = x1² + x2² and Q(x1, x2) = sin(x1x2). Thus, the origin is the only equilibrium point.

Example. Taking P(x1, x2) = x1² + x2² and Q(x1, x2) = x1² + x2² − 2 in the above example, the system has no equilibrium point: P = 0 forces x1 = x2 = 0, where Q = −2 ≠ 0.

In this chapter we will be concerned with the nature of an equilibrium point rather than with finding one.

7.1.1 Liapunov’s direct method

We are interested in whether a given equilibrium point of a first order ode system is stable or not.

Definition 7.1.1. Let E(x, y) be a differentiable function of (x, y) on some domain D ⊂ R² containing the origin. Then E is said to be

1. positive definite if E(0, 0) = 0 and E(x, y) > 0, ∀(x, y) ≠ (0, 0);

2. positive semi-definite if E(x, y) ≥ 0, ∀(x, y) ∈ D;

3. negative definite if E(x, y) = 0 iff (x, y) = (0, 0), and E(x, y) < 0 otherwise;

4. negative semi-definite if E(x, y) ≤ 0, ∀(x, y) ∈ D.

Setup : Let us consider a system of two non-linear first order odes

dx1/dt = P(x1, x2), dx2/dt = Q(x1, x2).   (1)


Let E(x, y) be a differentiable function defined on a domain D containing the origin and the range of the solution (x1, x2). Then, if x1(t), x2(t) are solutions of the system (1), E(x1, x2) can be considered as a differentiable function of t, and

dE/dt = (∂E/∂x1)(dx1/dt) + (∂E/∂x2)(dx2/dt) = (∂E/∂x1)P(x1, x2) + (∂E/∂x2)Q(x1, x2).

Definition 7.1.2. Let E(x, y) be any differentiable function on D, as defined above. Then the derivative of E with respect to the system

dx1/dt = P(x1, x2), dx2/dt = Q(x1, x2)

is given by

dE/dt = (∂E/∂x1)P(x1, x2) + (∂E/∂x2)Q(x1, x2).

Definition 7.1.3. Consider the system

dx1/dt = P(x1, x2), dx2/dt = Q(x1, x2)

and E(x, y) as given in the setup. If E is positive definite on D and the derivative of E with respect to the system is negative semi-definite, then we say that E is a Liapunov function for the given system. In other words, a differentiable function E(x, y) on D is said to be a Liapunov function for the given system if

1. E is positive definite, and

2. dE/dt is negative semi-definite.


Example. Consider

dx1/dt = −x1 + x2²
dx2/dt = −x2 + x1².

Then (0, 0) is an equilibrium point. Consider E(x1, x2) = x1² + x2²; the derivative of E with respect to the given system is

dE/dt = 2x1(−x1 + x2²) + 2x2(−x2 + x1²) = −2(x1² + x2²) + 2x1x2(x1 + x2).

Now, E is positive definite and dE/dt is negative semidefinite on the region |x1| < 1/2, |x2| < 1/2. Therefore, E is a Liapunov function for the given system there.
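A brute-force grid check (my own sketch, not part of the notes): sampling the square |x1|, |x2| ≤ 1/2 confirms dE/dt ≤ 0 for this system:

```python
def dE_dt(x1, x2):
    """Derivative of E = x1^2 + x2^2 along dx1/dt = -x1 + x2^2,
    dx2/dt = -x2 + x1^2."""
    return 2 * x1 * (-x1 + x2 ** 2) + 2 * x2 * (-x2 + x1 ** 2)

n = 101
grid = [-0.5 + k / (n - 1) for k in range(n)]   # samples in [-0.5, 0.5]
assert all(dE_dt(a, b) <= 1e-12 for a in grid for b in grid)
print("dE/dt ≤ 0 on the square |x1|, |x2| ≤ 1/2")
```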

Remark. 1. We will be considering systems for which the origin (0, 0) is an equilibrium point.

2. In case (0, 0) is not an equilibrium point, we will translate the equilibrium point to (0, 0) and work with the equivalent system.

Theorem 7.1.4. Consider a system of first order odes

dx1/dt = P(x1, x2), dx2/dt = Q(x1, x2).   (7.1)

Assume (7.1) has an isolated equilibrium (critical) point at the origin, and that P and Q have continuous first order partial derivatives. If ∃ a Liapunov function for (7.1) in some neighborhood of the origin, then the origin is a stable equilibrium point.

Proof. Since the Liapunov function E exists in a neighborhood of the origin, ∃ε > 0 such that E is defined on B_2ε(0, 0). Define Kε = {(x, y) ∈ R² | x² + y² = ε²}.

As E is continuous and Kε is compact, E attains its minimum on Kε, i.e. ∃(xε, yε) ∈ Kε such that

E(xε, yε) = inf_{(x,y)∈Kε} E(x, y) = m (say).   (7.2)

As E is a Liapunov function, E is positive definite, so m > 0. Moreover, E is continuous with E(0, 0) = 0, so

∃ 0 < δ < ε such that E(x, y) < m, ∀(x, y) ∈ Kδ,

where Kδ = {(x, y) ∈ R² | x² + y² ≤ δ²}.


Figure 7.1: Diagrammatic representation of the neighborhoods

Let C be any path satisfying the system (7.1), i.e. C(t) = (f(t), g(t)) where f and g solve (7.1), starting within the region (Kδ)°, i.e. C(t0) ∈ Bδ(0, 0), i.e. f(t0)² + g(t0)² < δ². This implies E(f(t0), g(t0)) < m.

Since dE/dt is negative semi-definite, (dE/dt)[f(t), g(t)] ≤ 0 ∀t such that (f(t), g(t)) ∈ B_2ε(0, 0), so E must be non-increasing along C(t). This implies E[f(t), g(t)] ≤ E[f(t0), g(t0)] < m, which forces the curve C(t) to stay inside the region bounded by Kε: otherwise there would be at least one point on the curve with E[C(t)] ≥ m. Since ε > 0 was arbitrary, the origin is a stable equilibrium point.

Lecture 28

Definition 7.1.5 (Stable critical point). Let P be a critical point of the system dX/dt = F(X). The critical point P is called stable if for any ε > 0, ∃δ > 0 such that for any initial condition X(t0) = X0 ∈ Bδ(P), the solution of the given system satisfies X(t) ∈ Bε(P), ∀t ≥ t0. (See figure 7.2.)

Definition 7.1.6 (Asymptotically stable). A critical point P of the system dX/dt = F(X) is said to be asymptotically stable if P is a stable critical point and X(t) → P as t → ∞. (See figure 7.3.)

Figure 7.2: Diagrammatic representation of a stable critical point

Figure 7.3: Diagrammatic representation of an asymptotically stable critical point

Figure 7.4: Diagrammatic representation of the neighborhoods

Theorem 7.1.7. Let dx/dt = P(x, y), dy/dt = Q(x, y) and let the origin be a critical point. Further assume P and Q have continuous first order partial derivatives ∀(x, y) ∈ D. If there exists a Liapunov function E(x, y) in the region D containing (0, 0) whose derivative dE/dt with respect to the system is negative definite there, then (0, 0) is asymptotically stable.

Proof. Since dE/dt is negative definite, it is in particular negative semi-definite. Hence, by Theorem 7.1.4, (0, 0) is a stable critical point.

Let C(t) = (f(t), g(t)) be a solution curve of the given system with initial condition (f(t0), g(t0)) ∈ Kδ (for some fixed ε > 0, with δ as obtained in the previous theorem). (See figure 7.4.)

Now, E(f(t), g(t)) − E(f(t0), g(t0)) = ∫_{t0}^t (dE/ds)(f(s), g(s)) ds. Also, as dE/dt is negative definite, E has to be strictly decreasing along C(t) except at the origin.

If possible, let C(t) ↛ (0, 0) as t → ∞. But C(t) ∈ Kε (due to stability), which means it is bounded.

Case 1 : If C(t) converges to some point γ = (γ1, γ2) ≠ (0, 0), then E(γ) > 0 while (dE/dt)(γ) < 0. Since (dE/dt)(C(t)) → (dE/dt)(γ) < 0, there is a k > 0 with (dE/dt)(C(t)) ≤ −k for all large t, whence

E(f(t), g(t)) − E(f(t0), g(t0)) ≤ −k(t − t0) ⟹ E(f(t), g(t)) → −∞ as t → ∞,

which contradicts the fact that E is positive definite.

Case 2 : Suppose C(t) remains within Kε but does not converge. As C(t) is bounded, ∃λ > 0 and a sequence {tn} → ∞ such that ||C(tn) − (0, 0)|| ≥ λ. Then

E(C(tn)) − E(C(t0)) ≤ ∫_{t0}^{tn} (dE/ds)(C(s)) ds.

Figure 7.5: Diagrammatic representation of the solution curve

Now, dE/dt is continuous, so it attains a maximum on the compact set Kε \ Bλ(0, 0) (see figure 7.5); let −k be this maximum. As dE/dt is negative definite, −k < 0. Therefore

E(C(tn)) − E(C(t0)) ≤ −k(tn − t0) ⟹ E(C(tn)) → −∞ as n → ∞,

which again contradicts that E is positive definite.

Thus, our assumption that C(t) ↛ (0, 0) as t → ∞ is false. Hence, the origin is an asymptotically stable critical point.

Example. Consider the system

dx/dt = x + x² − 3xy
dy/dt = −2x + y + 3y².

The origin is a critical point. We will find that it is an unstable critical point.

To determine the instability of critical points we need the following theorems (stated without proof).

Lecture 29

7.2 Instability theorems

Theorem 7.2.1 (Liapunov instability theorem). Let the origin be a critical point of the system dx/dt = P(x, y), dy/dt = Q(x, y). Suppose there exists a continuously differentiable function E(x, y) such that

1. E is positive definite, and

2. dE/dt > 0 in a deleted neighborhood of the origin.

Then the origin is an unstable equilibrium point. (See figure 7.6.)

Figure 7.6: Diagrammatic representation of the Liapunov instability condition

Theorem 7.2.2 (Chataev instability theorem). Let the origin be a critical point of the system dx/dt = P(x, y), dy/dt = Q(x, y). Suppose there exists a continuously differentiable function E(x, y) on a neighborhood U of the origin and a non-empty set U1 ⊆ U such that

1. (0, 0) ∈ U1,

2. E(x, y) > 0, ∀(x, y) ∈ U1 \ {(0, 0)},

3. Ė = (∂E/∂x)P(x, y) + (∂E/∂y)Q(x, y) > 0, ∀(x, y) ∈ U1 \ {(0, 0)},

4. E(x, y) = 0, ∀(x, y) ∈ ∂U1.

Then (0, 0) is an unstable critical point. (See figure 7.7.)

Example.

dx/dt = x + x² − 3xy = P(x, y)
dy/dt = −2x + y + 3y² = Q(x, y)

For the positive definite function E(x, y) = x² + y²,

Ė(x, y) = 2xP(x, y) + 2yQ(x, y) = 2[x² + y² + (x − 3y)(x² − y²) + xy(y − 2)].

Now, Ė(0, y) = 2[y² + 3y³]. If y ≠ 0 is sufficiently small in magnitude, then Ė(0, y) > 0.

Thus, by the Chataev instability theorem, the origin is unstable.
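The algebraic identity for Ė above can be double-checked numerically (a sketch of mine, not part of the notes):

```python
def P(x, y): return x + x**2 - 3*x*y
def Q(x, y): return -2*x + y + 3*y**2

def Edot_direct(x, y):      # 2x P + 2y Q
    return 2*x*P(x, y) + 2*y*Q(x, y)

def Edot_factored(x, y):    # the factored form given in the notes
    return 2*(x**2 + y**2 + (x - 3*y)*(x**2 - y**2) + x*y*(y - 2))

for x, y in ((0.1, -0.2), (0.5, 0.3), (-0.4, 0.25)):
    assert abs(Edot_direct(x, y) - Edot_factored(x, y)) < 1e-12
assert Edot_factored(0.0, 0.01) > 0   # Ė(0, y) = 2(y² + 3y³) > 0 for small y > 0
print("Ė = 2[x² + y² + (x − 3y)(x² − y²) + xy(y − 2)] verified")
```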


Figure 7.7: Diagrammatic representation of the Chataev instability condition

Remark. 1. There are many stability and instability theorems for checking the nature of the critical points of a non-linear system; the list of such theorems is far from exhaustive.

2. These are all sufficient conditions, not necessary ones.

