Exercises for Stochastic Calculus

Date posted: 26-Oct-2014
Abdelkader BENHARI

Probability measure, Review of probability theory, Markov chains, Recurrence transition matrices,Stationary distributions, Hitting times, Poisson processes, Renewal theory, Branching processes, Branching and point processes, Martingales in discrete time, Brownian motion, Martingales in continuous time

Series 1: Probability measure

1.

Show that if A and B belong to the σ-algebra F, then also B\A ∈ F (for the definition of a σ-algebra, see Definition 1.3). Also show that F is closed under countable intersections, i.e. if Ai ∈ F for i = 1, 2, . . . , then ∩∞i=1Ai ∈ F.

Proof. 1) B\A = B ∩ Ac = (Bc ∪ A)c, and since Bc ∈ F, we have Bc ∪ A ∈ F, so (Bc ∪ A)c ∈ F.

2) Take Ai ∈ F for i = 1, 2, . . . . Since each Ai^c ∈ F, we have ∪∞i=1 Ai^c ∈ F, and it follows that

(∪∞i=1 Ai^c)^c = ∩∞i=1 Ai ∈ F.

2.

Throw a fair die once. Assume that we can only observe whether the number obtained is small, A = {1, 2, 3}, and whether the number is odd, B = {1, 3, 5}. Describe the resulting probability space; in particular, describe the σ-algebra F generated by A and B in terms of a suitable partition (for the definition of a partition, see Definition 1.9) of the sample space.

Proof. Looking at a Venn diagram, we conclude that the generating partition has four blocks, A ∩ B = {1, 3}, A ∩ Bc = {2}, Ac ∩ B = {5} and (A ∪ B)c = {4, 6}, none of which is empty. These blocks can be combined in 2^4 = 16 different ways to generate the σ-algebra F given below.

F = {∅, Ω, A, B, Ac, Bc, A ∪ B, A ∪ Bc, Ac ∪ B, Ac ∪ Bc, A ∩ B, A ∩ Bc, Ac ∩ B, Ac ∩ Bc, (A ∩ Bc) ∪ (Ac ∩ B), (A ∩ B) ∪ (Ac ∩ Bc)}

= {∅, Ω, {1, 2, 3}, {1, 3, 5}, {4, 5, 6}, {2, 4, 6}, {1, 2, 3, 5}, {1, 2, 3, 4, 6}, {1, 3, 4, 5, 6}, {2, 4, 5, 6}, {1, 3}, {2}, {5}, {4, 6}, {2, 5}, {1, 3, 4, 6}}.

The probability measure of each set in F may be deduced from the additivity of the probability measure and the measure of each block of the partition: P(A ∩ B) = 2/6, P(A ∩ Bc) = 1/6, P(Ac ∩ B) = 1/6 and P((A ∪ B)c) = 2/6.
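To make the construction above concrete, here is a small Python sketch (an added illustration, not part of the original solution): every member of the generated σ-algebra is a union of the four partition blocks, so enumerating all 2^4 subsets of blocks recovers the 16 sets listed.

```python
from itertools import combinations

# Blocks of the partition generated by A = {1,2,3} and B = {1,3,5}.
atoms = [frozenset({1, 3}), frozenset({2}), frozenset({5}), frozenset({4, 6})]

# Every member of the generated sigma-algebra is a union of blocks:
# each of the 2^4 subsets of blocks gives a distinct event.
sigma = set()
for r in range(len(atoms) + 1):
    for combo in combinations(atoms, r):
        sigma.add(frozenset().union(*combo))

print(len(sigma))                      # 16
print(frozenset({1, 2, 3}) in sigma)   # True: the event A is generated
```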

3.

Given a probability space (Ω, F, P) and functions X : Ω → R, Y : Ω → R, define Z = max{X, Y}.

1. Show that Z is F-measurable if both X and Y are F-measurable.

2. Find a special case where Z is F-measurable even though neither X nor Y is.

Proof. 1)

{ω ∈ Ω : Z(ω) ≤ x} = {ω ∈ Ω : max{X(ω), Y(ω)} ≤ x}
= {ω ∈ Ω : X(ω) ≤ x} ∩ {ω ∈ Ω : Y(ω) ≤ x} ∈ F,


where the last conclusion is based on the result of Proposition 3.2.
2) Suppose we toss two coins and the σ-algebra F is generated by the single set A = {HT, TH}, i.e. F = {∅, Ω, A, Ac} = {∅, {HH, HT, TH, TT}, {HT, TH}, {HH, TT}}. Let

X = 1 if ω ∈ {HT}, 0 if ω ∈ {HH, TT, TH};    Y = 1 if ω ∈ {TH}, 0 if ω ∈ {HH, TT, HT}.

Then

Z = 1 if ω ∈ {HT, TH}, 0 if ω ∈ {HH, TT},

hence X and Y are not F-measurable while Z is.

4.

Toss a fair coin n = 4 times. Describe the sample space Ω. We want to consider functions X : Ω → {−1, 1}.

1. Describe the probability space (Ω, F1, P1) that arises if we want each outcome to be a legitimate event. How many F1-measurable functions X with E[X] = 0 are there?

2. Now describe the probability space (Ω, F2, P2) that arises if we want that only combinations of sets of the type

Ai = {ω ∈ Ω : number of heads = i}, i = 0, 1, 2, 3, 4,

should be events. How many F2-measurable functions X with E[X] = 0 are there?

3. Solve 2) above for some other n.

Proof. 1) Each coin toss has two possible outcomes, and since four coins are tossed, there are 2^4 = 16 possible outcomes, the atoms of the space. So the sample space is Ω = {HHHH, HHHT, HHTH, . . .}, with 16 members, which describes all the information from the four tosses. F1 is the power set of Ω, consisting of all combinations of sets of Ω (don't forget that ∅ is always included in a σ-algebra), hence F1 = σ(Ω). P1 is deduced by first observing that each ω ∈ Ω has P(ω) = 1/16 and then using the additivity of the probability measure.

For any function X : Ω → {−1, 1} that is measurable with respect to F1 there is a set A ∈ F1 such that

X(ω) = 1 if ω ∈ A, −1 if ω ∈ Ac.

Hence E[X] = P1(A) − P1(Ac), so if E[X] = 0 we must have P1(A) = P1(Ac), and since there are 16 atoms, each with probability 1/16, there are (16 choose 8) ways of splitting the atoms so that there are equally many of them in A and Ac. Hence there are (16 choose 8) = 12870 possible functions X : Ω → {−1, 1} such that E[X] = 0.
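The count can be verified by brute force over all sign assignments on the 16 atoms (a quick check added here, not part of the original solution):

```python
from itertools import product
from math import comb

# X = +1 on A and -1 on A^c; with 16 equally likely atoms, E[X] = 0
# exactly when A contains 8 atoms, i.e. when the 16 signs sum to zero.
count = sum(1 for signs in product([1, -1], repeat=16) if sum(signs) == 0)
print(count)        # 12870
print(comb(16, 8))  # 12870
```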


2) The sets

A0 = {TTTT},
A1 = {TTTH, TTHT, THTT, HTTT},
A2 = {TTHH, THHT, HHTT, HTTH, THTH, HTHT},
A3 = {HHHT, HHTH, HTHH, THHH},
A4 = {HHHH}

define a partition of Ω. The σ-algebra F2 is the σ-algebra generated by this partition. Define Pi = P(Ai); then P0 = 1/16, P1 = 4/16, P2 = 6/16, P3 = 4/16 and P4 = 1/16. For any function X : Ω → {−1, 1} that is measurable with respect to F2 there is a set A ∈ F2 such that

X(ω) = 1 if ω ∈ A, −1 if ω ∈ Ac.

Hence E[X] = P2(A) − P2(Ac), so if E[X] = 0 we must have P2(A) = P2(Ac). We may write E[X] = k0P0 + k1P1 + k2P2 + k3P3 + k4P4 = k0/16 + 4k1/16 + 6k2/16 + 4k3/16 + k4/16, which is zero either if ki = (−1)^i or if ki = −(−1)^i. Hence, there are two possible F2-measurable functions X : Ω → {−1, 1} such that E[X] = 0.

3) Using the same notation as in 2), we conclude that, for any given integer n > 0, A0, A1, . . . , An is a partition of Ω, with σ-algebra Fn generated by this partition, and with Pn(Ai) = (n choose i)(1/2)^n for i = 0, 1, . . . , n. And E[X] = ∑_{i=0}^{n} ki (n choose i)(1/2)^n for ki = ±1, where it should be noted that ∑_{i=0}^{n} (n choose i)(1/2)^n = 1. For any function X : Ω → {−1, 1} that is measurable with respect to Fn there is a set A ∈ Fn such that

X(ω) = 1 if ω ∈ A, −1 if ω ∈ Ac.

Hence E[X] = Pn(A) − Pn(Ac), so if E[X] = 0 we must have Pn(A) = Pn(Ac) = 1/2.

For even n, we have

∑_{i odd} (n choose i)(1/2)^n = ∑_{i even} (n choose i)(1/2)^n = 1/2,

so E[X] = 0 if ki = (−1)^i or ki = −(−1)^i. For odd n, we have

∑_{i=0}^{(n−1)/2} (n choose i)(1/2)^n = ∑_{i=(n+1)/2}^{n} (n choose i)(1/2)^n = 1/2,

so E[X] = 0 if k_{n−i} = −k_i for i = 0, 1, . . . , (n−1)/2, and there are 2^{(n+1)/2} ways of choosing the signs of these (n+1)/2 pairs. Hence, there are 2^{(n+1)/2} possible Fn-measurable functions X : Ω → {−1, 1} such that E[X] = 0.
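For small n the counts can be checked exhaustively; the sketch below (an added verification, not part of the original text) confirms 2^{(n+1)/2} zero-mean functions for odd n and two for even n:

```python
from itertools import product
from math import comb

def count_zero_mean(n):
    """Count sign vectors (k_0, ..., k_n) with sum_i k_i * C(n, i) == 0,
    i.e. F_n-measurable X with E[X] = 0."""
    w = [comb(n, i) for i in range(n + 1)]
    return sum(1 for k in product([1, -1], repeat=n + 1)
               if sum(ki * wi for ki, wi in zip(k, w)) == 0)

for n in [1, 3, 5, 7]:
    print(n, count_zero_mean(n), 2 ** ((n + 1) // 2))  # the two counts agree
for n in [2, 4, 6]:
    print(n, count_zero_mean(n))  # two alternating sign patterns each
```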

5.

Consider the probability space (Ω,F,P) where Ω = [0, 1], F is the Borel σ-algebra on Ω and P is the uniform probability measure on (Ω,F). Show thatthe two random variables X(ω) = ω and Y (ω) = 2|ω − 1/2| have the samedistribution, but that P(X = Y ) = 0.

Proof. (For the definition of the distribution function, see Definition 3.4.) Since X is uniform, the distribution function is

P(X ≤ x) = 0 for x ∈ (−∞, 0],
P(X ≤ x) = x for x ∈ [0, 1],
P(X ≤ x) = 1 for x ∈ [1, ∞),

so for Y we get, with x ∈ [0, 1],

P(Y ≤ x) = P(2|X − 1/2| ≤ x) = P(|2X − 1| ≤ x) = (by symmetry) = 2P(0 ≤ 2X − 1 ≤ x) = 2P(1/2 ≤ X ≤ (x + 1)/2)
= 2((x + 1)/2 − 1/2) = x.

Hence P(X ≤ x) = P(Y ≤ x) = x for x ∈ [0, 1]. But since

P(X = Y) = P(X = 2|X − 1/2|)
= P({ω ∈ Ω : X = 2X − 1} ∪ {ω ∈ Ω : X = −2X + 1})
= P({ω ∈ Ω : X = 1} ∪ {ω ∈ Ω : X = 1/3}) = P({1/3, 1}) = 0

(single points carry zero probability under the uniform measure), we conclude that P(X = Y) = 0.
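A deterministic numerical check (an added illustration, not part of the proof): approximating the uniform measure by a fine midpoint grid shows the two distribution functions agree, while X(ω) = Y(ω) holds only on a negligible set.

```python
# Approximate the uniform measure on [0,1] with N midpoint samples.
N = 100_000
omegas = [(i + 0.5) / N for i in range(N)]
X = omegas                              # X(omega) = omega
Y = [2 * abs(w - 0.5) for w in omegas]  # Y(omega) = 2|omega - 1/2|

def cdf(values, x):
    return sum(1 for v in values if v <= x) / len(values)

for x in [0.1, 0.25, 0.5, 0.9]:
    # Both empirical CDFs are (approximately) the uniform CDF x.
    assert abs(cdf(X, x) - x) < 1e-3
    assert abs(cdf(Y, x) - x) < 1e-3

# Yet X(omega) = Y(omega) only at omega = 1/3 and omega = 1.
matches = [w for w, y in zip(X, Y) if abs(w - y) < 1e-9]
print(len(matches) / N)  # essentially 0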

6.

Show that the smallest σ-algebra containing a set A is the intersection of all σ-algebras containing A. Also, show by counterexample that the union of two σ-algebras is not necessarily a σ-algebra.

Proof. Let GA be the collection of all σ-algebras containing A; we want to show that F = ∩_{G∈GA} G = {B ⊆ Ω : B ∈ G for every G ∈ GA} is a σ-algebra. If it is, then it is the smallest σ-algebra containing A, since every σ-algebra G containing A belongs to GA, and therefore F ⊆ G. To check that F is a σ-algebra, we simply use Definition 1.3:

1. ∅ ∈ G for all G ∈ GA, hence ∅ ∈ F.

2. If B ∈ G for all G ∈ GA, then Bc ∈ G for all G ∈ GA, hence Bc ∈ F.

3. If B1, B2, . . . ∈ G for all G ∈ GA, then ∪∞i=1 Bi ∈ G for all G ∈ GA, hence ∪∞i=1 Bi ∈ F,

so F is a σ-algebra. To show that the union of two σ-algebras is not necessarily a σ-algebra, take F1 = {∅, Ω, A, Ac} and F2 = {∅, Ω, B, Bc} where ∅ ≠ A ⊊ B ⊊ Ω. Then F1 ∪ F2 = {∅, Ω, A, Ac, B, Bc} is not a σ-algebra since, e.g., Bc ∪ A ∉ F1 ∪ F2.

7.

Given a probability space (Ω, F, P) and a random variable X, let A be a sub-σ-algebra of F, and consider X̄ = E[X|A].

1. Show that E[(X − X̄)Y] = 0 if Y is an A-measurable random variable.

2. Show that the A-measurable random variable Y that minimizes E[(X − Y)²] is Y = E[X|A].

Proof. 1) Using equality 4 followed by equality 2 in Proposition 4.8 we get

E[(X − X̄)Y] = E[E[(X − X̄)Y | A]] = E[E[X − X̄ | A] Y] = E[(X̄ − X̄)Y] = 0.

2) Using equality 3 followed by equality 2 in Proposition 4.8 we get

E[(X − Y)²] = E[X²] − 2E[XY] + E[Y²]
= E[X²] − 2E[E[XY | A]] + E[Y²]
= E[X²] − 2E[Y E[X|A]] + E[Y²]
= E[X²] − E[E[X|A]²] + E[E[X|A]²] − 2E[Y E[X|A]] + E[Y²]
= E[X²] − E[E[X|A]²] + E[(Y − E[X|A])²],

where the last term is nonnegative; hence E[(X − Y)²] is minimized when Y = E[X|A].
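Both properties can be illustrated on a finite probability space (an added sketch; the specific numbers are arbitrary choices): the conditional expectation is the block-average, it is orthogonal to every A-measurable Y, and perturbing the block values only increases the mean squared error.

```python
# Omega = {0,1,2,3}, uniform probability, A generated by blocks {0,1} and {2,3}.
X = [1.0, 3.0, 2.0, 6.0]
blocks = [[0, 1], [2, 3]]

# E[X|A] is the block-average of X, constant on each block.
Xbar = [0.0] * 4
for b in blocks:
    avg = sum(X[w] for w in b) / len(b)
    for w in b:
        Xbar[w] = avg

def expect(Z):
    return sum(Z) / len(Z)

# (1) E[(X - Xbar) Y] = 0 for an A-measurable Y (constant on blocks).
Y = [5.0, 5.0, -2.0, -2.0]
assert abs(expect([(x - xb) * y for x, xb, y in zip(X, Xbar, Y)])) < 1e-12

# (2) Xbar minimizes E[(X - Y)^2] over A-measurable Y: perturb block values.
def mse(c1, c2):
    Yc = [c1, c1, c2, c2]
    return expect([(x - y) ** 2 for x, y in zip(X, Yc)])

best = mse(Xbar[0], Xbar[2])
for d1 in [-0.5, 0.5]:
    for d2 in [-0.5, 0.5]:
        assert mse(Xbar[0] + d1, Xbar[2] + d2) > best
print(Xbar, best)
```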

8.

Let X and Y be two integrable random variables defined on the same probability space (Ω, F, P). Let A be a sub-σ-algebra such that X is A-measurable.

1. Show that E[Y |A] = X implies that E[Y |X] = X.

2. Show by counterexample that E[Y|X] = X does not necessarily imply that E[Y|A] = X.

Proof. (For the definition of conditional expectation see Definition 4.6; for properties of the conditional expectation, see Proposition 4.8.)

1) Since X is A-measurable, σ(X) ⊆ A; hence, using equality 4 in Proposition 4.8, we get

E[Y|X] := E[Y|σ(X)] = E[E[Y|A] | X] = E[X|X] = X.

2) Let X and Z be independent and integrable random variables, assume that E[Z] = 0 and P(Z = 0) < 1, and define Y = X + Z. Let A = σ(X, Z); then

E[Y|σ(X)] = E[X + Z|σ(X)] = X + 0 = X,

while

E[Y|A] = E[X + Z|σ(X, Z)] = X + Z ≠ X.


Series 2: review of probability theory

Exercise 1
For each given p, let X have a binomial distribution with parameters p and N. Suppose N is itself binomially distributed with parameters q and M, M > N.
(a) Show analytically that X has a binomial distribution with parameters pq and M.
(b) Give a probabilistic argument for this result.

Exercise 2
Using the central limit theorem for suitable Poisson random variables, prove that

lim_{n→∞} e^{−n} ∑_{k=0}^{n} n^k/k! = 1/2.

Exercise 3
Let X and Y be independent, identically distributed, positive random variables with continuous density function f(x) satisfying f(x) > 0 for x > 0. Assume, further, that U = X − Y and V = min(X, Y) are independent random variables. Prove that

f(x) = λe^{−λx} for x > 0, and 0 elsewhere,

for some λ > 0.
Hint: Show first that the joint density function of U and V is

fU,V(u, v) = f(v) f(v + |u|).

Next, equate this with the product of the marginal densities for U, V.

Exercise 4
Let X be a nonnegative integer-valued random variable with probability generating function f(s) = ∑_{n=0}^{∞} an s^n. After observing X, conduct X binomial trials with probability p of success. Let Y denote the resulting number of successes.
(a) Determine the probability generating function of Y.
(b) Determine the probability generating function of X given that Y = X.
(c) Suppose that for every p (0 < p < 1) the probability generating functions of (a) and (b) coincide. Prove that the distribution of X is Poisson, i.e., f(s) = e^{λ(s−1)} for some λ > 0.


Series 2: review of probability theory Solutions

Exercise 1
(a) Since the distribution of N is binomial(M, q), we have

fN(s) = E[s^N] = (sq + 1 − q)^M.

Since the conditional distribution of X given N = n is binomial(n, p), we have

fX|N=n(s) = E[s^X | N = n] = (sp + 1 − p)^n.

We can compute the generating function of X, say fX(s), by conditioning on N. We obtain

fX(s) = E[s^X] = ∑_{n=0}^{M} E[s^X | N = n] P[N = n]
= ∑_{n=0}^{M} (sp + 1 − p)^n P[N = n] = fN(sp + 1 − p)
= ((sp + 1 − p)q + 1 − q)^M = (s(pq) + 1 − pq)^M.

Here we recognize the generating function of the binomial(M, pq) distribution.

Here is another approach. For k ∈ {0, 1, 2, . . . , M}, we get

P[X = k] = ∑_{n=0}^{M} P[X = k | N = n] P[N = n]
= ∑_{n=k}^{M} (n choose k) p^k (1 − p)^{n−k} (M choose n) q^n (1 − q)^{M−n}
= [M!/(k!(M − k)!)] (pq)^k ∑_{n=k}^{M} [(M − k)!/((n − k)!(M − n)!)] ((1 − p)q)^{n−k} (1 − q)^{M−n}
= (M choose k) (pq)^k ∑_{j=0}^{M−k} (M − k choose j) ((1 − p)q)^j (1 − q)^{M−k−j}
= (M choose k) (pq)^k ((1 − p)q + (1 − q))^{M−k} = (M choose k) (pq)^k (1 − pq)^{M−k}.

This is the binomial(M, pq) distribution.

(b) Imagine M kids. Each one of them will toss a silver coin and a gold coin. With the silver coin the probability of heads is q; with the gold coin, it is p. Let N be the number of kids who get heads with the silver coin and let X be the number of kids who get heads on both tosses. Clearly the distribution of N is binomial(M, q), the conditional distribution of X given N = n is binomial(n, p), and the distribution of X is binomial(M, pq).


Exercise 2
Let X1, X2, X3, . . . be independent random variables, each with distribution Poisson(1), and let Sn = ∑_{k=1}^{n} Xk. Since the mean and the variance of the Poisson(1) distribution are both equal to 1, the central limit theorem gives us

lim_{n→∞} P[(Sn − n)/√n ≤ x] = ∫_{−∞}^{x} (1/√(2π)) e^{−u²/2} du

for every x ∈ R. With x = 0, this gives us

lim_{n→∞} P[(Sn − n)/√n ≤ 0] = 1/2.

This last equation can be written as

lim_{n→∞} P[Sn ≤ n] = 1/2.

In view of the lemma below, the distribution of Sn is Poisson(n). Therefore, the last equation can be written as

lim_{n→∞} e^{−n} ∑_{k=0}^{n} n^k/k! = 1/2.

Lemma. If U ∼ Poisson(α) and V ∼ Poisson(β) and if U and V are independent, then U + V ∼ Poisson(α + β).

The elementary proof is left as an exercise.
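The slow convergence can be observed numerically; the sketch below (an added check) evaluates the Poisson(n) CDF at n in log-space to avoid overflow of n^k and k!:

```python
import math

def poisson_cdf_at_n(n):
    """P[S_n <= n] for S_n ~ Poisson(n), i.e. e^{-n} sum_{k<=n} n^k / k!,
    with each term computed in log-space via lgamma to avoid overflow."""
    return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

for n in [10, 100, 1000, 10000]:
    print(n, poisson_cdf_at_n(n))  # decreases toward 1/2
```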

Exercise 3
Let fX,Y(x, y) denote the joint density of X and Y. Let fU,V(u, v) denote the joint density of U and V. For the moment we do not make any independence assumptions.
Fix u > 0 and v > 0. The following computation will be easy to follow if you look at the first quadrant of R² and visualize the set of points (x, y) for which min(x, y) ≤ v and x − y ≤ u.

fU,V(u, v) = ∂²/∂u∂v P[(U ≤ u) ∩ (V ≤ v)]
= ∂²/∂u∂v P[(X − Y ≤ u) ∩ (min(X, Y) ≤ v)]
= ∂²/∂u∂v (P[(X, Y) ∈ A] + P[(X, Y) ∈ B]),

where A = {(x, y) ∈ R² : v < y < ∞ and 0 < x < v} and B = {(x, y) ∈ R² : 0 < y ≤ v and 0 < x ≤ u + y}. Since A depends only on v, we have

∂²/∂u∂v P[(X, Y) ∈ A] = 0.

Thus we have

fU,V(u, v) = ∂²/∂u∂v P[(X, Y) ∈ B]
= ∂²/∂u∂v ∫_0^v ∫_0^{u+y} fX,Y(x, y) dx dy
= ∂/∂u ∫_0^{u+v} fX,Y(x, v) dx
= fX,Y(u + v, v).


A similar calculation gives us fU,V(u, v) = fX,Y(v − u, v) for the case where u < 0 and v > 0. (You should do the computation; be careful with the domain of integration in the (x, y) plane.) In summary, we have shown that

(1) fU,V(u, v) = fX,Y(v + |u|, v) if u ∈ R and v > 0, and 0 otherwise.

If we assume that X and Y are independent and identically distributed with density f, then equation (1) can be written as

fU,V(u, v) = f(v + |u|) f(v) if u ∈ R and v > 0, and 0 otherwise.

If we assume furthermore that U and V are independent, i.e., fU,V(u, v) = fU(u) fV(v) for all v > 0 and u ∈ R, then the above result gives us

f(v) f(v + |u|) = fU(u) fV(v), v > 0, −∞ < u < ∞.

In particular we have

(2) f(u + v) = g(u) h(v), u > 0, v > 0,

with g(u) = fU(u) and h(v) = fV(v)/f(v). From (2) we get, for all x > 0 and y > 0,

P[X > x + y]/P[X > x] = ∫_y^∞ f(x + v) dv / ∫_0^∞ f(x + v) dv = g(x) ∫_y^∞ h(v) dv / (g(x) ∫_0^∞ h(v) dv) = ∫_y^∞ h(v) dv / ∫_0^∞ h(v) dv.

Thus the ratio P[X > x + y]/P[X > x] does not depend on x. Taking the limit as x → 0, we see that this ratio is equal to P[X > y]. Thus we have

(3) P[X > x + y] = P[X > x] P[X > y] for all x > 0, y > 0.

The lemma below allows us to conclude that X ∼ exponential(λ).

Lemma. Let X be a nonnegative random variable.

(a) If X ∼ exponential(λ) for some λ > 0, then equation (3) is true.

(b) If equation (3) is true, then X ∼ exponential(λ) (for some λ > 0).

The proof is left to the reader.

Exercise 4
(a) The distribution of Y given X = n is binomial(n, p). Thus,

E(s^Y | X = n) = (sp + 1 − p)^n.

Using this, we have

fY(s) = E(s^Y) = ∑_{n=0}^{∞} E(s^Y | X = n) P(X = n) = ∑_{n=0}^{∞} (sp + 1 − p)^n P(X = n) = f(sp + 1 − p).

(b) We have

P(X = Y) = ∑_{n=0}^{∞} P(X = n, Y = n) = ∑_{n=0}^{∞} p^n P(X = n) = f(p)

and then

P(X = n | X = Y) = P(X = n, X = Y)/P(X = Y) = P(X = n) P(Y = n | X = n)/f(p) = p^n P(X = n)/f(p).


To conclude,

fX|X=Y(s) = ∑_{n=0}^{∞} s^n P(X = n | X = Y) = (1/f(p)) ∑_{n=0}^{∞} s^n p^n P(X = n) = f(sp)/f(p).

(c) From parts (a) and (b) we have

f(1 − p + ps) = f(ps)/f(p), 0 < p < 1 and −1 < s < 1.

If we fix p and take the derivative w.r.t. s, we obtain

f′(1 − p + ps) p = f′(ps) p/f(p), 0 < p < 1 and −1 < s < 1.

Now we divide by p on both sides and evaluate at s = 1. We obtain

f′(1) = f′(p)/f(p), 0 < p < 1,

i.e.

(4) E[X] = f′(p)/f(p), 0 < p < 1.

When we solve equation (4), subject to the boundary condition f(1) = 1, we get f(s) = e^{λ(s−1)} (where λ = E[X]). We conclude that the distribution of X is Poisson.


Series 3: Markov chains

Exercise 1
Determine the classes and the periodicity of the various states for a Markov chain with transition probability matrix

(a)
0    0    1    0
1    0    0    0
1/2  1/2  0    0
1/3  1/3  1/3  0

(b)
0    1    0    0
0    0    0    1
0    1    0    0
1/3  0    2/3  0

Exercise 2
Consider repeated independent trials with two outcomes, S (success) or F (failure), with probabilities p and q, respectively. Determine the mean number of trials required for the first occurrence of the event SF (i.e., success followed by failure). Do the same for the events SSF and SFS.

Exercise 3
Let a Markov chain contain r states. Prove the following:
(a) If a state k can be reached from j, then it can be reached in r − 1 steps or less.
(b) If j is a recurrent state, there exists α (0 < α < 1) such that for n > r the probability that the first return to state j occurs after n transitions is ≤ α^n.

Exercise 4
In this exercise, consider a random walk on the integers such that Pi,i+1 = p, Pi,i−1 = q for all integers i (0 < p < 1, p + q = 1).
(a) Determine P^n_{0,0}.
(b) For any real a and natural n, define

(a choose n) = a(a − 1) · · · (a − n + 1)/n!.

Prove that

(2n choose n) = (−1)^n (−1/2 choose n) 2^{2n}.

(c) Let P be the generating function associated to the sequence un = P^n_{0,0}, i.e., P(x) = ∑_{n=0}^{∞} un x^n. Show that P(x) = (1 − 4pqx²)^{−1/2}.
(d) Prove that the generating function of the recurrence time from state 0 to state 0 is F(x) = 1 − √(1 − 4pqx²).
(e) What is the probability of eventual return to the origin?


Exercise 5
A player has k coins. At each round, independently, he gains one coin with probability p or loses one coin with probability 1 − p = q. He stops playing if he reaches 0 (he loses) or N (he wins). What is the probability that he wins?

Exercise 6
Suppose X1, X2, . . . are independent with P(Xk = +1) = p, P(Xk = −1) = q = 1 − p, where p ≥ q. With S0 = 0, set Sn = X1 + · · · + Xn, Mn = max{Sk : 0 ≤ k ≤ n} and Yn = Mn − Sn. If T(a) = min{n : Sn = a}, show that

P(max_{0≤k≤T(a)} Yk < y) = (y/(1 + y))^a if p = q = 1/2;
= [(p/q − (p/q)^{y+1})/(1 − (p/q)^{y+1})]^a if p ≠ q.

Hint: The bivariate process (Mn, Yn) is a random walk on the positive lattice. What is the probability that this random walk, started at (0, 0), leaves the rectangle in the picture from the top?


Series 3: Markov chains Solutions

Exercise 1
(a) We have 1 → 3, 3 → 2 and 2 → 1, so 1, 2 and 3 are in the same class. Also, for states i = 1, 2, 3 we have P(i, 4) = 0, so the class {1, 2, 3} is recurrent. Since P(4, 4) = 0, 4 itself forms a transient class. The classes are thus {1, 2, 3} and {4}. Periodicity is only defined for recurrent states, so it suffices to determine the period of 1, 2 and 3. Since they are in the same class, they have the same period, so it suffices to determine the period of 1. We have P²(1, 1) ≥ P(1, 3)P(3, 1) > 0 and P³(1, 1) = P(1, 3)P(3, 2)P(2, 1) > 0, so the period of 1 must divide 2 and 3, hence it is equal to 1.
(b) Since 1 → 2, 2 → 4, 4 → 3, 4 → 1 and 3 → 2, there is a single recurrent class {1, 2, 3, 4}. Let us determine the period of state 4. We have P(4, 4) = P²(4, 4) = 0. Also, P³(4, 4) = 1, so P^{3n}(4, 4) = 1 for all n. Fix N ∈ N, N > 3 and N not divisible by 3; we can then write N = 3n + k for some n ∈ N and k = 1 or 2. Then,

P^N(4, 4) = P^{3n+k}(4, 4) = ∑_{i=1}^{4} P^{3n}(4, i) P^k(i, 4) = P^{3n}(4, 4) P^k(4, 4) = 0.

We thus have P^N(4, 4) > 0 if and only if N is a multiple of 3, so the period of 4 (and every other state) is 3.

Exercise 2
Let ξ1, ξ2, . . . be a sequence of letters chosen independently at random so that for each i, P(ξi = S) = p = 1 − q = 1 − P(ξi = F).
• Time until SF. Let X0 = ∅∅, X1 = ∅ξ1 and Xn = ξn−1ξn for n ≥ 2, where in each case we are simply concatenating the symbols. (Xn) is then a Markov chain on the state space {∅∅, ∅S, ∅F, SS, SF, FS, FF}. Define, for n ≥ 0,

an = P∅∅(min{k : Xk = SF} = n), bn = P∅∅(Xn = SF), cn = PSF(Xn = SF);

F∅∅,SF(s) = ∑_{n=0}^{∞} an s^n, P∅∅,SF(s) = ∑_{n=0}^{∞} bn s^n, PSF,SF(s) = ∑_{n=0}^{∞} cn s^n,

where s ∈ C with |s| < 1. We have

b0 = b1 = 0, bn = pq for all n ≥ 2;
c0 = 1, c1 = 0, cn = pq for all n ≥ 2,

so

P∅∅,SF(s) = ∑_{n=2}^{∞} pq s^n = pqs²/(1 − s), PSF,SF(s) = 1 + ∑_{n=2}^{∞} pq s^n = 1 + pqs²/(1 − s).

Using the formula Fx,y(s) Py,y(s) = Px,y(s) (which applies when x ≠ y), we have

F∅∅,SF(s) = [pqs²/(1 − s)] / [1 + pqs²/(1 − s)] = pqs²/(1 − s + pqs²).

Then,

F′∅∅,SF(s) = [2pqs(1 − s + pqs²) − pqs²(−1 + 2pqs)] / (1 − s + pqs²)²


and F′∅∅,SF(1) = 1/pq. We are now done:

E(min{n : Xn = SF}) = F′∅∅,SF(1) = 1/pq.

• Time until SSF. Now put Y0 = ∅∅∅, Y1 = ∅∅ξ1, Y2 = ∅ξ1ξ2 and Yn = ξn−2ξn−1ξn for n ≥ 3. Define

an = P∅∅∅(min{k : Yk = SSF} = n), bn = P∅∅∅(Yn = SSF), cn = PSSF(Yn = SSF);

F∅∅∅,SSF(s) = ∑_{n=0}^{∞} an s^n, P∅∅∅,SSF(s) = ∑_{n=0}^{∞} bn s^n, PSSF,SSF(s) = ∑_{n=0}^{∞} cn s^n.

We have

b0 = b1 = b2 = 0, bn = p²q for all n ≥ 3;
c0 = 1, c1 = c2 = 0, cn = p²q for all n ≥ 3,

so that

P∅∅∅,SSF(s) = ∑_{n=3}^{∞} p²q s^n = p²qs³/(1 − s), PSSF,SSF(s) = 1 + ∑_{n=3}^{∞} p²q s^n = 1 + p²qs³/(1 − s)

and

F∅∅∅,SSF(s) = p²qs³/(1 − s + p²qs³).

We thus have

F′∅∅∅,SSF(s) = [3p²qs²(1 − s + p²qs³) − p²qs³(−1 + 3p²qs²)] / (1 − s + p²qs³)²

and F′∅∅∅,SSF(1) = 1/p²q = E(min{n : Yn = SSF}).

• Time until SFS. Similarly to before, put

an = P∅∅∅(min{k : Yk = SFS} = n), bn = P∅∅∅(Yn = SFS), cn = PSFS(Yn = SFS);

F∅∅∅,SFS(s) = ∑_{n=0}^{∞} an s^n, P∅∅∅,SFS(s) = ∑_{n=0}^{∞} bn s^n, PSFS,SFS(s) = ∑_{n=0}^{∞} cn s^n.

We now have

b0 = b1 = b2 = 0, bn = p²q for all n ≥ 3;
c0 = 1, c1 = 0, c2 = pq, cn = p²q for all n ≥ 3,

so that

P∅∅∅,SFS(s) = ∑_{n=3}^{∞} p²q s^n = p²qs³/(1 − s), PSFS,SFS(s) = 1 + pqs² + ∑_{n=3}^{∞} p²q s^n = 1 + pqs² + p²qs³/(1 − s)

and

F∅∅∅,SFS(s) = [p²qs³/(1 − s)] / [1 + pqs² + p²qs³/(1 − s)] = p²qs³/(1 − s + pqs² + (p²q − pq)s³).

Finally,

F′∅∅∅,SFS(s) = [3p²qs²(1 − s + pqs² + (p²q − pq)s³) − p²qs³(−1 + 2pqs + 3(p²q − pq)s²)] / (1 − s + pqs² + (p²q − pq)s³)²;

F′∅∅∅,SFS(1) = [3p²q(pq + p²q − pq) − p²q(−1 + 2pq + 3p²q − 3pq)] / (pq + p²q − pq)² = 1/p²q + 1/p,

so E(min{n : Yn = SFS}) = 1/p²q + 1/p.


Exercise 3
(a) We assume that k ≠ j. The fact that k can be reached from j means that there exists a sequence of states x0, x1, . . . , xN such that x0 = j, xN = k and P(xi, xi+1) > 0 for all i. Define

Γ = {i ∈ {1, . . . , N − 1} : ∃ j1 ∈ {1, . . . , i}, j2 ∈ {i + 1, . . . , N} : x_{j1} = x_{j2}}

(the set of indices i such that xi lies between two repeated entries in the sequence x1, . . . , xN). By enumerating {0, 1, . . . , N}\Γ = {a0, . . . , aM} with a0 = 0 < a1 < . . . < aM = N, it is easy to see that P(x_{ai}, x_{ai+1}) > 0 for each i and that x_{a0}, x_{a1}, x_{a2}, . . . , x_{aM} has no repetitions. This implies that this sequence has at most r elements, so the path x_{a0} → x_{a1} → . . . → x_{aM−1} → x_{aM} shows that k can be reached from j in at most r − 1 steps.
(b) Define τj = min{n ≥ 1 : Xn = j}. Assume that j → k. Since j is recurrent, we must also have k → j, and by part (a), j can be reached from k in at most r − 1 steps. This implies that

Pk(τj ≤ r − 1) > 0,

and hence that Pk(τj > r) < 1.

Since the state space is finite, we thus conclude that the number

θ = max{Pk(τj > r) : k can be reached from j}

is smaller than 1. Now, fix n ∈ N. We have ⌊n/r⌋r − 1 < n, so the event {τj > n} is contained in {τj > ⌊n/r⌋r − 1}. Using this, we have

Pj(τj > n) ≤ Pj(τj > ⌊n/r⌋r − 1) = Pj(j ∉ {X1, . . . , X_{⌊n/r⌋r−1}})
= ∑_{k: j→k} Pj(j ∉ {X1, . . . , X_{(⌊n/r⌋−1)r−1}}, X_{(⌊n/r⌋−1)r−1} = k) Pk(τj > r)
≤ ∑_{k: j→k} Pj(j ∉ {X1, . . . , X_{(⌊n/r⌋−1)r−1}}, X_{(⌊n/r⌋−1)r−1} = k) · θ
= θ Pj(τj > (⌊n/r⌋ − 1)r − 1).

Proceeding inductively, we get

Pj(τj > n) ≤ θ^{⌊n/r⌋} for all n > r.

We can now choose α ∈ (0, 1) such that, for any n > r,

θ^{⌊n/r⌋} ≤ α^n,

completing the proof.

Exercise 4
(a) Let ξ1, ξ2, . . . denote a sequence of independent and identically distributed random variables with P(ξi = −1) = q, P(ξi = 1) = p for each i. Putting X0 ≡ 0 and Xn = ∑_{i=1}^{n} ξi, we see that Xn is a Markov chain with transitions given by P. Since for any n ≥ 1 we have Xn+1 − Xn ∈ {−1, 1}, Xn has the same parity as n. As a consequence, P^n(0, 0) = 0 when n is odd. If n is even, then

P(Xn = 0) = P(∑_{i=1}^{n} ξi = 0) = ∑_{(y1,...,yn) ∈ {−1,1}^n : ∑ yi = 0} P(ξ1 = y1, . . . , ξn = yn) = (n choose n/2) p^{n/2} q^{n/2}.

(b)

(−1/2 choose n) = (1/n!) · (−1/2)(−3/2) · · · (−(2n − 1)/2)
= (−1)^n · (1/n!) · (1/2^n) · 1 · 3 · 5 · · · (2n − 1)
= (−1)^n · (1/n!) · (1/2^n) · [1 · 2 · 3 · 4 · · · 2n] / [2 · 4 · 6 · · · 2n]
= (−1)^n · (1/n!) · (1/2^n) · (2n)! / (2^n · n!).


Then,

(−1)^n (−1/2 choose n) 2^{2n} = (−1)^{2n} · (1/n!) · (2^{2n}/2^{2n}) · (2n)!/n! = (2n choose n).

(c) By part (a),

P(x) = ∑_{n=0}^{∞} u_{2n} x^{2n} = ∑_{n=0}^{∞} (2n choose n) (pqx²)^n.

Using (b), we obtain

P(x) = ∑_{n=0}^{∞} (−1/2 choose n) (−4pqx²)^n = (1 − 4pqx²)^{−1/2}.

(d) Using the formula F(x)P(x) = P(x) − 1 and part (c), we immediately get F(x) = 1 − √(1 − 4pqx²).

(e) Writing F(x) = ∑_{k=0}^{∞} fk x^k, we have

P(∃n : Xn = 0) = ∑_{k=0}^{∞} P(min{n : Xn = 0} = k) = ∑_{k=0}^{∞} fk = F(1) = 1 − √(1 − 4pq).
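Part (e) can be cross-checked numerically (an added sketch; p = 0.6 is an arbitrary choice): recover the first-return probabilities f_{2k} from u_n = (2n choose n)(pq)^n via the renewal relation u_n = ∑_{k=1}^{n} f_k u_{n−k}, and compare the partial sum with 1 − √(1 − 4pq).

```python
from math import comb, sqrt

p, q = 0.6, 0.4
N = 400
# u[n] = P^{2n}(0, 0) = C(2n, n) (pq)^n; u[0] = 1.
u = [comb(2 * n, n) * (p * q) ** n for n in range(N + 1)]

# First-return probabilities at time 2n from u_n = sum_{k=1}^{n} f_k u_{n-k}.
f = [0.0] * (N + 1)
for n in range(1, N + 1):
    f[n] = u[n] - sum(f[k] * u[n - k] for k in range(1, n))

print(sum(f))                    # ~ 0.8
print(1 - sqrt(1 - 4 * p * q))   # ~ 0.8, i.e. 1 - sqrt(0.04)
```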

Exercise 5
Let (Xn)n∈N be the Markov chain representing the fortune of the player, and let τ be the time when the game ends, that is,

τ = inf{n ≥ 0 : Xn = 0 or Xn = N}.

Let uk be the probability that, starting from k coins, the player wins, that is,

uk = Pk[Xτ = N],

where Pk = P[· | X0 = k]. For 0 < k < N, we have

Pk[Xτ = N] = Pk[X1 = k − 1, Xτ = N] + Pk[X1 = k + 1, Xτ = N].

Note that this is equal to

Pk[X1 = k − 1] Pk[Xτ = N | X1 = k − 1] + Pk[X1 = k + 1] Pk[Xτ = N | X1 = k + 1]

= q Pk−1[Xτ = N] + p Pk+1[Xτ = N],

where we used the Markov property in the last step. We have proved that

uk = quk−1 + puk+1.

Moreover, it is clear that u0 = 0 and uN = 1. We solve this recurrence equation as in Karlin-Taylor, (3.6) in Chapter 3. For p = 1/2, we obtain

uk = k/N,

and otherwise,

uk = (1 − (q/p)^k) / (1 − (q/p)^N).
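A direct check (an added sketch, not part of the original solution) that the stated formulas satisfy the recurrence uk = q uk−1 + p uk+1 with the boundary conditions:

```python
# Verify that u_k = (1 - (q/p)^k) / (1 - (q/p)^N) satisfies
# u_k = q u_{k-1} + p u_{k+1} with u_0 = 0, u_N = 1.
p, q, N = 0.6, 0.4, 10
r = q / p
u = [(1 - r ** k) / (1 - r ** N) for k in range(N + 1)]

assert u[0] == 0 and abs(u[N] - 1) < 1e-12
for k in range(1, N):
    assert abs(u[k] - (q * u[k - 1] + p * u[k + 1])) < 1e-12

# For p = 1/2 the recurrence is solved by u_k = k / N instead.
u_half = [k / N for k in range(N + 1)]
for k in range(1, N):
    assert abs(u_half[k] - 0.5 * (u_half[k - 1] + u_half[k + 1])) < 1e-12

print("gambler's ruin formulas check out")
```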

Exercise 6
From the fact that (Sn) is a Markov chain, it follows that ((Yn, Mn)) is also a Markov chain. To justify this properly, fix a sequence (y0, m0) = (0, 0), (y1, m1), . . . , (yn, mn); noticing the equality

{(Y0, M0) = (0, 0), (Y1, M1) = (y1, m1), . . . , (Yn, Mn) = (yn, mn)} = {S0 = 0, S1 = m1 − y1, . . . , Sn = mn − yn},


we have

P((Yn+1, Mn+1) = (y, m) | (Y0, M0) = (0, 0), . . . , (Yn, Mn) = (yn, mn))
= P(max_{0≤i≤n+1} Si = m, (max_{0≤i≤n+1} Si) − Sn+1 = y | S0 = 0, S1 = m1 − y1, . . . , Sn = mn − yn)
= p if y = yn = 0, m = mn + 1, or if yn > 0, y = yn − 1, m = mn;
= q if y = yn + 1, m = mn;
= 0 in any other case.

In words, (Yn, Mn) takes values in the positive quadrant of Z² (see the picture in the statement of the exercise) and moves according to the rules:

• if y > 0, we jump to the right with probability q and to the left with probability p;

• if y = 0, we jump to the right with probability q and up with probability p.

Notice that we never jump down, and we can only jump up on the vertical axis. For a > 0, we have T(a) = min{n : Sn = a} = min{n : Mn = a}, so

{max_{0≤k≤T(a)} Yk < y} = {(Yn, Mn) reaches (0, a) before reaching {y} × Z+},

so, using the strong Markov property,

P(max_{0≤k≤T(a)} Yk < y) = ∏_{i=0}^{a−1} P((Yn, Mn) reaches (0, i + 1) before reaching (y, i) | (Y0, M0) = (0, i))

= P((Yn, Mn) reaches (0, a) before reaching (y, a − 1) | (Y0, M0) = (0, a − 1))^a.

The probability in the above expression is the probability that the chain in the picture reaches the upper point (0, a) before reaching the rightmost point (y, a − 1). From the gambler's ruin problem, we see that this is equal to

y/(1 + y) if p = q = 1/2;
[p/q − (p/q)^{y+1}] / [1 − (p/q)^{y+1}] if p ≠ q,

finishing the proof.


Series 4: recurrence, transition matrices

Exercise 1
Let P be the transition matrix of a Markov chain. If j is a transient state, prove that for all i,

∑_{n=1}^{+∞} P^n_{i,j} < +∞.

Exercise 2
Consider a sequence of Bernoulli trials X1, X2, . . ., where Xn = 1 or 0. Assume that there exists α > 0 such that, for any n,

P[Xn = 1 | X1, . . . , Xn−1] ≥ α.

Show that
(1) P[Xn = 1 for some n] = 1,
(2) P[Xn = 1 infinitely often] = 1.

Exercise 3
Player I, with fortune a, and player II, with fortune b, play together (a > 10, b > 10). At each round, player I has probability p of winning one unit from player II; otherwise he gives one unit to player II. What is the probability that player I will achieve a fortune a + b − 3 before his fortune dwindles to 5?

Exercise 4
A car can be in one of three states: working (state 1), being repaired (state 2), or definitely out of order (state 3). If the car is working on some day, then the probability it will be working the day after is 0.995, and the probability it will need repair is 0.005. If it is being repaired, then the probability it will be working the day after is 0.9, the probability it will still need repair is 0.05, and the probability it will be definitely out of order is 0.05.
(1) Write the transition matrix for this Markov chain.
(2) Let us define the reduced matrix

R = [0.995  0.005]
    [0.9    0.05 ]

Show that Q = (I − R)^{−1} is well defined, and that Qij is the expected total time spent at state j starting from state i. (Hint: study the series ∑_{n=0}^{+∞} R^n. To show that Q and this series are well defined, you may study the spectrum of the transpose of R.)
(3) Starting from a car that is working, what is the expected time before it gets definitely out of order?
(4) How can the result be extended to more general Markov chains?
(5) Use this method to solve exercise 2 of the previous series.
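For parts (2) and (3), Q can be computed explicitly in this 2 × 2 case (an added sketch using the standard closed-form inverse of a 2 × 2 matrix, no linear-algebra library assumed):

```python
R = [[0.995, 0.005],
     [0.9,   0.05]]

# I - R and its 2x2 inverse Q = (I - R)^{-1}.
a, b = 1 - R[0][0], -R[0][1]
c, d = -R[1][0], 1 - R[1][1]
det = a * d - b * c
Q = [[d / det, -b / det],
     [-c / det, a / det]]

# Q[0][0]: expected days working, Q[0][1]: expected days in repair,
# both starting from a working car; their sum is the expected time
# before the car is definitely out of order.
print(Q[0][0], Q[0][1])   # ~ 3800.0 and 20.0
print(Q[0][0] + Q[0][1])  # ~ 3820.0 days
```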


Series 4: recurrence, transition matrices Solutions

Exercise 1
Let us write (Xn) for the Markov chain with transition matrix P, Pi for P[· | X0 = i], and

τi = inf{n ≥ 1 : Xn = i}.

Let F^n_{ij} = Pi[τj = n], and let Fij(s) be its generating function, Fij(s) = ∑_{n=0}^{+∞} F^n_{ij} s^n. Let also Pij(s) = ∑_{n=0}^{+∞} P^n_{ij} s^n. One can show that

Pjj(s) = 1/(1 − Fjj(s)),

see Karlin-Taylor, Eq. (5.8) of Chapter 2 for a proof. By definition, if the state j is transient, then

Pj[τj < +∞] < 1.

This amounts to saying that

∑_{n=0}^{+∞} F^n_{jj} < 1,

and thus

lim_{s→1} Pjj(s) < +∞.

We have thus proved that

∑_{n=0}^{+∞} P^n_{jj} < +∞.

For i ≠ j, we have Pij(s) = Fij(s) Pjj(s) (see Karlin-Taylor, Eq. (5.10) of Chapter 2), and thus we also have

lim_{s→1} Pij(s) < +∞,

or equivalently,

∑_{n=0}^{+∞} P^n_{ij} < +∞.

Exercise 2
(1) Note that

P[X1 = · · · = Xn = 0] = P[Xn = 0 | Xn−1 = · · · = X1 = 0] P[Xn−1 = · · · = X1 = 0]
≤ (1 − α) P[Xn−1 = · · · = X1 = 0].

By induction, we thus get

P[X1 = · · · = Xn = 0] ≤ (1 − α)^n,


and as a consequence,

P[∀n ∈ {1, 2, . . .}, Xn = 0] = 0.

(2) If it is not true that Xn = 1 infinitely often, then there exists N such that for all n ≥ N, Xn = 0. We have

P[∃N : ∀n ≥ N, Xn = 0] ≤ ∑_{N=0}^{+∞} P[∀n ≥ N, Xn = 0],

and we proceed as in (1) to show that each summand is equal to 0.

Exercise 3
Let (X_n)_{n≥0} be a Markov chain on Z such that, for each n, the transitions n → n + 1 and n → n − 1 occur with probability p and 1 − p, respectively. We want to find

P(X_n reaches a + b − 3 before reaching 5 | X_0 = a),

which is the same as

P(X_n reaches a + b − 8 before reaching 0 | X_0 = a − 5).

By Gambler's ruin, this is equal to

((q/p)^{a−5} − 1) / ((q/p)^{a+b−8} − 1)   if p ≠ q;
(a − 5)/(a + b − 8)                        if p = q = 1/2.

Exercise 4
(1) The transition matrix is

P =
  0.995  0.005  0
  0.9    0.05   0.05
  0      0      1

(2) Remark: for the purpose of this question, computing explicitly the eigenvalues of this 2 × 2 matrix and checking that their moduli are strictly smaller than 1 would be fine. Here, we study the spectrum using arguments that are easier to extend to more general situations.
Note that for any i, one has

Σ_j R_{ij} ≤ 1.

Hence, for any x = (x_1, x_2) ∈ C²,

(1)  ‖R^T x‖_1 = Σ_j |Σ_i R_{ij} x_i| ≤ Σ_j Σ_i R_{ij} |x_i| ≤ ‖x‖_1.

As a consequence, the spectrum of R^T (which is the same as that of R) is contained in {λ ∈ C : |λ| ≤ 1}. Moreover, since

Σ_j R_{2j} < 1,

the inequality in (1) is strict if x_2 ≠ 0.
Assume that there exists λ ∈ C of modulus 1, and x ∈ C² a non-zero vector, such that R^T x = λx. Then it is easy to see that x_2 ≠ 0, and thus ‖R^T x‖_1 < ‖x‖_1, a contradiction with the assumption that λ has modulus one. We have proved that the eigenvalues of R^T, which are also those of R, have modulus strictly smaller than 1.

This clearly implies that Q and the series

Σ_{n=0}^{+∞} R^n

are well defined. Now, observe that

(I − R) Σ_{n=0}^{+∞} R^n = Σ_{n=0}^{+∞} R^n − Σ_{n=1}^{+∞} R^n = I,

so in fact Q = Σ_{n=0}^{+∞} R^n.
Finally, one has to see that R^n_{ij} is the probability that, starting from i, the walk is at j at time n. By definition, this probability is

Σ_{i=i_0, i_1, ..., i_{n−1}, i_n=j} P_{i_0 i_1} · · · P_{i_{n−1} i_n},

where i_1, . . . , i_{n−1} take all possible values in {1, 2, 3}. But since state 3 is absorbing (that is, P_{3j} = 0 if j ≠ 3), the sum above is zero if one i_k is equal to 3. So the sum is equal to

Σ_{i=i_0, i_1, ..., i_{n−1}, i_n=j} R_{i_0 i_1} · · · R_{i_{n−1} i_n},

where i_1, . . . , i_{n−1} take all possible values in {1, 2}. This last expression is precisely R^n_{ij}.

(3) A computation shows that

Q =
  3800  20
  3600  20

Starting from a car that is working, the expected time before it goes out of order is the sum of expected times spent in states 1 and 2, which is thus 3820 days (and on average, the total number of days during which the car is working before it gets out of order is 3800 days).
(4) A careful examination of the proof of part (2) shows that the only requirement is that the "out of order" state can be reached from any other state in a finite number of steps.
(5) For instance, to get the expected time before getting "SF", we may consider the (3 × 3) reduced transition matrix R given by

       FF  FS  SS
  FF    q   p   0
  FS    0   0   p
  SS    0   0   p

For the inverse, we obtain

(I − R)^{−1} =
  1/p  1  p/q
  0    1  p/q
  0    0  1/q

It takes two tosses to "get the chain started". But the moment of appearance of "SF" is not changed if instead we start in the state "FF". We get that the expected time before "SF" appears is

1/p + 1 + p/q = 1/p + 1/q,

as expected.
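The inverse of a 2 × 2 matrix has a simple closed form, so the claim Q = (I − R)^{−1} is easy to check numerically. The following sketch (not part of the original solution) recovers Q and the expected lifetime of 3820 days for the car chain:

```python
# Numeric check: invert I - R for the 2x2 reduced matrix of the car chain
# using the closed-form inverse of a 2x2 matrix, and recover Q.
R = [[0.995, 0.005],
     [0.9,   0.05]]
M = [[1 - R[0][0],    -R[0][1]],
     [   -R[1][0], 1 - R[1][1]]]            # M = I - R
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Q = [[ M[1][1] / det, -M[0][1] / det],
     [-M[1][0] / det,  M[0][0] / det]]      # Q = (I - R)^{-1}
lifetime = Q[0][0] + Q[0][1]                # expected days in states 1 and 2
print(Q, lifetime)
```

The entries match the matrix computed in part (3) up to floating-point error.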


Series 5: stationary distributions

Exercise 1
Consider a Markov chain with transition probability matrix

P =
  p_0  p_1  p_2  ···  p_m
  p_m  p_0  p_1  ···  p_{m−1}
  ···  ···  ···  ···  ···
  p_1  p_2  p_3  ···  p_0

where 0 < p_i < 1 for each i and p_0 + p_1 + · · · + p_m = 1. Determine lim_{n→∞} P^n_{ij}, the stationary distribution.

Exercise 2
An airline reservation system has two computers, only one of which is in operation at any given time. A computer may break down on any given day with probability p. There is a single repair facility which takes 2 days to restore a computer to normal. The facilities are such that only one computer at a time can be dealt with. Form a Markov chain by taking as states the pairs (x, y) where x is the number of machines in operating condition at the end of a day and y is 1 if a day's labor has been expended on a machine not yet repaired and 0 otherwise. Enumerating the states in the order (2, 0), (1, 0), (1, 1), (0, 1), the transition matrix is

  q  p  0  0
  0  0  q  p
  q  p  0  0
  0  1  0  0

where p, q > 0 and p + q = 1. Find the stationary distribution in terms of p and q.

Exercise 3
Sociologists often assume that the social classes of successive generations in a family can be regarded as a Markov chain. Thus, the occupation of a son is assumed to depend only on his father's occupation and not on his grandfather's. Suppose that such a model is appropriate and that the transition probability matrix is given by

                          Son's class
                     Lower  Middle  Upper
  Father's  Lower     .40    .50     .10
  class     Middle    .05    .70     .25
            Upper     .05    .50     .45

For such a population, what fraction of people are middle class in the long run?

Exercise 4
Suppose that the weather on any day depends on the weather conditions for the previous two days. To be exact, suppose that if it was sunny today and yesterday, then it will be sunny tomorrow with probability .8; if it was sunny today but cloudy yesterday, then it will be sunny tomorrow with probability .6; if it was cloudy today but sunny yesterday, then it will be sunny tomorrow with probability .4; if it was cloudy for the last two days, then it will be sunny tomorrow with probability .1.

Such a model can be transformed into a Markov chain provided we say that the state at any time is determined by the weather conditions during both that day and the previous day. We say the process is in
• State (S,S) if it was sunny both today and yesterday;
• State (S,C) if it was sunny yesterday but cloudy today;
• State (C,S) if it was cloudy yesterday but sunny today;
• State (C,C) if it was cloudy both today and yesterday.
Enumerating the states in the order (S,S), (S,C), (C,S), (C,C), the transition matrix is

  .8  .2  0   0
  0   0   .4  .6
  .6  .4  0   0
  0   0   .1  .9

Find the stationary distribution for the Markov chain.

Exercise 5
(Chapter 3, Problem 4) Consider a discrete time Markov chain with states 0, 1, . . . , N whose matrix has elements

P_{ij} = μ_i             if j = i − 1;
         λ_i             if j = i + 1;
         1 − λ_i − μ_i   if j = i;
         0               if |j − i| > 1,      i, j = 0, 1, . . . , N.

Suppose that μ_0 = λ_0 = μ_N = λ_N = 0, and all other μ_i's and λ_i's are strictly positive, and that the initial state of the process is k. Determine the absorption probabilities at 0 and N.


Series 5: stationary distributions Solutions

Exercise 1
Since all entries in the transition matrix are strictly positive, the chain is irreducible and positive recurrent, so there exists a unique invariant measure. By the symmetric structure of the matrix (each row is a cyclic permutation of the first, so each column also sums to 1), we can guess that the uniform distribution π = (1/(m+1), . . . , 1/(m+1)) is invariant, and indeed it is trivial to check that πP = π.

Exercise 2
Following the enumeration in the exercise, let (π_1, π_2, π_3, π_4) denote the stationary distribution. It must satisfy:

(1)  π_1 + π_2 + π_3 + π_4 = 1;
(2)  qπ_1 + qπ_3 = π_1;
(3)  pπ_1 + pπ_3 + π_4 = π_2;
(4)  qπ_2 = π_3;
(5)  pπ_2 = π_4.

Applying (4) in (2), we get

qπ_1 + q²π_2 = π_1  ⟹  π_1 = (q²/p) π_2.

Using this, (4) and (5) in (1), we get

(q²/p) π_2 + π_2 + qπ_2 + pπ_2 = 1  ⟹  π_2 = p/(1 + p²).

We thus get π_1 = q²/(1+p²), π_2 = p/(1+p²), π_3 = qp/(1+p²), π_4 = p²/(1+p²).
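As a sanity check, the stationary distribution can also be approximated by power iteration and compared with the closed-form answer. This sketch is not part of the original solution, and the value p = 0.2 is an arbitrary illustrative choice:

```python
# Power-iteration check of pi = (q^2, p, qp, p^2) / (1 + p^2) for one value of p.
p = 0.2
q = 1 - p
P = [[q, p, 0, 0],
     [0, 0, q, p],
     [q, p, 0, 0],
     [0, 1, 0, 0]]
pi = [0.25, 0.25, 0.25, 0.25]
for _ in range(2000):   # iterate pi <- pi P until it stabilizes
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
formula = [v / (1 + p * p) for v in (q * q, p, q * p, p * p)]
print(pi, formula)
```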

Exercise 3
Since all the entries of the transition matrix are positive, the chain is irreducible and aperiodic, so there exists a unique stationary distribution. Denoting this distribution by π = (x, y, z), the quantity we are looking for, namely the fraction of people that are middle class in the long run, is equal to y. We know that x + y + z = 1 and also that

(x y z)  .40  .50  .10
         .05  .70  .25   = (x y z);
         .05  .50  .45

solving the corresponding system of linear equations, we get x = 1/13, y = 5/8 and z = 31/104.
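Since the candidate distribution is rational, the check πP = π can be carried out in exact arithmetic. A small verification sketch, not part of the original solution:

```python
# Exact check that pi = (1/13, 5/8, 31/104) is stationary for the class chain.
from fractions import Fraction as F

P = [[F(40, 100), F(50, 100), F(10, 100)],
     [F(5, 100),  F(70, 100), F(25, 100)],
     [F(5, 100),  F(50, 100), F(45, 100)]]
pi = [F(1, 13), F(5, 8), F(31, 104)]
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]  # row vector pi P
print(piP)
```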

Exercise 4
Let x, y, z, w denote the states of the chain in the order in which they appear in the transition matrix. The invariant measure is obtained by solving the system

(x y z w)  .8  .2  0   0
           0   0   .4  .6   = (x y z w);      x + y + z + w = 1.
           .6  .4  0   0
           0   0   .1  .9

The unique solution is (x y z w) = (3/11, 1/11, 1/11, 6/11).

Exercise 5
For k ∈ {0, . . . , N}, define f(k) = P(∃n > 0 : X_n = N | X_0 = k). Since 0 and N are absorbing, we have f(0) = 0, f(N) = 1. For k ∈ {1, . . . , N − 1}, conditioning on the first step gives

f(k) = μ_k f(k − 1) + λ_k f(k + 1) + (1 − λ_k − μ_k) f(k)  ⟹  f(k + 1) − f(k) = (μ_k/λ_k) (f(k) − f(k − 1)).

For k ∈ {1, . . . , N}, define Δ_k = f(k) − f(k − 1). We can thus rewrite what we have obtained above as Δ_{k+1} = (μ_k/λ_k) Δ_k; iterating this, we get

Δ_k = ((μ_1 · · · μ_{k−1})/(λ_1 · · · λ_{k−1})) Δ_1   for all k ≥ 2.

We can now find Δ_1:

1 = f(N) − f(0) = Δ_1 + · · · + Δ_N = (1 + μ_1/λ_1 + · · · + (μ_1 · · · μ_{N−1})/(λ_1 · · · λ_{N−1})) Δ_1
⟹ Δ_1 = (1 + μ_1/λ_1 + · · · + (μ_1 · · · μ_{N−1})/(λ_1 · · · λ_{N−1}))^{−1}.

Then, for k ∈ {1, . . . , N}, we have

f(k) = f(k) − f(0) = Δ_1 + · · · + Δ_k
     = (1 + μ_1/λ_1 + · · · + (μ_1 · · · μ_{k−1})/(λ_1 · · · λ_{k−1})) / (1 + μ_1/λ_1 + · · · + (μ_1 · · · μ_{N−1})/(λ_1 · · · λ_{N−1})).
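The formula for f can be checked numerically against the first-step equation it is supposed to solve. A small sketch, not part of the original solution; the rates below are an arbitrary illustrative choice:

```python
# Check that f(k) = (sum of rho_0..rho_{k-1}) / (sum of rho_0..rho_{N-1})
# satisfies lambda_k (f(k+1) - f(k)) = mu_k (f(k) - f(k-1)) at interior states.
N = 5
lam = [0, 0.3, 0.2, 0.4, 0.1, 0]   # lambda_0 = lambda_N = 0 (illustrative values)
mu  = [0, 0.1, 0.3, 0.2, 0.4, 0]   # mu_0 = mu_N = 0 (illustrative values)
rho = [1.0]                        # rho_k = (mu_1...mu_k)/(lambda_1...lambda_k)
for k in range(1, N):
    rho.append(rho[-1] * mu[k] / lam[k])
total = sum(rho)
f = [sum(rho[:k]) / total for k in range(N + 1)]   # f(k) = P(hit N before 0)
for k in range(1, N):
    residual = lam[k] * (f[k + 1] - f[k]) - mu[k] * (f[k] - f[k - 1])
    assert abs(residual) < 1e-12
print(f)
```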


Series 6: hitting times

Exercise 1
Let (X_n) be an irreducible Markov chain with stationary distribution π. Find E_π(min{n ≥ 1 : X_n = X_0}).

Exercise 2
Let P be the transition matrix of an irreducible Markov chain on state space S. A distribution π on S is called reversible for P if, for each x, y ∈ S, we have π(x)P(x, y) = π(y)P(y, x).
(a) Show that, if π is reversible for P, then π is stationary for P.
(b) Assuming that π is reversible for P, show that, for any n and any x_0, x_1, . . . , x_n ∈ S,

P_π(X_0 = x_0, . . . , X_n = x_n) = P_π(X_n = x_0, . . . , X_0 = x_n).

(c) Random walk on a graph. Let G = (V, E) be a finite connected graph. For each x ∈ V, let d(x) denote the degree of x. Let P be the Markov transition matrix for the Markov chain on V given by

P(x, y) = 1/d(x) if (x, y) ∈ E;  0 otherwise.

This is called the random walk on G. Show that π(x) = d(x)/Σ_z d(z) is reversible for P.

Exercise 3
We identify the square grid Λ = {1, . . . , 8}² with a chessboard. In chess, a knight is allowed to move on the board as follows. If the knight is in position (x, y), it can go to any of the positions in

Λ ∩ {(x + 1, y + 2), (x − 1, y + 2), (x − 2, y + 1), (x − 2, y − 1), (x − 1, y − 2), (x + 1, y − 2), (x + 2, y − 1), (x + 2, y + 1)}.

Let (X_n) denote the sequence of positions of a knight that at time zero is in position (1, 1) and at each step chooses uniformly among its allowable displacements. Find the expected time until the knight returns to (1, 1).
(Hint: describe the chain as in Exercise 2(c).)

Exercise 4
Consider a finite population (of fixed size N) of individuals of possible types A and a undergoing the following growth process. At instants of time t_1 < t_2 < t_3 < . . ., one individual dies and is replaced by another of type A or a. If just before a replacement time t_n there are j A's and N − j a's present, we postulate that the probability that an A individual dies is jμ_1/B_j and that an a individual dies is (N − j)μ_2/B_j, where B_j = μ_1 j + μ_2(N − j). The rationale of this model is predicated on the following structure: generally, a type A individual has chance μ_1/(μ_1 + μ_2) of dying at each epoch t_n and an a individual has chance μ_2/(μ_1 + μ_2) of dying at time t_n (μ_1/μ_2 can be interpreted as the selective advantage of A types over a types). Taking account of the sizes of the population, it is plausible to assign the probabilities μ_1 j/B_j and μ_2(N − j)/B_j to the events that the replaced individual is of type A and type a, respectively. We assume no difference in the birth pattern of the two types, and so the new individual is taken to be A with probability j/N and a with probability (N − j)/N.
(1) Describe the transition probabilities of the Markov chain (X_n) representing the number of individuals of type A at times t_n (n = 1, 2, . . .).
(2) Find the probability that the population is eventually all of type a, given k A's and (N − k) a's initially.


Exercise 5
Let (X_k) be a Markov chain for which i is a recurrent state. Show that

lim_{N→∞} P(X_k ≠ i for n + 1 ≤ k ≤ n + N | X_0 = i) = 0.

If i is a positive recurrent state, prove that the convergence in the above equation is uniform with respect to n.


Series 6: hitting times Solutions

Exercise 1

E_π(min{n ≥ 1 : X_n = X_0}) = Σ_{x∈S} π(x) · E_x(min{n ≥ 1 : X_n = x}) = Σ_{x∈S} π(x) · (1/π(x)) = #S.

Exercise 2

(a) For any x ∈ S,

Σ_y π(y)P(y, x) = Σ_y π(x)P(x, y) = π(x) Σ_y P(x, y) = π(x),

so π is stationary.

(b)

P_π(X_0 = x_0, . . . , X_n = x_n) = π(x_0)P(x_0, x_1)P(x_1, x_2) · · · P(x_{n−1}, x_n)
  = P(x_1, x_0)π(x_1)P(x_1, x_2) · · · P(x_{n−1}, x_n)
  = P(x_1, x_0)P(x_2, x_1)π(x_2)P(x_2, x_3) · · · P(x_{n−1}, x_n)
  = · · · = P(x_1, x_0)P(x_2, x_1)P(x_3, x_2) · · · P(x_n, x_{n−1})π(x_n)
  = P_π(X_n = x_0, . . . , X_0 = x_n).

(c) Let x, y ∈ V. If x and y are not adjacent, we have π(x)P(x, y) = π(y)P(y, x) = 0. Otherwise,

π(x)P(x, y) = (d(x)/Σ_z d(z)) · (1/d(x)) = 1/Σ_z d(z) = (d(y)/Σ_z d(z)) · (1/d(y)) = π(y)P(y, x).

Exercise 3

We give a graph structure to the chessboard by taking the vertex set to be Λ and setting edges x ∼ y if and only if the knight can go from x to y (this implies that it can also go from y to x, so the relation is symmetric). The stochastic sequence (X_n) is then a random walk on this graph.

We claim that the graph is connected (equivalently, that the chain is irreducible). To see this, consider the reduced board Λ′ = {1, 2, 3}². Without leaving Λ′, it is possible to go from (1, 1) to (1, 2): perform the moves (1, 1) → (3, 2) → (1, 3) → (2, 1); of course, reversing these moves allows us to go from (2, 1) to (1, 1). Now, consider two adjacent positions x, y ∈ Λ (x and y differ by 1 in one coordinate and coincide in the other). For any such pair x, y, it is possible to find a 3 × 3 sub-board of Λ such that either x or y is one of its corners. Then, except possibly for a rotation, the moves given for Λ′ can be repeated so that we can go from x to y and from y to x. Now, for arbitrary points x, y ∈ Λ, we can construct a path from x to y such that at each step we go to an adjacent position, so all cases are now covered.

By Exercise 2(c), the stationary distribution is given by π(x) = d(x)/Σ_z d(z). The degrees of the vertices of the graph are given by

  2 3 4 4 4 4 3 2
  3 4 6 6 6 6 4 3
  4 6 8 8 8 8 6 4
  4 6 8 8 8 8 6 4
  4 6 8 8 8 8 6 4
  4 6 8 8 8 8 6 4
  3 4 6 6 6 6 4 3
  2 3 4 4 4 4 3 2

The expected return time to (1, 1) is given by 1/π((1, 1)) = (Σ_z d(z))/d((1, 1)) = 336/2 = 168.
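The degree table and the value 336/2 = 168 can be recomputed mechanically. A small sketch, not part of the original solution:

```python
# Recompute knight degrees on the 8x8 board and the expected return time.
moves = [(1, 2), (-1, 2), (-2, 1), (-2, -1), (-1, -2), (1, -2), (2, -1), (2, 1)]

def degree(x, y):
    # number of legal knight moves from square (x, y), 1 <= x, y <= 8
    return sum(1 <= x + dx <= 8 and 1 <= y + dy <= 8 for dx, dy in moves)

total = sum(degree(x, y) for x in range(1, 9) for y in range(1, 9))
print(total, total / degree(1, 1))
```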

Exercise 4
(1) The transition probabilities are given by

P_{j,j−1} = μ_1 j (N − j) / (B_j N),   P_{j,j+1} = μ_2 (N − j) j / (B_j N),
P_{jj} = 1 − P_{j,j−1} − P_{j,j+1},    P_{ij} = 0 for |i − j| > 1.

(2) We are exactly in the context of last week's Exercise 5. Using what was obtained there, we have P(∃n : X_n = 0 | X_0 = 0) = 1, P(∃n : X_n = 0 | X_0 = N) = 0 and

P(∃n : X_n = N | X_0 = k)
  = (1 + P_{1,0}/P_{1,2} + (P_{1,0}P_{2,1})/(P_{1,2}P_{2,3}) + · · · + (P_{1,0}···P_{k−1,k−2})/(P_{1,2}···P_{k−1,k}))
  / (1 + P_{1,0}/P_{1,2} + (P_{1,0}P_{2,1})/(P_{1,2}P_{2,3}) + · · · + (P_{1,0}···P_{N−1,N−2})/(P_{1,2}···P_{N−1,N})).

Since in our present case we have P_{k,k−1}/P_{k,k+1} = μ_1/μ_2 for all k, we conclude that

P(∃n : X_n = 0 | X_0 = k) = 1 − P(∃n : X_n = N | X_0 = k)
  = 1 − (1 + μ_1/μ_2 + · · · + (μ_1/μ_2)^{k−1}) / (1 + μ_1/μ_2 + · · · + (μ_1/μ_2)^{N−1})
  = ((μ_1/μ_2)^N − (μ_1/μ_2)^k) / ((μ_1/μ_2)^N − 1).

Exercise 5
Let τ_i = min{k ≥ 1 : X_k = i} be the first return time to i. When i is recurrent, we have P_i(τ_i < ∞) = 1, so

(1)  lim_{K→∞} P_i(τ_i > K) = 0.

If, in addition, i is positive recurrent, we have

Σ_{l=0}^{∞} P_i(τ_i > l) = E_i(τ_i) < ∞,

so that

(2)  lim_{K→∞} Σ_{l=K}^{∞} P_i(τ_i > l) = 0.

With this at hand, we are ready. Decomposing over the last visit to i strictly before time n,

P_i(∄k ∈ [n, n + N] : X_k = i) = Σ_{t=0}^{n−1} P_i(X_t = i, X_{t+1} ≠ i, . . . , X_{n+N} ≠ i)
  = Σ_{t=0}^{n−1} P_i(X_t = i) · P_i(τ_i > N + n − t) ≤ Σ_{t=0}^{n−1} P_i(τ_i > N + n − t) = Σ_{l=1}^{n} P_i(τ_i > N + l).

It follows from (1) that, if we keep n fixed and take N to infinity, the last expression converges to zero. In addition, if i is positive recurrent, we can use (2) to obtain

Σ_{l=1}^{n} P_i(τ_i > N + l) = Σ_{l=N+1}^{N+n} P_i(τ_i > l) ≤ Σ_{l=N+1}^{∞} P_i(τ_i > l) → 0

as N → ∞, so the convergence is uniform in n.


Series 7: recap on Markov chains

Exercise 1
Let S = {0, 1, 2, 3}. We consider the Markov chain on S whose transition matrix is given by

P =
  1/4  1/4  1/2  0
  1/6  1/6  1/3  1/3
  0    0    1/3  2/3
  0    0    1/2  1/2

1) What are the communication classes, the transient states? Is the chain aperiodic? If not, give the period.
2) What is P^2_{00}? What, approximately, is P^100?
3) Starting from 2, what is the expected number of visits to state 3 before returning to 2?

Exercise 2
Consider the Markov chain on the state space below, where at each step a neighbour is chosen uniformly among sites that are linked by an edge to the current position.

1) What are the communication classes, the transient states? Is the chain aperiodic? If not, give the period.
2) Is there an invariant measure? Is it unique? In case it is, give it.
3) Starting from x, what is the probability to hit S before F?
4) Starting from x, what is the expected number of visits to F before returning to x?

Exercise 3
We have n coins C_1, . . . , C_n. For each k, C_k is biased so that it has probability 1/(2k + 1) of falling heads. We toss the n coins. What is the probability of obtaining an odd number of heads?
(Hint: letting p_k = 1/(2k + 1) and q_k = 1 − p_k, you may study the function

f(x) = (q_1 + p_1 x) · · · (q_n + p_n x)

and its power series.)


Series 7: recap on Markov chains Solutions

Exercise 1
1) The communication classes are {0, 1} and {2, 3}. To see that state 0 is transient, note that for the chain starting at 0, there is a non-zero probability that the chain goes to state 2. Once there, there is no path going back to 0, so indeed, the probability that the return time to 0 is infinite is strictly positive. The same holds for state 1. The chain is aperiodic on its recurrent class {2, 3}.
2) We have

P^2_{00} = (P_{00})^2 + P_{01}P_{10} = 1/16 + 1/24 = 5/48.

To approximate P^100 (that is, to find what P^n looks like when n tends to infinity), we first compute the stationary distribution on the communication class {2, 3}, which has transition matrix

  1/3  2/3
  1/2  1/2

We find a stationary distribution π with weights (3/7, 4/7). On the set {2, 3}, the chain is irreducible and aperiodic, so starting from any point, the distribution of X_n converges to the stationary distribution. Hence, for i, j ∈ {2, 3}, one has P^n_{ij} → π(j) as n tends to infinity. Clearly, if i ∈ {2, 3} and j ∈ {0, 1}, then P^n_{ij} = 0.
If j ∈ {0, 1}, then it is a transient state, so for any i, we have P^n_{ij} → 0 (in fact, we proved a stronger statement in Exercise 1 of Series 3: that Σ_n P^n_{ij} is finite). There remains to determine P^n_{ij} when i ∈ {0, 1} and j ∈ {2, 3}. Intuitively, the chain enters the class {2, 3} at some point, and then converges to equilibrium in the class {2, 3}, so we should have

(1)  lim_{n→+∞} P^n_{ij} = π(j).

Here is a rigorous proof of this fact. Let

τ = inf{n : X_n ∈ {2, 3}}.

Since i is transient, P_i[τ = +∞] = 0. For any ε > 0, we can find N large enough such that P_i[τ ≥ N] ≤ ε. For j ∈ {2, 3} and n ≥ N, we have

(2)  P^n_{ij} = P_i[X_n = j] = Σ_{k=0}^{N−1} P_i[X_n = j, τ = k] + P_i[τ ≥ N],

where the last term is at most ε. We decompose P_i[X_n = j, τ = k] into

P_i[X_n = j, τ = k, X_τ = 2] + P_i[X_n = j, τ = k, X_τ = 3],

and we apply the strong Markov property at time τ on each term. The first term becomes

P_i[τ = k, X_τ = 2] P_2[X_{n−k} = j] → P_i[τ = k, X_τ = 2] π(j)   as n → +∞,

where we used the fact that the Markov chain on {2, 3} is aperiodic. Similarly,

P_i[X_n = j, τ = k, X_τ = 3] → P_i[τ = k, X_τ = 3] π(j),

and putting the two together, we obtain

P_i[X_n = j, τ = k] → P_i[τ = k] π(j).

So the sum appearing in the right-hand side of (2) is such that

Σ_{k=0}^{N−1} P_i[X_n = j, τ = k] → P_i[τ < N] π(j)   as n → +∞,

and 1 − ε ≤ P_i[τ < N] ≤ 1. We have thus proved that

lim sup_{n→+∞} P^n_{ij} ≤ π(j) + ε   and   lim inf_{n→+∞} P^n_{ij} ≥ (1 − ε) π(j).

Since ε > 0 was arbitrary, this proves (1).
Conclusion:

P^100 ≈ lim_{n→+∞} P^n =
  0  0  3/7  4/7
  0  0  3/7  4/7
  0  0  3/7  4/7
  0  0  3/7  4/7

3) We can use the fact that this expected value is exactly π(3)/π(2) = 4/3. We can also take a more explicit approach and decompose the possible movements of the chain: with probability 1/3 the walk stays at 2 and therefore the number of visits to 3 is zero. Else it goes to 3, stays there for a Geometric(1/2) number of steps, and then goes back to state 2. The expectation we are looking for is thus

(1/3) · 0 + (2/3) Σ_{k=1}^{+∞} k (1/2)^k = 4/3.

Exercise 2
1) The Markov chain is a random walk on a graph (see Exercise 2 of Series 5). Let G = (V, E) denote the graph. G is connected, that is, for any x, y ∈ V, there exists a path x_0 = x, x_1, x_2, . . . , x_N = y such that (x_i, x_{i+1}) ∈ E for each i. We then have P_x(y is ever reached) ≥ P(x, x_1)P(x_1, x_2) · · · P(x_{N−1}, y) > 0, so the chain is irreducible, and there are no transient states. The chain is also aperiodic. To see this, fix any vertex that is an extreme point of one of the diagonal segments. Starting from this vertex, we can return to it in 3 steps (using the diagonal) and also in 4 steps (not using the diagonal), so the period of this vertex must divide 3 and 4, so it is 1. Since the chain is irreducible, we conclude that all vertices have period 1, so the chain is aperiodic.
2) Since the chain is irreducible and the state space is finite, there exists a unique invariant measure. We can look for a reversible measure as in Exercise 2 of Series 5, and we find that the equilibrium distribution π evaluated at a vertex x is equal to the degree of x divided by the sum of the degrees of all vertices. In our case, this gives the following invariant measure:

  2/52  3/52  3/52  2/52
  3/52  5/52  5/52  3/52
  3/52  5/52  5/52  3/52
  2/52  3/52  3/52  2/52

(Remark: the invariant measure is represented here as a matrix just for ease of notation, following the positions of the graph; we can of course enumerate the 16 states of the chain and present the invariant measure as a row vector as we usually do.)

3) Consider the diagram

  1/2    1−b    1−a    0
  b      1/2    1−c    1−a
  a      c      1/2    1−b
  1      a      b      1/2

The value at each site represents the probability, starting from this site, to arrive at S before F. Notice that we have used symmetry several times. Using the relation

P_x(S is reached before F) = (1/deg(x)) Σ_{y∼x} P_y(S is reached before F),

valid for any x ∉ {S, F}, we obtain the system of linear equations

a = 1/3 + (1/3)b + (1/3)c
b = (1/3)a + 1/3
c = (2/5)a + (2/5)(1/2) + (1/5)(1 − c)

Solving this gives b = 4/7.

4) This is π(F)/π(x) = 2/3.

Exercise 3
Let us write

f(x) = Σ_{k=0}^{n} a_k x^k.

By developing the product, we observe that a_k is the probability of getting exactly k heads. We want to know what a_1 + a_3 + a_5 + · · · is. Note that

f(1) = Σ_{k=0}^{n} a_k,

while

f(−1) = Σ_{k=0}^{n} a_k (−1)^k = Σ_{k even} a_k − Σ_{k odd} a_k.

So the quantity we are interested in is

(f(1) − f(−1)) / 2.

Clearly, f(1) = 1, while an easy computation gives f(−1) = 1/(2n + 1): the product Π_k (q_k − p_k) = Π_k (2k − 1)/(2k + 1) telescopes. The probability we are interested in is thus n/(2n + 1).


Series 8: Poisson processes

Exercise 1
Let X_1 and X_2 be independent exponentially distributed random variables with parameters λ_1 and λ_2, so that

P(X_i > t) = exp(−λ_i t) for t > 0.

Let

N = 1 if X_1 < X_2;  2 if X_2 ≤ X_1,
U = min{X_1, X_2} = X_N,
V = max{X_1, X_2},
W = V − U = |X_1 − X_2|.

Show:

a) P(N = 1) = λ_1/(λ_1 + λ_2) and P(N = 2) = λ_2/(λ_1 + λ_2).

b) P(U > t) = exp{−(λ_1 + λ_2)t} for t > 0.

c) N and U are independent random variables.

d) P(W > t | N = 1) = exp{−λ_2 t} and P(W > t | N = 2) = exp{−λ_1 t} for t > 0.

e) U and W = V − U are independent random variables.

Exercise 2
Assume a device fails when a cumulative effect of k shocks occurs. If the shocks happen according to a Poisson process with parameter λ, find the density function for the life T of the device.
Answer:

f(t) = λ^k t^{k−1} e^{−λt} / Γ(k)  if t > 0;  0 if t ≤ 0.

Exercise 3
Let X(t), t ≥ 0 be a Poisson process with intensity parameter λ. Suppose each arrival is "registered" with probability p, independently of other arrivals. Let Y(t), t ≥ 0 be the process of "registered" arrivals. Prove that Y(t) is a Poisson process with parameter λp.

Exercise 4
At time zero, a single bacterium in a dish divides into two bacteria. This species of bacteria has the following property: after a bacterium B divides into two new bacteria B_1 and B_2, the subsequent length of time until each B_i divides is an exponential random variable of rate λ = 1, independently of everything else happening in the dish.

a) Compute the expectation of the time T_n at which the number of bacteria reaches n.

b) Compute the variance of T_n.


Series 8: Poisson processes Solutions

Exercise 1
a)

P(N = 1) = P(X_1 < X_2) = ∫_0^∞ ∫_x^∞ λ_2 e^{−λ_2 y} λ_1 e^{−λ_1 x} dy dx = λ_1/(λ_1 + λ_2);
P(N = 2) = 1 − P(N = 1) = λ_2/(λ_1 + λ_2).

b)

P(U > t) = P(X_1 > t, X_2 > t) = P(X_1 > t) P(X_2 > t) = e^{−(λ_1+λ_2)t}.

c)

P(U > t, N = 1) = P(t < X_1 < X_2) = ∫_t^∞ ∫_x^∞ λ_2 e^{−λ_2 y} λ_1 e^{−λ_1 x} dy dx
  = (λ_1/(λ_1 + λ_2)) e^{−(λ_1+λ_2)t} = P(U > t) P(N = 1).

P(U > t, N = 2) = P(U > t) − P(U > t, N = 1) = P(U > t)(1 − P(N = 1)) = P(U > t) P(N = 2).

d)

P(W > t | N = 1) = P(W > t, N = 1)/P(N = 1) = P(X_2 > X_1 + t)/P(N = 1)
  = ((λ_1 + λ_2)/λ_1) ∫_0^∞ ∫_{x+t}^∞ λ_2 e^{−λ_2 y} λ_1 e^{−λ_1 x} dy dx
  = ((λ_1 + λ_2)/λ_1) · (λ_1/(λ_1 + λ_2)) · e^{−λ_2 t} = e^{−λ_2 t}.

Exchanging the roles of X_1 and X_2, we get P(W > t | N = 2) = e^{−λ_1 t}.

e) For all s, t > 0,

P(U > s, W > t) = P(s < X_1 < X_2 − t) + P(s < X_2 < X_1 − t)
  = ∫_s^∞ ∫_{x+t}^∞ λ_2 e^{−λ_2 y} λ_1 e^{−λ_1 x} dy dx + ∫_s^∞ ∫_{y+t}^∞ λ_1 e^{−λ_1 x} λ_2 e^{−λ_2 y} dx dy
  = (λ_1/(λ_1 + λ_2)) e^{−λ_2 t} e^{−(λ_1+λ_2)s} + (λ_2/(λ_1 + λ_2)) e^{−λ_1 t} e^{−(λ_1+λ_2)s}.

Also,

P(W > t) = P(W > t | N = 1) P(N = 1) + P(W > t | N = 2) P(N = 2)
  = (λ_1/(λ_1 + λ_2)) e^{−λ_2 t} + (λ_2/(λ_1 + λ_2)) e^{−λ_1 t}.

We thus have P(U > s, W > t) = P(U > s) · P(W > t), so they are independent.

Exercise 2
If T_1, T_2, T_3, . . . are the arrival times for the Poisson process, then T_1, T_2 − T_1, T_3 − T_2, . . . are independent and all have Exponential(λ) distribution. We are looking for the law of T_k = T_1 + (T_2 − T_1) + · · · + (T_k − T_{k−1}). This is given by the k-fold convolution of Exponential(λ). We claim that the density f^{(k)} of this convolution is given by

f^{(k)}(t) = λ^k t^{k−1} e^{−λt} / Γ(k)  if t > 0;  0 otherwise.

This is easy to prove by induction: when k = 1, it is trivial; assuming it is true for k, we get

f^{(k+1)}(x) = (f^{(k)} ∗ f^{(1)})(x) = ∫_0^x λ e^{−λ(x−y)} (λ^k y^{k−1}/Γ(k)) e^{−λy} dy
  = (λ^{k+1}/Γ(k)) e^{−λx} (x^k/k) = (λ^{k+1} x^k / Γ(k + 1)) e^{−λx},

since kΓ(k) = Γ(k + 1).
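A numerical sanity check of the claimed Gamma (Erlang) density, not part of the original solution: it should integrate to 1 and have mean k/λ. The parameter values below are an arbitrary illustrative choice:

```python
import math

# Riemann-sum check that f(t) = lam^k t^(k-1) e^{-lam t} / (k-1)! is a density
# with mean k/lam (Gamma(k) = (k-1)! for integer k).
k, lam = 4, 2.0
dt = 1e-4
mass = 0.0
mean = 0.0
for i in range(1, 200000):       # integrate up to t = 20; the tail is negligible
    t = i * dt
    f = lam**k * t**(k - 1) / math.factorial(k - 1) * math.exp(-lam * t)
    mass += f * dt
    mean += t * f * dt
print(mass, mean)
```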

Exercise 3
Let T_1, T_2, . . . denote the sequence of arrival times of X(t). One rigorous way of stating what is said in the exercise is as follows. Let Z_1, Z_2, . . . be a sequence of independent Bernoulli(p) random variables, assumed also independent of the process X. Then put Y(t) = Σ_{i=1}^{X(t)} Z_i. With this construction, Z_i is 1 if arrival i is registered and 0 otherwise.
Given an interval A ⊂ [0, ∞), we will denote respectively by N_A and Ñ_A the number of arrivals of X(t) and Y(t) in A. We prove that Y(t) is a Poisson process of parameter λp by verifying:

• If A, B are disjoint intervals, then Ñ_A and Ñ_B are independent.

P(Ñ_A = m, Ñ_B = n) = Σ_{m_1≥m, n_1≥n} P(Ñ_A = m, Ñ_B = n, N_A = m_1, N_B = n_1)
  = Σ_{m_1≥m, n_1≥n} Σ_{r,s: r+m_1≤s} P(Ñ_A = m, Ñ_B = n, N_A = m_1, T_r, . . . , T_{r+m_1−1} ∈ A, N_B = n_1, T_s, . . . , T_{s+n_1−1} ∈ B)
  = Σ_{m_1≥m, n_1≥n} Σ_{r,s: r+m_1≤s} P(Σ_{i=r}^{r+m_1−1} Z_i = m, Σ_{j=s}^{s+n_1−1} Z_j = n, N_A = m_1, T_r, . . . , T_{r+m_1−1} ∈ A, N_B = n_1, T_s, . . . , T_{s+n_1−1} ∈ B)
  = Σ_{m_1≥m, n_1≥n} P(Bin(m_1, p) = m) · P(Bin(n_1, p) = n) · Σ_{r,s: r+m_1≤s} P(N_A = m_1, T_r, . . . , T_{r+m_1−1} ∈ A, N_B = n_1, T_s, . . . , T_{s+n_1−1} ∈ B)
  = Σ_{m_1≥m, n_1≥n} P(Bin(m_1, p) = m) · P(Bin(n_1, p) = n) · P(N_A = m_1, N_B = n_1)
  = (Σ_{m_1≥m} P(Bin(m_1, p) = m) P(N_A = m_1)) · (Σ_{n_1≥n} P(Bin(n_1, p) = n) P(N_B = n_1)).

Repeating the above computation in each of the two last parentheses, we get

P(Ñ_A = m, Ñ_B = n) = P(Ñ_A = m) · P(Ñ_B = n),

as required.

• P(Ñ_{[t,t+h]} ≥ 2) = o(h). This is true because {Ñ_{[t,t+h]} ≥ 2} ⊂ {N_{[t,t+h]} ≥ 2} and P(N_{[t,t+h]} ≥ 2) = o(h).

• P(Ñ_{[t,t+h]} = 1) = λph + o(h). Indeed,

P(Ñ_{[t,t+h]} = 1) = P(Ñ_{[t,t+h]} = 1, N_{[t,t+h]} = 1) + P(Ñ_{[t,t+h]} = 1, N_{[t,t+h]} ≥ 2)
  = Σ_{i=1}^∞ P(N_{[t,t+h]} = 1, T_i ∈ [t, t + h], Z_i = 1) + o(h)
  = p Σ_{i=1}^∞ P(N_{[t,t+h]} = 1, T_i ∈ [t, t + h]) + o(h)
  = p · P(N_{[t,t+h]} = 1) + o(h) = pλh + p·o(h) + o(h).
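A seeded Monte Carlo check of the thinning result, not part of the original solution: simulated registered counts over [0, t] should have mean and variance close to λpt. The parameter values are an arbitrary illustrative choice:

```python
import random

# Simulate a Poisson(lam) process on [0, t], register each arrival with
# probability p, and compare the registered counts with Poisson(lam * p * t).
random.seed(1)
lam, p, t, trials = 3.0, 0.4, 2.0, 100000
counts = []
for _ in range(trials):
    registered = 0
    s = random.expovariate(lam)        # first arrival time
    while s <= t:
        if random.random() < p:        # independent Bernoulli(p) registration
            registered += 1
        s += random.expovariate(lam)   # next inter-arrival time
    counts.append(registered)
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(mean, var)
```

For a Poisson distribution the mean equals the variance, so both estimates should be close to λpt = 2.4 here.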


Exercise 4
Let R_0 = 0 and let R_1 < R_2 < . . . be the times at which the population size increases. Since we already start with two bacteria at time 0, we have T_n = R_{n−2}.
Consider the following alternate experiment. Start at time S_0 = 0 a population of 2 bacteria and wait until one of them reproduces; call this time S_1. At this instant, we forget about these bacteria and start watching another population with three new-born bacteria and wait until one of them reproduces; call this time S_2. Then yet again we forget about this population and start a new one with four new-born bacteria, and wait until one of them reproduces, time S_3, and so on.
By the lack of memory of the exponential distribution, the law of R_1, R_2 − R_1, R_3 − R_2, . . . is equal to that of S_1, S_2 − S_1, S_3 − S_2, . . . In the alternate experiment it is easy to see that S_m − S_{m−1} is equal to the minimum of m + 1 independent Exponential(1) random variables (the times until the reproduction of each bacterium in the m-th population), and using Exercise 1b) we conclude that S_m − S_{m−1} has Exponential(m + 1) distribution. Thus,

E(T_n) = E(R_{n−2}) = E(R_1 + (R_2 − R_1) + · · · + (R_{n−2} − R_{n−3})) = 1/2 + 1/3 + · · · + 1/(n − 1).

Using independence,

Var(T_n) = Var(R_{n−2}) = Var(R_1) + Var(R_2 − R_1) + · · · + Var(R_{n−2} − R_{n−3}) = 1/2² + 1/3² + · · · + 1/(n − 1)².
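The representation of T_n as a sum of independent exponentials can be checked by a seeded simulation. A small sketch, not part of the original solution, with n = 5 chosen arbitrarily:

```python
import random

# T_n (time to grow from 2 to n bacteria) is a sum of Exp(m) times, m = 2..n-1;
# its mean should be the harmonic sum 1/2 + ... + 1/(n-1).
random.seed(0)

def sample_T(n):
    return sum(random.expovariate(m) for m in range(2, n))

n, trials = 5, 200000
est = sum(sample_T(n) for _ in range(trials)) / trials
exact = sum(1.0 / m for m in range(2, n))   # 1/2 + 1/3 + 1/4
print(est, exact)
```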


Series 9: More on Poisson processes

Exercise 1
Messages arrive at a telegraph office in accordance with the laws of a Poisson process with mean rate of 3 messages per hour.
(a) What is the probability that no message will have arrived during the morning hours (8 to 12)?
(b) What is the distribution of the time at which the first afternoon message arrives?

Exercise 2
A continuous-time Markov chain has two states labeled 0 and 1. The waiting time in state 0 is exponentially distributed with parameter λ > 0. The waiting time in state 1 follows an exponential distribution with parameter μ > 0. Compute the probability P_{00}(t) of being in state 0 at time t, starting at time 0 in state 0.
Solution: P_{00}(t) = μ/(λ + μ) + (λ/(λ + μ)) e^{−(λ+μ)t}.

Exercise 3
In the above problem, let λ = μ and define N(t) to be the number of times the system has changed states in time t ≥ 0. Find the probability distribution of N(t).
Solution: P(N(t) = n) = e^{−λt} (λt)^n / n!.

Exercise 4
Let X(t) be a pure birth continuous-time Markov chain. Assume that

P(an event happens in (t, t + h) | X(t) is odd) = λ_1 h + o(h);
P(an event happens in (t, t + h) | X(t) is even) = λ_2 h + o(h),

where o(h)/h → 0 as h → 0. Take X(0) = 0. Find the probabilities

P_1(t) = P(X(t) is odd),   P_2(t) = P(X(t) is even).

Exercise 5

Under the conditions of the above problem, show that

E X(t) = (2λ_1λ_2/(λ_1 + λ_2)) t + ((λ_1 − λ_2)λ_2/(λ_1 + λ_2)²) [exp{−(λ_1 + λ_2)t} − 1].


Series 9: More on Poisson processes Solutions

Exercise 1
The distribution of the number of messages that arrive during a four-hour period is Poisson(4 · λ) = Poisson(12). Thus,

P(no messages from hour 8 to 12) = P(Poisson(12) = 0) = e^{−12}.

By the lack of memory of the Poisson process, the arrival time T of the first message after hour 12 is given by T = 12 + X, where X is a random variable with Exponential(3) distribution.

Exercise 2
The forward Kolmogorov equation tells us that

P′_{00}(t) = −λP_{00}(t) + μP_{01}(t)
           = −λP_{00}(t) + μ(1 − P_{00}(t))
           = −(λ + μ)P_{00}(t) + μ.

We can then observe that

d/dt (P_{00}(t) e^{(λ+μ)t}) = μ e^{(λ+μ)t}.

Hence, there exists a constant c ∈ R such that

P_{00}(t) = μ/(λ + μ) + c e^{−(λ+μ)t}.

Observing further that P_{00}(0) = 1, we get c = λ/(λ + μ), and we arrive at the conclusion.

Exercise 3
Let T_0 = 0 and T_1 < T_2 < · · · be the times at which the chain changes state. Then the hypothesis implies that T_1, T_2 − T_1, T_3 − T_2, . . . is a sequence of independent random variables with the Exponential(λ) distribution. Thus, N(t) = max{n : T_n ≤ t} is a Poisson process of intensity λ. In particular, for fixed t the law of N(t) is Poisson(λt), that is, P(N(t) = n) = ((λt)^n/n!) e^{−λt}.

Exercise 4
For t > 0, h > 0,

P1(t+h) = P(X(t+h) is odd)
  = P(X(t) is odd, no arrivals in [t, t+h])
    + P(X(t) is even, one arrival in [t, t+h])
    + P(X(t+h) is odd, two or more arrivals in [t, t+h])
  = P(X(t) is odd) P(no arrivals in [t, t+h] | X(t) is odd)
    + P(X(t) is even) P(one arrival in [t, t+h] | X(t) is even) + o(h)
  = P1(t)(1 − λ1h − o(h)) + P2(t)(λ2h + o(h)) + o(h).

Subtracting P1(t), dividing by h and taking h to zero, we conclude that P1 is the solution of the differential equation

P1′(t) = −λ1P1(t) + λ2P2(t).

The initial condition is P1(0) = 0, since we start at zero, which is even. Since P2(t) = 1 − P1(t), the solution of this equation is

P1(t) = λ2/(λ1+λ2) − (λ2/(λ1+λ2)) e^{−t(λ1+λ2)}.

We then immediately get

P2(t) = 1 − P1(t) = λ1/(λ1+λ2) + (λ2/(λ1+λ2)) e^{−t(λ1+λ2)}.

Exercise 5
To begin with, note that, for t, h ≥ 0, one has

E[X(t+h) − X(t)] = P[X(t+h) − X(t) = 1] + E[(X(t+h) − X(t)) 1{X(t+h)−X(t) ≥ 2}].

We would like to let h tend to 0 in the equality above, and to show that the last term is negligible, that is,

E[(X(t+h) − X(t)) 1{X(t+h)−X(t) ≥ 2}] = o(h).

Let λ′ = max(λ1, λ2). It is possible to find a coupling of X(t+h) − X(t) with a Poisson random variable N of parameter λ′h such that X(t+h) − X(t) ≤ N. So

E[(X(t+h) − X(t)) 1{X(t+h)−X(t) ≥ 2}] = Σ_{k=2}^{+∞} k P[X(t+h) − X(t) = k] ≤ Σ_{k=2}^{+∞} k P[N = k] = Σ_{k=2}^{+∞} k e^{−λ′h} (λ′h)^k/k! = O(h^2).

As a consequence,

E[X(t+h) − X(t)] = P[X(t+h) − X(t) = 1] + o(h)
  = P[X(t+h) − X(t) = 1 | X(t) is odd] P1(t) + P[X(t+h) − X(t) = 1 | X(t) is even] P2(t) + o(h),

where P1(t) = P[X(t) is odd] and P2(t) = P[X(t) is even]. Using the assumptions of the exercise, we get

E[X(t+h) − X(t)] = λ1h P1(t) + λ2h P2(t) + o(h).

Hence the function t ↦ E[X(t)] is (right-)differentiable, and

d/dt E[X(t)] = λ1P1(t) + λ2P2(t).

Using the results of the previous exercise, together with the fact that E[X(0)] = 0, we get

E(X(t)) = ∫_0^t [λ1P1(s) + λ2P2(s)] ds
        = ∫_0^t [ 2λ1λ2/(λ1+λ2) + ((λ2^2 − λ1λ2)/(λ1+λ2)) e^{−s(λ1+λ2)} ] ds
        = (2λ1λ2/(λ1+λ2)) t + ((λ1−λ2)λ2/(λ1+λ2)^2) [exp{−(λ1+λ2)t} − 1].

Series 10: Renewal theory

Exercise 1
A patient arrives at a doctor's office. With probability 1/5 he receives service immediately, while with probability 4/5 his service is deferred an hour. After an hour's wait, again with probability 1/5 his needs are serviced instantly, or another delay of an hour is imposed, and so on.
(a) What is the patient's waiting time distribution?
(b) What is the distribution of the number of patients who receive service over an 8-hour period, assuming the same procedure is followed independently for every arrival and the arrival pattern is that of a Poisson process with parameter 1?

Exercise 2
The random lifetime of an item has distribution function F(x). Show that the mean remaining life of an item of age x is

E(X − x | X > x) = ∫_x^∞ (1 − F(t)) dt / (1 − F(x)).

Hint. Recall the derivation, which applies to any positive integrable random variable Z,

E(Z) = ∫_0^∞ x F_Z(dx) = ∫_0^∞ ∫_0^x 1 dt F_Z(dx) = ∫_0^∞ ∫_t^∞ F_Z(dx) dt = ∫_0^∞ (1 − F_Z(t)) dt.

Try to do something similar.

Exercise 3
Find P(N(t) ≥ k) in a renewal process having lifetime density

f(x) = ρ e^{−ρ(x−δ)} for x > δ;  f(x) = 0 for x ≤ δ,

where δ > 0 is fixed.

Exercise 4
For a renewal process with lifetime distribution F(t) = ∫_0^t x e^{−x} dx, show that

P(number of renewals in (0, x] is even) = e^{−x} · Σ_{n=0}^∞ ( x^{4n}/(4n)! + x^{4n+1}/(4n+1)! ).

Hint:

P(number of renewals in (0, x] is even) = Σ_{n=0}^∞ P(number of renewals in (0, x] = 2n).

Series 10: Renewal theory Solutions

Exercise 1
(a) P(waiting time = k) = (4/5)^k (1/5), k = 0, 1, 2, . . ..
(b) We will need the following fact. Let X be a Binomial(N, p) random variable, where N is itself a Poi(λ) random variable. Then the distribution of X is Poi(λp). This can be proved with probability generating functions:

g_{Bin(n,p)}(s) = (sp + 1 − p)^n,  g_{Poi(λ)}(s) = e^{λ(s−1)};

g_X(s) = Σ_{n=0}^∞ E(s^X | N = n) P(N = n) = Σ_{n=0}^∞ (1 + sp − p)^n (λ^n/n!) e^{−λ} = e^{λ(sp+1−p)} e^{−λ} = e^{λp(s−1)} = g_{Poi(λp)}(s).

Now, for i = 0, . . . , 7, let Ni denote the number of patients that arrive in the time interval (i, i+1] and Xi the number of these patients that receive service before hour 8. Since patients arrive as a Poisson process with parameter 1 and (i, i+1] has length 1, Ni has Poisson(1) distribution. For fixed i, the probability that a given patient arriving in (i, i+1] receives service before hour 8 is equal to the probability that he waits 7 − i or fewer hours for service; this is equal to 1 − (4/5)^{7−i+1} = 1 − (4/5)^{8−i}. Thus, Xi has Binomial(Ni, 1 − (4/5)^{8−i}) = Poi(1 − (4/5)^{8−i}) distribution. Notice that X0, . . . , X7 are all independent, because they are each determined from arrivals of a Poisson process in disjoint intervals and patients' waiting times, which are assumed to be independent. Since the sum of independent Poisson random variables has Poisson distribution with parameter equal to the sum of the parameters, we have

X0 + · · · + X7 ∼ Poi( [1 − (4/5)^8] + [1 − (4/5)^7] + · · · + [1 − (4/5)^1] ).
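The thinning fact used above can be checked empirically (an added sketch, not part of the original solution): for a Poisson recorded count that is thinned by independent coin flips, the result should again be Poisson, so its empirical mean and variance should both be close to λp. The values of λ, p and the sample size below are arbitrary choices.

```python
import random

# Sketch: X ~ Binomial(N, p) with N ~ Poisson(lam) gives X ~ Poisson(lam*p);
# for a Poisson variable, mean and variance coincide, so we check both.
random.seed(1)
lam, p, runs = 4.0, 0.3, 100_000

samples = []
for _ in range(runs):
    # Draw N ~ Poisson(lam) by counting Exp(1) inter-arrivals in [0, lam].
    n, t = 0, random.expovariate(1.0)
    while t <= lam:
        n += 1
        t += random.expovariate(1.0)
    # Thin: each of the n arrivals is kept independently with probability p.
    samples.append(sum(random.random() < p for _ in range(n)))

mean = sum(samples) / runs
var = sum((x - mean) ** 2 for x in samples) / runs
print(abs(mean - lam * p) < 0.05, abs(var - lam * p) < 0.05)
```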

Exercise 2

E(X − x | X > x) = −x + (1/P(X > x)) ∫_x^∞ y F(dy)
  = −x + (1/P(X > x)) ∫_x^∞ ∫_0^y 1 dt F(dy)
  = −x + (1/P(X > x)) ( ∫_0^x ∫_x^∞ F(dy) dt + ∫_x^∞ ∫_t^∞ F(dy) dt )
  = −x + (1/P(X > x)) ( x(1 − F(x)) + ∫_x^∞ (1 − F(t)) dt )
  = ∫_x^∞ (1 − F(t)) dt / (1 − F(x)).

Exercise 3
Let the sequence X1, X2, . . . of independent random variables with the assigned density be the sequence of inter-arrival times of the renewal process. The density in the statement of the exercise is the density of a random variable obtained by adding δ to an exponential(ρ) random variable. We can thus write Xi = Yi + δ, where Yi ∼ exp(ρ) for i > 0. Then,

P(N(t) ≥ k) = P(X1 + · · · + Xk ≤ t) = P(Y1 + · · · + Yk ≤ t − δk)
  = 0 if t − δk ≤ 0, and P(Ñ(t − δk) ≥ k) otherwise,

where Ñ is a Poisson process of parameter ρ. We further remark that P(Ñ(t − δk) ≥ k) = P(Poi(ρ(t − δk)) ≥ k).
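A Monte Carlo sketch of this identity (added, not part of the original solution): draw k shifted-exponential lifetimes, check how often their sum is at most t, and compare with the Poisson tail. All parameter values are arbitrary choices.

```python
import math
import random

# Sketch: for inter-arrival times delta + Exp(rho), check that
# P(N(t) >= k) equals the Poisson tail P(Poi(rho*(t - delta*k)) >= k).
random.seed(2)
rho, delta, t, k, runs = 1.5, 0.4, 3.0, 3, 100_000

hits = 0
for _ in range(runs):
    s = sum(delta + random.expovariate(rho) for _ in range(k))
    hits += s <= t  # N(t) >= k  iff  X1 + ... + Xk <= t

mu = rho * (t - delta * k)
tail = 1 - sum(math.exp(-mu) * mu**j / math.factorial(j) for j in range(k))
print(abs(hits / runs - tail) < 0.01)
```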

Exercise 4
Let F be the distribution function of the renewal times; we have F(x) = ∫_0^x y e^{−y} dy = 1 − e^{−x} − x e^{−x} for x > 0. Also let f(x) = x e^{−x} · I{x>0} be the density function of the renewal times. Let f^{(n)} denote the n-fold convolution of f; let us show by induction that

f^{(n)}(x) = (1/(2n−1)!) x^{2n−1} e^{−x} · I{x>0}.

Indeed, this trivially holds for n = 1 and, assuming it holds for fixed n and x > 0,

f^{(n+1)}(x) = ∫_0^x f(x − y) · f^{(n)}(y) dy
  = ∫_0^x (x − y) e^{−(x−y)} · (1/(2n−1)!) y^{2n−1} e^{−y} dy
  = (e^{−x}/(2n−1)!) ( x^{2n+1}/(2n) − x^{2n+1}/(2n+1) )
  = (1/(2(n+1)−1)!) x^{2(n+1)−1} e^{−x}.

Of course, if x < 0 we have f^{(n+1)}(x) = 0, and so the induction is complete. Now, let N(x) denote the number of renewals in [0, x]; we have

P(N(x) = 2n) = ∫_0^x (1 − F(x − y)) · f^{(2n)}(y) dy
  = ∫_0^x (1/(4n−1)!) y^{4n−1} e^{−y} ( e^{−(x−y)} + (x − y) e^{−(x−y)} ) dy
  = (e^{−x}/(4n−1)!) ( ∫_0^x y^{4n−1} dy + x ∫_0^x y^{4n−1} dy − ∫_0^x y^{4n} dy )
  = (e^{−x}/(4n−1)!) ( x^{4n}/(4n) + x^{4n+1}/(4n) − x^{4n+1}/(4n+1) )
  = e^{−x} ( x^{4n}/(4n)! + x^{4n+1}/(4n+1)! ).

The result now follows by summing this expression over n.
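A simulation sketch of the final formula (added, not part of the original solution): the lifetime density x e^{−x} is Gamma(2, 1), i.e. a sum of two Exp(1) variables, so the renewal process is easy to simulate directly. The value of x and the sample size are arbitrary choices.

```python
import math
import random

# Sketch: Monte Carlo check of P(number of renewals in (0, x] is even)
# against the series e^{-x} * sum_n (x^{4n}/(4n)! + x^{4n+1}/(4n+1)!).
random.seed(3)
x, runs = 2.0, 100_000

even = 0
for _ in range(runs):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(1.0) + random.expovariate(1.0)  # one Gamma(2,1) lifetime
        if t > x:
            break
        n += 1
    even += (n % 2 == 0)

series = math.exp(-x) * sum(
    x ** (4 * n) / math.factorial(4 * n) + x ** (4 * n + 1) / math.factorial(4 * n + 1)
    for n in range(10)
)
print(abs(even / runs - series) < 0.01)
```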

Series 11: more on renewal theory

Exercise 1
The weather in a certain locale consists of rainy spells alternating with spells when the sun shines. Suppose that the number of days of each rainy spell is Poisson distributed with parameter 2, and a sunny spell is distributed according to an exponential distribution with mean 7 days. Assume that the successive random durations of rainy and sunny spells are statistically independent variables. In the long run, what is the probability that it will be raining on a given day?

Exercise 2
Determine the distribution of the total life βt of the Poisson process.

Exercise 3
Consider a renewal process with non-arithmetic, finite-mean distribution of renewals, and suppose that the excess life γt and current life δt are independent random variables for all t. Establish that the process is Poisson.
Hint: use limit theorems on the identity

P[δt ≥ x, γt > y] = P[δt ≥ x] P[γt > y]

to derive a functional equation for

v(x) = (1/µ) ∫_x^{+∞} (1 − F(t)) dt.

Exercise 4
Show that the renewal function corresponding to the lifetime density

f(x) = λ^2 x e^{−λx}, x > 0

is

M(t) = (1/2)λt − (1/4)(1 − e^{−2λt}).

Hint. Use the uniqueness of the solution of the renewal equation. Here are some shortcuts for the computations:

∫_0^T e^{−λt} dt = (1/λ)(1 − e^{−λT});
∫_0^T t e^{−λt} dt = (1/λ^2)(1 − e^{−λT} − λT e^{−λT});
∫_0^T t^2 e^{−λt} dt = (2/λ^3)(1 − e^{−λT} − λT e^{−λT} − λ^2T^2 e^{−λT}/2);
∫_0^T t e^{λt} dt = (1/λ^2)(1 − e^{λT} + λT e^{λT}).

Series 11: more on renewal theory Solutions

Exercise 1
Since we are only interested in the long-run result, we can assume that at time 0 we are starting a rainy spell. For t > 0, define

A(t) = P(it is raining at time t);
a(t) = P(the initial rainy spell is still taking place at time t).

Also let F denote the distribution of the duration of two successive seasons (i.e., the distribution of a sum of two independent random variables, one with law Poisson(2) and the other with law exponential(1/7)). We then have the renewal equation

A(t) = a(t) + ∫_0^t A(t − s) dF(s).

(Notice that F is neither purely discrete nor purely continuous, so the integral has to be seen as a sum plus an integral.) From the renewal theorem, we have

lim_{t→∞} A(t) = (1/µ) ∫_0^∞ a(s) ds,

where µ is the expectation associated with F, equal to 2 + 7 = 9. Denoting by Y the duration of the initial rainy spell, we have ∫_0^∞ a(s) ds = Σ_{s=0}^∞ P(Y > s) = E(Y) = 2. So the final solution is 2/(2+7) = 2/9.
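The long-run answer 2/9 can be checked by simulating the alternating process over a long horizon (an added sketch, not part of the original solution; the horizon length is an arbitrary choice).

```python
import random

# Sketch: alternate rainy spells (Poisson(2) days) and sunny spells
# (exponential, mean 7 days), and measure the long-run fraction of rainy time.
random.seed(4)

def poisson(lam):
    # Poisson sample via counting Exp(1) inter-arrivals in [0, lam].
    n, t = 0, random.expovariate(1.0)
    while t <= lam:
        n += 1
        t += random.expovariate(1.0)
    return n

horizon, t, rainy_time = 1_000_000.0, 0.0, 0.0
while t < horizon:
    r = poisson(2)                      # rainy spell duration (days)
    rainy_time += min(r, horizon - t)   # clip at the horizon
    t += r
    if t >= horizon:
        break
    t += random.expovariate(1 / 7)      # sunny spell duration

print(abs(rainy_time / horizon - 2 / 9) < 0.01)
```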

Exercise 2
Let λ denote the parameter of the Poisson process. We have βt = γt + δt, where γt = S_{N(t)+1} − t is the residual life and δt = t − S_{N(t)} is the current life. Moreover, since the process is Poisson(λ), γt and δt are independent, with distributions given by

F_{γt}(x) = 0 if x < 0;  1 − e^{−λx} if x ≥ 0.

F_{δt}(x) = 0 if x < 0;  1 − e^{−λx} if 0 ≤ x < t;  1 if x ≥ t.

Notice that the distribution of γt does not depend on t, and is the exponential(λ) distribution. The distribution of βt is given by the convolution of the two above distributions. If x < t, we have

F_{βt}(x) = ∫_0^x F_{γt}(x − s) F_{δt}(ds) = ∫_0^x (1 − e^{−λ(x−s)}) λe^{−λs} ds = 1 − e^{−λx} − λx e^{−λx}.

If x ≥ t, we have to take into account that the distribution of δt has a mass of e^{−λt} at the point t; the convolution is then given by

F_{βt}(x) = ∫_0^x F_{γt}(x − s) F_{δt}(ds) = ∫_0^t (1 − e^{−λ(x−s)}) λe^{−λs} ds + (1 − e^{−λ(x−t)}) e^{−λt} = 1 − e^{−λx} − λt e^{−λx}.

Putting the two cases together, the solution can be expressed as

F_{βt}(x) = 0 if x < 0;  1 − e^{−λx} − λ · min(t, x) · e^{−λx} if x ≥ 0.

Exercise 3
It follows from the renewal theorem that

lim_{t→+∞} P[γt > y] = v(y).

Note that P[δt ≥ x, γt > y] = P[γ_{t−x} > y + x], and using the previous result, we get that this converges to v(x + y) as t tends to infinity. Similarly, observing the case y = 0, we infer that

lim_{t→+∞} P[δt ≥ x] = v(x).

We thus obtain that v(x + y) = v(x)v(y). Since v is monotone, it is a classical exercise to check that there must exist λ < 0 such that

v(x) = e^{λx}

(if g = log v, then g(x + y) = g(x) + g(y) . . . ). By differentiating, we find

1 − F(x) = −µλ e^{λx},

so F is the distribution function of an exponential random variable of parameter −λ (> 0) (and µ = −1/λ).

Exercise 4
Let

M(t) = E(# arrivals until time t);
M̃(t) = (1/2)λt − (1/4)(1 − e^{−2λt}).

Our objective is to show that M(t) = M̃(t). We know that the renewal equation

A(T) = F(T) + ∫_0^T A(T − t) F(dt)

has a unique solution, and this solution is M (see course notes or page 183 of the textbook). So, we only need to check that M̃(T) = F(T) + ∫_0^T M̃(T − t) F(dt). First note that, for T > 0,

F(T) = ∫_0^T λ^2 t e^{−λt} dt = 1 − e^{−λT} − λT e^{−λT}.

Next,

∫_0^T M̃(T − t) F(dt) = ∫_0^T [ (1/2)λ(T − t) − 1/4 + (1/4) e^{−2λ(T−t)} ] λ^2 t e^{−λt} dt
  = (λ^3/2) T ∫_0^T t e^{−λt} dt − (λ^3/2) ∫_0^T t^2 e^{−λt} dt − (λ^2/4) ∫_0^T t e^{−λt} dt + (λ^2/4) e^{−2λT} ∫_0^T t e^{λt} dt.

Using the shortcuts in the statement of the exercise and simplifying, we get F(T) + ∫_0^T M̃(T − t) F(dt) = M̃(T), as required.
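The renewal function can also be checked by simulation (an added sketch, not part of the original solution): lifetimes with density λ²x e^{−λx} are Gamma(2, λ), i.e. sums of two Exp(λ) variables. The values of λ, t and the sample size are arbitrary choices.

```python
import math
import random

# Sketch: estimate M(t) = E[#arrivals up to t] for Gamma(2, lam) lifetimes
# and compare with lam*t/2 - (1 - exp(-2*lam*t))/4.
random.seed(5)
lam, t, runs = 1.0, 3.0, 100_000

total = 0
for _ in range(runs):
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam) + random.expovariate(lam)  # one lifetime
        if s > t:
            break
        n += 1
    total += n

m_hat = total / runs
m_formula = lam * t / 2 - (1 - math.exp(-2 * lam * t)) / 4
print(abs(m_hat - m_formula) < 0.02)
```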


Series 12: branching processes

Exercise 1
Sir Galton is worried about the survival of his family name. He himself has three sons, and estimates that each of them has probability 1/8 to have no boy, probability 1/2 to have 1 boy, probability 1/4 to have 2 boys, and probability 1/8 to have 3 boys. He thinks that these probabilities will be constant in time, that the numbers of children his descendants will have are independent random variables, and, since he lives in the XIXth century, he believes that only men can pass on their name. According to these assumptions, what is the probability that his name will become extinct?

Exercise 2
A server takes 1 minute to serve each client. During the n-th minute, the number Zn of clients that arrive and get in line for service is a random variable. We assume that these variables are independent and that

P(Zn = 0) = 0.2,  P(Zn = 1) = 0.2,  P(Zn = 2) = 0.6.

The attendant can leave to have a coffee only if there are no clients to serve. What is the probability that he can ever leave for a coffee?

Exercise 3
(i) Show that, for a positive random variable Y with Σ i^2 P(Y = i) < ∞, we have

Var(Y) = g_Y′′(1) + E(Y) − E(Y)^2,

where g_Y is the probability generating function of Y.
(ii) Let (Xn)_{n≥0} be a branching process (as usual, assume X0 = 1) and let Z be a random variable whose distribution is equal to the distribution of the number of descendants in (Xn). Let m = E(Z). Using the relation g_{Xn+1} = g_{Xn} ∘ g_Z, show that E(Xn) = m^n.
(iii) Let σ^2 = Var(Z). Again using g_{Xn+1} = g_{Xn} ∘ g_Z, show that

g_{Xn+1}′′ = (g_{Xn}′′ ∘ g_Z)(g_Z′)^2 + (g_{Xn}′ ∘ g_Z) g_Z′′.

(iv) Show by induction that

Var(Xn) = (m^n(m^n − 1)/(m^2 − m)) σ^2 if m ≠ 1;  nσ^2 if m = 1.

Series 12: branching processes Solutions

Exercise 1
Let p be the probability that the line of descent of one of Galton's sons dies out. Since the average number of boys is strictly larger than 1, we know that p > 0. The fixed-point equation in our case reads

p = 1/8 + (1/2)p + (1/4)p^2 + (1/8)p^3.

As always, p = 1 is a solution. It is not the one we are looking for, so we can factor it out and simplify the equation into

p^2 + 3p − 1 = 0.

The solution we are looking for is

p = (√13 − 3)/2.

Now, the lines of descent of Galton's three sons are assumed to be independent. The probability that they all become extinct is thus

((√13 − 3)/2)^3,

and eternal survival is the complementary event.

Exercise 2
We use a branching process as a model to describe the problem. Let us suppose that there are X0 clients in line. They constitute generation 0. The "direct descendants" of a client are those that arrive while that client is being served. Generation n + 1 is formed of direct descendants of clients of generation n. Thus, each one of the Xn clients of generation n will be served during a minute during which a certain number Zi (i = 1, . . . , Xn) of clients arrive: his direct descendants. Once all clients of generation n have been served, we find Xn+1 = Z1 + · · · + Z_{Xn} new clients in line; Xn is thus a branching process. The probability that the server can have a pause is equal to the probability of extinction of the branching process, which in turn is given by the smallest solution of the equation

α = 1/5 + (1/5)α + (3/5)α^2,

namely α = 1/3.
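Both extinction probabilities can be recovered numerically (an added sketch, not part of the original solutions): iterating s ← g(s) from s = 0 converges to the smallest non-negative fixed point of the offspring generating function.

```python
# Sketch: extinction probability as the limit of the iterates g^n(0).
# We check both offspring laws used in this series.

def extinction_probability(probs, iters=10_000):
    """Iterate g(s) = sum_k p_k s^k from s = 0; the iterates increase to the
    smallest non-negative fixed point, i.e. the extinction probability."""
    s = 0.0
    for _ in range(iters):
        s = sum(p * s**k for k, p in enumerate(probs))
    return s

p_galton = extinction_probability([1/8, 1/2, 1/4, 1/8])  # one son's line
p_server = extinction_probability([0.2, 0.2, 0.6])
print(abs(p_galton - (13**0.5 - 3) / 2) < 1e-9)  # True
print(abs(p_server - 1/3) < 1e-9)                # True
```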

Exercise 3
(i) For |s| < 1, we have

g_Y′′(s) = (d^2/ds^2) Σ_{n=0}^∞ s^n P(Y = n) = Σ_{n=0}^∞ (d^2/ds^2) s^n P(Y = n) = Σ_{n=1}^∞ n(n−1) s^{n−2} P(Y = n).

When Σ_{n=1}^∞ n^2 P(Y = n) < ∞, we can take

g_Y′′(1) = lim_{s→1} g_Y′′(s) = Σ_{n=1}^∞ n(n−1) P(Y = n) = Σ_{n=1}^∞ n^2 P(Y = n) − Σ_{n=1}^∞ n P(Y = n) = E(Y^2) − E(Y).

Then, Var(Y) = E(Y^2) − E(Y)^2 = g_Y′′(1) + E(Y) − E(Y)^2.

(ii) We know that the generating function of Xn is given by the recursion formula

g_{Xn+1} = g_{Xn} ∘ g_Z,

so that we get

(1)  g_{Xn+1}′ = (g_{Xn}′ ∘ g_Z) g_Z′

and then

E(Xn+1) = g_{Xn+1}′(1) = g_{Xn}′(g_Z(1)) · g_Z′(1) = E(Xn)E(Z),

since g_Z(1) = 1. Thus, E(Xn) = m^n for each n ≥ 1.

(iii)–(iv) We obviously have Var(X0) = 0. Differentiating equation (1), we get

g_{Xn+1}′′ = (g_{Xn}′ ∘ g_Z)′ g_Z′ + (g_{Xn}′ ∘ g_Z) g_Z′′ = (g_{Xn}′′ ∘ g_Z)(g_Z′)^2 + (g_{Xn}′ ∘ g_Z) g_Z′′.

We then have

Var(Xn+1) = g_{Xn+1}′′(1) + E(Xn+1) − E(Xn+1)^2
  = g_{Xn}′′(g_Z(1))(g_Z′(1))^2 + g_{Xn}′(g_Z(1)) g_Z′′(1) + E(Xn+1) − E(Xn+1)^2
  = (Var(Xn) + E(Xn)^2 − E(Xn)) E(Z)^2 + E(Xn)(Var(Z) + E(Z)^2 − E(Z)) + E(Xn+1) − E(Xn+1)^2
  = m^2 Var(Xn) + m^{2(n+1)} − m^{n+2} + m^n σ^2 + m^{n+2} − m^{n+1} + m^{n+1} − m^{2(n+1)}
  = m^n σ^2 + m^2 Var(Xn).

Thus,

Var(Xn+1) = m^n σ^2 + m^2 Var(Xn) = (m^n + m^{n+1}) σ^2 + m^4 Var(Xn−1)
  = · · · = (m^n + m^{n+1} + · · · + m^{2n}) σ^2 + m^{2n+2} Var(X0)
  = (n+1)σ^2 if m = 1;  (m^n(m^{n+1} − 1)/(m − 1)) σ^2 if m ≠ 1,

since Var(X0) = 0. The result is now proved.
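The mean and variance formulas can be verified exactly by composing generating functions as polynomials (an added sketch, not part of the original solution; it assumes numpy is available, and the offspring law (0.2, 0.3, 0.5) is an arbitrary choice).

```python
import numpy.polynomial.polynomial as P

# Sketch: build the coefficients of g_{X_n} by explicit composition
# g_{X_{n+1}} = g_{X_n} o g_Z, then read off E(X_n) = g'(1) and
# Var(X_n) = g''(1) + g'(1) - g'(1)^2, and compare with the formulas.
probs = [0.2, 0.3, 0.5]            # P(Z=0), P(Z=1), P(Z=2)
m = sum(k * p for k, p in enumerate(probs))
sigma2 = sum(k**2 * p for k, p in enumerate(probs)) - m**2

def compose(outer, inner):
    """Coefficients (lowest degree first) of outer(inner(s)), via Horner."""
    result = [outer[-1]]
    for a in reversed(outer[:-1]):
        result = P.polyadd([a], P.polymul(result, inner))
    return result

g = probs                          # generating function of X_1
for _ in range(3):                 # compose three times: g.f. of X_4
    g = compose(g, probs)

n = 4
mean = P.polyval(1.0, P.polyder(g))
var = P.polyval(1.0, P.polyder(g, 2)) + mean - mean**2
print(abs(mean - m**n) < 1e-9)
print(abs(var - sigma2 * m**n * (m**n - 1) / (m**2 - m)) < 1e-9)
```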

Series 13: branching and point processes

Exercise 1
Suppose that in a branching process, the number of offspring of an initial particle has a distribution whose generating function is f(s). Each member of the first generation has a number of offspring whose distribution has generating function g(s). The next generation has generating function f, the next g, and the functions continue to alternate this way from generation to generation. What is the probability of extinction? Does this probability change if we start the process with the function g, and then continue to alternate? Can the process go extinct with probability 1 in one case, but not in the other?

Exercise 2
Consider a branching process with initial size N and probability generating function

ϕ(s) = q + ps,  p, q > 0,  p + q = 1.

Determine the probability distribution of the time T when the population first becomes extinct.

Exercise 3
Points are thrown on R^2 according to a Poisson point process of rate λ. What is the distribution of the distance from the origin to the closest point of the point process?

Exercise 4
Let t ↦ X(t) be a Poisson process of rate λ. What is the distribution of the set of jumps of the process t ↦ X(e^{−t})?

Series 13: branching and point processes Solutions

Exercise 1
Considering only even generations, one obtains a usual branching process with generating function f(g(s)). The probability of extinction is thus the smallest p ≥ 0 such that f(g(p)) = p. If we start with the generating function g instead of f, then the probability of extinction is the smallest p such that g(f(p)) = p. These two probabilities are different in general. Choose

f(s) = 1/4 + (3/4)s,  g(s) = s^2.

Solving f(g(p)) = p for 0 < p < 1 leads to p = 1/3, while solving g(f(p)) = p for 0 < p < 1 leads to p = 1/9. It is however not possible to find an example where one has a non-zero probability of survival in one case, but not in the other. Recall that this question is decided by comparing the expected number of descendants to the value 1. Here, if m1 is the expected number of descendants associated to the generating function f, and m2 is the expected number of descendants associated to the generating function g, then the expected number of descendants associated to the generating function f(g(s)) is m1m2, which is the same as the value for g(f(s)). This can be proved by representing the process using random variables, but can also be seen directly on the generating function, since the value is

(d/ds f(g(s)))|_{s=1} = g′(1) f′(g(1)) = g′(1) f′(1) = m1m2 = (d/ds g(f(s)))|_{s=1}.

Exercise 2
Let us consider the process started with one individual, and let Xn be the size of the population at time n. The generating function of Xn is ϕ^{(n)}(s), the n-fold composition of ϕ, and one can show by induction that

ϕ^{(n)}(s) = q + pq + · · · + p^{n−1}q + p^n s.

The probability that Xn = 0 is thus

ϕ^{(n)}(0) = q + pq + · · · + p^{n−1}q = q(1 − p^n)/(1 − p) = 1 − p^n.

When the population is started with N individuals, the offspring of these individuals are independent, and thus

P[T ≤ n] = P[the offspring of the N individuals are all extinct by time n] = (1 − p^n)^N.
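The composition formula can be checked exactly (an added sketch, not part of the original solution): iterating ϕ(s) = q + ps n times at s = 0 should give 1 − p^n, and raising it to the power N gives P[T ≤ n]. The values of p and N are arbitrary choices.

```python
# Sketch: verify phi^(n)(0) = 1 - p^n and P[T <= n] = (1 - p^n)^N
# by explicit n-fold composition of phi(s) = q + p*s.
p, N = 0.6, 5
q = 1 - p

for n in range(1, 8):
    s = 0.0
    for _ in range(n):           # n-fold composition of phi, evaluated at 0
        s = q + p * s
    # s is now phi^(n)(0) = P(X_n = 0) for one initial individual
    assert abs(s - (1 - p**n)) < 1e-12
    assert abs(s**N - (1 - p**n)**N) < 1e-12  # N independent lines of descent

print("P[T <= n] = (1 - p^n)^N verified for n = 1..7")
```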

Exercise 3
The distance d from the origin to the closest point is larger than r if and only if no point of the Poisson process falls within the ball of radius r centred at the origin. The number of such points follows a Poisson distribution with parameter πr^2λ. Hence,

P[d > r] = exp(−πλr^2).

It thus follows that d has probability density 2πλr exp(−πλr^2) dr.

Exercise 4
For any interval I, let N(I) be the number of jumps of X occurring during I, and N′(I) be the number of jumps of the process t ↦ X(e^{−t}) that occur during I. One can check that

N([e^{−b}, e^{−a}]) = N′([a, b]).

As a consequence, it is easy to see that for two disjoint intervals I1 and I2, the random variables N′(I1) and N′(I2) are independent. Moreover, N′([a, b]) follows a Poisson distribution with parameter

λ(e^{−a} − e^{−b}) = ∫_a^b λe^{−x} dx.

The set of jumps of the process t ↦ X(e^{−t}) thus forms a Poisson point process with intensity measure λe^{−x} dx.

Series 14: martingales in discrete time

1.
Let Xi be a sequence of independent random variables with E[Xi] = 0 and V(Xi) = E[(Xi − E[Xi])^2] = σi^2. Show that the sequence

Sn = Σ_{i=1}^n (Xi^2 − σi^2)

is a martingale with respect to F, the filtration generated by the sequence Xi.

Proof. (For the definition of martingale see Definition 5.2.) We first check the integrability of Sn:

E[|Sn|] = E[|Σ_{i=1}^n (Xi^2 − σi^2)|] ≤ E[Σ_{i=1}^n |Xi^2 − σi^2|] ≤ Σ_{i=1}^n E[|Xi^2 − σi^2|] ≤ Σ_{i=1}^n (E[Xi^2] + σi^2) = Σ_{i=1}^n 2σi^2 < ∞,

hence Sn is integrable. To check that Sn is Fn-measurable, just observe that since Xi for i = 1, 2, . . . , n are Fn-measurable, the sum of them is as well. What remains to prove is the martingale property:

E[Sn+1 | Fn] = E[Sn + X_{n+1}^2 − σ_{n+1}^2 | Fn] = E[Sn | Fn] + E[X_{n+1}^2 | Fn] − σ_{n+1}^2
  = Sn + E[X_{n+1}^2] − σ_{n+1}^2 = Sn + σ_{n+1}^2 − σ_{n+1}^2 = Sn.

Hence Sn is an F-martingale.

2.
Let Xi be IID with Xi ∼ N(0, 1) for each i and put Yn = Σ_{i=1}^n Xi. Show that

Sn = exp{αYn − nα^2/2}

is an Fn-martingale for every α ∈ R.

Proof. (For the definition of martingale see Definition 5.2.) We first check the integrability of Sn. Since Sn ≥ 0, and knowing that E[e^{cX}] = e^{c^2/2} for X ∼ N(0, 1) (moment generating function), we get

E[|Sn|] = E[Sn] = E[exp{α Σ_{i=1}^n Xi − nα^2/2}] = e^{−nα^2/2} Π_{i=1}^n E[e^{αXi}] = 1 < ∞,

hence Sn is integrable. To check that Sn is Fn-measurable, observe that Xi for i = 1, 2, . . . , n are Fn-measurable, and the exponential of the sum is continuous, so Sn is measurable as well. What remains to prove is the martingale property:

E[Sn+1 | Fn] = E[exp{α Σ_{i=1}^{n+1} Xi − (n+1)α^2/2} | Fn] = E[Sn exp{αX_{n+1} − α^2/2} | Fn]
  = Sn E[exp{αX_{n+1} − α^2/2} | Fn] = Sn · 1 = Sn,

since X_{n+1} is independent of Fn and E[e^{αX_{n+1}}] = e^{α^2/2}, as in problem 1. Hence Sn is an F-martingale.
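A Monte Carlo sketch of the martingale property (added, not part of the original solution): since S0 = 1, the martingale property implies E[Sn] = 1 for all n. The values of α, n and the sample size are arbitrary choices.

```python
import math
import random

# Sketch: empirical check that S_n = exp(alpha*Y_n - n*alpha^2/2)
# has expectation 1.
random.seed(6)
alpha, n, runs = 0.5, 10, 200_000

total = 0.0
for _ in range(runs):
    y = sum(random.gauss(0.0, 1.0) for _ in range(n))  # Y_n, sum of n N(0,1)
    total += math.exp(alpha * y - n * alpha**2 / 2)

print(abs(total / runs - 1.0) < 0.05)
```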

3.
Let Xi be a sequence of bounded random variables such that

Sn = Σ_{i=1}^n Xi

is an F-martingale. Show that Cov(Xi, Xj) = 0 for i ≠ j.

Proof. By Proposition 5.4 we get that E[Xi] = 0 for i > 1, hence for n ≥ 1 and m ≥ 1 we have

Cov(Xn, Xn+m) = E[XnXn+m] − E[Xn]E[Xn+m] = E[XnXn+m] = E[E[XnXn+m | Fn]]
  = E[Xn E[Xn+m | Fn]] = E[Xn E[Sn+m − Sn+m−1 | Fn]] = 0,

where the last equality stems from the fact that Sn is an F-martingale, so that E[Sn+m − Sn+m−1 | Fn] = 0 by the tower property. Hence the Xi are mutually uncorrelated.

4.
Let Mn and Nn be square integrable F-martingales. Show that

E[Mn+1Nn+1 | Fn] − MnNn = 〈M, N〉n+1 − 〈M, N〉n.   (1)

Proof. (For the definition of square integrability see Definition 8.1; for the definition of quadratic variation and covariation see page 54.) The right-hand side of equality (1) yields

〈M, N〉n+1 − 〈M, N〉n = Σ_{i=0}^n E[(Mi+1 − Mi)(Ni+1 − Ni) | Fi] − Σ_{i=0}^{n−1} E[(Mi+1 − Mi)(Ni+1 − Ni) | Fi]
  = E[(Mn+1 − Mn)(Nn+1 − Nn) | Fn]
  = E[Mn+1Nn+1 − Mn+1Nn − MnNn+1 + MnNn | Fn].

Using the martingale property of the two processes Mn and Nn, and the measurability of Mn and Nn with respect to Fn, we get

E[Mn+1Nn+1 − Mn+1Nn − MnNn+1 + MnNn | Fn]
  = E[Mn+1Nn+1 | Fn] − E[Mn+1Nn | Fn] − E[MnNn+1 | Fn] + E[MnNn | Fn]
  = E[Mn+1Nn+1 | Fn] − Nn E[Mn+1 | Fn] − Mn E[Nn+1 | Fn] + MnNn
  = E[Mn+1Nn+1 | Fn] − MnNn,

and the proof is done.

5.
Let Mn and Nn be square integrable F-martingales.
1. Let α and β be real numbers. Verify that, for every integer n ≥ 0,

〈αM + βN〉n = α^2〈M〉n + 2αβ〈M, N〉n + β^2〈N〉n.

2. Derive the Cauchy–Schwarz inequality

|〈M, N〉n| ≤ √〈M〉n √〈N〉n,  n ≥ 0.

Proof. (For the definition of square integrability see Definition 8.1; for the definition of quadratic variation and covariation see page 54.)
1) By Definition 8.3 we get

〈αM + βN〉n = Σ_{i=0}^{n−1} E[(αMi+1 + βNi+1 − αMi − βNi)^2 | Fi]
  = Σ_{i=0}^{n−1} E[((αMi+1 − αMi) + (βNi+1 − βNi))^2 | Fi]
  = Σ_{i=0}^{n−1} E[α^2(Mi+1 − Mi)^2 + 2αβ(Mi+1 − Mi)(Ni+1 − Ni) + β^2(Ni+1 − Ni)^2 | Fi]
  = α^2〈M〉n + 2αβ〈M, N〉n + β^2〈N〉n,

which is what we set out to prove.
2) The quadratic variation is always non-negative, and by using this observation, combined with the result from the first part of this exercise, we get, for any λ ∈ R,

0 ≤ 〈M − λN〉n = 〈M〉n − 2λ〈M, N〉n + λ^2〈N〉n.

If 〈N〉n = 0, this inequality forces 〈M, N〉n = 0 and there is nothing to prove; otherwise let λ = 〈M, N〉n/〈N〉n, so that

0 ≤ 〈M〉n − 2λ〈M, N〉n + λ^2〈N〉n = 〈M〉n − 2〈M, N〉n^2/〈N〉n + 〈M, N〉n^2/〈N〉n = 〈M〉n − 〈M, N〉n^2/〈N〉n,

hence

〈M, N〉n^2 ≤ 〈M〉n〈N〉n ⟺ |〈M, N〉n| ≤ √〈M〉n √〈N〉n,

and the proof is done.

6.
Let Mn and Nn be square integrable F-martingales. Check the following parallelogram equality:

〈M, N〉n = (1/4)(〈M + N〉n − 〈M − N〉n).

Proof. Using the result from part 1) of problem 8.2 we get

〈M + N〉n − 〈M − N〉n = 〈M〉n + 2〈M, N〉n + 〈N〉n − 〈M〉n + 2〈M, N〉n − 〈N〉n = 4〈M, N〉n,

hence 〈M, N〉n = (1/4)(〈M + N〉n − 〈M − N〉n).

7.
Let Mn and Nn be two square integrable F-martingales and let ϕ and ψ be bounded F-adapted processes. Derive the Cauchy–Schwarz inequality

|〈I^M(ϕ), I^N(ψ)〉n| ≤ √〈I^M(ϕ)〉n √〈I^N(ψ)〉n,  n ≥ 0.

Proof. By Proposition 9.3 we have that both I^M(ϕ) and I^N(ψ) are square integrable F-martingales, so the proof is identical to the one given in part 2) of problem 8.2.

8.
In this problem we look at a simple market with only two assets: a bond and a stock. The bond price is modelled according to

Bn = (1 + r)Bn−1 for n = 1, 2, . . . , N,  B0 = 1,

where r > −1 is the constant rate of return of the bond. The stock price is assumed to be stochastic, with dynamics

Sn = (1 + Rn)Sn−1 for n = 1, 2, . . . , N,  S0 = s,

where s > 0 and Rn is a sequence of IID random variables on (Ω, F, P). Furthermore, let Fn be the filtration given by Fn = σ(R1, . . . , Rn), n = 1, . . . , N.

a) When is Sn/Bn a martingale with respect to the filtration Fn?

We now look at portfolios consisting of the bond and the stock. For every n = 0, 1, 2, . . . , N let xn and yn be the number of stocks and bonds respectively bought at time n and held over the period [n, n+1). Furthermore, let

Vn = xnSn + ynBn

be the value of the portfolio over [n, n+1), and let V0 be our initial wealth. The rebalancing of the portfolio is done in the following way. At every time n we observe the value of our old portfolio, composed at time n − 1, which at time n is xn−1Sn + yn−1Bn. We are allowed to use only this amount to rebalance the portfolio at time n, i.e. we are not allowed to withdraw or add any money. A portfolio with this restriction is called a self-financing portfolio. Formally, we define a self-financing portfolio as a pair xn, yn of Fn-adapted processes such that

xn−1Sn + yn−1Bn = xnSn + ynBn,  n = 1, . . . , N.

b) Show that if Sn/Bn is a martingale with respect to the filtration Fn, then so is Vn/Bn, where Vn is the portfolio value of any self-financing portfolio.

Finally, we look at a type of self-financing portfolio called an arbitrage strategy. A portfolio xn, yn is called an arbitrage if we have

V0 = 0,  P(VN ≥ 0) = 1,  P(VN > 0) > 0

for the value process of the portfolio. The idea formalized in an arbitrage portfolio is that with an initial wealth of 0 we get a non-negative portfolio value at time N with probability one, and a strictly positive value with positive probability, i.e. you cannot lose and may make money on your strategy. We say that a model is arbitrage free if the model permits no arbitrage portfolios.

c) Show that if Sn/Bn is a martingale, then no self-financing portfolio is an arbitrage.

Let Qn be a square integrable martingale with respect to the filtration Fn such that Qn > 0 a.s. and Q0 = 1 a.s.

d) Show that even if Sn/Bn is not a martingale with respect to the filtration Fn, finding a process Qn as defined above such that SnQn/Bn is a martingale will give that VnQn/Bn is a martingale with respect to the filtration Fn, and furthermore, that there are no arbitrage strategies.

Even though the multiplication by the positive martingale Qn might seem unimportant, we will later in the course see that this is in fact a very special operation which gives us the ability to change measure. In financial applications this is important, since portfolio pricing theory says that a portfolio should be priced under a risk-neutral measure, a measure under which every portfolio divided by the bank process Bn is a martingale. The reason for this is that the theory is based on a no-arbitrage assumption, which holds if Sn/Bn or SnQn/Bn is a martingale, as proven in this exercise. So the existence of Qn guarantees that the model is arbitrage free, and using a change of measure closely related to Qn we may price any portfolio Vn consisting of Sn and Bn in a consistent way.

Proof. a) Use Definition 5.2 to conclude that Sn/Bn is an Fn-martingale if the process is integrable, measurable and has the martingale property, i.e. that E[Sn+1/Bn+1 | Fn] = Sn/Bn. Since Sn/Bn ≥ 0 for every n, Bn is deterministic and the Rn's are IID, we get

E[|Sn/Bn|] = E[Sn/Bn] = s Π_{i=1}^n E[1 + Ri] / Π_{i=1}^n (1 + r) = s Π_{i=1}^n (E[1 + Ri]/(1 + r)),   (2)

hence Sn/Bn is integrable if Rn is. Since Fn = σ(R1, . . . , Rn) and the product is a continuous mapping, Sn/Bn is Fn-measurable. To check the martingale property, just add a conditioning as in (2):

E[Sn+1/Bn+1 | Fn] = Sn E[1 + Rn+1 | Fn] / (Bn(1 + r)) = (Sn/Bn) (1 + E[Rn+1 | Fn])/(1 + r).

To get the martingale property E[Sn+1/Bn+1 | Fn] = Sn/Bn we must have

(1 + E[Rn+1 | Fn])/(1 + r) = 1,

or equivalently E[Rn+1 | Fn] = r. Hence Sn/Bn is an Fn-martingale if E[Rn+1 | Fn] = r (which, since Rn+1 is independent of Fn, amounts to E[Rn+1] = r).

b) Since the definition of a self-financing portfolio is that

xn−1Sn + yn−1Bn = xnSn + ynBn,  n = 1, . . . , N,

we get, by the definition of Vn,

E[Vn+1/Bn+1 | Fn] = E[(xn+1Sn+1 + yn+1Bn+1)/Bn+1 | Fn] = E[(xnSn+1 + ynBn+1)/Bn+1 | Fn] = xn E[Sn+1/Bn+1 | Fn] + yn,

since xn, yn are Fn-measurable. Under the assumption that Sn/Bn is an Fn-martingale we get

E[Vn+1/Bn+1 | Fn] = xn Sn/Bn + yn = (xnSn + ynBn)/Bn = Vn/Bn,

so Vn/Bn is an Fn-martingale if Sn/Bn is.

c) From b) we have that for any self-financing portfolio Vn = xnSn + ynBn, Vn/Bn is a martingale if Sn/Bn is. Suppose Vn is an arbitrage, so that V0 = x0S0 + y0B0 = 0. Let Sn/Bn be a martingale; then by Proposition 5.4 a) we have

E[VN/BN] = V0/B0 = V0 = 0.

Assume that P(VN ≥ 0) = 1 and P(VN > 0) > 0. Since BN < ∞, we get

E[VN/BN] = E[(VN/BN) I{VN = 0}] + E[(VN/BN) I{VN > 0}] > 0,

where I{·} is the indicator function (the first term is 0 and the second is strictly positive). This is a contradiction to E[VN/BN] = 0; hence there are no arbitrage strategies.

d) Following the same lines as in b), we get that if SnQn/Bn is a martingale with respect to the filtration Fn and Vn is self-financing,

E[Vn+1Qn+1/Bn+1 | Fn] = E[(xnSn+1Qn+1 + ynBn+1Qn+1)/Bn+1 | Fn] = (xnSnQn + ynBnQn)/Bn = VnQn/Bn,

so VnQn/Bn is an Fn-martingale if SnQn/Bn is (here we also use that Qn is a martingale). Following the same lines as the proof of c),

E[VNQN/BN] = V0Q0/B0 = V0 = 0.

Assume that P(VN ≥ 0) = 1 and P(VN > 0) > 0. Since QN > 0 and BN < ∞, we get

E[VNQN/BN] = E[(VNQN/BN) I{VN = 0}] + E[(VNQN/BN) I{VN > 0}] > 0,

where I{·} is the indicator function. This is a contradiction to E[VNQN/BN] = 0; hence there are no arbitrage strategies.
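A Monte Carlo sketch of part a) (added, not part of the original solution): when E[Rn+1] = r, the discounted stock price Sn/Bn should have constant expectation S0. Here Rn takes two values u and d with probabilities chosen so that E[Rn] = r; all numbers are arbitrary choices.

```python
import random

# Sketch: empirical check that E[S_n/B_n] = S_0 when E[R_n] = r.
random.seed(7)
s0, r, u, d, n_steps, runs = 100.0, 0.02, 0.10, -0.05, 10, 200_000
p_up = (r - d) / (u - d)   # makes E[R_n] = p_up*u + (1 - p_up)*d = r

total = 0.0
for _ in range(runs):
    s = s0
    for _ in range(n_steps):
        s *= 1 + (u if random.random() < p_up else d)
    total += s / (1 + r) ** n_steps   # discounted terminal stock price

print(abs(total / runs - s0) < 0.5)
```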

9.

A coin is tossed $N$ times, where the number $N$ is known in advance. One unit invested in a coin toss gives a net profit of $1$ unit with probability $p \in (1/2, 1]$ and a net profit of $-1$ with probability $1-p$. If we let $X_n$, $n = 1,2,\dots,N$, be the net profit per unit invested in the $n$th coin toss, then

$$P(X_n = 1) = p \quad\text{and}\quad P(X_n = -1) = 1-p,$$

and the $X_n$'s are independent of each other. Let $\mathcal{F}_n = \sigma(X_1,\dots,X_n)$ and let $S_n$, $n = 1,2,\dots,N$, be the wealth of the investor at time $n$. Assume further that the initial wealth $S_0$ is a given constant. Any non-negative amount $C_n$ can be invested in coin toss $n+1$, $n = 0,1,\dots,N-1$, but we assume that borrowing money is not allowed, so $C_n \in [0, S_n]$. Thus we have

$$S_{n+1} = S_n + C_nX_{n+1}, \quad n = 0,1,\dots,N-1, \quad C_n \in [0,S_n].$$

Finally assume that the objective of the investor is to maximize the expected rate of return $E[(1/N)\log(S_N/S_0)]$.

a) Show that $S_n$ is a submartingale with respect to the filtration $\mathcal{F}_n$.

b) Show that, whatever strategy $C_n$ the investor uses in the investment game, $L_n = \log(S_n) - n\alpha$, where $\alpha = p\log(p) + (1-p)\log(1-p) + \log(2)$, is a supermartingale with respect to the filtration $\mathcal{F}_n$.

Hint: At some point you need to study the function

$$g(x) = p\log(1+x) + (1-p)\log(1-x) \quad\text{for } x \in [0,1] \text{ and } p \in (1/2,1).$$

c) Show that the fact that $\log(S_n) - n\alpha$ is a supermartingale implies that

$$E[\log(S_N/S_0)] \leq N\alpha.$$

d) Show that if Cn = Sn(2p− 1), Ln is an F-martingale.

Proof. (For definition of submartingale and supermartingale see the text following Definition 5.2.) a) To show that $S_n$ is a submartingale w.r.t. $\mathcal{F}_n$ we want to show that $E[S_{n+1}|\mathcal{F}_n] \geq S_n$:

$$E[S_{n+1}|\mathcal{F}_n] = E[S_n + C_nX_{n+1}|\mathcal{F}_n] = \{C_n \text{ and } S_n \text{ are } \mathcal{F}_n\text{-measurable}\} = S_n + C_nE[X_{n+1}|\mathcal{F}_n]$$
$$= \{X_{n+1} \text{ independent of } \mathcal{F}_n\} = S_n + C_nE[X_{n+1}] = S_n + \underbrace{C_n}_{\geq 0}\underbrace{(1\cdot p - 1\cdot(1-p))}_{>0} \geq S_n,$$

hence $S_n$ is a submartingale w.r.t. $\mathcal{F}_n$.

b) We now want to show that $E[L_{n+1}|\mathcal{F}_n] \leq L_n$:

$$E[L_{n+1}|\mathcal{F}_n] = E[\log(S_{n+1}) - (n+1)\alpha|\mathcal{F}_n] = E[\log(S_n + C_nX_{n+1})|\mathcal{F}_n] - (n+1)\alpha$$
$$= E[\log(S_n(1 + C_nX_{n+1}/S_n))|\mathcal{F}_n] - (n+1)\alpha = E[\log(S_n) + \log(1 + C_nX_{n+1}/S_n)|\mathcal{F}_n] - (n+1)\alpha$$
$$= \underbrace{\log(S_n) - n\alpha}_{=L_n} + E[\log(1 + C_nX_{n+1}/S_n)|\mathcal{F}_n] - \alpha$$
$$= L_n + \underbrace{p\log(1 + C_n/S_n) + (1-p)\log(1 - C_n/S_n)}_{=g(C_n/S_n)} - \alpha = L_n + g(C_n/S_n) - \alpha.$$

Since $g''(x) = -p/(1+x)^2 - (1-p)/(1-x)^2 < 0$ for $x \in [0,1)$, $g$ is concave in that region, and the maximum is attained at $x^* = 2p-1$, since $g'(x) = p/(1+x) - (1-p)/(1-x) = 0$ there, so $g(x) \leq g(x^*)$ for all $x \in [0,1]$. Since $C_n/S_n \in [0,1]$,

$$g(C_n/S_n) \leq g(x^*) = g(2p-1) = p\log(p) + (1-p)\log(1-p) + \log 2 = \alpha,$$

hence

$$E[L_{n+1}|\mathcal{F}_n] = L_n + g(C_n/S_n) - \alpha \leq L_n + \alpha - \alpha = L_n,$$

so $L_n$ is a supermartingale w.r.t. $\mathcal{F}_n$.

c) We have just shown that $L_n = \log(S_n) - n\alpha$ is a supermartingale w.r.t. $\mathcal{F}_n$. Because of this we also have $E[L_N] \leq L_0$, so

$$E[\log(S_N) - N\alpha] \leq \log(S_0) - 0\cdot\alpha \iff E[\log(S_N/S_0)] \leq N\alpha.$$

d) For $C_n = S_n(2p-1)$ we get

$$E[L_{n+1}|\mathcal{F}_n] = L_n + g(C_n/S_n) - \alpha = L_n + g(2p-1) - \alpha = L_n,$$

hence $L_n$ is an $\mathcal{F}_n$-martingale under the strategy $C_n = S_n(2p-1)$.
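As a numerical illustration (not part of the original solution), the growth rate $\alpha$ can be recovered by simulating the betting game under the strategy $C_n = S_n(2p-1)$; the parameter values below are arbitrary:

```python
import math
import random

def kelly_growth(p, n_tosses, n_paths, seed=0):
    """Estimate E[(1/N) log(S_N / S_0)] under the strategy C_n = S_n (2p - 1)."""
    rng = random.Random(seed)
    f = 2 * p - 1                     # fraction of wealth bet on each toss
    total = 0.0
    for _ in range(n_paths):
        log_wealth = 0.0              # log(S_n / S_0)
        for _ in range(n_tosses):
            x = 1 if rng.random() < p else -1
            log_wealth += math.log(1 + f * x)   # S_{n+1} = S_n (1 + f X_{n+1})
        total += log_wealth / n_tosses
    return total / n_paths

p = 0.75
alpha = p * math.log(p) + (1 - p) * math.log(1 - p) + math.log(2)
est = kelly_growth(p, 200, 2000)
print(round(alpha, 4), round(est, 4))
```

The empirical growth rate should match $\alpha = p\log p + (1-p)\log(1-p) + \log 2$ up to Monte Carlo noise.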

10.

Assume that $X_n$, $n = 0,1,2,\dots$, is the price of a stock at time $n$, and assume that $X_n$ is a supermartingale with respect to the filtration $\mathcal{F}_n$. This means that if we buy one unit of stock at time $n$, paying $X_n$, the expected price of the stock tomorrow (represented by the time $n+1$) given the information $\mathcal{F}_n$ is lower than today's price; in other words, we expect the price to go down. Investing in the stock does not seem to be a good idea, but is it possible to find a strategy that performs better? The answer is no, and the objective of this exercise is to show that. Let $C_n$ be a process adapted to $\mathcal{F}_n$ with $0 \leq C_n$, $n = 0,1,2,\dots$, representing our investment strategy. We know that the gain of our trading after $n$ days is given by $I_X(C)_n$, the stochastic integral of $C$ with respect to $X$. Now, show that for any supermartingale $X_n$ and any positive, adapted and bounded process $C_n$,

$$E[I_X(C)_{n+1}|\mathcal{F}_n] \leq I_X(C)_n,$$

i.e. that $I_X(C)_n$ is also a supermartingale.

Proof. (For definition of the stochastic integral $I_X(C)$ see Definition 9.1.) We may write the stochastic integral as $I_X(C)_n = \sum_{i=0}^{n-1} C_i(X_{i+1} - X_i)$, so taking the conditional expectation of the stochastic integral we get

$$E[I_X(C)_{n+1}|\mathcal{F}_n] = E\Big[\sum_{i=0}^{n} C_i(X_{i+1}-X_i)\Big|\mathcal{F}_n\Big] = E\Big[\sum_{i=0}^{n-1} C_i(X_{i+1}-X_i) + C_n(X_{n+1}-X_n)\Big|\mathcal{F}_n\Big]$$
$$= E[I_X(C)_n + C_n(X_{n+1}-X_n)|\mathcal{F}_n] = \{I_X(C)_n \text{ is } \mathcal{F}_n\text{-measurable}\} = I_X(C)_n + E[C_n(X_{n+1}-X_n)|\mathcal{F}_n]$$
$$= \{C_n \text{ and } X_n \text{ are } \mathcal{F}_n\text{-measurable}\} = I_X(C)_n + C_n(E[X_{n+1}|\mathcal{F}_n] - X_n)$$
$$\leq \{X_n \text{ supermartingale and } C_n \geq 0\} \leq I_X(C)_n + C_n(X_n - X_n) = I_X(C)_n.$$

Hence $I_X(C)_n$ is a supermartingale with respect to $\mathcal{F}_n$ if $X_n$ is.
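A small simulation illustrates the result (this is an illustrative sketch, not part of the text: a drifted random walk stands in for the supermartingale $X$, and an arbitrary positive adapted rule stands in for $C$); the mean trading gain drifts downwards:

```python
import random

# X is a random walk with negative drift (a supermartingale); C is a
# positive adapted strategy.  The mean of I_X(C)_n should be non-increasing.
rng = random.Random(13)
n_steps, n_paths = 30, 50_000
means = [0.0] * (n_steps + 1)
for _ in range(n_paths):
    x, gain = 0.0, 0.0
    for i in range(n_steps):
        c = 1.0 if x > 0 else 0.5          # adapted: uses the past only
        step = rng.gauss(-0.05, 1.0)       # supermartingale increment
        gain += c * step                   # increment C_i (X_{i+1} - X_i)
        x += step
        means[i + 1] += gain
means = [m / n_paths for m in means]
print(round(means[10], 3), round(means[30], 3))
```

Whatever adapted $C \geq 0$ is chosen, the averaged gain keeps decreasing, as the supermartingale property predicts.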

Series 15: discrete Brownian motion

1.

Let $B_n$, $n = 0,1,2,\dots$, be a discrete Brownian motion. Show that

$$\frac{B_n}{\langle B\rangle_n} \xrightarrow{P} 0 \quad \text{as } n\to\infty,$$

that is, for every $\varepsilon > 0$,

$$P\Big(\Big|\frac{B_n}{\langle B\rangle_n}\Big| > \varepsilon\Big) \to 0 \quad \text{as } n\to\infty.$$

Proof. Recall that for a square integrable random variable $X$, Chebyshev's inequality is

$$P(|X| > \varepsilon) \leq \frac{E[X^2]}{\varepsilon^2}.$$

Since $\langle B\rangle_n = n$ (which is given in the text on page 64 if needed), we get

$$P\Big(\Big|\frac{B_n}{\langle B\rangle_n}\Big| > \varepsilon\Big) = P\Big(\Big|\frac{B_n}{n}\Big| > \varepsilon\Big) = P(|B_n| > n\varepsilon) \leq \frac{E[B_n^2]}{(n\varepsilon)^2} = \frac{n}{(n\varepsilon)^2} = \frac{1}{\varepsilon^2}\frac{1}{n} \to 0 \quad \text{as } n\to\infty.$$

Hence

$$\frac{B_n}{\langle B\rangle_n} \xrightarrow{P} 0 \quad \text{as } n\to\infty.$$
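The convergence in probability can also be illustrated numerically (an illustrative check, not part of the proof): with $\langle B\rangle_n = n$, the exceedance probability $P(|B_n/n| > \varepsilon)$ should shrink as $n$ grows.

```python
import random

def prob_exceed(n, eps, n_paths, seed=1):
    """Estimate P(|B_n / n| > eps), B_n a sum of n independent N(0,1) steps."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_paths):
        b = sum(rng.gauss(0.0, 1.0) for _ in range(n))
        if abs(b / n) > eps:
            count += 1
    return count / n_paths

eps = 0.5
p10 = prob_exceed(10, eps, 4000)
p100 = prob_exceed(100, eps, 4000)
print(p10, p100)
```

The estimate for $n=100$ is essentially zero, consistent with the $1/n$ bound above.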

2.

Assume the value of the bond's rate of return is $r = e^{\frac{1}{2}\sigma^2} - 1$ for some constant $\sigma$. What should be the distribution of the random variable $(1+R_n)$ in order to model $\bar{S}_n \stackrel{\Delta}{=} S_n/B_n$ as a geometric Brownian motion, i.e.

$$\bar{S}_n = se^{\sigma W_n - \frac{1}{2}n\sigma^2}, \quad \bar{S}_0 = s,$$

where $W_n$ is a discrete Brownian motion?

Proof. From the definition of $\bar{S}_n$ we get

$$\bar{S}_n = \frac{S_n}{B_n} = \frac{(1+R_n)S_{n-1}}{(1+r)B_{n-1}} = \frac{(1+R_n)S_{n-1}}{e^{\frac{1}{2}\sigma^2}B_{n-1}}.$$

We get

$$\frac{\bar{S}_{n+1}}{\bar{S}_n} = \frac{1+R_{n+1}}{e^{\frac{1}{2}\sigma^2}},$$

so letting $\bar{S}_n$ be a geometric Brownian motion, we must have

$$\frac{\bar{S}_{n+1}}{\bar{S}_n} = \frac{se^{\sigma W_{n+1} - \frac{1}{2}(n+1)\sigma^2}}{se^{\sigma W_n - \frac{1}{2}n\sigma^2}} = e^{\sigma(W_{n+1}-W_n) - \frac{1}{2}\sigma^2}.$$

Combining the two results we get

$$e^{\sigma(W_{n+1}-W_n) - \frac{1}{2}\sigma^2} = \frac{1+R_{n+1}}{e^{\frac{1}{2}\sigma^2}},$$

which holds if

$$1 + R_{n+1} = e^{\sigma(W_{n+1}-W_n)}.$$
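A quick Monte Carlo check (illustrative, not part of the solution) that this choice of return is consistent with $r = e^{\sigma^2/2}-1$: since $W_{n+1}-W_n \sim N(0,1)$, we should have $E[1+R_{n+1}] = E[e^{\sigma Z}] = e^{\sigma^2/2} = 1+r$.

```python
import math
import random

# Sample 1 + R = exp(sigma * Z) with Z ~ N(0, 1) and compare the mean
# with e^{sigma^2 / 2} = 1 + r.
rng = random.Random(2)
sigma, n_samples = 0.3, 200_000
mean_ret = sum(math.exp(sigma * rng.gauss(0.0, 1.0))
               for _ in range(n_samples)) / n_samples
target = math.exp(sigma**2 / 2)
print(round(mean_ret, 4), round(target, 4))
```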

Series 16: martingales in continuous time

1.

Let $M_t$ and $N_t$ be square integrable $\mathcal{F}_t$-martingales.

1. Let $\alpha$ and $\beta$ be real numbers. Verify that, for every $t \geq 0$,

$$\langle\alpha M + \beta N\rangle_t = \alpha^2\langle M\rangle_t + 2\alpha\beta\langle M,N\rangle_t + \beta^2\langle N\rangle_t.$$

2. Derive the Cauchy-Schwarz inequality

$$|\langle M,N\rangle_t| \leq \sqrt{\langle M\rangle_t}\sqrt{\langle N\rangle_t}, \quad t \geq 0.$$

Proof. (For definition of square integrability see Definition 11.3; for definition of quadratic variation and covariation see pages 74-75.) 1) We use the definition of the covariation process to get

$$\langle\alpha M+\beta N\rangle_t = \lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(\alpha M_{i+1}+\beta N_{i+1}-\alpha M_i-\beta N_i)^2 = \lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}\big((\alpha M_{i+1}-\alpha M_i)+(\beta N_{i+1}-\beta N_i)\big)^2$$
$$= \lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}\big(\alpha^2(M_{i+1}-M_i)^2 + 2\alpha\beta(M_{i+1}-M_i)(N_{i+1}-N_i) + \beta^2(N_{i+1}-N_i)^2\big)$$
$$= \alpha^2\lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(M_{i+1}-M_i)^2 + 2\alpha\beta\lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(M_{i+1}-M_i)(N_{i+1}-N_i) + \beta^2\lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(N_{i+1}-N_i)^2$$
$$= \alpha^2\langle M\rangle_t + 2\alpha\beta\langle M,N\rangle_t + \beta^2\langle N\rangle_t,$$

which is what we set out to prove.

2) Recall the Cauchy-Schwarz inequality for $n$-dimensional Euclidean space:

$$\sum_{i=1}^n a_ib_i \leq \sqrt{\sum_{i=1}^n a_i^2}\sqrt{\sum_{i=1}^n b_i^2}.$$

We have

$$\langle M,N\rangle_t = \lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(M_{i+1}-M_i)(N_{i+1}-N_i) \leq \lim_{\|\Pi\|\to 0}\sqrt{\sum_{i=0}^{n-1}(M_{i+1}-M_i)^2}\sqrt{\sum_{i=0}^{n-1}(N_{i+1}-N_i)^2}$$


and since $\sqrt{\cdot}$ is continuous the limit may be passed inside the root sign:

$$\langle M,N\rangle_t \leq \sqrt{\lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(M_{i+1}-M_i)^2\,\lim_{\|\Pi\|\to 0}\sum_{i=0}^{n-1}(N_{i+1}-N_i)^2} = \sqrt{\langle M\rangle_t\langle N\rangle_t},$$

and the proof is done.

2.

Let $M_t$ and $N_t$ be square integrable $\mathcal{F}_t$-martingales. Check the following parallelogram equality:

$$\langle M,N\rangle_t = \frac{1}{4}\big(\langle M+N\rangle_t - \langle M-N\rangle_t\big), \quad t \geq 0.$$

Proof. Using the result from part 1) of problem 11.1 we get

$$\langle M+N\rangle_t - \langle M-N\rangle_t = \langle M\rangle_t + 2\langle M,N\rangle_t + \langle N\rangle_t - \langle M\rangle_t + 2\langle M,N\rangle_t - \langle N\rangle_t = 4\langle M,N\rangle_t,$$

hence $\langle M,N\rangle_t = \frac{1}{4}(\langle M+N\rangle_t - \langle M-N\rangle_t)$.
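Both identities can be sanity-checked on a fixed partition (an illustrative numerical check, not part of the solutions): before the limit, the (co)variations are finite sums of increment products, for which the polarization identity holds exactly and Cauchy-Schwarz is the Euclidean inequality.

```python
import random

# Two random-walk paths stand in for sampled paths of M and N.
rng = random.Random(3)
M, N = [0.0], [0.0]
for _ in range(50):
    M.append(M[-1] + rng.gauss(0.0, 1.0))
    N.append(N[-1] + rng.gauss(0.0, 1.0))

def qcov(A, B):
    """Sum of products of increments over the partition."""
    return sum((A[i+1] - A[i]) * (B[i+1] - B[i]) for i in range(len(A) - 1))

plus = [m + n for m, n in zip(M, N)]
minus = [m - n for m, n in zip(M, N)]
lhs = qcov(M, N)
rhs = 0.25 * (qcov(plus, plus) - qcov(minus, minus))
cs_ok = abs(lhs) <= (qcov(M, M) * qcov(N, N)) ** 0.5
print(round(lhs, 6), round(rhs, 6), cs_ok)
```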


Series 17: martingales in continuous time

1.

(The value of a European Call Option.) In the Black-Scholes model, the price $S_t$ of a risky asset (i.e. an asset that has no deterministic payoff) at time $t$ is given by the formula

$$S_t = se^{(r-\frac{1}{2}\sigma^2)t + \sigma B_t},$$

where $B_t$ is a Brownian motion and $s$ is a positive constant representing the initial value of the asset. The value of a European Call option with maturity time $T$ and strike price $K$ is $(S_T - K)^+$ at time $T$. If $T > t$, compute explicitly

$$E[(S_T - K)^+|\mathcal{F}_t].$$

Proof. Because of the Markov property of the Brownian motion, any expectation of a function $h$ of the Brownian motion evaluated at time $T$, $h(B_T)$, conditioned on a time $t < T$ depends only on the value $B_t$ and the time to maturity $T-t$. By Proposition 12.4 we get

$$E[(S_T-K)^+|\mathcal{F}_t] = E[(S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma(B_T-B_t)} - K)^+|\mathcal{F}_t] = \{\text{Proposition 12.4}\} = E^{B_t}[(S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma(B_T-B_t)} - K)^+]$$

and by time homogeneity we may write $B_T - B_t = \sqrt{T-t}\,X$ where $X \sim N(0,1)$, so

$$E[(S_T-K)^+|\mathcal{F}_t] = E^{B_t}[(S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma\sqrt{T-t}\,X} - K)^+] = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}(S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma\sqrt{T-t}\,x} - K)^+e^{-\frac{x^2}{2}}dx.$$

The integrand $(\cdot)^+$ is non-zero only when $S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma\sqrt{T-t}\,x} \geq K$, which may be rewritten with $x$ separated as

$$x \geq \frac{\log\big(\frac{K}{S_t}\big) - (r-\frac{\sigma^2}{2})(T-t)}{\sigma\sqrt{T-t}}.$$

Calling the right hand side of the inequality $d_1$, the integral may be written as

$$E[(S_T-K)^+|\mathcal{F}_t] = \frac{1}{\sqrt{2\pi}}\int_{d_1}^\infty (S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma\sqrt{T-t}\,x} - K)e^{-\frac{x^2}{2}}dx$$
$$= \frac{1}{\sqrt{2\pi}}\int_{d_1}^\infty S_te^{(r-\frac{\sigma^2}{2})(T-t)+\sigma\sqrt{T-t}\,x}e^{-\frac{x^2}{2}}dx - K\underbrace{\frac{1}{\sqrt{2\pi}}\int_{d_1}^\infty e^{-\frac{x^2}{2}}dx}_{P(X\geq d_1)}$$
$$= \frac{1}{\sqrt{2\pi}}S_te^{r(T-t)}\int_{d_1}^\infty e^{-\frac{\sigma^2}{2}(T-t)+\sigma\sqrt{T-t}\,x-\frac{x^2}{2}}dx - KP(X\geq d_1)$$
$$= \frac{1}{\sqrt{2\pi}}S_te^{r(T-t)}\int_{d_1}^\infty e^{-\frac{1}{2}(x-\sigma\sqrt{T-t})^2}dx - KP(X\geq d_1)$$
$$= \{y = x-\sigma\sqrt{T-t},\ dy = dx\} = S_te^{r(T-t)}\underbrace{\frac{1}{\sqrt{2\pi}}\int_{d_1-\sigma\sqrt{T-t}}^\infty e^{-\frac{y^2}{2}}dy}_{=P(X\geq d_1-\sigma\sqrt{T-t})} - KP(X\geq d_1)$$
$$= S_te^{r(T-t)}P(X\geq d_1-\sigma\sqrt{T-t}) - KP(X\geq d_1).$$

This is the explicit form of the Call Option price.
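The closed form can be checked against a direct Monte Carlo evaluation of $E[(S_T-K)^+|\mathcal{F}_t]$ (an illustrative sketch, not part of the solution; the parameter values are arbitrary):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def call_value(S, K, r, sigma, tau):
    """The conditional expectation E[(S_T - K)^+ | F_t] derived above,
    with tau = T - t (note: it is not discounted by e^{-r tau})."""
    d1 = (math.log(K / S) - (r - sigma**2 / 2) * tau) / (sigma * math.sqrt(tau))
    return (S * math.exp(r * tau) * (1 - norm_cdf(d1 - sigma * math.sqrt(tau)))
            - K * (1 - norm_cdf(d1)))

rng = random.Random(4)
S, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 1.0
n = 200_000
mc = sum(max(S * math.exp((r - sigma**2 / 2) * tau
                          + sigma * math.sqrt(tau) * rng.gauss(0.0, 1.0)) - K, 0.0)
         for _ in range(n)) / n
cv = call_value(S, K, r, sigma, tau)
print(round(cv, 3), round(mc, 3))
```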

2.

Let $B_t$ be a one dimensional Brownian motion and let $\mathcal{F}_t$ be the filtration generated by $B_t$. Show that

$$E[B_t^3|\mathcal{F}_s] = B_s^3 + 3(t-s)B_s.$$

Proof. We start by separating the process into a part that is measurable with respect to $\mathcal{F}_s$ and one that is independent of $\mathcal{F}_s$, namely

$$E[B_t^3|\mathcal{F}_s] = E[(B_t-B_s+B_s)^3|\mathcal{F}_s] = E[(B_t-B_s)^3 + 3(B_t-B_s)^2B_s + 3(B_t-B_s)B_s^2 + B_s^3|\mathcal{F}_s]$$
$$= E[(B_t-B_s)^3] + 3B_sE[(B_t-B_s)^2] + 3B_s^2E[B_t-B_s] + B_s^3.$$

$B_t - B_s \sim N(0, t-s)$, and since the normal distribution is symmetric all odd moments are zero, so

$$E[B_t^3|\mathcal{F}_s] = 0 + 3B_s(t-s) + 0 + B_s^3 = B_s^3 + 3(t-s)B_s.$$
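An illustrative Monte Carlo check (not from the text): fix $B_s$ and sample the independent increment $B_t - B_s \sim N(0, t-s)$.

```python
import random

rng = random.Random(5)
s, t, Bs = 1.0, 3.0, 0.7
n = 400_000
acc = 0.0
for _ in range(n):
    inc = rng.gauss(0.0, (t - s) ** 0.5)   # B_t - B_s, independent of F_s
    acc += (Bs + inc) ** 3
est = acc / n
exact = Bs**3 + 3 * (t - s) * Bs           # B_s^3 + 3 (t - s) B_s
print(round(est, 3), round(exact, 3))
```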

3.

Show that the following processes are martingales with respect to $\mathcal{F}_t$, the filtration generated by the one dimensional Brownian motion $B_t$:

1. $B_t^3 - 3tB_t$

2. $B_t^4 - 6tB_t^2 + 3t^2$.

Proof. 1) From Exercise 12.2 we have that

$$E[B_t^3|\mathcal{F}_s] = B_s^3 + 3(t-s)B_s.$$

Using this together with the fact that $B_t$ is an $\mathcal{F}_t$-martingale we get

$$E[B_t^3 - 3tB_t|\mathcal{F}_s] = B_s^3 + 3(t-s)B_s - 3tB_s = B_s^3 - 3sB_s,$$

which proves the martingale property of $B_t^3 - 3tB_t$ with respect to $\mathcal{F}_t$.

2) We start by computing $E[B_t^4|\mathcal{F}_s]$, and as in Exercise 12.2 we do this by separating $B_t$ into a part that is measurable with respect to $\mathcal{F}_s$ and a part that is independent of $\mathcal{F}_s$:

$$E[B_t^4|\mathcal{F}_s] = E[(B_t-B_s+B_s)^4|\mathcal{F}_s] = E[(B_t-B_s)^4 + 4(B_t-B_s)^3B_s + 6(B_t-B_s)^2B_s^2 + 4(B_t-B_s)B_s^3 + B_s^4|\mathcal{F}_s]$$
$$= E[(B_t-B_s)^4] + 4B_sE[(B_t-B_s)^3] + 6B_s^2E[(B_t-B_s)^2] + 4B_s^3E[B_t-B_s] + B_s^4.$$

Recall that $B_t-B_s \sim N(0, t-s)$, so we may write $B_t-B_s = \sqrt{t-s}\,X$ where $X \sim N(0,1)$, and rewrite our expression as

$$E[B_t^4|\mathcal{F}_s] = (t-s)^2E[X^4] + 4B_s(t-s)^{3/2}E[X^3] + 6B_s^2(t-s)E[X^2] + 4B_s^3\sqrt{t-s}\,E[X] + B_s^4,$$

and since all odd moments of the standard normal distribution are zero and the second moment is one we have

$$E[B_t^4|\mathcal{F}_s] = (t-s)^2E[X^4] + 6B_s^2(t-s) + B_s^4.$$

To evaluate $E[X^4]$ we use the moment generating function of the standard normal distribution,

$$\Psi_X(u) = E[e^{uX}] = e^{u^2/2},$$

and use the result that the $n$'th derivative of $\Psi_X(u)$ evaluated at $u=0$ is the $n$'th moment of $X$. The fourth derivative of $\Psi_X(u)$ is

$$\Psi_X^{(4)}(u) = (3 + 6u^2 + u^4)\Psi_X(u),$$

and since $\Psi_X(0) = 1$ we get $\Psi_X^{(4)}(0) = E[X^4] = 3$. From this we get

$$E[B_t^4|\mathcal{F}_s] = 3(t-s)^2 + 6B_s^2(t-s) + B_s^4.$$

We may now derive the martingale property of $B_t^4 - 6tB_t^2 + 3t^2$, using the fact that $B_t^2 - t$ is an $\mathcal{F}_t$-martingale:

$$E[B_t^4 - 6tB_t^2 + 3t^2|\mathcal{F}_s] = 3(t-s)^2 + 6B_s^2(t-s) + B_s^4 - 6tE[B_t^2 - t + t|\mathcal{F}_s] + 3t^2$$
$$= 3(t-s)^2 + 6B_s^2(t-s) + B_s^4 - 6t(B_s^2 - s + t) + 3t^2$$
$$= 3t^2 - 6ts + 3s^2 + 6B_s^2t - 6sB_s^2 + B_s^4 - 6tB_s^2 + 6ts - 6t^2 + 3t^2 = B_s^4 - 6sB_s^2 + 3s^2,$$

hence $B_t^4 - 6tB_t^2 + 3t^2$ is an $\mathcal{F}_t$-martingale.
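Since the expectation of a martingale is constant, $E[B_t^4 - 6tB_t^2 + 3t^2]$ must equal its time-0 value, which is 0; a quick simulation confirms this (illustrative, not part of the solution):

```python
import random

rng = random.Random(6)
t, n = 2.0, 500_000
acc = 0.0
for _ in range(n):
    b = rng.gauss(0.0, t ** 0.5)           # B_t ~ N(0, t)
    acc += b**4 - 6 * t * b**2 + 3 * t**2
m = acc / n
print(round(m, 3))
```

(Indeed $E[B_t^4] = 3t^2$, so $3t^2 - 6t^2 + 3t^2 = 0$.)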

4.

Let $\{t_i\}_{i=0}^\infty$ be an increasing sequence of scalars and define $t_i^*$ such that $t_i < t_i^* \leq t_{i+1}$. Furthermore, let

$$S_n = \sum_{i=0}^{n-1} B_{t_i^*}(B_{t_{i+1}} - B_{t_i}),$$

where $B_{t_i}$ is the discrete Brownian motion. Check that $S_k$, $0 \leq k \leq n$, is not a martingale with respect to the filtration generated by $B$.

Proof. We check the martingale property of $S_k$:

$$E[S_k|\mathcal{F}_{k-1}] = E\Big[\sum_{i=0}^{k-1} B_{t_i^*}(B_{t_{i+1}}-B_{t_i})\Big|\mathcal{F}_{k-1}\Big] = \{B_{t_i^*}(B_{t_{i+1}}-B_{t_i}) \text{ is } \mathcal{F}_{k-1}\text{-measurable for } i \leq k-2\}$$
$$= \underbrace{\sum_{i=0}^{k-2} B_{t_i^*}(B_{t_{i+1}}-B_{t_i})}_{=S_{k-1}} + E[B_{t_{k-1}^*}(B_{t_k}-B_{t_{k-1}})|\mathcal{F}_{k-1}]$$
$$= \{E[B_{t_{k-1}}(B_{t_k}-B_{t_{k-1}})|\mathcal{F}_{k-1}] = 0\} = S_{k-1} + E[B_{t_{k-1}^*}(B_{t_k}-B_{t_{k-1}})|\mathcal{F}_{k-1}] - E[B_{t_{k-1}}(B_{t_k}-B_{t_{k-1}})|\mathcal{F}_{k-1}]$$
$$= S_{k-1} + E[(B_{t_{k-1}^*}-B_{t_{k-1}})(B_{t_k}-B_{t_{k-1}})|\mathcal{F}_{k-1}] = S_{k-1} + (t_{k-1}^* - t_{k-1}) \neq S_{k-1}$$

for any $t_{k-1} < t_{k-1}^* \leq t_k$; hence $S_k$ is not a martingale with respect to the filtration generated by the Brownian motion $B$.
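The bias $t_{k-1}^* - t_{k-1}$ can be seen numerically (an illustrative check, not from the text), e.g. with a single interval and a midpoint evaluation:

```python
import random

# E[B_{t*}(B_{t1} - B_{t0})] = t* - t0; here t0 = 1, t* = 1.5, t1 = 2,
# so the expected bias is 0.5.
rng = random.Random(7)
t0, ts, t1, n = 1.0, 1.5, 2.0, 400_000
acc = 0.0
for _ in range(n):
    b0 = rng.gauss(0.0, t0 ** 0.5)
    bs = b0 + rng.gauss(0.0, (ts - t0) ** 0.5)
    b1 = bs + rng.gauss(0.0, (t1 - ts) ** 0.5)
    acc += bs * (b1 - b0)
m = acc / n
print(round(m, 3))
```

This is the familiar fact that evaluating the integrand strictly inside each interval (as in the Stratonovich-type sum above) destroys the martingale property of the Ito sum.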

5.

Let $B_t$ be a Brownian motion and let $X_t$ be the stochastic integral

$$X_t = \int_0^t e^{s-t}dB_s.$$

1. Determine the expectation $E[X_t]$ and the variance $V(X_t)$ of $X_t$.

2. Show that the random variable

$$W_t = \sqrt{2(t+1)}\,X_{\log(t+1)/2}$$

has distribution $W_t \sim N(0,t)$.

Proof. 1) By part (vi) of Proposition 13.11, which gives properties of the Ito integral, since the integrand $e^{s-t}$ of the stochastic integral is deterministic, the stochastic integral is normally distributed:

$$X_t \sim N\Big(0, \int_0^t (e^{s-t})^2ds\Big) = N\Big(0, \int_0^t e^{2(s-t)}ds\Big) = N\Big(0, \frac{1}{2}(1-e^{-2t})\Big).$$

Hence $X_t$ has the distribution $X_t \sim N\big(0, \frac{1}{2}(1-e^{-2t})\big)$.

2) From the first part of the exercise, we know that $X_t \sim N\big(0, \frac{1}{2}(1-e^{-2t})\big)$. For a normally distributed random variable $Y \sim N(0,\sigma^2)$ it holds that $cY \sim N(0, c^2\sigma^2)$, hence

$$W_t \sim N\Big(0, \big(\sqrt{2(t+1)}\big)^2\frac{1}{2}\big(1-e^{-2\log(t+1)/2}\big)\Big) = N\big(0, (t+1)(1-e^{-\log(t+1)})\big)$$
$$= N\Big(0, (t+1)\Big(1-\frac{1}{t+1}\Big)\Big) = N\Big(0, (t+1)\frac{t}{t+1}\Big) = N(0,t).$$

And the proof is done.
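Part 1) can be checked by discretizing the integral (an illustrative sketch with arbitrary parameters, not from the text): since the integrand is deterministic, the Riemann sum against Brownian increments is Gaussian, and its sample variance should approach $(1-e^{-2t})/2$.

```python
import math
import random
import statistics

rng = random.Random(8)
t, steps, n_paths = 1.0, 200, 10_000
dt = t / steps
# Weight e^{s - t} sqrt(dt) on each standard normal increment.
w = [math.exp(i * dt - t) * math.sqrt(dt) for i in range(steps)]
samples = [sum(wi * rng.gauss(0.0, 1.0) for wi in w) for _ in range(n_paths)]
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
target = (1 - math.exp(-2 * t)) / 2
print(round(mean, 4), round(var, 4), round(target, 4))
```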

6.

Let $B$ be a Brownian motion. Find $z \in \mathbb{R}$ and $\varphi(s,\omega) \in \mathcal{V}$ such that

$$F(\omega) = z + \int_0^T \varphi(s,\omega)dB_s$$

in the following cases:

1. $F(\omega) = B_T^3(\omega)$.

2. $F(\omega) = \int_0^T B_s^3ds$.

3. $F(\omega) = e^{T/2}\cosh(B_T(\omega)) = e^{T/2}\frac{1}{2}(e^{B_T(\omega)} + e^{-B_T(\omega)})$.


Proof. 1) We get

$$d(B_t^3) = 3B_t^2dB_t + \frac{1}{2}6B_tdt,$$

and since

$$d(tB_t) = tdB_t + B_tdt$$

we have $tB_t = \int_0^t sdB_s + \int_0^t B_sds$, which may be rewritten using $tB_t = \int_0^t tdB_s$ to get $\int_0^t B_sds = \int_0^t (t-s)dB_s$. We may now write $B_T^3$ as

$$B_T^3 = z + \int_0^T 3\big(B_t^2 + (T-t)\big)dB_t,$$

where $z = E[B_T^3] = 0$. Hence $\varphi(s,\omega) = 3(B_s(\omega)^2 + (T-s))$.

2)

$$d(TB_T^3) = T(3B_T^2dB_T + 3B_TdT) + B_T^3dT,$$

hence

$$TB_T^3 = -z + \int_0^T 3sB_s^2dB_s + \int_0^T 3sB_sds + \int_0^T B_s^3ds$$

for some $z \in \mathbb{R}$. Rewriting the expression gives

$$\int_0^T B_s^3ds = z + TB_T^3 - \int_0^T 3sB_s^2dB_s - \int_0^T 3sB_sds,$$

which by problem 1) gives

$$\int_0^T B_s^3ds = z + \int_0^T\big(3T(B_s^2 + (T-s)) - 3sB_s^2\big)dB_s - \int_0^T 3sB_sds.$$

We need to rewrite $\int_0^T 3sB_sds$ in a form that is with respect to $dB_s$ instead of $ds$. Study

$$d(T^2B_T) = 2TB_TdT + T^2dB_T,$$

hence $T^2B_T = \int_0^T 2sB_sds + \int_0^T s^2dB_s$, and by writing $T^2B_T = \int_0^T T^2dB_s$ we get

$$\int_0^T sB_sds = \frac{1}{2}\int_0^T(T^2-s^2)dB_s.$$

We may now write $\int_0^T B_s^3ds$ as

$$\int_0^T B_s^3ds = z + \int_0^T\Big(3T\big(B_s^2 + (T-s)\big) - 3sB_s^2 - \frac{3}{2}(T^2-s^2)\Big)dB_s,$$

where $z = E[\int_0^T B_s^3ds] = \int_0^T E[B_s^3]ds = 0$. Hence $\varphi(s,\omega) = 3T(B_s(\omega)^2 + (T-s)) - 3sB_s(\omega)^2 - \frac{3}{2}(T^2-s^2)$.

3) We notice that since

$$d(e^{B_t-t/2}) = e^{B_t-t/2}dB_t, \qquad d(e^{-B_t+t/2}) = -e^{-B_t+t/2}dB_t,$$

we may write

$$e^{T/2}\frac{1}{2}(e^{B_T(\omega)} + e^{-B_T(\omega)}) = e^{T/2}\frac{1}{2}(e^{T/2}e^{B_T(\omega)-T/2} + e^{-T/2}e^{-B_T(\omega)+T/2})$$
$$= z + e^{T/2}\frac{1}{2}\Big(e^{T/2}\int_0^T e^{B_s-s/2}dB_s - e^{-T/2}\int_0^T e^{-B_s+s/2}dB_s\Big)$$
$$= z + \frac{1}{2}\Big(e^T\int_0^T e^{B_s-s/2}dB_s - \int_0^T e^{-B_s+s/2}dB_s\Big) = z + \int_0^T \frac{e^Te^{B_s-s/2} - e^{-B_s+s/2}}{2}dB_s,$$

where $z$ is

$$z = E[F] = e^{T/2}\frac{1}{2}\big(E[e^{B_T}] + E[e^{-B_T}]\big) = e^{T/2}\frac{1}{2}\Big(e^{T/2}\underbrace{E[e^{B_T-T/2}]}_{=1} + e^{-T/2}\underbrace{E[e^{-B_T+T/2}]}_{=1}\Big) = e^{T/2}\frac{1}{2}\big(e^{T/2} + e^{-T/2}\big) = \frac{1}{2}(e^T+1).$$

7.

Let $X_t$ be a generalized geometric Brownian motion given by

$$dX_t = \alpha_tX_tdt + \beta_tX_tdB_t \tag{3}$$

where $\alpha_t$ and $\beta_t$ are bounded deterministic functions and $B$ is a Brownian motion.

1. Find an explicit expression for $X_t$ and compute $E[X_t]$.

2. Find $z \in \mathbb{R}$ and $\varphi(t,\omega) \in \mathcal{V}$ such that $X(T,\omega) = z + \int_0^T \varphi(s,\omega)dB_s(\omega)$.

Proof. 1) Let $X_t = e^{\int_0^t\alpha_sds}Y_t$; the differential of $X_t$ is

$$dX_t = \alpha_te^{\int_0^t\alpha_sds}Y_tdt + e^{\int_0^t\alpha_sds}dY_t = \alpha_tX_tdt + e^{\int_0^t\alpha_sds}dY_t.$$

For this expression to be equal to (3) we must have

$$e^{\int_0^t\alpha_sds}dY_t = \beta_t\underbrace{e^{\int_0^t\alpha_sds}Y_t}_{=X_t}dB_t,$$

which holds if $dY_t = \beta_tY_tdB_t$; hence $Y_t$ is an exponential martingale given by

$$Y_t = y_0e^{\int_0^t\beta_sdB_s - \frac{1}{2}\int_0^t\beta_s^2ds},$$

where $y_0 = 1$ since $X_0 = 1$. This gives the following expression for $X_t$:

$$X_t = e^{\int_0^t\beta_sdB_s + \int_0^t(\alpha_s - \frac{1}{2}\beta_s^2)ds}.$$

Using the martingale property of $Y_t$ gives

$$E[X_t] = E[e^{\int_0^t\alpha_sds}Y_t] = e^{\int_0^t\alpha_sds}\underbrace{E[Y_t]}_{=Y_0} = e^{\int_0^t\alpha_sds}.$$

2) In part 1) it was shown that $X_T = e^{\int_0^T\alpha_sds}Y_T$ where $Y_T$ is the exponential martingale with SDE $dY_t = \beta_tY_tdB_t$; hence $X_T$ may be written as

$$X_T = e^{\int_0^T\alpha_sds}Y_T = e^{\int_0^T\alpha_sds}\Big(1 + \int_0^T\beta_sY_sdB_s\Big) = \underbrace{e^{\int_0^T\alpha_sds}}_{=z} + \int_0^T e^{\int_0^T\alpha_udu}\beta_sY_sdB_s,$$

and $\varphi(s,\omega) = e^{\int_0^T\alpha_udu}\beta_sY_s(\omega)$. We need to show that $\varphi(s,\omega) \in \mathcal{V}$; the criteria are given in Definition 13.1. Parts 1 and 2 of Definition 13.1 are shown by noticing that $\varphi_s$ is the product of the process $e^{\int_0^T\alpha_udu}\beta_s$, which is deterministic and therefore universally measurable, and $Y_s$, which is the exponential martingale and therefore fulfills conditions 1) and 2). The product of these two processes also fulfills criteria 1) and 2). Condition 3) of Definition 13.1 is shown by using that $\beta_t$ is bounded, so that there is a constant $0 < K < \infty$ such that $|\beta_t| < K$ for every $t \in [0,T]$:

$$E\Big[\int_0^T\big(\beta_te^{\int_0^T\alpha_sds}Y_t\big)^2dt\Big] = e^{2\int_0^T\alpha_sds}E\Big[\int_0^T\beta_t^2Y_t^2dt\Big] \leq \{|\beta_t| < K\} \leq \underbrace{e^{2\int_0^T\alpha_sds}K^2}_{\stackrel{\Delta}{=}\Gamma}E\Big[\int_0^TY_t^2dt\Big] = \{\text{Fubini}\} = \Gamma\int_0^TE[Y_t^2]dt. \tag{4}$$

$Y_t^2 = e^{\int_0^t2\beta_sdB_s - \int_0^t\beta_s^2ds}$ can be written as the product of an exponential martingale and a deterministic function as $Y_t^2 = e^{\int_0^t2\beta_sdB_s - \frac{1}{2}\int_0^t(2\beta_s)^2ds}e^{\int_0^t\beta_s^2ds}$, so that (4) becomes

$$E\Big[\int_0^T\big(\beta_te^{\int_0^T\alpha_sds}Y_t\big)^2dt\Big] \leq \Gamma\int_0^TE\big[e^{\int_0^t2\beta_sdB_s - \frac{1}{2}\int_0^t(2\beta_s)^2ds}e^{\int_0^t\beta_s^2ds}\big]dt = \Gamma\int_0^Te^{\int_0^t\beta_s^2ds}\underbrace{E\big[e^{\int_0^t2\beta_sdB_s - \frac{1}{2}\int_0^t(2\beta_s)^2ds}\big]}_{=1}dt$$
$$= \Gamma\int_0^Te^{\int_0^t\beta_s^2ds}dt \leq \Gamma\int_0^Te^{K^2t}dt = \Gamma\frac{e^{K^2T}-1}{K^2} < \infty.$$

Hence $\varphi_s \in \mathcal{V}$.

8.

Let $f(t) = e^{t^2/2} - 1$ and let $B$ be a Brownian motion on the probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, t \geq 0, P)$. Show that there exists another Brownian motion $\tilde{B}$ such that

$$X_t = \int_0^{f(t)}\frac{1}{\sqrt{1+s}}dB_s = \int_0^t\sqrt{s}\,d\tilde{B}_s.$$

Proof. The quadratic variation of $X_t$ is

$$\langle X\rangle_t = \int_0^{f(t)}\Big(\frac{1}{\sqrt{1+s}}\Big)^2ds = \int_0^{e^{t^2/2}-1}\frac{1}{1+s}ds = \log\big(1 + (e^{t^2/2}-1)\big) = t^2/2 = \int_0^t s\,ds.$$

By Theorem 15.4 there exists an extension $(\bar{\Omega}, \bar{\mathcal{F}}, \bar{\mathcal{F}}_t, t \geq 0, \bar{P})$ of $(\Omega, \mathcal{F}, \mathcal{F}_t, t \geq 0, P)$ on which there is a Brownian motion $\tilde{B}$ such that

$$X_t = \int_0^t\sqrt{s}\,d\tilde{B}_s,$$

which solves the problem at hand.

9.

Let $X_t$ solve the SDE

$$dX_t = (\alpha X_t + \beta)dt + (\sigma X_t + \gamma)dB_t, \quad X_0 = 0,$$

where $\alpha, \beta, \sigma$ and $\gamma$ are constants and $B$ is a Brownian motion. Furthermore, let $S_t = e^{(\alpha-\sigma^2/2)t+\sigma B_t}$.

1. Derive the SDE satisfied by $S_t^{-1}$.

2. Show that $d(X_tS_t^{-1}) = (\beta - \sigma\gamma)S_t^{-1}dt + \gamma S_t^{-1}dB_t$.

3. Derive the explicit form of $X_t$.

Proof. 1) Let $f(t,x) = e^{-(\alpha-\sigma^2/2)t-\sigma x}$; then the SDE of $S_t^{-1} = f(t,B_t)$ is

$$dS_t^{-1} = \frac{\partial}{\partial t}f(t,B_t)dt + \frac{\partial}{\partial x}f(t,B_t)dB_t + \frac{1}{2}\frac{\partial^2}{\partial x^2}f(t,B_t)dt$$
$$= -(\alpha-\sigma^2/2)e^{-(\alpha-\sigma^2/2)t-\sigma B_t}dt - \sigma e^{-(\alpha-\sigma^2/2)t-\sigma B_t}dB_t + \frac{1}{2}\sigma^2e^{-(\alpha-\sigma^2/2)t-\sigma B_t}dt$$
$$= -(\alpha-\sigma^2)S_t^{-1}dt - \sigma S_t^{-1}dB_t.$$

2)

$$d(X_tS_t^{-1}) = X_tdS_t^{-1} + S_t^{-1}dX_t + d\langle X, S^{-1}\rangle_t$$
$$= X_t\big(-(\alpha-\sigma^2)S_t^{-1}dt - \sigma S_t^{-1}dB_t\big) + S_t^{-1}\big((\alpha X_t+\beta)dt + (\sigma X_t+\gamma)dB_t\big) + (-\sigma S_t^{-1})(\sigma X_t+\gamma)dt$$
$$= -(\alpha-\sigma^2)X_tS_t^{-1}dt - \sigma X_tS_t^{-1}dB_t + \alpha X_tS_t^{-1}dt + \beta S_t^{-1}dt + \sigma X_tS_t^{-1}dB_t + \gamma S_t^{-1}dB_t - \sigma^2X_tS_t^{-1}dt - \sigma\gamma S_t^{-1}dt$$
$$= \underbrace{\big(-(\alpha-\sigma^2) + \alpha - \sigma^2\big)}_{=0}X_tS_t^{-1}dt + \underbrace{(-\sigma+\sigma)}_{=0}X_tS_t^{-1}dB_t + (\beta-\gamma\sigma)S_t^{-1}dt + \gamma S_t^{-1}dB_t$$
$$= (\beta-\gamma\sigma)S_t^{-1}dt + \gamma S_t^{-1}dB_t.$$

3) From 2) we get that

$$X_tS_t^{-1} = (\beta-\gamma\sigma)\int_0^tS_s^{-1}ds + \gamma\int_0^tS_s^{-1}dB_s;$$

multiplying both sides by $S_t$ and writing $S_t$ in its explicit form gives

$$X_t = e^{(\alpha-\sigma^2/2)t+\sigma B_t}\Big((\beta-\gamma\sigma)\int_0^te^{-(\alpha-\sigma^2/2)s-\sigma B_s}ds + \gamma\int_0^te^{-(\alpha-\sigma^2/2)s-\sigma B_s}dB_s\Big).$$

Series 18: Additional exercises

1.

Compute the stochastic differential $dZ_t$ when

1. $Z_t = e^{\alpha t}$, $\alpha \in \mathbb{R}$, $t \in [0,\infty)$.

2. $Z_t = \int_0^t g(s)dB_s$ where $g(s)$ is an adapted stochastic process.

3. $Z_t = e^{\alpha B_t}$.

4. $Z_t = e^{\alpha X_t}$ where $dX_t = \mu dt + \sigma dB_t$.

5. $Z_t = X_t^2$ where $dX_t = \mu X_tdt + \sigma X_tdB_t$.

Proof. 1) Since $\alpha t$ is of first variation, $dZ_t = \alpha e^{\alpha t}dt = \alpha Z_tdt$.

2) $dZ_t = g(t)dB_t$, since it is the differential of an integral.

3) Let $f(x) = e^{\alpha x}$; then, using the Ito formula, since $B_t$ has quadratic variation, we get

$$dZ_t = f'(B_t)dB_t + \frac{1}{2}f''(B_t)d\langle B\rangle_t = \alpha e^{\alpha B_t}dB_t + \frac{\alpha^2}{2}e^{\alpha B_t}dt = \alpha Z_tdB_t + \frac{\alpha^2}{2}Z_tdt.$$

4) Using the same notation as in 3) we get

$$dZ_t = f'(X_t)dX_t + \frac{1}{2}f''(X_t)d\langle X\rangle_t = \alpha e^{\alpha X_t}(\mu dt + \sigma dB_t) + \frac{\alpha^2}{2}e^{\alpha X_t}\sigma^2dt = \alpha\sigma Z_tdB_t + \Big(\alpha\mu + \frac{(\alpha\sigma)^2}{2}\Big)Z_tdt.$$

5) Let $f(x) = x^2$; we get

$$dZ_t = f'(X_t)dX_t + \frac{1}{2}f''(X_t)d\langle X\rangle_t = 2X_tdX_t + \frac{1}{2}\cdot 2\,d\langle X\rangle_t$$
$$= 2X_t(\mu X_tdt + \sigma X_tdB_t) + \sigma^2X_t^2dt = (2\mu+\sigma^2)Z_tdt + 2\sigma Z_tdB_t.$$
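As an illustrative check of 5) (not part of the solution), the differential $dZ_t = (2\mu+\sigma^2)Z_tdt + 2\sigma Z_tdB_t$ implies $E[Z_t] = x_0^2e^{(2\mu+\sigma^2)t}$; Monte Carlo with the explicit geometric Brownian motion confirms it:

```python
import math
import random

# X_t = x0 exp((mu - sigma^2/2) t + sigma B_t); check E[X_t^2].
rng = random.Random(12)
mu, sigma, x0, t, n = 0.1, 0.3, 1.0, 1.0, 300_000
acc = 0.0
for _ in range(n):
    x = x0 * math.exp((mu - sigma**2 / 2) * t
                      + sigma * rng.gauss(0.0, math.sqrt(t)))
    acc += x * x
m = acc / n
exact = x0**2 * math.exp((2 * mu + sigma**2) * t)
print(round(m, 4), round(exact, 4))
```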

2.

Compute the stochastic differential for $Z$ when $Z_t = 1/X_t$, where $X_t$ has the differential

$$dX_t = \alpha X_tdt + \sigma X_tdB_t.$$

Proof. Let $f(x) = 1/x$, hence $f'(x) = -1/x^2$ and $f''(x) = 2/x^3$. By the Ito formula we get

$$dZ_t = f'(X_t)dX_t + \frac{1}{2}f''(X_t)d\langle X\rangle_t = -\frac{1}{X_t^2}dX_t + \frac{1}{2}\frac{2}{X_t^3}d\langle X\rangle_t$$
$$= -\frac{\alpha X_tdt + \sigma X_tdB_t}{X_t^2} + \frac{1}{X_t^3}\sigma^2X_t^2dt = -\frac{\alpha dt + \sigma dB_t}{X_t} + \frac{\sigma^2}{X_t}dt = (-\alpha+\sigma^2)Z_tdt - \sigma Z_tdB_t.$$

3.

Let $B$ be a Brownian motion and $\mathcal{F}_t$ the filtration generated by $B$. Show by using stochastic calculus that the following processes are martingales:

1. $B_t^2 - t$.

2. $e^{\lambda B_t - \frac{\lambda^2t}{2}}$.

Proof. If the Ito differential only has a $dB_t$-part, i.e. the differential is of the form $0\,dt + (\dots)dB_t$, and the integrand is a member of the class $\mathcal{V}$ defined in Definition 13.1, then we may be certain that the process is a martingale by Proposition 13.11.

1)

$$d(B_t^2 - t) = 2B_tdB_t + \frac{1}{2}\cdot 2\,dt - dt = 2B_tdB_t,$$

and since

$$E\Big[\int_0^tB_s^2ds\Big] = \{\text{Fubini}\} = \int_0^tE[B_s^2]ds = \int_0^tsds = \frac{t^2}{2} < \infty,$$

$2B_t \in \mathcal{V}$, and hence $B_t^2 - t$ is a martingale.

2)

$$d\big(e^{\lambda B_t-\frac{\lambda^2t}{2}}\big) = -\frac{\lambda^2}{2}e^{\lambda B_t-\frac{\lambda^2t}{2}}dt + \lambda e^{\lambda B_t-\frac{\lambda^2t}{2}}dB_t + \frac{1}{2}\lambda^2e^{\lambda B_t-\frac{\lambda^2t}{2}}dt = \lambda e^{\lambda B_t-\frac{\lambda^2t}{2}}dB_t.$$

And since

$$E\Big[\int_0^t\big(e^{\lambda B_s-\frac{\lambda^2s}{2}}\big)^2ds\Big] = E\Big[\int_0^te^{2\lambda B_s-\lambda^2s}ds\Big] = \{\text{Fubini}\} = \int_0^tE[e^{2\lambda B_s}]e^{-\lambda^2s}ds = \{B_s \sim N(0,s)\} = \int_0^te^{2\lambda^2s}e^{-\lambda^2s}ds < \infty, \quad 0 \leq t < \infty,$$

$e^{\lambda B_t-\frac{\lambda^2t}{2}}$ is a martingale.


4.

Check whether the following processes are martingales with respect to the filtration generated by $B_t$:

1. $X_t = B_t + 4t$

2. $X_t = B_t^2$

3. $X_t = t^2B_t - 2\int_0^trB_rdr$

4. $X_t = B_t^{(1)}B_t^{(2)}$, where $(B_t^{(1)}, B_t^{(2)})$ is a two dimensional Brownian motion.

Proof. The proofs may (at least) be done in two ways: one by checking the martingale property of the process using a time $s < t$, and the other by computing the stochastic differential of the process and then using the Martingale Representation Theorem (Theorem 15.3) to conclude whether the process is a martingale or not. We will do both. When using the Martingale Representation Theorem we should also prove that the integrand is a member of the class $\mathcal{V}$, which we omit in the solutions.

1) $E[X_t|\mathcal{F}_s] = E[B_t+4t|\mathcal{F}_s] = B_s + 4t \neq B_s + 4s$, hence $B_t + 4t$ is not an $\mathcal{F}_t$-martingale.
$dX_t = dB_t + 4dt \neq g(s,\omega)dB_t$, hence $B_t + 4t$ is not an $\mathcal{F}_t$-martingale.

2) $E[X_t|\mathcal{F}_s] = E[B_t^2|\mathcal{F}_s] = E[(B_t-B_s+B_s)^2|\mathcal{F}_s] = E[(B_t-B_s)^2|\mathcal{F}_s] + E[B_s^2|\mathcal{F}_s] = t-s+B_s^2 \neq B_s^2$, hence $B_t^2$ is not an $\mathcal{F}_t$-martingale.
$dX_t = 2B_tdB_t + dt \neq g(s,\omega)dB_t$, hence $B_t^2$ is not an $\mathcal{F}_t$-martingale.

3)

$$E[X_t|\mathcal{F}_s] = E\Big[t^2B_t - 2\int_0^trB_rdr\Big|\mathcal{F}_s\Big] = t^2B_s - 2\int_0^sr\underbrace{E[B_r|\mathcal{F}_s]}_{=B_r}dr - 2\int_s^tr\underbrace{E[B_r|\mathcal{F}_s]}_{=B_s}dr$$
$$= t^2B_s - 2\int_0^srB_rdr - 2B_s\int_s^trdr = s^2B_s - 2\int_0^srB_rdr = X_s,$$

hence $t^2B_t - 2\int_0^trB_rdr$ is an $\mathcal{F}_t$-martingale.
$dX_t = 2tB_tdt + t^2dB_t - 2tB_tdt = t^2dB_t$, hence $t^2B_t - 2\int_0^trB_rdr$ is an $\mathcal{F}_t$-martingale.

4) $E[X_t|\mathcal{F}_s] = E[B_t^{(1)}B_t^{(2)}|\mathcal{F}_s] = \{\text{independence}\} = E[B_t^{(1)}|\mathcal{F}_s]E[B_t^{(2)}|\mathcal{F}_s] = B_s^{(1)}B_s^{(2)}$, hence $B_t^{(1)}B_t^{(2)}$ is an $\mathcal{F}_t$-martingale.
$dX_t = B_t^{(1)}dB_t^{(2)} + B_t^{(2)}dB_t^{(1)} + d\langle B^{(1)},B^{(2)}\rangle_t = B_t^{(1)}dB_t^{(2)} + B_t^{(2)}dB_t^{(1)}$, hence $B_t^{(1)}B_t^{(2)}$ is an $\mathcal{F}_t$-martingale.

5.

Let $X$ be the solution to the SDE

$$dX_t = \alpha X_tdt + \sigma dB_t, \quad X_0 = x_0,$$

where $\alpha, \sigma, x_0$ are constants and $B$ is a Brownian motion.


1. Determine $E[X_t]$.

2. Determine $V(X_t)$.

3. Determine the solution to the SDE.

Proof. 1)

$$E[X_t] = E\Big[x_0 + \alpha\int_0^tX_sds + \sigma\int_0^tdB_s\Big] = x_0 + \alpha E\Big[\int_0^tX_sds\Big] + \sigma\underbrace{E\Big[\int_0^tdB_s\Big]}_{=0} = x_0 + \alpha\int_0^tE[X_s]ds,$$

which leads to the ordinary differential equation

$$\frac{d}{dt}E[X_t] = \alpha E[X_t], \quad E[X_0] = x_0,$$

which has the solution $E[X_t] = x_0e^{\alpha t}$.

2) $V(X_t) = E[X_t^2] - (E[X_t])^2 = E[X_t^2] - x_0^2e^{2\alpha t}$, so we need to find an explicit expression for $E[X_t^2]$. Looking at the stochastic differential of $X_t^2$ we get

$$dX_t^2 = 2X_tdX_t + d\langle X\rangle_t = 2X_t(\alpha X_tdt + \sigma dB_t) + \sigma^2dt = (2\alpha X_t^2 + \sigma^2)dt + 2\sigma X_tdB_t,$$

from which we have

$$X_t^2 = x_0^2 + \int_0^t(2\alpha X_s^2 + \sigma^2)ds + 2\sigma\int_0^tX_sdB_s.$$

$E[X_t^2]$ becomes

$$E[X_t^2] = x_0^2 + E\Big[\int_0^t2\alpha X_s^2ds\Big] + \sigma^2t + 2\sigma\underbrace{E\Big[\int_0^tX_sdB_s\Big]}_{=0\ (\text{Mg})} = \{\text{Fubini}\} = x_0^2 + \int_0^t2\alpha E[X_s^2]ds + \sigma^2t. \tag{6}$$

Let $g(t) = E[X_t^2]$; the problem (6) leads to the ordinary differential equation

$$\frac{d}{dt}g(t) = 2\alpha g(t) + \sigma^2.$$

Consider the function $e^{-2\alpha t}g(t)$. Differentiating $e^{-2\alpha t}g(t)$ we get

$$\frac{d}{dt}\big(e^{-2\alpha t}g(t)\big) = -2\alpha g(t)e^{-2\alpha t} + e^{-2\alpha t}\frac{d}{dt}g(t) = e^{-2\alpha t}\sigma^2.$$

Integrating both sides and multiplying both sides by $e^{2\alpha t}$ we get

$$g(t) = e^{2\alpha t}\Big(c + \sigma^2\int_0^te^{-2\alpha s}ds\Big),$$

where $c$ is determined by the initial condition $g(0) = x_0^2$ as $c = x_0^2$. The formula for $E[X_t^2]$ is therefore

$$E[X_t^2] = e^{2\alpha t}\Big(x_0^2 + \sigma^2\int_0^te^{-2\alpha s}ds\Big) = x_0^2e^{2\alpha t} + \sigma^2\frac{e^{2\alpha t}-1}{2\alpha},$$

so the variance is $V(X_t) = E[X_t^2] - (E[X_t])^2 = \sigma^2\frac{e^{2\alpha t}-1}{2\alpha}$.

3) Taking the differential of $e^{-\alpha t}X_t$ yields

$$d\big(e^{-\alpha t}X_t\big) = -\alpha e^{-\alpha t}X_tdt + e^{-\alpha t}dX_t = e^{-\alpha t}\sigma dB_t. \tag{7}$$

Integrating both sides of (7) and multiplying both sides by $e^{\alpha t}$ gives

$$X_t = e^{\alpha t}\Big(c + \int_0^te^{-\alpha s}\sigma dB_s\Big),$$

where $c$ is determined by the initial condition $X_0 = x_0$ to be $c = x_0$. The solution of the SDE is therefore given by

$$X_t = e^{\alpha t}\Big(x_0 + \int_0^te^{-\alpha s}\sigma dB_s\Big).$$

$X_t$ is the so-called Ornstein-Uhlenbeck process.

)Xt is the so called Ornstein Uhlenbeck process.

6.

Let h(t) be a deterministic function and dene the process Xt as

Xt =

∫ t

0

h(s)dBs.

Show that Xt ∼ N(0,∫ t

0h2(s)ds) by showing that

E[eiuXt ] = e−u2

2

∫ t0h2(s)ds. (8)

Proof. Recall that (8) is the characteristic function of $N\big(0, \int_0^th^2(s)ds\big)$, which is a unique transformation; therefore proving (8) is equivalent to proving that $X_t \sim N\big(0, \int_0^th^2(s)ds\big)$.

Let $Y_t = e^{iuX_t}$. Since $dX_t = h(t)dB_t$ we get

$$dY_t = iuY_tdX_t - \frac{u^2}{2}Y_td\langle X\rangle_t = iuY_th(t)dB_t - \frac{u^2}{2}Y_th^2(t)dt,$$

so $Y_t = 1 + iu\int_0^tY_sh(s)dB_s - \frac{u^2}{2}\int_0^tY_sh^2(s)ds$, since $Y_0 = 1$. The expected value of $Y_t$ is

$$E[Y_t] = 1 + iu\underbrace{E\Big[\int_0^tY_sh(s)dB_s\Big]}_{=0} - \frac{u^2}{2}E\Big[\int_0^tY_sh^2(s)ds\Big] = \{\text{Fubini}\} = 1 - \frac{u^2}{2}\int_0^tE[Y_s]h^2(s)ds. \tag{9}$$

Let $g(t) = E[Y_t]$; then (9) may be written as the ordinary differential equation

$$\frac{d}{dt}g(t) = -\frac{u^2}{2}h^2(t)g(t), \quad g(0) = 1,$$

which has the solution $g(t) = e^{-\frac{u^2}{2}\int_0^th^2(s)ds}$, hence

$$E[e^{iuX_t}] = e^{-\frac{u^2}{2}\int_0^th^2(s)ds},$$

from which we conclude that $X_t \sim N\big(0, \int_0^th^2(s)ds\big)$.

7.

Let $X, Y$ satisfy the following system of SDE's:

$$dX_t = \alpha X_tdt + Y_tdB_t, \quad X_0 = x_0,$$
$$dY_t = \alpha Y_tdt - X_tdB_t, \quad Y_0 = y_0.$$

1. Show that $R_t = X_t^2 + Y_t^2$ is deterministic.

2. Compute $E[X_t]$, $E[Y_t]$ and $\mathrm{Cov}(X_t, Y_t)$.

Proof. 1) We first calculate the stochastic differentials of $X_t^2$ and $Y_t^2$:

$$dX_t^2 = 2X_tdX_t + d\langle X\rangle_t = 2\alpha X_t^2dt + 2X_tY_tdB_t + Y_t^2dt,$$
$$dY_t^2 = 2\alpha Y_t^2dt - 2Y_tX_tdB_t + X_t^2dt,$$

so we may write $R_t$ as

$$R_t = X_t^2 + Y_t^2 = x_0^2 + 2\alpha\int_0^tX_s^2ds + 2\int_0^tX_sY_sdB_s + \int_0^tY_s^2ds + y_0^2 + 2\alpha\int_0^tY_s^2ds - 2\int_0^tY_sX_sdB_s + \int_0^tX_s^2ds$$
$$= x_0^2 + y_0^2 + (1+2\alpha)\int_0^tX_s^2ds + (1+2\alpha)\int_0^tY_s^2ds = x_0^2 + y_0^2 + (1+2\alpha)\int_0^tR_sds. \tag{11}$$

Let $g(t) = R_t$; then (11) can be written as the ordinary differential equation

$$\frac{d}{dt}g(t) = (1+2\alpha)g(t), \quad g(0) = x_0^2 + y_0^2,$$

with the solution $g(t) = (x_0^2+y_0^2)e^{(1+2\alpha)t}$. It has been shown that $R_t = (x_0^2+y_0^2)e^{(1+2\alpha)t}$, which is deterministic.

2) Rewriting $X_t$ and $Y_t$ in integral form we get

$$X_t = x_0 + \alpha\int_0^tX_sds + \int_0^tY_sdB_s, \qquad Y_t = y_0 + \alpha\int_0^tY_sds - \int_0^tX_sdB_s,$$

so

$$E[X_t] = x_0 + \alpha E\Big[\int_0^tX_sds\Big] + \underbrace{E\Big[\int_0^tY_sdB_s\Big]}_{=0} = x_0 + \alpha\int_0^tE[X_s]ds,$$
$$E[Y_t] = y_0 + \alpha E\Big[\int_0^tY_sds\Big] - \underbrace{E\Big[\int_0^tX_sdB_s\Big]}_{=0} = y_0 + \alpha\int_0^tE[Y_s]ds.$$

This gives two ordinary differential equations, solved in the same manner as in part 1), with solutions

$$E[X_t] = x_0e^{\alpha t}, \qquad E[Y_t] = y_0e^{\alpha t}.$$

Since the covariance is $\mathrm{Cov}(X_t,Y_t) = E[X_tY_t] - E[X_t]E[Y_t]$ we need to calculate $E[X_tY_t]$. For this purpose, consider the stochastic differential of $X_tY_t$:

$$d(X_tY_t) = X_tdY_t + Y_tdX_t + d\langle X,Y\rangle_t = Y_t(\alpha X_tdt + Y_tdB_t) + X_t(\alpha Y_tdt - X_tdB_t) - X_tY_tdt$$
$$= (2\alpha-1)X_tY_tdt + Y_t^2dB_t - X_t^2dB_t.$$

Hence $X_tY_t = x_0y_0 + (2\alpha-1)\int_0^tX_sY_sds + \int_0^t(Y_s^2-X_s^2)dB_s$, and we get

$$E[X_tY_t] = x_0y_0 + (2\alpha-1)E\Big[\int_0^tX_sY_sds\Big] + \underbrace{E\Big[\int_0^t(Y_s^2-X_s^2)dB_s\Big]}_{=0} = \{\text{Fubini}\} = x_0y_0 + (2\alpha-1)\int_0^tE[X_sY_s]ds. \tag{12}$$

Let $g(t) = E[X_tY_t]$; then solving (12) amounts to solving the ordinary differential equation

$$\frac{d}{dt}g(t) = (2\alpha-1)g(t), \quad g(0) = x_0y_0,$$

which has the solution $g(t) = x_0y_0e^{(2\alpha-1)t}$. Hence $E[X_tY_t] = x_0y_0e^{(2\alpha-1)t}$ and we get

$$\mathrm{Cov}(X_t,Y_t) = E[X_tY_t] - E[X_t]E[Y_t] = x_0y_0e^{(2\alpha-1)t} - x_0y_0e^{2\alpha t} = x_0y_0e^{2\alpha t}\big(e^{-t}-1\big).$$

8.

Let $X$ and $Y$ be processes given by the SDE's

$$dX_t = \alpha X_tdB_t^{(1)} + \beta X_tdB_t^{(2)}, \quad X_0 = x_0,$$
$$dY_t = \gamma Y_tdt + \sigma Y_tdB_t^{(1)}, \quad Y_0 = y_0,$$

where $\alpha, \beta, \gamma, \sigma$ are constants and $B^{(1)}, B^{(2)}$ are independent Brownian motions. Compute $E[X_tY_t]$.

Proof. Start by differentiating $X_tY_t$ to get

$$d(X_tY_t) = X_tdY_t + Y_tdX_t + d\langle X,Y\rangle_t = X_t(\gamma Y_tdt + \sigma Y_tdB_t^{(1)}) + Y_t(\alpha X_tdB_t^{(1)} + \beta X_tdB_t^{(2)}) + \alpha\sigma X_tY_tdt$$
$$= (\gamma+\alpha\sigma)X_tY_tdt + (\sigma+\alpha)X_tY_tdB_t^{(1)} + \beta X_tY_tdB_t^{(2)}.$$

Using the initial condition $X_0Y_0 = x_0y_0$ we have

$$X_tY_t = x_0y_0 + (\gamma+\alpha\sigma)\int_0^tX_sY_sds + (\sigma+\alpha)\int_0^tX_sY_sdB_s^{(1)} + \beta\int_0^tX_sY_sdB_s^{(2)},$$

so $E[X_tY_t]$ becomes

$$E[X_tY_t] = x_0y_0 + (\gamma+\alpha\sigma)E\Big[\int_0^tX_sY_sds\Big] + (\sigma+\alpha)\underbrace{E\Big[\int_0^tX_sY_sdB_s^{(1)}\Big]}_{=0} + \beta\underbrace{E\Big[\int_0^tX_sY_sdB_s^{(2)}\Big]}_{=0}$$
$$= \{\text{Fubini}\} = x_0y_0 + (\gamma+\alpha\sigma)\int_0^tE[X_sY_s]ds. \tag{15}$$

Let $g(t) = E[X_tY_t]$; then (15) gives the ordinary differential equation

$$\frac{d}{dt}g(t) = (\gamma+\alpha\sigma)g(t), \quad g(0) = x_0y_0,$$

which has the solution $g(t) = x_0y_0e^{(\gamma+\alpha\sigma)t}$, hence $E[X_tY_t] = x_0y_0e^{(\gamma+\alpha\sigma)t}$.

82

8.

Page 84: Exercises for Stochastic Calculus

References

General Probability:

1. Ross, S.M. (1980). Introduction to Probability Models, Second Edition. Academic Press, New York. (Chapters 1, 2 and 3)

2. Breiman, L. (1968). Probability. Addison-Wesley, Reading, Massachusetts.

3. Billingsley, P. (1979). Probability and Measure. John Wiley, New York. 4. Feller, W. (1957). An Introduction to Probability Theory and Its Applications, Vol. 1. John Wiley &

Sons, New York.

5. Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley & Sons, New York.

6. Renyi, A. (1970). Foundations of Probability. Holden-Day, San Francisco, California. 7. Chung, K.L. (1968). A Course in Probability. Harcourt, Brace and World, New York.

8. Gnedenko, B.V. and A.N. Kolmogorov (1954). Limit Distributions for Sums of Independent Random

Variables. Addison-Wesley, Reading, Massachusetts (Translation from Russian).

General Stochastic Processes:

9. Cinlar, E. (1975). Introduction to Stochastic Processes. Prentice-Hall, Englewood Cliffs, New .

10. Cox, D.R. and H.D. Miller (1965). The Theory of Stochastic Processes. John Wiley & Sons, New York. 11. Doob, J.L. (1953). Stochastic Processes. John Wiley & Sons, New York.

12. Taylor, H.M. and S. Karlin (1984). An Introduction to Stochastic Modeling. Academic Press, New York.

13. Parzen, E. (1962). Stochastic Processes. Holden-Day, Francisco, California. 14. Prabhu, N.U. (1965). Stochastic Processes. Macmillan, New York.

15. Spitzer, F. (1964). Random Walks. Nostrand. Princeton, New Jersey.

Discrete-time Markov Chains:

16. Ross, S.M. (1980). Introduction to Probability Models, Second Edition. Academic Press, New York. (Chapter 4)

17. Karlin, S. and H. Taylor (1975). A First Course in Stochastic Processes, Second Edition. Academic Press, New York. (Chapters 2 and 3)

18. Taylor, H.M. and S. Karlin (1984). An Introduction to Stochastic Modeling. Academic Press, New York.

Continuous-time Markov Chains:

19. Ross, S.M. (1980). Introduction to Probability Models, Second Edition. Academic Press, New York. (Chapter 6)

20. Ross, S.M. (1983). Stochastic Processes. John Wiley & Sons, New York.

21. Taylor, H.M. and S. Karlin (1984). An Introduction to Stochastic Modeling. Academic Press, New York.

Stability for General State Space Markov Chains:

22. S.P. Meyn and R.L. Tweedie (1993). Markov Chains and Stochastic Stability. Springer-Verlag, New York.

Stochastic Control:

23. D. Bertsekas (1995). Dynamic Programming and Optimal Control. Volume 2. Athena, MA.

24. W. Fleming and R.W. Rishel (1975). Deterministic and Stochastic Optimal Control. Springer-Verlag.

25. W. Fleming and H.M. Soner (1992). Controlled Markov Processes and Viscosity Solutions. Springer-Verlag.

26. M. L. Puterman (1994). Markov Decision Processes. Wiley.

Diffusion Approximations and Brownian motion:

27. Karlin, S. and H. Taylor (1981). A Second Course in Stochastic Processes. Academic Press, New York.

28. Taylor, H.M. and S. Karlin (1984). An Introduction to Stochastic Modeling. Academic Press, New York.

29. Bharucha-Reid, A.T. (1960). Elements of the Theory of Markov Processes and their Applications. McGraw-Hill, New York.



Renewal Theory:

30. Cox, D.R. (1962). Renewal Theory. Methuen, London.

31. Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley & Sons, New York.

32. Asmussen, S. (2003). Applied Probability and Queues, Second Edition. Springer.

33. Taylor, H.M. and S. Karlin (1984). An Introduction to Stochastic Modeling. Academic Press, New York.

Queueing Theory:

34. Asmussen, S. (2003). Applied Probability and Queues, Second Edition. Springer.

35. Gross, D. and C. Harris (1985). Fundamentals of Queueing Theory, Second Edition. Wiley, New York.

36. Cox, D.R. and W.L. Smith (1961). Queues. Methuen, London.

37. Gross, D. and C. Harris (1985). Fundamentals of Queueing Theory, Second Edition. Wiley, New York.

38. Tijms, H. (1986). Stochastic Modeling and Analysis: A Computational Approach. John Wiley & Sons, New York.

Algorithms

39. Tijms, H. (1986). Stochastic Modeling and Analysis: A Computational Approach. John Wiley & Sons, New York.

40. Bremaud, P. (1999). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer-Verlag, New York.
