EXERCISES ON STOCHASTIC CALCULUS IN FINANCE
KARTHIK IYER
This collection of solutions is based on assignments for a graduate-level course on stochastic calculus for quantitative finance that I attended. The course roughly covered Brownian motion, stochastic calculus for processes without jumps, risk-neutral pricing, connections to PDEs (Kolmogorov forward and backward equations) and stochastic optimization (HJB equations).
All questions, cited theorems, definitions and equations are from the (excellent) book 'Stochastic Calculus for Finance II: Continuous-Time Models' by Steven Shreve. A bold-faced equation or theorem number refers to the corresponding equation/theorem in the book.
All errors are entirely mine. Please let me know if you spot any. Comments, suggestions and ideas are always welcome.
Exercises
(1.5) When dealing with double Lebesgue integrals, just as in double Riemann integrals, the order of integration can be reversed. The only assumption required is that the function being integrated be either non-negative or integrable. Here is an application of this fact.
Let X be a non-negative random variable with cumulative distribution function F(x) = P{X ≤ x}. Show that

E(X) = ∫₀^∞ (1 − F(x)) dx    (0.1)

by showing that

∫_Ω ∫₀^∞ 1_{[0,X(ω))}(x) dx dP(ω)    (0.2)

is equal to both E(X) and ∫₀^∞ (1 − F(x)) dx.
Proof. We first note that g(x, ω) = 1_{[0,X(ω))}(x) is a non-negative function on (0, ∞) × Ω. The Fubini-Tonelli theorem gives us

∫_Ω ∫₀^∞ 1_{[0,X(ω))}(x) dx dP(ω) = ∫₀^∞ ∫_Ω 1_{[0,X(ω))}(x) dP(ω) dx    (0.3)
Let us look at the LHS of (0.3). We have

∫_Ω ∫₀^∞ 1_{[0,X(ω))}(x) dx dP(ω) = ∫_Ω [ ∫₀^∞ 1_{[0,X(ω))}(x) dx ] dP(ω)
= ∫_Ω X(ω) dP(ω)
= E(X)  (by definition of E(X))    (0.4)

Before we proceed with the RHS of (0.3), let us first note that, by the finite additivity of the given probability measure and the fact that X is a non-negative random variable, we have 1 − F(x) = P{X > x} = P{ω : X(ω) > x}. The RHS of (0.3) gives us

∫₀^∞ ∫_Ω 1_{[0,X(ω))}(x) dP(ω) dx = ∫₀^∞ P{ω : 0 ≤ x < X(ω)} dx
= ∫₀^∞ (1 − F(x)) dx    (0.5)
(0.4), (0.5) and (0.3) together give the desired result.
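The identity just proved can be sanity-checked numerically for a distribution with a known mean. The Python sketch below (the Exponential rate `lam` and the quadrature settings are arbitrary choices, not from the text) integrates the survival function 1 − F(x) = e^{−λx} by the trapezoidal rule and compares the result with E(X) = 1/λ.

```python
import math

# Numerical check of E(X) = ∫_0^∞ (1 - F(x)) dx for an Exponential(lam)
# random variable, where 1 - F(x) = exp(-lam*x) and E(X) = 1/lam.
lam = 2.0

def tail(x):
    """Survival function 1 - F(x) of the Exponential(lam) distribution."""
    return math.exp(-lam * x)

# Trapezoidal rule on [0, 20/lam]; the truncated tail mass is negligible.
n, upper = 200_000, 20.0 / lam
h = upper / n
integral = 0.5 * h * (tail(0.0) + tail(upper)) + h * sum(tail(i * h) for i in range(1, n))

expected = 1.0 / lam  # E(X) for the exponential distribution
print(integral, expected)
```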
(1.9) Suppose X is a random variable on some probability space (Ω, F, P), A is a set in F, and for every Borel set B of ℝ we have

∫_A 1_B(X(ω)) dP(ω) = P(A) · P{X ∈ B}    (0.6)

Then we say that X is independent of the event A. Show that if X is independent of an event A, then

∫_A g(X(ω)) dP(ω) = P(A) · E g(X)    (0.7)

for every non-negative, Borel-measurable function g.

Proof. We use the 'standard machine' to prove the result. First we prove the claim for g = 1_C, where C is a Borel-measurable subset of ℝ. By hypothesis (0.6), we have

∫_A 1_C(X(ω)) dP(ω) = P(A) · P{X ∈ C}    (0.8)

Since the random variable 1_C(X) takes only two values, 0 and 1, its expectation is E 1_C(X) = 1 · P{X ∈ C} + 0 · P{X ∈ C^c} = P{X ∈ C}. We thus have from (0.8)

∫_A 1_C(X(ω)) dP(ω) = P(A) · P{X ∈ C} = P(A) · E 1_C(X)    (0.9)
Next we choose g = Σ_{i=1}^k c_i 1_{C_i}, where {C_i}_{i=1}^k are disjoint Borel subsets of ℝ and all c_i ≥ 0. Then

∫_A g(X(ω)) dP(ω) = ∫_A Σ_{i=1}^k c_i 1_{C_i}(X(ω)) dP(ω)
= Σ_{i=1}^k c_i ∫_A 1_{C_i}(X(ω)) dP(ω)  (linearity of the Lebesgue integral)
= Σ_{i=1}^k c_i P(A) · E 1_{C_i}(X)  (by (0.9))
= P(A) · E g(X)  (by linearity of expectation)    (0.10)

Next, let g be an arbitrary non-negative Borel-measurable function and choose an increasing sequence of non-negative simple functions h_n ↑ g. By the monotone convergence theorem applied to each side, and using (0.10), we have

∫_A g(X(ω)) dP(ω) = lim_{n→∞} ∫_A h_n(X(ω)) dP(ω) = lim_{n→∞} P(A) · E(h_n(X)) = P(A) · E(g(X))    (0.11)
(1.10) Let P be the uniform (Lebesgue) measure on Ω = [0, 1]. Define

Z(ω) = 0 if 0 ≤ ω < 1/2, and Z(ω) = 2 if 1/2 ≤ ω ≤ 1

For A ∈ B[0, 1], define

P̃(A) = ∫_A Z(ω) dP(ω)

(a) Show that P̃ is a probability measure.

Proof. Instead of proving the result by invoking the definition of a probability measure, we will just use Theorem 1.6.1. To obtain the conclusion of Theorem 1.6.1, i.e. that P̃ is a probability measure, we need to show that EZ = 1. The fact that P is the Lebesgue measure on [0, 1], along with the definition of the random variable Z, gives us EZ = ∫_{1/2}^1 2 dx = 1.
(b) Show that if P(A) = 0, then P̃(A) = 0. We say that P̃ is absolutely continuous with respect to P.

Proof. If P(A) = 0, then by the definition of the integral, ∫_A Z(ω) dP(ω) = 0 for any random variable Z. (We can justify this assertion, for example, by using the 'standard machine' for the random variable Z.) Hence P̃(A) = 0.
(c) Show that there is a set A for which P̃(A) = 0 but P(A) > 0. In other words, P and P̃ are not equivalent.

Proof. Choose any set A contained in [0, 1/2); for instance, take A = [0, 1/4]. Then P(A) = 1/4 as P is the usual Lebesgue measure. Note that Z = 0 on A, and hence ∫_A Z(ω) dP(ω) = 0 (by Theorem 1.3.4). Thus P̃(A) = 0.

Note that this happens because Z is not P-almost surely non-zero.
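The three parts can be illustrated with a quick Monte Carlo sketch under the uniform measure P; the sample size and the test set A = [0, 1/4] are arbitrary choices.

```python
import random

# Sanity check of the measure defined by integrating Z against Lebesgue
# measure on [0,1], with Z = 0 on [0,1/2) and Z = 2 on [1/2,1]:
# the new measure of the whole space is 1, while A = [0,1/4] gets
# new measure 0 despite having Lebesgue measure 1/4.
random.seed(0)

def Z(w):
    return 0.0 if w < 0.5 else 2.0

n = 200_000
samples = [random.random() for _ in range(n)]

p_tilde_total = sum(Z(w) for w in samples) / n            # should be 1
p_tilde_A = sum(Z(w) for w in samples if w <= 0.25) / n   # should be exactly 0
p_A = sum(1 for w in samples if w <= 0.25) / n            # should be about 1/4

print(p_tilde_total, p_tilde_A, p_A)
```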
(1.11) In Example 1.6.6, we began with a standard normal random variable X under a measure P. According to Exercise 1.6, this random variable has the moment-generating function

E e^{uX} = e^{u²/2} for all u ∈ ℝ

The moment-generating function of a random variable determines its distribution. In particular, any random variable that has moment-generating function e^{u²/2} must be standard normal.

In Example 1.6.6, we also defined Y = X + θ, where θ is a constant, we set Z = e^{−θX − θ²/2}, and we defined P̃ by the formula

P̃(A) = ∫_A Z(ω) dP(ω) for all A ∈ F

We showed by considering its cumulative distribution function that Y is a standard normal random variable under P̃. Give another proof that Y is standard normal under P̃ by verifying the moment-generating function formula

Ẽ e^{uY} = e^{u²/2} for all u ∈ ℝ
Proof. Consider any u ∈ ℝ. We have

Ẽ e^{uY} = ∫_Ω e^{uY(ω)} Z(ω) dP(ω)  (by eqn (1.6.4))
= ∫_Ω e^{u(X+θ)} e^{−θX − θ²/2} dP(ω)
= (1/√(2π)) ∫_ℝ e^{ux + uθ} e^{−θx − θ²/2} e^{−x²/2} dx  (by Theorem 1.5.2)
= e^{u²/2} (1/√(2π)) ∫_ℝ e^{−(x+θ−u)²/2} dx
= e^{u²/2} (1/√(2π)) ∫_ℝ e^{−x²/2} dx  (change of variables)
= e^{u²/2}
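This change-of-measure computation lends itself to a Monte Carlo sanity check: the expectation under the new measure equals the P-expectation weighted by Z. In the sketch below, θ, u and the sample size are arbitrary choices.

```python
import math, random

# Monte Carlo check that Y = X + theta is standard normal under the
# tilted measure: estimate E[exp(u*(X+theta)) * Z] with
# Z = exp(-theta*X - theta^2/2), and compare with exp(u^2/2).
random.seed(1)
theta, u, n = 0.7, 0.5, 400_000

acc = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0)                     # X standard normal under P
    z = math.exp(-theta * x - 0.5 * theta ** 2)    # Radon-Nikodym derivative
    acc += math.exp(u * (x + theta)) * z
mgf_tilde = acc / n

print(mgf_tilde, math.exp(0.5 * u ** 2))
```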
(1.14) Let X be a non-negative random variable defined on a probability space (Ω, F, P) with the exponential distribution, that is,

P{X ≤ a} = 1 − e^{−λa}, a ≥ 0

where λ > 0 is a constant. Let λ̃ be another positive constant. Define

Z = (λ̃/λ) e^{−(λ̃−λ)X}

Define P̃ by

P̃(A) = ∫_A Z dP for all A ∈ F

Proof. Let us first note that for any a ≥ 0,

P{X ≤ a} = 1 − e^{−λa} = λ ∫₀^a e^{−λx} dx    (0.12)

Using properties of the measure P and properties of the Lebesgue integral, it is easy to see that

P{X ∈ B} = λ ∫_B e^{−λx} dx    (0.13)

where B is any interval in [0, ∞).

(To show that (0.13) is valid for all Borel sets B, we define the collection S of sets in [0, ∞) for which (0.13) holds. This collection contains all intervals, and using properties of P and the Lebesgue integral it can be shown to be a σ-algebra. Thus (0.13) can be shown to be valid for all Borel subsets of [0, ∞).)

Now that we have a density function for X, computations become simpler.
(a) Show that P̃(Ω) = 1.

P̃(Ω) = ∫_Ω Z(ω) dP(ω)
= ∫_Ω (λ̃/λ) e^{−(λ̃−λ)X(ω)} dP(ω)
= ∫₀^∞ (λ̃/λ) e^{−(λ̃−λ)x} λ e^{−λx} dx  (by (0.13) and Theorem 1.5.2)
= ∫₀^∞ λ̃ e^{−λ̃x} dx = 1    (0.14)
(b) Compute the cumulative distribution function

P̃{X ≤ a} for a ≥ 0

for the random variable X under the probability measure P̃.
We have

P̃{X ≤ a} = ∫_Ω 1_{{X(ω) ≤ a}} Z(ω) dP(ω)
= ∫_Ω 1_{{X(ω) ≤ a}} (λ̃/λ) e^{−(λ̃−λ)X(ω)} dP(ω)
= ∫₀^∞ 1_{{x ≤ a}} λ̃ e^{−λ̃x} dx  (by (0.13) and Theorem 1.5.2)
= 1 − e^{−λ̃a}    (0.15)
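Part (b) says that reweighting an Exponential(λ) sample by Z turns it into an Exponential(λ̃) sample, which is easy to check by importance-sampling Monte Carlo. The rates, the level a and the sample size below are arbitrary choices.

```python
import math, random

# Monte Carlo check of part (b): with X ~ Exp(l) under P and the
# weight z = (lt/l)*exp(-(lt-l)*x), the weighted probability of
# {X <= a} should equal 1 - exp(-lt*a), the Exp(lt) CDF.
random.seed(2)
l, lt, a, n = 1.0, 2.5, 0.8, 400_000

acc = 0.0
for _ in range(n):
    x = random.expovariate(l)                 # exponential rate l under P
    z = (lt / l) * math.exp(-(lt - l) * x)    # Radon-Nikodym derivative
    if x <= a:
        acc += z
cdf_tilde = acc / n

print(cdf_tilde, 1.0 - math.exp(-lt * a))
```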
(2.2) Independence of random variables can be affected by changes of measure. To illustrate this point, consider the space of two coin tosses Ω₂ = {HH, HT, TH, TT}, and let stock prices be given by S₀ = 4, S₁(H) = 8, S₁(T) = 2, S₂(HH) = 16, S₂(HT) = S₂(TH) = 4, S₂(TT) = 1.

Consider two probability measures given by

P̃(HH) = 1/4, P̃(HT) = 1/4, P̃(TH) = 1/4, P̃(TT) = 1/4
P(HH) = 4/9, P(HT) = 2/9, P(TH) = 2/9, P(TT) = 1/9
Define the random variable X = 1 if S₂ = 4 and X = 0 otherwise.

(a) List all the sets in σ(X).

Solution. σ(X) = {∅, Ω₂, {HT, TH}, {HH, TT}}, since {X = 1} = {HT, TH}.

(b) List all the sets in σ(S₁).

Solution. σ(S₁) = {∅, Ω₂, {HH, HT}, {TH, TT}}.

(c) Show that σ(X) and σ(S₁) are independent under P̃.

Solution. We only show one of the required equalities; the other cases follow by a similar computation. We have

P̃({HH, TT} ∩ {HH, HT}) = P̃({HH}) = 1/4 = (1/2) · (1/2) = P̃({HH, TT}) · P̃({HH, HT})

(d) Show that σ(X) and σ(S₁) are not independent under P.

Solution. P({HH, TT} ∩ {HH, HT}) = P({HH}) = 4/9, while P({HH, TT}) · P({HH, HT}) = (5/9) · (6/9) = 10/27 ≠ 4/9.
(e) Under P, we have P{S₁ = 8} = 2/3 and P{S₁ = 2} = 1/3. Explain intuitively why, if you are told that X = 1, you would want to revise your estimate of the distribution of S₁.

Solution. We first note that under P, X and S₁ are dependent (as shown in part (d) above). In fact, a simple calculation shows that P(S₁ = 8 | X = 1) = 1/2 and P(S₁ = 2 | X = 1) = 1/2. Intuitively, we are finding the conditional distribution here. Since we are conditioning on the knowledge of a random variable, i.e. we already have some prior information, the effective sample space shrinks and we need to revise the distribution of S₁.
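Because the sample space has only four points, all the independence claims of this exercise can be checked exhaustively. The sketch below does so with exact fractions; the variable names are mine, not the text's.

```python
from fractions import Fraction as F

# Exhaustive check on the two-coin-toss space that sigma(X) and
# sigma(S1) are independent under the uniform measure but not under
# the (4/9, 2/9, 2/9, 1/9) measure.
unif   = {'HH': F(1, 4), 'HT': F(1, 4), 'TH': F(1, 4), 'TT': F(1, 4)}
biased = {'HH': F(4, 9), 'HT': F(2, 9), 'TH': F(2, 9), 'TT': F(1, 9)}

sigma_X  = [set(), {'HH', 'HT', 'TH', 'TT'}, {'HT', 'TH'}, {'HH', 'TT'}]  # X = 1 iff S2 = 4
sigma_S1 = [set(), {'HH', 'HT', 'TH', 'TT'}, {'HH', 'HT'}, {'TH', 'TT'}]  # first toss

def measure(m, A):
    return sum(m[w] for w in A)

def independent(m):
    """True iff m(A ∩ B) = m(A) m(B) for all A in sigma(X), B in sigma(S1)."""
    return all(measure(m, A & B) == measure(m, A) * measure(m, B)
               for A in sigma_X for B in sigma_S1)

print(independent(unif), independent(biased))
```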
(2.6) Consider a probability space Ω = {a, b, c, d}. Let F be the collection of all subsets of Ω. Define a probability measure P by specifying that P{a} = 1/6, P{b} = 1/3, P{c} = 1/4, P{d} = 1/4. Next we define two random variables, X and Y, by the formulas

X(a) = X(b) = 1, X(c) = X(d) = −1
Y(a) = 1, Y(b) = −1, Y(c) = 1, Y(d) = −1

We then define Z = X + Y.

(a) List the sets in σ(X).

Solution. It is easy to see that σ(X) = {∅, Ω, {a, b}, {c, d}}.

(b) Determine E[Y|X] and verify the partial-averaging property.

Solution. A quick calculation gives us

E[Y|X](a) = E[Y|X](b) = −1/3, E[Y|X](c) = E[Y|X](d) = 0

Let us now verify the partial-averaging property. We have

∫_{{a,b}} E[Y|X] dP = (−1/3) · (1/2) = −1/6 = ∫_{{a,b}} Y dP
∫_{{c,d}} E[Y|X] dP = 0 = ∫_{{c,d}} Y dP
∫_Ω E[Y|X] dP = −1/6 + 0 = −1/6 = ∫_Ω Y dP
∫_∅ E[Y|X] dP = 0 = ∫_∅ Y dP  (vacuously true)
(c) Determine E[Z|X] and verify the partial-averaging property.

Solution. E[Z|X](a) = E[Z|X](b) = 2/3 and E[Z|X](c) = E[Z|X](d) = −1. Let us now verify the partial-averaging property:

∫_{{a,b}} E[Z|X] dP = (2/3) · (1/2) = 1/3 = ∫_{{a,b}} Z dP
∫_{{c,d}} E[Z|X] dP = (−1) · (1/2) = −1/2 = ∫_{{c,d}} Z dP
∫_Ω E[Z|X] dP = 1/3 − 1/2 = −1/6 = ∫_Ω Z dP
∫_∅ E[Z|X] dP = 0 = ∫_∅ Z dP  (vacuously true)
(d) Compute E[Z|X] − E[Y|X]. Explain why you get X.

Solution. As we can see from the computations in parts (b) and (c) above, the random variable E[Z|X] − E[Y|X] equals X on Ω. Alternatively, by linearity of conditional expectation and the definition of Z, E[Z|X] − E[Y|X] = E[X|X] = X. The last equality follows by the 'taking out what is known' property and the fact that E[1|X] = 1.
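On a finite probability space, conditional expectation given X is just a P-weighted average over each atom of σ(X), so all four parts can be reproduced in a few lines. The sketch below uses exact fractions; the helper `cond_exp` is my notation.

```python
from fractions import Fraction as F

# Direct computation of E[Y|X] and E[Z|X] on the four-point space of
# Exercise (2.6). The conditional expectation is constant on each atom
# {a,b}, {c,d} of sigma(X) and equals the weighted average over it.
P = {'a': F(1, 6), 'b': F(1, 3), 'c': F(1, 4), 'd': F(1, 4)}
X = {'a': 1, 'b': 1, 'c': -1, 'd': -1}
Y = {'a': 1, 'b': -1, 'c': 1, 'd': -1}
Z = {w: X[w] + Y[w] for w in P}

def cond_exp(V, atom):
    """E[V | atom] = (sum of V dP over the atom) / P(atom)."""
    return sum(V[w] * P[w] for w in atom) / sum(P[w] for w in atom)

atoms = [('a', 'b'), ('c', 'd')]
EY = {w: cond_exp(Y, atom) for atom in atoms for w in atom}
EZ = {w: cond_exp(Z, atom) for atom in atoms for w in atom}

print(EY['a'], EZ['a'], all(EZ[w] - EY[w] == X[w] for w in P))
```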
(2.7) Let Y be an integrable random variable on a probability space (Ω, F, P) and let G be a sub-σ-algebra of F. Based on the information in G, we can form the estimate E[Y|G] of Y and define the error of the estimation Err = Y − E[Y|G]. This is a random variable with expectation zero and some variance Var(Err). Let X be some other G-measurable random variable, which we can regard as another estimate of Y. Show that

Var(Err) ≤ Var(Y − X)

In other words, the estimate E[Y|G] minimizes the error variance among all estimates based on the information in G.
Proof. Let us first note that S := E[Y|G] is G-measurable. We thus have

E[S · E[Y|G]] = E[E[SY | G]]  (taking out what is known)
= E[SY] = E[Y · E[Y|G]]    (0.16)

Moreover, for any constant µ, using the fact that X is G-measurable, we have

E[E[Y|G](X + µ)] = E[E[(X + µ)Y | G]]  (taking out what is known)
= E[(X + µ)Y]    (0.17)

Now let µ = E[Y − X]. We have

Var(Y − X) = E[(Y − X − µ)²]
= E[((Y − E[Y|G]) + (E[Y|G] − X − µ))²]
= E[(Y − E[Y|G])²] + 2E[(Y − E[Y|G])(E[Y|G] − X − µ)] + E[(E[Y|G] − X − µ)²]
≥ E[(Y − E[Y|G])²] + 2E[(Y − E[Y|G])(E[Y|G] − X − µ)]
= E[(Y − E[Y|G])²] = Var(Err)

The cross term vanishes: expanding it gives 2(E[Y · E[Y|G]] − E[S · E[Y|G]]) − 2(E[(X + µ)Y] − E[E[Y|G](X + µ)]), which is 0 by (0.16) and (0.17).

(We have to assume, for the cancellation of the cross terms, that all expectations of the form E[ST], where S and T are the random variables relevant to our computation, are finite.)
(2.10) Let X and Y be random variables (on some unspecified probability space (Ω, F, P)), assume they have a joint density f_{X,Y}(x, y), and assume E|Y| < ∞. In particular, for every Borel subset C of ℝ², we have

P{(X, Y) ∈ C} = ∫∫_C f_{X,Y}(x, y) dx dy

Define

g(x) = ∫_{−∞}^∞ y (f_{X,Y}(x, y) / f_X(x)) dy

where f_X(x) = ∫_ℝ f_{X,Y}(x, η) dη is the marginal density of X, and we must assume it is strictly positive for every x. We want to show

E[Y|X] = g(X)

Since g(X) is obviously σ(X)-measurable, to verify that E[Y|X] = g(X) we need only check that the partial-averaging property is satisfied. For every Borel-measurable function h mapping ℝ to ℝ and satisfying E|h(X)| < ∞, we have

E h(X) = ∫_ℝ h(x) f_X(x) dx    (0.18)

Similarly, if h is a function of both x and y, then

E h(X, Y) = ∫_ℝ ∫_ℝ h(x, y) f_{X,Y}(x, y) dx dy    (0.19)

whenever (X, Y) has a joint density f_{X,Y}(x, y). You may use both (0.18) and (0.19) in your solution to this problem.

Show the partial-averaging property: for every A ∈ σ(X),

∫_A g(X) dP = ∫_A Y dP
Proof. We would like to use (0.19) in our proof. For that, we need to verify that E|g(X)| < ∞. Let us first get that out of the way:

E|g(X)| = ∫_Ω |g(X)| dP(ω)
= ∫_ℝ |g(x)| f_X(x) dx  (by Theorem 1.5.2)
≤ ∫_ℝ ∫_ℝ |y| f_{X,Y}(x, y) dy dx  (by the definition of g)
< ∞  (as E|Y| < ∞ by hypothesis)    (0.20)

For any A ∈ σ(X), let B be a Borel-measurable subset of ℝ such that A = {X ∈ B}; the existence of such a set follows from the definition of σ(X). Then

∫_A g(X)(ω) dP(ω) = ∫_Ω g(X) 1_A dP(ω) = ∫_{−∞}^∞ g(x) 1_B(x) f_X(x) dx  (by Theorem 1.3.4(iii) and (0.20))    (0.21)

We also have

∫_A Y dP = ∫_Ω 1_A(ω) Y(ω) dP(ω)
= ∫_{−∞}^∞ ∫_{−∞}^∞ y 1_B(x) f_{X,Y}(x, y) dx dy  (by (0.19))
= ∫_{−∞}^∞ 1_B(x) [ ∫_{−∞}^∞ y f_{X,Y}(x, y) dy ] dx    (*)

The last equality is valid by Fubini's theorem (which, of course, can be applied as E|Y| < ∞). By the definition of g,

(*) = ∫_{−∞}^∞ g(x) 1_B(x) f_X(x) dx    (0.22)

Comparing (0.21) and (0.22) gives the desired partial-averaging identity.
(3.1) According to Definition 3.3.3(iii), for 0 ≤ t ≤ u, the Brownian motion incrementW (u)−W (t) is independent of the σ -algebra F (t). Use this property and property(i) of that definition to show that for 0 ≤ t < u1 < u2, the increment W (u2)−W (u1) isalso independent of F (t).
Proof. Let X be a random variable defined on some probability space (Ω, F, P). Let us first observe that if G is a sub-σ-algebra of H and X is independent of H, then X is independent of G. Indeed, if A ∈ σ(X) and B ∈ G, then P(A ∩ B) = P(A) · P(B).
This equality follows because B is also in H and, by assumption, X is independent of H.
With this observation in mind, let us prove that W(u₂) − W(u₁) is independent of F(t). Note that, by the definition of Brownian motion, the increment W(u₂) − W(u₁) is independent of F(u₁). Since the collection F(s), s ≥ 0, is a filtration, F(t) is a sub-σ-algebra of F(u₁), and the observation above proves that W(u₂) − W(u₁) is also independent of F(t).
(3.2) Let W(t), t ≥ 0, be a Brownian motion, and let F(t), t ≥ 0, be a filtration for this Brownian motion. Show that W²(t) − t is also a martingale.

Proof. Let 0 ≤ s ≤ t be given. Writing W²(t) = (W(t) − W(s))² + 2W(s)W(t) − W²(s) and using linearity of conditional expectation, we get

E[W²(t) − t | F(s)] = E[(W(t) − W(s))² + 2W(s)W(t) − W²(s) − t | F(s)]
= E[(W(t) − W(s))² | F(s)] + 2E[W(s)W(t) | F(s)] − E[W²(s) | F(s)] − t    (0.23)

Since the increment W(t) − W(s) is independent of F(s) and has mean 0 and variance t − s,

E[(W(t) − W(s))² | F(s)] = E[(W(t) − W(s))²] = t − s    (0.24)

As W(s) is F(s)-measurable,

E[W²(s) | F(s)] = W²(s)    (0.25)

Moreover,

2E[W(s)W(t) | F(s)] = 2E[W(s)(W(t) − W(s)) + W²(s) | F(s)]
= 2W(s) E[W(t) − W(s) | F(s)] + 2W²(s)  (taking out what is known and (0.25))
= 2W²(s)  (since W(t) − W(s) is independent of F(s) with mean 0)    (0.26)

From (0.23), (0.24), (0.25) and (0.26), we get

E[W²(t) − t | F(s)] = (t − s) + 2W²(s) − W²(s) − t = W²(s) − s    (0.27)

thereby proving that W²(t) − t is a martingale. (It is trivially true that W²(t) − t is an F(t)-adapted stochastic process.)
(3.4) (Other variations of Brownian motion.) Theorem 3.4.3 asserts that if T is a positive number and we choose a partition Π with points 0 = t₀ < t₁ < t₂ < ... < tₙ = T, then as the number n of partition points approaches ∞ and the length of the longest sub-interval approaches 0, the sample quadratic variation

Σ_{j=0}^{n−1} (W(t_{j+1}) − W(t_j))²

approaches T for almost every path of the Brownian motion W. In Remark 3.4.5, we further showed that dW(t) dW(t) = dt, dW(t) dt = 0, and dt dt = 0.
(a) Show that as the number of partition points approaches ∞ and the length of the longest sub-interval approaches 0, the sample first variation

Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)|

approaches ∞ for almost every path of the Brownian motion W.

Proof. Let Ω′ ⊂ Ω denote the full-measure set on which the quadratic variation of the Brownian motion up to time T equals T. From now on, we assume all our computations are on this set Ω′.

Let δ(n) = max_{0≤j≤n−1} |W(t_{j+1}) − W(t_j)|. Since W is continuous on the compact interval [0, T], it is uniformly continuous there, and so δ(n) → 0 as n → ∞. Moreover, on Ω′ we have δ(n) > 0: if δ(n) were 0, W would be constant on [0, T] and its quadratic variation would be 0 rather than T. We can hence safely divide by δ(n) and get

Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)| ≥ (Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)|²) / δ(n)    (0.28)

As n → ∞, the numerator on the RHS of (0.28) approaches T > 0 while δ(n) → 0, so the RHS, and with it the sample first variation, approaches ∞.
(b) Show that as the number n of partition points approaches ∞ and the length of the longest sub-interval approaches 0, the sample cubic variation

Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)|³

approaches 0 for almost every path of the Brownian motion W.

Proof. Let Ω′ and δ(n) be as in the previous part. As before, we assume all our computations are on this set Ω′. We have

Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)|³ ≤ δ(n) · Σ_{j=0}^{n−1} (W(t_{j+1}) − W(t_j))²    (0.29)

As n → ∞, the sum on the RHS of (0.29) approaches T while δ(n) → 0, so the RHS, and hence the sample cubic variation, approaches 0.
Remark: This exercise in particular tells us why the quadratic variation is the natural variation of Brownian motion to study. In fact, arguing very much as in part (a), we can show that for any 0 < ε < 2 the ε-th variation Σ_{j=0}^{n−1} |W(t_{j+1}) − W(t_j)|^ε → ∞ as n → ∞, and arguing as in part (b) that for any ε > 2 it tends to 0. Both claims follow by observing that if δ(n) → 0 as n → ∞, then so does δ(n)^{ε'} for any ε' > 0.
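The contrast between the three variations shows up clearly along a single simulated path: refining the partition blows up the first variation, stabilises the quadratic variation near T, and kills the cubic variation. In the sketch below, T = 1 and the step counts are arbitrary choices.

```python
import math, random

# Along one simulated Brownian path on [0, 1], compute the first,
# quadratic and cubic variations over a coarse and a fine partition.
random.seed(3)
T, n = 1.0, 2 ** 18
dt = T / n
increments = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

def variations(level):
    """Variations over the coarsened partition with 2**level intervals."""
    block = n // 2 ** level
    steps = [sum(increments[i:i + block]) for i in range(0, n, block)]
    return (sum(abs(s) for s in steps),        # first variation
            sum(s * s for s in steps),         # quadratic variation
            sum(abs(s) ** 3 for s in steps))   # cubic variation

coarse = variations(6)    # 64 intervals
fine = variations(18)     # 262144 intervals
print(coarse, fine)
```

Refining a partition can only increase the first variation (triangle inequality), so the growth of the first component is not a sampling accident.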
(3.5) (Black-Scholes-Merton formula.) Let the interest rate r and the volatility σ > 0 be constant. Let

S(t) = S(0) e^{(r − σ²/2)t + σW(t)}

be a geometric Brownian motion with mean rate of return r, where the initial stock price S(0) is positive. Let K be a positive constant. Show that, for T > 0,

E[e^{−rT}(S(T) − K)⁺] = S(0) N(d₊(T, S(0))) − K e^{−rT} N(d₋(T, S(0))),

where

d±(T, S(0)) = (1/(σ√T)) [ log(S(0)/K) + (r ± σ²/2) T ],

and N is the standard normal cumulative distribution function

N(y) = (1/√(2π)) ∫_{−∞}^y e^{−z²/2} dz = (1/√(2π)) ∫_{−y}^∞ e^{−z²/2} dz    (0.30)
Proof. Fix T > 0. We begin by noting that dS/dW = σS > 0, so S(T) is an increasing function of W(T). Hence S(T) > K iff

W(T) > x₀ := (1/σ) [ log(K/S(0)) + (σ²/2 − r) T ]

With this observation and the fact that W(T) is a normal random variable with mean 0 and variance T, we have

E[e^{−rT}(S(T) − K)⁺]
= (S(0)/√(2πT)) ∫_{x₀}^∞ e^{−σ²T/2 + σx − x²/(2T)} dx − (K e^{−rT}/√(2πT)) ∫_{x₀}^∞ e^{−x²/(2T)} dx
= (S(0)/√(2πT)) ∫_{x₀}^∞ e^{−(x/√(2T) − σ√T/√2)²} dx − (K e^{−rT}/√(2πT)) ∫_{x₀}^∞ e^{−x²/(2T)} dx
= (S(0)/√(2π)) ∫_{y₀}^∞ e^{−y²/2} dy − (K e^{−rT}/√(2π)) ∫_{z₀}^∞ e^{−z²/2} dz    (*)

where

y₀ = (1/(σ√T)) [ log(K/S(0)) − (r + σ²/2) T ] and z₀ = (1/(σ√T)) [ log(K/S(0)) − (r − σ²/2) T ]

The last equality follows by the changes of variables y = x/√T − σ√T and z = x/√T.

Note that −y₀ = (1/(σ√T)) [ log(S(0)/K) + (r + σ²/2) T ] = d₊(T, S(0)) and, likewise, −z₀ = d₋(T, S(0)). Using (0.30), (*) can be written as

(*) = (S(0)/√(2π)) ∫_{−∞}^{d₊(T,S(0))} e^{−y²/2} dy − K e^{−rT} (1/√(2π)) ∫_{−d₋(T,S(0))}^∞ e^{−z²/2} dz
= S(0) N(d₊(T, S(0))) − K e^{−rT} N(d₋(T, S(0)))
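The closed-form price just derived can be checked against a direct Monte Carlo estimate of E[e^{−rT}(S(T) − K)⁺]. The parameter values and sample size below are arbitrary choices.

```python
import math, random

# Monte Carlo check of the Black-Scholes-Merton formula:
# E[exp(-rT)(S(T)-K)^+]  vs  S0*N(d+) - K*exp(-rT)*N(d-).
random.seed(4)
S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 400_000

def N(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d_plus = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d_minus = d_plus - sigma * math.sqrt(T)
closed_form = S0 * N(d_plus) - K * math.exp(-r * T) * N(d_minus)

acc = 0.0
for _ in range(n):
    w = random.gauss(0.0, math.sqrt(T))                          # W(T)
    s_T = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * w)  # S(T)
    acc += math.exp(-r * T) * max(s_T - K, 0.0)                  # discounted payoff
mc_price = acc / n

print(mc_price, closed_form)
```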
(3.6) Let W (t) be a Brownian motion and let F (t), t ≥ 0 be an associated filtration.
(a) For µ ∈ ℝ, consider the Brownian motion with drift µ,

X(t) = µt + W(t)

Show that for any Borel-measurable function f(y), and for any 0 ≤ s < t, the function

g(x) = (1/√(2π(t−s))) ∫_ℝ f(y) e^{−(y−x−µ(t−s))²/(2(t−s))} dy

satisfies E[f(X(t)) | F(s)] = g(X(s)), and hence X has the Markov property. We may write g(x) as g(x) = ∫_ℝ f(y) p(τ, x, y) dy, where τ = t − s and

p(τ, x, y) = (1/√(2πτ)) e^{−(y−x−µτ)²/(2τ)}

is the transition density for the Brownian motion with drift µ.

Proof. Clearly X(t) is an adapted stochastic process with respect to the filtration F(t). Note that

E[f(X(t)) | F(s)] = E[f( (W(t) − W(s) + µ(t−s)) + (µs + W(s)) ) | F(s)]

The random variable W(t) − W(s) + µ(t−s) is independent of F(s) (the σ-algebra generated by a random variable X is the same as the σ-algebra generated by X + c for any constant c), and the random variable µs + W(s) is F(s)-measurable. This permits us to apply the Independence Lemma, Lemma 2.3.4.

In order to compute the expectation above, we replace W(s) + µs = X(s) by a dummy variable x and take the unconditional expectation of the remaining random variable. Since W(t) − W(s) + µ(t−s) is normally distributed with mean µ(t−s) and variance t−s, we define

g(x) = (1/√(2π(t−s))) ∫_ℝ f(w + x) e^{−(w−µ(t−s))²/(2(t−s))} dw
= (1/√(2π(t−s))) ∫_ℝ f(y) e^{−(y−x−µ(t−s))²/(2(t−s))} dy  (changing variables y = w + x)    (0.31)

The Independence Lemma states that if we now take the function g(x) defined in (0.31) and replace the dummy variable x by the random variable X(s), then E[f(X(t)) | F(s)] = g(X(s)), and hence X has the Markov property.
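The transition density p(τ, x, y) can be sanity-checked numerically: starting the drifted Brownian motion at x, a Monte Carlo estimate of E[f(X(s+τ))] should agree with the quadrature ∫ f(y) p(τ, x, y) dy. The test function f = cos and all parameters below are arbitrary choices.

```python
import math, random

# Check of the transition density for X(t) = mu*t + W(t):
# Monte Carlo of E[f(X(s+tau)) | X(s) = x] vs quadrature of
# ∫ f(y) p(tau, x, y) dy over a wide truncated range.
random.seed(9)
mu, x, tau, n = 0.5, 1.0, 2.0, 400_000

def f(y):
    return math.cos(y)

def p(tau, x, y):
    """Transition density of Brownian motion with drift mu."""
    return math.exp(-(y - x - mu * tau) ** 2 / (2 * tau)) / math.sqrt(2 * math.pi * tau)

# Monte Carlo: X(s + tau) = x + mu*tau + (normal increment with variance tau)
mc = sum(f(x + mu * tau + random.gauss(0.0, math.sqrt(tau))) for _ in range(n)) / n

# Midpoint-rule quadrature, truncated 10 standard deviations out
lo = x + mu * tau - 10 * math.sqrt(tau)
hi = x + mu * tau + 10 * math.sqrt(tau)
m = 20_000
h = (hi - lo) / m
quad = h * sum(f(lo + (i + 0.5) * h) * p(tau, x, lo + (i + 0.5) * h) for i in range(m))

print(mc, quad)
```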
(b) For ν ∈ ℝ and σ > 0, consider the geometric Brownian motion

S(t) = S(0) e^{σW(t) + νt}

Set τ = t − s and

ρ(τ, x, y) = (1/(σy√(2πτ))) e^{−(log(y/x) − ντ)²/(2σ²τ)}

Show that for any Borel-measurable function f(y) and for any 0 ≤ s < t, the function g(x) = ∫₀^∞ f(y) ρ(τ, x, y) dy satisfies E[f(S(t)) | F(s)] = g(S(s)); hence S has the Markov property and ρ(τ, x, y) is its transition density.
Proof. Clearly, S(t) is an adapted stochastic process with respect to the filtration F(t). Note that

E[f(S(t)) | F(s)] = E[f( S(0) exp(σ(W(t) − W(s)) + ν(t−s) + νs + σW(s)) ) | F(s)]

The random variable σ(W(t) − W(s)) + ν(t−s) is independent of F(s), and the random variable νs + σW(s) is F(s)-measurable. This permits us to apply the Independence Lemma, Lemma 2.3.4.

In order to compute the expectation above, we replace S(0)e^{σW(s)+νs} = S(s) by a dummy variable x and take the unconditional expectation of the remaining random variable. Since σ(W(t) − W(s)) + ν(t−s) is normally distributed with mean ν(t−s) and variance σ²(t−s), we define

g(x) = (1/(σ√(2π(t−s)))) ∫_ℝ f(x e^w) e^{−(w−ν(t−s))²/(2σ²(t−s))} dw
= ∫₀^∞ f(y) (1/(σy√(2π(t−s)))) e^{−(log(y/x)−ν(t−s))²/(2σ²(t−s))} dy  (changing variables y = x e^w)    (0.32)

The Independence Lemma states that if we now take the function g(x) defined in (0.32) and replace the dummy variable x by S(s), then E[f(S(t)) | F(s)] = g(S(s)); hence S has the Markov property, and moreover

ρ(τ, x, y) = (1/(σy√(2πτ))) e^{−(log(y/x) − ντ)²/(2σ²τ)}

is its transition density.
(3.7) Theorem 3.6.2 provides the Laplace transform of the density of the first passage time for Brownian motion. This problem derives the analogous formula for Brownian motion with drift. Let W be a Brownian motion. Fix m > 0 and µ ∈ ℝ. For 0 ≤ t < ∞, we define

X(t) = µt + W(t),
τ_m = min{t ≥ 0 : X(t) = m}

As usual, we set τ_m = ∞ if X(t) never reaches the level m. Let σ be a positive number and set

Z(t) = e^{σX(t) − (σµ + σ²/2)t}
(a) Show that Z(t), t ≥ 0, is a martingale.

Proof. Clearly, Z(t) is an adapted stochastic process with respect to the filtration F(t), where F(t) denotes the filtration of sub-σ-algebras of F associated with the Brownian motion W. We have, for 0 ≤ s ≤ t,

E[Z(t) | F(s)] = E[e^{σX(t) − (σµ + σ²/2)t} | F(s)] = E[e^{σW(t) − σ²t/2} | F(s)]
= E[e^{σ(W(t)−W(s)) + σW(s) − σ²t/2} | F(s)]
= e^{σW(s) − σ²t/2} · E[e^{σ(W(t)−W(s))} | F(s)]  (taking out what is known)
= e^{σW(s) − σ²t/2} · E[e^{σ(W(t)−W(s))}]  (independence of W(t) − W(s) from F(s))
= e^{σW(s) − σ²t/2} · e^{σ²(t−s)/2}  (by (3.2.13) in the text)
= e^{σW(s) − σ²s/2} = e^{σX(s) − (σµ + σ²/2)s}
= Z(s)
(b) Use (a) to conclude that

E[e^{σX(t∧τ_m) − (σµ + σ²/2)(t∧τ_m)}] = 1, t ≥ 0    (0.33)

Proof. A martingale that is frozen at a stopping time is still a martingale, and in particular has constant expectation. Hence 1 = Z(0) = E[Z(t ∧ τ_m)].
(c) Now suppose that µ ≥ 0. Show that, for σ > 0,

E[e^{σm − (σµ + σ²/2)τ_m} 1_{{τ_m<∞}}] = 1

Use this fact to show that P{τ_m < ∞} = 1 and to obtain the Laplace transform

E e^{−ατ_m} = e^{mµ − m√(2α + µ²)} for all α > 0    (0.34)

Proof. Since X(t ∧ τ_m) ≤ m, the random variable e^{σX(t∧τ_m)} takes values in [0, e^{σm}] for all t ≥ 0. Hence

0 ≤ E[e^{σX(t∧τ_m)}] ≤ e^{σm} for all t ≥ 0    (0.35)

If τ_m < ∞, then for large enough t we have t ∧ τ_m = τ_m, and the term e^{−(σµ + σ²/2)(t∧τ_m)} equals e^{−(σµ + σ²/2)τ_m}. If τ_m = ∞, then e^{−(σµ + σ²/2)(t∧τ_m)} = e^{−(σµ + σ²/2)t}, which goes to 0 as t → ∞ because σµ + σ²/2 > 0. We combine these two cases by writing

lim_{t→∞} e^{−(σµ + σ²/2)(t∧τ_m)} = 1_{{τ_m<∞}} e^{−(σµ + σ²/2)τ_m}

Here 1_{{τ_m<∞}} denotes the random variable which is 1 if τ_m < ∞ and takes the value 0 otherwise.

If τ_m < ∞, then e^{σX(t∧τ_m)} = e^{σX(τ_m)} = e^{σm} for large enough t. If τ_m = ∞, then we know from (0.35) that 0 ≤ e^{σX(t∧τ_m)} ≤ e^{σm} for all t, which ensures that e^{σX(t∧τ_m)} e^{−(σµ + σ²/2)(t∧τ_m)} → 0 as t → ∞. We combine all these observations into

lim_{t→∞} e^{σX(t∧τ_m) − (σµ + σ²/2)(t∧τ_m)} = 1_{{τ_m<∞}} e^{σm − (σµ + σ²/2)τ_m}    (0.36)

Taking expectations on both sides of (0.36) and using (0.33) gives us

E[1_{{τ_m<∞}} e^{σm − (σµ + σ²/2)τ_m}] = 1    (0.37)

The interchange of limit and expectation in the last step is justified by the Lebesgue dominated convergence theorem.

We now take the limit as σ → 0 in (0.37) and use the dominated convergence theorem again to get

E[1_{{τ_m<∞}}] = 1

Thus P{τ_m < ∞} = 1. Because τ_m is finite almost surely, we may drop the indicator of this event in (0.37) to obtain

E[e^{−(σµ + σ²/2)τ_m}] = e^{−σm}    (0.38)

We let α be a positive constant and set σµ + σ²/2 = α, i.e. σ = −µ + √(2α + µ²). From (0.38) we get

E[e^{−ατ_m}] = e^{mµ − m√(2α + µ²)} for all α > 0    (0.39)

(The other root, σ = −µ − √(2α + µ²), is discarded as σ would then be negative.)
(d) Show that if µ > 0, then E τ_m < ∞. Obtain a formula for E τ_m.

Proof. We differentiate (0.34) with respect to α (differentiation under the expectation sign is permitted by use of the Lebesgue dominated convergence theorem) to get

E[τ_m e^{−ατ_m}] = e^{mµ − m√(2α + µ²)} · m/√(2α + µ²)    (0.40)

Taking the limit as α → 0 on both sides of (0.40) and using the monotone convergence theorem gives us

E[τ_m] = m/µ

In particular, we have E τ_m < ∞.
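The formula E[τ_m] = m/µ can be checked by simulating the drifted Brownian motion on a time grid and recording when it first exceeds the level m. The step size, path count and time cap below are arbitrary choices, and discrete monitoring biases the estimate slightly upward.

```python
import math, random

# Simulation check of part (d): for X(t) = mu*t + W(t) with mu > 0,
# the first passage time to level m satisfies E[tau_m] = m/mu.
random.seed(5)
mu, m, dt, n_paths, t_max = 1.0, 1.0, 0.01, 2000, 20.0

taus = []
sqrt_dt = math.sqrt(dt)
for _ in range(n_paths):
    x, t = 0.0, 0.0
    while x < m and t < t_max:
        x += mu * dt + sqrt_dt * random.gauss(0.0, 1.0)  # Euler step of X
        t += dt
    taus.append(t)   # with mu > 0 the barrier is reached almost surely

mean_tau = sum(taus) / n_paths
print(mean_tau, m / mu)
```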
(e) Now suppose that µ < 0. Show that, for σ > −2µ,

E[e^{σm − (σµ + σ²/2)τ_m} 1_{{τ_m<∞}}] = 1

Use this fact to show that P{τ_m < ∞} = e^{−2m|µ|}, which is strictly less than 1, and to obtain the Laplace transform

E e^{−ατ_m} = e^{mµ − m√(2α + µ²)} for all α > 0

Proof. As before, the random variable e^{σX(t∧τ_m)} takes values in [0, e^{σm}] for all t ≥ 0. Hence

0 ≤ E[e^{σX(t∧τ_m)}] ≤ e^{σm} for all t ≥ 0    (0.41)

Note that for σ > −2µ we still have σµ + σ²/2 = σ(µ + σ/2) > 0, so the argument of part (c) applies verbatim: if τ_m < ∞, then for large enough t, t ∧ τ_m = τ_m and e^{−(σµ + σ²/2)(t∧τ_m)} = e^{−(σµ + σ²/2)τ_m}; if τ_m = ∞, then e^{−(σµ + σ²/2)(t∧τ_m)} = e^{−(σµ + σ²/2)t} → 0 as t → ∞. Likewise, if τ_m < ∞, then e^{σX(t∧τ_m)} = e^{σm} for large enough t, while if τ_m = ∞, then (0.41) ensures that e^{σX(t∧τ_m)} e^{−(σµ + σ²/2)(t∧τ_m)} → 0 as t → ∞. We combine these observations into

lim_{t→∞} e^{σX(t∧τ_m) − (σµ + σ²/2)(t∧τ_m)} = 1_{{τ_m<∞}} e^{σm − (σµ + σ²/2)τ_m}    (0.42)

Taking expectations on both sides of (0.42) and using (0.33) gives us

E[1_{{τ_m<∞}} e^{σm − (σµ + σ²/2)τ_m}] = 1    (0.43)

The interchange of limit and expectation in the last step is justified by the Lebesgue dominated convergence theorem.

We now take the limit as σ → −2µ in (0.43) and use the dominated convergence theorem to get

E[1_{{τ_m<∞}}] = e^{2mµ} = e^{−2m|µ|}

Thus P{τ_m < ∞} = e^{−2m|µ|}. From (0.43) we have

E[1_{{τ_m<∞}} e^{−(σµ + σ²/2)τ_m}] = e^{−σm}    (0.44)

We let α be a positive constant and set σµ + σ²/2 = α, i.e. σ = −µ + √(2α + µ²). From (0.44) we get

E[1_{{τ_m<∞}} e^{−ατ_m}] = e^{mµ − m√(2α + µ²)} for all α > 0    (0.45)

(The other root, σ = −µ − √(2α + µ²), is discarded as σ would then be negative, and in particular would fail σ > −2µ > 0.)

On the set {τ_m = ∞}, the random variable e^{−ατ_m} equals 0, so E[1_{{τ_m=∞}} e^{−ατ_m}] = 0. Hence we have

E[e^{−ατ_m}] = E[1_{{τ_m<∞}} e^{−ατ_m}] + E[1_{{τ_m=∞}} e^{−ατ_m}] = e^{mµ − m√(2α + µ²)} for all α > 0    (0.46)
(4.1) Suppose M(t), 0 ≤ t ≤ T, is a martingale with respect to some filtration F(t), 0 ≤ t ≤ T. Let ∆(t), 0 ≤ t ≤ T, be a simple process adapted to F(t). For t ∈ [t_k, t_{k+1}), define the stochastic integral

I(t) = Σ_{j=0}^{k−1} ∆(t_j)[M(t_{j+1}) − M(t_j)] + ∆(t_k)[M(t) − M(t_k)]

We think of M(t) as the price of an asset at time t and ∆(t_j) as the number of shares of the asset held by an investor between times t_j and t_{j+1}. Then I(t) is the capital gain that accrues to the investor between times 0 and t. Show that I(t), 0 ≤ t ≤ T, is a martingale.
Proof. Clearly, I(s) is adapted to F(s). Consider 0 ≤ s ≤ t. When s = 0 or s = t, it is easy to see that E[I(t) | F(s)] = I(s); we thus consider 0 < s < t.

Let s ∈ (0, t) be such that s ∈ [t_j, t_{j+1}) for some 0 ≤ j ≤ k, with the convention that t_{k+1} = t. Then

I(t) = Σ_{i=0}^{j−1} ∆(t_i)[M(t_{i+1}) − M(t_i)] + Σ_{i=j}^{k} ∆(t_i)[M(t_{i+1}) − M(t_i)]

Note that Σ_{i=0}^{j−1} ∆(t_i)[M(t_{i+1}) − M(t_i)] is F(s)-measurable, since t_j ≤ s, M(t) is adapted (being a martingale with respect to the filtration F(t)) and ∆(t) is a simple process adapted to F(t). For the second sum,

Σ_{i=j}^{k} ∆(t_i)[M(t_{i+1}) − M(t_i)]
= ∆(t_j)[M(t_{j+1}) − M(s) + M(s) − M(t_j)] + Σ_{i=j+1}^{k} ∆(t_i)[M(t_{i+1}) − M(t_i)]
= ∆(t_j)[M(s) − M(t_j)] + ∆(t_j)[M(t_{j+1}) − M(s)] + Σ_{i=j+1}^{k} ∆(t_i)[M(t_{i+1}) − M(t_i)]    (0.47)

The first term on the right-hand side of the last equality in (0.47) is F(s)-measurable. Moreover, since ∆(t_j) is F(s)-measurable (as t_j ≤ s), the 'taking out what is known' property gives

E[∆(t_j) M(t_{j+1}) | F(s)] = ∆(t_j) E[M(t_{j+1}) | F(s)] = ∆(t_j) M(s)

the last equality following from the fact that M(t) is a martingale; hence E[∆(t_j)(M(t_{j+1}) − M(s)) | F(s)] = 0.

We also have, by the tower property (since s ≤ t_i for j+1 ≤ i ≤ k),

E[Σ_{i=j+1}^{k} ∆(t_i)(M(t_{i+1}) − M(t_i)) | F(s)] = Σ_{i=j+1}^{k} E[ E[∆(t_i)(M(t_{i+1}) − M(t_i)) | F(t_i)] | F(s)]
= Σ_{i=j+1}^{k} E[∆(t_i)( E[M(t_{i+1}) | F(t_i)] − M(t_i) ) | F(s)] = 0    (0.48)

The last equality in (0.48) follows because, for j+1 ≤ i ≤ k, ∆(t_i) is F(t_i)-measurable ('taking out what is known' at time t_i) and E[M(t_{i+1}) | F(t_i)] = M(t_i), as M(t) is a martingale.

Combining the observations obtained from (0.47) and (0.48), we can say

E[Σ_{i=j}^{k} ∆(t_i)[M(t_{i+1}) − M(t_i)] | F(s)] = ∆(t_j)(M(s) − M(t_j))    (0.49)

(0.49), along with the previously observed fact that Σ_{i=0}^{j−1} ∆(t_i)[M(t_{i+1}) − M(t_i)] is F(s)-measurable, gives us

E[I(t) | F(s)] = Σ_{i=0}^{j−1} ∆(t_i)[M(t_{i+1}) − M(t_i)] + ∆(t_j)(M(s) − M(t_j)) = I(s)

showing that I(t), 0 ≤ t ≤ T, is a martingale.
(4.2) Let W(t), 0 ≤ t ≤ T, be a Brownian motion, and let F(t), 0 ≤ t ≤ T, be an associated filtration. Let ∆(t), 0 ≤ t ≤ T, be a non-random simple process. For t ∈ [t_k, t_{k+1}), define the stochastic integral

I(t) = Σ_{j=0}^{k−1} ∆(t_j)[W(t_{j+1}) − W(t_j)] + ∆(t_k)[W(t) − W(t_k)]
(a) Show that whenever 0 ≤ s < t ≤ T, the increment I(t) − I(s) is independent of F(s).

Proof. We only need to show that I(t_k) − I(t_l) is independent of F(t_l) whenever t_k and t_l are two partition points with t_l < t_k. We have

I(t_k) − I(t_l) = Σ_{j=l}^{k−1} ∆(t_j)[W(t_{j+1}) − W(t_j)]

Since W(t) is a Brownian motion, each of the increments W(t_{j+1}) − W(t_j) is independent of F(t_j) for l ≤ j ≤ k−1, and hence independent of F(t_l), as F(t) is a filtration. Moreover, since ∆(t_j) is a non-random quantity, ∆(t_j)[W(t_{j+1}) − W(t_j)] is also independent of F(t_l) for l ≤ j ≤ k−1. (The σ-algebra generated by a random variable X is the same as the σ-algebra generated by cX if c ≠ 0. If c = 0, that particular term does not enter our computations and hence our claim still holds.)

Since I(t_k) − I(t_l) is a sum of random variables, each of which is independent of F(t_l), I(t_k) − I(t_l) is independent of F(t_l). (If X and Y are independent of a σ-algebra F, then so is f(X, Y) for any Borel-measurable function f.)
(b) Show that whenever 0 ≤ s < t ≤ T, the increment I(t) − I(s) is a normally distributed random variable with mean 0 and variance ∫_s^t ∆²(u) du.

Proof. Note that it suffices to show that I(t_k) − I(t_l) is normally distributed with zero mean and variance ∫_{t_l}^{t_k} ∆²(u) du whenever t_k and t_l are two partition points with t_l < t_k. We have

I(t_k) − I(t_l) = Σ_{j=l}^{k−1} ∆(t_j)[W(t_{j+1}) − W(t_j)]

Note that ∆ is non-random and constant on each sub-interval [t_j, t_{j+1}). Since W(t) is a Brownian motion, the increments W(t_{j+1}) − W(t_j) are independent normal random variables with zero mean and variance t_{j+1} − t_j for l ≤ j ≤ k−1. Hence Σ_{j=l}^{k−1} ∆(t_j)[W(t_{j+1}) − W(t_j)], being a sum of independent normal random variables, is itself normal with zero mean and variance

Σ_{j=l}^{k−1} ∆²(t_j)(t_{j+1} − t_j) = ∫_{t_l}^{t_k} ∆²(u) du
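Part (b) is easy to see in simulation: for a non-random simple process, the stochastic integral is a fixed linear combination of independent normal increments. The ∆ values and sample size below are arbitrary choices.

```python
import random

# Simulation check of part (b): with Delta taking the non-random
# values 1, 2 and 0.5 on [0,1), [1,2) and [2,3), the integral
# I(3) = sum_j Delta(t_j)(W(t_{j+1}) - W(t_j)) should be normal with
# mean 0 and variance 1^2 + 2^2 + 0.5^2 = 5.25 = ∫_0^3 Delta^2(u) du.
random.seed(6)
deltas = [1.0, 2.0, 0.5]                   # value of Delta on each unit interval
target_var = sum(d * d for d in deltas)    # ∫_0^3 Delta^2(u) du

n = 200_000
samples = []
for _ in range(n):
    increments = [random.gauss(0.0, 1.0) for _ in deltas]  # W increments, variance 1
    samples.append(sum(d * g for d, g in zip(deltas, increments)))

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(mean, var, target_var)
```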
(c) Use (a) and (b) to show that I(t), 0 ≤ t ≤ T, is a martingale.

Proof. Clearly, I(s) is adapted to F(s). Consider 0 ≤ s ≤ t. When s = 0 or s = t, it is easy to see that E[I(t) | F(s)] = I(s); we thus consider 0 < s < t. We have

E[I(t) | F(s)] = E[I(t) − I(s) | F(s)] + E[I(s) | F(s)]
= E[I(t) − I(s)] + I(s)  (by part (a) and since I(s) is F(s)-measurable)
= I(s)  (by part (b), the increment has mean 0)    (0.50)
(d) Show that I²(t) − ∫_0^t Δ²(u) du, 0 ≤ t ≤ T, is a martingale.

Proof. Clearly, I²(t) − ∫_0^t Δ²(u) du is adapted to F(t). Fix 0 ≤ s ≤ t. By part (a) and part (b),

E[(I(t) − I(s))² | F(s)] = E[(I(t) − I(s))²] = ∫_s^t Δ²(u) du.   (0.51)

Moreover, by part (a) and the 'taking out what is known' property,

E[I(s)I(t) | F(s)] = E[I(s)(I(t) − I(s)) | F(s)] + E[I²(s) | F(s)]
                  = I(s) E[I(t) − I(s) | F(s)] + E[I²(s) | F(s)]
                  = I²(s).   (0.52)

We also have

E[(I(t) − I(s))² | F(s)] = E[I²(t) | F(s)] + E[I²(s) | F(s)] − 2E[I(t)I(s) | F(s)]
                        = E[I²(t) | F(s)] − I²(s)   (by (0.52)).   (0.53)

(0.51) and (0.53) give us

∫_s^t Δ²(u) du = E[I²(t) | F(s)] − I²(s),
∫_0^t Δ²(u) du − ∫_0^s Δ²(u) du = E[I²(t) | F(s)] − I²(s),
I²(s) − ∫_0^s Δ²(u) du = E[I²(t) | F(s)] − ∫_0^t Δ²(u) du.   (0.54)

In the RHS of the last equality in (0.54), we can take the Lebesgue integral inside the conditional expectation because the process Δ(u) is non-random, thus showing that I²(t) − ∫_0^t Δ²(u) du, 0 ≤ t ≤ T, is a martingale.
(4.6) Let S(t) = S(0) exp{σW(t) + (α − σ²/2)t} be a geometric Brownian motion. Let p be a positive constant. Compute d(S^p(t)), the differential of S(t) raised to the power p.

Proof. Note that X(t) = σW(t) + (α − σ²/2)t is an Itô process with dX(t) = σ dW(t) + (α − σ²/2) dt and dX(t) dX(t) = σ² dt. Using the Itô-Doeblin formula (4.4.23) with the function f(x) = S(0)^p e^{px}, so that S^p(t) = f(X(t)), gives us

d(S^p(t)) = p S^p(t) dX(t) + (1/2) p² S^p(t) dX(t) dX(t)
          = p S^p(t) [σ dW(t) + (α − σ²/2) dt + (p/2)σ² dt]
          = p S^p(t) [σ dW(t) + (α + (p−1)σ²/2) dt].
(4.7) (a) Compute dW⁴(t) and then write W⁴(T) as the sum of an ordinary Lebesgue integral and an Itô integral.

Proof. Using the Itô-Doeblin formula (4.4.1) with the function f(x) = x⁴ gives us

dW⁴(t) = 4W³(t) dW(t) + 6W²(t) dt.

Thus, by (4.4.2), since W(0) = 0,

W⁴(T) = ∫_0^T 4W³(t) dW(t) + ∫_0^T 6W²(t) dt.   (0.55)
(b) Take expectations on both sides of the formula you obtained in part (a) and show E[W⁴(T)] = 3T².

Proof. Taking expectations on both sides of (0.55) (and using Fubini's theorem), we have

E[W⁴(T)] = E[∫_0^T 4W³(t) dW(t)] + ∫_0^T 6E[W²(t)] dt.   (0.56)

The first term above is zero because I(t) = ∫_0^t 4W³(u) dW(u) is a martingale, so E[I(T)] = E[I(T) | F(0)] = I(0) = 0. Since W(t) is a Brownian motion, E[W²(t)] = t. Thus, (0.56) simplifies to

E[W⁴(T)] = ∫_0^T 6t dt = 3T².
(c) Use the method of (a) and (b) to find a formula for E[W⁶(T)].

Proof. Using the Itô-Doeblin formula (4.4.1) with the function f(x) = x⁶ gives us

dW⁶(t) = 6W⁵(t) dW(t) + 15W⁴(t) dt.   (0.57)

Thus, by (4.4.2), since W(0) = 0,

W⁶(T) = ∫_0^T 6W⁵(t) dW(t) + ∫_0^T 15W⁴(t) dt.   (0.58)

Taking expectations on both sides of (0.58) (and using Fubini's theorem), we have

E[W⁶(T)] = E[∫_0^T 6W⁵(t) dW(t)] + ∫_0^T 15E[W⁴(t)] dt.   (0.59)

The first term above is zero because I(t) = ∫_0^t 6W⁵(u) dW(u) is a martingale, so E[I(T)] = E[I(T) | F(0)] = I(0) = 0. By part (b), (0.59) simplifies to

E[W⁶(T)] = ∫_0^T 45t² dt = 15T³.
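Both moment formulas can be checked against a Monte Carlo sample of W(T) ~ N(0, T); this is a sanity check only, and the value of T below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2.0
W_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)   # W(T) ~ N(0, T)

# Compare sample moments with E[W^4(T)] = 3 T^2 and E[W^6(T)] = 15 T^3
print((W_T**4).mean(), 3 * T**2)
print((W_T**6).mean(), 15 * T**3)
```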
(4.8) (Solving the Vasicek equation) The Vasicek interest rate stochastic differential equation (4.4.32) is

dR(t) = (α − βR(t)) dt + σ dW(t),

where α, β and σ are positive constants. The solution to this equation is given in Example 4.4.10. This exercise shows how to derive this solution.

(a) Use (4.4.32) and the Itô-Doeblin formula to compute d(e^{βt}R(t)). Simplify it so that you have a formula for d(e^{βt}R(t)) that does not involve R(t).

Proof. Let f(t,x) = e^{βt}x. Then f_t(t,x) = βf(t,x), f_x(t,x) = e^{βt}, f_xx(t,x) = 0. Using the Itô-Doeblin formula (4.4.23) with f(t,x) = e^{βt}x gives

d(e^{βt}R(t)) = βe^{βt}R(t) dt + e^{βt} dR(t)
             = e^{βt}(α dt + σ dW(t)).   (0.60)

(b) Integrate the equation you obtained in part (a) and solve for R(t) to obtain (4.4.33).

Proof. We integrate both sides of (0.60) between 0 and t to get

e^{βt}R(t) − R(0) = (α/β)(e^{βt} − 1) + σ ∫_0^t e^{βs} dW(s).   (0.61)

Adding R(0) to and multiplying by e^{−βt} on both sides of (0.61) gives us the desired equation (4.4.33).
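The closed-form solution (4.4.33) implies E[R(t)] = e^{−βt}R(0) + (α/β)(1 − e^{−βt}) and Var R(t) = σ²(1 − e^{−2βt})/(2β). As a hedged sanity check, a crude Euler-Maruyama simulation of the Vasicek SDE can be compared against these moments; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, sigma = 0.5, 1.2, 0.3            # illustrative parameters
R0, T, n_steps, n_paths = 0.05, 1.0, 2000, 50_000
dt = T / n_steps

# Euler-Maruyama scheme for dR = (alpha - beta R) dt + sigma dW
R = np.full(n_paths, R0)
for _ in range(n_steps):
    R += (alpha - beta * R) * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=n_paths)

# Moments implied by the closed-form solution (4.4.33)
mean_exact = np.exp(-beta * T) * R0 + (alpha / beta) * (1 - np.exp(-beta * T))
var_exact = sigma**2 * (1 - np.exp(-2 * beta * T)) / (2 * beta)
print(R.mean(), mean_exact)
print(R.var(), var_exact)
```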
(4.9) In this exercise, we will verify that a given function solves the Black-Scholes-Merton PDE with the right terminal and boundary conditions.

(a) First verify the equation

Ke^{−r(T−t)} N′(d_−) = x N′(d_+).

Proof. For any y ∈ R, N′(y) = (1/√(2π)) e^{−y²/2}. Let τ = T − t. Since d_− = d_+ − σ√τ, we have

Ke^{−rτ} N′(d_−) = (1/√(2π)) Ke^{−rτ} e^{−d_−²/2}
                = (1/√(2π)) Ke^{−rτ} e^{−(d_+² + σ²τ − 2d_+σ√τ)/2}.   (0.62)

Because d_+σ√τ = log(x/K) + (r + σ²/2)τ,

e^{−(d_+² + σ²τ − 2d_+σ√τ)/2} = e^{−d_+²/2} e^{−σ²τ/2 + log(x/K) + (r + σ²/2)τ} = (x/K) e^{rτ} e^{−d_+²/2},

and (0.62) reduces to

Ke^{−rτ} N′(d_−) = x (1/√(2π)) e^{−d_+²/2} = x N′(d_+).
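The identity is purely algebraic, so it can be spot-checked in floating point; the parameter values below are arbitrary illustrative choices.

```python
import math

def n_prime(y):
    """Standard normal density N'(y)."""
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

x, K, r, sigma, tau = 110.0, 100.0, 0.04, 0.25, 0.75   # illustrative values

d_plus = (math.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
d_minus = d_plus - sigma * math.sqrt(tau)

lhs = K * math.exp(-r * tau) * n_prime(d_minus)
rhs = x * n_prime(d_plus)
print(lhs, rhs)   # the two sides agree up to rounding
```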
(b) Show that c_x = N(d_+). This is the delta of the option.

Proof. Using the chain rule, we get

c_x = N(d_+) + x N′(d_+) d_+′ − Ke^{−r(T−t)} N′(d_−) d_−′,

where ′ denotes differentiation in x. Since d_+ and d_− differ by σ√(T−t), which does not depend on x, we have d_+′ = d_−′, and by part (a),

x N′(d_+) d_+′ − Ke^{−r(T−t)} N′(d_−) d_−′ = x N′(d_+)(d_+′ − d_−′) = 0.

Hence, c_x = N(d_+).
(c) Show that

c_t = −rKe^{−r(T−t)} N(d_−) − (σx / (2√(T−t))) N′(d_+).

This is the theta of the option.

Proof. An application of the chain rule gives us

c_t = x N′(d_+)(∂d_+/∂t) − Ke^{−r(T−t)} N′(d_−)(∂d_−/∂t) − rKe^{−r(T−t)} N(d_−)
    = −rKe^{−r(T−t)} N(d_−) + x N′(d_+)(∂/∂t)[d_+ − d_−]   (by part (a))
    = −rKe^{−r(T−t)} N(d_−) + x N′(d_+)(d/dt)[σ√(T−t)]
    = −rKe^{−r(T−t)} N(d_−) − (σx / (2√(T−t))) N′(d_+).
(d) Use the formulas above to show that c satisfies (4.10.3).

Proof. From part (b), c_xx = N′(d_+)(∂d_+/∂x) = N′(d_+)/(xσ√(T−t)). By part (b) and part (c), and using the notation τ = T − t,

c_t + rx c_x + (1/2)σ²x² c_xx
  = −rKe^{−rτ} N(d_−) − (σx/(2√τ)) N′(d_+) + rx N(d_+) + (σx/(2√τ)) N′(d_+)
  = rx N(d_+) − rKe^{−rτ} N(d_−)
  = r c(t,x).
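The PDE verification in part (d) can also be confirmed numerically: evaluate the call price formula, approximate c_t, c_x, c_xx by central differences, and check that the Black-Scholes-Merton operator applied to c returns rc. This is a sketch with arbitrary parameter values.

```python
import math
from statistics import NormalDist

N = NormalDist().cdf
K, r, sigma, T = 100.0, 0.04, 0.25, 1.0        # illustrative parameters

def c(t, x):
    """Black-Scholes-Merton European call price c(t, x)."""
    tau = T - t
    d_plus = (math.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return x * N(d_plus) - K * math.exp(-r * tau) * N(d_minus)

t, x, h = 0.3, 105.0, 1e-3
c_t = (c(t + h, x) - c(t - h, x)) / (2 * h)    # central differences
c_x = (c(t, x + h) - c(t, x - h)) / (2 * h)
c_xx = (c(t, x + h) - 2 * c(t, x) + c(t, x - h)) / h**2

residual = c_t + r * x * c_x + 0.5 * sigma**2 * x**2 * c_xx - r * c(t, x)
print(residual)                                 # numerically close to zero
```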
(e) Show that for x > K, lim_{t→T} d_± = ∞, but for 0 < x < K, lim_{t→T} d_± = −∞. Use these equalities to derive the terminal condition (4.10.4).

Proof. t → T is equivalent to τ → 0, where τ = T − t. As τ → 0, (1/(σ√τ))(r + σ²/2)τ = (√τ/σ)(r + σ²/2) → 0, while (1/(σ√τ)) log(x/K) → ±∞ depending on whether x > K or 0 < x < K. Thus for x > K, lim_{t→T} d_+ = ∞, but for 0 < x < K, lim_{t→T} d_+ = −∞.

Since d_− = d_+ − σ√τ and σ√τ → 0 as τ → 0, d_− has the same limiting behavior as d_+ when t → T (equivalently, τ → 0). This finishes the proof of the first part.

It is easy to see that N(y) → 0 as y → −∞ and N(y) → 1 as y → ∞. Thus, when x > K and we take the limit t → T, i.e. lim_{t→T} d_± = ∞, we get

lim_{t→T} c(t,x) = x − K.

Likewise, when 0 < x < K and we take the limit t → T, i.e. lim_{t→T} d_± = −∞, we get

lim_{t→T} c(t,x) = 0.

We thus get the terminal condition (4.10.4).
(f) Show that for 0 ≤ t < T, lim_{x→0} d_± = −∞. Use this fact to verify the first part of the boundary condition (4.10.5) as x → 0.

Proof. Fixing 0 ≤ t < T is equivalent to fixing 0 < τ ≤ T. Since lim_{x→0} log(x/K) = −∞, we get lim_{x→0} d_± = −∞. Since N(y) → 0 as y → −∞, we see that lim_{x→0} c(t,x) = 0.
(g) Verify that

lim_{x→∞} [c(t,x) − (x − Ke^{−r(T−t)})] = 0,   0 ≤ t ≤ T.

Proof. Since lim_{x→∞} log(x/K) = ∞, we have lim_{x→∞} d_± = ∞. With τ = T − t, write

c(t,x) − (x − Ke^{−rτ}) = x(N(d_+) − 1) + Ke^{−rτ}(1 − N(d_−)).

Let us first show that lim_{x→∞} (N(d_+) − 1)/x^{−1} = 0. Since lim_{x→∞} N(d_+) = 1, this is an indeterminate form 0/0, and L'Hôpital's rule implies that the limit equals

lim_{x→∞} (d/dx)[N(d_+) − 1] / ((d/dx) x^{−1}) = lim_{x→∞} (N′(d_+)/(xσ√τ)) / (−x^{−2}) = lim_{x→∞} −(x/(σ√(2πτ))) e^{−d_+²/2}.

Since x = K e^{σ√τ d_+ − (r + σ²/2)τ}, the last expression equals

−(K e^{−τ(r + σ²/2)} / (σ√(2πτ))) e^{σ√τ d_+ − d_+²/2},

and since lim_{x→∞} [σ√τ d_+ − d_+²/2] = −∞,

lim_{x→∞} −(K e^{−τ(r + σ²/2)} / (σ√(2πτ))) e^{σ√τ d_+ − d_+²/2} = 0.

Hence lim_{x→∞} x(N(d_+) − 1) = 0. Moreover, lim_{x→∞} N(d_−(τ,x)) = 1, so the second term vanishes as well. Putting the above observations together, we get

lim_{x→∞} [c(t,x) − (x − Ke^{−r(T−t)})] = 0.
(4.10) Self-Financing Trading

(a) In continuous time, let M(t) = e^{rt} be the price of a share of the money market account at time t, let Δ(t) denote the number of shares of stock held at time t, and let Γ(t) denote the number of shares of the money market account held at time t, so that the total portfolio value at time t is

X(t) = Δ(t)S(t) + Γ(t)M(t).

Use (4.10.16) and (4.10.9) to derive the continuous-time self-financing condition

S(t) dΔ(t) + dS(t) dΔ(t) + M(t) dΓ(t) + dM(t) dΓ(t) = 0.

Proof. We have, by Itô's product rule,

dX(t) = Δ(t) dS(t) + S(t) dΔ(t) + dΔ(t) dS(t) + Γ(t) dM(t) + M(t) dΓ(t) + dΓ(t) dM(t).   (0.63)

Equating (0.63) with dX(t) = Δ(t) dS(t) + r(X(t) − Δ(t)S(t)) dt and using dM(t) = rM(t) dt, so that Γ(t) dM(t) = r(X(t) − Δ(t)S(t)) dt, we obtain

S(t) dΔ(t) + dS(t) dΔ(t) + M(t) dΓ(t) + dM(t) dΓ(t) = 0.
(b) Replace (4.10.17) by its corrected version (4.10.21) and use the continuous-time self-financing condition you derived in part (a) to derive (4.10.18).

Proof. Since N(t) = Γ(t)M(t), Itô's product rule gives

dN(t) = Γ(t) dM(t) + M(t) dΓ(t) + dΓ(t) dM(t).   (0.64)

Equating (0.64) with (4.10.21) and using the continuous-time self-financing condition from part (a), we get

c_t(t,S(t)) dt + c_x(t,S(t)) dS(t) + (1/2) c_xx(t,S(t)) dS(t) dS(t) = Δ(t) dS(t) + Γ(t) dM(t).   (0.65)

Since Δ(t) = c_x(t,S(t)), dS(t) dS(t) = σ²S²(t) dt and Γ(t) dM(t) = rM(t)Γ(t) dt = rN(t) dt, (0.65) reduces to

rN(t) dt = [c_t(t,S(t)) + (1/2)σ²S²(t) c_xx(t,S(t))] dt.
(4.11) Let

c(t,x) = x N(d_+(T−t, x)) − Ke^{−r(T−t)} N(d_−(T−t, x))

denote the price of a European call, expiring at time T with strike price K, where

d_±(T−t, x) = (1/(σ_1 √(T−t))) [log(x/K) + (r ± σ_1²/2)(T−t)].

This option price assumes the underlying stock is a geometric Brownian motion with volatility σ_1 > 0. For this problem, we take this to be the market price of the option.

Suppose, however, that the underlying asset is really a geometric Brownian motion with volatility σ_2 > σ_1, i.e.,

dS(t) = αS(t) dt + σ_2 S(t) dW(t).

Consequently, the market price of the call is incorrect.

We set up a portfolio whose value at each time t we denote by X(t). We begin with X(0) = 0. At each time t, the portfolio is long one European call, is short c_x(t,S(t)) shares of stock, and thus has a cash position

X(t) − c(t,S(t)) + S(t) c_x(t,S(t)),

which is invested at the constant interest rate r. We remove cash from this portfolio at the rate (1/2)(σ_2² − σ_1²) S²(t) c_xx(t,S(t)). Therefore the differential of the portfolio value is

dX(t) = dc(t,S(t)) − c_x(t,S(t)) dS(t) + r[X(t) − c(t,S(t)) + S(t) c_x(t,S(t))] dt − (1/2)(σ_2² − σ_1²) S²(t) c_xx(t,S(t)) dt,   0 ≤ t ≤ T.
Show that X(t) = 0 for all t ∈ [0,T]. In particular, because c_xx(t,S(t)) > 0 and σ_2 > σ_1, we have an arbitrage opportunity; we can start with zero initial capital, remove cash at a positive rate between times 0 and T, and at time T have zero liability.
Proof. Let us first note that since

dc(t,S(t)) − c_x(t,S(t)) dS(t) = c_t(t,S(t)) dt + (1/2) c_xx dS(t) dS(t)
                              = c_t(t,S(t)) dt + (1/2) c_xx σ_2² S²(t) dt,

we get

dX(t) = c_t(t,S(t)) dt + r[X(t) − c(t,S(t)) + S(t) c_x(t,S(t))] dt + (1/2)σ_1² S²(t) c_xx dt.   (0.66)

The last equality implies dX(t) dX(t) = 0. Applying the Itô-Doeblin formula (4.4.23) with f(t,x) = e^{−rt}x gives us

d(e^{−rt}X(t)) = −re^{−rt}X(t) dt + e^{−rt} dX(t).   (0.67)

From (0.66) and (0.67), we get

d(e^{−rt}X(t)) = e^{−rt}[c_t(t,S(t)) − rc(t,S(t)) + rS(t) c_x(t,S(t)) + (1/2)σ_1² S²(t) c_xx(t,S(t))] dt
              = 0   (as c solves the Black-Scholes-Merton PDE with volatility σ_1)   ∀t ∈ [0,T].

Since X(0) = 0, we thus get e^{−rt}X(t) = 0 for all t ∈ [0,T], implying that X(t) = 0 for all t ∈ [0,T].
(4.18) Let a stock price be a geometric Brownian motion

dS(t) = αS(t) dt + σS(t) dW(t),

and let r denote the interest rate. We define the market price of risk to be

θ = (α − r)/σ

and the state price density process to be

ζ(t) = exp{−θW(t) − (r + θ²/2)t}.

(a) Show that dζ(t) = −θζ(t) dW(t) − rζ(t) dt.

Proof. An application of the Itô-Doeblin formula (4.4.13) with f(t,x) = e^{−θx − (r + θ²/2)t}, so that ζ(t) = f(t, W(t)), gives us

dζ(t) = −(r + θ²/2)ζ(t) dt − θζ(t) dW(t) + (1/2)θ²ζ(t) dt
      = −θζ(t) dW(t) − rζ(t) dt.
(b) Let X(t) denote the value of an investor's portfolio when he uses a portfolio process Δ(t). From (4.5.2), we have

dX(t) = rX(t) dt + Δ(t)(α − r)S(t) dt + Δ(t)σS(t) dW(t).

Show that ζ(t)X(t) is a martingale.

Proof. By Itô's product rule,

d(ζ(t)X(t)) = ζ(t) dX(t) + X(t) dζ(t) + dζ(t) dX(t).

Using the facts that dt dt = dt dW(t) = 0 and dW(t) dW(t) = dt, we get

dζ(t) dX(t) = −σθζ(t)Δ(t)S(t) dt.   (0.68)

We also have

ζ(t) dX(t) = [rX(t)ζ(t) + Δ(t)(α − r)ζ(t)S(t)] dt + σΔ(t)S(t)ζ(t) dW(t),   (0.69)
X(t) dζ(t) = −θζ(t)X(t) dW(t) − rζ(t)X(t) dt.   (0.70)

Adding (0.68), (0.69) and (0.70) and using θσ = α − r, we get

d(ζ(t)X(t)) = [σΔ(t)S(t)ζ(t) − θζ(t)X(t)] dW(t).

This last equality tells us that ζ(t)X(t) is an Itô integral (plus the constant ζ(0)X(0)) and hence, by Theorem 4.3.1 (part 4), is a martingale.
(c) Let T > 0 be a fixed terminal time. Show that if an investor wants to begin with some initial capital X(0) and invest in order to have portfolio value V(T) at time T, where V(T) is a given F(T)-measurable random variable, then he must begin with initial capital

X(0) = E[ζ(T)V(T)].

In other words, the present value at time zero of the random payment V(T) at time T is E[ζ(T)V(T)]. This justifies calling ζ(t) the state price density process.

Proof. Since, by part (b), ζ(t)X(t) is a martingale, E[ζ(T)X(T)] = E[E[ζ(T)X(T) | F(0)]] = ζ(0)X(0). Since ζ(0) = 1 and we impose the condition X(T) = V(T), we get

X(0) = ζ(0)X(0) = E[ζ(T)X(T)] = E[ζ(T)V(T)].
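Taking V(T) = S(T) gives a concrete check: the time-zero price of a claim paying the stock itself should be S(0), i.e. E[ζ(T)S(T)] = S(0). A Monte Carlo estimate (with arbitrary illustrative parameters) is consistent with this:

```python
import numpy as np

rng = np.random.default_rng(3)
S0, alpha, r, sigma, T = 100.0, 0.09, 0.03, 0.2, 1.0   # illustrative values
theta = (alpha - r) / sigma                             # market price of risk

W_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)
S_T = S0 * np.exp(sigma * W_T + (alpha - 0.5 * sigma**2) * T)
zeta_T = np.exp(-theta * W_T - (r + 0.5 * theta**2) * T)

price = (zeta_T * S_T).mean()                           # should be close to S(0)
print(price, S0)
```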
(5.2) State price density process. Show that the risk-neutral pricing formula (5.2.30) may be re-written as

D(t)Z(t)V(t) = E[D(T)Z(T)V(T) | F(t)].   (0.71)

Here Z(t) is the Radon-Nikodym derivative process (5.2.11) when the market price of risk process Θ(t) is given by (5.2.21), and the conditional expectation on the right-hand side of (0.71) is taken under the actual probability measure P, not the risk-neutral measure P̃. In particular, if for some A ∈ F(T) a derivative security pays off 1_A, then the value of this derivative security at time zero is E[D(T)Z(T)1_A]. The process D(t)Z(t) appearing in (0.71) is called the state price density process.

Proof. Note that Z(t) is a positive random variable. We have

D(t)V(t) = Ẽ[D(T)V(T) | F(t)],   0 ≤ t ≤ T   (by (5.2.30))
         = (1/Z(t)) E[D(T)V(T)Z(T) | F(t)]   (by Lemma 5.2.2).

Multiplying both sides by Z(t) proves (0.71).
(5.3) According to the Black-Scholes-Merton formula, the value at time zero of a European call on a stock whose initial price is S(0) = x is given by

c(0,x) = x N(d_+(T,x)) − Ke^{−rT} N(d_−(T,x)),

where

d_+(T,x) = (1/(σ√T)) [log(x/K) + (r + σ²/2)T],
d_−(T,x) = d_+(T,x) − σ√T.

The stock is modeled as a geometric Brownian motion with constant volatility σ > 0, the interest rate is the constant r, the call strike is K, and the call expiration time is T. This formula is obtained by computing the discounted expected payoff of the call under the risk-neutral measure,

c(0,x) = Ẽ[e^{−rT}(S(T) − K)^+]
       = Ẽ[e^{−rT}(x exp{σW̃(T) + (r − σ²/2)T} − K)^+],   (0.72)

where W̃ is a Brownian motion under the risk-neutral measure P̃. In Exercise 4.9 (part (b)), the delta of this option is computed to be c_x(0,x) = N(d_+(T,x)). This problem provides an alternate way to compute c_x(0,x).

(a) We begin with the observation that if h(s) = (s − K)^+, then

h′(s) = 0 for s < K,   h′(s) = 1 for s > K.

If s = K, then h′(s) is undefined, but that will not matter in what follows because S(T) has zero probability of taking the value K. Using the formula for h′(s), differentiate inside the expected value in (0.72) to obtain a formula for c_x(0,x).
Proof. We first note that S(T) = S(0) exp{σW̃(T) + (r − σ²/2)T}. Differentiating under the integral sign (whose validity is justified since P{S(T) = K} = P̃{S(T) = K} = 0) gives us

c_x(0,x) = Ẽ[e^{−rT} exp{σW̃(T) + (r − σ²/2)T} 1_{S(T)>K}]
         = Ẽ[e^{σW̃(T) − σ²T/2} 1_{S(T)>K}].   (0.73)
(b) Show that the formula you obtained in part (a) can be re-written as

c_x(0,x) = P̂(S(T) > K),

where P̂ is a probability measure equivalent to P̃. Show that Ŵ(t) = W̃(t) − σt is a Brownian motion under P̂.

Proof. We first observe that Z = e^{σW̃(T) − σ²T/2} is a random variable such that ẼZ = 1. To prove this, we note that W̃(T) is a normal random variable under P̃ with mean zero and variance T, and then use a standard change of variables. We define a new probability measure P̂ by the formula

P̂(A) = ∫_A Z(ω) dP̃(ω)   ∀A ∈ F.

P̂ and P̃ are in fact equivalent measures, as Z is a strictly positive random variable. For any random variable X, we have ÊX = Ẽ[XZ]. Hence, (0.73) reduces to

c_x(0,x) = Ẽ[Z 1_{S(T)>K}] = P̂(S(T) > K).

To prove that Ŵ(t) = W̃(t) − σt is a Brownian motion under P̂, we simply apply Girsanov's theorem (Theorem 5.2.3) with Θ(u) = −σ. In particular, this implies that Ŵ(T) is a normal random variable under P̂ with mean zero and variance T.
(c) Rewrite S(T) in terms of Ŵ(T), and then show that

P̂{S(T) > K} = P̂{−Ŵ(T)/√T < d_+(T,x)} = N(d_+(T,x)).

Proof. By part (b),

S(T) = S(0) exp{σW̃(T) + (r − σ²/2)T}
     = S(0) exp{σŴ(T) + (r + σ²/2)T}.

Note that S(T) > K iff −Ŵ(T)/√T < d_+(T,x). Hence,

P̂{S(T) > K} = P̂{−Ŵ(T)/√T < d_+(T,x)} = N(d_+).

The last equality follows because −Ŵ(T)/√T is a normal random variable under P̂ with mean zero and variance 1.
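Parts (a)-(c) together say that the delta Ẽ[Z 1_{S(T)>K}] equals N(d_+(T,x)). A Monte Carlo average of Z 1_{S(T)>K} under the risk-neutral measure makes this concrete; parameter values below are arbitrary.

```python
import math
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
x, K, r, sigma, T = 100.0, 95.0, 0.03, 0.2, 1.0      # illustrative values

W_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)    # W~(T) under the risk-neutral measure
S_T = x * np.exp(sigma * W_T + (r - 0.5 * sigma**2) * T)
Z = np.exp(sigma * W_T - 0.5 * sigma**2 * T)         # Radon-Nikodym factor

delta_mc = (Z * (S_T > K)).mean()                    # estimate of E~[Z 1{S(T) > K}]
d_plus = (math.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
print(delta_mc, NormalDist().cdf(d_plus))
```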
(5.5) Prove Corollary 5.3.2 by the following steps.

(a) Compute the differential of 1/Z(t), where Z(t) is given in Corollary 5.3.2.

Proof. 1/Z(t) = exp{∫_0^t Θ(u) dW(u) + (1/2)∫_0^t Θ²(u) du}. Using the Itô-Doeblin formula (4.4.23) with f(x) = e^x, we get

d(1/Z(t)) = (Θ(t)/Z(t)) dW(t) + (Θ²(t)/Z(t)) dt.
(b) Let M̃(t), 0 ≤ t ≤ T, be a martingale under P̃. Show that M(t) = Z(t)M̃(t) is a martingale under P.

Proof. We first note that Z(t) is a positive random variable for each t ∈ [0,T]. By hypothesis, Ẽ[M̃(t) | F(s)] = M̃(s) for 0 ≤ s ≤ t ≤ T. By Lemma 5.2.2,

E[M̃(t)Z(t) | F(s)] = Z(s) Ẽ[M̃(t) | F(s)] = Z(s)M̃(s),

i.e. E[M(t) | F(s)] = M(s).
(c) According to Theorem 5.3.1, there is an adapted process Γ(u), 0 ≤ u ≤ T, such that

M(t) = M(0) + ∫_0^t Γ(u) dW(u),   0 ≤ t ≤ T.

Write M̃(t) = M(t) · (1/Z(t)) and take its differential using Itô's product rule.

Proof. Itô's product rule, together with part (a) and dM(t) = Γ(t) dW(t), gives us

d(M̃(t)) = ((M(t)Θ(t) + Γ(t))/Z(t)) dW(t) + ((M(t)Θ²(t) + Θ(t)Γ(t))/Z(t)) dt.   (0.74)

Since dW(t) = dW̃(t) − Θ(t) dt, (0.74) reduces to

d(M̃(t)) = ((M(t)Θ(t) + Γ(t))/Z(t)) dW̃(t)
         = Γ̃(t) dW̃(t),   (0.75)

where Γ̃(t) = (M(t)Θ(t) + Γ(t))/Z(t) is a stochastic process adapted to the filtration generated by the Brownian motion W(t).

(d) Show that the differential of M̃(t) is the sum of an adapted process, which we call Γ̃(t), times dW̃(t), and zero times dt. Integrate to obtain (5.3.2).

Proof. This is exactly (0.75); we integrate it to obtain the conclusion of Corollary 5.3.2.
(5.7) (a) Suppose a multi-dimensional market model as described in Section 5.4.2 has an arbitrage. In other words, suppose there is a portfolio value process X_1(t) satisfying X_1(0) = 0 and

P{X_1(T) ≥ 0} = 1,   P{X_1(T) > 0} > 0   (0.76)

for some positive T. Show that if X_2(0) is positive, then there exists a portfolio value process X_2(t) starting at X_2(0) and satisfying

P{X_2(T) ≥ X_2(0)/D(T)} = 1,   P{X_2(T) > X_2(0)/D(T)} > 0.   (0.77)

Proof. Define X_2(t) := X_1(t) + X_2(0)M(t), where M(t) = 1/D(t) denotes the money market account. Note that X_2(T) ≥ X_2(0)M(T) iff X_1(T) ≥ 0, and X_2(T) > X_2(0)M(T) iff X_1(T) > 0. We thus get (0.77) assuming (0.76).

(b) Show that if a multi-dimensional market model has a portfolio value process X_2(t) such that X_2(0) is positive and (0.77) holds, then the model has a portfolio value process X_1(t) such that X_1(0) = 0 and (0.76) holds.

Proof. Define X_1(t) := X_2(t) − X_2(0)M(t). It is easy to see that X_2(T) ≥ X_2(0)M(T) iff X_1(T) ≥ 0, and X_2(T) > X_2(0)M(T) iff X_1(T) > 0. Moreover, X_1(0) = 0. Thus (0.76) holds assuming (0.77).
(5.8) (Every strictly positive asset is a generalized geometric Brownian motion) Let (Ω, F, P) be a probability space on which is defined a Brownian motion W(t), 0 ≤ t ≤ T. Let F(t), 0 ≤ t ≤ T, be the filtration generated by this Brownian motion. Assume that there is a unique risk-neutral measure P̃, and let W̃(t), 0 ≤ t ≤ T, be the Brownian motion under P̃ obtained by an application of Girsanov's theorem, Theorem 5.2.3.

Corollary 5.3.2 of the Martingale Representation Theorem asserts that every martingale M̃(t), 0 ≤ t ≤ T, under P̃ can be written as a stochastic integral with respect to W̃(t), 0 ≤ t ≤ T.

Now, let V(T) be an almost surely positive, F(T)-measurable random variable. According to the risk-neutral pricing formula (5.2.31), the price at time t of a security paying V(T) at time T is

V(t) = Ẽ[e^{−∫_t^T R(u)du} V(T) | F(t)],   0 ≤ t ≤ T.

(a) Show that there exists an adapted process Γ(t), 0 ≤ t ≤ T, such that

dV(t) = R(t)V(t) dt + (Γ(t)/D(t)) dW̃(t),   0 ≤ t ≤ T.   (0.78)

Proof. Note that

D(t)V(t) = Ẽ[e^{−∫_0^T R(u)du} V(T) | F(t)],   0 ≤ t ≤ T.

It is easy to see (by iterated conditioning) that Ẽ[e^{−∫_0^T R(u)du} V(T) | F(t)] is a martingale under P̃. Thus, Corollary 5.3.2 of the Martingale Representation Theorem gives us the existence of an adapted process Γ(t) such that

d(D(t)V(t)) = Γ(t) dW̃(t).   (0.79)

(0.79) and Itô's product rule for d(D(t)V(t)), along with the fact that dD(t) = −R(t)D(t) dt, give us (0.78).
(b) Show that, for each t ∈ [0,T], the price of the derivative security V(t) at time t is almost surely positive.

One direct route (a sketch): since D(T)V(T) > 0 almost surely and D(t) > 0, it suffices to note that the conditional expectation of an almost surely positive random variable is almost surely positive. Indeed, the event A = {Ẽ[D(T)V(T) | F(t)] ≤ 0} lies in F(t), and Ẽ[D(T)V(T)1_A] ≤ 0 together with D(T)V(T) > 0 a.s. forces P̃(A) = 0.

There might also be a way to solve this using the maximum principle for a parabolic equation, by connecting a Markov process to a PDE as in Chapter 6. However, since V(t) does not satisfy a stochastic PDE, it is not clear how we would bring a Markov process into the picture, nor how exactly we can bring in a transition probability to use its properties. It would be great if some pointers could be given to complete this approach!
(c) Conclude from the previous two parts that there exists an adapted process σ(t), 0 ≤ t ≤ T, such that

dV(t) = R(t)V(t) dt + σ(t)V(t) dW̃(t),   0 ≤ t ≤ T.   (0.80)

In other words, prior to time T, the price of every asset with almost surely positive price at time T follows a generalized geometric Brownian motion.

Proof. Since V(t) is almost surely positive for each 0 ≤ t ≤ T, we can define an F(t)-adapted process

σ(t) = Γ(t) / (D(t)V(t)).

(0.80) then follows from (0.78).
(5.11) (Hedging a cash flow) Let W(t), 0 ≤ t ≤ T, be a Brownian motion on a probability space (Ω, F, P), and let F(t), 0 ≤ t ≤ T, be the filtration generated by this Brownian motion. Let the mean rate of return α(t), the interest rate R(t), and the volatility σ(t) be adapted processes, and assume that σ(t) is never zero. Consider a stock price process whose differential is given by

dS(t) = α(t)S(t) dt + σ(t)S(t) dW(t),   0 ≤ t ≤ T.   (0.81)

Suppose an agent must pay a cash flow at rate C(t) at each time t, where C(t), 0 ≤ t ≤ T, is an adapted process. If the agent holds Δ(t) shares of stock at each time t, then the differential of her portfolio value will be

dX(t) = Δ(t) dS(t) + R(t)(X(t) − Δ(t)S(t)) dt − C(t) dt.   (0.82)

Show that there is a non-random value of X(0) and a portfolio process Δ(t), 0 ≤ t ≤ T, such that X(T) = 0 almost surely.
Proof. From (0.82) it is easy to see that

d[D(t)X(t) + ∫_0^t D(u)C(u) du] = Δ(t)D(t)S(t)[(α(t) − R(t)) dt + σ(t) dW(t)]
                                = Δ(t)D(t)S(t)σ(t) dW̃(t).   (0.83)

Here W̃(t) is a Brownian motion under the risk-neutral probability P̃ obtained by the usual change of measure. Hence D(t)X(t) + ∫_0^t D(u)C(u) du is a P̃-martingale, i.e.

D(t)X(t) + ∫_0^t D(u)C(u) du = Ẽ[D(T)X(T) + ∫_0^T D(u)C(u) du | F(t)]
                             = Ẽ[∫_0^T D(u)C(u) du | F(t)]   (as X(T) = 0 a.s. is what we impose).   (0.84)

In particular, (0.84) tells us that X(0) = Ẽ[∫_0^T D(u)C(u) du]. Moreover, a quick application of iterated conditioning proves that M̃(t) = Ẽ[∫_0^T D(u)C(u) du | F(t)] is a P̃-martingale. Applying Corollary 5.3.2, we get the existence of an F(t)-adapted process Γ(t) such that

Ẽ[∫_0^T D(u)C(u) du | F(t)] = Ẽ[∫_0^T D(u)C(u) du] + ∫_0^t Γ(u) dW̃(u).   (0.85)

(0.84) and (0.85) together give

D(t)X(t) + ∫_0^t D(u)C(u) du = Ẽ[∫_0^T D(u)C(u) du] + ∫_0^t Γ(u) dW̃(u).   (0.86)

(0.83) and (0.86) together imply (as σ(t)D(t)S(t) is never 0) that the hedging portfolio is

Δ(t) = Γ(t) / (D(t)S(t)σ(t)).   (0.87)
(6.1) Consider the stochastic differential equation

dX(u) = (a(u) + b(u)X(u)) du + (γ(u) + σ(u)X(u)) dW(u),   (0.88)

where W(u) is a Brownian motion relative to a filtration F(u), u ≥ 0, and we allow a(u), b(u), γ(u) and σ(u) to be processes adapted to this filtration. Fix an initial time t ≥ 0 and an initial position x ∈ R. Define

Z(u) = exp{∫_t^u σ(v) dW(v) + ∫_t^u (b(v) − σ²(v)/2) dv},

Y(u) = x + ∫_t^u (a(v) − σ(v)γ(v))/Z(v) dv + ∫_t^u γ(v)/Z(v) dW(v).

(a) Show that Z(t) = 1 and

dZ(u) = b(u)Z(u) du + σ(u)Z(u) dW(u),   u ≥ t.   (0.89)

Proof. Clearly, Z(t) = 1. Applying Itô's formula with f(x) = e^x to the Itô process ∫_t^u σ(v) dW(v) + ∫_t^u (b(v) − σ²(v)/2) dv, whose differential is σ(u) dW(u) + (b(u) − σ²(u)/2) du with quadratic variation σ²(u) du, gives us (0.89).
(b) By its very definition, Y(u) satisfies Y(t) = x and

dY(u) = ((a(u) − σ(u)γ(u))/Z(u)) du + (γ(u)/Z(u)) dW(u),   u ≥ t.   (0.90)

Show that X(u) = Y(u)Z(u) solves the stochastic differential equation (0.88) and satisfies the initial condition X(t) = x.

Proof. By the definition of Y(u), it is clear that Y(t) = x. Since Z(t) = 1, as shown in part (a), it follows that X(t) = x. We have, for u ≥ t,

Z(u) dY(u) = [a(u) − σ(u)γ(u)] du + γ(u) dW(u),
Y(u) dZ(u) = b(u)Y(u)Z(u) du + σ(u)Y(u)Z(u) dW(u),
dZ(u) dY(u) = σ(u)γ(u) du   (from (0.89) and (0.90)).   (0.91)

Itô's product rule and (0.91) now give

d(Y(u)Z(u)) = Z(u) dY(u) + Y(u) dZ(u) + dZ(u) dY(u)
            = (a(u) + b(u)X(u)) du + (γ(u) + σ(u)X(u)) dW(u),

so X(u) = Y(u)Z(u) solves the stochastic differential equation (0.88) and satisfies the initial condition X(t) = x.
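For constant coefficients, the construction X = YZ can be compared path-by-path against a direct Euler discretization of (0.88) driven by the same Brownian increments; both are approximations, so we only check that they stay close. All parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, gamma, sigma = 0.1, 0.2, 0.3, 0.2       # constant coefficients (illustrative)
x0, T, n_steps, n_paths = 1.0, 1.0, 10_000, 200
dt = T / n_steps

X = np.full(n_paths, x0)                      # direct Euler scheme for (0.88), with t = 0
Y = np.full(n_paths, x0)                      # Euler scheme for Y(u)
W = np.zeros(n_paths)
u = 0.0
for _ in range(n_steps):
    Z = np.exp(sigma * W + (b - 0.5 * sigma**2) * u)   # Z(u), exact given W(u)
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X += (a + b * X) * dt + (gamma + sigma * X) * dW
    Y += (a - sigma * gamma) / Z * dt + gamma / Z * dW
    W += dW
    u += dt

Z_T = np.exp(sigma * W + (b - 0.5 * sigma**2) * T)
diff = np.abs(X - Y * Z_T)
print(diff.mean(), diff.max())                # small: both schemes track the same solution
```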
(6.5) (Two-dimensional Feynman-Kac)

(a) With g(t,x_1,x_2) and f(t,x_1,x_2) defined by (6.6.1) and (6.6.2), show that g(t,X_1(t),X_2(t)) and e^{−rt}f(t,X_1(t),X_2(t)) are martingales.

Proof. By (6.6.1), g(t,x_1,x_2) = E^{t,x_1,x_2} h(X_1(T),X_2(T)). Since (X_1(t),X_2(t)) satisfies a system of stochastic differential equations, (X_1(t),X_2(t)) is a Markov process, and thus for 0 ≤ u ≤ T we obtain E[h(X_1(T),X_2(T)) | F(u)] = g(u,X_1(u),X_2(u)). Thus, for any 0 ≤ s ≤ t ≤ T, iterated conditioning gives E[g(t,X_1(t),X_2(t)) | F(s)] = g(s,X_1(s),X_2(s)).

Similarly, we observe that e^{−rt}f(t,X_1(t),X_2(t)) = E[e^{−rT}h(X_1(T),X_2(T)) | F(t)]. Again, using iterated conditioning, it is easy to see that e^{−rt}f(t,X_1(t),X_2(t)) is a martingale.
(b) Assuming that W_1 and W_2 are independent Brownian motions, use the Itô-Doeblin formula to compute the differentials of g(t,X_1(t),X_2(t)) and e^{−rt}f(t,X_1(t),X_2(t)), set the dt term to 0, and thereby obtain the partial differential equations (6.6.3) and (6.6.4).

Proof. This is a routine (and tedious!) application of the Itô-Doeblin formula. We only mention the key points for brevity. We use

dX_1(t) dX_1(t) = (γ_11² + γ_12²) dt,   dX_2(t) dX_2(t) = (γ_21² + γ_22²) dt,   dX_1(t) dX_2(t) = (γ_11γ_21 + γ_12γ_22) dt,

along with the Itô-Doeblin formula, and set the dt term to zero to get (6.6.3). In addition, we use the Itô product formula and set the dt term to zero to get (6.6.4).
(c) Now consider the case that dW_1(t) dW_2(t) = ρ dt, where ρ is a constant. Compute the differentials of g(t,X_1(t),X_2(t)) and e^{−rt}f(t,X_1(t),X_2(t)), set the dt term to 0, and obtain the partial differential equations

g_t + β_1 g_{x_1} + β_2 g_{x_2} + ((1/2)γ_11² + ργ_11γ_12 + (1/2)γ_12²) g_{x_1x_1}
    + (γ_11γ_21 + ργ_11γ_22 + ργ_12γ_21 + γ_12γ_22) g_{x_1x_2}
    + ((1/2)γ_21² + ργ_21γ_22 + (1/2)γ_22²) g_{x_2x_2} = 0,   (*)

f_t + β_1 f_{x_1} + β_2 f_{x_2} + ((1/2)γ_11² + ργ_11γ_12 + (1/2)γ_12²) f_{x_1x_1}
    + (γ_11γ_21 + ργ_11γ_22 + ργ_12γ_21 + γ_12γ_22) f_{x_1x_2}
    + ((1/2)γ_21² + ργ_21γ_22 + (1/2)γ_22²) f_{x_2x_2} = rf.   (**)

Proof. Again, we only mention the key points for brevity. We use

dX_1(t) dX_1(t) = (γ_11² + 2ργ_11γ_12 + γ_12²) dt,   dX_2(t) dX_2(t) = (γ_21² + 2ργ_21γ_22 + γ_22²) dt,
dX_1(t) dX_2(t) = (γ_11γ_21 + γ_12γ_22 + ργ_11γ_22 + ργ_12γ_21) dt,

along with the Itô-Doeblin formula, and set the dt term to zero to get (*). In addition, we use the Itô product formula and set the dt term to zero to get (**).
(6.2) (Solution of the Hull-White model) This exercise solves the ordinary differential equations (6.5.8) and (6.5.9) to produce the solutions C(t,T) and A(t,T) given in (6.5.10) and (6.5.11).

(a) Use equation (6.5.8), with s replacing t, to show that

(d/ds)[e^{−∫_0^s b(v)dv} C(s,T)] = −e^{−∫_0^s b(v)dv}.

Proof. By equation (6.5.8),

(d/ds)[e^{−∫_0^s b(v)dv} C(s,T)] = e^{−∫_0^s b(v)dv}[C′(s,T) − b(s)C(s,T)] = −e^{−∫_0^s b(v)dv}.
(b) Integrate the equation in part (a) above from s = t to s = T, and use the terminal condition C(T,T) = 0 to obtain (6.5.10).

Proof. Integrating the equation in part (a) from s = t to s = T and using C(T,T) = 0 gives

−e^{−∫_0^t b(v)dv} C(t,T) = −∫_t^T e^{−∫_0^s b(v)dv} ds,

so that

C(t,T) = ∫_t^T e^{−∫_t^s b(v)dv} ds.
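For constant b(t) = b, (6.5.10) evaluates to C(t,T) = (1 − e^{−b(T−t)})/b, and one can confirm numerically that this satisfies the ODE (6.5.8), C′(t,T) = bC(t,T) − 1, together with C(T,T) = 0. The values of b and T below are arbitrary.

```python
import math

b, T = 1.5, 2.0                               # illustrative constants

def C(t):
    # C(t, T) = integral_t^T exp(-integral_t^s b dv) ds = (1 - e^{-b(T-t)}) / b
    return (1 - math.exp(-b * (T - t))) / b

t, h = 0.4, 1e-6
C_prime = (C(t + h) - C(t - h)) / (2 * h)     # central-difference derivative in t
residual = C_prime - (b * C(t) - 1)           # ODE (6.5.8): C' = bC - 1
print(residual, C(T))
```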
(c) Replace t by s in (6.5.9), integrate the resulting equation from s = t to s = T, use the terminal condition A(T,T) = 0, and obtain (6.5.11).

Proof. As indicated, we replace t by s in (6.5.9), integrate the resulting equation from s = t to s = T, and use the terminal condition A(T,T) = 0 to obtain

A(t,T) = ∫_t^T (a(s)C(s,T) − (1/2)σ²(s)C²(s,T)) ds.   (0.92)
(6.4) (Solution of the Cox-Ingersoll-Ross model) This exercise solves the ordinary differential equations (6.5.14) and (6.5.15) to produce the solutions C(t,T) and A(t,T) given in (6.5.16) and (6.5.17).

(a) Define the function

φ(t) = exp{(1/2)σ² ∫_t^T C(u,T) du}.

Show that

C(t,T) = −2φ′(t)/(σ²φ(t)),
C′(t,T) = −2φ′′(t)/(σ²φ(t)) + (1/2)σ²C²(t,T).

Proof. A quick application of the chain rule gives C(t,T) = −2φ′(t)/(σ²φ(t)). We then use the quotient rule to get C′(t,T) = −2φ′′(t)/(σ²φ(t)) + (1/2)σ²C²(t,T).
(b) Use equation (6.5.14) to show that

φ′′(t) − bφ′(t) − (1/2)σ²φ(t) = 0.

Proof. Substituting part (a) into equation (6.5.14), we get

−2φ′′(t)/(σ²φ(t)) + (1/2)σ²C²(t,T) = −2bφ′(t)/(σ²φ(t)) + (1/2)σ²C²(t,T) − 1.

Rearranging gives us the desired ODE.
(c) Show that φ(t) must be of the form

φ(t) = (c_1/((1/2)b + γ)) e^{−((1/2)b+γ)(T−t)} − (c_2/((1/2)b − γ)) e^{−((1/2)b−γ)(T−t)}

for some constants c_1 and c_2, where γ = (1/2)√(b² + 2σ²).

Proof. The characteristic equation of the ODE in part (b) is λ² − bλ − (1/2)σ² = 0, with roots λ = (1/2)b ± γ, so any solution φ(t) must be of the form φ(t) = a_1 e^{((1/2)b+γ)t} + a_2 e^{((1/2)b−γ)t} for some constants a_1 and a_2. We can re-write this in the form

φ(t) = (c_1/((1/2)b + γ)) e^{−((1/2)b+γ)(T−t)} − (c_2/((1/2)b − γ)) e^{−((1/2)b−γ)(T−t)}.
(d) Show that

φ′(t) = c_1 e^{−((1/2)b+γ)(T−t)} − c_2 e^{−((1/2)b−γ)(T−t)}.

Use the fact that C(T,T) = 0 to show that c_1 = c_2.

Proof. Clearly, φ′(t) = c_1 e^{−((1/2)b+γ)(T−t)} − c_2 e^{−((1/2)b−γ)(T−t)}. Moreover, C(T,T) = 0 implies φ′(T) = 0, i.e. c_1 − c_2 = 0, so c_1 = c_2.
(e) Show that

φ(t) = (2c_1/σ²) e^{−(1/2)b(T−t)} [b sinh(γ(T−t)) + 2γ cosh(γ(T−t))],
φ′(t) = −2c_1 e^{−(1/2)b(T−t)} sinh(γ(T−t)).

Conclude that C(t,T) is given by (6.5.16).

Proof. By part (d), with c_2 = c_1 and τ = T − t,

φ(t) = c_1 e^{−(1/2)bτ} [e^{−γτ}/((1/2)b + γ) − e^{γτ}/((1/2)b − γ)]
     = c_1 e^{−(1/2)bτ} [((1/2)b − γ)e^{−γτ} − ((1/2)b + γ)e^{γτ}] / ((1/4)b² − γ²).

Since (1/4)b² − γ² = −(1/2)σ² and ((1/2)b − γ)e^{−γτ} − ((1/2)b + γ)e^{γτ} = −b sinh(γτ) − 2γ cosh(γτ), this gives

φ(t) = (2c_1/σ²) e^{−(1/2)bτ} [b sinh(γτ) + 2γ cosh(γτ)],

while

φ′(t) = c_1 e^{−(1/2)bτ}(e^{−γτ} − e^{γτ}) = −2c_1 e^{−(1/2)bτ} sinh(γτ).

Thus

C(t,T) = −2φ′(t)/(σ²φ(t)) = sinh(γ(T−t)) / (γ cosh(γ(T−t)) + (1/2)b sinh(γ(T−t))),

which is (6.5.16).
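As a numerical cross-check, one can integrate the Riccati equation (6.5.14), written in τ = T − t as dC/dτ = 1 − bC − (σ²/2)C² with C = 0 at τ = 0, and compare against the sinh/cosh closed form of (6.5.16); the parameter values below are arbitrary.

```python
import math

b, sigma = 0.8, 0.3                           # illustrative constants
gamma = 0.5 * math.sqrt(b**2 + 2 * sigma**2)

def rhs(C):
    # From (6.5.14) in tau = T - t: dC/dtau = 1 - b C - (sigma^2 / 2) C^2
    return 1 - b * C - 0.5 * sigma**2 * C**2

tau_end, n = 2.0, 20_000
h = tau_end / n
C = 0.0                                       # terminal condition C(T, T) = 0
for _ in range(n):                            # classical 4th-order Runge-Kutta
    k1 = rhs(C); k2 = rhs(C + h * k1 / 2); k3 = rhs(C + h * k2 / 2); k4 = rhs(C + h * k3)
    C += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

sh, ch = math.sinh(gamma * tau_end), math.cosh(gamma * tau_end)
C_closed = sh / (gamma * ch + 0.5 * b * sh)   # formula (6.5.16)
print(C, C_closed)
```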
(f) From (6.5.15) and (6.9.8), we have

A′(t,T) = 2aφ′(t)/(σ²φ(t)).

Replace t by s in this equation, integrate from s = t to s = T, and show that A(t,T) is given by (6.5.17).

Proof. From (6.5.15) and (6.9.8), we have A′(t,T) = 2aφ′(t)/(σ²φ(t)) = −aC(t,T). We replace t by s, integrate from s = t to s = T, and use the fact that A(T,T) = 0 to get

A(t,T) = a ∫_t^T C(u,T) du.   (0.93)

Substituting the expression for C(t,T) obtained in part (e) and carrying out the integration, we get (6.5.17).
(6.7) (Heston stochastic volatility model) Suppose that under a risk-neutral measure P̃ a stock price is governed by

dS(t) = rS(t) dt + √V(t) S(t) dW̃_1(t),

where the interest rate r is constant and the volatility √V(t) is itself a stochastic process governed by the equation

dV(t) = (a − bV(t)) dt + σ√V(t) dW̃_2(t).

The parameters a, b and σ are positive constants, and W̃_1(t) and W̃_2(t) are correlated Brownian motions under P̃ with

dW̃_1(t) dW̃_2(t) = ρ dt

for some ρ ∈ (−1,1). Because the two-dimensional process (S(t),V(t)) is governed by the pair of stochastic differential equations (6.9.23) and (6.9.24), it is a two-dimensional Markov process.

At time t, the risk-neutral price of a call expiring at time T ≥ t in this stochastic volatility model is c(t,S(t),V(t)) = Ẽ[e^{−r(T−t)}(S(T) − K)^+ | F(t)], 0 ≤ t ≤ T.

This problem shows that c(t,s,v) satisfies the partial differential equation

c_t + rsc_s + (a − bv)c_v + (1/2)s²v c_ss + ρσsv c_sv + (1/2)σ²v c_vv = rc   (0.94)

in the region 0 ≤ t < T, s ≥ 0, v ≥ 0, with the terminal condition c(T,s,v) = (s − K)^+ for all s ≥ 0, v ≥ 0.
(a) Show that e^{−rt}c(t,S(t),V(t)) is a martingale under P̃, and use this fact to obtain (0.94).

Proof. e^{−rt}c(t,S(t),V(t)) = Ẽ[e^{−rT}(S(T) − K)^+ | F(t)]. Using iterated conditioning, for any 0 ≤ s ≤ t ≤ T we have

Ẽ[e^{−rt}c(t,S(t),V(t)) | F(s)] = Ẽ[Ẽ[e^{−rT}(S(T) − K)^+ | F(t)] | F(s)]
                               = Ẽ[e^{−rT}(S(T) − K)^+ | F(s)] = e^{−rs}c(s,S(s),V(s)).

This shows that e^{−rt}c(t,S(t),V(t)) is a martingale under P̃. Observe that dS(t) dS(t) = V(t)S²(t) dt, dV(t) dV(t) = σ²V(t) dt and dS(t) dV(t) = ρσV(t)S(t) dt. Next, we use the two-dimensional Itô-Doeblin formula (4.6.8) together with these observations and set the dt term to zero to get (0.94).
(b) Suppose there are functions f (t,x,v) and g(t,x,v) satisfying
ft + (r +v2
)fx + (a− bv − ρσv)fv +12vfxx + ρσvfxv +
12σ2vfvv = 0, (0.95)
gt + (r − v2
)gx + (a− bv)gv +12vgxx + ρσvgxv +
12σ2vgvv = 0, (0.96)
in the region 0 ≤ t < T , −∞ < x <∞, and v ≥ 0. Show that if we define
c(t, s,v) = sf (t, logs,v)− e−r(T−t)Kg(t, logs,v),
then c(t, s,v) satisfies the partial differential equation (0.94).
Proof. For brevity we suppress the dependence of c, f, g on their arguments. Let s = e^x. We have

c_t = s f_t − rKe^{-r(T-t)}g − Ke^{-r(T-t)}g_t,
rs c_s = rsf + rs f_x − rKe^{-r(T-t)}g_x,
(a − bv)c_v = (a − bv)s f_v − (a − bv)Ke^{-r(T-t)}g_v,
(1/2)s^2 v c_ss = (1/2)sv f_x + (1/2)sv f_xx − (1/2)vKe^{-r(T-t)}g_xx + (1/2)vKe^{-r(T-t)}g_x,
ρσsv c_sv = ρσsv f_v + ρσsv f_xv − ρσvKe^{-r(T-t)}g_xv,
(1/2)σ^2 v c_vv = (1/2)σ^2 vs f_vv − (1/2)σ^2 vKe^{-r(T-t)}g_vv.    (0.97)

Adding all the equations in (0.97), using (0.95), (0.96) and the fact that c = sf − e^{-r(T-t)}Kg, we see that c satisfies (0.94).
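The cancellation in (0.97) can also be verified symbolically. The following sketch (our own; the names F, G and the change of variables are ours) checks with sympy that the operator of (0.94) applied to c = sF − Ke^{-r(T-t)}G splits exactly into s times the operator of (0.95) minus Ke^{-r(T-t)} times the operator of (0.96):

```python
import sympy as sp

# We work directly in the variable s; since x = log s, the x-derivatives
# translate as f_x = s F_s and f_xx = s^2 F_ss + s F_s, where
# F(t,s,v) := f(t, log s, v), and similarly for G and g.
t, s, v, r, a, b, sig, rho, K, T = sp.symbols('t s v r a b sigma rho K T', positive=True)
F = sp.Function('F')(t, s, v)   # F(t,s,v) = f(t, log s, v)
G = sp.Function('G')(t, s, v)   # G(t,s,v) = g(t, log s, v)
c = s * F - K * sp.exp(-r * (T - t)) * G

# Left-hand side of (0.94) minus rc, applied to c
Lc = (sp.diff(c, t) + r * s * sp.diff(c, s) + (a - b * v) * sp.diff(c, v)
      + v * s**2 / 2 * sp.diff(c, s, 2) + rho * sig * s * v * sp.diff(c, s, v)
      + sig**2 * v / 2 * sp.diff(c, v, 2) - r * c)

# Equation (0.95) for f, rewritten in s-coordinates via f_x = s F_s etc.
Lf = (sp.diff(F, t) + (r + v) * s * sp.diff(F, s)
      + (a - b * v + rho * sig * v) * sp.diff(F, v)
      + v * s**2 / 2 * sp.diff(F, s, 2) + rho * sig * v * s * sp.diff(F, s, v)
      + sig**2 * v / 2 * sp.diff(F, v, 2))

# Equation (0.96) for g in s-coordinates
Lg = (sp.diff(G, t) + r * s * sp.diff(G, s) + (a - b * v) * sp.diff(G, v)
      + v * s**2 / 2 * sp.diff(G, s, 2) + rho * sig * v * s * sp.diff(G, s, v)
      + sig**2 * v / 2 * sp.diff(G, v, 2))

# The operator splits exactly, so if f and g solve (0.95) and (0.96),
# then c solves (0.94).
residual = sp.expand(Lc - (s * Lf - K * sp.exp(-r * (T - t)) * Lg))
print(residual)  # 0
```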
(c) Suppose a pair of processes (X(t),V(t)) is governed by the stochastic differential equations

dX(t) = (r + V(t)/2)dt + √V(t) dW_1(t),
dV(t) = (a − bV(t) + ρσV(t))dt + σ√V(t) dW_2(t),

where W_1(t) and W_2(t) are Brownian motions under some probability measure P with dW_1(t)dW_2(t) = ρ dt. Define

f(t,x,v) = E^{t,x,v}[1_{X(T) ≥ log K}].

Show that f(t,x,v) satisfies the partial differential equation (0.95) and the boundary condition

f(T,x,v) = 1_{x ≥ log K} for all x ∈ R, v ≥ 0.
Proof. Define h(X(T),V(T)) = 1_{X(T) ≥ log K}, where h : R × [0,∞) → R. Clearly, h is Borel measurable. Because (X(t),V(t)) together solve a system of stochastic differential equations, (X(t),V(t)) is a two-dimensional Markov process, and thus, by a two-dimensional analogue of Theorem 6.3.1, f(t,X(t),V(t)) = E[h(X(T),V(T)) | F(t)] is a martingale.
We can now use the two-dimensional Itô-Doeblin formula (4.6.8), set the dt term to zero, and obtain (0.95). Since f(t,X(t),V(t)) is a martingale, the terminal condition at t = T is f(T,x,v) = 1_{x ≥ log K} for all (x,v) ∈ R × [0,∞), i.e. for all (x,v) that can be reached by (X(T),V(T)).
(d) Suppose a pair of processes (X(t),V(t)) is governed by the stochastic differential equations

dX(t) = (r − V(t)/2)dt + √V(t) dW_1(t),
dV(t) = (a − bV(t))dt + σ√V(t) dW_2(t),

where W_1(t) and W_2(t) are Brownian motions under some probability measure P with dW_1(t)dW_2(t) = ρ dt. Define

g(t,x,v) = E^{t,x,v}[1_{X(T) ≥ log K}].

Show that g(t,x,v) satisfies the partial differential equation (0.96) and the boundary condition

g(T,x,v) = 1_{x ≥ log K} for all x ∈ R, v ≥ 0.
Proof. The proof for this part follows the same outline as in part (c). We first show that g(t,X(t),V(t)) = E[1_{X(T) ≥ log K} | F(t)] is a martingale. Then we apply the two-dimensional Itô-Doeblin formula and set the dt term to zero to get (0.96). Since g(t,X(t),V(t)) is a martingale, the terminal condition at t = T is g(T,x,v) = 1_{x ≥ log K} for all (x,v) ∈ R × [0,∞), i.e. for all (x,v) that can be reached by (X(T),V(T)).
(e) Show that with f(t,x,v) and g(t,x,v) as in part (c) and part (d) above, the function c(t,s,v) of part (b) satisfies the terminal condition c(T,s,v) = (s − K)^+ for all s ≥ 0, v ≥ 0.

Proof. Let s = e^x. We have

c(T,s,v) = s f(T, log s, v) − K g(T, log s, v)
= s·1_{log s ≥ log K} − K·1_{log s ≥ log K}
= (s − K)1_{s ≥ K}
= (s − K)^+ for all s ≥ 0, v ≥ 0.
(6.10) (Implying the volatility surface.) Assume that a stock price evolves according to the stochastic differential equation

dS(u) = rS(u)du + σ(u,S(u))S(u) dW̃(u),

where the interest rate r is constant, the volatility σ(u,x) is a function of time and the underlying stock price, and W̃ is a Brownian motion under the risk-neutral probability measure P. This is a special case of the stochastic differential equation (6.9.46) with β(u,x) = rx and γ(u,x) = σ(u,x)x. Let p(t,T,x,y) denote the transition density.
According to Exercise 6.9, the transition density p(t,T,x,y) satisfies the Kolmogorov forward equation

∂/∂T p(t,T,x,y) = −∂/∂y (ry p(t,T,x,y)) + (1/2) ∂^2/∂y^2 (σ^2(T,y)y^2 p(t,T,x,y)).    (0.98)
Let

c(0,T,x,K) = e^{-rT} ∫_K^∞ (y − K)p(0,T,x,y)dy    (0.99)

denote the time-zero price of a call expiring at time T, struck at K, when the initial stock price is S(0) = x. Note that

c_T(0,T,x,K) = −rc(0,T,x,K) + e^{-rT} ∫_K^∞ (y − K)p_T(0,T,x,y)dy.    (0.100)
(a) Integrate once by parts to show that

−∫_K^∞ (y − K) ∂/∂y (ry p(0,T,x,y))dy = ∫_K^∞ ry p(0,T,x,y)dy.    (0.101)

You may assume that

lim_{y→∞} (y − K)ry p(0,T,x,y) = 0.    (0.102)
Proof. As indicated, we integrate by parts once and use the fact that the boundary term vanishes (at y = K because the factor y − K is zero there, and as y → ∞ by (0.102)) to obtain (0.101).
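Written out (our own display of the step), with the boundary term shown explicitly:

```latex
-\int_K^\infty (y-K)\,\frac{\partial}{\partial y}\bigl(r y\,p(0,T,x,y)\bigr)\,dy
  = -\Bigl[(y-K)\,r y\,p(0,T,x,y)\Bigr]_{y=K}^{y\to\infty}
    + \int_K^\infty r y\,p(0,T,x,y)\,dy.
```

The bracketed term is zero at both limits, leaving exactly (0.101).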
(b) Integrate by parts and then integrate again to show that

(1/2)∫_K^∞ (y − K) ∂^2/∂y^2 (σ^2(T,y)y^2 p(0,T,x,y))dy = (1/2)σ^2(T,K)K^2 p(0,T,x,K).    (0.103)

You may assume that

lim_{y→∞} (y − K) ∂/∂y (σ^2(T,y)y^2 p(0,T,x,y)) = 0,
lim_{y→∞} σ^2(T,y)y^2 p(0,T,x,y) = 0.    (0.104)
Proof. We integrate by parts once and then integrate directly to get

(1/2)∫_K^∞ (y − K) ∂^2/∂y^2 (σ^2(T,y)y^2 p(0,T,x,y))dy
= −(1/2)∫_K^∞ ∂/∂y (σ^2(T,y)y^2 p(0,T,x,y))dy    (using the first limit in (0.104))
= (1/2)σ^2(T,K)K^2 p(0,T,x,K)    (using the second limit in (0.104))    (0.105)
(c) Now use (0.100), (0.99), (0.98), (0.101), (0.103), and Exercise 5.9 of Chapter 5 in that order to obtain the equation

c_T(0,T,x,K) = e^{-rT} rK ∫_K^∞ p(0,T,x,y)dy + (1/2)e^{-rT} σ^2(T,K)K^2 p(0,T,x,K)
= −rK c_K(0,T,x,K) + (1/2)σ^2(T,K)K^2 c_KK(0,T,x,K).    (0.106)

This is the end of the problem. Note that under the assumption that c_KK(0,T,x,K) ≠ 0, (0.106) can be solved for the volatility term σ^2(T,K) in terms of the quantities c_T(0,T,x,K), c_K(0,T,x,K), and c_KK(0,T,x,K), which can be inferred from market prices.
Proof. As indicated, we first use (0.100), (0.99), (0.98), (0.101) and (0.103) to get

c_T(0,T,x,K) = e^{-rT} rK ∫_K^∞ p(0,T,x,y)dy + (1/2)e^{-rT} σ^2(T,K)K^2 p(0,T,x,K).

We now use the result of Exercise 5.9, i.e. c_K(0,T,x,K) = −e^{-rT} ∫_K^∞ p(0,T,x,y)dy and c_KK(0,T,x,K) = e^{-rT} p(0,T,x,K), to get (0.106).
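As a numerical illustration (our own sketch, not part of the exercise): solving (0.106) for the volatility gives σ^2(T,K) = (c_T + rK c_K)/((1/2)K^2 c_KK). Applied to call prices generated by the Black-Scholes formula with a constant volatility of 0.2, finite differences should recover that constant. The function bs_call and all parameter values below are our own choices.

```python
import math

def bs_call(T, K, x=100.0, r=0.05, sig=0.2):
    # Standard Black-Scholes call price with S(0) = x and constant volatility sig
    d1 = (math.log(x / K) + (r + 0.5 * sig**2) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal cdf
    return x * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# Central finite differences for c_T, c_K, c_KK, then invert (0.106)
T, K, r, h = 1.0, 100.0, 0.05, 1e-3
cT = (bs_call(T + h, K) - bs_call(T - h, K)) / (2 * h)
cK = (bs_call(T, K + h) - bs_call(T, K - h)) / (2 * h)
cKK = (bs_call(T, K + h) - 2 * bs_call(T, K) + bs_call(T, K - h)) / h**2
local_vol = math.sqrt((cT + r * K * cK) / (0.5 * K**2 * cKK))
print(f"recovered sigma(T,K): {local_vol:.4f}")  # ≈ 0.2
```

Constant volatility is the simplest special case of the local volatility model, so the implied surface collapses back to the input volatility, as expected.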
E-mail address: [email protected]