8/9/2019 Stochastic Calculus, With Applications to Finance, PDE, And Potential Theory
Math 323 Fall 2001
Richard Bass
1. Basics of probability.
In this section we give some preliminaries on probabilistic terminology, independence, and Gaussian random variables.

Given a space $\Omega$ and a $\sigma$-field (or $\sigma$-algebra) $\mathcal{F}$ on it, a probability measure, or just a probability, is a positive finite measure $P$ with total mass 1. $(\Omega, \mathcal{F}, P)$ is called a probability space. Elements of $\mathcal{F}$ are called events. Measurable functions from $\Omega$ to $\mathbb{R}$ are called random variables and are usually denoted $X$ or $Y$ instead of $f$ or $g$. The integral of $X$ with respect to $P$ is called the expectation of $X$ or the expected value of $X$, and $\int X(\omega)\, P(d\omega)$ is often written $E X$, while $\int_A X(\omega)\, P(d\omega)$ is often written $E[X; A]$. If an event occurs with probability one, one says almost surely and writes a.s. The indicator of the set $A$ is the function or random variable, denoted $1_A$, that is 1 on $A$ and 0 on the complement. The complement of $A$ is denoted $A^c$.

If $A_n$ is a sequence of sets, $(A_n \text{ i.o.})$, read "$A_n$ infinitely often," is defined to be $\cap_{j=1}^\infty \cup_{n=j}^\infty A_n$. The following easy fact is called the Borel-Cantelli lemma, or sometimes the first half of the Borel-Cantelli lemma.
Proposition 1.1. If $\sum_{n=1}^\infty P(A_n) < \infty$, then $P(A_n \text{ i.o.}) = 0$.

Proof. For each $j$, $P(A_n \text{ i.o.}) \le P(\cup_{n=j}^\infty A_n) \le \sum_{n=j}^\infty P(A_n)$, which tends to 0 as $j \to \infty$ since the series converges.
Proposition 1.3. If $g$ is convex and $X$ and $g(X)$ are integrable, then
$$E\, g(X) \ge g(E X).$$

Proof. If $g$ is convex, then $g$ lies above all its tangent lines. So for each $x_0$, there exists $c$ such that $g(X) \ge g(x_0) + c(X - x_0)$. Letting $x_0 = E X$ and taking expectations on both sides, we obtain our result.
The law or distribution of $X$ is the probability measure $P_X$ on $\mathbb{R}$ given by
$$P_X(A) = P(X \in A). \qquad (1.1)$$
Given any measure $\mu$ on $\mathbb{R}$ with total mass 1, we can construct a random variable $X$ such that $P_X = \mu$: let $\Omega = [0, 1]$ and let $P$ be Lebesgue measure on $[0, 1]$. For $\omega \in [0, 1]$, let
$$X(\omega) = \inf\{t : \mu((-\infty, t]) \ge \omega\}. \qquad (1.2)$$
Then $P_X((-\infty, a]) = P(X \le a) = \mu((-\infty, a])$ for each $a$, hence $P_X = \mu$.
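The construction in (1.2) is the familiar inverse transform sampling. A quick numerical sketch, assuming numpy is available (the exponential target law and the sample size are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target law mu = Exp(1): mu((-inf, t]) = 1 - exp(-t) for t >= 0,
# so X(omega) = inf{t : mu((-inf, t]) >= omega} = -log(1 - omega).
omega = rng.uniform(0.0, 1.0, size=100_000)   # P = Lebesgue measure on [0, 1]
X = -np.log1p(-omega)

# The law of X is mu: mean 1 and P(X <= a) = 1 - e^{-a}.
assert abs(X.mean() - 1.0) < 0.02
a = 1.5
assert abs((X <= a).mean() - (1 - np.exp(-a))) < 0.01
```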
Proposition 1.4. If $f \ge 0$ or $f$ is bounded, then
$$E f(X) = \int f(x)\, P_X(dx).$$

Proof. If $f = 1_{(-\infty, a]}$, then
$$E f(X) = P(X \le a) = P_X((-\infty, a]) = \int f(x)\, P_X(dx). \qquad (1.3)$$
By linearity we obtain our result for simple functions $f$. Taking limits then gives us our result for $f$ bounded or $f$ nonnegative.
A random vector is just a measurable map from $\Omega$ to $\mathbb{R}^d$, and the definition of law and Proposition 1.4 extend to this case.
We will frequently use the following equality.
Proposition 1.5. If $X \ge 0$, then
$$E X^p = \int_0^\infty p \lambda^{p-1} P(X > \lambda)\, d\lambda. \qquad (1.4)$$

Proof. By Fubini's theorem, the right-hand side is equal to
$$E \int_0^\infty p \lambda^{p-1} 1_{(X > \lambda)}\, d\lambda = E \int_0^X p \lambda^{p-1}\, d\lambda,$$
which is equal to the left-hand side.
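Formula (1.4) is easy to check numerically; the sketch below (numpy assumed; uniform $X$ and $p = 2$ are my choices) compares both sides on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=200_000)   # any nonnegative X will do
p = 2.0

# Right-hand side: Riemann sum of p * lam^{p-1} * P(X > lam) over lam
lam = np.linspace(0.0, 1.0, 2001)[1:]
Xs = np.sort(X)
tail = 1.0 - np.searchsorted(Xs, lam, side="right") / len(Xs)   # P(X > lam)
rhs = (p * lam ** (p - 1) * tail).sum() * (lam[1] - lam[0])

lhs = (X ** p).mean()   # E X^p; the exact value here is 1/3
assert abs(lhs - rhs) < 0.01
assert abs(lhs - 1 / 3) < 0.01
```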
Two events $A, B$ are independent if $P(A \cap B) = P(A)P(B)$. This definition generalizes to $n$ events: $A_1, \ldots, A_n$ are independent if
$$P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_j}) = P(A_{i_1}) P(A_{i_2}) \cdots P(A_{i_j})$$
for every subset $\{i_1, \ldots, i_j\}$ of $\{1, 2, \ldots, n\}$. A $\sigma$-field $\mathcal{F}$ is independent of a $\sigma$-field $\mathcal{G}$ if each $A \in \mathcal{F}$ is independent of each $B \in \mathcal{G}$. The $\sigma$-field generated by $X$, denoted $\sigma(X)$, is the collection $\{(X \in A) : A \text{ Borel}\}$. Two random variables $X$ and $Y$ are independent if the $\sigma$-fields generated by $X$ and $Y$ are independent. The notions of an event and a random variable being independent, or of a random variable and a $\sigma$-field being independent, are defined in the obvious way. Note that if $X$ and $Y$ are independent and $f$ and $g$ are Borel measurable functions, then $f(X)$ and $g(Y)$ are independent.
An example of independent random variables is to let $\Omega = [0, 1]^2$, $P$ Lebesgue measure, $X$ a function of just the first variable, and $Y$ a function of just the second variable. In fact, it can be shown that independent random variables can always be represented by means of some suitable product space.
Proposition 1.6. If $X$, $Y$, and $XY$ are integrable and $X$ and $Y$ are independent, then
$$E\, XY = (E X)(E Y).$$

Proof. If $X$ is of the form $\sum_{i=1}^I a_i 1_{A_i}$, $Y$ is of the form $\sum_{j=1}^J b_j 1_{B_j}$, and $X$ and $Y$ are independent, then by linearity and the definition of independence, $E\, XY = (E X)(E Y)$. Approximating nonnegative $X$ and $Y$ by simple random variables of this form, we obtain our result by monotone convergence. The case of general $X$ and $Y$ then follows by linearity.
The characteristic function of a random variable $X$ is the Fourier transform of its law: $\int e^{iux}\, P_X(dx) = E\, e^{iuX}$ (we are using Proposition 1.4 here). If $X$ and $Y$ are independent, so are $e^{iuX}$ and $e^{ivY}$, and hence by Proposition 1.6, $E\, e^{i(uX + vY)} = E\, e^{iuX}\, E\, e^{ivY}$. Thus when $X$ and $Y$ are independent, the joint characteristic function of $X$ and $Y$ factors into the product of the respective characteristic functions.

The converse also holds.
Proposition 1.7. If $E\, e^{i(uX + vY)} = E\, e^{iuX}\, E\, e^{ivY}$ for all $u$ and $v$, then $X$ and $Y$ are independent random variables.

Proof. Let $\widetilde X$ be a random variable with the same law as $X$, let $\widetilde Y$ be one with the same law as $Y$, and let $\widetilde X, \widetilde Y$ be independent. (We let $\Omega = [0, 1]^2$, $P$ Lebesgue measure, $\widetilde X$ a function of the first variable, and $\widetilde Y$ a function of the second variable, defined as in (1.2).) Then $E\, e^{i(u\widetilde X + v\widetilde Y)} = E\, e^{iu\widetilde X}\, E\, e^{iv\widetilde Y}$. Since $X$ and $\widetilde X$ have the same law, they have the same characteristic function, and similarly for $Y$ and $\widetilde Y$. Therefore $(X, Y)$ has the same joint characteristic function as $(\widetilde X, \widetilde Y)$. By the uniqueness of the Fourier transform, $(X, Y)$ has the same joint law as $(\widetilde X, \widetilde Y)$, which is easily seen to imply that $X$ and $Y$ are independent.
The second half of the Borel-Cantelli lemma is the following assertion.
Proposition 1.8. If $A_n$ is a sequence of independent events and $\sum_{n=1}^\infty P(A_n) = \infty$, then $P(A_n \text{ i.o.}) = 1$.

Proof. Note
$$P(\cup_{n=j}^N A_n) = 1 - P(\cap_{n=j}^N A_n^c) = 1 - \prod_{n=j}^N P(A_n^c) = 1 - \prod_{n=j}^N (1 - P(A_n)) \ge 1 - \exp\Big(-\sum_{n=j}^N P(A_n)\Big) \to 1$$
as $N \to \infty$. So $P(A_n \text{ i.o.}) = \lim_{j \to \infty} P(\cup_{n=j}^\infty A_n) = 1$.
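The inequality used in the proof, $\prod (1 - P(A_n)) \le \exp(-\sum P(A_n))$, can be seen concretely. Taking $P(A_n) = 1/n$ (my illustrative choice; the sum diverges), the probability of the union of independent events over $2 \le n \le N$ is computable in closed form:

```python
import numpy as np

N = 100_000
p = 1.0 / np.arange(2, N + 1)            # P(A_n) = 1/n, n = 2, ..., N; sum diverges

prob_union = 1.0 - np.prod(1.0 - p)      # P(union A_n) for independent A_n
bound = 1.0 - np.exp(-p.sum())           # the lower bound from the proof

assert prob_union >= bound               # since 1 - x <= e^{-x}
assert prob_union > 0.9999               # the union probability tends to 1
```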
A mean zero, variance one, Gaussian or normal random variable is one where
$$P(X \in A) = \int_A \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx, \qquad A \text{ Borel}. \qquad (1.5)$$
We also describe such an $X$ as having an $N(0, 1)$ distribution or law, read as a "normal 0, 1" law. It is routine to check that such an $X$ has mean or expectation 0 and variance $E(X - EX)^2 = 1$. It is standard that $E\, e^{iuX} = e^{-u^2/2}$. Such random variables exist by the construction in (1.2), with
$$\mu(dx) = (2\pi)^{-1/2} e^{-x^2/2}\, dx.$$
$X$ has an $N(a, b^2)$ distribution if $X = bZ + a$ for some $Z$ having an $N(0, 1)$ distribution.

A sequence of random variables $X_1, \ldots, X_n$ is said to be jointly normal if there exist a sequence of independent $N(0, 1)$ random variables $Z_1, \ldots, Z_m$ and constants $b_{ij}$
and $a_i$ such that $X_i = \sum_{j=1}^m b_{ij} Z_j + a_i$, $i = 1, \ldots, n$. In matrix notation, $X = BZ + A$. For simplicity, in what follows let us take $A = 0$; the modifications for the general case are easy.

The covariance of two random variables $X$ and $Y$ is defined to be $E[(X - EX)(Y - EY)]$. Since we are assuming our normal random variables are mean 0, we can omit the centering at expectations. Given a sequence of mean 0 random variables, we can talk about the covariance matrix, which is $\mathrm{Cov}(X) = E[XX^t]$, where $X^t$ denotes the transpose of the vector $X$. In the above case, we see $\mathrm{Cov}(X) = E[(BZ)(BZ)^t] = E[BZZ^tB^t] = BB^t$, since $E\, ZZ^t = I$, the identity.
Let us compute the joint characteristic function $E\, e^{iu^tX}$ of the vector $X$, where $u$ is an $n$-dimensional vector. First, if $v$ is an $m$-dimensional vector,
$$E\, e^{iv^tZ} = E \prod_{j=1}^m e^{iv_j Z_j} = \prod_{j=1}^m E\, e^{iv_j Z_j} = \prod_{j=1}^m e^{-v_j^2/2} = e^{-v^tv/2},$$
using the independence of the $Z$'s. So
$$E\, e^{iu^tX} = E\, e^{iu^tBZ} = e^{-u^tBB^tu/2}. \qquad (1.6)$$
By taking $u = (0, \ldots, 0, a, 0, \ldots, 0)$ to be a constant times the unit vector in the $j$th coordinate direction, we deduce that each of the $X$'s is indeed normal. Taking $m = 2$, $a \in \mathbb{R}$, $B = \begin{pmatrix} b_1 & 0 \\ 0 & b_2 \end{pmatrix}$, and $u = (a, a)$, we see that the sum of an $N(0, b_1^2)$ and an independent $N(0, b_2^2)$ is an $N(0, b_1^2 + b_2^2)$. If $\mathrm{Cov}(X) = BB^t$ is a diagonal matrix, then the joint characteristic function of the $X$'s factors, and so by Proposition 1.7, the $X$'s would in this case be independent.
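The identity $\mathrm{Cov}(X) = BB^t$ and the addition rule for independent normals are easy to see in simulation (numpy assumed; the matrix $B$ and the sample sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
B = np.array([[1.0, 0.0],
              [2.0, 3.0]])                  # mixing matrix, X = BZ
Z = rng.standard_normal((2, 400_000))       # columns of independent N(0,1)'s
X = B @ Z                                   # jointly normal, mean 0

emp_cov = X @ X.T / Z.shape[1]              # empirical E[X X^t]
assert np.allclose(emp_cov, B @ B.T, atol=0.15)

# sum of independent N(0, 1) and N(0, 9) should be N(0, 10)
S = Z[0] + 3.0 * Z[1]
assert abs(S.var() - 10.0) < 0.1
```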
If $X_n$ are normal random variables converging in probability (i.e., in measure) to a random variable $X$, then $X$ is also normal. This follows since $E\, e^{iuX_n} \to E\, e^{iuX}$ by dominated convergence. The analogue for random vectors is also true.
2. Conditional expectation.
Suppose $X$ is integrable and $\mathcal{F}$-measurable, and $\mathcal{E}$ is a sub-$\sigma$-field of $\mathcal{F}$.

Definition 2.1. $E[X \mid \mathcal{E}]$ is a random variable $Y$ such that
(1) $Y$ is $\mathcal{E}$-measurable;
(2) $E[Y; A] = E[X; A]$ for all $A \in \mathcal{E}$.

As an example, suppose we flip a coin over and over, and let $\mathcal{F}$ be the events that are observable based on the first 16 flips, while $\mathcal{E}$ consists of those events observable based on the first 8 flips. If $X$ is the number of heads in 16 flips and $Z$ is the number of heads in the first 8 flips, then $E[X \mid \mathcal{E}]$ will be $4 + Z$, a random variable.
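The coin-flipping example is easy to simulate: conditioning on the first 8 flips amounts to averaging $X$ over all paths that agree on those flips (numpy assumed; the sample size is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
flips = rng.integers(0, 2, size=(200_000, 16))   # fair coin, 16 flips per path
X = flips.sum(axis=1)                            # heads among all 16 flips
Z = flips[:, :8].sum(axis=1)                     # heads among the first 8

# E[X | E] should be 4 + Z: given Z = z, the last 8 flips add 4 on average
for z in (2, 4, 6):
    assert abs(X[Z == z].mean() - (4 + z)) < 0.05
```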
Proposition 2.2. If $X$ is integrable, then $E[X \mid \mathcal{E}]$ exists and is unique up to a.s. equivalence.

Proof. For the uniqueness, if $Y_1, Y_2$ satisfy the definition, then $E[Y_1; A] = E[Y_2; A]$ for all $A \in \mathcal{E}$. Both $Y_1, Y_2$ are $\mathcal{E}$-measurable, hence $A = \{\omega : Y_1(\omega) > Y_2(\omega) + \varepsilon\} \in \mathcal{E}$. Then $0 = E[Y_1 - Y_2; A] \ge \varepsilon P(A)$, which implies $P(A) = 0$. Since $\varepsilon$ is arbitrary, this shows $Y_1 \le Y_2$ a.s. Using symmetry, we conclude $Y_1 = Y_2$ a.s.

To show existence, we may assume that $X \ge 0$; we do the general case by writing $X = X^+ - X^-$ and using linearity. Let $P_0$ be the restriction of $P$ to $\mathcal{E}$ and let $Q$ be defined by $Q(A) = E[X; A]$. $Q$ is a measure and it is clear that $Q$ is absolutely continuous with respect to $P_0$. Let $Y$ be the Radon-Nikodym derivative of $Q$ with respect to $P_0$. From the proof of the Radon-Nikodym theorem, $Y$ is $\mathcal{E}$-measurable. So if $A \in \mathcal{E}$, then $E[Y; A] = Q(A) = E[X; A]$.
Note for later use that if $X \ge 0$, then $E[X \mid \mathcal{E}] \ge 0$ a.s.

Proposition 2.3. (1) If $X$ is $\mathcal{E}$-measurable, then $E[X \mid \mathcal{E}] = X$.
(2) If $X$ is independent of $\mathcal{E}$, then $E[X \mid \mathcal{E}] = E X$.
(3) If the $A_i$ are disjoint sets whose union is $\Omega$, with $P(A_i) > 0$ for each $i$, and $\mathcal{E}$ is the $\sigma$-field generated by the $A_i$, then
$$E[X \mid \mathcal{E}](\omega) = \sum_i \frac{E[X; A_i]}{P(A_i)}\, 1_{A_i}(\omega).$$
(4) If $X$ and $XY$ are integrable and $Y$ is $\mathcal{E}$-measurable, then $E[XY \mid \mathcal{E}] = Y E[X \mid \mathcal{E}]$.
(5) If $\mathcal{D} \subset \mathcal{E} \subset \mathcal{F}$, then
$$E[E[X \mid \mathcal{D}] \mid \mathcal{E}] = E[X \mid \mathcal{D}] = E[E[X \mid \mathcal{E}] \mid \mathcal{D}].$$
(6) If $g$ is convex and $X$ and $g(X)$ are integrable, then
$$E[g(X) \mid \mathcal{E}] \ge g(E[X \mid \mathcal{E}]).$$
(3) ties together the definition from elementary probability courses with this one. If $\Omega = [0, 1)$, $P$ is Lebesgue measure, $f$ is a real-valued function, and for some $n$ we have $A_i = [\frac{i-1}{2^n}, \frac{i}{2^n})$, $i = 1, 2, \ldots, 2^n$, then $E[f \mid \mathcal{E}]$ is the function that on each $A_i$ equals the average of $f$ over $A_i$.
(6) is known as Jensen's inequality for conditional expectations. It has nothing to do with Jensen's formula in complex analysis.
Proof. (1) is clear from the definition.
(2) $E X$ is a constant, so it is $\mathcal{E}$-measurable. If $A \in \mathcal{E}$, since $X$ and $\mathcal{E}$ are independent, then $E[X; A] = E[X 1_A] = (E X)(E 1_A) = (E X) P(A)$. On the other hand, $E[E X; A] = (E X) P(A)$.

(3) It suffices to show that both sides have the same integral over each set $B \in \mathcal{E}$. If $B = A_{i_0}$ for some $i_0$, then
$$E\Big[\sum_i \frac{E[X; A_i]}{P(A_i)}\, 1_{A_i};\, A_{i_0}\Big] = E\Big[\frac{E[X; A_{i_0}]}{P(A_{i_0})}\, 1_{A_{i_0}};\, A_{i_0}\Big] = E[X; A_{i_0}]$$
as desired. By linearity and the fact that the $A_i$ are disjoint, we have
$$E\Big[\sum_i \frac{E[X; A_i]}{P(A_i)}\, 1_{A_i};\, B\Big] = E[X; B]$$
whenever $B$ is a finite union of the $A_i$.

Let $\mathcal{M}$ be the set of $B \in \mathcal{E}$ for which this equality holds. If $B_n \in \mathcal{M}$ with $B_n \uparrow B$, then $B \in \mathcal{M}$ by the monotone convergence theorem. If $B_n \in \mathcal{M}$ with $B_n \downarrow B$, then $B \in \mathcal{M}$ by the dominated convergence theorem. This means that $\mathcal{M}$ is what is known as a monotone class (cf. the proof of Fubini's theorem). By the monotone class lemma from measure theory, $\mathcal{M}$ contains the $\sigma$-field generated by the collection of finite unions of the $A_i$. Therefore $\mathcal{M}$ contains $\mathcal{E}$. (This sort of argument is called a monotone class argument.)
(4) If $Y = 1_A$ with $A \in \mathcal{E}$ and $B \in \mathcal{E}$, then
$$E[Y E[X \mid \mathcal{E}]; B] = E[E[X \mid \mathcal{E}]; A \cap B] = E[X; A \cap B] = E[XY; B].$$
Since this holds for each $B$, then $E[XY \mid \mathcal{E}] = Y E[X \mid \mathcal{E}]$ whenever $Y$ is of the form $1_A$, $A \in \mathcal{E}$. By linearity it holds for $Y$ of the form $Y = \sum_i c_i 1_{A_i}$, where the $c_i$ are constants and the $A_i$ are in $\mathcal{E}$. By taking limits, it holds for all $\mathcal{E}$-measurable random variables $Y$.

(5) $E[X \mid \mathcal{D}]$ is $\mathcal{D} \subset \mathcal{E}$ measurable, so the left hand equality holds by (1). If $A \in \mathcal{D}$, then $E[E[X \mid \mathcal{E}]; A] = E[X; A]$. So the right hand equality follows.

(6) If $g$ is convex, then $g$ lies above all its tangent lines. Given $x_0$, there exists $C(x_0)$ such that $g(x) \ge g(x_0) + C(x_0)(x - x_0)$. Note that for $C(x_0)$ we can use $\limsup_{h \downarrow 0} \frac{g(x_0 + h) - g(x_0)}{h}$, so $x_0 \mapsto C(x_0)$ is a Borel measurable function.

Let $x = X(\omega)$ and $x_0 = E[X \mid \mathcal{E}](\omega)$. So we have
$$g(X(\omega)) \ge g(E[X \mid \mathcal{E}](\omega)) + C(E[X \mid \mathcal{E}](\omega))\big(X(\omega) - E[X \mid \mathcal{E}](\omega)\big),$$
where $C(E[X \mid \mathcal{E}](\omega))$ is $\mathcal{E}$-measurable, since it is the composition of a Borel measurable function with an $\mathcal{E}$-measurable random variable.
We now take the conditional expectation with respect to $\mathcal{E}$:
$$E[g(X) \mid \mathcal{E}] \ge g(E[X \mid \mathcal{E}]) + C(E[X \mid \mathcal{E}])\, E[X - E[X \mid \mathcal{E}] \mid \mathcal{E}] = g(E[X \mid \mathcal{E}]).$$
(This is not a complete proof, because there are some integrability concerns. A homework problem will be to give a complete proof.)

Every $\sigma$-field must contain $\Omega$: that is part of the definition of a $\sigma$-field. Then
$$E\big[E[X \mid \mathcal{E}]\big] = E[E[X \mid \mathcal{E}]; \Omega] = E[X; \Omega] = E X.$$
3. Stopping times.
Definition 3.1. Suppose $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \mathcal{F}_2 \subset \cdots$. Then $N : \Omega \to \{0, 1, 2, \ldots\}$ is a stopping time if $(N \le k) \in \mathcal{F}_k$ for each $k$.
For example, if you drive south on Rte. 195 and I tell you to stop at the 2nd light
after crossing Rte. 275, when you get there, you know to stop. If I say to stop at the 2nd
light before the Windham town line, and you haven't been on this road before, you have
to look ahead and then come back to stop at the right place.
If one flips a coin repeatedly and N is the first flip where one gets a heads, this is
a stopping time.
Proposition 3.2. (1) The constant time $N \equiv n$ is a stopping time.
(2) If $M, N$ are stopping times, so are $M \vee N$ (the maximum of $M$ and $N$) and $M \wedge N$ (the minimum of $M$ and $N$).
(3) If $N_n \uparrow N$ and the $N_n$ are stopping times, then so is $N$.
(4) If $N_n \downarrow N$ and the $N_n$ are stopping times, then so is $N$.
(5) $N + n$ is a stopping time if $N$ is.

Proof. These are all quite easy. To prove (2), for example, note that $(M \vee N \le k) = (M \le k) \cap (N \le k) \in \mathcal{F}_k$ and $(M \wedge N \le k) = (M \le k) \cup (N \le k) \in \mathcal{F}_k$.
4. Discrete time martingales.
Definition 4.1. Suppose $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots$. $M_n$ is a martingale (with respect to the $\mathcal{F}_n$) if
(1) $E|M_n| < \infty$ for each $n$;
(2) $M_n$ is $\mathcal{F}_n$-measurable;
(3) $E[M_{n+1} \mid \mathcal{F}_n] = M_n$ for each $n$.
Theorem 4.2. Suppose $M_n$ is a martingale and $N$ is a stopping time bounded by a constant $K$. Then $E M_N = E M_0$.

The condition that $N$ be bounded can be replaced with other conditions but cannot be dispensed with completely. For example, in the double-or-nothing martingale, let $N$ be the first time $M_n$ is 0. Then $M_N \equiv 0$ and its expectation is not the same as $E M_0$.

Proof. We do the martingale case, the submartingale case being similar. $E M_N = \sum_{i=0}^K E[M_i; N = i]$. Since $(N = i) = (N \le i) \setminus (N \le i - 1) \in \mathcal{F}_i$, then $E[M_i; N = i] = E[M_{i+1}; N = i]$. Since $(N = i) \in \mathcal{F}_i \subset \mathcal{F}_{i+1}$, then $E[M_{i+1}; N = i] = E[M_{i+2}; N = i]$. Continuing, $E[M_i; N = i] = E[M_K; N = i]$. Therefore
$$E M_N = \sum_{i=0}^K E[M_i; N = i] = \sum_{i=0}^K E[M_K; N = i] = E M_K,$$
which equals $E M_0$ since $M$ is a martingale.
Note that if $M_n$ is a martingale, then $|M_n|$ is a submartingale. This is because $g(x) = |x|$ is convex, and by Proposition 2.3(6)
$$E[|M_{n+1}| \mid \mathcal{F}_n] \ge |E[M_{n+1} \mid \mathcal{F}_n]| = |M_n|.$$
Define $M_n^* = \max_{k \le n} |M_k|$.

Theorem 4.3. (Doob's inequalities) Let $M_n$ be a martingale.
(1) $P(M_n^* \ge a) \le E|M_n|/a$.
(2) If $1 < p < \infty$, then $E(M_n^*)^p \le \big(\frac{p}{p-1}\big)^p E|M_n|^p$.
Now multiply both sides by $p a^{p-1}$ and integrate over $a$ from 0 to $\infty$. On the left hand side we have
$$\int_0^\infty p a^{p-1} P(M_n^* \ge a)\, da = E \int_0^\infty p a^{p-1} 1_{(M_n^* \ge a)}\, da = E \int_0^{M_n^*} p a^{p-1}\, da = E (M_n^*)^p.$$
On the right hand side, we obtain
$$\int_0^\infty p a^{p-1} \frac{1}{a} E[|M_n|; M_n^* \ge a]\, da = p\, E \int_0^\infty a^{p-2} |M_n| 1_{(M_n^* \ge a)}\, da = p\, E \int_0^{M_n^*} a^{p-2} |M_n|\, da = \frac{p}{p-1} E\big[|M_n| (M_n^*)^{p-1}\big].$$
Let $q$ be such that $p^{-1} + q^{-1} = 1$. By Hölder's inequality, the last term on the right is bounded by
$$\frac{p}{p-1} \big(E |M_n|^p\big)^{1/p} \big(E (M_n^*)^{q(p-1)}\big)^{1/q}.$$
Note $q(p-1) = p$. So we have
$$E (M_n^*)^p \le \frac{p}{p-1} \big(E |M_n|^p\big)^{1/p} \big(E (M_n^*)^p\big)^{1/q}.$$
Provided it is finite, we divide both sides by $\big(E (M_n^*)^p\big)^{1/q}$ and take $p$th powers, and we are done.
The proof started with $P(M_n^* \ge a) \le \frac{1}{a} E[|M_n|; M_n^* \ge a]$. Let $K > 0$, and by considering the two cases $K > a$ and $K \le a$, we see that this inequality implies
$$P(M_n^* \wedge K \ge a) \le \frac{1}{a} E[|M_n| \wedge K; M_n^* \wedge K \ge a].$$
From the above argument, we obtain
$$E (M_n^* \wedge K)^p \le \Big(\frac{p}{p-1}\Big)^p E (|M_n| \wedge K)^p.$$
Now let $K \to \infty$ and use Fatou's lemma.
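Both of Doob's inequalities can be checked on simulated random walks (numpy assumed; the path count, the level $a$, and $p = 2$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
paths, n = 50_000, 200
M = np.cumsum(rng.choice([-1.0, 1.0], size=(paths, n)), axis=1)  # martingale
M_star = np.abs(M).max(axis=1)                                   # M_n^*

a = 30.0
assert (M_star >= a).mean() <= np.abs(M[:, -1]).mean() / a       # part (1)

p = 2.0
lhs = (M_star ** p).mean()
rhs = (p / (p - 1)) ** p * (np.abs(M[:, -1]) ** p).mean()        # part (2)
assert lhs <= rhs
```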
5. Framework for stochastic processes.
Let $(\Omega, \mathcal{F}, P)$ be a probability space. A stochastic process is a map $X$ from $[0, \infty) \times \Omega$ to the reals. We write $X_t = X_t(\omega) = X(t, \omega)$. There are various notions of measurability one could use. For example, one could require measurability of the map $X$ with respect to the product $\sigma$-field on $[0, \infty) \times \Omega$.
Doob's inequalities carry over to continuous time: suppose $M_t$ is a martingale with right continuous paths and $\lambda > 0$. Then
(1) $P(\sup_{s \le t} |M_s| > \lambda) \le E|M_t|/\lambda$;
(2) $E\big[\sup_{s \le t} |M_s|\big]^p \le \big(p/(p-1)\big)^p E|M_t|^p$ for $1 < p < \infty$.
Proof. We will prove (1); the second inequality is proved in the same way. Also, we will do the case where $M_t$ is a martingale, the submartingale case being nearly identical. Let $D_n = \{kt/2^n : 0 \le k \le 2^n\}$. It is clear that $\{M_s : s \in D_n\}$ is a discrete time martingale. By Doob's inequality for discrete time martingales,
$$P\big(\sup_{s \le t,\, s \in D_n} |M_s| > \lambda\big) \le \frac{E|M_t|}{\lambda}.$$
Since $M_t$ is right continuous, if $|M_s| > \lambda$ for some $s \le t$, then we can find an $s \in D_n$ for some $n$ for which $|M_s| > \lambda$. Thus if we let $n \to \infty$ and use dominated convergence, we obtain (1).
6. Brownian motion.
Definition 6.1. $X_t = X_t(\omega)$ is a one dimensional Brownian motion started at 0 if
(1) $X_0 = 0$ a.s.;
(2) $X_t - X_s$ is a normal random variable with mean 0 and variance $t - s$ whenever $s < t$;
(3) $X_t - X_s$ is independent of the $\sigma$-field generated by $\{X_r : r \le s\}$;
(4) with probability 1 the map $t \mapsto X_t(\omega)$ is continuous as a function of $t$.

There is some superfluity in the definition. For example, if $s \le t$, then
$$t - s = \mathrm{Var}(X_t - X_s) = \mathrm{Var}\, X_t + \mathrm{Var}\, X_s - 2\,\mathrm{Cov}(X_s, X_t) = t + s - 2\,\mathrm{Cov}(X_s, X_t)$$
from (2) and (1). So
$$\mathrm{Cov}(X_s, X_t) = s \qquad (6.1)$$
if $s \le t$. But then
$$\mathrm{Cov}(X_t - X_s, X_r) = \mathrm{Cov}(X_t, X_r) - \mathrm{Cov}(X_s, X_r) = r - r = 0$$
if $r < s < t$. If we also assume that the $X_t$ are jointly Gaussian, this then implies $X_t - X_s$ is independent of $X_r$, and (3) follows from this.
Proposition 6.2. If $X_t$ is a Brownian motion and $a > 0$, then so are
(1) $Y_t = aX_{t/a^2}$;
(2) $Z_t = tX_{1/t}$.

The first assertion is referred to as scaling, the second as time inversion.

Proof. (1) $Y_t$ has continuous paths, and if $s < t$, then
$$\mathrm{Cov}(Y_s, Y_t) = a^2\, \mathrm{Cov}(X_{s/a^2}, X_{t/a^2}) = a^2 (s/a^2) = s.$$
Since Gaussian random variables have their joint distribution determined by their means and covariances, this does it.
(2) Again, $Z_t$ has continuous paths. If $s < t$, then $1/t < 1/s$, so by (6.1), $\mathrm{Cov}(Z_s, Z_t) = st\, \mathrm{Cov}(X_{1/s}, X_{1/t}) = st(1/t) = s$, and we conclude as in (1).
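Scaling can be checked against the covariance formula (6.1): for $Y_t = aX_{t/a^2}$ one should find $\mathrm{Cov}(Y_s, Y_t) = s$ for $s < t$ (numpy assumed; the grid size, $a = 2$, and the times are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
paths, n, a = 20_000, 1000, 2.0
dt = 1.0 / n
X = np.cumsum(rng.standard_normal((paths, n)) * np.sqrt(dt), axis=1)

# Y_t = a * X_{t/a^2}; evaluate at s = 0.4 and t = 1.6, so s/a^2 = 0.1, t/a^2 = 0.4
s, t = 0.4, 1.6
Y_s = a * X[:, int(s / a**2 * n) - 1]
Y_t = a * X[:, int(t / a**2 * n) - 1]
assert abs((Y_s * Y_t).mean() - s) < 0.05   # Cov(Y_s, Y_t) = s
```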
Lemma 6.3. There exists $c_1$ such that if $\lambda \ge 1$ and $Z$ is a normal random variable with mean 0 and variance 1, then
$$\frac{1}{\lambda} e^{-\lambda^2/2} \ge P(Z > \lambda) \ge \frac{c_1}{\lambda} e^{-\lambda^2/2}$$
if $\lambda > \sqrt{2}$.
Proof. From the formula for the density of a normal random variable,
$$P(Z > \lambda) = \frac{1}{\sqrt{2\pi}} \int_\lambda^\infty e^{-y^2/2}\, dy \le \frac{1}{\sqrt{2\pi}} \int_\lambda^\infty \frac{y}{\lambda}\, e^{-y^2/2}\, dy \le \frac{1}{\lambda} e^{-\lambda^2/2},$$
which gives the left hand inequality. A similar argument shows that
$$\frac{1}{\sqrt{2\pi}} \int_\lambda^\infty \frac{1}{y^2}\, e^{-y^2/2}\, dy \le \frac{1}{\sqrt{2\pi}} \int_\lambda^\infty \frac{y}{\lambda^3}\, e^{-y^2/2}\, dy \le \frac{1}{\sqrt{2\pi}} \frac{1}{\lambda^3} e^{-\lambda^2/2}. \qquad (6.2)$$
Then using integration by parts,
$$\frac{1}{\sqrt{2\pi}} \int_\lambda^\infty e^{-y^2/2}\, dy = \frac{1}{\sqrt{2\pi}} \int_\lambda^\infty \frac{1}{y}\, y\, e^{-y^2/2}\, dy = \frac{1}{\sqrt{2\pi}} \frac{1}{\lambda}\, e^{-\lambda^2/2} - \frac{1}{\sqrt{2\pi}} \int_\lambda^\infty \frac{1}{y^2}\, e^{-y^2/2}\, dy,$$
and with (6.2) this yields the right hand inequality.
Less trivial is the following.

Lemma 6.4. If $X_t$ is a Brownian motion, then
$$P\big(\sup_{s \le t} X_s \ge \lambda\big) \le e^{-\lambda^2/2t}.$$

Proof. For any $a$ the process $\{e^{aX_t}\}$ is a submartingale: since $x \mapsto e^{ax}$ is convex, this follows by the same argument we used in the remark following Theorem 4.2.

By Doob's inequality,
$$P\big(\sup_{s \le t} X_s > \lambda\big) = P\big(\sup_{s \le t} e^{aX_s} > e^{a\lambda}\big) \le \frac{E\, e^{aX_t}}{e^{a\lambda}}.$$
Since $E\, e^{aY} = e^{a^2 \mathrm{Var}\, Y/2}$ if $Y$ is Gaussian with mean 0, it follows that the right side is bounded by $e^{-a\lambda} e^{a^2 t/2}$. If we now set $a = \lambda/t$, we obtain
$$P\big(\sup_{s \le t} X_s > \lambda\big) \le e^{-\lambda^2/2t}.$$
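Lemma 6.4 can be sanity-checked by simulation (numpy assumed; the grid and the level $\lambda = 2$ are illustrative; discretization only lowers the simulated supremum, so the inequality should hold with room to spare):

```python
import numpy as np

rng = np.random.default_rng(7)
paths, n, t, lam = 10_000, 1000, 1.0, 2.0
X = np.cumsum(rng.standard_normal((paths, n)) * np.sqrt(t / n), axis=1)

p_sup = (X.max(axis=1) >= lam).mean()       # estimate of P(sup_{s<=t} X_s >= lam)
assert p_sup <= np.exp(-lam ** 2 / (2 * t))
```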
7. Path properties of Brownian motion.
The paths of Brownian motion are continuous, but we will see that they are not differentiable. How continuous are they? A function $f : [0, 1] \to \mathbb{R}$ is said to be Hölder continuous of order $\alpha$ if there exists a constant $M$ such that
$$|f(t) - f(s)| \le M|t - s|^\alpha, \qquad s, t \in [0, 1].$$
We show the paths of Brownian motion are Hölder continuous of order $\alpha$ if $\alpha < \frac{1}{2}$ but not if $\alpha > \frac{1}{2}$. (They are also not Hölder continuous of order $\frac{1}{2}$; we will see this from the law of the iterated logarithm.)
Proposition 7.1. If $\alpha < \frac{1}{2}$, the paths of Brownian motion are Hölder continuous of order $\alpha$.

Proof. Let
$$A_n = \big\{\exists k \le 2^n - 1 : \sup_{k/2^n \le t \le (k+1)/2^n} |X_t - X_{k/2^n}| > 2^{-n\alpha}\big\}.$$
Since the increments of Brownian motion have a distribution depending only on the length of the increment,
$$P(A_n) \le 2^n\, P\big(\sup_{t \le 1/2^n} |X_t - X_0| > 2^{-n\alpha}\big) \le 2 \cdot 2^n \exp\big(-2^{-2n\alpha}/2(2^{-n})\big).$$
Here we used Lemma 6.4 applied to $(X_t - X_0)$ and to $-(X_t - X_0)$. Since $\alpha < \frac{1}{2}$, this is less than
$$2^{n+1} \exp\big(-2^{n(1 - 2\alpha)}/2\big) \le 2^{n+1} e^{-3n}$$
if $n$ is large. Hence $\sum_n P(A_n) < \infty$, so by the Borel-Cantelli lemma, almost surely $\omega$ lies in only finitely many of the $A_n$; this control of the increments over all small dyadic intervals, together with the continuity of the paths, yields Hölder continuity of order $\alpha$ on $[0, 1]$.
Note that if $Z$ is a normal random variable with mean 0 and variance 1, then
$$P(|Z| \le r) = \frac{1}{\sqrt{2\pi}} \int_{-r}^r e^{-y^2/2}\, dy \le 2r. \qquad (7.1)$$
Proposition 7.2. If $\alpha > \frac{1}{2}$, then Brownian motion is not Hölder continuous of order $\alpha$.

Proof. Let
$$A_n = \big\{|X_{(k+1)/2^n} - X_{k/2^n}| \le n2^{-n\alpha} \text{ for all } k \le 2^n - 1\big\}.$$
We have
$$P(|X_{1/2^n}| \le n2^{-n\alpha}) = P\big(2^{n/2}|X_{1/2^n}| \le n2^{n(\frac{1}{2} - \alpha)}\big) \le 2n2^{n(\frac{1}{2} - \alpha)}, \qquad (7.2)$$
using (7.1) and the fact that $2^{n/2} X_{1/2^n}$ is a mean 0 variance 1 normal random variable. The right hand side of (7.2) will be less than $\frac{1}{2}$ for $n$ large since $\alpha > \frac{1}{2}$. By the independence and the stationarity of the increments,
$$P(A_n) \le \big(\tfrac{1}{2}\big)^{2^n - 1}$$
for $n$ large. This is summable, and by Borel-Cantelli, $P(A_n \text{ i.o.}) = 0$.

Given $\omega$, there exists $N$ depending on $\omega$ such that if $n \ge N$, then $\omega \notin A_n$. Suppose $X_t(\omega)$ were Hölder continuous of order $\alpha$, so that for some $M$ we had $|X_t - X_s| \le M|t - s|^\alpha$ for all $s, t \in [0, 1]$. Choose $n$ larger than both $M$ and $N$. Since $\omega \notin A_n$, there exists $k$ such that
$$|X_{(k+1)/2^n} - X_{k/2^n}| \ge n2^{-n\alpha} > M2^{-n\alpha}.$$
If we take $s = k/2^n$ and $t = (k+1)/2^n$, we have then found $s$ and $t$ such that $|X_t - X_s| > M|t - s|^\alpha$, a contradiction.
One of the most beautiful theorems in probability theory is the law of the iterated logarithm (LIL). It shows precisely how Brownian motion oscillates.

Theorem 7.3. (LIL)
(1) $\displaystyle \limsup_{t \to \infty} \frac{|X_t|}{\sqrt{2t \log \log t}} = 1$, a.s.
(2) $\displaystyle \limsup_{t \to 0} \frac{|X_t|}{\sqrt{2t \log \log (1/t)}} = 1$, a.s.
Proof. (2) follows from (1) by time inversion.
Proof of (1), upper bound: Let $\varepsilon > 0$ and then choose $q > 1$ but close enough to 1 so that $(1 + \varepsilon)^2/q > 1$. Let
$$A_n = \Big(\sup_{s \le q^n} X_s > (1 + \varepsilon)\sqrt{2q^{n-1} \log \log q^{n-1}}\Big).$$
By Lemma 6.4,
$$P(A_n) \le \exp\Big(-\frac{(1 + \varepsilon)^2\, 2q^{n-1} \log \log q^{n-1}}{2q^n}\Big) = \exp\big(-(1 + \varepsilon)^2 (\log(n-1) + \log \log q)/q\big) \le \frac{1}{(n-1)^{(1+\varepsilon)^2/q}}.$$
This is summable in $n$, so $\sum_n P(A_n) < \infty$, and by Borel-Cantelli only finitely many of the $A_n$ occur a.s.; estimating $X_t$ for $t \in [q^{n-1}, q^n]$ accordingly, and applying the same argument to $-X$, we find $\limsup_{t \to \infty} |X_t|/\sqrt{2t \log \log t} \le 1 + \varepsilon$ a.s. Since $\varepsilon > 0$ is arbitrary, the upper bound in (1) is proved.
Proof of (1), lower bound: Let $\varepsilon > 0$ and then take $q > 1$ very large, so that $(1 - \varepsilon/2)^2/(1 - q^{-1}) < 1$ and $4/\sqrt{q} < \varepsilon/2$. Let
$$B_n = \Big(X_{q^{n+1}} - X_{q^n} > (1 - \varepsilon)\sqrt{2q^{n+1} \log \log q^{n+1}}\Big).$$
Since Brownian motion has independent increments, the $B_n$ are independent.

By Lemma 6.3, we see that
$$P(B_n) \ge \exp\Big(-\frac{(1 - \varepsilon/2)^2\, 2q^{n+1} \log \log q^{n+1}}{2(q^{n+1} - q^n)}\Big) = \exp\Big(-(1 - \varepsilon/2)^2\, \frac{\log(n+1) + \log \log q}{1 - q^{-1}}\Big)$$
for $n$ large. (We absorbed the $c_1/\lambda$ term into the exponent by changing the $(1 - \varepsilon)$ to $(1 - \varepsilon/2)$.) So
$$\sum_n P(B_n) \ge c_1 \sum_n \frac{1}{(n+1)^{(1 - \varepsilon/2)^2/(1 - q^{-1})}} = \infty.$$
By the Borel-Cantelli lemma, with probability 1, $\omega$ is in infinitely many $B_n$. So with probability one, infinitely often
$$X_{q^{n+1}} - X_{q^n} > (1 - \varepsilon)\sqrt{2q^{n+1} \log \log q^{n+1}}.$$
However, we know from the upper bound that for $n$ large enough,
$$|X_{q^n}| \le 2\sqrt{2q^n \log \log q^n} \le \frac{4}{\sqrt{q}}\sqrt{2q^{n+1} \log \log q^{n+1}} \le (\varepsilon/2)\sqrt{2q^{n+1} \log \log q^{n+1}},$$
so infinitely often
$$X_{q^{n+1}} \ge (1 - 3\varepsilon/2)\sqrt{2q^{n+1} \log \log q^{n+1}}.$$
This proves the lim sup in (1) is greater than $(1 - 3\varepsilon/2)$ a.s. Since $\varepsilon$ is arbitrary, the lower bound follows.
Although the paths of Brownian motion are continuous, they are not differentiable.
In fact, we have
Theorem 7.4. The paths of Brownian motion are nowhere differentiable.
Proof. Let $M > 0$ and let
$$A_n = \{\exists s \in [0, 1] : |X_t - X_s| \le M|t - s| \text{ if } |t - s| \le 2/n\},$$
$$B_n = \{\exists k \le n : |X_{k/n} - X_{(k-1)/n}| \le 4M/n,\ |X_{(k+1)/n} - X_{k/n}| \le 4M/n,\ |X_{(k+2)/n} - X_{(k+1)/n}| \le 4M/n\}.$$
Note $A_1 \subset A_2 \subset \cdots$. It is not hard to check that $A_n \subset B_n$. To see this, if $\omega \in A_n$, then there exists an $s$ such that $|X_t - X_s| \le M|t - s|$ if $|t - s| \le 2/n$; let $k/n$ be the largest multiple of $1/n$ less than or equal to $s$. Then $|(k+2)/n - s| \le 2/n$ and $|(k+1)/n - s| \le 2/n$, and therefore
$$|X_{(k+2)/n} - X_{(k+1)/n}| \le |X_{(k+2)/n} - X_s| + |X_s - X_{(k+1)/n}| \le 2M/n + 2M/n = 4M/n.$$
The rest of the argument is similar.

The probability of $B_n$ is less than $n$ times the probability of the corresponding event with $k$ fixed, since the probability of a union is less than the sum of the probabilities. Using the independence and the stationarity of the increments,
$$P(B_n) \le n\big[P(|X_{1/n} - X_0| \le 4M/n)\big]^3 \le c_1 n\Big(\frac{4M}{\sqrt{n}}\Big)^3 \to 0$$
as $n \to \infty$. Since the $A_n$ increase and $P(A_n) \le P(B_n) \to 0$, we get $P(\cup_n A_n) = \lim_n P(A_n) = 0$. This implies that the (outer) probability that there exists $s < 1$ such that $X_s'$ exists and $|X_s'| \le M$ is 0. Since $M$ is arbitrary, this proves the theorem.
8. Construction of Brownian motion.
There are many different constructions, none of them easy. We give the Lévy-Ciesielski construction. First we need to recall a few facts about orthogonal expansions.

Suppose $\{e_i\}$ is a collection of $L^2$ functions on $[0, 1]$. The $e_i$ form a complete orthonormal system if $\|e_i\| = 1$, $\langle e_i, e_j \rangle = 0$ if $i \ne j$, and finite linear combinations of the $e_i$ form a dense subset of $L^2[0, 1]$. Let us write $\langle f, g \rangle$ for $\int_0^1 fg$ and $\|f\|$ for $\langle f, f \rangle^{1/2}$.
Proposition 8.1. (1) $\sum_{i=1}^n \langle f, e_i \rangle e_i$ converges in $L^2$ to $f$.
(2) $\|f\|^2 = \sum_{i=1}^\infty |\langle f, e_i \rangle|^2$.
(3) If $f, g$ are in $L^2$, then
$$\langle f, g \rangle = \sum_{i=1}^\infty \langle f, e_i \rangle \langle g, e_i \rangle.$$
Proof. We let $g_n = f - \sum_{i=1}^n \langle f, e_i \rangle e_i$. Since the $e_i$ are orthogonal, $\|\sum_{i=1}^n a_i e_i\|^2 = \sum_{i=1}^n a_i^2$ (the cross product terms are 0). With this in mind,
$$0 \le \|g_n\|^2 = \|f\|^2 + \Big\|\sum_{i=1}^n \langle f, e_i \rangle e_i\Big\|^2 - 2\Big\langle f, \sum_{i=1}^n \langle f, e_i \rangle e_i \Big\rangle = \|f\|^2 + \sum_{i=1}^n \langle f, e_i \rangle^2 - 2\sum_{i=1}^n \langle f, e_i \rangle^2 = \|f\|^2 - \sum_{i=1}^n \langle f, e_i \rangle^2.$$
Letting $n \to \infty$, we have Bessel's inequality:
$$\sum_{i=1}^\infty \langle f, e_i \rangle^2 \le \|f\|^2.$$
Now
$$\|g_n - g_m\|^2 = \sum_{i=m+1}^n \langle f, e_i \rangle^2.$$
This tends to 0 as $n, m \to \infty$, since by Bessel's inequality the sum is bounded by the tail of a convergent series. Since $L^2$ is complete, $g_n$ converges in $L^2$, say to $g$.

Fix $i$. As long as $n \ge i$,
$$\langle g_n, e_i \rangle = \langle f, e_i \rangle - \langle f, e_i \rangle = 0.$$
By Cauchy-Schwarz,
$$|\langle g, e_i \rangle - \langle g_n, e_i \rangle| = |\langle g - g_n, e_i \rangle| \le \|g - g_n\|\, \|e_i\| \to 0$$
as $n \to \infty$. We conclude $\langle g, e_i \rangle = 0$ for all $i$. Since the $e_i$ are complete, $g = 0$. This proves (1).
(2) follows since
$$\|g_n\|^2 = \|f\|^2 - \sum_{i=1}^n \langle f, e_i \rangle^2$$
and $\|g_n\| \to 0$.

(3) follows from (2) by polarization. That means we apply (2) to $f + g$, to $f$, and to $g$, and use the equality
$$\langle f, g \rangle = \tfrac{1}{2}\big(\|f + g\|^2 - \|f\|^2 - \|g\|^2\big).$$
To construct Brownian motion, we are going to use a particular complete orthonormal system, the Haar functions. It makes more sense to use double subscripts. Let $\varphi_{00}$ be identically 1. For $i$ a positive integer and $j = 1, 2, \ldots, 2^{i-1}$, we let $\varphi_{ij}(x)$ equal $2^{(i-1)/2}$ if $(2j-2)/2^i \le x < (2j-1)/2^i$, equal $-2^{(i-1)/2}$ if $(2j-1)/2^i \le x < 2j/2^i$, and equal 0 otherwise.

It is easy to check that $\langle \varphi_{ij}, \varphi_{kl} \rangle = 0$ unless $i = k$ and $j = l$, in which case it equals 1.

We now proceed to define a Brownian motion for $t \in [0, 1]$. Let
$$\psi_{ij}(t) = \int_0^t \varphi_{ij}(s)\, ds.$$
Let $Y_{ij}$ be i.i.d. mean zero variance 1 normal random variables. Let $V_0(t) = Y_{00}\psi_{00}(t)$ and $V_i(t) = \sum_{j=1}^{2^{i-1}} Y_{ij}\psi_{ij}(t)$. Let
$$X_t = \sum_{i=0}^\infty V_i(t).$$
Theorem 8.2. $X_t$, $t \in [0, 1]$, is a Brownian motion.

Proof. Let us first show the convergence of the sum. Let
$$A_i = \{\sup_t |V_i(t)| \ge i^{-2}\}.$$
For fixed $i$, the supports of any two of the $\psi_{ij}$ overlap in at most a single point. Also, $\psi_{ij}$ is bounded by $2^{-(i+1)/2}$. So
$$P(A_i) \le P\big(\exists j \le 2^{i-1} : |Y_{ij}|\psi_{ij}(t) \ge i^{-2} \text{ for some } t \in [0, 1]\big) \le 2^{i-1} P\big(|Y_{ij}| \ge 2^{(i+1)/2} i^{-2}\big) \le 2^{i-1} e^{-2^i/i^4}.$$
For the last line we used Lemma 6.3. This is summable in $i$, so $\sum_i P(A_i) < \infty$. By the Borel-Cantelli lemma, a.s. only finitely many of the $A_i$ occur, so the series $\sum_i V_i(t)$ converges uniformly in $t$ a.s.; in particular $X_t$ has continuous paths. Limits of jointly Gaussian random variables are jointly Gaussian, so the $X_t$ are jointly normal with mean zero, and for $s < t$,
$$E\, X_s X_t = \sum_{i,j} \psi_{ij}(s)\psi_{ij}(t) = \sum_{i,j} \langle 1_{[0,s]}, \varphi_{ij} \rangle \langle 1_{[0,t]}, \varphi_{ij} \rangle.$$
The last line follows because $E\, Y_{ij} Y_{kl} = 0$ unless $i = k$ and $j = l$. By Proposition 8.1, this last expression is $\langle 1_{[0,s]}, 1_{[0,t]} \rangle = s$ when $s < t$.

To define Brownian motion for all $t \ge 0$, we proceed as follows. Let $X_t, \widetilde X_t$ be two independent Brownian motions, defined for $t \le 1$. Let $Y_t = t\widetilde X_{1/t}$. Then define $Z_t$ to be $X_t$ if $t \le 1$ and to be $X_1 + (Y_t - Y_1)$ if $t$ is greater than 1. It is elementary to check that $Z_t$ is a Brownian motion defined for $t \ge 0$.
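The Lévy-Ciesielski construction translates directly into code: sum independent Gaussians against the Schauder functions $\psi_{ij}$, the integrals of the Haar functions. The sketch below truncates the series at a finite level (numpy assumed; the truncation level, grid, and path count are my choices) and checks the Brownian covariance at dyadic times, where the truncation is exact:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0.0, 1.0, 257)
levels, paths = 8, 4000

def psi(i, j, t):
    """psi_ij(t) = integral_0^t phi_ij(s) ds for the Haar function phi_ij."""
    if i == 0:
        return t                              # phi_00 is identically 1
    left = (2 * j - 2) / 2 ** i
    mid = (2 * j - 1) / 2 ** i
    right = (2 * j) / 2 ** i
    up = np.clip(t, left, mid) - left         # phi = +2^{(i-1)/2} here
    down = np.clip(t, mid, right) - mid       # phi = -2^{(i-1)/2} here
    return 2 ** ((i - 1) / 2) * (up - down)

X = rng.standard_normal((paths, 1)) * psi(0, 0, t)
for i in range(1, levels + 1):
    for j in range(1, 2 ** (i - 1) + 1):
        X = X + rng.standard_normal((paths, 1)) * psi(i, j, t)

# X_1 = Y_00 exactly (all psi_ij vanish at 1 for i >= 1), so Var X_1 = 1,
# and Cov(X_{1/4}, X_{1/2}) = 1/4
assert abs(X[:, -1].var() - 1.0) < 0.1
assert abs((X[:, 64] * X[:, 128]).mean() - 0.25) < 0.03
```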
9. Stopping times.
We start with a filtration $\mathcal{F}_t$ satisfying the usual conditions. A random variable $T : \Omega \to [0, \infty]$ is a stopping time if for all $t$, $(T < t) \in \mathcal{F}_t$.

Proposition 9.1. Suppose $\mathcal{F}_t$ satisfies the usual conditions.
(1) $T$ is a stopping time if and only if $(T \le t) \in \mathcal{F}_t$ for all $t$.
(2) $T \wedge t$ is a stopping time.
(3) If $S$ and $T$ are stopping times, so are $S \vee T$ and $S \wedge T$.
(4) If the $T_n$ are stopping times, so is $\sup_n T_n$.
(5) If the $T_n$ are stopping times, so is $\inf_n T_n$.
(6) If $t \ge 0$, $S + t$ is a stopping time if $S$ is.

Proof. We will just prove part of (1). Note $(T \le t) = \cap_{n \ge N} (T < t + 1/n) \in \cap_{n \ge N} \mathcal{F}_{t + 1/n}$ for all $N$. So $(T \le t) \in \mathcal{F}_{t+} = \mathcal{F}_t$.
For a Borel measurable set $A$, let $T_A = \inf\{t > 0 : X_t \in A\}$.

Proposition 9.2. Suppose $\mathcal{F}_t$ satisfies the usual conditions and $X_t$ has continuous paths.
(1) If $A$ is open, then $T_A$ is a stopping time.
(2) If $A$ is closed, then $T_A$ is a stopping time.

Proof. (1) By the continuity of the paths and the openness of $A$, $(T_A < t) = \cup_{q \in \mathbb{Q},\, q < t} (X_q \in A)$, which is in $\mathcal{F}_t$; so $T_A$ is a stopping time.
Proposition 9.3. Let $X_t$ be a Brownian motion, let $\mathcal{F}_t^0 = \sigma(X_r : r \le t)$, and let $\mathcal{F}_t$ be the completion of $\mathcal{F}_t^0$. Then $\mathcal{F}_t$ is right continuous.

Given a stopping time $T$, let
$$\mathcal{F}_T = \{A : A \cap (T \le t) \in \mathcal{F}_t \text{ for all } t > 0\}.$$

Proposition 9.4. (1) $\mathcal{F}_T$ is a $\sigma$-field.
(2) If $S \le T$, then $\mathcal{F}_S \subset \mathcal{F}_T$.
(3) If $\mathcal{F}_{T+} = \cap_{\varepsilon > 0} \mathcal{F}_{T+\varepsilon}$, then $\mathcal{F}_{T+} = \mathcal{F}_T$.
(4) If $X_t$ has right continuous paths, then $X_T$ is $\mathcal{F}_T$-measurable.

Proof. If $A \in \mathcal{F}_T$, then $A^c \cap (T \le t) = (T \le t) \setminus [A \cap (T \le t)] \in \mathcal{F}_t$, so $A^c \in \mathcal{F}_T$. The rest of (1) is easy. Suppose $A \in \mathcal{F}_S$ and $S \le T$. Then $A \cap (T \le t) = [A \cap (S \le t)] \cap (T \le t)$. We have $A \cap (S \le t) \in \mathcal{F}_t$ because $A \in \mathcal{F}_S$, while $(T \le t) \in \mathcal{F}_t$ because $T$ is a stopping time. Therefore $A \cap (T \le t) \in \mathcal{F}_t$, which proves (2).

For (3), if $A \in \mathcal{F}_{T+}$, then $A \cap (T + \varepsilon \le t) \in \mathcal{F}_t$ for all $t$, or $A \cap (T \le t) \in \mathcal{F}_{t+\varepsilon}$ for all $t$. This is true for all $\varepsilon$, so $A \cap (T \le t) \in \mathcal{F}_{t+} = \mathcal{F}_t$. This says $A \in \mathcal{F}_T$.
(4) Define $T_n(\omega)$ to be $k/2^n$ if $(k-1)/2^n \le T(\omega) < k/2^n$. It is easy to see that $T_n$ is a stopping time, and $T_n \downarrow T$. Note
$$(X_{T_n} \in B) = \cup_k \big[(X_{k/2^n} \in B) \cap (T_n = k/2^n)\big] \in \mathcal{F}_{T_n} \subset \mathcal{F}_{T + 1/2^n}.$$
So $X_{T_n}$ is $\mathcal{F}_{T + 1/2^n}$ measurable. If $n \ge m$, then $X_{T_n}$ is $\mathcal{F}_{T + 1/2^n} \subset \mathcal{F}_{T + 1/2^m}$ measurable. $X_{T_n} \to X_T$ by right continuity, so $X_T$ is $\mathcal{F}_{T + 1/2^m}$ measurable for each $m$. Therefore $X_T$ is $\mathcal{F}_{T+} = \mathcal{F}_T$ measurable.
Proposition 9.5. If $X_t$ is a Brownian motion and $T$ is a bounded stopping time, then $X_{T+t} - X_T$ is a mean 0 variance $t$ normal random variable and is independent of $\mathcal{F}_T$.

Proof. Let $T_n$ be defined as in part (4) of the preceding proof. Let $A \in \mathcal{F}_T \subset \mathcal{F}_{T_n}$. Let $f$ be continuous. Then
$$E[f(X_{T+t} - X_T); A] = \lim_n \sum_k E\big[f(X_{\frac{k}{2^n}+t} - X_{\frac{k}{2^n}}); A \cap (T_n = k/2^n)\big] = \lim_n \sum_k E\big[f(X_{\frac{k}{2^n}+t} - X_{\frac{k}{2^n}})\big] P(A \cap (T_n = k/2^n)) = E f(X_t)\, P(A).$$
Taking limits, this equation holds for all bounded and Borel measurable $f$.

If we take $A = \Omega$ and $f = 1_B$, we see that $X_{T+t} - X_T$ has the same distribution as $X_t$, which is that of a mean 0 variance $t$ normal random variable. If we let $A \in \mathcal{F}_T$ be arbitrary and $f = 1_B$, we see that
$$P(X_{T+t} - X_T \in B,\, A) = P(X_t \in B) P(A) = P(X_{T+t} - X_T \in B) P(A),$$
which implies that $X_{T+t} - X_T$ is independent of $\mathcal{F}_T$.
10. Square integrable martingales.

A martingale $M_t$ is square integrable if $M_t = E[M_\infty \mid \mathcal{F}_t]$ for some random variable $M_\infty$ with $E M_\infty^2 < \infty$.

Proposition 10.1. If $M_t$ is a square integrable martingale with right continuous paths and $T$ is a stopping time, then $E M_T = E M_0$.
Let $n \to \infty$. Since $|M_{T_n}| \le \sup_t |M_t|$, which is integrable since $M$ is square integrable, dominated convergence tells us that $E M_T = E M_0$.

In general, look at $T \wedge K$. Then by the above, $E M_{T \wedge K} = E M_0$. We dominate $|M_{T \wedge K}|$ by $\sup_t |M_t|$; so letting $K \to \infty$, dominated convergence shows that $E M_{T \wedge K} \to E M_T$.

Note that where we used the square integrability was in showing that $\sup_t |M_t|$ is integrable. So it would be enough to assume that $\sup_t |M_t|$ is integrable.

Proposition 10.2. If $M$ is square integrable and $S \le T$ are stopping times, then $E[M_T \mid \mathcal{F}_S] = M_S$.

Proof. Let $A \in \mathcal{F}_S$ and define $U(\omega) = S(\omega)1_A(\omega) + T(\omega)1_{A^c}(\omega)$. Thus $U$ is equal to $S$ if $\omega \in A$ and otherwise equal to $T$. Since $(U \le t) = [(S \le t) \cap A] \cup [(T \le t) \cap A^c]$ is in $\mathcal{F}_t$, using the fact that $A \in \mathcal{F}_S \subset \mathcal{F}_T$, we see that $U$ is a stopping time. By Proposition 10.1, $E M_0 = E M_U = E[M_S; A] + E[M_T; A^c]$ and $E M_0 = E M_T = E[M_T; A] + E[M_T; A^c]$. It follows that $E[M_S; A] = E[M_T; A]$, which is what we needed to prove.
Proposition 10.3. Suppose $M_t$ is a square integrable martingale and $S \le T$ are stopping times. Then
$$E[(M_T - M_S)^2 \mid \mathcal{F}_S] = E[M_T^2 - M_S^2 \mid \mathcal{F}_S].$$

Proof. We have
$$E[(M_T - M_S)^2 \mid \mathcal{F}_S] = E[M_T^2 \mid \mathcal{F}_S] - 2M_S E[M_T \mid \mathcal{F}_S] + M_S^2 = E[M_T^2 \mid \mathcal{F}_S] - M_S^2,$$
which gives us what we want.

If we take expectations in the above, we obtain
$$E(M_T - M_S)^2 = E M_T^2 - E M_S^2.$$
Theorem 10.4. Suppose M0 = 0, Mt is a continuous martingale, and the paths ofMtare of bounded variation. Then Mt0 for allt.
Proof. Let t0 be fixed and let At denote the total variation up to time t of Mt. If
TN = inf{t : At N}, we look at MTNtt0 . This is also a continuous martingale withpaths of bounded variation, and if this is identically zero, then letting N andt0 ,we obtain our result. So it suffices to suppose the total variation ofM
t is bounded by N
a.s.
Let ε > 0, S_0 = 0, and S_{i+1} = inf{t > S_i : |M_t − M_{S_i}| ≥ ε} ∧ t_0. Since M_t is continuous, |M_{S_{i+1}} − M_{S_i}| ≤ ε. We write

E M_{t_0}² = E Σ_{i=0}^∞ (M_{S_{i+1}}² − M_{S_i}²) = E Σ_{i=0}^∞ (M_{S_{i+1}} − M_{S_i})² ≤ ε E Σ_{i=0}^∞ |M_{S_{i+1}} − M_{S_i}| ≤ ε N.

Since ε is arbitrary, E M_{t_0}² = 0. By Doob's inequality, E(sup_{s ≤ t_0} M_s²) = 0. So M is identically 0 up to time t_0.
Definition 10.5. A continuous square integrable martingale M_t has quadratic variation ⟨M⟩_t (sometimes written ⟨M, M⟩_t) if M_t² − ⟨M⟩_t is a martingale, where ⟨M⟩_t is a continuous nondecreasing process with ⟨M⟩_0 = 0.

In the case of (stopped) Brownian motion, the quadratic variation is just ⟨B⟩_t = t. We will show existence and uniqueness of ⟨M⟩_t. Uniqueness is easy.
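The fact that ⟨B⟩_t = t for Brownian motion can be checked numerically: the sum of squared increments over a fine partition of [0, t] should be close to t. Here is a minimal NumPy sketch (the grid size and seed are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 2**16
# Brownian increments over a dyadic partition of [0, t].
dB = rng.normal(0.0, np.sqrt(t / n), size=n)

# The sum of squared increments approximates the quadratic variation <B>_t = t.
qv = float((dB**2).sum())
```

As n grows, qv concentrates around t (its variance is 2t²/n).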
Proposition 10.6. There is at most one process ⟨M⟩_t satisfying Definition 10.5.

Proof. If M_t² − A_t and M_t² − B_t are both martingales, then A_t − B_t is a martingale. If A_0 = B_0 = 0 and both A_t and B_t are continuous and nondecreasing, then A_t − B_t is continuous with paths of bounded variation. So by Theorem 10.4, A_t − B_t is identically 0.
The existence is a bit harder and is based on the following theorem, known as the Doob-Meyer decomposition. In most applications one can write ⟨M⟩_t explicitly, and it is not necessary to appeal to the Doob-Meyer decomposition.
Theorem 10.7. SupposeZtis a continuous submartingale. Then there exists a continuous
martingale Mt and a continuous nondecreasing process At with A0 = 0 such that Zt =
Mt+ At.
We give the proof in the Appendix (Section 10A), which follows this section.
The existence of ⟨M⟩_t follows immediately.

Theorem 10.8. If M_t is a continuous square integrable martingale, there exists a continuous nondecreasing process ⟨M⟩_t such that M_t² − ⟨M⟩_t is a martingale.

Proof. By the conditional form of Jensen's inequality with g(x) = x², we see that M_t² is a submartingale. Apply the Doob-Meyer decomposition to Z_t = M_t² and set ⟨M⟩_t = A_t.
If M and N are two square integrable martingales, we define ⟨M, N⟩_t by polarization:

⟨M, N⟩_t = ½ [⟨M + N⟩_t − ⟨M⟩_t − ⟨N⟩_t].

10A. Appendix to Section 10.
The following two lemmas are also very useful for applications other than the existence proof.

Lemma 10A.1. Suppose A_k is an increasing process with A_0 = 0, A_k is F_{k−1} measurable, and

E[A_∞ − A_k | F_k] ≤ N,    (10A.1)

for k = 0, 1, 2, . . .. Then E A_∞² ≤ 2N².

Proof. Let A_k be any process of bounded variation with A_0 = 0, and let a_k = A_{k+1} − A_k. Some algebra shows that

A_∞² = 2 Σ_{k=0}^∞ (A_∞ − A_k) a_k − Σ_{k=0}^∞ a_k².    (10A.2)

If A_k is also increasing, so that a_k ≥ 0 for all k, then

E A_∞² = 2 E Σ_{k=0}^∞ E[A_∞ − A_k | F_k] a_k − E Σ_{k=0}^∞ a_k² ≤ 2N E Σ_{k=0}^∞ a_k = 2N E A_∞.    (10A.3)

Since E[A_∞ − A_0 | F_0] ≤ N and A_0 = 0, taking expectations shows E A_∞ ≤ N. Substituting in (10A.3) gives the assertion.
Lemma 10A.2. Suppose A^{(1)}_k and A^{(2)}_k are increasing processes satisfying the hypotheses of Lemma 10A.1. Let B_k = A^{(1)}_k − A^{(2)}_k, and suppose there exists W ≥ 0 with E W² < ∞ such that |E[B_∞ − B_k | F_k]| ≤ E[W | F_k] for all k. Then E sup_k B_k² ≤ c E W² + c N (E W²)^{1/2}.

Proof. The Cauchy-Schwarz inequality and the bounds for E(A^{(i)}_∞)² from Lemma 10A.1 show E B_∞² ≤ c N (E W²)^{1/2}.

Regarding the L² bound on the supremum of the B_k's, let M_k = E[B_∞ | F_k], N_k = E[W | F_k], and X_k = M_k − B_k. We have

|X_k| = |E[B_∞ − B_k | F_k]| ≤ N_k,

so by Doob's inequality,

E sup_k X_k² ≤ E sup_k N_k² ≤ c E W².

Again by Doob's inequality,

E sup_k M_k² ≤ c E M_∞² = c E B_∞².

Since sup_k |B_k| ≤ sup_k |X_k| + sup_k |M_k|, we therefore have

E sup_k B_k² ≤ c E W² + c N (E W²)^{1/2}.
Here is the Doob-Meyer decomposition in the discrete-time case.

Proposition 10A.3. If Z_k is a discrete-time supermartingale, there exists a martingale M_k and an increasing process A_k with A_0 = 0 such that A_{k+1} is F_k measurable and Z_k = M_k − A_k.

Proof. Let a_k = E[Z_k − Z_{k+1} | F_k]. Since Z is a supermartingale, the a_k are nonnegative and clearly F_k measurable. Let A_k = Σ_{j=0}^{k−1} a_j. It is straightforward to check that Z_k + A_k is a martingale.
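Proposition 10A.3 can be illustrated concretely. For a symmetric simple random walk S_k, the process Z_k = −S_k² is a supermartingale with a_k = E[Z_k − Z_{k+1} | F_k] = 1, so the compensator is A_k = k and M_k = Z_k + A_k = k − S_k² should be a martingale started at 0. A small simulation (sample sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 20_000, 50

# Symmetric simple random walk; Z_k = -S_k^2 is a supermartingale.
# Its Doob-Meyer compensator is A_k = k, so M_k = k - S_k^2 is a martingale
# with M_0 = 0; in particular E M_k = 0 for every k.
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
S = steps.cumsum(axis=1)
M = np.arange(1, n_steps + 1) - S**2
mean_M = float(M[:, -1].mean())
```

The sample mean of M at the final step should be close to 0 (here the check E S_k² = k is exact in expectation, so the only error is Monte Carlo noise).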
Proof of Theorem 10.7. Since Z_t is continuous, it suffices to show that Z_{t∧T_N} has the desired decomposition for each N, where T_N = inf{t : |Z_t| > N}, because we can then let N → ∞ and use the uniqueness result. So we may assume Z is bounded. Also, if we have the decomposition up to any fixed time M, we can let M go to ∞ and get our existence result. So we may assume that Z_t is constant for t ≥ M, and hence that almost surely the paths of Z_t are uniformly continuous.
Fix M and n. Let F^n_k = F_{k/2^n}. Construct A^n_k using Proposition 10A.3. Let A^n_t = A^n_k and F^n_t = F^n_k if (k − 1)/2^n < t ≤ k/2^n.

Let W(δ) = sup_{s ≤ M, s ≤ t ≤ s+δ} |Z_t − Z_s|. Since Z is bounded, so is W(δ). Since the paths of Z are uniformly continuous, W(δ) → 0 a.s. as δ → 0; hence W(δ) → 0 in L².

The first thing we want to show is that A^n_t converges in L² as n → ∞, uniformly over t. We will do that by showing E sup_t |A^m_t − A^n_t|² → 0 as n, m → ∞. Then, since Cauchy sequences converge, the A^n_t converge. We will estimate the L² norm of the supremum of
the difference using Lemma 10A.2. Suppose m ≥ n. Since A^n_t and A^m_t are constant over intervals (k/2^m, (k+1)/2^m], the supremum of the difference will take place at some k/2^m. Fix t = k/2^m for some k, and we will bound the difference of the conditional expectations with respect to F^m_t. Let u be the smallest multiple of 2^{−n} bigger than or equal to t. We have by Proposition 10A.3

E[A^m_∞ − A^m_t | F^m_t] = E[A^m_∞ − A^m_k | F_{k/2^m}] = E[Z_t − Z_∞ | F_t].

On the other hand,

E[A^n_∞ − A^n_t | F^m_t] = E[A^n_∞ − A^n_u | F_t] = E[ E[A^n_∞ − A^n_u | F_u] | F_t ] = E[ E[Z_u − Z_∞ | F_u] | F_t ] = E[Z_u − Z_∞ | F_t],

using the definition of A^n_k. Then the difference of the conditional expectations is bounded:

| E[A^m_∞ − A^m_t | F_t] − E[A^n_∞ − A^n_t | F_t] | ≤ E[ |Z_t − Z_u| | F_t ] ≤ E[ W(2^{−n}) | F_t ].
Using Lemma 10A.2 shows that E sup_t |A^m_t − A^n_t|² tends to 0 as n, m → ∞.

Next we want to show the limit is continuous. The jumps of A^n_t are

ΔA^n_t = E[Z_{(k−1)/2^n} − Z_{k/2^n} | F_{(k−1)/2^n}],  t = k/2^n,

which are bounded by E[W(2^{−n}) | F_{(k−1)/2^n}]. So

E sup_t (ΔA^n_t)² ≤ E sup_k ( E[W(2^{−n}) | F_{(k−1)/2^n}] )² ≤ c E[W(2^{−n})]² → 0,

by Doob's inequality, since E[W(2^{−n}) | F_{(k−1)/2^n}] is a martingale in k. By looking at a suitable subsequence n_j, sup_t |ΔA^{n_j}_t| → 0 a.s., so the limit is continuous.
Finally, we show that if A_t is the limit of the A^n_t, then Z_t + A_t is a martingale. Since Z_t and A_t are both continuous and square integrable, it suffices to show that for dyadic rationals s < t and B ∈ F_s,

E[Z_t + A_t; B] = E[Z_s + A_s; B].

We know this for each Z_t + A^n_t, and the result follows from a passage to the limit, using the Cauchy-Schwarz inequality.
11. Stochastic integrals.

Let us consider a very special case first. Suppose f is continuous and deterministic (i.e., does not depend on ω). The integral ∫ f(s) dB(s) does not make sense as a Lebesgue-Stieltjes integral because B(s) is not differentiable. Suppose we take a Riemann sum approximation Σ_i f(i/2^n)[B((i+1)/2^n) − B(i/2^n)]. If we take the difference of two successive approximations, we have terms like

Σ_{i odd} [f(i/2^{n+1}) − f((i+1)/2^{n+1})][B((i+1)/2^{n+1}) − B(i/2^{n+1})].

This has mean zero. By independence, the second moment is

Σ [f(i/2^{n+1}) − f((i+1)/2^{n+1})]² (1/2^{n+1}).

This will be small if f is continuous. So by taking a limit in L² we obtain a nontrivial limit.
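The L² limit can be seen numerically for f(s) = s on [0, 1]: the left-endpoint Riemann sums are Gaussian with mean 0 and variance close to ∫_0^1 s² ds = 1/3. A minimal sketch (grid and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n = 10_000, 512
t = np.linspace(0.0, 1.0, n + 1)
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=(n_paths, n))

# Left-endpoint Riemann sums for \int_0^1 f(s) dB(s) with f(s) = s.
I = (t[:-1] * dB).sum(axis=1)

# The limit is Gaussian with mean 0 and variance \int_0^1 s^2 ds = 1/3.
m, v = float(I.mean()), float(I.var())
```

Refining the partition changes the sums only by terms that are small in L², which is exactly the point of the computation above.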
We now turn to the general case. Let M_t be a square integrable continuous martingale and suppose the filtration F_t satisfies the usual conditions. We will let P be the σ-field on Ω × [0, ∞) generated by all processes of the form H_s(ω) = K(ω) 1_{(a,b]}(s), where K is F_a measurable and bounded. P is called the predictable σ-field. If H_s is adapted and has continuous paths, then observe that

H_s(ω) = lim_{n→∞} Σ_{i=1}^∞ H_{(i−1)/2^n}(ω) 1_{((i−1)/2^n, i/2^n]}(s),

and therefore such H_s are examples of predictable processes.
We will construct ∫_0^t H_s dM_s for all H that are P-measurable with E ∫_0^{t_0} H_s² d⟨M⟩_s < ∞.

Let T_N = inf{t : |M_t| > N or ⟨M⟩_t > N or ∫_0^t |dA_s| > N}. If we obtain Itô's formula for X_{t∧T_N}, we can let N → ∞. Since Itô's formula is a path by path result, that will suffice. So we may assume M_t, ⟨M⟩_t, A_t, and ∫_0^t |dA_s| are all bounded by N. In particular, X_t is bounded, and so we may assume that f, f′, and f″ are all bounded.

Let t_0 > 0, ε > 0, S_0 = 0, and define

S_{i+1} = inf{ t > S_i : |M_t − M_{S_i}| > ε or ⟨M⟩_t − ⟨M⟩_{S_i} > ε or ∫_{S_i}^t |dA_s| > ε or t − S_i > ε } ∧ t_0.
Note S_i → ∞ as i → ∞ by the continuity of paths.

The key idea behind Itô's formula is Taylor's theorem. So we write

f(X_t) − f(X_0) = Σ_{i=0}^∞ [f(X_{S_{i+1}}) − f(X_{S_i})] = Σ_i f′(X_{S_i})(X_{S_{i+1}} − X_{S_i}) + ½ Σ_i f″(X_{S_i})(X_{S_{i+1}} − X_{S_i})² + Σ_i R_i,

where R_i is the remainder term. We have |R_i| ≤ δ(ε)(X_{S_{i+1}} − X_{S_i})², where δ(ε) → 0 as ε → 0.
Let us first look at the terms with f′ in them. Let H_s = f′(X_{S_i}) if S_i ≤ s < S_{i+1}.

If t_1 > 0 is rational, then there exist rationals r_1, . . . , r_n such that r_1 f_1 + · · · + r_n f_n ≥ −ε, or (r_1 + ε) f_1 + r_2 f_2 + · · · + r_n f_n ≥ 0. Since ω ∉ N_1, (r_1 + ε) g_1 + r_2 g_2 + · · · + r_n g_n ≥ 0. Letting ε → 0, it follows that t_1 g_1 + · · · + t_n g_n ≥ 0. Since L(f_1) = 1, L can be extended to a positive linear functional on the closure of the collection of finite linear combinations of the f_j. Any uniformly continuous function on Ω can be extended uniquely to Ω̄, so L can be considered as a
positive linear functional on C(Ω̄). By the Riesz representation theorem, there exists a probability measure Q(ω, ·) such that L(f) = ∫ f(ω′) Q(ω, dω′).

The mapping ω → L(f) is measurable with respect to F for each finite linear combination of the f_j, hence for all uniformly continuous functions on Ω by a limit argument. If B ∈ G,

∫_B [ ∫ (t_1 f_1 + · · · + t_n f_n)(ω′) Q(ω, dω′) ] P(dω) = ∫_B (t_1 g_1 + · · · + t_n g_n)(ω) P(dω) = ∫_B (t_1 f_1 + · · · + t_n f_n)(ω) P(dω),
or ∫ f(ω′) Q(ω, dω′) is a version of E[f | G] if f is a finite linear combination of the f_j. By a limit argument, the same is true for all f of the form f = 1_A with A ∈ F.

Let G^n_i be a sequence of balls of radius 1/n (with respect to the metric of Ω̄) contained in Ω̄ and covering Ω. Choose i_n such that P(∪_{i ≤ i_n} G^n_i) > 1 − 2^{−n}/n. The set H_n = ∩_{m ≥ n} ∪_{i ≤ i_m} G^m_i is totally bounded; let K_n be the closure of H_n in Ω̄. Since Ω̄ is complete, K_n is complete and totally bounded, and hence compact, and P(K_n) ≥ 1 − 1/n. So

E[Q(·, ∪_n K_n); N_1^c] ≥ E[Q(·, K_n); N_1^c] = P(K_n) → 1,

or Q(ω, ∪_n K_n) = 1 a.s. Let N_2 be the null set for which this fails. Thus for ω ∉ (N_1 ∪ N_2), we see that Q(ω, dω′) is a probability measure on Ω. For ω ∈ N_1 ∪ N_2, let Q(ω, ·) = P(·). This Q is the desired regular conditional probability.
Finally, we have

Proposition 16.7. Let X_t be a solution to (16.2) and T a bounded stopping time. Let Q_T be the regular conditional probability for E^x[· | F_T]. There exists a set B with P(B) = 0 such that if ω ∉ B, then under Q_T(ω, ·), X_{T+t} is a solution to (16.2) starting from X_T(ω).

Proof. The principal step in the proof is to show that if W̃_t = W_{T+t} − W_T, then under Q_T(ω, ·) the process W̃ is a Brownian motion, except for a P-null set of ω's. Q_T is a probability measure on Ω, so W̃ is continuous. Let t_1 < · · ·
17. Finance models.

Let B_t be a Brownian motion. We assume that S_t is the price of a stock or other risky security, and that

dS_t = μ S_t dt + σ S_t dB_t.

This is plausible, since dS_t/S_t = μ dt + σ dB_t, or the relative change in price is a Brownian motion with drift. The solution to this SDE is

S_t = S_0 e^{σ B_t + (μ − σ²/2) t};

this can be checked by applying Itô's formula.
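The solution formula is easy to sanity-check by simulation: sampling B_t directly and forming S_t gives E S_t ≈ S_0 e^{μt}. A minimal sketch with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
S0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0
B_t = rng.normal(0.0, np.sqrt(t), size=200_000)

# Exact solution of dS = mu*S dt + sigma*S dB, from Ito's formula:
S_t = S0 * np.exp(sigma * B_t + (mu - 0.5 * sigma**2) * t)

# Sanity check: E S_t = S0 * exp(mu * t).
mean_S = float(S_t.mean())
expected = float(S0 * np.exp(mu * t))
```

Note the −σ²/2 correction in the exponent: without it the sample mean would overshoot S_0 e^{μt}.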
We also assume the existence of a bond β_t, which is assumed to be riskless, and the equation for the bond is

dβ_t = r β_t dt,

which implies

β_t = β_0 e^{rt}.
Suppose at time t one buys a shares of stock. The cost is aS_t. If one sells the shares at time t + h, one receives aS_{t+h}, and the net gain is a(S_{t+h} − S_t). One can also sell short, i.e., let a be negative. The formula for the gain is the same.

Suppose at time t_i one holds a_i shares, up until time t_{i+1}. The total net gain over the whole period is Σ_{i=0}^{n−1} a_i (S_{t_{i+1}} − S_{t_i}). This is the same as the stochastic integral ∫_0^t a_s dS_s if a_s equals a_i when t_i ≤ s < t_{i+1}.

One should allow a_i to depend on the entire past F_{t_i}. Idealizing, one allows continuous trading, and if a_t is adapted, the net gain through trading the stock is ∫_0^t a_s dS_s. One has a similar net gain of ∫_0^t b_s dβ_s when trading bonds.
The pair (a, b) is called a trading strategy. The strategy is self-financing if

V_t = a_t S_t + b_t β_t = V_0 + ∫_0^t a_s dS_s + ∫_0^t b_s dβ_s

for all t. We assume there are no transaction costs (i.e., no brokerage fees).
There exist options, called European calls, where at time 0 one buys an option with an exercise price of K at time T. This gives the buyer the option of buying the stock at the fixed time T at price K.

What is the option worth? At time T, if S_T ≤ K, the option is worthless. If S_T > K, one can use the option to buy the stock at price K and immediately sell it at price S_T, to make a profit of S_T − K. So the value of the option at time T is (S_T − K)^+. An important question is: how much should the option sell for? What is a fair price for the option at time 0?
There are a myriad of types of options. The American call is like the European call, except that one can buy at price K at any time in the interval [0, T]. The European put gives the buyer the option to sell the stock at price K at time T.
18. Black-Scholes formula.

We will give two derivations of the Black-Scholes formula.

1) First of all, r may be considered the rate of inflation. So the value of the option (S_T − K)^+ in today's dollars is C = e^{−rT}(S_T − K)^+.

In this first derivation we work in today's dollars. Therefore the present day value of the stock is P_t = e^{−rt} S_t. Note P_0 = S_0, and the value of our option at time T is then (P_T − e^{−rT} K)^+. By the Itô product formula,

dP_t = e^{−rt} dS_t − r e^{−rt} S_t dt = σ e^{−rt} S_t dB_t + μ e^{−rt} S_t dt − r e^{−rt} S_t dt = σ P_t dB_t + (μ − r) P_t dt.

As in the previous section, the solution to this SDE is

P_t = P_0 e^{σ B_t + (μ − r − σ²/2) t}.
Also, the net gain or loss in present day dollars when holding a_s shares of stock at time s is ∫_0^t a_s dP_s.
Define Q by dQ/dP = M_T = exp( −((μ−r)/σ) B_T − ((μ−r)²/(2σ²)) T ). Under Q, B_t − ∫_0^t (1/M_s) d⟨B, M⟩_s is a martingale with the same quadratic variation as that of B_t, namely t. As we know,

M_t = 1 + ∫_0^t M_s ( −(μ−r)/σ ) dB_s = 1 − ∫_0^t ((μ−r)/σ) M_s dB_s.

So under Q, B̃_t = B_t + ((μ−r)/σ) t is a martingale. By Levy's theorem, B̃_t is a Brownian motion under Q.
Now

dP_t = σ P_t dB_t + (μ − r) P_t dt = σ P_t [ dB_t + ((μ − r)/σ) dt ] = σ P_t dB̃_t.

Therefore under Q, P_t is a martingale. The solution to the SDE dP_t = σ P_t dB̃_t is P_t = P_0 e^{σ B̃_t − (σ²/2) t}, so P_t and B̃_t have the same filtration. C is F_T measurable. By Theorem 13.3, C can be written

C = E_Q C + ∫_0^T A_s dB̃_s = E_Q C + ∫_0^T D_s dP_s,
where D_s = A_s/(σ P_s).

Therefore, by buying and selling the stock S_t with the trading strategy D, one can obtain C − E_Q C dollars at time T. Or, starting with E_Q C dollars and buying and selling stock, one can get the identical output as C (a.s.). To avoid riskless profits, C must sell for E_Q C.
To find E_Q C, we write

E_Q C = E_Q[(S_0 e^{σ B̃_T − σ²T/2} − e^{−rT} K)^+] = (1/√(2πT)) ∫ (S_0 e^{σy − σ²T/2} − e^{−rT} K)^+ e^{−y²/2T} dy.

After some calculus, this reduces to

x Φ(g(x, T)) − K e^{−rT} Φ(h(x, T)),

where Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−y²/2} dy, x = S_0 = P_0,

g(x, T) = [log(x/K) + (r + σ²/2) T] / (σ√T),   h(x, T) = g(x, T) − σ√T.
2) Second approach. In this approach we use the actual values of the securities, not the present day values. Let V_t be the value of the portfolio and assume V_t = f(S_t, T − t) for all t, where f is some sufficiently smooth function. We also want V_T = (S_T − K)^+.

Recall the multivariate version of Itô's formula (12.1). We apply this with d = 2 and X_t = (S_t, T − t). From the SDE that S_t solves, d⟨X¹⟩_t = σ² S_t² dt, d⟨X²⟩_t = 0 (since T − t is of bounded variation and hence has no martingale part), and d⟨X¹, X²⟩_t = 0. Also, dX²_t = −dt. Then

V_t − V_0 = f(S_t, T − t) − f(S_0, T) = ∫_0^t f_x(S_u, T − u) dS_u − ∫_0^t f_s(S_u, T − u) du + ½ ∫_0^t σ² S_u² f_{xx}(S_u, T − u) du.
On the other hand,

V_t − V_0 = ∫_0^t a_u dS_u + ∫_0^t b_u dβ_u,

where we must have b_t = (V_t − a_t S_t)/β_t. Also, recall β_t = β_0 e^{rt}. We must therefore have

a_t = f_x(S_t, T − t)    (18.1)
and

r[f(S_t, T − t) − S_t f_x(S_t, T − t)] = −f_s(S_t, T − t) + ½ σ² S_t² f_{xx}(S_t, T − t)    (18.2)

for all t and all S_t. (18.2) leads to the parabolic PDE

f_s = ½ σ² x² f_{xx} + r x f_x − r f,  (x, s) ∈ (0, ∞) × [0, T),

and

f(x, 0) = (x − K)^+.

Solving this equation for f, f(x, T) is what V_0 should be, i.e., the cost of setting up the equivalent portfolio. Equation (18.1) shows what the trading strategy should be.
If one changes the model, one usually can't solve the PDE exactly, but one then uses numerical methods.
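As a sketch of such a numerical method — an explicit finite-difference scheme for the PDE above, with grid parameters chosen ad hoc for stability — one can march f forward in s from the payoff and read off the value at the initial stock price:

```python
import numpy as np
from math import exp

# Solve f_s = (1/2) sigma^2 x^2 f_xx + r x f_x - r f with f(x, 0) = (x - K)^+
# by explicit time stepping; dt is small enough for stability on this grid.
sigma, r, K, T = 0.2, 0.05, 100.0, 1.0
x_max, nx, nt = 300.0, 151, 2000
x = np.linspace(0.0, x_max, nx)
dx, dt = x[1] - x[0], T / nt

f = np.maximum(x - K, 0.0)
for step in range(1, nt + 1):
    f_xx = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    f_x = (f[2:] - f[:-2]) / (2.0 * dx)
    f[1:-1] += dt * (0.5 * sigma**2 * x[1:-1]**2 * f_xx
                     + r * x[1:-1] * f_x - r * f[1:-1])
    f[0] = 0.0                               # worthless if the stock price is 0
    f[-1] = x_max - K * exp(-r * step * dt)  # deep in the money: x - K e^{-rs}
price_at_100 = float(np.interp(100.0, x, f))
```

With these (made-up) parameters the value at x = 100 should land close to the closed-form Black-Scholes price from the first derivation; refining dx and dt tightens the agreement.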
Let us now briefly discuss American calls. Recall that these are ones where the holder can buy the security at price K at any time up to time T.
Since the holder of an American call can always wait up to time T, which is equiv-
alent to having a European call, the value of an American call should always be at least
as large as the value of the corresponding European call.
Suppose one exercises an American call early, at some time t < T. If S_T > K, then at time T one has one share of stock, for which one paid K, and one has a profit of S_T − K. However, because one purchased the stock before time T, one lost the interest on K (roughly K e^{r(T−t)} − K) that would have accrued by waiting to exercise the option. (We are supposing r ≥ 0.) So in this case it would have been better to wait until time T to exercise the option.

On the other hand, if S_T < K, exercising the option early would mean that one has lost |S_T − K|, whereas for the European option, one would not have exercised at all, and lost nothing (other than the price of the option).
So in either case, exercising early gains nothing, hence the price of an American call
should be the same as that of a European call. This analysis breaks down for American
puts, because in this case one gains by selling early: one can earn interest on the money
received.
19. The fundamental theorem of finance.
In the preceding section, we showed there was a probability measure under which
P_t was a martingale. This is true very generally. Let S_t be the price of a security in present day dollars. We will suppose S_t is a continuous semimartingale, and can be written S_t = M_t + A_t.
The NFLVR condition (no free lunch with vanishing risk) is that there do not exist a fixed time T, ε, b > 0, and H_n (P-measurable with ∫_0^T |H_n(s)| |dA_s| + ∫_0^T H_n(s)² d⟨M⟩_s < ∞ a.s.) such that

∫_0^t H_n(s) dS_s ≥ −1/n  for all t ≤ T, a.s.,

for all n, and

P( ∫_0^T H_n(s) dS_s > b ) > ε.

Here T, b, ε do not depend on n. The condition rules out strategies that, with probability at least ε, make a profit of b while risking a loss no larger than 1/n.
Q is an equivalent martingale measure if Q is a probability measure, Q is equivalent to P (which means they have the same null sets), and S_t is a local martingale under Q.

Theorem 19.1. If S_t is a continuous semimartingale and the NFLVR condition holds, then there exists an equivalent martingale measure Q.
Proof. Let us prove first of all that dA_t is absolutely continuous with respect to d⟨M⟩_t. If not, there exist a set B ⊂ Ω × [0, ∞) and a fixed time T such that ∫_0^T 1_B dA_s is almost surely nonnegative, is strictly positive with positive probability, and ∫_0^T 1_B d⟨M⟩_s = 0. Recalling the proofs of the Lebesgue decomposition theorem and the Hahn decomposition theorem, we see that we can take B to be predictable. Then ∫_0^T 1_B dS_s = ∫_0^T 1_B dM_s + ∫_0^T 1_B dA_s. The stochastic integral term is 0 because ∫_0^T 1_B² d⟨M⟩_s = 0. We then have the NFLVR condition violated with H_n = 1_B. Hence absolute continuity is established, and A_t = ∫_0^t h_s d⟨M⟩_s for some predictable h.

Our next goal is to show
that ∫_0^t h_s² d⟨M⟩_s < ∞ a.s. for each t. By Doob's inequality,

P( sup_t |∫_0^t H_s dM_s| ≥ 1/n ) ≤ 4n² E ∫_0^T H_s² d⟨M⟩_s ≤ 4n² · (1/n⁴) = 4/n².

Let R_2 = inf{t : |∫_0^t H_s dM_s| ≥ 1/n} and let H̃_t = H_t 1_{[0,R_2]}. We then have P(R_2 < R_1) ≤ 4/n², ∫_0^t H̃_s dS_s ≥ −1/n for all t a.s., and P(∫_0^T H̃_s dS_s ≥ ½) ≥ P(R_1 < U < T) − 4/n². We do this for each n, and thus obtain a contradiction to the NFLVR condition.

Case (2) is similar: choose α such that ∫_{U+α}^{U+1} h_s² d⟨M⟩_s ≥ n⁴ with positive probability, let H_t = (h_t/n⁴) 1_{[U+α, U+1]}, and proceed as above.

We thus have ∫_0^t h_s² d⟨M⟩_s < ∞ a.s.
and define Q on F_1 by dQ/dP = M_1. If N_t = ∫_0^t (−H_s) dB_s, then dM_t = M_t dN_t = M_t(−H_t) dB_t. Hence d⟨M, B⟩_t = −M_t H_t dt, and by the Girsanov theorem, B_t − D_t is a martingale under Q, where

D_t = ∫_0^t (1/M_s) d⟨M, B⟩_s = ∫_0^t (1/M_s) M_s (−H_s) ds = −∫_0^t H_s ds.

Therefore under Q, S_t = B_t + ∫_0^t H_s ds is a martingale. This example shows that if the Radon-Nikodym derivative of dA_t with respect to d⟨M⟩_t is not too bad, we can apply the Girsanov theorem.
20. Multidimensional Brownian motion.

X_t = (X¹_t, . . . , X^d_t) is a d-dimensional Brownian motion if the components are independent one-dimensional Brownian motions (not necessarily started at 0). X_t starts at x if X_0 = x a.s.

Instead of the covariance being a number, now it is a matrix: [Cov(X_s, X_t)]_{ij} = Cov(X^i_s, X^j_t). So Cov(X_s, X_t) = E X_s (X_t)^t = (s ∧ t) I, where I is the identity and the superscript t denotes transpose.
Lemma 20.1. If X_t is a Brownian motion started at x and A is an orthogonal matrix (i.e., A^t = A^{−1}), then AX_t is a Brownian motion started at Ax.

Proof. Note Cov(AX_s, AX_t) = E AX_s X_t^t A^t = (s ∧ t) AA^t = (s ∧ t) I, and the result follows easily from this.
In particular, if we look at Brownian motion started at 0, then all rotations of the
Brownian motion leave the law unchanged. It follows that if we look at the law of Brownian
motion on first exiting the ball about 0 of radius r , it must be rotationally invariant, and
hence the law of the first exit location is uniform on the surface of the ball.
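This rotational invariance of the exit law can be checked by simulation in d = 2: discretized Brownian paths from 0 exit the unit disk at angles that are uniform, so, for instance, about half the exit points lie in the upper half plane. A rough sketch (step size and path count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, dt = 4000, 1e-3
pos = np.zeros((n_paths, 2))
alive = np.ones(n_paths, dtype=bool)
exit_pts = np.zeros((n_paths, 2))

# Run all paths until each has left the unit disk, recording exit positions.
while alive.any():
    pos[alive] += rng.normal(0.0, np.sqrt(dt), size=(int(alive.sum()), 2))
    r2 = (pos**2).sum(axis=1)
    hit = alive & (r2 >= 1.0)
    exit_pts[hit] = pos[hit]
    alive &= r2 < 1.0

angles = np.arctan2(exit_pts[:, 1], exit_pts[:, 0])
upper = float((angles > 0).mean())   # fraction exiting through the upper semicircle
```

By symmetry of the increments, each half of the circle receives exactly half the exit mass in expectation; the simulation only adds Monte Carlo noise.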
Lemma 20.2. ⟨X^i, X^j⟩_t = 0 if i ≠ j.

Proof. We have, for s < t,

E[X^i_t X^j_t | F_s] = E[(X^i_t − X^i_s)(X^j_t − X^j_s) | F_s] + X^i_s E[X^j_t | F_s] + X^j_s E[X^i_t | F_s] − X^i_s X^j_s = E[X^i_t − X^i_s] E[X^j_t − X^j_s] + X^i_s X^j_s = X^i_s X^j_s.

Therefore X^i X^j is a martingale. Since X^i_t X^j_t − ⟨X^i, X^j⟩_t must be a martingale, the lemma follows.
Itô's formula for a multidimensional semimartingale is given by (12.1). In the case of multidimensional Brownian motion, we have

f(X_t) = f(X_0) + ∫_0^t ∇f(X_s) · dX_s + ½ ∫_0^t Δf(X_s) ds.    (20.1)
21. Harmonic functions.

From now on, for any Borel set A we write τ_A = inf{t > 0 : X_t ∉ A} and T_A = inf{t > 0 : X_t ∈ A}.
A domain is an open subset of R^d. h is harmonic in D if h is locally bounded and satisfies the mean value property: for all x ∈ D and for all r < dist(x, ∂D),

h(x) = ∫_{∂B(x,r)} h(y) σ_r(dy),

where σ_r is normalized surface measure on ∂B(x, r).
Proposition 21.2. If h is harmonic in D, then h is C^∞ in D.

Proof. A△B denotes the symmetric difference of A and B: A△B = (A − B) ∪ (B − A). Let r < dist(x, ∂D).
Proposition 21.3. If h is harmonic, then Δh = 0 in D.

Proof. Suppose Δh(x_0) > 0. Without loss of generality, take x_0 = 0. Since h is C^∞, Δh > 0 in B(0, r) for some r less than the distance to the boundary. Let τ_r = inf{t : X_t ∉ B(0, r)}. By Itô's formula,

h(X_{t∧τ_r}) − h(X_0) = martingale + ½ ∫_0^{t∧τ_r} Δh(X_s) ds.

Take expectations with respect to P^0. Then let t → ∞ and use dominated convergence to obtain

E^0 h(X_{τ_r}) − h(0) = ½ E^0 ∫_0^{τ_r} Δh(X_s) ds.

The right hand side is strictly positive. On the other hand, because the distribution of Brownian motion on exiting B(0, r) is rotationally symmetric, the left hand side is ∫_{∂B(0,r)} h(z) σ_r(dz) − h(0), which is 0, since h is harmonic. This is a contradiction, so we must have Δh(0) ≤ 0. The same argument shows that Δh(0) ≥ 0, which yields our result.
Proposition 21.4. If h is C² and Δh = 0 in D, then h is harmonic.

Proof. By Itô's formula and the fact that Δh = 0, h(X_{t∧T}) − h(X_0) is a martingale, where T is the time to hit ∂B(x, r) and r < dist(x, ∂D). Take P^x expectations and let t → ∞. By dominated convergence we have E^x h(X_T) = h(x). Since Brownian motion is rotationally symmetric, this translates to

h(x) = ∫_{∂B(0,r)} h(x + y) σ_r(dy),

which shows h is harmonic.
Corollary 21.5. If h is harmonic in D, then h(X_{t∧τ_D}) is a martingale.
The following theorem is known as the maximum principle for harmonic functions.
Theorem 21.6. Let D be a bounded domain, h harmonic in D, and h continuous on D̄, the closure of D. Then

sup_D h = sup_{∂D} h.
Proof. Let T_ε = inf{t : dist(X_t, ∂D) < ε}. Since h is continuous on D̄, which is compact, h is bounded. By Itô's formula, h(X_{t∧T_ε}) − h(X_0) is a martingale. Taking P^x expectations, letting ε → 0, and then letting t → ∞, h(x) = E^x h(X_{τ_D}). (The reason for
introducing T_ε is that the first derivatives of h are bounded on {y ∈ D : dist(y, ∂D) > ε}, and so the martingale term is actually a martingale and not only a local martingale.) Therefore h(x) ≤ sup_{∂D} h. The other direction is obvious.
The example where D is the upper half plane and h(x1, x2) = x2 shows that the
assumption that D be bounded is essential.
22. The Dirichlet problem.

The Dirichlet problem is the following: given a bounded domain D and a continuous function f on ∂D, find a function h that is harmonic in D, continuous on D̄, and agrees with f on ∂D. The Dirichlet problem does not always have a solution, but we will investigate when it does have a solution and how to find it probabilistically.

A point y is regular for a set A if P^y(T_A = 0) = 1. This means that starting at y (which may or may not be in A), the Brownian motion enters A immediately. An example to keep in mind is one-dimensional Brownian motion, y = 0, and A = (0, ∞). By the LIL, X_t enters A immediately.

It turns out that one can solve the Dirichlet problem if every point on the boundary of D is regular for D^c. This is the same condition the analysts come up with, although their definition of regular involves the notion of barrier and is quite different.
Lemma 22.1. Suppose y ∈ ∂D is regular for D^c. If x_n ∈ D and x_n → y, then for all t, lim_{n→∞} P^{x_n}(τ_D ≤ t) = 1.

Proof. Let w_s(x) = P^x(X_u ∈ D^c for some u ∈ [s, t]). Then

w_s(x) = E^x[P^{X_s}(τ_D ≤ t − s)] = E^x φ(X_s) = ∫ φ(y) (2πs)^{−d/2} e^{−|y−x|²/2s} dy

is continuous in x, where φ(y) = P^y(τ_D ≤ t − s). As s → 0, w_s(x) increases to P^x(τ_D ≤ t). So {x : P^x(τ_D ≤ t) > 1 − ε} is an open set. y will be in this set since P^y(τ_D ≤ t) = 1, and so for n sufficiently large, x_n will also be in this set, or P^{x_n}(τ_D ≤ t) ≥ 1 − ε.
Proposition 22.2. If f is bounded and continuous on ∂D, y ∈ ∂D is regular for D^c, x_n ∈ D, and x_n → y, then E^{x_n} f(X_{τ_D}) → f(y).

Proof. We first want to show that X_{τ_D} will be close to y if n is large. Let ε > 0 and δ > 0, and pick t small so that P^0(sup_{s≤t} |X_s| ≥ δ/2) < ε; this is possible by the continuity of the paths of X_t. Choose n large so that |x_n − y| < δ/2 and P^{x_n}(τ_D ≤ t) ≥ 1 − ε. Then

P^{x_n}(X_{τ_D} ∈ B(y, δ)) ≥ P^{x_n}(τ_D ≤ t, sup_{s≤t} |X_s − x_n| ≤ δ/2) ≥ P^{x_n}(τ_D ≤ t) − P^{x_n}(sup_{s≤t} |X_s − x_n| > δ/2) ≥ (1 − ε) − ε.
Now pick δ such that |f(z) − f(y)| < ε if |z − y| < δ. Then

E^{x_n} f(X_{τ_D}) = E^{x_n}[f(X_{τ_D}); X_{τ_D} ∈ B(y, δ)] + E^{x_n}[f(X_{τ_D}); X_{τ_D} ∉ B(y, δ)].

The second term on the right is bounded by ‖f‖_∞ P^{x_n}(X_{τ_D} ∉ B(y, δ)), which tends to 0 as n → ∞. The first term on the right converges to f(y) because

| E^{x_n}[f(X_{τ_D}); X_{τ_D} ∈ B(y, δ)] − f(y) P^{x_n}(X_{τ_D} ∈ B(y, δ)) | ≤ ε P^{x_n}(X_{τ_D} ∈ B(y, δ)).
The following zero-one law is very useful.

Theorem 22.3. If A ∈ F_{0+}, then P^x(A) equals 0 or 1.

Proof. By the Markov property at time 0,

P^x(A) = E^x[1_A ∘ θ_0; A] = E^x[P^{X_0}(A); A] = (P^x(A))².

This implies P^x(A) is zero or one.
Here is a simple condition, the Poincare cone condition, that implies a point is regular.

Proposition 22.4. Suppose there exists an open cone V with vertex y ∈ ∂D such that V ∩ B(y, r) ⊂ D^c for some r. Then y is regular for D^c.

Proof. Without loss of generality assume y = 0. Then

P^0(τ_D ≤ t) ≥ P^0(X_t ∈ D^c) ≥ P^0(X_t ∈ V ∩ B(0, r)) ≥ P^0(X_t ∈ V) − P^0(X_t ∉ B(0, r)) = P^0(X_1 ∈ V) − P^0(X_1 ∉ B(0, r/√t)) → P^0(X_1 ∈ V) > 0

as t → 0. The equality here follows by scaling. So P^0(τ_D = 0) > 0. By the zero-one law it must be 1.
We now can solve the Dirichlet problem.
Theorem 22.5. Let D be bounded and suppose every point on the boundary of D is regular for D^c. Let f be continuous on ∂D. There exists one and only one function h that is harmonic in D, continuous on D̄, and agrees with f on ∂D, and h is given by h(x) = E^x f(X_{τ_D}).
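Before turning to the proof, the representation h(x) = E^x f(X_{τ_D}) lends itself to Monte Carlo. Take D to be the unit disk in R² and boundary data f(y) = y_1, whose harmonic extension is h(x) = x_1; a simulation started at (0.3, 0.4) should return roughly 0.3. The discretization parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, dt = 4000, 1e-3
pos = np.tile(np.array([0.3, 0.4]), (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
exit_pts = np.zeros((n_paths, 2))

# Run Brownian paths from (0.3, 0.4) until they exit the unit disk.
while alive.any():
    pos[alive] += rng.normal(0.0, np.sqrt(dt), size=(int(alive.sum()), 2))
    r2 = (pos**2).sum(axis=1)
    hit = alive & (r2 >= 1.0)
    exit_pts[hit] = pos[hit]
    alive &= r2 < 1.0

# Estimate h(0.3, 0.4) = E^x f(X_{tau_D}) for f(y) = y_1; exact value is 0.3.
h_est = float(exit_pts[:, 0].mean())
```

Since the first coordinate of the walk is itself a martingale, the estimate is unbiased and the only error is Monte Carlo noise.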
Proof. If h_1 and h_2 are two solutions, then h_1 − h_2 is harmonic in D and 0 on ∂D. By the maximum principle, h_1 − h_2 ≤ 0 in D, or h_1 ≤ h_2. By symmetry, h_2 ≤ h_1, or h_1 = h_2.

Define h(x) = E^x f(X_{τ_D}), and let r < dist(x, ∂D).

23. The half space.

Let D be the half space {x ∈ R^d : x_d > 0}. We will sometimes write x̃ for (x_1, . . . , x_{d−1}).
Theorem 23.1. P^x(X_{τ_D} ∈ dỹ) = P_H(x, ỹ) dỹ, where

P_H(x, ỹ) = (Γ(d/2)/π^{d/2}) x_d / |x − ỹ|^d.

Proof. Without loss of generality assume x̃ = 0. Then

E^x[e^{iu·X̃_{τ_D}}] = E^x ∫_0^∞ [e^{iu·X̃_t}; τ_D ∈ dt] = ∫_0^∞ e^{−|u|²t/2} P^{x_d}(τ_D ∈ dt) = E^{x_d} e^{−λτ_D},

where λ = |u|²/2; here we used the independence of X̃ and X^d and the fact that τ_D depends only on X^d.

If a > 0, M_t = e^{−aX^d_t − a²t/2} is a martingale. By optional stopping, e^{−ax_d} = E^{x_d} M_{t∧τ_D}. Since M_t is nonnegative and bounded above by 1 up to time τ_D, by dominated convergence

e^{−ax_d} = E^{x_d} M_{τ_D} = E^{x_d} e^{−aX^d_{τ_D} − a²τ_D/2} = E^{x_d} e^{−(a²/2)τ_D}.

Letting a = √(2λ) = |u|, then E^{x_d} e^{−λτ_D} = e^{−|u|x_d}. Therefore

∫ e^{iu·ỹ} P_H(x, ỹ) dỹ = E^x e^{iu·X̃_{τ_D}} = e^{−|u|x_d}.

To finish, we look in a table of Fourier transforms to invert this.
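In d = 2 the kernel is the Cauchy density: for x = (0, a), P_H(x, y) = a/(π(y² + a²)), and one can check numerically that it integrates to 1 over the boundary line (the truncation of the integral below is an arbitrary choice):

```python
import numpy as np
from math import pi

# d = 2 case of the Poisson kernel for the half plane, starting at x = (0, a):
# P_H(x, y) = a / (pi * (y^2 + a^2)), the Cauchy density with scale a.
a = 1.0
y = np.linspace(-2000.0, 2000.0, 400_001)
ph = a / (pi * (y**2 + a**2))
total = float(np.trapz(ph, y))   # should be close to 1 (tails are truncated)
```

The heavy Cauchy tails are why the truncation window has to be wide; the missing mass beyond ±2000 is of order 1/(1000π).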
Proposition 24.1. If f ≥ 0 and d ≥ 3, then U f(x) = E^x ∫_0^∞ f(X_s) ds.

Proof. We write

E^x ∫_0^∞ f(X_s) ds = ∫_0^∞ E^x f(X_s) ds = ∫_0^∞ ∫ f(y) (2πs)^{−d/2} e^{−|x−y|²/2s} dy ds = ∫ f(y) [ ∫_0^∞ (2π)^{−d/2} s^{−d/2} e^{−|x−y|²/2s} ds ] dy.

Making the substitution t = |x − y|²/2s, the inside integral is

(2π)^{−d/2} 2^{d/2−1} |y − x|^{2−d} ∫_0^∞ t^{d/2−2} e^{−t} dt = (Γ(d/2 − 1)/2π^{d/2}) |y − x|^{2−d}.
Proposition 24.2. If f is C² with compact support, then the solution to Δg = f in R^d is g = −½ U f.

Proof. By translation invariance, ∂²(U f)/∂x_i∂x_j = U(∂²f/∂x_i∂x_j) is finite and continuous. So U f is C². By Itô's formula, U f(X_t) − U f(X_0) − ½ ∫_0^t ΔU f(X_s) ds is a local martingale. On the other hand,

U f(X_t) = E^{X_t} ∫_0^∞ f(X_s) ds = E^x [ ∫_0^∞ f(X_{s+t}) ds | F_t ] = E^x [ ∫_t^∞ f(X_s) ds | F_t ] = E^x [ ∫_0^∞ f(X_s) ds | F_t ] − ∫_0^t f(X_s) ds.

So

∫_0^t [ ½ ΔU f(X_s) + f(X_s) ] ds

is a continuous local martingale that is zero at time 0. Therefore, since it is also of bounded variation, it is identically 0. Hence ½ ΔU f + f = 0 at X_s for almost every s, a.s. By continuity, since X_t hits a neighborhood of any point with positive probability (although not probability one), f = −½ ΔU f everywhere.
25. Green functions.

Let d ≥ 3 and define

G_D f(x) = E^x ∫_0^{τ_D} f(X_s) ds.
By the Markov property this is

lim_{n→∞} ∫ p_{t/n}(x_0, x_1) · · · p_{t/n}(x_{n−1}, x_n) 1_B(x_0) 1_D(x_1) · · · 1_D(x_{n−1}) 1_A(x_n) dx_0 · · · dx_n,

where p_t(x, y) = (2πt)^{−d/2} exp(−|x − y|²/2t) are the transition densities for Brownian motion.

Writing y_i for x_{n−i} and using the symmetry of p_t(x, y) in x and y, we obtain

lim_{n→∞} ∫ p_{t/n}(y_0, y_1) · · · p_{t/n}(y_{n−1}, y_n) 1_A(y_0) 1_D(y_1) · · · 1_D(y_{n−1}) 1_B(y_n) dy_0 · · · dy_n.

As above this is equal to ∫_A P^y(X_t ∈ B, τ_D > t) dy.

Now integrate over t from 0 to ∞ and we have

∫_B E^x ∫_0^{τ_D} 1_A(X_t) dt dx = ∫_A E^y ∫_0^{τ_D} 1_B(X_t) dt dy,

or

∫_B ∫_A g_D(x, y) dy dx = ∫_B G_D 1_A(x) dx = ∫_A G_D 1_B(y) dy = ∫_A ∫_B g_D(y, x) dx dy.

This implies g_D(x, y) = g_D(y, x) for almost every pair (x, y) with respect to the product measure. From (25.1) we see that g_D(x, y) is harmonic in x and in y, and it follows that g_D(x, y) is continuous except when x = y. Hence g_D(x, y) = g_D(y, x) for all x and y.
26. Solutions to PDEs.

In this section we will show how one can solve the Dirichlet problem, the Poisson equation, and the Cauchy problem using probability. The operator we consider is

L f(x) = ½ Σ_{i,j=1}^d a_{ij}(x) ∂²f/∂x_i∂x_j (x) + Σ_{i=1}^d b_i(x) ∂f/∂x_i (x).

We assume that the a_{ij} and b_i are bounded, and that the a_{ij} are symmetric (i.e., a_{ij}(x) = a_{ji}(x)). We say that L is strictly elliptic if there exists Λ(x) > 0 for each x such that Σ_{i,j} a_{ij}(x) y_i y_j ≥ Λ(x) Σ_{i=1}^d y_i² for all (y_1, . . . , y_d). Another way of phrasing this is to say that the inverse of the matrix a_{ij}(x) is bounded. We say L is uniformly elliptic if we can take Λ(x) to be independent of x.
Take σ to be a matrix such that σσ^T = a. Let X_t be the solution to

X_t = x + ∫_0^t σ(X_s) dW_s + ∫_0^t b(X_s) ds,

where W_t is a d-dimensional Brownian motion. In coordinate language,

X^i_t = x_i + ∫_0^t Σ_{j=1}^d σ_{ij}(X_s) dW^j_s + ∫_0^t b_i(X_s) ds.

By Itô's formula, if f ∈ C²,

f(X_t) − f(X_0) = ∫_0^t Σ_i ∂f/∂x_i (X_s) Σ_j σ_{ij}(X_s) dW^j_s + ∫_0^t Σ_i ∂f/∂x_i (X_s) b_i(X_s) ds + ½ ∫_0^t Σ_{i,j} ∂²f/∂x_i∂x_j (X_s) d⟨X^i, X^j⟩_s.

Since

d⟨X^i, X^j⟩_s = ⟨ Σ_k σ_{ik}(X_s) dW^k_s, Σ_k σ_{jk}(X_s) dW^k_s ⟩ = Σ_k σ_{ik}(X_s) σ_{jk}(X_s) ds = (σσ^T)_{ij}(X_s) ds = a_{ij}(X_s) ds,

we have

f(X_t) − f(X_0) = martingale + ∫_0^t L f(X_s) ds.    (26.1)
Theorem 26.1. (Poisson's equation) Suppose λ > 0, f is C¹ with compact support, u is C² with u and its first and second partial derivatives bounded, and u solves Lu − λu = −f. Then u(x) = E^x ∫_0^∞ e^{−λt} f(X_t) dt.

Proof. u(X_t) − u(X_0) = M_t + ∫_0^t Lu(X_s) ds, where M_t is a martingale. By the product formula,

e^{−λt} u(X_t) − u(X_0) = ∫_0^t e^{−λs} dM_s + ∫_0^t e^{−λs} Lu(X_s) ds − λ ∫_0^t e^{−λs} u(X_s) ds.

Take E^x expectations and let t → ∞. We obtain

u(x) = −E^x ∫_0^∞ e^{−λs} (Lu − λu)(X_s) ds = E^x ∫_0^∞ e^{−λs} f(X_s) ds.
There is a version for bounded domains as well. Note that we allow λ = 0 in this version.

Theorem 26.2. Suppose D is bounded, u is C² in D, u is continuous on D̄, u = 0 on ∂D, u solves Lu − λu = −f in D, and λ ≥ 0. Then u(x) = E^x ∫_0^{τ_D} e^{−λs} f(X_s) ds.
Proof. We first need to argue that E xD 0 andxRd, and alsou(x, 0) = f(x). Then
u(x, t) =E xf(Xt).
Proof. Fix $t_0$ and let $M_t = u(X_t, t_0 - t)$. Note $\partial[u(x, t_0 - t)]/\partial t = -(\partial u/\partial t)(x, t_0 - t)$.
So
$$u(X_t, t_0 - t) = \text{martingale} + \int_0^t Lu(X_s, t_0 - s)\, ds - \int_0^t (\partial u/\partial t)(X_s, t_0 - s)\, ds.$$
Therefore, since $u$ satisfies $\partial u/\partial t = Lu$, $M_t$ is a martingale, and in particular,
$E^x M_0 = E^x M_{t_0}$.
Since
$$E^x M_{t_0} = E^x u(X_{t_0}, 0) = E^x f(X_{t_0})$$
and
$$E^x M_0 = E^x u(X_0, t_0) = u(x, t_0),$$
then $u(x, t_0) = E^x f(X_{t_0})$. The result follows because $t_0$ is arbitrary.
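For Brownian motion ($L = \frac12 \frac{d^2}{dx^2}$) the representation $u(x,t) = E^x f(X_t)$ can be tested exactly, since $X_t \sim N(x, t)$ needs no path discretization. With the illustrative initial data $f(x) = x^2$, the solution of $u_t = \frac12 u_{xx}$, $u(x,0) = x^2$, is $u(x,t) = x^2 + t$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Heat-equation check: u(x, t) = E^x f(X_t) with f(x) = x^2 and X Brownian
# motion, so X_t can be sampled exactly as x0 + sqrt(t) * N(0, 1).
x0, t = 0.5, 2.0
xt = x0 + np.sqrt(t) * rng.normal(size=200_000)
estimate = np.mean(xt**2)
print(estimate)  # close to x0**2 + t = 2.25
```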
27. Feynman-Kac formula.
Let us now look at the operator $Lu(x) + q(x)u(x)$. If $d = 1$, $D = (0, 1)$, $q$ is a constant larger than $\pi^2/2$, and $x = \frac12$, then since it is
known that $P^x(\tau_D > t) \approx c e^{-\pi^2 t/2}$ for $t$ large, we conclude $E^x \exp\big(\int_0^{\tau_D} q(X_s)\, ds\big) = \infty$. It can also be shown that $Lu(x) + q(x)u(x) = 1$ with $u(0) = u(1) = 0$ has no solution with
this $q$.
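The exit time $\tau_D$ of Brownian motion from $(0,1)$ is easy to simulate, which makes the blow-up plausible: $\tau_D$ has an exponential tail with rate $\pi^2/2 \approx 4.93$, so $e^{q\tau_D}$ is not integrable once $q$ exceeds that rate. As a crude sanity check, the sketch below recovers the classical mean exit time $E^x \tau_D = x(1-x)$; the time discretization biases the estimate slightly high, by roughly $O(\sqrt{dt})$ (paths can cross the boundary between monitoring times):

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean exit time of Brownian motion from D = (0, 1) started at x = 1/2.
# Classical formula: E^x tau_D = x (1 - x) = 1/4.
dt, n_paths = 1e-3, 20_000
x = np.full(n_paths, 0.5)
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
t = 0.0
while alive.any() and t < 50.0:
    x[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
    t += dt
    exited = alive & ((x <= 0.0) | (x >= 1.0))
    tau[exited] = t
    alive &= ~exited
tau[alive] = t  # truncate the (very rare) paths still alive at the cap

print(tau.mean())  # roughly x*(1-x) = 0.25, biased slightly high
```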
What one can say is the following, known as the Feynman-Kac formula.

Theorem 27.1. Suppose $f$ is continuous on $\partial D$, where $D$ is a bounded nice domain.
Suppose $q$ is $C^2$ in $D$, $u$ is $C^2$ in $D$, $u$ is continuous on $\overline{D}$, $u$ agrees with $f$ on $\partial D$, and $u$
satisfies $Lu + qu = 0$ in $D$. If $E^x \exp\big(\int_0^{\tau_D} q^+(X_s)\, ds\big) < \infty$, then
$$u(x) = E^x\Big[ f(X_{\tau_D}) \exp\Big(\int_0^{\tau_D} q(X_s)\, ds\Big)\Big].$$
Similarly, if $u(x, t)$ is the solution to $\frac{\partial u}{\partial t}(x, t) = Lu + qu$ with $u(x, 0) = f(x)$, then
$$u(x, t) = E^x\Big[ f(X_t)\, e^{\int_0^t q(X_s)\, ds}\Big].$$
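The parabolic Feynman-Kac formula can be tested by weighting each Brownian path with the exponential of the discretized potential integral. The choices below are illustrative, not from the text: $L = \frac12 \frac{d^2}{dx^2}$, $f(x) = x^2$, and a constant potential $q \equiv \frac12$, for which the closed form is $u(x,t) = e^{t/2}(x^2 + t)$ (with constant $q$ the Riemann sum of $\int_0^t q(X_s)\,ds$ is exact, so only Monte Carlo noise remains):

```python
import numpy as np

rng = np.random.default_rng(4)

# Feynman-Kac: u(x, t) = E^x[ f(X_t) exp(int_0^t q(X_s) ds) ] solves
# u_t = (1/2) u_xx + q u, u(x, 0) = f(x).
q = lambda x: 0.5 * np.ones_like(x)   # constant potential, for checkability
f = lambda x: x**2

x0, t, dt, n_paths = 0.5, 1.0, 0.01, 50_000
n_steps = int(t / dt)
x = np.full(n_paths, x0)
q_integral = np.zeros(n_paths)
for _ in range(n_steps):
    q_integral += q(x) * dt           # Riemann sum of int_0^t q(X_s) ds
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)

estimate = np.mean(f(x) * np.exp(q_integral))
print(estimate)  # close to exp(0.5) * (0.25 + 1.0), about 2.06
```

For a non-constant $q$ the same code applies unchanged, with an additional $O(dt)$ bias from the Riemann sum along the path.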
28. First order terms.
Theorem 28.1. Let
$$L_0 f(x) = \frac12 \sum_{i,j=1}^d a_{ij}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}(x),$$
so that $L = L_0 + \sum_i b_i\, \partial/\partial x_i$ as in Section 26. Let $a = \sigma\sigma^T$, where $\sigma$ and $\sigma^{-1}$
are bounded, let $dX_t = \sigma(X_t)\, dW_t$, and suppose $u \in C^2$ solves $Lu - \lambda u = -f$. Then
$$u(x) = E^x\Big[ \int_0^\infty e^{-\lambda t} f(X_t) M_t\, dt \Big],$$
where $\theta = \sigma^{-1} b$ and
$$M_t = \exp\Big( \int_0^t \theta(X_s)\, dW_s - \frac12 \int_0^t |\theta(X_s)|^2\, ds \Big).$$
Proof. Let $N_t = \int_0^t \theta(X_s)\, dW_s$, so that $M_t = e^{N_t - \langle N\rangle_t/2}$. Let $dQ/dP = M_t$ on $\mathcal{F}_t$. Under $Q$,
$X^i_t - \langle N, X^i\rangle_t$ is a martingale. Note
$$d\langle N, X^i\rangle_s = \sum_j \theta_j(X_s)\, d\langle W^j, X^i\rangle_s = \sum_j \theta_j(X_s)\sigma_{ij}(X_s)\, ds = b_i(X_s)\, ds.$$
Let $\widetilde{W}_t$ be defined by $d\widetilde{W}_t = \sigma^{-1}(X_t)(dX_t - b(X_t)\, dt)$. Note $\widetilde{W}_t$ is a martingale
under $Q$ and a calculation shows that $\langle \widetilde{W}^i, \widetilde{W}^j\rangle_t = t$ if $i = j$ and $0$ otherwise. So $\widetilde{W}$ is a
Brownian motion under $Q$.
Also,
$$dX^i_t = \sum_j \sigma_{ij}(X_t)\, d\widetilde{W}^j_t + b_i(X_t)\, dt,$$
so under $Q$, $X$ is associated with the operator $L$. Therefore, by Theorem 26.1 applied under $Q$ (and since $dQ/dP = M_t$ on $\mathcal{F}_t$),
$$u(x) = E^x_Q \int_0^\infty e^{-\lambda t} f(X_t)\, dt = \int_0^\infty e^{-\lambda t} E^x[f(X_t) M_t]\, dt.$$
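The Girsanov weight $M_t$ can be demonstrated directly. With the illustrative choices $\sigma = 1$ and a constant drift $b$ (so $\theta = b$ and $M_t = e^{bW_t - b^2 t/2}$), the change of measure turns driftless Brownian paths into Brownian motion with drift $b$ under $Q$, so $E^x[X_t M_t] = E^x_Q[X_t] = x + bt$; since $\sigma = 1$ here, $X_t = x + W_t$ can be sampled exactly:

```python
import numpy as np

rng = np.random.default_rng(5)

# Girsanov sanity check: E[X_t M_t] = E_Q[X_t] = x0 + b*t, and E[M_t] = 1.
b, t, x0 = 0.5, 1.0, 0.0
w = np.sqrt(t) * rng.normal(size=200_000)   # W_t sampled exactly
m = np.exp(b * w - 0.5 * b**2 * t)          # Girsanov density M_t
estimate = np.mean((x0 + w) * m)
print(np.mean(m))   # close to 1 (M is a mean-one martingale)
print(estimate)     # close to x0 + b*t = 0.5
```

For non-constant $\sigma$ and $b$, the paths and the stochastic integral $\int_0^t \theta(X_s)\, dW_s$ in $M_t$ would instead be discretized along the lines of the Euler-Maruyama sketch in Section 26.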