
LMS-EPSRC Short Course: Stochastic Partial Differential Equations
Imperial College London, 7-11 July 2008

Applications of Malliavin Calculus to Stochastic Partial Differential Equations

Marta Sanz-Solé
Facultat de Matemàtiques, Universitat de Barcelona

Version August 2008

Research supported by the grant MTM 2006-01351 from the Ministerio de Ciencia y Tecnología, Spain.


Contents

1 Introduction
2 Integration by parts and absolute continuity of probability laws
  2.1 Properties derived from an integration by parts formula
  2.2 Malliavin's results
3 Stochastic calculus of variations on an abstract Wiener space
  3.1 Finite dimensional Gaussian calculus
  3.2 Infinite dimensional framework
  3.3 The derivative and divergence operators
  3.4 Some calculus
4 Criteria for Existence and Regularity of Densities
  4.1 Existence of density
  4.2 Smoothness of the density
5 Watanabe-Sobolev Differentiability of SPDEs
  5.1 A class of linear homogeneous SPDEs
  5.2 The Malliavin derivative of an SPDE
6 Analysis of Non-Degeneracy
  6.1 Existence of moments of the Malliavin covariance
  6.2 Some references
7 Small perturbations of the density
  7.1 General results
  7.2 An example: the stochastic heat equation


    1 Introduction

Nowadays, Malliavin calculus is underpinning important developments in stochastic analysis and its applications. In particular, research on SPDEs is benefiting from the ideas and tools of this calculus. Unexpectedly, this hard machinery is successfully used in financial engineering for the computation of Greeks, and in numerical approximations of SPDEs. The analysis of the dependence of the Malliavin matrix on its structural parameters is used in problems of potential theory involving SPDEs, like obtaining the optimal size of some hitting probabilities. The study of such questions, but also of some classical issues like the absolute continuity of measures derived from probability laws of SPDEs, is still an underdeveloped field.

These notes are a brief introduction to the basic elements of Malliavin calculus and to some of its applications to SPDEs. They have been prepared for a series of six lectures at the LMS-EPSRC Short Course on Stochastic Partial Differential Equations. The first three sections are devoted to introducing the calculus: its motivations, the main operators and rules, and the criteria for existence and smoothness of densities of probability laws. The last three deal with applications to SPDEs. To be self-contained, we provide some ingredients of the SPDE framework we are using. Then we study differentiability in the Malliavin sense, and non-degeneracy of the Malliavin matrix. The last section is devoted to sketching a method to analyze the asymptotic behaviour of densities of small perturbations of SPDEs. Altogether, this is a short, very short, journey through a deep and fascinating subject.

To close this short presentation, I would like to express my gratitude to Professor Dan Crisan, the scientific organizer of the course, for a wonderful and efficient job, to the London Mathematical Society for the financial support, and to the students whose interest and enthusiasm has been a source of motivation and satisfaction.

    Barcelona, August 2008


2 Integration by parts and absolute continuity of probability laws

This lecture is devoted to presenting the classical sufficient conditions for the existence and regularity of densities of finite measures on $\mathbb{R}^n$, and therefore for the densities of probability laws. The results go back to Malliavin (see [35], but also [74], [79] and [46]). To check these conditions, Malliavin developed a differential calculus on the Wiener space, which in particular allows one to prove an integration by parts formula. The essentials of this calculus will be given in the next lecture.

2.1 Properties derived from an integration by parts formula

The integration by parts formula of Malliavin calculus is a simple but extremely useful tool underpinning many of the sometimes unexpected applications of this calculus. To illustrate its role and give a motivation, we start by showing how an abstract integration by parts formula leads to explicit expressions for the densities and their derivatives.

Let us introduce some notation. Multi-indices of dimension $r$ are denoted by $\alpha = (\alpha_1, \dots, \alpha_r) \in \{1, \dots, n\}^r$, and we set $|\alpha| = r$. For any differentiable real-valued function $\varphi$ defined on $\mathbb{R}^n$, we denote by $\partial_\alpha \varphi$ the partial derivative $\frac{\partial^{|\alpha|}}{\partial x_{\alpha_1} \cdots \partial x_{\alpha_r}}\varphi$. If $|\alpha| = 0$, we set $\partial_\alpha \varphi = \varphi$, by convention.

Definition 2.1 Let $F$ be an $\mathbb{R}^n$-valued random vector and $G$ be an integrable random variable defined on some probability space $(\Omega, \mathcal{F}, P)$. Let $\alpha$ be a multi-index. The pair $F, G$ satisfies an integration by parts formula of degree $|\alpha|$ if there exists a random variable $H_\alpha(F, G) \in L^1(\Omega)$ such that

$$E\big[(\partial_\alpha \varphi)(F)\, G\big] = E\big[\varphi(F)\, H_\alpha(F, G)\big], \quad (2.1)$$

for any $\varphi \in C_b^\infty(\mathbb{R}^n)$.

The property expressed in (2.1) is recursive in the following sense. Let $\gamma = (\alpha, \beta)$, with $\alpha = (\alpha_1, \dots, \alpha_a)$, $\beta = (\beta_1, \dots, \beta_b)$. Then

$$E\big[(\partial_\gamma \varphi)(F)\, G\big] = E\big[(\partial_\beta \varphi)(F)\, H_\alpha(F, G)\big] = E\big[\varphi(F)\, H_\beta(F, H_\alpha(F, G))\big] = E\big[\varphi(F)\, H_\gamma(F, G)\big].$$


The interest of this definition in connection with the study of probability laws can be deduced from the next result.

Proposition 2.1
1. Assume that (2.1) holds for $\alpha = (1, \dots, 1)$ and $G = 1$. Then the probability law of $F$ has a density $p(x)$ with respect to the Lebesgue measure on $\mathbb{R}^n$. Moreover,

$$p(x) = E\big[\mathbf{1}_{(x \le F)}\, H_{(1,\dots,1)}(F, 1)\big], \quad (2.2)$$

where $\mathbf{1}_{(x \le F)} = \prod_{i=1}^n \mathbf{1}_{(x_i \le F_i)}$; in particular, $p$ is continuous.
2. Assume that (2.1) holds for every multi-index $\alpha$, with $G = 1$. Then $p$ belongs to $C^\infty(\mathbb{R}^n)$, and its partial derivatives are obtained by iterating the integration by parts formula.


Hence the law of $F$ is absolutely continuous and its density is given by (2.2). Since $H_{(1,\dots,1)}(F, 1)$ is assumed to be in $L^1(\Omega)$, formula (2.2) implies the continuity of $p$, by bounded convergence. This finishes the proof of part 1.

The proof of part 2 is done recursively. For the sake of simplicity, we shall only give the details of the first iteration, for the multi-index $\alpha = (1, \dots, 1)$. Let $f \in C_0^\infty(\mathbb{R}^n)$, and set

$$\varphi(x) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f(y)\, dy, \qquad \Phi(x) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} \varphi(y)\, dy.$$

By assumption,

$$E\big[f(F)\big] = E\big[\varphi(F)\, H_{(1,\dots,1)}(F, 1)\big] = E\big[\Phi(F)\, H_{(1,\dots,1)}\big(F, H_{(1,\dots,1)}(F, 1)\big)\big] = E\big[\Phi(F)\, H_{(2,\dots,2)}(F, 1)\big].$$

Fubini's theorem yields

$$E\big[\Phi(F)\, H_{(2,\dots,2)}(F, 1)\big] = E\Big[\Big(\int_{-\infty}^{F_1} dy_1 \cdots \int_{-\infty}^{F_n} dy_n \int_{-\infty}^{y_1} dz_1 \cdots \int_{-\infty}^{y_n} dz_n\, f(z)\Big)\, H_{(2,\dots,2)}(F, 1)\Big]$$
$$= E\Big[\int_{-\infty}^{F_1} dz_1 \cdots \int_{-\infty}^{F_n} dz_n\, f(z) \int_{z_1}^{F_1} dy_1 \cdots \int_{z_n}^{F_n} dy_n\, H_{(2,\dots,2)}(F, 1)\Big]$$
$$= \int_{\mathbb{R}^n} dz\, f(z)\, E\Big[\prod_{i=1}^n (F_i - z_i)_+\, H_{(2,\dots,2)}(F, 1)\Big].$$

This shows that the density of $F$ is given by

$$p(x) = E\Big[\prod_{i=1}^n (F_i - x_i)_+\, H_{(2,\dots,2)}(F, 1)\Big],$$

using a limit argument, as in the first part of the proof. The function $x \mapsto \prod_{i=1}^n (F_i - x_i)_+$ is differentiable, except when $x_i = F_i$ for some $i = 1, \dots, n$, which happens with probability zero, since $F$ is absolutely continuous. Therefore, by bounded convergence,

$$\partial_{(1,\dots,1)} p(x) = (-1)^n E\big[\mathbf{1}_{[x,\infty)}(F)\, H_{(2,\dots,2)}(F, 1)\big].$$

Remark 2.1 The conclusion in part 2 of the preceding Proposition is quite easy to understand by formal arguments. Indeed, roughly speaking, the function $\varphi$ in (2.1) should be such that its derivative $\partial_\alpha \varphi$ is the Dirac delta function $\delta_0$. Since taking primitives makes functions smoother, the higher $|\alpha|$ is, the smoother $\varphi$ should be. Thus, having (2.1) for any multi-index $\alpha$ yields infinite differentiability for $p(x) = E\big[\delta_0(F - x)\big]$.


Remark 2.2 Assume that (2.1) holds for $\alpha = (1, \dots, 1)$ and a positive, integrable random variable $G$. By considering the measure $dQ = G\, dP$, and with a similar proof as for the first statement of Proposition 2.1, we conclude that the measure $Q \circ F^{-1}$ is absolutely continuous with respect to the Lebesgue measure, and its density $p$ is given by

$$p(x) = E\big[\mathbf{1}_{(x \le F)}\, H_{(1,\dots,1)}(F, G)\big].$$
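As an illustration (not part of the original notes), take $n = 1$ and $F$ standard Gaussian. The classical Gaussian integration by parts $E[\varphi'(F)] = E[\varphi(F)F]$ shows that one may take $H_{(1)}(F, 1) = F$, so formula (2.2) predicts $p(x) = E[\mathbf{1}_{(x \le F)} F]$, which is exactly the standard normal density. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal(1_000_000)  # samples of F ~ N(0, 1)

def density_via_ibp(x):
    # Formula (2.2) with H_{(1)}(F, 1) = F:  p(x) = E[1_{x <= F} F]
    return np.mean((F >= x) * F)

x0 = 0.7
exact = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
assert abs(density_via_ibp(x0) - exact) < 1e-2
```

The sample size and evaluation point are arbitrary choices; any $x$ gives agreement up to Monte Carlo error.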

2.2 Malliavin's results

We now give Malliavin's criteria for the existence of a density (see [35]). To better understand the assumption, let us first explore the one-dimensional case. Consider a finite measure $\mu$ on $\mathbb{R}$. Assume that for every function $\varphi \in C_0^\infty(\mathbb{R})$ there exists a positive constant $C$, not depending on $\varphi$, such that

$$\Big|\int_{\mathbb{R}} \varphi'\, d\mu\Big| \le C\, \|\varphi\|_\infty.$$

Define

$$\varphi_{a,b}(x) = \begin{cases} 0 & \text{if } x \le a, \\ \frac{x-a}{b-a} & \text{if } a < x < b, \\ 1 & \text{if } x \ge b, \end{cases} \quad (2.5)$$

$-\infty < a < b < +\infty$. By approximating $\varphi_{a,b}$ by a sequence of functions in $C_0^\infty(\mathbb{R})$ we obtain

$$\mu([a, b]) \le C(b - a).$$

Since this holds for any such $a < b$, it follows that $\mu$ is absolutely continuous with respect to the Lebesgue measure. Malliavin proved that the same result holds true in dimension $n > 1$, as stated in the next proposition.

Proposition 2.2 Let $\mu$ be a finite measure on $\mathbb{R}^n$. Assume that for any $i \in \{1, 2, \dots, n\}$ and every function $\varphi \in C_0^\infty(\mathbb{R}^n)$, there exist positive constants $C_i$, not depending on $\varphi$, such that

$$\Big|\int_{\mathbb{R}^n} \partial_i \varphi\, d\mu\Big| \le C_i\, \|\varphi\|_\infty. \quad (2.6)$$

Then $\mu$ is absolutely continuous with respect to the Lebesgue measure, and the density belongs to $L^{\frac{n}{n-1}}$.


    When applying this proposition to the law of a random vector F, we have

    the following particular statement:

Proposition 2.3 Assume that for any $i \in \{1, 2, \dots, n\}$ and every function $\varphi \in C_0^\infty(\mathbb{R}^n)$, there exist positive constants $C_i$, not depending on $\varphi$, such that

$$\big|E\big[(\partial_i \varphi)(F)\big]\big| \le C_i\, \|\varphi\|_\infty. \quad (2.7)$$

Then the law of $F$ has a density.

In [35], the density obtained in the preceding theorem is proved to be in $L^1$; however, in a remark, the improvement to $L^{\frac{n}{n-1}}$ is mentioned and a hint for the proof is provided. We prove Proposition 2.2 following [46], which takes into account Malliavin's remark.

Proof: Consider an approximation of the identity $(\psi_\varepsilon, \varepsilon > 0)$ on $\mathbb{R}^n$, for example

$$\psi_\varepsilon(x) = (2\pi\varepsilon)^{-\frac{n}{2}} \exp\Big(-\frac{|x|^2}{2\varepsilon}\Big).$$

Consider also functions $c_M$, $M \ge 1$, belonging to $C_0^\infty(\mathbb{R}^n)$, $0 \le c_M \le 1$, such that

$$c_M(x) = \begin{cases} 1 & \text{if } |x| \le M, \\ 0 & \text{if } |x| \ge M + 1, \end{cases}$$

and with partial derivatives uniformly bounded, independently of $M$. The functions $c_M(\psi_\varepsilon * \mu)$ clearly belong to $C_0^\infty(\mathbb{R}^n)$ and give an approximation of $\mu$. Then, by the Gagliardo-Nirenberg inequality (see the note at the end of this lecture),

$$\|c_M(\psi_\varepsilon * \mu)\|_{L^{\frac{n}{n-1}}} \le \prod_{i=1}^n \big\|\partial_i\big(c_M(\psi_\varepsilon * \mu)\big)\big\|_{L^1}^{\frac{1}{n}}.$$

We next prove that the right-hand side of this inequality is bounded. For this, we notice that assumption (2.6) implies that the functional

$$\varphi \in C_0^\infty(\mathbb{R}^n) \mapsto \int_{\mathbb{R}^n} \partial_i \varphi\, d\mu$$

is linear and continuous, and therefore defines a signed measure with finite total mass (see for instance [32], page 82). We shall denote this measure by $\mu_i$, $i = 1, \dots, n$. Then,

$$\big\|\partial_i\big(c_M(\psi_\varepsilon * \mu)\big)\big\|_{L^1} \le \int_{\mathbb{R}^n} c_M(x)\, \Big|\int_{\mathbb{R}^n} \partial_i \psi_\varepsilon(x - y)\, \mu(dy)\Big|\, dx + \int_{\mathbb{R}^n} |\partial_i c_M(x)| \int_{\mathbb{R}^n} \psi_\varepsilon(x - y)\, \mu(dy)\, dx$$
$$\le \int_{\mathbb{R}^n} \Big|\int_{\mathbb{R}^n} \psi_\varepsilon(x - y)\, \mu_i(dy)\Big|\, dx + \int_{\mathbb{R}^n} |\partial_i c_M(x)| \int_{\mathbb{R}^n} \psi_\varepsilon(x - y)\, \mu(dy)\, dx.$$

By applying Fubini's theorem, and because of the choice of $\psi_\varepsilon$, it is easy to check that each of the two last terms is bounded by a finite constant, independent of $M$ and $\varepsilon$. As a consequence, the set of functions $\{c_M(\psi_\varepsilon * \mu),\, M \ge 1,\, \varepsilon > 0\}$ is bounded in $L^{\frac{n}{n-1}}$. By using the weak compactness of the unit ball of $L^{\frac{n}{n-1}}$ (Alaoglu's theorem), we obtain that $\mu$ has a density, and it belongs to $L^{\frac{n}{n-1}}$.

The next result (see [74]) gives sufficient conditions on $\mu$ ensuring smoothness of the density with respect to the Lebesgue measure.

Proposition 2.4 Let $\mu$ be a finite measure on $\mathbb{R}^n$. Assume that for any multi-index $\alpha$ and every function $\varphi \in C_0^\infty(\mathbb{R}^n)$, there exist positive constants $C_\alpha$, not depending on $\varphi$, such that

$$\Big|\int_{\mathbb{R}^n} \partial_\alpha \varphi\, d\mu\Big| \le C_\alpha\, \|\varphi\|_\infty. \quad (2.8)$$

Then $\mu$ possesses a density which is a $C^\infty$ function.

When particularising to the law of a random vector $F$, condition (2.8) clearly reads

$$\big|E\big[(\partial_\alpha \varphi)(F)\big]\big| \le C_\alpha\, \|\varphi\|_\infty. \quad (2.9)$$

Remark 2.3 When checking (2.6) and (2.8), we have to get rid of the derivatives $\partial_i \varphi$, $\partial_\alpha \varphi$, and thus one naturally thinks of an integration by parts procedure.

    Some comments:


1. Let $n = 1$. The assumption in part 1 of Proposition 2.1 implies (2.6). However, for $n > 1$, the two hypotheses are not comparable. The conclusion of the former Proposition gives more information on the density than that of Proposition 2.4.

2. Let $n > 1$. Assume that (2.1) holds for any multi-index $\alpha$ with $|\alpha| = 1$. Then, by the recursivity of the integration by parts formula, we obtain the validity of (2.1) for $\alpha = (1, \dots, 1)$.

3. Since the random variable $H_\alpha(F, G)$ in (2.1) belongs to $L^1(\Omega)$, the identity (2.1) with $G = 1$ clearly implies (2.9). Therefore the assumption in part 2 of Proposition 2.1 is stronger than that of Proposition 2.4, but the conclusion is more precise too.

Annex: the Gagliardo-Nirenberg inequality

Let $f \in C_0^\infty(\mathbb{R}^n)$. Then

$$\|f\|_{L^{\frac{n}{n-1}}} \le \prod_{i=1}^n \|\partial_i f\|_{L^1}^{\frac{1}{n}}.$$

For a proof, we refer the reader to [73], page 129.


3 Stochastic calculus of variations on an abstract Wiener space

This lecture is devoted to introducing the main ingredients of Malliavin calculus: the derivative, divergence and Ornstein-Uhlenbeck operators, and the rules of calculus for them.

    3.1 Finite dimensional Gaussian calculus

To start with, we shall consider a very particular situation. Let $\mu_m$ be the standard Gaussian measure on $\mathbb{R}^m$:

$$\mu_m(dx) = (2\pi)^{-\frac{m}{2}} \exp\Big(-\frac{|x|^2}{2}\Big)\, dx.$$

Consider the probability space $(\mathbb{R}^m, \mathcal{B}(\mathbb{R}^m), \mu_m)$. Here $n$-dimensional random vectors are functions $F : \mathbb{R}^m \to \mathbb{R}^n$. We shall denote by $E_m$ the expectation with respect to the measure $\mu_m$.

The purpose is to find sufficient conditions ensuring absolute continuity with respect to the Lebesgue measure on $\mathbb{R}^n$ of the probability law of $F$, and the smoothness of the density. More precisely, we would like to obtain expressions such as (2.1). This will be done in a quite sophisticated way, as a prelude to the methodology we shall apply in the infinite dimensional case. For the sake of simplicity, we will only deal with multi-indices of order one. Hence, we shall only address the problem of existence of a density for the random vector $F$. As references for this section we mention [35], [74], [54].

The Ornstein-Uhlenbeck operator

Let $(B_t, t \ge 0)$ be a standard $\mathbb{R}^m$-valued Brownian motion. Consider the linear stochastic differential equation

$$dX_t(x) = \sqrt{2}\, dB_t - X_t(x)\, dt, \quad (3.1)$$

with initial condition $x \in \mathbb{R}^m$. Using Itô's formula, it is immediate to check that the solution to (3.1) is given by

$$X_t(x) = \exp(-t)\, x + \sqrt{2} \int_0^t \exp(-(t-s))\, dB_s. \quad (3.2)$$

The operator semigroup associated with the Markov process solution to (3.1) is defined by $P_t f(x) = E_m f(X_t(x))$, for a suitable class of functions $f$. Notice that the law of $Z_t(x) = \sqrt{2} \int_0^t \exp(-(t-s))\, dB_s$ is Gaussian, mean zero, with covariance given by $(1 - \exp(-2t))\,\mathrm{Id}$. This fact, together with (3.2), yields

$$P_t f(x) = \int_{\mathbb{R}^m} f\big(\exp(-t)\, x + \sqrt{1 - \exp(-2t)}\, y\big)\, \mu_m(dy). \quad (3.3)$$

We are going to identify the class of functions $f$ for which the right-hand side of (3.3) makes sense, and we will also compute the infinitesimal generator of the semigroup. This is the Ornstein-Uhlenbeck operator in finite dimension.
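Formula (3.3) can be checked numerically. The sketch below (not part of the original notes; the test function and parameters are arbitrary choices) compares a Euler-Maruyama simulation of (3.1) with the single Gaussian average on the right-hand side of (3.3):

```python
import numpy as np

rng = np.random.default_rng(1)
t, x0 = 0.5, 1.2
f = np.cos  # arbitrary test function

# Left-hand side: E[f(X_t(x0))] via Euler-Maruyama on (3.1)
n_paths, n_steps = 200_000, 400
dt = t / n_steps
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += -X * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)
lhs = f(X).mean()

# Right-hand side of (3.3): a single Gaussian average
y = rng.standard_normal(n_paths)
rhs = f(np.exp(-t) * x0 + np.sqrt(1 - np.exp(-2 * t)) * y).mean()

assert abs(lhs - rhs) < 1e-2
```

The agreement holds up to the Euler discretization bias and Monte Carlo error.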

Lemma 3.1 The semigroup $(P_t, t \ge 0)$ associated with $(X_t, t \ge 0)$ satisfies the following:

1. $(P_t, t \ge 0)$ is a contraction semigroup on $L^p(\mathbb{R}^m; \mu_m)$, for all $p \ge 1$.

2. For any $f \in C_b^2(\mathbb{R}^m)$ and every $x \in \mathbb{R}^m$,

$$\lim_{t \downarrow 0} \frac{1}{t}\big(P_t f(x) - f(x)\big) = L_m f(x), \quad (3.4)$$

where $L_m = \Delta - x \cdot \nabla = \sum_{i=1}^m \partial^2_{x_i x_i} - \sum_{i=1}^m x_i\, \partial_{x_i}$.

3. $(P_t, t \ge 0)$ is a symmetric semigroup on $L^2(\mathbb{R}^m; \mu_m)$.

Proof. 1) Let $X$ and $Y$ be independent random variables with law $\mu_m$. The law of $\exp(-t)X + \sqrt{1 - \exp(-2t)}\,Y$ is also $\mu_m$. Therefore, $(\mu_m \otimes \mu_m) \circ T^{-1} = \mu_m$, where $T(x, y) = \exp(-t)x + \sqrt{1 - \exp(-2t)}\,y$. Then the definition of $P_t f$, Jensen's inequality and this remark yield

$$\int_{\mathbb{R}^m} |P_t f(x)|^p\, \mu_m(dx) \le \int_{\mathbb{R}^m} \int_{\mathbb{R}^m} |f(T(x, y))|^p\, \mu_m(dx)\, \mu_m(dy) = \int_{\mathbb{R}^m} |f(x)|^p\, \mu_m(dx).$$

2) This follows very easily by applying the Itô formula to the process $f(X_t)$.

3) We must prove that for any $g \in L^2(\mathbb{R}^m; \mu_m)$,

$$\int_{\mathbb{R}^m} P_t f(x)\, g(x)\, \mu_m(dx) = \int_{\mathbb{R}^m} f(x)\, P_t g(x)\, \mu_m(dx),$$

or, equivalently,

$$E_m\big[f\big(\exp(-t)X + \sqrt{1 - \exp(-2t)}\,Y\big)\, g(X)\big] = E_m\big[g\big(\exp(-t)X + \sqrt{1 - \exp(-2t)}\,Y\big)\, f(X)\big],$$


where $X$ and $Y$ are two independent standard Gaussian variables. This follows easily from the fact that the vector $(Z, X)$, where

$$Z = \exp(-t)X + \sqrt{1 - \exp(-2t)}\,Y,$$

has a Gaussian distribution and each component has law $\mu_m$; since the cross-covariance $\mathrm{Cov}(Z, X) = \exp(-t)\,\mathrm{Id}$ is symmetric, the pairs $(Z, X)$ and $(X, Z)$ have the same law.

The adjoint of the differential

We are looking for an operator $\delta_m$ which is the adjoint of the gradient $\nabla$ in $L^2(\mathbb{R}^m, \mu_m)$. Such an operator must act on functions $\varphi : \mathbb{R}^m \to \mathbb{R}^m$, take values in the space of real-valued functions defined on $\mathbb{R}^m$, and satisfy the duality relation

$$E_m\langle \nabla f, \varphi\rangle = E_m\big(f\, \delta_m \varphi\big), \quad (3.5)$$

where $\langle \cdot, \cdot\rangle$ denotes the inner product in $\mathbb{R}^m$. Let $\varphi = (\varphi_1, \dots, \varphi_m)$. Assume first that the functions $f, \varphi_i : \mathbb{R}^m \to \mathbb{R}$, $i = 1, \dots, m$, are continuously differentiable. A usual integration by parts yields

$$E_m\langle \nabla f, \varphi\rangle = \sum_{i=1}^m \int_{\mathbb{R}^m} \partial_i f(x)\, \varphi_i(x)\, \mu_m(dx) = \sum_{i=1}^m \int_{\mathbb{R}^m} f(x)\big(x_i \varphi_i(x) - \partial_i \varphi_i(x)\big)\, \mu_m(dx).$$

Hence

$$\delta_m \varphi = \sum_{i=1}^m \big(x_i \varphi_i - \partial_i \varphi_i\big). \quad (3.6)$$

Notice that on $C^2(\mathbb{R}^m)$, $\delta_m \nabla = -L_m$. The definition (3.6) yields the next useful formula

$$\delta_m(f\, \nabla g) = -\langle \nabla f, \nabla g\rangle - f\, L_m g, \quad (3.7)$$

for any $f, g$ smooth enough.

Example 3.1 Let $n \ge 1$; consider the Hermite polynomial of degree $n$ on $\mathbb{R}$, which is defined by

$$H_n(x) = \frac{(-1)^n}{n!} \exp\Big(\frac{x^2}{2}\Big) \frac{d^n}{dx^n} \exp\Big(-\frac{x^2}{2}\Big).$$

The operator $\delta_1$ satisfies

$$\delta_1 H_n(x) = x H_n(x) - H_n'(x) = x H_n(x) - H_{n-1}(x) = (n+1) H_{n+1}(x).$$

Therefore it increases the order of a Hermite polynomial by one.
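The identity $\delta_1 H_n = (n+1)H_{n+1}$ is easy to test numerically: with the normalization of Example 3.1, $H_n = \mathrm{He}_n/n!$, where $\mathrm{He}_n$ are the probabilists' Hermite polynomials available in NumPy (a sketch, not part of the original notes):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as he

def H(n, x):
    # Hermite polynomial of Example 3.1: H_n = He_n / n!,
    # with He_n the probabilists' Hermite polynomial
    c = [0.0] * n + [1.0]
    return he.hermeval(x, c) / math.factorial(n)

def delta1_H(n, x):
    # delta_1 applied to H_n, using (3.6) with m = 1: x*H_n(x) - H_n'(x)
    c = [0.0] * n + [1.0]
    dH = he.hermeval(x, he.hermeder(c)) / math.factorial(n)
    return x * H(n, x) - dH

xs = np.linspace(-2.0, 2.0, 7)
for n in range(5):
    assert np.allclose(delta1_H(n, xs), (n + 1) * H(n + 1, xs))
```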

An integration by parts formula

Using the operators $\nabla$, $\delta_m$ and $L_m$, and for random vectors $F = (F^1, \dots, F^n)$ regular enough (meaning that all the differentiations performed throughout this section make sense), we are going to establish an integration by parts formula of the type (2.1). We start by introducing the finite dimensional Malliavin matrix, also termed the covariance matrix, as follows:

$$A(x) = \big(\langle \nabla F^i(x), \nabla F^j(x)\rangle\big)_{1 \le i,j \le n}.$$

Notice that by its very definition, $A(x)$ is a symmetric, non-negative definite matrix, for any $x \in \mathbb{R}^m$. Clearly $A(x) = DF(x)\, DF(x)^T$, where $DF(x)$ is the Jacobian matrix at $x$ and the superscript $T$ means the transpose.

Let us consider a function $\varphi \in C^1(\mathbb{R}^n)$, and perform some computations showing that $(\partial_i \varphi)(F)$, $i = 1, \dots, n$, satisfies a linear system of equations. Indeed, by the chain rule,

$$\big\langle \nabla\big(\varphi(F(x))\big), \nabla F^l(x)\big\rangle = \sum_{j=1}^m \sum_{k=1}^n (\partial_k \varphi)(F(x))\, \partial_j F^k(x)\, \partial_j F^l(x) = \sum_{k=1}^n \langle \nabla F^l(x), \nabla F^k(x)\rangle\, (\partial_k \varphi)(F(x)) = \big(A(x)\, \nabla\varphi(F(x))\big)_l, \quad (3.8)$$

$l = 1, \dots, n$. Assume that the matrix $A(x)$ is invertible $\mu_m$-almost everywhere. Then one gets

$$(\partial_i \varphi)(F) = \sum_{l=1}^n \big\langle \nabla\big(\varphi(F(x))\big), A^{-1}_{i,l}(x)\, \nabla F^l(x)\big\rangle, \quad (3.9)$$

for every $i = 1, \dots, n$, $\mu_m$-almost everywhere.


Taking expectations and using (3.7), (3.9) yields

$$E_m\big[(\partial_i \varphi)(F)\big] = \sum_{l=1}^n E_m\big[\big\langle \nabla(\varphi(F)), A^{-1}_{i,l}\, \nabla F^l\big\rangle\big] = \sum_{l=1}^n E_m\big[\varphi(F)\, \delta_m\big(A^{-1}_{i,l}\, \nabla F^l\big)\big] = \sum_{l=1}^n E_m\Big[\varphi(F)\big(-\langle \nabla A^{-1}_{i,l}, \nabla F^l\rangle - A^{-1}_{i,l}\, L_m F^l\big)\Big]. \quad (3.10)$$

Hence we can write

$$E_m\big[(\partial_i \varphi)(F)\big] = E_m\big[\varphi(F)\, H_i(F, 1)\big], \quad (3.11)$$

with

$$H_i(F, 1) = \sum_{l=1}^n \delta_m\big(A^{-1}_{i,l}\, \nabla F^l\big) = -\sum_{l=1}^n \big(\langle \nabla A^{-1}_{i,l}, \nabla F^l\rangle + A^{-1}_{i,l}\, L_m F^l\big). \quad (3.12)$$

This is an integration by parts formula, as in Definition 2.1, for multi-indices of length one.

For multi-indices of length greater than one, things are a little bit more difficult; essentially the same ideas would lead to the analogue of formula (2.1) with $\alpha = (1, \dots, 1)$ and $G = 1$.

The preceding discussion and Proposition 2.2 yield the following result.

Proposition 3.1 Let $F$ be continuously differentiable up to the second order, such that $F$ and its partial derivatives up to order two belong to $L^p(\mathbb{R}^m; \mu_m)$ for any $p \in [1, \infty)$. Assume that:

(1) The matrix $A(x)$ is invertible $\mu_m$-almost everywhere.

(2) $\det A^{-1} \in L^p(\mathbb{R}^m; \mu_m)$ and $\nabla(\det A^{-1}) \in L^r(\mathbb{R}^m; \mu_m)$, for some $p, r \in (1, \infty)$.

Then the law of $F$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^n$.


Proof: The assumptions on $F$ and in (2) show that

$$C_i := \sum_{l=1}^n E_m\big|\langle \nabla A^{-1}_{i,l}, \nabla F^l\rangle + A^{-1}_{i,l}\, L_m F^l\big|$$

is finite. Therefore, one can take expectations on both sides of (3.9). By (3.10), it follows that

$$\big|E_m\big[(\partial_i \varphi)(F)\big]\big| \le C_i\, \|\varphi\|_\infty.$$

This finishes the proof of the Proposition.

Remark 3.1 The proof of smoothness properties of the density requires an iteration of the procedure presented in the proof of Proposition 3.1.
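In the simplest case $m = n = 1$, formula (3.12) reads $H_1(F, 1) = \delta_1\big((F')^{-1}\big) = x/F' + F''/(F')^2$. The Monte Carlo sketch below (not part of the original notes; the map $F(x) = x + x^3$ and the test function $\varphi = \tanh$ are arbitrary choices with $F' > 0$) checks the integration by parts formula (3.11):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(500_000)

F = x + x**3            # hypothetical regular map with F' > 0
fp = 1 + 3 * x**2       # F'
fpp = 6 * x             # F''

# H_1(F, 1) = delta_1(A^{-1} F') with A = (F')^2, i.e. delta_1(1/F')
#           = x/F' + F''/(F')^2, using delta_1(phi) = x*phi - phi'
H = x / fp + fpp / fp**2

phi = np.tanh
dphi = lambda y: 1 / np.cosh(y)**2
lhs = np.mean(dphi(F))       # E_m[phi'(F)]
rhs = np.mean(phi(F) * H)    # E_m[phi(F) H_1(F, 1)]
assert abs(lhs - rhs) < 1e-2
```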

3.2 Infinite dimensional framework

This section is devoted to describing an infinite dimensional analogue of the probability space $(\mathbb{R}^m, \mathcal{B}(\mathbb{R}^m), \mu_m)$. We start by introducing a family of Gaussian random variables. Let $H$ be a real separable Hilbert space. Denote by $\|\cdot\|_H$ and $\langle \cdot, \cdot\rangle_H$ the norm and the inner product on $H$, respectively. There exist a probability space $(\Omega, \mathcal{G}, \mu)$ and a family $M = (W(h), h \in H)$ of random variables defined on this space, such that the mapping $h \mapsto W(h)$ is linear, each $W(h)$ is Gaussian, $E W(h) = 0$, and $E\big[W(h_1) W(h_2)\big] = \langle h_1, h_2\rangle_H$ (see for instance [63], Chapter 1, Proposition 1.3). Such a family is constructed as follows. Let $(e_n, n \ge 1)$ be a complete orthonormal system in $H$. Consider the canonical probability space $(\Omega, \mathcal{G}, \mu)$ associated with a sequence $(g_n, n \ge 1)$ of standard independent Gaussian random variables. That is, $\Omega = \mathbb{R}^{\mathbb{N}}$, $\mathcal{G} = \mathcal{B}^{\otimes\mathbb{N}}$, $\mu = \mu_1^{\otimes\mathbb{N}}$, where $\mu_1$ denotes the standard Gaussian measure on $\mathbb{R}$. For each $h \in H$, the series $\sum_{n \ge 1} \langle h, e_n\rangle_H\, g_n$ converges in $L^2(\Omega, \mathcal{G}, \mu)$ to a random variable that we denote by $W(h)$. Notice that the set $M$ is a closed Gaussian subspace of $L^2(\Omega)$ that is isometric to $H$. In the sequel, we will replace $\mathcal{G}$ by the $\sigma$-field generated by $M$.
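The series construction of $W(h)$ can be illustrated with $H = L^2([0,1])$ and the cosine orthonormal basis (a truncated numerical sketch, not part of the original notes; the basis, grid, and test functions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

M, N = 4000, 50                  # grid size, basis truncation
t = (np.arange(M) + 0.5) / M     # midpoint grid on [0, 1]

# Orthonormal basis of H = L^2([0,1]): e_0 = 1, e_n = sqrt(2) cos(n pi t)
basis = np.vstack([np.ones(M)] + [np.sqrt(2) * np.cos(n * np.pi * t)
                                  for n in range(1, N)])

def coeffs(h):
    # <h, e_n>_H approximated by the midpoint rule
    return (basis * h).mean(axis=1)

c1, c2 = coeffs(t), coeffs(np.ones(M))   # h1(s) = s, h2(s) = 1

g = rng.standard_normal((200_000, N))    # i.i.d. standard Gaussians g_n
W1, W2 = g @ c1, g @ c2                  # W(h) = sum_n <h, e_n> g_n

# The covariance should match the inner product: <h1, h2>_H = 1/2
assert abs(np.mean(W1 * W2) - 0.5) < 1e-2
```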


Examples

White Noise

Let $H = L^2(A, \mathcal{A}, m)$, where $(A, \mathcal{A}, m)$ is a separable $\sigma$-finite, atomless measure space. For any $F \in \mathcal{A}$ with $m(F) < \infty$, set $W(F) := W(\mathbf{1}_F)$. This family satisfies $E\big[W(F_1) W(F_2)\big] = m(F_1 \cap F_2)$, so $W$ takes independent values on disjoint sets; it is called a white noise based on $m$.


Colored Noise

Let $\Lambda$ be a non-negative, non-negative definite tempered measure on $\mathbb{R}^d$, and endow $\mathcal{E} = C_0^\infty(\mathbb{R}^d)$ with the inner product

$$\langle \varphi, \psi\rangle_{\mathcal{E}} = \int_{\mathbb{R}^d} \Lambda(dx)\, \big(\varphi * \tilde{\psi}\big)(x),$$

where $\tilde{\psi}(x) = \psi(-x)$; by a classical result of Schwartz (see [71]), this is indeed an inner product. Let $\mathcal{H}$ denote the completion of $(\mathcal{E}, \langle \cdot, \cdot\rangle_{\mathcal{E}})$. Elements of the Gaussian family $M = (W(h), h \in \mathcal{H})$ satisfy

$$E\big(W(h_1) W(h_2)\big) = \int_{\mathbb{R}^d} \Lambda(dx)\, \big(h_1 * \tilde{h}_2\big)(x),$$

$h_1, h_2 \in \mathcal{H}$. The family $\big(W(\mathbf{1}_F), F \in \mathcal{B}_b(\mathbb{R}^d)\big)$ can be rigorously defined by approximating $\mathbf{1}_F$ by a sequence of elements in $\mathcal{H}$. It is called a colored noise with covariance $\Lambda$.

We notice that for $\Lambda = \delta_0$, $\langle \cdot, \cdot\rangle_{\mathcal{E}} = \langle \cdot, \cdot\rangle_{L^2(\mathbb{R}^d)}$.

White-Correlated Noise

In the theory of SPDEs, stochastic processes are usually indexed by $(t, x) \in \mathbb{R}_+ \times \mathbb{R}^d$, and the roles of $t$ and $x$ are different: time and space, respectively. Sometimes the driving noise of the equation is white in time and in space (see the example termed white noise before). Another important class of examples is based on noises white in time and correlated in space. We give here the background for this type of noise.

With the same notations and hypotheses as in the preceding example, we consider functions $\varphi, \psi \in \mathcal{D}(\mathbb{R}^{d+1})$ and define

$$J(\varphi, \psi) = \int_{\mathbb{R}_+} ds \int_{\mathbb{R}^d} \Lambda(dx)\, \big(\varphi(s) * \tilde{\psi}(s)\big)(x). \quad (3.15)$$

By the above quoted result in [71], $J$ defines an inner product. Set $\mathcal{H}_T = L^2([0, T]; \mathcal{H})$. Elements of the Gaussian family $M = (W(h), h \in \mathcal{H}_T)$ satisfy

$$E\big(W(h_1) W(h_2)\big) = \int_{\mathbb{R}_+} ds \int_{\mathbb{R}^d} \Lambda(dx)\, \big(h_1(s) * \tilde{h}_2(s)\big)(x), \quad (3.16)$$

$h_1, h_2 \in \mathcal{H}_T$. We can then consider $W(t, A)$, $t \in [0, \infty)$, $A \in \mathcal{B}_b(\mathbb{R}^d)$, where $W(t, A) := W(\mathbf{1}_{[0,t]} \otimes \mathbf{1}_A)$ is defined by an approximation procedure. This family is called a Gaussian noise, white in time and stationarily correlated (or coloured) in space.


    3.3 The derivative and divergence operators

Throughout this section, we consider the probability space $(\Omega, \mathcal{G}, \mu)$ defined in Section 3.2 and a Gaussian family $M = (W(h), h \in H)$, as has been described before. There are several possibilities to define the Malliavin derivative for random vectors $F : \Omega \to \mathbb{R}^n$. Here we shall follow the analytic approach, which roughly speaking consists of an extension, by a limiting procedure, of differentiation in $\mathbb{R}^m$.

To start with, we consider finite-dimensional objects, termed smooth functionals. They are random variables of the type

$$F = f\big(W(h_1), \dots, W(h_n)\big), \quad (3.17)$$

with $h_1, \dots, h_n \in H$ and $f : \mathbb{R}^n \to \mathbb{R}$ regular enough. Different choices of regularity of $f$ lead to different classes of smooth functionals. For example, if $f \in C_p^\infty(\mathbb{R}^n)$, the set of infinitely differentiable functions such that $f$ and its partial derivatives of any order have polynomial growth, we denote the corresponding class of smooth functionals by $\mathcal{S}$; if $f \in C_b^\infty(\mathbb{R}^n)$, the set of infinitely differentiable functions such that $f$ and its partial derivatives of any order are bounded, we denote by $\mathcal{S}_b$ the corresponding class. If $f$ is a polynomial, the smooth functionals are denoted by $\mathcal{P}$. Clearly $\mathcal{P} \subset \mathcal{S}$ and $\mathcal{S}_b \subset \mathcal{S}$.

We define the operator $D$ on $\mathcal{S}$ (on $\mathcal{P}$, on $\mathcal{S}_b$), with values in the set of $H$-valued random variables, by

$$DF = \sum_{i=1}^n \partial_i f\big(W(h_1), \dots, W(h_n)\big)\, h_i. \quad (3.18)$$

Fix $h \in H$ and set $F_\varepsilon^h = f\big(W(h_1) + \varepsilon\langle h, h_1\rangle_H, \dots, W(h_n) + \varepsilon\langle h, h_n\rangle_H\big)$, $\varepsilon > 0$. Then it is immediate to check that $\langle DF, h\rangle_H = \frac{d}{d\varepsilon} F_\varepsilon^h\big|_{\varepsilon=0}$. Therefore, for smooth functionals, $D$ is a directional derivative. It is also routine to prove that if $F, G$ are smooth functionals, then $D(FG) = F\, DG + G\, DF$.
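The directional-derivative identity $\langle DF, h\rangle_H = \frac{d}{d\varepsilon} F_\varepsilon^h|_{\varepsilon=0}$ is easy to test in a finite-dimensional model. The sketch below (not part of the original notes) takes $H = \mathbb{R}^3$ with the Euclidean inner product, so that $W(k) = \langle g, k\rangle$ for a fixed Gaussian sample $g$, and an arbitrary smooth $f$:

```python
import numpy as np

rng = np.random.default_rng(4)
g = rng.standard_normal(3)          # a fixed sample; W(k) = <g, k>
h1, h2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
h = np.array([0.5, -1.0, 2.0])

f = lambda u, v: np.sin(u) * v      # arbitrary smooth test function
d1f = lambda u, v: np.cos(u) * v
d2f = lambda u, v: np.sin(u)

W = lambda k: g @ k
# Formula (3.18): DF = d1f(.) h1 + d2f(.) h2
DF = d1f(W(h1), W(h2)) * h1 + d2f(W(h1), W(h2)) * h2

def F_eps(e):
    # F_e^h = f(W(h1) + e <h, h1>, W(h2) + e <h, h2>)
    return f(W(h1) + e * (h @ h1), W(h2) + e * (h @ h2))

eps = 1e-6
directional = (F_eps(eps) - F_eps(-eps)) / (2 * eps)
assert abs(DF @ h - directional) < 1e-6
```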

Our next aim is to prove that $D$ is closable as an operator from $L^p(\Omega)$ to $L^p(\Omega; H)$, for any $p \ge 1$. That is, if $(F_n, n \ge 1) \subset \mathcal{S}$ is a sequence converging to zero in $L^p(\Omega)$ and the sequence $(DF_n, n \ge 1)$ converges to $G$ in $L^p(\Omega; H)$, then $G = 0$. The tool for proving this is a simple version of an integration by parts formula, proved in the next lemma.


Lemma 3.2 For any $F \in \mathcal{S}$, $h \in H$, we have

$$E\big[\langle DF, h\rangle_H\big] = E\big[F\, W(h)\big]. \quad (3.19)$$

Proof: Without loss of generality, we shall assume that $F = f\big(W(h_1), \dots, W(h_n)\big)$, where $h_1, \dots, h_n$ are orthonormal elements of $H$ and $h_1 = h$. Then

$$E\big[\langle DF, h\rangle_H\big] = \int_{\mathbb{R}^n} \partial_1 f(x)\, \mu_n(dx) = \int_{\mathbb{R}^n} f(x)\, x_1\, \mu_n(dx) = E\big[F\, W(h_1)\big].$$

The proof is complete.

Formula (3.19) is a statement about the duality between the operator $D$ and an integral with respect to $W$. Let $F, G \in \mathcal{S}$. Applying formula (3.19) to the smooth functional $FG$ yields

$$E\big[G\, \langle DF, h\rangle_H\big] = -E\big[F\, \langle DG, h\rangle_H\big] + E\big[F G\, W(h)\big]. \quad (3.20)$$

With this result, we can now prove that $D$ is closable. Indeed, consider a sequence $(F_n, n \ge 1) \subset \mathcal{S}$ satisfying the properties stated above. Let $h \in H$ and $F \in \mathcal{S}_b$ be such that $F W(h)$ is bounded. Using (3.20), we obtain

$$E\big[F\, \langle G, h\rangle_H\big] = \lim_{n \to \infty} E\big[F\, \langle DF_n, h\rangle_H\big] = \lim_{n \to \infty} \Big(-E\big[F_n\, \langle DF, h\rangle_H\big] + E\big[F_n F\, W(h)\big]\Big) = 0.$$

Indeed, the sequence $(F_n, n \ge 1)$ converges to zero in $L^p$, and $\langle DF, h\rangle_H$, $F W(h)$ are bounded. This yields $G = 0$.

Let $\mathbb{D}^{1,p}$ be the closure of the set $\mathcal{S}$ with respect to the seminorm

$$\|F\|_{1,p} = \big(E(|F|^p) + E(\|DF\|_H^p)\big)^{\frac{1}{p}}. \quad (3.21)$$

The set $\mathbb{D}^{1,p}$ is the domain of the operator $D$ in $L^p(\Omega)$. Notice that $\mathbb{D}^{1,p}$ is dense in $L^p(\Omega)$. The above procedure can be iterated as follows. Clearly, one can recursively define the operator $D^k$, $k \in \mathbb{N}$, on the set $\mathcal{S}$. This yields an $H^{\otimes k}$-valued random vector. As for $D$, one proves that $D^k$ is closable. Then we can introduce the seminorms

$$\|F\|_{k,p} = \Big(E(|F|^p) + \sum_{j=1}^k E\big(\|D^j F\|_{H^{\otimes j}}^p\big)\Big)^{\frac{1}{p}}, \quad (3.22)$$

$p \in [1, \infty)$, and define the sets $\mathbb{D}^{k,p}$ to be the closure of $\mathcal{S}$ with respect to the seminorm (3.22). Notice that, by definition, $\mathbb{D}^{j,q} \subset \mathbb{D}^{k,p}$ for $k \le j$ and $p \le q$. By convention, $\mathbb{D}^{0,p} = L^p(\Omega)$ and $\|\cdot\|_{0,p} = \|\cdot\|_p$, the usual norm in $L^p(\Omega)$.

We now introduce the divergence operator, which corresponds to the infinite dimensional analogue of the operator $\delta_m$ defined in (3.6). For this, we notice that the Malliavin derivative $D$ is an unbounded operator from $L^2(\Omega)$ into $L^2(\Omega; H)$. Moreover, the domain of $D$ in $L^2(\Omega)$, denoted by $\mathbb{D}^{1,2}$, is dense in $L^2(\Omega)$. Then, by a standard procedure (see for instance [80]), one can define the adjoint of $D$, which we shall denote by $\delta$.

Indeed, the domain of the adjoint, denoted by $\mathrm{Dom}\,\delta$, is the set of random vectors $u \in L^2(\Omega; H)$ such that, for any $F \in \mathbb{D}^{1,2}$,

$$\big|E\langle DF, u\rangle_H\big| \le c\, \|F\|_2,$$

where $c$ is a constant depending on $u$. If $u \in \mathrm{Dom}\,\delta$, then $\delta(u)$ is the element of $L^2(\Omega)$ characterized by the identity

$$E\big[F\, \delta(u)\big] = E\big[\langle DF, u\rangle_H\big], \quad (3.23)$$

for all $F \in \mathbb{D}^{1,2}$.

Equation (3.23) expresses the duality between $D$ and $\delta$. It is called the integration by parts formula (compare with (3.19)). The analogy between $\delta$ and $\delta_m$ defined in (3.6) can be easily established on finite dimensional random vectors of $L^2(\Omega; H)$, as follows.

Let $\mathcal{S}_H$ be the set of random vectors of the type

$$u = \sum_{j=1}^n F_j\, h_j,$$

where $F_j \in \mathcal{S}$, $h_j \in H$, $j = 1, \dots, n$. Let us prove that $u \in \mathrm{Dom}\,\delta$.


Indeed, owing to formula (3.20), for any $F \in \mathcal{S}$,

$$\big|E\langle DF, u\rangle_H\big| = \Big|\sum_{j=1}^n E\big[F_j\, \langle DF, h_j\rangle_H\big]\Big| \le \sum_{j=1}^n \Big(\big|E\big[F\, \langle DF_j, h_j\rangle_H\big]\big| + \big|E\big[F F_j\, W(h_j)\big]\big|\Big) \le C\, \|F\|_2.$$

Hence $u \in \mathrm{Dom}\,\delta$. Moreover, by the same computations,

$$\delta(u) = \sum_{j=1}^n F_j\, W(h_j) - \sum_{j=1}^n \langle DF_j, h_j\rangle_H. \quad (3.24)$$

Hence, the gradient operator $\nabla$ in the finite dimensional case is replaced by the Malliavin directional derivative, and the coordinate variables $x_j$ by the random coordinates $W(h_j)$.

Remark 3.2 The divergence operator coincides with a stochastic integral introduced by Skorohod in [72]. This integral allows for non-adapted integrands. It is actually an extension of Itô's integral. Readers interested in this topic are suggested to consult the monographs [46] and [47].
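For a one-term element $u = F_1 h$ with $\|h\|_H = 1$ and $F_1 = W(h)^2$, formula (3.24) gives $\delta(u) = W(h)^3 - 2W(h)$, and the duality (3.23) can be checked by Monte Carlo against exact Gaussian moments (a sketch, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal(2_000_000)   # samples of W(h), h a unit vector

# u = W(h)^2 h.  By (3.24): delta(u) = W^2 W - <D(W^2), h> = W^3 - 2W
delta_u = W**3 - 2 * W

# Duality (3.23) with F = W(h)^3, so DF = 3 W(h)^2 h and <DF, u>_H = 3 W^4
lhs = np.mean(W**3 * delta_u)   # E[F delta(u)]
rhs = np.mean(3 * W**4)         # E[<DF, u>_H]
# Both sides equal E[W^6] - 2 E[W^4] = 15 - 6 = 9
assert abs(lhs - 9) < 0.3 and abs(rhs - 9) < 0.3
```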

    3.4 Some calculus

    In this section we prove several basic rules of calculus for the two operatorsdefined so far. The first result is a chain rule.

    Proposition 3.2 Let: Rm R be a continuously differentiable functionwith bounded partial derivatives. LetF = (F1, . . . , F m) be a random vectorwhose components belong to D1,p for somep1. Then(F)D1,p and

    D((F)) =m

    i=1

    i(F)DFi. (3.25)

    The proof of this result is straightforward. First, we assume that F

    S; in

    this case, formula (3.25) follows by the classical rules of differential calculus.The proof for F D1,p is done by an approximation procedure.The preceding chain rule can be extended to Lipschitz functions. The toolfor this improvement is given in the next Proposition. For its proof, we usethe Wiener chaos decomposition ofL2(, G) (see [22]).


Proposition 3.3 Let $(F_n, n \ge 1)$ be a sequence of random variables in $\mathbb{D}^{1,2}$ converging to $F$ in $L^2(\Omega)$ and such that

$$\sup_n E\big(\|DF_n\|_H^2\big) < \infty.$$

Then $F$ belongs to $\mathbb{D}^{1,2}$, and the sequence $(DF_n, n \ge 1)$ converges to $DF$ in the weak topology of $L^2(\Omega; H)$.

Proposition 3.4 Let $\varphi : \mathbb{R}^m \to \mathbb{R}$ be a Lipschitz function, and let $F = (F_1, \dots, F_m)$ be a random vector whose components belong to $\mathbb{D}^{1,2}$. Then $\varphi(F) \in \mathbb{D}^{1,2}$.

Proof: Let $(\psi_n, n \ge 1)$ be a regularization of $\varphi$ by convolution with an approximation of the identity, so that $\psi_n \in C^\infty$ and the sequence $(\psi_n, n \ge 1)$ converges to $\varphi$ uniformly. In addition, $\nabla\psi_n$ is bounded by the Lipschitz constant of $\varphi$. Proposition 3.2 yields

$$D\big(\psi_n(F)\big) = \sum_{i=1}^m \partial_i \psi_n(F)\, DF_i. \quad (3.28)$$

Now we apply Proposition 3.3 to the sequence $F_n = \psi_n(F)$. It is clear that $\lim_n \psi_n(F) = \varphi(F)$ in $L^2(\Omega)$. Moreover, by the boundedness property of $\nabla\psi_n$, the sequence $\big(D(\psi_n(F)), n \ge 1\big)$ is bounded in $L^2(\Omega; H)$. Hence $\varphi(F) \in \mathbb{D}^{1,2}$ and $\big(D(\psi_n(F)), n \ge 1\big)$ converges in the weak topology of $L^2(\Omega; H)$ to $D(\varphi(F))$. Since the sequence $\big(\nabla\psi_n(F), n \ge 1\big)$ is bounded, a.s., there exists a subsequence that converges to some bounded random vector $G$ in the weak topology of $L^2(\Omega; \mathbb{R}^m)$. By passing to the limit as $n \to \infty$ in the equality (3.28), we finish the proof of the Proposition.

Remark 3.3 Let φ ∈ C_p^∞(ℝ^m) and let F = (F_1, ..., F_m) be a random vector whose components belong to ∩_{p∈[1,∞)} D^{1,p}. Then the conclusion of Proposition 3.2 also holds. Moreover, φ(F) ∈ ∩_{p∈[1,∞)} D^{1,p}.

The chain rule (3.25) can be iterated; we obtain Leibniz's rule for Malliavin derivatives. For example, if F is one-dimensional (m = 1), then

    D^k(φ(F)) = Σ_{l=1}^k Σ_{P_l} c_l φ^{(l)}(F) Π_{i=1}^l D^{|p_i|} F,    (3.29)

where P_l denotes the set of partitions of {1, ..., k} consisting of l disjoint sets p_1, ..., p_l, l = 1, ..., k, |p_i| denotes the cardinality of the set p_i, and the c_l are positive coefficients.
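For k = 2, formula (3.29) reduces to D²(φ(F)) = φ''(F) DF ⊗ DF + φ'(F) D²F. In the finite-dimensional model, where D is the gradient, this can be checked symbolically (the choices of φ and F below are illustrative):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.exp(x1) * sp.cos(x2)             # a smooth functional of two Gaussian coordinates
u = sp.Symbol('u')
phi = u**3 + sp.sin(u)
comp = phi.subs(u, F)                   # phi(F)

ok = True
for xi in (x1, x2):
    for xj in (x1, x2):
        lhs = sp.diff(comp, xi, xj)     # (D^2 phi(F))_{ij}
        rhs = (sp.diff(phi, u, 2).subs(u, F) * sp.diff(F, xi) * sp.diff(F, xj)
               + sp.diff(phi, u).subs(u, F) * sp.diff(F, xi, xj))
        ok = ok and sp.simplify(lhs - rhs) == 0
```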

For any F ∈ Dom D and h ∈ H, we set D_h F = ⟨DF, h⟩_H. The next propositions provide important calculus rules.

Proposition 3.5 Let u ∈ S_H. Then

    D_h(δ(u)) = ⟨u, h⟩_H + δ(D_h u).    (3.30)


Proof: Fix u = Σ_{j=1}^n F_j h_j, F_j ∈ S, h_j ∈ H, j = 1, ..., n. By virtue of (3.24), we have

    D_h(δ(u)) = Σ_{j=1}^n [(D_h F_j) W(h_j) + F_j ⟨h_j, h⟩_H − ⟨D(D_h F_j), h_j⟩_H].

Notice that by (3.24),

    δ(D_h u) = Σ_{j=1}^n [(D_h F_j) W(h_j) − ⟨D(D_h F_j), h_j⟩_H].    (3.31)

Hence (3.30) holds.

The next result is an isometry property for the integral defined by the operator δ.

Proposition 3.6 Let u, v ∈ D^{1,2}(H). Then

    E(δ(u) δ(v)) = E(⟨u, v⟩_H) + E(tr(Du ∘ Dv)),    (3.32)

where tr(Du ∘ Dv) = Σ_{i,j=1}^∞ ⟨D_{e_j} u, e_i⟩_H ⟨D_{e_i} v, e_j⟩_H, with (e_i, i ≥ 1) a complete orthonormal system in H. Consequently, if u ∈ D^{1,2}(H) then u ∈ Dom δ and

    E(δ(u)²) ≤ E(‖u‖²_H) + E(‖Du‖²_{H⊗H}).    (3.33)

Proof: Assume first that u, v ∈ S_H. The duality relation between D and δ yields

    E(δ(u) δ(v)) = E(⟨v, D(δ(u))⟩_H) = E(Σ_{i=1}^∞ ⟨v, e_i⟩_H D_{e_i}(δ(u))).

By virtue of (3.30), this last expression is equal to

    E(Σ_{i=1}^∞ ⟨v, e_i⟩_H (⟨u, e_i⟩_H + δ(D_{e_i} u))).

The duality relation between D and δ implies

    E(⟨v, e_i⟩_H δ(D_{e_i} u)) = E(⟨D_{e_i} u, D⟨v, e_i⟩_H⟩_H)
        = Σ_{j=1}^∞ E(⟨D_{e_i} u, e_j⟩_H ⟨e_j, D⟨v, e_i⟩_H⟩_H)
        = Σ_{j=1}^∞ E(⟨D_{e_i} u, e_j⟩_H ⟨D_{e_j} v, e_i⟩_H).

This establishes (3.32). Taking u = v and applying the Schwarz inequality yields (3.33). The extension to u, v ∈ D^{1,2}(H) is done by a limit procedure.
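The isometry (3.32), with u = v, can be tried out by Monte Carlo in the two-dimensional Gaussian model, where δ(u)(x) = Σ_j u_j(x) x_j − Σ_j ∂_j u_j(x); the vector field below is an arbitrary illustration, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(5)
X1, X2 = rng.standard_normal((2, 1_000_000))

# u(x) = (sin x2, x1*x2), an arbitrary smooth vector field on R^2
u1, u2 = np.sin(X2), X1 * X2
# delta(u) = <u, x> - div u;  div u = d_1(sin x2) + d_2(x1 x2) = 0 + x1
delta_u = u1 * X1 + u2 * X2 - X1
lhs = float(np.mean(delta_u**2))                     # E(delta(u)^2)

# tr(Du o Du) = sum_{i,j} (d_j u_i)(d_i u_j) = 2*x2*cos(x2) + x1^2 for this u
tr = 2 * X2 * np.cos(X2) + X1**2
rhs = float(np.mean(u1**2 + u2**2) + np.mean(tr))    # E(||u||^2) + E(tr(Du o Du))
```

For this u the inequality (3.33) is in fact an equality, since (3.32) holds with u = v.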

Remark 3.4 Proposition 3.6 can be used to extend the validity of (3.30) to u ∈ D^{2,2}(H). Indeed, let u_n ∈ S_H be a sequence of processes converging to u in D^{2,2}(H). Formula (3.30) holds true for u_n. We can take limits in L²(Ω; H) as n tends to infinity and conclude, because the operators D and δ are closed.

Proposition 3.7 Let F ∈ D^{1,2} and u ∈ Dom δ be such that F u ∈ L²(Ω; H). If F δ(u) − ⟨DF, u⟩_H ∈ L²(Ω), then

    δ(F u) = F δ(u) − ⟨DF, u⟩_H.    (3.34)

Proof: Assume first that F ∈ S and u ∈ S_H. Let G ∈ S. Then, by the duality relation between D and δ and the calculus rules for the derivative, we have

    E(G δ(F u)) = E(⟨DG, F u⟩_H) = E(⟨u, D(FG) − G DF⟩_H) = E(G (F δ(u) − ⟨u, DF⟩_H)).

By the definition of the operator δ, (3.34) holds under the assumptions of the proposition.
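In the two-dimensional Gaussian model, where δ(v)(x) = Σ_j v_j(x) x_j − div v(x), the factorization rule (3.34) is an algebraic identity; a symbolic check with arbitrary (hypothetical) F and u:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def delta(vec):
    # divergence operator under the standard Gaussian on R^2
    return (sum(v * xi for v, xi in zip(vec, X))
            - sum(sp.diff(v, xi) for v, xi in zip(vec, X)))

F = sp.cos(x1) + x2**3                  # an arbitrary smooth scalar functional
u = (x1 * x2, sp.exp(x2))               # an arbitrary vector field ("process")

lhs = delta(tuple(F * v for v in u))                              # delta(F u)
rhs = F * delta(u) - sum(sp.diff(F, xi) * v for xi, v in zip(X, u))  # F delta(u) - <DF, u>
ok = sp.simplify(lhs - rhs) == 0
```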


4 Criteria for Existence and Regularity of Densities

In lecture 1, we have shown how an integration by parts formula (see Definition 2.1) leads to results on densities of probability laws. The question we tackle in this lecture is how to derive such a formula. In particular, we will give an expression for the random variable H_α(F, G). For this, we shall apply the calculus developed in Section 3. We consider here the probability space associated with a Gaussian family (W(h), h ∈ H), as described in Section 3.2.

4.1 Existence of density

Let us start with a very simple example.

Proposition 4.1 Let F be a random variable belonging to D^{1,2}. Assume that the random variable DF/‖DF‖²_H belongs to the domain of δ in L²(Ω; H). Then the law of F is absolutely continuous. Moreover, its density is given by

    p(x) = E(1_{(F > x)} δ(DF/‖DF‖²_H)),    (4.1)

and therefore it is continuous and bounded.

Proof: We will check that for any φ ∈ C_b^∞(ℝ),

    E(φ'(F)) = E(φ(F) δ(DF/‖DF‖²_H)).    (4.2)

Thus (2.1) holds for G = 1 with H_1(F, 1) = δ(DF/‖DF‖²_H). Then the results follow from part 1 of Proposition 2.1.

The chain rule of Malliavin calculus yields D(φ(F)) = φ'(F) DF. Thus,

    φ'(F) = ⟨D(φ(F)), DF/‖DF‖²_H⟩_H.

Therefore, the integration by parts formula implies

    E(φ'(F)) = E(⟨D(φ(F)), DF/‖DF‖²_H⟩_H) = E(φ(F) δ(DF/‖DF‖²_H)),

proving (4.2).
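Formula (4.1) can be tried out numerically in the simplest setting of a single standard Gaussian coordinate X, where H = ℝ, D is d/dx and δ(g(X)) = g(X)X − g'(X). For the invertible map f below (an illustrative choice, not from the text), δ(DF/‖DF‖²_H) = X/f'(X) + f''(X)/f'(X)², and the exact density of F = f(X) at 0 is 1/√(2π):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)      # the single underlying Gaussian coordinate
f   = lambda x: x + x**3 / 3            # F = f(X); f' = 1 + x^2 > 0, so F has a density
fp  = lambda x: 1 + x**2
fpp = lambda x: 2 * x

F = f(X)
# one-dimensional divergence: delta(g(X)) = g(X)X - g'(X); with g = 1/f',
# delta(DF/||DF||^2) = X/f'(X) + f''(X)/f'(X)^2
H1 = X / fp(X) + fpp(X) / fp(X)**2
p_hat = float(np.mean((F > 0.0) * H1))  # formula (4.1) at x = 0
p_true = 1 / np.sqrt(2 * np.pi)         # exact value: f(0) = 0 and f'(0) = 1
```

The Monte Carlo estimate agrees with the exact density up to sampling error.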

Remark 4.1 Notice the analogy between (4.2) and the finite dimensional formula (3.11).

Remark 4.2 Applying the explicit formula (4.1) to particular examples, together with L^p(Ω) estimates of the Skorohod integral, leads to interesting estimates for the density (see for instance [47]).

Remark 4.3 In Proposition 4.1 we have established the formula

    H_1(F, 1) = δ(DF/‖DF‖²_H),    (4.3)

where F: Ω → ℝ. For random vectors F: Ω → ℝ^n (n > 1), we can obtain similar results by using matrix calculus, as illustrated in the next statement. In the computations, instead of ‖DF‖_H, we have to deal with the Malliavin matrix, a notion given in the next definition.

Definition 4.1 Let F: Ω → ℝ^n be a random vector with components F_j ∈ D^{1,2}, j = 1, ..., n. The Malliavin matrix of F is the n × n matrix, denoted by A, whose entries are the random variables A_{i,j} = ⟨DF_i, DF_j⟩_H, i, j = 1, ..., n.

Proposition 4.2 Let F: Ω → ℝ^n be a random vector with components F_j ∈ D^{1,2}, j = 1, ..., n. Assume that:

(1) the Malliavin matrix A is invertible, a.s.;

(2) for every i, j = 1, ..., n, the random variables A^{-1}_{i,j} DF_j belong to Dom δ.

Then for any function φ ∈ C_b^∞(ℝ^n),

    E(∂_i φ(F)) = E(φ(F) H_i(F, 1)),    (4.4)

with

    H_i(F, 1) = Σ_{l=1}^n δ(A^{-1}_{i,l} DF_l).    (4.5)

Consequently, the law of F is absolutely continuous.


Proof: Fix φ ∈ C_b^∞(ℝ^n). By virtue of the chain rule, we have φ(F) ∈ D^{1,2} and

    ⟨D(φ(F)), DF_l⟩_H = Σ_{k=1}^n ∂_k φ(F) ⟨DF_k, DF_l⟩_H = Σ_{k=1}^n ∂_k φ(F) A_{k,l},

l = 1, ..., n. Since A is invertible a.s., this system of linear equations in ∂_k φ(F), k = 1, ..., n, can be solved, and

    ∂_i φ(F) = Σ_{l=1}^n ⟨D(φ(F)), A^{-1}_{i,l} DF_l⟩_H,    (4.6)

i = 1, ..., n, a.s. Assumption (2) and the duality formula, along with (4.6), yield

    Σ_{l=1}^n E(φ(F) δ(A^{-1}_{i,l} DF_l)) = Σ_{l=1}^n E(⟨D(φ(F)), A^{-1}_{i,l} DF_l⟩_H) = E(∂_i φ(F)).

Hence (4.4), (4.5) is proved. Notice that by assumption H_i(F, 1) ∈ L²(Ω). Thus Proposition 2.2, part 1), yields the existence of the density.
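The recipe (4.4)–(4.5) can be tested in the two-dimensional Gaussian model (D = gradient, δ(v) = ⟨v, x⟩ − div v). The map F below is an arbitrary choice for which det A ≡ 1, so A is trivially invertible; φ is also arbitrary:

```python
import sympy as sp
import numpy as np

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
F = (x1, x2 + sp.sin(x1) / 2)           # for this choice det A = 1 identically
DF = [[sp.diff(Fi, xj) for xj in X] for Fi in F]
A = sp.Matrix(2, 2, lambda i, j: sum(DF[i][k] * DF[j][k] for k in range(2)))
Ainv = A.inv()

def delta(vec):                          # Gaussian divergence on R^2
    return (sum(v * xi for v, xi in zip(vec, X))
            - sum(sp.diff(v, xi) for v, xi in zip(vec, X)))

i = 0                                    # check E(d_1 phi(F)) = E(phi(F) H_1(F,1))
H_i = sum(delta([Ainv[i, l] * DF[l][j] for j in range(2)]) for l in range(2))

u, v = sp.symbols('u v')
phi = sp.cos(u) * sp.exp(-v**2)
lhs_e = sp.diff(phi, u).subs({u: F[0], v: F[1]})
rhs_e = phi.subs({u: F[0], v: F[1]}) * H_i

f_lhs = sp.lambdify((x1, x2), lhs_e, 'numpy')
f_rhs = sp.lambdify((x1, x2), rhs_e, 'numpy')
rng = np.random.default_rng(3)
Z1, Z2 = rng.standard_normal((2, 1_000_000))
mc_lhs, mc_rhs = float(np.mean(f_lhs(Z1, Z2))), float(np.mean(f_rhs(Z1, Z2)))
```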

Remark 4.4 The equalities (4.4), (4.5) give the integration by parts formula (in the sense of Definition 2.1) for n-dimensional random vectors, for multi-indices of length one.

The assumption in part (2) of Proposition 4.2 may not be easy to check. In the next theorem we give a statement which is more suitable for applications.

Theorem 4.1 Let F: Ω → ℝ^n be a random vector satisfying the following conditions:

(a) F_j ∈ D^{2,4}, for any j = 1, ..., n,


(b) the Malliavin matrix A is invertible, a.s.

Then the law of F has a density with respect to Lebesgue measure on ℝ^n.

Proof: As in the proof of Proposition 4.2, we obtain the system of equations (4.6) for any function φ ∈ C_b^∞. That is,

    ∂_i φ(F) = Σ_{l=1}^n ⟨D(φ(F)), A^{-1}_{i,l} DF_l⟩_H,

i = 1, ..., n, a.s. We would like to take expectations on both sides of this expression. However, assumption (a) does not ensure the integrability of A^{-1}. We overcome this problem by localising (4.6), as follows. For any natural number N ≥ 1, we define the set

    C_N = {B ∈ L(ℝ^n, ℝ^n): ‖B‖ ≤ N, |det B| ≥ 1/N}.

Then we consider a nonnegative function Φ_N ∈ C_0^∞(L(ℝ^n, ℝ^n)) satisfying

(i) Φ_N(B) = 1, if B ∈ C_N,
(ii) Φ_N(B) = 0, if B ∉ C_{N+1}.

From (4.6), it follows that

    E(Φ_N(A) ∂_i φ(F)) = Σ_{l=1}^n E(⟨D(φ(F)), Φ_N(A) A^{-1}_{i,l} DF_l⟩_H).    (4.7)

The random variable Φ_N(A) A^{-1}_{i,l} DF_l belongs to D^{1,2}(H), by assumption (a). Consequently, Φ_N(A) A^{-1}_{i,l} DF_l ∈ Dom δ (see Proposition 3.6). Hence, by the duality identity,

    |E(Φ_N(A) ∂_i φ(F))| = |Σ_{l=1}^n E(φ(F) δ(Φ_N(A) A^{-1}_{i,l} DF_l))|
        ≤ E(Σ_{l=1}^n |δ(Φ_N(A) A^{-1}_{i,l} DF_l)|) ‖φ‖_∞.

Let P_N be the finite measure on (Ω, G) absolutely continuous with respect to P with density given by Φ_N(A). Then, by Proposition 2.2, P_N ∘ F^{-1} is


absolutely continuous with respect to Lebesgue measure. Therefore, for any B ∈ B(ℝ^n) with Lebesgue measure equal to zero, we have

    ∫_{F^{-1}(B)} Φ_N(A) dP = 0.

Let N → ∞. Assumption (b) implies that lim_N Φ_N(A) = 1. Hence, by bounded convergence, we obtain P(F^{-1}(B)) = 0. This finishes the proof of the theorem.
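Localising functions of this kind exist by standard mollification. In one dimension, a smooth cutoff that equals 1 on [1/N, N] and vanishes outside [1/(N+1), N+1] can be built explicitly from e^{−1/x} — an illustrative construction, not the one in the text:

```python
import numpy as np

def h(x):
    # smooth function: 0 for x <= 0, e^{-1/x} for x > 0
    return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-300)), 0.0)

def step(x):
    # smooth transition: 0 for x <= 0, 1 for x >= 1
    return h(x) / (h(x) + h(1.0 - x))

N = 3
def chi(x):
    # equals 1 on [1/N, N], vanishes outside [1/(N+1), N+1]
    left  = step((x - 1/(N+1)) / (1/N - 1/(N+1)))
    right = step((N + 1 - x) / 1.0)
    return left * right

xs = np.array([1/N, 1.0, N, 1/(N+2), N + 2])
vals = chi(xs)   # should be (1, 1, 1, 0, 0)
```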

Remark 4.5 The existence of a density for the probability law of a random vector F can be obtained under weaker assumptions than in Theorem 4.1 (or Proposition 4.2). Indeed, Bouleau and Hirsch proved a better result using other techniques in the more general setting of Dirichlet forms. For the sake of completeness we give one of their statements, the one most similar to Theorem 4.1, and refer the reader to [8] for complete information.

Proposition 4.3 Let F: Ω → ℝ^n be a random vector satisfying the following conditions:

(a) F_j ∈ D^{1,2}, for any j = 1, ..., n,
(b) the Malliavin matrix A is invertible, a.s.

Then the law of F has a density with respect to the Lebesgue measure on ℝ^n.

    4.2 Smoothness of the density

As we have seen in the first lecture, in order to obtain regularity properties of the density, we need an integration by parts formula for multi-indices of order greater than one. In practice, this can be obtained recursively. In the next proposition we give the details of such a procedure.

    An integration by parts formula

Proposition 4.4 Let F: Ω → ℝ^n be a random vector such that F_j ∈ D^∞ for any j = 1, ..., n. Assume that

    (det A)^{-1} ∈ ∩_{p∈[1,∞)} L^p(Ω).    (4.8)

Then:


(1) (det A)^{-1} ∈ D^∞ and A^{-1} ∈ D^∞(ℝ^n ⊗ ℝ^n).

(2) Let G ∈ D^∞. For any multi-index α ∈ {1, ..., n}^r, r ≥ 1, there exists a random variable H_α(F, G) ∈ D^∞ such that for any function φ ∈ C_b^∞(ℝ^n),

    E((∂_α φ)(F) G) = E(φ(F) H_α(F, G)).    (4.9)

The random variables H_α(F, G) can be defined recursively as follows: if |α| = 1, α = i, then

    H_i(F, G) = Σ_{l=1}^n δ(G A^{-1}_{i,l} DF_l),    (4.10)

and in general, for α = (α_1, ..., α_{r-1}, α_r),

    H_α(F, G) = H_{α_r}(F, H_{(α_1,...,α_{r-1})}(F, G)).    (4.11)

Proof: Consider the sequence of random variables

    Y_N = (det A + 1/N)^{-1}, N ≥ 1.

Fix an arbitrary p ∈ [1, ∞[. Assumption (4.8) clearly yields

    lim_{N→∞} Y_N = (det A)^{-1}

in L^p(Ω). We now prove the following facts:

(a) Y_N ∈ D^∞, for any N ≥ 1;
(b) (D^k Y_N, N ≥ 1) is a Cauchy sequence in L^p(Ω; H^{⊗k}), for any natural number k.

Since the operator D^k is closed, claim (1) will follow. Consider the function ϕ_N(x) = (x + 1/N)^{-1}, x ≥ 0. Notice that ϕ_N ∈ C_b^∞. Then Remark 3.3 yields (a) recursively; indeed, det A ∈ D^∞.

Let us now prove (b). The sequence of derivatives (ϕ_N^{(n)}(det A), N ≥ 1) is Cauchy in L^p(Ω), for any p ∈ [1, ∞). This can be proved using (4.8) and bounded convergence. The result now follows by expressing the difference D^k Y_N − D^k Y_M, N, M ≥ 1, by means of Leibniz's rule (see (3.29)) and using that det A ∈ D^∞.


Once we have proved that (det A)^{-1} ∈ D^∞, we obtain A^{-1} ∈ D^∞(ℝ^n ⊗ ℝ^n) by a direct computation of the inverse of a matrix, using that F_j ∈ D^∞.

The proof of (4.9)–(4.11) is done by induction on the order r of the multi-index α. Let r = 1. Consider the identity (4.6), multiply both sides by G and take expectations; we obtain (4.9) and (4.10). Assume that (4.9) holds for multi-indices of order r − 1, and fix α = (α_1, ..., α_{r-1}, α_r). Then,

    E((∂_α φ)(F) G) = E(∂_{(α_1,...,α_{r-1})}((∂_{α_r} φ)(F)) G)
        = E((∂_{α_r} φ)(F) H_{(α_1,...,α_{r-1})}(F, G))
        = E(φ(F) H_{α_r}(F, H_{(α_1,...,α_{r-1})}(F, G))).

The proof is complete.

A criterion for smooth densities

As a consequence of the preceding proposition and part 2 of Proposition 2.1, we have the following criterion for smoothness of the density.

Theorem 4.2 Let F: Ω → ℝ^n be a random vector satisfying the assumptions:

(a) F_j ∈ D^∞, for any j = 1, ..., n,
(b) the Malliavin matrix A is invertible a.s. and (det A)^{-1} ∈ ∩_{p∈[1,∞)} L^p(Ω).

Then the law of F has an infinitely differentiable density with respect to Lebesgue measure on ℝ^n.


5 Watanabe–Sobolev Differentiability of SPDEs

5.1 A class of linear homogeneous SPDEs

Let L be a second order differential operator acting on real functions defined on [0, ∞[ × ℝ^d. Examples of L to which the results of this lecture can be applied include the heat operator and the wave operator. With some minor modifications, the damped wave operator and some classes of parabolic operators with time and space dependent coefficients could also be covered. We are interested in SPDEs of the following type:

    L u(t, x) = σ(u(t, x)) Ẇ(t, x) + b(u(t, x)),    (5.1)

t ∈ ]0, T], x ∈ ℝ^d, with suitable initial conditions. This is a Cauchy problem, with finite time horizon T > 0, driven by the differential operator L, and with a stochastic input given by Ẇ(t, x). For the sake of simplicity, we shall assume that the initial conditions vanish.

Hypotheses on W

We assume that (W(φ), φ ∈ D(ℝ^{d+1})) is a zero mean Gaussian process with non-degenerate covariance functional given by E(W(φ_1) W(φ_2)) = J(φ_1, φ_2), where the functional J is defined in (3.15). By setting μ = F^{-1}Γ, the covariance can be written as

    E(W(φ_1) W(φ_2)) = ∫_{ℝ_+} ds ∫_{ℝ^d} μ(dξ) Fφ_1(s)(ξ) \overline{Fφ_2(s)(ξ)}

(see (3.16)). From this process, we obtain the Gaussian family (W(h), h ∈ H_T) (see Section 3.2).

A cylindrical Wiener process derived from (W(h), h ∈ H_T)

The process (W_t, t ∈ [0, T]) defined by

    W_t = Σ_{j=1}^∞ e_j β_j(t),

where (e_j, j ≥ 1) is a CONS of H and (β_j, j ≥ 1) is a sequence of independent standard Wiener processes, defines a cylindrical Wiener process on H (see [18], Proposition 4.11, page 96, for a definition of this object). In particular, W_t(g) := ⟨W_t, g⟩_H satisfies

    E(W_t(g_1) W_s(g_2)) = (s ∧ t) ⟨g_1, g_2⟩_H.

The relationship between (W_t, t ∈ [0, T]) and (W(h), h ∈ H_T) can be established as follows. Consider h ∈ H_T of the particular type h = 1_{[0,t]} g, g ∈ H. Then the respective laws of the stochastic processes (W(1_{[0,t]} g), t ∈ [0, T]) and (⟨W_t, g⟩_H, t ∈ [0, T]) are the same. Indeed, by linearity,

    W(1_{[0,t]} g) = Σ_{j=1}^∞ ⟨g, e_j⟩_H W(1_{[0,t]} e_j).

By the definition of (W(h), h ∈ H_T), the family (W(1_{[0,t]} e_j), t ∈ [0, T], j ≥ 1) is a sequence of independent standard Brownian motions. On the other hand,

    ⟨W_t, g⟩_H = Σ_{j=1}^∞ ⟨g, e_j⟩_H β_j(t).

This finishes the proof of the statement.

In connection with the process (W_t, t ∈ [0, T]), we consider the filtration (F_t, t ∈ [0, T]), where F_t is the σ-field generated by the random variables W_s(g), 0 ≤ s ≤ t, g ∈ H. It will be termed the natural filtration associated with W.
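The covariance identity E(W_t(g_1) W_s(g_2)) = (s ∧ t)⟨g_1, g_2⟩_H can be illustrated by Monte Carlo with H replaced by ℝ³ as a finite-dimensional stand-in, so that the series above is a finite sum; all numerical choices below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
s, t = 0.5, 2.0
g1 = np.array([1.0, -2.0, 0.5])   # elements of H = R^3 (finite-dimensional stand-in)
g2 = np.array([0.3, 1.0, 2.0])

n = 200_000
beta_s = rng.standard_normal((n, 3)) * np.sqrt(s)               # beta_j(s)
beta_t = beta_s + rng.standard_normal((n, 3)) * np.sqrt(t - s)  # beta_j(t), independent increment

Wt_g1 = beta_t @ g1               # <W_t, g1> = sum_j <g1, e_j> beta_j(t)
Ws_g2 = beta_s @ g2
cov_hat = float(np.mean(Wt_g1 * Ws_g2))
cov_true = min(s, t) * float(g1 @ g2)
```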

Hypotheses on L

We shall denote by Λ the fundamental solution of L u = 0, and we shall assume:

(HL) Λ is a deterministic function of t taking values in the space of non-negative measures with rapid decrease (as a distribution), satisfying

    ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|² < ∞    (5.2)

and

    sup_{t∈[0,T]} Λ(t, ℝ^d) < ∞.    (5.3)


Examples

1. Heat operator: L = ∂_t − Δ_d, d ≥ 1.

The fundamental solution of this operator possesses the following property: for any t ≥ 0, ξ ∈ ℝ^d,

    C_1 (t ∧ 1)/(1 + |ξ|²) ≤ ∫_0^t ds |FΛ(s)(ξ)|² ≤ C_2 (t + 1)/(1 + |ξ|²),    (5.4)

for some positive constants C_i, i = 1, 2. Consequently, (5.2) holds if and only if

    ∫_{ℝ^d} μ(dξ)/(1 + |ξ|²) < ∞.    (5.5)

Indeed, on (|ξ| ≥ 1) we have

    (1 − exp(−4π²t|ξ|²))/(4π²|ξ|²) ≤ 1/(4π²|ξ|²) ≤ 1/(2π²(1 + |ξ|²)) ≤ C/(1 + |ξ|²),

since 1 + |ξ|² ≤ 2|ξ|². On the other hand, on (|ξ| ≤ 1), we use the property 1 − e^{−x} ≤ x, x ≥ 0, and we obtain

    (1 − exp(−4π²t|ξ|²))/(4π²|ξ|²) ≤ t ≤ C t/(1 + |ξ|²).

This yields the upper bound in (5.4). Moreover, the inequality 1 − e^{−x} ≥ x/(1 + x), valid for any x ≥ 0, implies

    ∫_0^t ds |FΛ(s)(ξ)|² ≥ C t/(1 + 4π²t|ξ|²).


Assume that 4π²t|ξ|² ≥ 1. Then 1 + 4π²t|ξ|² ≤ 8π²t|ξ|², and hence

    t/(1 + 4π²t|ξ|²) ≥ 1/(8π²|ξ|²) ≥ C (t ∧ 1)/(1 + |ξ|²).

If 4π²t|ξ|² ≤ 1, then 1 + 4π²t|ξ|² < 2 and therefore

    1/(1 + 4π²t|ξ|²) ≥ 1/2 ≥ 1/(2(1 + |ξ|²)).

Hence, we obtain the lower bound in (5.4), and now the equivalence between (5.2) and (5.5) is obvious. Condition (5.3) is clearly satisfied.
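Assuming FΛ(s)(ξ) = exp(−2π²s|ξ|²) for the heat kernel, so that ∫_0^t |FΛ(s)(ξ)|² ds = (1 − e^{−4π²t|ξ|²})/(4π²|ξ|²), both the closed form and the elementary inequalities x/(1+x) ≤ 1 − e^{−x} ≤ x used above can be checked numerically:

```python
import numpy as np

def g(t, xi2):
    # closed form of int_0^t exp(-4 pi^2 s |xi|^2) ds
    return (1 - np.exp(-4 * np.pi**2 * t * xi2)) / (4 * np.pi**2 * xi2)

t, xi2 = 0.7, 3.0                       # arbitrary test point (|xi|^2 = 3)
s = np.linspace(0.0, t, 200_001)
f = np.exp(-4 * np.pi**2 * s * xi2)
quad = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(s)))   # trapezoidal quadrature

x = np.linspace(0.0, 50.0, 1001)
bounds_ok = bool(np.all(1 - np.exp(-x) <= x + 1e-12)
                 and np.all(1 - np.exp(-x) >= x / (1 + x) - 1e-12))
```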

2. Wave operator: L = ∂²_{tt} − Δ_d, d ≥ 1.

For any t ≥ 0, ξ ∈ ℝ^d, it holds that

    c_1 (t ∧ t³)/(1 + |ξ|²) ≤ ∫_0^t ds |FΛ(s)(ξ)|² ≤ c_2 (t + t³)/(1 + |ξ|²),    (5.6)

for some positive constants c_i, i = 1, 2. Thus, (5.2) is equivalent to (5.5). Let us prove (5.6). It is well known (see for instance [75]) that

    FΛ(t)(ξ) = sin(2πt|ξ|)/(2π|ξ|).

Therefore,

    |FΛ(t)(ξ)|² ≤ 1/(2π²(1 + |ξ|²)) 1_{(|ξ| ≥ 1)} + t² 1_{(|ξ| ≤ 1)} ≤ C (1 + t²)/(1 + |ξ|²).

This yields the upper bound in (5.6).

Assume that 2πt|ξ| ≥ 1. Then |sin(4πt|ξ|)| ≤ 1 ≤ 2πt|ξ| and consequently,

    ∫_0^t ds sin²(2πs|ξ|)/(2π|ξ|)² = t/(16π²|ξ|²) (2 − sin(4πt|ξ|)/(2πt|ξ|))
        ≥ C t/(1 + |ξ|²) (2 − sin(4πt|ξ|)/(2πt|ξ|))
        ≥ C t/(1 + |ξ|²).

Next we assume that 2πt|ξ| ≤ 1 and we notice that, for r ∈ [0, 1], sin² r ≥ r² sin² 1. Thus,

    ∫_0^t ds sin²(2πs|ξ|)/(2π|ξ|)² ≥ C sin² 1 ∫_0^t s² ds ≥ C t³/(1 + |ξ|²).

This finishes the proof of the lower bound in (5.6).
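The trigonometric identity behind the first display — ∫_0^t sin²(2πs|ξ|) ds = (t/4)(2 − sin(4πt|ξ|)/(2πt|ξ|)) — can be verified numerically for arbitrary t and |ξ|:

```python
import numpy as np

t, xi = 1.3, 0.9                        # arbitrary test values for t and |xi|
s = np.linspace(0.0, t, 200_001)
f = (np.sin(2 * np.pi * s * xi) / (2 * np.pi * xi))**2
quad = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(s)))   # trapezoidal quadrature
closed = t / (16 * np.pi**2 * xi**2) * (2 - np.sin(4 * np.pi * t * xi) / (2 * np.pi * t * xi))
```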

For d ≤ 3, condition (5.3) holds true. In fact, in dimension d = 1,

    Λ(t, dx) = (1/2) 1_{(|x| < t)} dx.

The solution of (5.1) will be understood in the mild sense:

    u(t, x) = ∫_0^t ∫_{ℝ^d} Λ(t − s, x − y) σ(u(s, y)) W(ds, dy) + ∫_0^t ds ∫_{ℝ^d} Λ(s, dy) b(u(t − s, x − y)),    (5.7)

and the integrand Λ(t − s, x − y) σ(u(s, y)), t ∈ ]0, T], x ∈ ℝ^d, is required to be predictable and to belong to the space L²(Ω × [0, T]; H). We address this question following the approach of [53], with a few changes; in particular, we allow more general covariances (see Lemma 3.2 and Proposition 3.3 in [53]).

Lemma 5.1 Assume that Λ satisfies (HL). Then Λ ∈ H_T and

    ‖Λ‖²_{H_T} = ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|².

Proof: Let ψ be a non-negative function in C^∞(ℝ^d) with support contained in the unit ball and such that ∫_{ℝ^d} ψ(x) dx = 1. Set ψ_n(x) = n^d ψ(nx), n ≥ 1, and define Λ_n(t) = ψ_n * Λ(t). It is well known that ψ_n → δ_0 in S'(ℝ^d) and that Λ_n(t) ∈ S(ℝ^d). Moreover, for any ξ ∈ ℝ^d, |FΛ_n(t)(ξ)| ≤ |FΛ(t)(ξ)|.

By virtue of (5.2), (Λ_n, n ≥ 1) ⊂ H_T, and it is a Cauchy sequence. Indeed,

    ‖Λ_n − Λ_m‖²_{H_T} = ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|² |F(ψ_n − ψ_m)(ξ)|²,

and since F(ψ_n − ψ_m) converges pointwise to zero as n, m → ∞, we have

    lim_{n,m→∞} ‖Λ_n − Λ_m‖_{H_T} = 0

by bounded convergence. Consequently, (Λ_n, n ≥ 1) converges in H_T and the limit is Λ. Finally, by using again bounded convergence,

    ‖Λ‖²_{H_T} = lim_n ‖Λ_n‖²_{H_T} = ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|².

The next proposition gives a large class of examples for which the stochastic convolution against W can be defined.

Proposition 5.1 Assume that Λ satisfies (HL). Let Z = {Z(t, x), (t, x) ∈ [0, T] × ℝ^d} be a predictable process, bounded in L². Set

    C_Z := sup_{(t,x)∈[0,T]×ℝ^d} E(|Z(t, x)|²).


Then z(t, dx) := Z(t, x) Λ(t, dx) is a predictable process belonging to L²(Ω × [0, T]; H) and

    E(‖z‖²_{H_T}) ≤ C_Z ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|².

Hence, the stochastic integral ∫_0^T ∫_{ℝ^d} z dW is well-defined as an L²(Ω)-valued random variable and

    ‖∫_0^T ∫_{ℝ^d} z dW‖²_{L²(Ω)} = E(‖z‖²_{H_T}) ≤ C_Z ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|².    (5.8)

Proof: By decomposing the process Z into its positive and negative parts, it suffices to consider non-negative processes Z. Since Λ(t) is a tempered measure, so is z(t). Hence we can consider the sequence of S(ℝ^d)-valued functions z_n(t) = ψ_n * z(t), n ≥ 1, where ψ_n is the approximation of the identity defined in the proof of the preceding lemma. Using Fubini's theorem and the boundedness property of Z, we obtain (with g̃(x) := g(−x))

    E(‖z_n‖²_{H_T}) = E(∫_0^T dt ∫_{ℝ^d} Γ(dx) [z_n(t) * z̃_n(t)](x))
        ≤ C_Z ∫_0^T dt ∫_{ℝ^d} Γ(dx) [Λ_n(t) * Λ̃_n(t)](x)
        = C_Z ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ_n(t)(ξ)|²
        ≤ C_Z ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|² < ∞;


then, using similar arguments as in the preceding lemma, we can prove that (z_n, n ≥ 1) converges in L²(Ω × [0, T]; H) to z, finishing the proof of the proposition.

For the proof of (5.9), we proceed as follows. Firstly, to simplify the expressions, we write z_{n,m} instead of z_n − z_m, and ψ_{n,m} for ψ_n − ψ_m. Then

    E(‖z_{n,m}‖²_{H_T}) = E(∫_0^T dt ∫_{ℝ^d} Γ(dx) [z_{n,m}(t) * z̃_{n,m}(t)](x))
        = E(∫_0^T dt ∫_{ℝ^d} ∫_{ℝ^d} Z(t, z) Z(t, z') Λ(t, dz) Λ(t, dz') ∫_{ℝ^d} μ(dξ) e^{−2πi ξ·(z−z')} |Fψ_{n,m}(ξ)|²).    (5.10)

Then, since the Fourier transform of a convolution is the product of the Fourier transforms of the corresponding factors, using Fubini's theorem this last expression is equal to

    E(∫_0^T dt ∫_{ℝ^d} μ(dξ) |F(Z(t)Λ(t))(ξ)|² |Fψ_{n,m}(ξ)|²).

Hence (5.9) is established. Finally, (5.8) is obtained by the isometry property of the stochastic convolution, combined with the estimate of the integrand proved before.

Remark 5.1 Assume that the process Z is bounded away from zero, that is, inf_{(t,x)∈[0,T]×ℝ^d} |Z(t, x)| ≥ c_0 > 0. Then

    E(‖z_n‖²_{H_T}) ≥ c_0² ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|² |Fψ_n(ξ)|².    (5.11)


Indeed, arguing as in (5.10), we see that

    E(‖z_n‖²_{H_T}) = E(∫_0^T dt ∫_{ℝ^d} ∫_{ℝ^d} Z(t, z) Z(t, z') Λ(t, dz) Λ(t, dz') ∫_{ℝ^d} Γ(dx) [ψ_n(· − z) * ψ̃_n(· + z')](x))
        ≥ c_0² ∫_0^T dt ∫_{ℝ^d} ∫_{ℝ^d} Λ(t, dz) Λ(t, dz') ∫_{ℝ^d} Γ(dx) [ψ_n(· − z) * ψ̃_n(· + z')](x)
        = c_0² ∫_0^T dt ∫_{ℝ^d} ∫_{ℝ^d} Λ(t, dz) Λ(t, dz') ∫_{ℝ^d} μ(dξ) e^{−2πi ξ·(z−z')} |Fψ_n(ξ)|²
        = c_0² ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|² |Fψ_n(ξ)|².

Remark 5.2 Assume that the coefficient σ in (5.7) has linear growth and that the process u satisfies the conditions given at the beginning of the section. Then Z(s, y) := σ(u(s, y)) satisfies the assumption of Proposition 5.1, and the stochastic integral (stochastic convolution) in (5.7) is well-defined.

A result on existence and uniqueness of solution

Theorem 5.1 Assume that σ, b: ℝ → ℝ are Lipschitz functions and that Λ satisfies (HL). Then there exists a unique mild solution u to Equation (5.1). Such a solution is a random field indexed by (t, x) ∈ [0, T] × ℝ^d, continuous in L²(Ω), and for any p ∈ [1, ∞[,

    sup_{(t,x)∈[0,T]×ℝ^d} E(|u(t, x)|^p) < ∞.    (5.12)


The proof is based on the Picard iteration scheme

    u_0(t, x) = 0,
    u_n(t, x) = ∫_0^t ∫_{ℝ^d} Λ(t − s, x − y) σ(u_{n−1}(s, y)) W(ds, dy) + ∫_0^t ds ∫_{ℝ^d} Λ(s, dy) b(u_{n−1}(t − s, x − y)),    (5.13)

for n ≥ 1. We refer the reader to Theorem 13 of [16] for the details of the proof of the convergence of the Picard sequence and the extensions of Gronwall's lemma suitable thereof (see also Theorem 6.2 and Lemma 6.2 in [69]).
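None of the following is in the notes, but a minimal explicit finite-difference discretization of an equation of type (5.1) — the heat operator in d = 1 on [0, 1], zero initial and boundary data, arbitrarily chosen Lipschitz coefficients σ and b, and space–time white noise approximated by independent Gaussians of variance dt/dx — gives a feel for what the mild solution looks like:

```python
import numpy as np

rng = np.random.default_rng(4)
nx, nt, T = 50, 5000, 0.1
dx, dt = 1.0 / nx, T / 5000             # dt <= dx^2/2, so the explicit scheme is stable
sigma = lambda z: 1.0 + 0.5 * np.sin(z) # hypothetical Lipschitz coefficients
b = lambda z: -z

u = np.zeros(nx + 1)                    # zero initial condition on [0, 1]
for _ in range(nt):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    noise = rng.standard_normal(nx + 1) * np.sqrt(dt / dx)  # discretized white noise
    u = u + dt * lap + dt * b(u) + sigma(u) * noise
    u[0] = u[-1] = 0.0                  # Dirichlet boundary conditions
```

The discretization choices (grid sizes, noise scaling) are assumptions for illustration only; the simulated field stays bounded in accordance with the moment estimate (5.12).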

5.2 The Malliavin derivative of a SPDE

Consider a SPDE in its mild formulation (see (5.7)). We would like to study its differentiability in the Watanabe–Sobolev sense. There are two aspects of the problem:

(A) to prove differentiability;
(B) to give an equation satisfied by the Malliavin derivative.

A useful tool for proving differentiability of Wiener functionals is provided by the next result, which is an immediate consequence of the fact that the N-th order Malliavin derivative is a closed operator defined on L^p(Ω) with values in L^p(Ω; H^{⊗N}), for any p ∈ [1, ∞[. In our context, H := H_T. A result of the same vein has been presented in Proposition 3.3.

Lemma 5.2 Let (F_n, n ≥ 1) be a sequence of random variables belonging to D^{N,p}. Assume that:

(a) there exists a random variable F such that F_n converges to F in L^p(Ω), as n tends to ∞;
(b) the sequence (D^N F_n, n ≥ 1) converges in L^p(Ω; H_T^{⊗N}), as n tends to ∞.

Then F belongs to D^{N,p} and D^N F = L^p(Ω; H_T^{⊗N})–lim_{n→∞} D^N F_n.

We shall apply this lemma to F := u(t, x), the solution of Equation (5.7). Therefore, we have to find an approximating sequence of the SPDE satisfying the assumptions (a) and (b) above. A possible candidate is provided by the proof of existence and uniqueness of the solution: the Picard approximations defined in (5.13). The verification of condition (a) for the sequence (u_n(t, x), n ≥ 0), for fixed (t, x) ∈ [0, T] × ℝ^d, is part of the proof of Theorem 5.1. As regards condition (b), we will avoid too many technicalities by focussing on the first order derivative and taking p = 2. For this we need the functions σ and b to be of class C¹.


A possible strategy consists in proving recursively that u_n(t, x) belongs to D^{1,2}; then, applying rules of Malliavin calculus (for instance, an extension of (3.30)), we obtain

    Du_0(t, x) = 0,
    Du_n(t, x) = Λ(t − ·, x − ∗) σ(u_{n−1}(·, ∗))
        + ∫_0^t ∫_{ℝ^d} Λ(t − s, x − y) σ'(u_{n−1}(s, y)) Du_{n−1}(s, y) W(ds, dy)
        + ∫_0^t ds ∫_{ℝ^d} Λ(s, dy) b'(u_{n−1}(t − s, x − y)) Du_{n−1}(t − s, x − y),    (5.14)

n ≥ 1. A natural candidate for the limit of this sequence is the process satisfying (5.16). At this point some comments are pertinent:

1. The Malliavin derivative is a random vector with values in H_T. Therefore, Equations (5.14) and (5.16) correspond to the mild formulation of a Hilbert-valued SPDE.

2. The notation Λ(t − ·, x − ∗) σ(u_{n−1}(·, ∗)) aims to make explicit the dependence on the time variable (written with a dot) and on the space variable (written with a star). By Proposition 5.1 and Remark 5.2, such a term is in L²(Ω × [0, T]; H).

3. The stochastic convolution term in (5.14) is not covered by the previous discussion, since the process σ'(u_{n−1}(s, y)) Du_{n−1}(s, y) takes values in H_T. A sketch of the required extension is given in the next paragraphs.

Stochastic convolution with Hilbert-valued integrands

Let K be a separable real Hilbert space with inner product and norm denoted by ⟨·, ·⟩_K and ‖·‖_K, respectively. Let K = {K(s, z), (s, z) ∈ [0, T] × ℝ^d} be a K-valued predictable process satisfying

    C_K := sup_{(s,z)∈[0,T]×ℝ^d} E(‖K(s, z)‖²_K) < ∞.

Fix a complete orthonormal system (f_j, j ≥ 0) of K and set K_j(s, z) := ⟨K(s, z), f_j⟩_K. Then, by Proposition


5.1, z_j(t, dx) := K_j(t, x) Λ(t, dx) is a predictable process belonging to L²(Ω × [0, T]; H), and then K(t, x) Λ(t, dx) is also a predictable process and belongs to L²(Ω × [0, T]; H ⊗ K). The K-valued stochastic convolution ∫_0^T ∫_{ℝ^d} Λ(t, x) K(t, x) W(dt, dx) is defined as the sequence

    (∫_0^T ∫_{ℝ^d} Λ(t, x) K_j(t, x) W(dt, dx), j ≥ 0)

and satisfies

    E(‖∫_0^T ∫_{ℝ^d} Λ(t, x) K(t, x) W(dt, dx)‖²_K) = E(‖ΛK‖²_{H_T⊗K}) ≤ C_K ∫_0^T dt ∫_{ℝ^d} μ(dξ) |FΛ(t)(ξ)|².    (5.15)

Going back to the application of Lemma 5.2, we might guess as limit of the sequence (5.14) an H_T-valued process (Du(t, x), (t, x) ∈ [0, T] × ℝ^d) satisfying the equation

    Du(t, x) = Λ(t − ·, x − ∗) σ(u(·, ∗))
        + ∫_0^t ∫_{ℝ^d} Λ(t − s, x − y) σ'(u(s, y)) Du(s, y) W(ds, dy)
        + ∫_0^t ds ∫_{ℝ^d} Λ(s, dy) b'(u(t − s, x − y)) Du(t − s, x − y).    (5.16)

Yet another result on existence and uniqueness of solution

Theorem 5.1 is not general enough to cover SPDEs like (5.16). In this section we set up a suitable framework for this (actually, to deal with Malliavin derivatives of any order). For more details we refer the reader to [69], Chapter 6.

Let K_1, K be two separable Hilbert spaces. If there is no reason for misunderstanding, we will use the same notation ‖·‖, ⟨·, ·⟩ for the norms and inner products in these two spaces, respectively. Consider two mappings

    σ, b: K_1 × K → K

satisfying the next two conditions for some positive constant C:

(c1) sup_{x∈K_1} (‖σ(x, y) − σ(x, y')‖ + ‖b(x, y) − b(x, y')‖) ≤ C ‖y − y'‖;

(c2) there exists q ∈ [1, ∞) such that

    ‖σ(x, 0)‖ + ‖b(x, 0)‖ ≤ C (1 + ‖x‖^q),

x ∈ K_1, y, y' ∈ K. Notice that (c1) and (c2) clearly imply

(c3) ‖σ(x, y)‖ + ‖b(x, y)‖ ≤ C (1 + ‖x‖^q + ‖y‖).

Let V = {V(t, x), (t, x) ∈ [0, T] × ℝ^d} be a predictable K_1-valued process such that

    sup_{(t,x)∈[0,T]×ℝ^d} E(‖V(t, x)‖^p) < ∞.


Main result

We will now apply Lemma 5.2 to prove that, for any fixed (t, x) ∈ [0, T] × ℝ^d, u(t, x) ∈ D^{1,2}. The next results provide a verification of conditions (a) and (b) of the Lemma. We shall assume that the functions σ and b are differentiable with bounded derivatives.

Lemma 5.3 The sequence of random variables (u_n(t, x), n ≥ 0) defined recursively in (5.13) is a subset of D^{1,2}. In addition,

    sup_{n≥0} sup_{(t,x)∈[0,T]×ℝ^d} E(‖Du_n(t, x)‖²_{H_T}) < ∞.    (5.17)


Finally, for the third term B_{3,n}(t, x) we use Schwarz's inequality with respect to the finite measure Λ(s, dz) ds. Then, the assumptions on b and Λ yield

    E(‖B_{3,n}(t, x)‖²_{H_T}) ≤ C ∫_0^t ds sup_{(τ,z)∈[0,s]×ℝ^d} E(‖Du_{n−1}(τ, z)‖²_{H_T}).

Therefore,

    sup_{(s,z)∈[0,t]×ℝ^d} E(‖Du_n(s, z)‖²_{H_T}) ≤ C (1 + ∫_0^t ds sup_{(τ,z)∈[0,s]×ℝ^d} E(‖Du_{n−1}(τ, z)‖²_{H_T}) (J(t − s) + 1)).

Then, by Gronwall's Lemma (see Lemma 6.2 in [69]), we finish the proof.

Lemma 5.4 Under the standing hypotheses, the sequence (Du_n(t, x), n ≥ 0) converges in L²(Ω; H_T), uniformly in (t, x) ∈ [0, T] × ℝ^d, to the H_T-valued stochastic process (U(t, x), (t, x) ∈ [0, T] × ℝ^d) solution of the equation

    U(t, x) = H(t, x)
        + ∫_0^t ∫_{ℝ^d} Λ(t − s, x − z) U(s, z) σ'(u(s, z)) W(ds, dz)
        + ∫_0^t ds ∫_{ℝ^d} Λ(s, dz) U(t − s, x − z) b'(u(t − s, x − z)),    (5.21)

with H(t, x) = Λ(t − ·, x − ∗) σ(u(·, ∗)).

Proof: We must prove that

    sup_{(t,x)∈[0,T]×ℝ^d} E(‖Du_n(t, x) − U(t, x)‖²_{H_T}) → 0    (5.22)

as n tends to infinity.


Set

    I_n^Λ(t, x) = Λ(t − ·, x − ∗) [σ(u_{n−1}(·, ∗)) − σ(u(·, ∗))],

    I_n^σ(t, x) = ∫_0^t ∫_{ℝ^d} Λ(t − s, x − z) σ'(u_{n−1}(s, z)) Du_{n−1}(s, z) W(ds, dz)
        − ∫_0^t ∫_{ℝ^d} Λ(t − s, x − z) σ'(u(s, z)) U(s, z) W(ds, dz),

    I_n^b(t, x) = ∫_0^t ds ∫_{ℝ^d} Λ(s, dz) [b'(u_{n−1}(t − s, x − z)) Du_{n−1}(t − s, x − z) − b'(u(t − s, x − z)) U(t − s, x − z)].

The Lipschitz property of σ yields

    E(‖I_n^Λ(t, x)‖²_{H_T}) ≤ C sup_{(t,x)∈[0,T]×ℝ^d} E(|u_{n−1}(t, x) − u(t, x)|²) ∫_0^t ds ∫_{ℝ^d} μ(dξ) |FΛ(s)(ξ)|²
        ≤ C sup_{(t,x)∈[0,T]×ℝ^d} E(|u_{n−1}(t, x) − u(t, x)|²).

Hence,

    lim_n sup_{(t,x)∈[0,T]×ℝ^d} E(‖I_n^Λ(t, x)‖²_{H_T}) = 0.    (5.23)

Consider the decomposition

    E(‖I_n^σ(t, x)‖²_{H_T}) ≤ C (D_{1,n}(t, x) + D_{2,n}(t, x)),

where

    D_{1,n}(t, x) = E(‖∫_0^t ∫_{ℝ^d} Λ(t − s, x − z) [σ'(u_{n−1}(s, z)) − σ'(u(s, z))] Du_{n−1}(s, z) W(ds, dz)‖²_{H_T}),

    D_{2,n}(t, x) = E(‖∫_0^t ∫_{ℝ^d} Λ(t − s, x − z) σ'(u(s, z)) [Du_{n−1}(s, z) − U(s, z)] W(ds, dz)‖²_{H_T}).

The isometry property of the stochastic integral, the Cauchy–Schwarz inequality and the properties of σ yield

    D_{1,n}(t, x) ≤ C sup_{(s,y)∈[0,T]×ℝ^d} (E(|u_{n−1}(s, y) − u(s, y)|⁴) E(‖Du_{n−1}(s, y)‖⁴_{H_T}))^{1/2} ∫_0^t ds ∫_{ℝ^d} μ(dξ) |FΛ(s)(ξ)|².


Owing to the L⁴(Ω) convergence of (u_n) to u and Lemma 5.3, we conclude that

    lim_n sup_{(t,x)∈[0,T]×ℝ^d} D_{1,n}(t, x) = 0.

Similarly,

    D_{2,n}(t, x) ≤ C ∫_0^t ds sup_{(τ,y)∈[0,s]×ℝ^d} E(‖Du_{n−1}(τ, y) − U(τ, y)‖²_{H_T}) J(t − s).    (5.24)

For the pathwise integral term, we have

    E(‖I_n^b(t, x)‖²_{H_T}) ≤ C (b_{1,n}(t, x) + b_{2,n}(t, x)),

with

    b_{1,n}(t, x) = E(‖∫_0^t ds ∫_{ℝ^d} Λ(s, dz) [b'(u_{n−1}(t − s, x − z)) − b'(u(t − s, x − z))] Du_{n−1}(t − s, x − z)‖²_{H_T}),

    b_{2,n}(t, x) = E(‖∫_0^t ds ∫_{ℝ^d} Λ(s, dz) b'(u(t − s, x − z)) [Du_{n−1}(t − s, x − z) − U(t − s, x − z)]‖²_{H_T}).

By the properties of the deterministic integral of Hilbert-valued processes, the assumptions on b and the Cauchy–Schwarz inequality, we obtain

    b_{1,n}(t, x) ≤ C ∫_0^t ds ∫_{ℝ^d} Λ(s, dz) E(|b'(u_{n−1}(t − s, x − z)) − b'(u(t − s, x − z))|² ‖Du_{n−1}(t − s, x − z)‖²_{H_T})
        ≤ C sup_{(s,y)∈[0,T]×ℝ^d} (E(|u_{n−1}(s, y) − u(s, y)|⁴) E(‖Du_{n−1}(s, y)‖⁴_{H_T}))^{1/2} ∫_0^t ds Λ(s, ℝ^d).

Thus,

    lim_n sup_{(t,x)∈[0,T]×ℝ^d} b_{1,n}(t, x) = 0.

Similar arguments yield

    b_{2,n}(t, x) ≤ C ∫_0^t ds sup_{(τ,y)∈[0,s]×ℝ^d} E(‖Du_{n−1}(τ, y) − U(τ, y)‖²_{H_T}).


Therefore, we have obtained that

    sup_{(s,x)∈[0,t]×ℝ^d} E(‖Du_n(s, x) − U(s, x)‖²_{H_T}) ≤ C_n + C ∫_0^t ds sup_{(τ,x)∈[0,s]×ℝ^d} E(‖Du_{n−1}(τ, x) − U(τ, x)‖²_{H_T}) (J(t − s) + 1),

with lim_n C_n = 0. Thus, applying a version of Gronwall's lemma (see Lemma 6.2 in [69]), we complete the proof of (5.22).


6 Analysis of Non-Degeneracy

In comparison with SDEs, the application of the criteria for existence and smoothness of densities of Gaussian functionals (see for instance Proposition 4.3 and Theorem 4.2) to SPDEs is not a well-developed topic. Most of the results for SPDEs are proved under ellipticity conditions. In this lecture, we shall discuss the non-degeneracy of the Malliavin matrix for the class of SPDEs studied in the preceding lecture, in a very simple situation: in dimension one and assuming ellipticity.

6.1 Existence of moments of the Malliavin covariance

Throughout this section, we fix (t, x) ∈ ]0, T] × ℝ^d and consider the random variable u(t, x) obtained as a solution of (5.7). Hence we are in the framework of Section 5, and therefore we are assuming in particular that Λ satisfies the hypotheses (HL). Following Definition 4.1, the Malliavin matrix is the random variable ‖Du(t, x)‖²_{H_T}. In this section, we want to study the property

    E(‖Du(t, x)‖^{−2p}_{H_T}) < ∞.    (6.1)

Notice that for the absolute continuity of the law of u(t, x), it suffices that ‖Du(t, x)‖²_{H_T} > 0, a.s. Clearly, having (6.1) for some p > 0 is a sufficient condition for this property to hold.

The classical connection between moments and distribution functions

Lemma 6.1 Fix $p \in\, ]0,\infty[$. The property (6.1) holds if and only if there exists $\varepsilon_0 > 0$, depending on $p$, such that
$$\int_0^{\varepsilon_0} \varepsilon^{-(1+p)}\, P\big(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon\big)\, d\varepsilon < \infty.$$


Proof: Recall that for a nonnegative random variable $Y$, $E(Y) = \int_0^\infty P(Y > \beta)\, d\beta$; in fact, this follows easily from Fubini's theorem. Apply this formula to $Y := \|Du(t,x)\|^{-2p}_{\mathcal{H}_T}$. We obtain
$$E\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T}\big) = m_1 + m_2,$$
with
$$m_1 = \int_0^{\beta_0} P\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T} > \beta\big)\, d\beta, \qquad m_2 = \int_{\beta_0}^{\infty} P\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T} > \beta\big)\, d\beta.$$
Clearly, $m_1 \le \beta_0$. The change of variable $\beta = \varepsilon^{-p}$ implies
$$m_2 = \int_{\beta_0}^{\infty} P\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T} > \beta\big)\, d\beta = \int_{\beta_0}^{\infty} P\big(\|Du(t,x)\|^{2}_{\mathcal{H}_T} < \beta^{-\frac1p}\big)\, d\beta = p \int_0^{\beta_0^{-1/p}} \varepsilon^{-(1+p)}\, P\big(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon\big)\, d\varepsilon.$$

    This finishes the proof.
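The Fubini identity behind this proof, $E(V^{-p}) = p\int_0^\infty \varepsilon^{-(1+p)}\, P(V < \varepsilon)\, d\varepsilon$ for a nonnegative random variable $V$, can be checked numerically. The following sketch (ours, for illustration only) takes $V$ uniform on $(0,1)$ and $p = 1/2$, where both sides equal $2$.

```python
# Numerical check (illustrative, not from the notes) of the identity used in
# the proof of Lemma 6.1:
#   E(V^{-p}) = p * \int_0^infty eps^{-(1+p)} P(V < eps) d(eps),
# for V ~ Uniform(0,1) and p = 1/2 (both sides equal 1/(1-p) = 2).

def lhs(p, n=100000):
    # E(V^{-p}) = \int_0^1 v^{-p} dv, by the midpoint rule
    return sum(((i + 0.5) / n) ** (-p) for i in range(n)) / n

def rhs(p, n=100000):
    # On (0,1], P(V < eps) = eps; the tail \int_1^infty eps^{-(1+p)} d(eps)
    # is mapped to (0,1] by the substitution u = 1/eps, giving \int_0^1 u^{p-1} du.
    head = sum(((i + 0.5) / n) ** (-p) for i in range(n)) / n
    tail = sum(((i + 0.5) / n) ** (p - 1.0) for i in range(n)) / n
    return p * (head + tail)

print(lhs(0.5), rhs(0.5))
```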

    Moments of low order

Knowing the size in $\varepsilon$ of the term $P(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon)$ will help us to verify the integrability of $\varepsilon^{-(1+p)}\, P(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon)$ at zero, and a posteriori to establish the validity of (6.1). The next proposition gives a result in this direction.

Proposition 6.1 We assume that

(1) there exists $\sigma_0 > 0$ such that $\inf\{|\sigma(z)|,\ z\in\mathbb{R}\} \ge \sigma_0$,

(2) there exist $\gamma, C_1 > 0$ such that for any $t\in(0,1)$,
$$C_1 t^{\gamma} \le \int_0^t ds \int_{\mathbb{R}^d} \mu(d\xi)\, |\mathcal{F}\Lambda(s)(\xi)|^2. \qquad (6.3)$$


Then for any $\varepsilon \in\, ]0,1[$,
$$P\big(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon\big) \le C\, \varepsilon^{(1/\gamma)\wedge 1}. \qquad (6.4)$$
Consequently, (6.1) holds for any $p < (1/\gamma)\wedge 1$.

Proof: Fix $\delta > 0$ such that $t - \delta > 0$. From (5.16), the definition of $\mathcal{H}_T$, and the triangular inequality, we clearly have
$$\|Du(t,x)\|^2_{\mathcal{H}_T} \ge \int_{t-\delta}^{t} ds\, \|D_{s,\cdot}u(t,x)\|^2_{\mathcal{H}} \ge \frac12 \int_{t-\delta}^{t} ds\, \|\Lambda(t-s, x-\cdot)\,\sigma(u(s,\cdot))\|^2_{\mathcal{H}} - I(t,x;\delta),$$
where
$$I(t,x;\delta) = \int_{t-\delta}^{t} ds\, \Big\| \int_s^t \int_{\mathbb{R}^d} \Lambda(t-r, x-z)\,\sigma'(u(r,z))\, D_{s,\cdot}u(r,z)\, W(dr,dz) + \int_s^t dr \int_{\mathbb{R}^d} \Lambda(t-r, dz)\, b'(u(r,x-z))\, D_{s,\cdot}u(r,x-z) \Big\|^2_{\mathcal{H}}.$$
Set
$$M_1(\delta) = \int_0^{\delta} ds \int_{\mathbb{R}^d} \mu(d\xi)\, |\mathcal{F}\Lambda(s)(\xi)|^2, \qquad M_2(\delta) = \int_0^{\delta} ds \int_{\mathbb{R}^d} \Lambda(s,y)\, dy.$$
Notice that by (5.3), $M_2(\delta) \le C\delta$. Our aim is to prove
$$\int_{t-\delta}^{t} ds\, \|\Lambda(t-s,x-\cdot)\,\sigma(u(s,\cdot))\|^2_{\mathcal{H}} \ge \sigma_0^2\, M_1(\delta), \qquad (6.5)$$
$$E\big(I(t,x;\delta)\big) \le C\, M_1(\delta)\,\big(M_1(\delta) + M_2(\delta)\big). \qquad (6.6)$$
For any $\varepsilon \in\, ]0,1[$, we can choose $\delta := \delta(\varepsilon) > 0$ such that $M_1(\delta) = \frac{4\varepsilon}{\sigma_0^2}$. Notice that by (6.3) this is possible. Then, $\delta \le \big(\frac{4\varepsilon}{C_1\sigma_0^2}\big)^{1/\gamma} \le C\,\varepsilon^{1/\gamma}$, and assuming that (6.5),


(6.6) hold true, we have
$$P\big(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon\big) \le P\Big(\int_{t-\delta}^{t} ds\, \|D_{s,\cdot}u(t,x)\|^2_{\mathcal{H}} < \varepsilon\Big) \le P\Big(I(t,x;\delta) \ge \frac{\sigma_0^2}{2} M_1(\delta) - \varepsilon\Big)$$
$$\le C\,\varepsilon^{-1}\, E\big(I(t,x;\delta(\varepsilon))\big) \le C\,\varepsilon^{-1}\big(\varepsilon^2 + \varepsilon^{1+\frac1\gamma}\big) \le C\,\varepsilon^{(1/\gamma)\wedge 1}.$$
This is (6.4).

Proof of (6.5): By a change of variables,
$$\int_{t-\delta}^{t} ds\, \|\Lambda(t-s,x-\cdot)\,\sigma(u(s,\cdot))\|^2_{\mathcal{H}} = \int_0^{\delta} ds\, \|\Lambda(s,x-\cdot)\,\sigma(u(t-s,\cdot))\|^2_{\mathcal{H}}.$$
Then, the inequality (5.11) applied to $Z(s,y) = |\sigma(u(t-s,y))|$ and $T := \delta$ yields (6.5). Indeed, for this choice of $Z$ and $T$,
$$\int_0^{\delta} ds\, \|\Lambda(s,x-\cdot)\,\sigma(u(t-s,\cdot))\|^2_{\mathcal{H}} = \lim_{n\to\infty} E\big(\|z_n\|^2_{\mathcal{H}_\delta}\big) \ge \sigma_0^2\, M_1(\delta).$$

Proof of (6.6): We shall give a bound for the mathematical expectation of each one of the terms
$$I_1(t,x;\delta) = \int_0^{\delta} ds\, \Big\| \int_{t-s}^{t} \int_{\mathbb{R}^d} \Lambda(t-r,x-z)\,\sigma'(u(r,z))\, D_{t-s,\cdot}u(r,z)\, W(dr,dz) \Big\|^2_{\mathcal{H}},$$
$$I_2(t,x;\delta) = \int_0^{\delta} ds\, \Big\| \int_{t-s}^{t} dr \int_{\mathbb{R}^d} \Lambda(t-r,dz)\, b'(u(r,x-z))\, D_{t-s,\cdot}u(r,x-z) \Big\|^2_{\mathcal{H}}.$$
Since $\sigma'$ is bounded, the inequality (5.15) yields
$$E\big(I_1(t,x;\delta)\big) \le C \sup_{(s,y)\in[0,\delta]\times\mathbb{R}^d} E\big(\|D_{t-\cdot,\ast}u(t-s,y)\|^2_{\mathcal{H}}\big)\; M_1(\delta).$$
For the pathwise integral, it is easy to prove that
$$E\big(I_2(t,x;\delta)\big) \le C \sup_{(s,y)\in[0,\delta]\times\mathbb{R}^d} E\big(\|D_{t-\cdot,\ast}u(t-s,y)\|^2_{\mathcal{H}}\big)\; M_2(\delta).$$


Since
$$\sup_{(s,y)\in[0,\delta]\times\mathbb{R}^d} E\big(\|D_{t-\cdot,\ast}u(t-s,y)\|^2_{\mathcal{H}}\big) \le C\, M_1(\delta)$$
(see for instance [61] or Lemma 8.2 in [69]), we get
$$E\big(I_1(t,x;\delta)\big) \le C\, M_1(\delta)^2, \qquad E\big(I_2(t,x;\delta)\big) \le C\, M_1(\delta)\, M_2(\delta).$$
This finishes the proof of (6.6) and therefore of (6.4). The statement about the validity of (6.1) for the given range of $p$ is a consequence of Lemma 6.1.

An example: the stochastic wave equation in dimension $d \le 3$

Proposition 6.1 can be applied for instance to the stochastic wave equation. Indeed, let $\Lambda$ be the fundamental solution of $Lu = 0$ with $L = \partial^2_{tt} - \Delta_d$, $d = 1,2,3$. Assume that the measure $\mu$ satisfies
$$\int_{\mathbb{R}^d} \frac{\mu(d\xi)}{(1+|\xi|^2)^{a}} < \infty, \quad \text{for some } a \in\, ]0,1[.$$
Since $\mathcal{F}\Lambda(s)(\xi) = \frac{\sin(s|\xi|)}{|\xi|}$, one checks that condition (6.3) holds with $\gamma = 3$. The discussion above leads to the following statement.

Theorem 6.1 Consider the stochastic process $\{u(t,x),\ (t,x)\in[0,T]\times\mathbb{R}^d\}$ solution of (5.7). We assume:

(1) the functions $\sigma$, $b$ are $\mathcal{C}^1$ with bounded Lipschitz continuous derivatives,

(2) there exists $\sigma_0 > 0$ such that $\inf\{|\sigma(z)|,\ z\in\mathbb{R}\} \ge \sigma_0$,

(3) $\mu$ satisfies the assumptions (HL),

(4) there exist $\gamma, C_1 > 0$ such that for any $t\in(0,1)$,
$$C_1 t^{\gamma} \le \int_0^t ds \int_{\mathbb{R}^d} \mu(d\xi)\, |\mathcal{F}\Lambda(s)(\xi)|^2.$$
Then, for any fixed $(t,x) \in\, ]0,T]\times\mathbb{R}^d$, the random variable $u(t,x)$ belongs to $\mathbb{D}^{1,2}$, and for any $p < (1/\gamma)\wedge 1$, $E\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T}\big) < \infty$. Consequently, the law of $u(t,x)$ has a density.
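Condition (6.3) in the wave-equation example above can be explored numerically. The sketch below is our own illustration: the spectral density $\mu(d\xi) = (1+\xi^2)^{-3/4}\,d\xi$ in $d = 1$ is an arbitrary admissible choice, and $\mathcal{F}\Lambda(s)(\xi) = \sin(s|\xi|)/|\xi|$ is the Fourier transform of the one-dimensional wave kernel. The ratio $g(t)/t^3$ stays bounded away from zero for small $t$, consistent with the exponent $\gamma = 3$.

```python
import math

# Illustrative check (not from the notes): for the wave equation in d = 1,
# F(Lambda(s))(xi) = sin(s|xi|)/|xi|.  With the sample spectral density
# mu(d xi) = (1 + xi^2)^(-3/4) d xi, the quantity
#   g(t) = \int_0^t ds \int_R mu(d xi) |F(Lambda(s))(xi)|^2
# behaves like a constant times t^3 for small t, i.e. (6.3) with gamma = 3.

def g(t, ns=200, nxi=2000, xi_max=50.0):
    ds, dxi = t / ns, xi_max / nxi
    xis = [(j + 0.5) * dxi for j in range(nxi)]
    weights = [(1.0 + xi * xi) ** (-0.75) for xi in xis]
    total = 0.0
    for i in range(ns):
        s = (i + 0.5) * ds
        inner = sum(w * (math.sin(s * xi) / xi) ** 2
                    for xi, w in zip(xis, weights))
        total += 2.0 * inner * dxi * ds   # factor 2: integrand is even in xi
    return total

ratios = [g(t) / t ** 3 for t in (0.05, 0.1, 0.2)]
print(ratios)
```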


At this point, we can apply Chebychev's inequality to obtain
$$P\Big(I(t,x;\delta) \ge \frac{\sigma_0^2}{2} M_1(\delta) - \varepsilon\Big) \le C\,\varepsilon^{-q}\, E\big(|I(t,x;\delta(\varepsilon))|^q\big),$$
for any $q > 1$. Using $L^q(\Omega)$-estimates for the Hilbert-valued stochastic convolutions (see for instance [69], Theorem 6.1), and for pathwise integrals as well, yields
$$E\big(|I(t,x;\delta(\varepsilon))|^q\big) \le C\, \delta(\varepsilon)^{q-1}\,\Big(M_1(\delta(\varepsilon))^{2q} + M_1(\delta(\varepsilon))^{q}\, \delta(\varepsilon)^{q}\Big).$$
By the choice of $\delta(\varepsilon)$, this implies
$$P\big(\|Du(t,x)\|^2_{\mathcal{H}_T} < \varepsilon\big) \le C\, \varepsilon^{\frac{q-1}{\gamma} + [q \wedge \frac{q}{\gamma}]}.$$
Since $q$ can be chosen arbitrarily large, we obtain (6.1) for any $p > 0$.

Regularity of the density

Proceeding recursively, it is possible to extend the results in Section 5.2 and prove that, under suitable assumptions, the solution of (5.7) is infinitely differentiable in the Watanabe-Sobolev sense (see [69], Chapter 7). Then, owing to the results discussed in the preceding paragraphs, applying Theorem 4.2 yields the following:

Theorem 6.2 Consider the stochastic process $\{u(t,x),\ (t,x)\in[0,T]\times\mathbb{R}^d\}$ solution of (5.7). We assume:

(1) the functions $\sigma$, $b$ belong to $\mathcal{C}^\infty$ and have bounded derivatives of any order,

(2) there exists $\sigma_0 > 0$ such that $\inf\{|\sigma(z)|,\ z\in\mathbb{R}\} \ge \sigma_0$,

(3) $\mu$ satisfies the assumptions (HL),

(4) there exist $\gamma, C_1 > 0$ such that for any $t\in(0,1)$,
$$C_1 t^{\gamma} \le \int_0^t ds \int_{\mathbb{R}^d} \mu(d\xi)\, |\mathcal{F}\Lambda(s)(\xi)|^2.$$
Then, for any fixed $(t,x) \in\, ]0,T]\times\mathbb{R}^d$, the random variable $u(t,x)$ belongs to $\mathbb{D}^\infty$, and for any $p > 0$, $E\big(\|Du(t,x)\|^{-2p}_{\mathcal{H}_T}\big) < \infty$. Consequently, the law of $u(t,x)$ has an infinitely differentiable density.


    6.2 Some references

To end this lecture, we mention some references on existence and smoothness of density of probability laws, as a guide for the reader to have a further insight into the subject.

The first application of Malliavin calculus to SPDEs may be found in [51]; it concerns the hyperbolic equation on $\mathbb{R}^n$
$$\frac{\partial^2}{\partial s\,\partial t} X(s,t) = A(X(s,t))\, \dot{W}_{s,t} + A_0(X(s,t)), \qquad (6.8)$$
with $s,t \in\, ]0,1]$ and initial condition $X(s,t) = 0$ if $s\cdot t = 0$. Here
$$A: \mathbb{R}^n \to \mathbb{R}^{n\times d}, \qquad A_0: \mathbb{R}^n \to \mathbb{R}^n,$$
are smooth functions, and $W$ is a $d$-dimensional Brownian sheet on $[0,1]^2$, that is, $W = \big(W_{s,t} = (W^1_{s,t},\ldots,W^d_{s,t}),\ (s,t)\in[0,1]^2\big)$, with independent Gaussian components, zero mean and covariance function given by
$$E\big(W^i_{s_1,t_1} W^i_{s_2,t_2}\big) = (s_1 \wedge s_2)(t_1 \wedge t_2),$$
$i = 1,\ldots,d$. In dimension $n = 1$, this equation is transformed into the standard wave equation after a rotation of forty-five degrees. Otherwise, (6.8) is an extension to a two-parameter space of the Itô equation. The existence and smoothness

of the density for the probability law of the solution to (6.8) at a fixed time parameter $(s,t)$, with $s\cdot t \neq 0$, has been proved under a specific type of Hörmander's condition on the vector fields $A_i$, $i = 1,\ldots,d$, which does not coincide with Hörmander's condition for diffusions. An extension to a non-restricted Hörmander's condition, that is, including the vector field $A_0$, has been done in [52]. The one-dimensional wave equation perturbed by space-time white noise, as an initial value problem but also as a boundary value problem, has been studied in [13]. The degeneracy conditions on the free terms of the equation are different from those in Theorem 6.2.

Existence of the density for equation (4.1) in dimension one, with $L = \partial_t - \Delta$ and space-time white noise, has been first studied in [58]. The authors consider a Dirichlet boundary value problem on $[0,1]$, with initial condition $u(0,x) = u_0(x)$. The required non-degeneracy condition reads as follows:
$$\sigma(u_0(y)) \neq 0, \quad \text{for some } y \in\, ]0,1[. \qquad (6.9)$$


The same equation has been analyzed in [3]. In this reference, the authors consider the random vector $(u(t,x_1),\ldots,u(t,x_m))$ obtained by looking at the solution of the equation at a fixed time $t > 0$ and different points $x_1,\ldots,x_m \in\, ]0,1[$. Under assumption (2) of Theorem 6.2, they obtain the smoothness of the density. Recently in [45], this result has been improved: the authors prove that for $m = 1$, the assumption (6.9) yields the smoothness of the density as well.

The first application of Malliavin calculus to SPDEs with correlated noise appears in [43], and a first attempt at a unified approach to stochastic heat and wave equations is done in [37]. Hörmander's type conditions in a general context of Volterra equations have been given in [64].


    7 Small perturbations of the density

Consider the SPDE (5.1), which we write in its mild form, as in (5.7). We replace $W(t,x)$ by $\varepsilon W(t,x)$, with $\varepsilon \in\, ]0,1[$, and we are interested in the behaviour, as $\varepsilon \to 0$, of the solution of the modified equation, which we will denote by $u^\varepsilon$. At the moment, this is a very vague plan; roughly speaking, one would like to know the effect of small noise on a deterministic evolution equation. Several questions may be addressed. For instance, denoting by $\mu^\varepsilon$ the probability law of the solution $u^\varepsilon$, we may want to prove a large deviation principle on spaces where the solution lives.

We recall that a family $(\mu^\varepsilon,\ \varepsilon\in\,]0,1[)$ of probability measures on a Polish space $E$ is said to satisfy a large deviation principle with rate functional $I$ if $I: E \to [0,\infty]$ is a lower semicontinuous function such that the level sets $\{I(x) \le a\}$ are compact, and for any Borel set $B \subset E$,
$$-\inf_{x\in \mathring{B}} I(x) \le \liminf_{\varepsilon\to 0}\, \varepsilon^2 \log\big(\mu^\varepsilon(B)\big) \le \limsup_{\varepsilon\to 0}\, \varepsilon^2 \log\big(\mu^\varepsilon(B)\big) \le -\inf_{x\in \bar{B}} I(x).$$
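In the simplest Gaussian situation these bounds can be seen explicitly. The following sketch (our illustration, not in the notes) takes $\mu^\varepsilon$ to be the law of $\varepsilon Z$ with $Z$ standard normal on $\mathbb{R}$, for which the rate functional is $I(x) = x^2/2$, and checks numerically that $\varepsilon^2 \log \mu^\varepsilon([1,\infty[)\, \to -\inf_{x\ge1} I(x) = -1/2$.

```python
import math

# Illustration (not from the notes): Schilder-type scaling for the laws
# mu^eps of eps*Z, Z ~ N(0,1).  For B = [1, infinity), I(x) = x^2/2 gives
# inf_B I = 1/2, and eps^2 * log P(eps*Z >= 1) -> -1/2 as eps -> 0.

def scaled_log_prob(eps):
    # P(eps*Z >= 1) = P(Z >= 1/eps), computed via the complementary
    # error function: P(Z >= a) = erfc(a / sqrt(2)) / 2
    tail = 0.5 * math.erfc(1.0 / (eps * math.sqrt(2.0)))
    return eps ** 2 * math.log(tail)

values = [scaled_log_prob(e) for e in (0.1, 0.05, 0.035)]
print(values)
```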

In many of the applications that have motivated the theory of large deviations, as $\varepsilon \to 0$, $\mu^\varepsilon$ degenerates to a Dirac delta measure at zero. Hence, typically a large deviation principle provides the rate of convergence and an accurate description of the degeneracy. Suppose that the measures $\mu^\varepsilon$ live in $\mathbb{R}^d$ and have a density with respect to the Lebesgue measure. A natural question is whether from a large deviation principle one could obtain precise lower and upper bounds for the density. This question has been addressed by several authors in the context of diffusion processes (Azencott, Ben Arous and Léandre, Varadhan, to mention some of them), and the result is known as the logarithmic estimates and also as the Varadhan estimates. We recall this result, since it is the inspiration for extensions to SPDEs.

Consider the family of stochastic differential equations on $\mathbb{R}^n$ (in the Stratonovich formulation)
$$X^\varepsilon_t = x + \varepsilon \int_0^t A(X^\varepsilon_s) \circ dB_s + \int_0^t A_0(X^\varepsilon_s)\, ds,$$
$t\in[0,1]$, $\varepsilon > 0$, where $A: \mathbb{R}^n \to \mathbb{R}^{n\times d}$, $A_0: \mathbb{R}^n \to \mathbb{R}^n$ and $B$ is a $d$-dimensional Brownian motion. For each $h$ in the Cameron-Martin space $H$ associated with $B$, we consider the ordinary (deterministic) equation
$$S^h_t = x + \int_0^t A(S^h_s)\, \dot{h}_s\, ds + \int_0^t A_0(S^h_s)\, ds,$$


$t\in[0,1]$, termed the skeleton of $X^\varepsilon$. For $y\in\mathbb{R}^n$, set
$$d^2(y) = \inf\{\|h\|^2_H;\ S^h_1 = y\},$$
$$d^2_R(y) = \inf\{\|h\|^2_H;\ S^h_1 = y,\ \det \gamma_{S^h_1} > 0\},$$
where $\gamma_{S^h_1}$ denotes the $n$-dimensional matrix whose entries are $\langle D(S^h_1)^i, D(S^h_1)^j\rangle_H$, $i,j = 1,\ldots,n$. Here $D$ stands for the Fréchet differential operator on Banach spaces, and we assume that the vector fields $A_1,\ldots,A_d$ (the components of $A$) and $A_0$ are smooth enough. Notice that we use the same notation for the deterministic matrix $(\langle D(S^h_1)^i, D(S^h_1)^j\rangle_H)_{i,j}$ as for the Malliavin matrix. In this context, the former is often termed the deterministic Malliavin matrix. The quantities $d^2(y)$, $d^2_R(y)$ are related to the energy needed by a system described by the skeleton to leave the initial condition $x\in\mathbb{R}^n$ and reach $y\in\mathbb{R}^n$ at time $t = 1$.

This is the result for SDEs.

Theorem 7.1 Let $A: \mathbb{R}^n \to \mathbb{R}^{n\times d}$, $A_0: \mathbb{R}^n \to \mathbb{R}^n$ be infinitely differentiable functions with bounded derivatives of any order. Assume:

(HM) There exists $k_0 \ge 1$ such that the vector space spanned by the vector fields
$$[A_{j_1}, [A_{j_2}, [\ldots [A_{j_k}, A_{j_0}]\ldots]]], \quad 0 \le k \le k_0,$$
where $j_0 \in \{1,2,\ldots,d\}$ and $j_i \in \{0,1,2,\ldots,d\}$ if $1 \le i \le k$, at the point $x\in\mathbb{R}^n$, has dimension $n$.

Then, the random variable $X^\varepsilon_1$ has a smooth density $p^\varepsilon$, and
$$-d^2_R(y) \le \liminf_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) \le \limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) \le -d^2(y). \qquad (7.1)$$
In addition, if $\inf\{\det \gamma_{S^h_1};\ h \text{ such that } S^h_1 = y\} > 0$, then $d^2(y) = d^2_R(y)$ and consequently,
$$\lim_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) = -d^2(y). \qquad (7.2)$$
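A quick sanity check of the normalization in (7.1)-(7.2) (our toy computation, not part of the notes) is the non-degenerate case $n = d = 1$, $A \equiv 1$, $A_0 \equiv 0$, $x = 0$: then $X^\varepsilon_1 \sim N(0, \varepsilon^2)$, the skeleton gives $d^2(y) = y^2$, and $2\varepsilon^2 \log p^\varepsilon(y) \to -y^2$ can be read off the explicit Gaussian density.

```python
import math

# Toy check of the Varadhan estimate (7.2): for dX = eps dB, X_0 = 0,
# X_1^eps ~ N(0, eps^2), so
#   p^eps(y) = exp(-y^2 / (2 eps^2)) / (eps * sqrt(2 pi))
# and 2 eps^2 log p^eps(y) = -y^2 - 2 eps^2 log(eps sqrt(2 pi)) -> -y^2,
# which is -d^2(y): the minimal-energy h with S^h_1 = y has ||h||_H^2 = y^2.

def two_eps2_log_density(eps, y=1.5):
    log_p = -(y ** 2) / (2.0 * eps ** 2) - math.log(eps * math.sqrt(2.0 * math.pi))
    return 2.0 * eps ** 2 * log_p

vals = [two_eps2_log_density(e) for e in (0.5, 0.1, 0.01)]
print(vals)
```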

The assumption (HM) in this theorem is termed Hörmander's unrestricted assumption; the notation $[\cdot,\cdot]$ refers to the Lie brackets. The proof of this theorem given in [11] admits an extension to the general framework of an abstract Wiener space. This fact has been noticed and applied in [33], and then written in [47]; since then, it has been applied to several examples of SPDEs. In this lecture we shall give this general result and then some hints on its application to an example of the stochastic heat equation.

Throughout this section, $\{W(h),\ h\in H\}$ is a Gaussian family, as has been defined in Section 3.2. We will consider non-degenerate random vectors $F$, which in this context means that $F \in (\mathbb{D}^\infty)^n$ and
$$(\det \gamma_F)^{-1} \in \bigcap_{p\in[1,\infty[} L^p(\Omega),$$
where $\gamma_F$ denotes the Malliavin matrix of $F$. Notice that by Theorem 4.2, non-degenerate random vectors $F$ have an infinitely differentiable density.

    7.1 General results

    Lower bound

Proposition 7.1 Let $\{F^\varepsilon,\ \varepsilon\in\,]0,1]\}$ be a family of non-degenerate $n$-dimensional random vectors and let $\Phi \in \mathcal{C}^\infty_p(H;\mathbb{R}^n)$ (the space of $\mathcal{C}^\infty$ functions with polynomial growth) be such that for each $h\in H$, the limit
$$\lim_{\varepsilon\to 0} \frac{1}{\varepsilon}\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big) - \Phi(h)\Big) = Z(h) \qquad (7.3)$$
exists in the topology of $\mathbb{D}^\infty$ and defines an $n$-dimensional random vector with absolutely continuous distribution. Then, denoting by $p^\varepsilon$ the density of $F^\varepsilon$ and setting
$$d^2_R(y) = \inf\{\|h\|^2_H :\ \Phi(h) = y,\ \det \gamma_{\Phi(h)} > 0\},$$
$y\in\mathbb{R}^n$, we have
$$-d^2_R(y) \le \liminf_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y). \qquad (7.4)$$

Proof: Let $y\in\mathbb{R}^n$ be such that $d^2_R(y) < \infty$. Fix $\delta > 0$; there exists $h\in H$ such that $\Phi(h) = y$, $\det \gamma_{\Phi(h)} > 0$ and $\|h\|^2_H \le d^2_R(y) + \delta$. For any function $f \in \mathcal{C}^\infty_0(\mathbb{R}^n)$, we can write
$$E\big(f(F^\varepsilon)\big) = \exp\Big(-\frac{\|h\|^2_H}{2\varepsilon^2}\Big)\, E\Big[f\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big)\Big) \exp\Big(-\frac{W(h)}{\varepsilon}\Big)\Big], \qquad (7.5)$$
by Girsanov's theorem.


Consider a smooth approximation of $\mathbf{1}_{[-\delta,\delta]}$ given by a function $\chi \in \mathcal{C}^\infty$, $0 \le \chi \le 1$, such that $\chi(t) = 0$ if $t \notin [-2\delta, 2\delta]$ and $\chi(t) = 1$ if $t \in [-\delta,\delta]$. Then, using (7.5) and assuming that $f$ is a positive function, we have
$$E\big(f(F^\varepsilon)\big) \ge \exp\Big(-\frac{\|h\|^2_H + 4\delta}{2\varepsilon^2}\Big)\, E\Big[f\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big)\Big)\, \chi(\varepsilon W(h))\Big].$$
We now apply this inequality to a sequence $f_n$, $n \ge 1$, of smooth approximations of the Dirac delta function at $y$. Passing to the limit and taking logarithms, we obtain
$$2\varepsilon^2 \log p^\varepsilon(y) \ge -\big(\|h\|^2_H + 4\delta\big) + 2\varepsilon^2 \log E\Big[\delta_y\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big)\Big)\, \chi(\varepsilon W(h))\Big].$$
Hence, to complete the proof we need to check that
$$\lim_{\varepsilon\to 0}\, \varepsilon^2 \log E\Big[\delta_y\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big)\Big)\, \chi(\varepsilon W(h))\Big] = 0. \qquad (7.6)$$

Since $y = \Phi(h)$, we clearly have
$$E\Big[\delta_y\Big(F^\varepsilon\Big(\omega + \frac{h}{\varepsilon}\Big)\Big)\, \chi(\varepsilon W(h))\Big] = \varepsilon^{-n}\, E\Big[\delta_0\Big(\frac{F^\varepsilon(\omega + \frac{h}{\varepsilon}) - \Phi(h)}{\varepsilon}\Big)\, \chi(\varepsilon W(h))\Big].$$
The expression
$$E\Big[\delta_0\Big(\frac{F^\varepsilon(\omega + \frac{h}{\varepsilon}) - \Phi(h)}{\varepsilon}\Big)\, \chi(\varepsilon W(h))\Big]$$
tends to the density of $Z(h)$ at zero, as $\varepsilon \to 0$, as can be proved using the integration by parts formula (4.9)-(4.11). Hence (7.6) holds true, and this ends the proof of the Proposition.
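The Girsanov identity (7.5) reduces, in its most elementary one-dimensional form, to the Cameron-Martin shift formula $E[f(Z)] = e^{-c^2/2}\, E[f(Z + c)\, e^{-cZ}]$ for $Z \sim N(0,1)$, where $c$ plays the role of $\|h\|_H$ and $Z$ that of $W(h)$, with $\varepsilon = 1$. A short Monte Carlo sketch (ours, for illustration) confirms it.

```python
import math
import random

# Monte Carlo check (illustration, not from the notes) of the elementary
# Cameron-Martin/Girsanov identity underlying (7.5):
#   E[f(Z)] = exp(-c^2/2) * E[f(Z + c) * exp(-c*Z)],   Z ~ N(0,1).

random.seed(42)
c, n = 1.0, 400000
f = lambda z: z * z                      # E[f(Z)] = E[Z^2] = 1

lhs_sum = rhs_sum = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    lhs_sum += f(z)
    rhs_sum += f(z + c) * math.exp(-c * z)

lhs = lhs_sum / n
rhs = math.exp(-c * c / 2.0) * rhs_sum / n
print(lhs, rhs)
```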

Upper bound

Stating the upper bound for the logarithm of the density needs more demanding assumptions. Among others, the family $\{F^\varepsilon,\ \varepsilon\in\,]0,1]\}$ must satisfy a large deviation principle, and there should be a control of the norm of the inverse of the Malliavin matrix in terms of powers of $\varepsilon$.


Proposition 7.2 Let $\{F^\varepsilon,\ \varepsilon\in\,]0,1]\}$ be a family of non-degenerate $n$-dimensional random vectors and let $\Phi \in \mathcal{C}^\infty_p(H;\mathbb{R}^n)$ be such that:

1. $\sup_{\varepsilon\in\,]0,1]} \|F^\varepsilon\|_{k,p} < \infty$, for each $k \ge 1$ and $p\in[1,\infty[$,

2. there exist $\varepsilon_0 > 0$ and $N(p) \in\, ]1,\infty[$ such that $\|(\gamma_{F^\varepsilon})^{-1}\|_p \le C\,\varepsilon^{-N(p)}$ for $\varepsilon \le \varepsilon_0$, for each integer $p$,

3. $\{F^\varepsilon,\ \varepsilon\in\,]0,1]\}$ satisfies a large deviation principle on $\mathbb{R}^n$ with rate function $I(y)$, $y\in\mathbb{R}^n$.

Then,
$$\limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) \le -I(y). \qquad (7.7)$$

Proof: It is an application of the integration by parts formula (4.9)-(4.11). Indeed, fix $y\in\mathbb{R}^n$ and a smooth function $\chi \in \mathcal{C}^\infty_0(\mathbb{R}^n)$, $0\le\chi\le1$, such that $\chi$ is equal to one in a neighbourhood of $y$. Then we can write
$$p^\varepsilon(y) = E\big(\chi(F^\varepsilon)\,\delta_y(F^\varepsilon)\big).$$
Applying Hölder's inequality with $p, q \in\, ]1,\infty[$, $\frac1p + \frac1q = 1$, we obtain
$$E\big(\chi(F^\varepsilon)\,\delta_y(F^\varepsilon)\big) = E\big(\mathbf{1}_{\{F^\varepsilon > y\}}\, H_{(1,\ldots,n)}(F^\varepsilon, \chi(F^\varepsilon))\big) \le E\big(|H_{(1,\ldots,n)}(F^\varepsilon, \chi(F^\varepsilon))|\, \mathbf{1}_{\{F^\varepsilon \in \operatorname{supp}\chi\}}\big)$$
$$\le \big(P\{F^\varepsilon \in \operatorname{supp}\chi\}\big)^{\frac1q}\, \|H_{(1,\ldots,n)}(F^\varepsilon, \chi(F^\varepsilon))\|_p.$$
By the $L^p$ estimates of the Skorohod integral (see for instance [79], or Proposition 3.2.2 in [47]), there exist real numbers greater than one, $\bar{p}, a, b, a', b'$, such that
$$\|H_{(1,\ldots,n)}(F^\varepsilon, \chi(F^\varepsilon))\|_p \le C\, \|(\gamma_{F^\varepsilon})^{-1}\|_{\bar p}\, \|F^\varepsilon\|_{a,b}\, \|\chi(F^\varepsilon)\|_{a',b'}.$$
The assumptions 1-2 above ensure
$$\limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log \|H_{(1,\ldots,n)}(F^\varepsilon, \chi(F^\varepsilon))\|_p = 0,$$
while assumption 3 implies
$$\limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log P\{F^\varepsilon \in \operatorname{supp}\chi\} \le -\inf\big(I(z),\ z\in \operatorname{supp}\chi\big),$$
and consequently,
$$\limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) \le -\frac1q \inf\big(I(z),\ z\in \operatorname{supp}\chi\big).$$
Set $I(\operatorname{supp}\chi) = \inf(I(z),\ z\in\operatorname{supp}\chi)$. For any $\delta > 0$, there exists $q > 1$ such that $\frac1q I(\operatorname{supp}\chi) \ge I(\operatorname{supp}\chi) - \delta$. Then, by taking a sequence of smooth functions $\chi_n$ (with the same properties as $\chi$) such that $\operatorname{supp}\chi_n$ decreases to $\{y\}$, we see that
$$\limsup_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon(y) \le -I(y) + \delta,$$
and since $\delta$ is arbitrary, we obtain (7.7). This ends the proof.

    7.2 An example: the stochastic heat equation

In this section, we consider the SPDE
$$u(t,x) = \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, \sigma(u(s,y))\, W(ds,dy) + \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, b(u(s,y))\, ds\, dy, \qquad (7.8)$$
$t\in[0,T]$, where $G(t,x) = (2\pi t)^{-\frac12} \exp\big(-\frac{|x|^2}{2t}\big)$ and $W$ is space-time white noise. In the framework of Section 5, this corresponds to a stochastic heat equation in dimension $d = 1$ and to a spatial covariance measure given by $\Gamma(dy) = \delta_0(y)$. The companion perturbed family we would like to study is

$$u^\varepsilon(t,x) = \varepsilon \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, \sigma(u^\varepsilon(s,y))\, W(ds,dy) + \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, b(u^\varepsilon(s,y))\, ds\, dy, \qquad (7.9)$$
$\varepsilon\in\,]0,1[$. Our purpose is to apply Propositions 7.1 and 7.2 and therefore to obtain logarithmic estimates for the density. Throughout this section we will assume the following condition, although some of the results hold under weaker assumptions.
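Two elementary properties of the kernel $G$ in (7.8)-(7.9) are worth keeping in mind: it is the centered Gaussian density with variance $t$ (so it has unit mass), and it satisfies the semigroup property $G(t) * G(s) = G(t+s)$. A small numerical sketch (ours, for illustration):

```python
import math

# Sanity checks (illustrative) for the heat kernel of (7.8):
#   G(t, x) = (2 pi t)^(-1/2) * exp(-|x|^2 / (2 t)).
# (i)  unit mass: \int_R G(t, x) dx = 1;
# (ii) semigroup property: (G(t) * G(s))(x) = G(t + s, x).

def G(t, x):
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def mass(t, L=20.0, n=4000):
    dx = 2.0 * L / n
    return sum(G(t, -L + (i + 0.5) * dx) for i in range(n)) * dx

def convolution(t, s, x, L=20.0, n=4000):
    dy = 2.0 * L / n
    return sum(G(t, x - (-L + (i + 0.5) * dy)) * G(s, -L + (i + 0.5) * dy)
               for i in range(n)) * dy

m = mass(0.5)
lhs = convolution(0.3, 0.7, 1.0)
rhs = G(1.0, 1.0)
print(m, lhs, rhs)
```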


(C) The functions $\sigma$, $b$ are $\mathcal{C}^\infty$ with bounded derivatives.

The Hilbert space $H$ we should take into account here is the Cameron-Martin space associated with $W$. It consists of the set of functions $h: [0,T]\times\mathbb{R} \to \mathbb{R}$, absolutely continuous with respect to the product Lebesgue measure $dt\,dx$ and such that
$$\|h\|^2_H := \int_0^T \int_{\mathbb{R}} |\dot{h}_{t,x}|^2\, dt\, dx < \infty. \qquad (7.10)$$

1 Non-degeneracy

We shall assume:

(ND) there exists $\sigma_0 > 0$ such that $\inf\{|\sigma(z)|,\ z\in\mathbb{R}\} \ge \sigma_0$.

Then the family $(u^\varepsilon,\ \varepsilon\in\,]0,1[)$ is non-degenerate and therefore the random variable $u^\varepsilon(t,x)$ possesses a $\mathcal{C}^\infty$ density $p^\varepsilon$. This result has been proved in [3] (see also Sections 5 and 6).

2 Large deviation principle

Large deviation principles for the family $(u^\varepsilon,\ \varepsilon\in\,]0,1[)$ in the topology of Hölder continuous functions have been established under different types of assumptions by Sowers ([66]) and by Chenal and Millet. Here, for the sake of simplicity, we shall consider the topology of uniform convergence on compact sets and denote by $\mathcal{C}([0,T]\times\mathbb{R})$ the set of continuous functions defined on $[0,T]\times\mathbb{R}$, endowed with this topology. It is known that $(u^\varepsilon,\ \varepsilon\in\,]0,1[)$ satisfies a large deviation principle on $\mathcal{C}([0,T]\times\mathbb{R})$ with rate function
$$I(f) = \inf\Big\{\frac{\|h\|^2_H}{2};\ h\in H,\ \Phi(h) = f\Big\},$$
where $\Phi(h)$ denotes the skeleton associated with $h$.

    70

  • 8/14/2019 Lecturenotes London

    71/80

This is a functional large deviation principle; the contraction principle (a transfer principle for large deviations through continuous functionals) gives rise to the following statement: Fix $(t,x) \in\, ]0,T]\times\mathbb{R}$. Assuming (C), $u^\varepsilon(t,x)$ satisfies a large deviation principle on $\mathbb{R}$ with rate function
$$I(y) = \inf\Big\{\frac{\|h\|^2_H}{2};\ h\in H,\ \Phi(h)(t,x) = y\Big\}, \quad y\in\mathbb{R}. \qquad (7.11)$$

    A result for the stochastic heat equation

For the family defined in (7.9), we have the following theorem ([40], Theorem 2.1).

Theorem 7.2 Assume that the functions $\sigma$, $b$ satisfy (C) and also (ND). Fix $(t,x)\in\,]0,T]\times\mathbb{R}$ and let $I: \mathbb{R} \to [0,\infty]$ be defined as in (7.11). Then the densities $(p^\varepsilon_{t,x},\ \varepsilon\in\,]0,1[)$ of $(u^\varepsilon(t,x),\ \varepsilon\in\,]0,1[)$ satisfy
$$\lim_{\varepsilon\to 0}\, 2\varepsilon^2 \log p^\varepsilon_{t,x}(y) = -I(y). \qquad (7.12)$$

Proof: We shall consider the abstract Wiener space associated with the space-time white noise $W$ and check that $F^\varepsilon := u^\varepsilon(t,x)$, $\varepsilon\in\,]0,1[$, satisfy the assumptions of Propositions 7.1 and 7.2. We give some hints for this in the sequel.

Proposition 7.2, Assumption 1. Following the results of Section 5, we already know that $u^\varepsilon(t,x) \in \mathbb{D}^\infty$, that is, $\|u^\varepsilon(t,x)\|_{k,p} < \infty$ for any $k \ge 1$ and $p\in[1,\infty[$; moreover, these norms are bounded uniformly in $\varepsilon\in\,]0,1]$.

Proposition 7.2, Assumption 2. Write $\gamma_{F^\varepsilon} = \varepsilon^2 Q^\varepsilon$. We shall sketch the proof of
$$\sup_{\varepsilon\in\,]0,1[} E\big(|Q^\varepsilon|^{-p}\big) \le C, \qquad (7.13)$$
for any $p\in[1,\infty[$. Then we will have
$$E\big(|\gamma_{F^\varepsilon}|^{-p}\big) \le \varepsilon^{-2p} \sup_{\varepsilon\in\,]0,1[} E\big(|Q^\varepsilon|^{-p}\big) \le C\,\varepsilon^{-2p},$$
and we will obtain the desired conclusion with $N(p) = 2$.

Let us give some hints for the proof of (7.13). Remember that $\gamma_{F^\varepsilon} = \|Du^\varepsilon(t,x)\|^2_{\mathcal{H}_T}$. The $\mathcal{H}_T$-valued stochastic process $Du^\varepsilon(t,x)$ satisfies an equation similar to (5.16), where $\sigma$ is replaced by $\varepsilon\sigma$ (and consequently $\sigma'$ replaced by $\varepsilon\sigma'$). By uniqueness of solution, it holds that $Du^\varepsilon(t,x) = \varepsilon\,\sigma(u^\varepsilon(\cdot,\ast))\, Y^\varepsilon(t,x)$, where $Y^\varepsilon(t,x)$ is an $\mathcal{H}_T$-valued stochastic process, solution to the equation
$$Y^\varepsilon(t,x) = G(t-\cdot, x-\ast) + \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, \varepsilon\sigma'(u^\varepsilon(s,y))\, Y^\varepsilon(s,y)\, W(ds,dy) + \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, b'(u^\varepsilon(s,y))\, Y^\varepsilon(s,y)\, ds\, dy.$$
Then $Q^\varepsilon$ corresponds to $\|\sigma(u^\varepsilon(\cdot,\ast))\, Y^\varepsilon(t,x)\|^2_{\mathcal{H}_T}$, which essentially behaves like $\|Du(t,x)\|^2_{\mathcal{H}_T}$.

Proposition 7.2, Assumption 3. See the result under the heading Large deviation principle.

Assumptions of Proposition 7.1. The non-degeneracy of the family $u^\varepsilon(t,x)$, $\varepsilon\in\,]0,1[$, has already been discussed. Now we will discuss the existence of the limit (7.3), which is a hypothesis on the existence of a directional derivative with respect to $\varepsilon$. Let us proceed formally. For any $h\in H$, set $Z^{\varepsilon,h}(t,x)(\omega) = u^\varepsilon(t,x)\big(\omega + \frac{h}{\varepsilon}\big)$. By uniqueness of solution, $Z^{\varepsilon,h}(t,x)$ is given by
$$Z^{\varepsilon,h}(t,x) = \int_0^t \int_{\mathbb{R}} G(t-s, x-y)\, \Big[\varepsilon\,\sigma(Z^{\varepsilon,h}(s,y))\, W(ds,dy) + \big(\sigma(Z^{\varepsilon,h}(s,y))\,\dot{h}_{s,y} + b(Z^{\varepsilon,h}(s,y))\big)\, ds\, dy\Big].$$
It is now clear that the candidate for $\Phi(h)$ in Proposition 7.1 should be $Z^{0,h}(t,x)$, and by uniqueness of solution $Z^{0,h}(t,x) = \Phi(h)(t,x)$. Hence, we have to check that the mapping $\varepsilon\in\,]0,1[\ \mapsto Z^{\varepsilon,h}(t,x)$ is differentiable.

