
SOCIEDADE BRASILEIRA DE MATEMÁTICA
ENSAIOS MATEMÁTICOS 2015, Volume 29, 1–89

Lectures on singular stochastic PDEs

Massimiliano Gubinelli
Nicolas Perkowski

Abstract. These are the notes for a course at the 18th Brazilian School of Probability held from August 3rd to 9th, 2014 in Mambucaba. The aim of the course is to introduce the basic problems of non-linear PDEs with stochastic and irregular terms. We explain how it is possible to handle them using two main techniques: the notion of energy solutions in [Gonçalves and Jara, Arch. Ration. Mech. Anal., 2014] and [Gubinelli and Jara, Stoch. Partial Diff. Equations: Analysis and Computations, 2013], and that of paracontrolled distributions, recently introduced in [Gubinelli, Imkeller, and Perkowski, Forum Math. Pi, 2015]. In order to maintain a link with physical intuition, we motivate such singular SPDEs via a homogenization result for a diffusion in a random potential.

2010 Mathematics Subject Classification: 60H15, 60G15, 35S50.

Contents

1 Introduction

2 Energy solutions
2.1 Distributions
2.2 The Stochastic Burgers equation
2.3 The Ornstein–Uhlenbeck process
2.4 Gaussian computations
2.5 The Itô trick
2.6 Controlled distributions
2.7 Existence of solutions

3 Besov spaces

4 Diffusion in a random environment
4.1 The 2d generalized parabolic Anderson model
4.2 More singular problems
4.3 Hairer's regularity structures

5 The paracontrolled PAM
5.1 The paraproduct and the resonant term
5.2 Commutator estimates and paralinearization
5.3 Paracontrolled distributions
5.4 Fixpoint
5.5 Renormalization
5.6 Construction of the extended data

6 The stochastic Burgers equation
6.1 Structure of the solution
6.2 Paracontrolled solution

Bibliography


Chapter 1

Introduction

The aim of these lectures is to explain how to apply controlled path ideas [12] to solve basic problems in singular stochastic parabolic equations. The hope is that the insight gained by doing so can inspire new applications, or the construction of other, more powerful tools to analyze a wider class of problems.

To understand the origin of such singular equations, we have chosen to present the example of a homogenization problem for a singular potential in a linear parabolic equation. This point of view has the added benefit that it allows us to trace the renormalization needed to handle the singularities back to effects living on scales other than those of interest. The basic problem is that of having to handle the effects of the microscopic scales, and of their interaction through non-linearities, on the macroscopic behaviour of the solution.

Mathematically, this problem translates into the attempt to make Schwartz's theory of distributions coexist with non-linear operations, which are notoriously not continuous in the usual topologies on distributions. This is a very old problem of analysis and has been widely studied. The additional input which is not present in the usual approaches is that the singularities which force us to treat the problem in the setting of Schwartz distributions are of a stochastic nature. So we have two handles on the problem: the analytical one and the probabilistic one. The right mix of the two will provide an effective solution to a wide class of problems.

A first and deep understanding of these problems was obtained, starting from the late '90s, by T. Lyons [25], who introduced a theory of rough paths in order to settle the conflict between topology and non-linearity in the context of driven differential equations, or more generally in the context of the non-linear analysis of time-varying signals. Nowadays there are many expositions of this theory [27, 9, 26, 8] and we refer the reader to the literature for more details.


In [12, 13], the notion of controlled paths was introduced in order to extend the applicability of rough path ideas to a larger class of problems that are not necessarily related to the integration of ODEs, but which still retain the one-dimensional nature of the directions in which the irregularity manifests itself. The controlled path approach has been used to make sense of the evolution of irregular objects such as vortex filaments and certain SPDEs. Later, Hairer understood how to apply these ideas to the long-standing problem of the Kardar–Parisi–Zhang equation [18], and his insights prompted researchers to try more ambitious approaches to extend rough paths to a multidimensional setting.

In [14], in collaboration with P. Imkeller, we introduced a notion of paracontrolled distributions which is suitable for handling a wide class of SPDEs that were well out of reach with previously known methods. Paracontrolled distributions can be understood as an extension of controlled paths to a multidimensional setting, and they are based on new combinations of basic tools from harmonic analysis.

At the same time, Hairer managed to devise a vast generalization of the basic construction of controlled rough paths in the multidimensional and distributional setting, which he called the theory of regularity structures [19], and which subsumes standard analysis based on Hölder spaces and controlled rough path theory but goes well beyond that. Just a few days after the lectures in Mambucaba took place, it was announced that Martin Hairer was awarded a Fields Medal for his work on SPDEs and in particular for his theory of regularity structures [19] as a tool for dealing with singular SPDEs. This prize witnesses the exciting period we are experiencing: we now understand sound lines of attack on long-standing problems, and there are countless opportunities to apply similar ideas to new problems.

The plan of the lectures is the following. We start by discussing energy solutions [10, 11, 15] of the stationary stochastic Burgers equation (one of the avatars of the Kardar–Parisi–Zhang equation).¹ Energy solutions have the advantage of being relatively easy to handle and of being based on tools that are familiar to probabilists. On the other hand, they only apply in the specific example of the stochastic Burgers equation in equilibrium, and here we will only focus on the existence but not on the uniqueness of energy solutions. Starting our lectures in this way will allow us to introduce the reader to SPDEs in a progressive manner, to introduce Gaussian tools on the way (Wick products, hypercontractivity), and to present some of the basic phenomena that appear when dealing with singular SPDEs. Next we set up the analytical tools we need in the rest of the lectures: Besov spaces and some basic harmonic analysis based on

¹The paper [11] is the revised published version of [10]. We would like to cite them together to acknowledge that the notion of energy solutions historically predates that of Hairer in [18].


the Littlewood–Paley decomposition of distributions. In order to motivate the reader and to provide a physical ground for intuition to stand on, we then discuss a homogenization problem for the linear heat equation with a random potential, which describes diffusion in a random environment. This will allow us to derive the need for the weak topologies we shall use, and for irregular objects like the white noise, from first principles and "concrete" applications. The homogenization problem also allows us to see that renormalization effects appear naturally, and to keep track of their mathematical meaning. Starting from these problems we introduce the two-dimensional parabolic Anderson model, the simplest SPDE in which most of the features of more difficult problems are already present, and we explain how to use paraproducts and the paracontrolled ansatz in order to keep the non-linear effect of the singular data under control. Then we return to the stochastic Burgers equation and show how to apply paracontrolled distributions in order to obtain the existence and uniqueness of solutions also in the non-stationary case.

Acknowledgements. The authors would like to thank the two anonymous referees for their careful reading and manifold suggestions, which helped us to greatly improve the manuscript. We would also like to thank the organizers of the Brazilian Summer Schools in Probability for the invitation, and the researchers who attended the meeting for the wonderful atmosphere.

The main part of the research was carried out while N. P. was employed by Université Paris Dauphine. N. P. was supported by the Fondation Sciences Mathématiques de Paris (FSMP) and by a public grant overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (reference: ANR-10-LABX-0098).

Conventions and notations. We write $a \lesssim b$ if there exists a constant $C > 0$, independent of the variables under consideration, such that $a \le C b$. Similarly we define $\gtrsim$. We write $a \simeq b$ if $a \lesssim b$ and $b \lesssim a$. If we want to emphasize the dependence of $C$ on the variable $x$, then we write $a(x) \lesssim_x b(x)$.

If $a$ is a complex number, we write $a^*$ for its complex conjugate. If $i$ and $j$ are index variables of Littlewood–Paley blocks (to be defined below), then $i \lesssim j$ is to be interpreted as $2^i \lesssim 2^j$, and similarly for $\simeq$ and $\gtrsim$. In other words, $i \lesssim j$ means $i \le j + N$ for some fixed $N \in \mathbb{N}$ that does not depend on $i$ or $j$.

We use standard multi-index notation: for $\mu \in \mathbb{N}_0^d$ we write $|\mu| = \mu_1 + \dots + \mu_d$ and $\partial^\mu = \partial^{|\mu|} / \partial^{\mu_1} x_1 \cdots \partial^{\mu_d} x_d$, as well as $x^\mu = x_1^{\mu_1} \cdots x_d^{\mu_d}$ for $x \in \mathbb{R}^d$.

For $\alpha > 0$ we write $C^\alpha_b$ for the bounded functions $F \colon \mathbb{R} \to \mathbb{R}$ which are $\lfloor \alpha \rfloor$ times continuously differentiable, with bounded and $(\alpha - \lfloor \alpha \rfloor)$-Hölder continuous derivatives of order $\lfloor \alpha \rfloor$, equipped with the norm
\[
\|F\|_{C^\alpha_b} = \sup_{\mu : 0 \le |\mu| \le \lfloor \alpha \rfloor} \|\partial^\mu F\|_{L^\infty} + \sup_{\mu : |\mu| = \lfloor \alpha \rfloor} \sup_{x \neq y} \frac{|\partial^\mu F(x) - \partial^\mu F(y)|}{|x - y|^{\alpha - \lfloor \alpha \rfloor}}.
\]
If we write $u \in \mathscr{C}^{\alpha-}$, then that means that $u$ is in $\mathscr{C}^{\alpha-\varepsilon}$ for all $\varepsilon > 0$. The $\mathscr{C}^\alpha$ spaces will be defined below.

If $X$ is a Banach space with norm $\|\cdot\|_X$ and if $T > 0$, then we define $CX$ and $C_T X$ as the spaces of continuous functions from $[0, \infty)$ respectively $[0, T]$ to $X$, and $C_T X$ is equipped with the supremum norm $\|\cdot\|_{C_T X}$. If $\alpha \in (0,1)$ then we write $C^\alpha X$ for the functions in $CX$ that are $\alpha$-Hölder continuous on every interval $[0, T]$, and we write
\[
\|f\|_{C^\alpha_T X} = \sup_{0 \le s < t \le T} \frac{\|f_t - f_s\|_X}{|t - s|^\alpha}.
\]

Chapter 2

Energy solutions

The first issue one encounters when dealing with singular SPDEs is the ill-posed character of the equation, even in a weak sense. Typically, the equation features some non-linearity that does not make sense in the natural spaces where the solutions live, and one has to provide a suitable smaller space in which it is possible to give an appropriate interpretation to the "ambiguous quantities" that appear in the equation.

Energy solutions [11, 15] are a relatively simple tool for coming up with well-defined non-linearities. Moreover, proving the existence of energy solutions, or even convergence to energy solutions, is usually a quite simple problem, at least compared to other approaches like paracontrolled solutions or regularity structures, where already the existence requires quite a large amount of computations, but where uniqueness can be established quite easily afterwards. The main drawback is that we lack general uniqueness results for energy solutions. Only very recently, after the completion of these notes, were we able to prove that energy solutions of the stationary stochastic Burgers equation are unique. This topic will not be touched upon here; the interested reader can find the details in the preprint [17].

2.1 Distributions

We will need to use distributions defined on the $d$-dimensional torus $\mathbb{T}^d$, where $\mathbb{T} = \mathbb{R}/(2\pi\mathbb{Z})$. We collect here some basic results and definitions. The space of distributions $\mathscr{S}' = \mathscr{S}'(\mathbb{T}^d)$ is the set of linear maps $f$ from $\mathscr{S} = C^\infty(\mathbb{T}^d, \mathbb{C})$ to $\mathbb{C}$, such that there exist $k \in \mathbb{N}$ and $C > 0$ with
\[
|\langle f, \varphi \rangle| := |f(\varphi)| \le C \sup_{|\mu| \le k} \|\partial^\mu \varphi\|_{L^\infty(\mathbb{T}^d)}
\]
for all $\varphi \in \mathscr{S}$.

Example 1. Clearly $L^p = L^p(\mathbb{T}^d) \subset \mathscr{S}'$ for all $p \ge 1$, and more generally the space of finite signed measures on $(\mathbb{T}^d, \mathcal{B}(\mathbb{T}^d))$ is contained in $\mathscr{S}'$. Another example of a distribution is $\varphi \mapsto \partial^\mu \varphi(x)$ for $\mu \in \mathbb{N}_0^d$ and $x \in \mathbb{T}^d$.

In particular, the Fourier transform $\mathscr{F} f \colon \mathbb{Z}^d \to \mathbb{C}$,
\[
\mathscr{F} f(k) = \hat f(k) = \langle f, e_k \rangle,
\]
with $e_k = e^{-i\langle k, \cdot\rangle}/(2\pi)^{d/2}$, is defined for all $f \in \mathscr{S}'$, and it satisfies $|\mathscr{F} f(k)| \le |P(k)|$ for a suitable polynomial $P$. Conversely, if $(g(k))_{k \in \mathbb{Z}^d}$ is at most of polynomial growth, then its inverse Fourier transform
\[
\mathscr{F}^{-1} g = \sum_{k \in \mathbb{Z}^d} g(k)\, e_k^*
\]
defines a distribution (here $e_k^* = e^{i\langle k, \cdot\rangle}/(2\pi)^{d/2}$ is the complex conjugate of $e_k$).

Exercise 1. Show that the Fourier transform of $\varphi \in \mathscr{S}$ decays faster than any rational function (we say that it is of rapid decay). Combine this with the fact that $\mathscr{F}$ defines a bijection from $L^2(\mathbb{T}^d)$ to $\ell^2(\mathbb{Z}^d)$ with inverse $\mathscr{F}^{-1}$ to show that $\mathscr{F}^{-1}\mathscr{F} f = f$ for all $f \in \mathscr{S}'$ and $\mathscr{F}\mathscr{F}^{-1} g = g$ for all $g$ of polynomial growth. Extend the Parseval formula
\[
\langle f, \varphi^* \rangle_{L^2(\mathbb{T}^d)} = \int_{\mathbb{T}^d} f(x)\varphi(x)^*\,\mathrm{d}x = \sum_k \hat f(k)\, \hat\varphi(k)^*
\]
from $f, \varphi \in L^2(\mathbb{T}^d)$ to $f \in \mathscr{S}'$ and $\varphi \in \mathscr{S}$.

Exercise 2. Fix a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$. On that space let $\xi$ be a spatial white noise on $\mathbb{T}^d$, i.e. $\xi$ is a centered Gaussian process indexed by $L^2(\mathbb{T}^d)$, with covariance
\[
\mathbb{E}[\xi(f)\xi(g)] = \int_{\mathbb{T}^d} f(x)g(x)\,\mathrm{d}x.
\]
Show that there exists $\tilde\xi$ with $\mathbb{P}(\tilde\xi(f) = \xi(f)) = 1$ for all $f \in L^2$, such that $\tilde\xi(\omega) \in \mathscr{S}'$ for all $\omega \in \Omega$.

Hint: Show that $\mathbb{E}\big[\sum_{k \in \mathbb{Z}^d} \exp(\lambda |\xi(e_k)|^2)/(1 + |k|^{d+1})\big] < \infty$ for some suitable $\lambda > 0$.
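The defining covariance of the white noise can be made concrete numerically. The following minimal sketch (our own illustration, using a grid discretization of $\mathbb{T}$ that is not in the text) builds $\xi(f)$ as a sum of independent cell noises; the resulting random variable is centered Gaussian with variance $\|f\|_{L^2}^2$, as the covariance formula requires.

```python
import numpy as np

# Sketch (assumption: grid discretization, d = 1): a draw of xi(f) is
# sum_j eta_j f(x_j) sqrt(dx) with eta_j i.i.d. N(0,1), so that
# E[xi(f) xi(g)] = sum_j f(x_j) g(x_j) dx ~ <f, g>_{L^2(T)}.
rng = np.random.default_rng(0)
M = 128
x = 2 * np.pi * np.arange(M) / M
dx = 2 * np.pi / M

def sample_xi_of_f(f_vals, n_samples):
    """n_samples draws of xi(f) ~ N(0, ||f||_{L^2}^2)."""
    eta = rng.standard_normal((n_samples, M))   # one N(0,1) per grid cell
    return (eta * f_vals).sum(axis=1) * np.sqrt(dx)

f = np.sin(x)
draws = sample_xi_of_f(f, 50_000)
norm2 = (f**2).sum() * dx                       # ||sin||^2 = pi on the torus
assert abs(draws.mean()) < 0.05                 # centered
assert abs(draws.var() - norm2) < 0.1           # variance = ||f||^2
```

The same construction with $f = e_k$ produces the coefficients $\xi(e_k)$ appearing in the hint.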

Linear maps on $\mathscr{S}'$ can be defined by duality: if $A \colon \mathscr{S} \to \mathscr{S}$ is such that for all $k \in \mathbb{N}$ there exist $n \in \mathbb{N}$ and $C > 0$ with $\sup_{|\mu| \le k} \|\partial^\mu (A\varphi)\|_{L^\infty} \le C \sup_{|\mu| \le n} \|\partial^\mu \varphi\|_{L^\infty}$, then we set $\langle {}^t\!A f, \varphi\rangle = \langle f, A\varphi\rangle$. Differential operators are defined by $\langle \partial^\mu f, \varphi\rangle = (-1)^{|\mu|} \langle f, \partial^\mu \varphi\rangle$. If $\varphi \colon \mathbb{Z}^d \to \mathbb{C}$ grows at most polynomially, then it defines a Fourier multiplier
\[
\varphi(D) \colon \mathscr{S}' \to \mathscr{S}', \qquad \varphi(D) f = \mathscr{F}^{-1}(\varphi\, \mathscr{F} f).
\]

Exercise 3. Use the Fourier inversion formula of Exercise 1 to show that for $f \in \mathscr{S}'$, $\varphi \in \mathscr{S}$, and for $u, v \colon \mathbb{Z}^d \to \mathbb{C}$ with $u$ of polynomial growth and $v$ of rapid decay,
\[
\mathscr{F}(f \varphi)(k) = (2\pi)^{-d/2} \sum_{\ell} \hat f(k - \ell)\, \hat\varphi(\ell)
\]
and
\[
\mathscr{F}^{-1}(uv)(x) = (2\pi)^{d/2}\, \langle \mathscr{F}^{-1} u, (\mathscr{F}^{-1} v)(x - \cdot)\rangle.
\]
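A Fourier multiplier on the torus is straightforward to realize with the FFT. The following sketch (our own illustration, not from the text) applies $\varphi(D)$ with $\varphi(k) = e^{-t k^2}$, i.e. the heat semigroup, and checks the diagonal action on a single mode:

```python
import numpy as np

# Sketch (our own): phi(D) f = F^{-1}(phi * F f) on a grid of T = R/(2 pi Z),
# with the multiplier phi(k) = exp(-t k^2) (the heat semigroup).
M = 64
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)        # integer frequencies -M/2 .. M/2 - 1
t = 0.1

def heat_semigroup(f_vals, t):
    """Apply phi(D) with phi(k) = exp(-t k^2) to grid values of f."""
    return np.real(np.fft.ifft(np.exp(-t * k**2) * np.fft.fft(f_vals)))

# On single modes the multiplier acts diagonally: cos(3x) -> exp(-9t) cos(3x).
g = heat_semigroup(np.cos(3 * x), t)
assert np.allclose(g, np.exp(-9 * t) * np.cos(3 * x), atol=1e-12)
```

The same pattern gives $\Delta = \varphi(D)$ with $\varphi(k) = -k^2$, or the projectors used later in this chapter.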

2.2 The Stochastic Burgers equation

Our aim here is to motivate the ideas at the base of the notion of energy solutions. We will not insist on a detailed formulation of all the available results; the reader can always refer to the original paper [15] for missing details. Applications to the large scale behavior of particle systems are studied in [11].

We will study the case of the stochastic Burgers equation on the torus $\mathbb{T}$. The solution of the stochastic Burgers equation is the derivative of the solution of the Kardar–Parisi–Zhang equation, a universal model for the fluctuations in random interface growth which has been at the center of several spectacular results of the past years. Excellent surveys on the KPZ equation and related areas are [6, 28, 29].

The unknown $u \colon \mathbb{R}_+ \times \mathbb{T} \to \mathbb{R}$ should satisfy
\[
\partial_t u = \Delta u + \partial_x u^2 + \partial_x \xi,
\]
where $\xi \colon \mathbb{R}_+ \times \mathbb{T} \to \mathbb{R}$ is a space–time white noise defined on a given probability space $(\Omega, \mathcal{F}, \mathbb{P})$ fixed once and for all. That is, $\xi$ is a centered Gaussian process indexed by $L^2(\mathbb{R}_+ \times \mathbb{T})$ with covariance
\[
\mathbb{E}[\xi(f)\xi(g)] = \int_{\mathbb{R}_+ \times \mathbb{T}} f(t,x)\, g(t,x)\,\mathrm{d}t\,\mathrm{d}x.
\]

The equation has to be understood as a relation for processes which are distributions in space with sufficiently regular time dependence. In particular, if we test the above relation with $\varphi \in \mathscr{S} := \mathscr{S}(\mathbb{T}) := C^\infty(\mathbb{T})$, denote by $u_t(\varphi)$ the pairing of the distribution $u(t, \cdot)$ with $\varphi$, and integrate in time over the interval $[0,t]$, we formally get
\[
u_t(\varphi) = u_0(\varphi) + \int_0^t u_s(\Delta \varphi)\,\mathrm{d}s - \int_0^t \langle u_s^2, \partial_x \varphi\rangle\,\mathrm{d}s - \int_0^t \xi_s(\partial_x \varphi)\,\mathrm{d}s.
\]

Let us discuss the various terms in this equation. In order to make sense of $u_t(\varphi)$ and $\int_0^t u_s(\Delta\varphi)\,\mathrm{d}s$, it is enough to assume that for all $\varphi \in \mathscr{S}$ the mapping $(t,\omega) \mapsto u_t(\varphi)(\omega)$ is a stochastic process with continuous trajectories. Next, if we denote $M_t(\varphi) = -\int_0^t \xi_s(\partial_x \varphi)\,\mathrm{d}s$ then, at least by a formal computation, we have that $(M_t(\varphi))_{t \ge 0, \varphi \in \mathscr{S}}$ is a Gaussian random field with covariance
\[
\mathbb{E}[M_t(\varphi) M_s(\psi)] = (t \wedge s)\, \langle \partial_x \varphi, \partial_x \psi\rangle_{L^2(\mathbb{T})}.
\]
In particular, for every $\varphi \in \mathscr{S}$ the stochastic process $(M_t(\varphi))_{t \ge 0}$ is a Brownian motion with variance parameter
\[
\|\varphi\|_{H^1(\mathbb{T})}^2 := \langle \partial_x \varphi, \partial_x \varphi\rangle_{L^2(\mathbb{T})}.
\]
We will use this fact to give a rigorous interpretation of the white noise $\xi$ appearing in the equation. Here we used the notation $M$ in order to stress the fact that $M_t(\varphi)$ is a martingale in its natural filtration, and more generally in the filtration $\mathcal{F}_t = \sigma(M_s(\varphi) : s \le t,\ \varphi \in H^1(\mathbb{T}))$, $t \ge 0$.

The most difficult term is of course the nonlinear one: $\int_0^t \langle u_s^2, \partial_x \varphi\rangle\,\mathrm{d}s$. In order to define it, we need to square the distribution $u_t$, an operation which in general can be quite dangerous. A natural approach would be to define it as the limit of some regularizations. For example, if $\rho \colon \mathbb{R} \to \mathbb{R}_+$ is a compactly supported $C^\infty$ function such that $\int_{\mathbb{R}} \rho(x)\,\mathrm{d}x = 1$, and we set $\rho_\varepsilon(\cdot) = \rho(\cdot/\varepsilon)/\varepsilon$, then we can set $N_{t,\varepsilon}(u)(x) = \int_0^t ((\rho_\varepsilon * u_s)(x))^2\,\mathrm{d}s$ and define $N_t(u) = \lim_{\varepsilon \to 0} N_{t,\varepsilon}(u)$ whenever the limit exists in $\mathscr{S}' := \mathscr{S}'(\mathbb{T})$, the space of distributions on $\mathbb{T}$. Then the question arises which properties $u$ should have for this convergence to occur.
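For a smooth function the regularize-then-square recipe converges without difficulty; the delicate point is precisely that $u_s$ is only a distribution. A minimal sketch (our own illustration, using a Gaussian bump in place of the compactly supported $\rho$):

```python
import numpy as np

# Sketch (our own): mollify a function on T with rho_eps and square it.
# For smooth u the squares (rho_eps * u)^2 converge to u^2 as eps -> 0;
# for rough objects like white noise this is exactly what may fail.
M = 2048
x = 2 * np.pi * np.arange(M) / M
dx = 2 * np.pi / M

def mollify(u, eps):
    """Circular convolution with a Gaussian approximation of rho_eps."""
    z = np.minimum(x, 2 * np.pi - x)             # distance to 0 on the torus
    rho = np.exp(-(z / eps)**2 / 2)
    rho /= rho.sum() * dx                        # unit mass
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(rho))) * dx

u = np.sin(x)
errs = [np.max(np.abs(mollify(u, eps)**2 - u**2)) for eps in (0.2, 0.1, 0.05)]
assert errs[0] > errs[1] > errs[2]               # convergence as eps -> 0
assert errs[2] < 1e-2
```

Replacing `u` by a sampled white noise makes the squared mollifications blow up instead of converging, which is the phenomenon quantified in the next sections.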

2.3 The Ornstein–Uhlenbeck process

Let us simplify the problem and start by studying the linearized equation obtained by neglecting the non-linear term. Let $X$ be a solution to
\[
X_t(\varphi) = X_0(\varphi) + \int_0^t X_s(\Delta\varphi)\,\mathrm{d}s + M_t(\varphi) \tag{2.1}
\]
for all $t \ge 0$ and $\varphi \in \mathscr{S}$. This equation has at most one solution (for fixed $X_0$). Indeed, the difference $D$ between two solutions should satisfy $D_t(\varphi) = \int_0^t D_s(\Delta\varphi)\,\mathrm{d}s$, which means that $D$ is a distributional solution to the heat equation. Taking $\varphi(x) = e_k(x)$, where
\[
e_k(x) := \exp(-ikx)/\sqrt{2\pi}, \qquad k \in \mathbb{Z},
\]
we get $D_t(e_k) = -k^2 \int_0^t D_s(e_k)\,\mathrm{d}s$, and then by Gronwall's inequality $D_t(e_k) = 0$ for all $t \ge 0$. This easily implies that $D_t = 0$ in $\mathscr{S}'$ for all $t \ge 0$.

To obtain the existence of a solution, observe that
\[
X_t(e_k) = X_0(e_k) - k^2 \int_0^t X_s(e_k)\,\mathrm{d}s + M_t(e_k)
\]
and that $M_t(e_0) = 0$, while for all $k \neq 0$ the process $\beta_t(k) = M_t(e_k)/(-ik)$ is a complex valued Brownian motion (i.e. real and imaginary part are independent Brownian motions with the same variance). The covariance of $\beta$ is given by
\[
\mathbb{E}[\beta_t(k)\beta_s(m)] = (t \wedge s)\,\delta_{k+m=0},
\]
and moreover $\beta_t(k)^* = \beta_t(-k)$ for all $k \neq 0$ (where $\cdot^*$ denotes complex conjugation), as well as $\beta_t(0) = 0$. In other words, $(X_t(e_k))$ is a complex-valued Ornstein–Uhlenbeck process ([23], Example 5.6.8) which solves a linear one-dimensional SDE and has an explicit representation given by the variation of constants formula
\[
X_t(e_k) = e^{-k^2 t} X_0(e_k) - ik \int_0^t e^{-k^2(t-s)}\,\mathrm{d}\beta_s(k).
\]
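The variation of constants formula also gives the exact Gaussian transition of each Fourier mode, which can be simulated directly. A Monte Carlo sketch (our own illustration, not from the text):

```python
import numpy as np

# Sketch (our own): simulate the complex OU mode X_t(e_k) from X_0 = 0 using
# the exact Gaussian transition over a step of size h, and compare the
# empirical second moment at time T with (1 - exp(-2 k^2 T)) / 2.
rng = np.random.default_rng(1)
k, T, n_steps, n_paths = 2, 0.5, 50, 100_000
h = T / n_steps
decay = np.exp(-k**2 * h)
step_var = (1 - np.exp(-2 * k**2 * h)) / 2       # E|increment|^2 per step
X = np.zeros(n_paths, dtype=complex)
for _ in range(n_steps):
    noise = rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths)
    X = decay * X + np.sqrt(step_var / 2) * noise  # each part has var step_var/2
second_moment = np.mean(np.abs(X)**2)
exact = (1 - np.exp(-2 * k**2 * T)) / 2
assert abs(second_moment - exact) < 0.01
```

The transition used here is exact (not an Euler discretization), so the only error in the comparison is statistical.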

This is enough to determine $X_t(\varphi)$ for all $t \ge 0$ and $\varphi \in \mathscr{S}$.

Exercise 4. Show that $(X_t(e_k) : t \in \mathbb{R}_+, k \in \mathbb{Z})$ is a complex Gaussian random field, that is, for all $n \in \mathbb{N}$, for all $t_1, \dots, t_n \in \mathbb{R}_+$ and $k_1, \dots, k_n \in \mathbb{Z}$, the vector
\[
(\mathrm{Re}(X_{t_1}(e_{k_1})), \dots, \mathrm{Re}(X_{t_n}(e_{k_n})), \mathrm{Im}(X_{t_1}(e_{k_1})), \dots, \mathrm{Im}(X_{t_n}(e_{k_n})))
\]
is multivariate Gaussian. Show that $X$ has mean $\mathbb{E}[X_t(e_k)] = e^{-k^2 t} X_0(e_k)$ and covariance
\[
\mathbb{E}[(X_t(e_k) - \mathbb{E}[X_t(e_k)])(X_s(e_m) - \mathbb{E}[X_s(e_m)])] = k^2\, \delta_{k+m=0} \int_0^{t \wedge s} e^{-k^2(t-r) - k^2(s-r)}\,\mathrm{d}r
\]
as well as
\[
\mathbb{E}[(X_t(e_k) - \mathbb{E}[X_t(e_k)])(X_s(e_m) - \mathbb{E}[X_s(e_m)])^*] = k^2\, \delta_{k=m} \int_0^{t \wedge s} e^{-k^2(t-r) - k^2(s-r)}\,\mathrm{d}r.
\]
In particular,
\[
\mathbb{E}[|X_t(e_k) - \mathbb{E}[X_t(e_k)]|^2] = \frac{1 - e^{-2k^2 t}}{2}.
\]
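The closed form for the variance follows from the elementary integral $k^2 \int_0^t e^{-2k^2(t-r)}\,\mathrm{d}r = (1 - e^{-2k^2 t})/2$, which is easy to confirm numerically. A small sketch (our own check, not from the text):

```python
import numpy as np

# Sketch (our own): verify k^2 * int_0^t exp(-2 k^2 (t - r)) dr
#                        = (1 - exp(-2 k^2 t)) / 2 by midpoint quadrature.
def variance_by_quadrature(k, t, n=200_000):
    h = t / n
    r = (np.arange(n) + 0.5) * h                 # midpoint rule
    return k**2 * np.exp(-2 * k**2 * (t - r)).sum() * h

for k in (1, 2, 5):
    for t in (0.1, 1.0):
        exact = (1 - np.exp(-2 * k**2 * t)) / 2
        assert abs(variance_by_quadrature(k, t) - exact) < 1e-6
```

Note that as $t \to \infty$ the variance tends to $1/2$ for every $k \neq 0$, which is the stationary value that reappears below when the invariant measure is identified.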

Next we examine the Sobolev regularity of $X$. For this purpose, we need the following definition.


Definition 1. Let $\alpha \in \mathbb{R}$. Then the Sobolev space $H^\alpha$ is defined as
\[
H^\alpha := H^\alpha(\mathbb{T}) := \Big\{ \rho \in \mathscr{S}' : \|\rho\|_{H^\alpha}^2 := \sum_{k \in \mathbb{Z}} (1 + |k|^2)^\alpha\, |\rho(e_k)|^2 < \infty \Big\}.
\]
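The $H^\alpha$ norm is a weighted $\ell^2$ norm of Fourier coefficients and is trivial to compute for truncated series. A sketch (our own illustration, not from the text):

```python
import numpy as np

# Sketch (our own): the (truncated) H^alpha norm from Fourier coefficients.
# For rho concentrated on a single mode m, the squared norm is (1 + m^2)^alpha.
def h_alpha_norm_sq(coeffs, ks, alpha):
    """coeffs[i] = rho(e_{ks[i]}); returns sum (1+|k|^2)^alpha |rho(e_k)|^2."""
    return np.sum((1 + ks.astype(float)**2)**alpha * np.abs(coeffs)**2)

N = 100
ks = np.arange(-N, N + 1)
coeffs = np.where(ks == 3, 1.0, 0.0)        # only the mode k = 3 is charged
assert np.isclose(h_alpha_norm_sq(coeffs, ks, -0.5), (1 + 9.0)**-0.5)
```

For white-noise-like coefficients with $|\rho(e_k)| \simeq 1$ the sum behaves like $\sum_k (1+|k|^2)^\alpha$, which is finite exactly when $\alpha < -1/2$, in line with the regularity statements that follow.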

We also write $C H^\alpha$ for the space of continuous functions from $\mathbb{R}_+$ to $H^\alpha$.

Lemma 1. Let $\gamma \le -1/2$ and assume that $X_0 \in H^\gamma$. Then almost surely $X \in C H^{\gamma-}$.

Proof. Let $\alpha = \gamma - \varepsilon$ and consider
\[
\|X_t - X_s\|_{H^\alpha}^2 = \sum_{k \in \mathbb{Z}} (1 + |k|^2)^\alpha\, |X_t(e_k) - X_s(e_k)|^2.
\]

Let us estimate the $L^{2p}(\Omega)$ norm of this quantity for $p \in \mathbb{N}$ by writing
\[
\mathbb{E}\|X_t - X_s\|_{H^\alpha}^{2p} = \sum_{k_1, \dots, k_p \in \mathbb{Z}} \prod_{i=1}^p (1 + |k_i|^2)^\alpha\; \mathbb{E}\prod_{i=1}^p |X_t(e_{k_i}) - X_s(e_{k_i})|^2.
\]
By Hölder's inequality, we get
\[
\mathbb{E}\|X_t - X_s\|_{H^\alpha}^{2p} \lesssim \sum_{k_1, \dots, k_p \in \mathbb{Z}} \prod_{i=1}^p (1 + |k_i|^2)^\alpha \prod_{i=1}^p \big(\mathbb{E}|X_t(e_{k_i}) - X_s(e_{k_i})|^{2p}\big)^{1/p}.
\]
Note now that $X_t(e_{k_i}) - X_s(e_{k_i})$ is a Gaussian random variable, so that there exists a universal constant $C_p$ for which
\[
\mathbb{E}|X_t(e_{k_i}) - X_s(e_{k_i})|^{2p} \le C_p \big(\mathbb{E}|X_t(e_{k_i}) - X_s(e_{k_i})|^2\big)^p.
\]
Moreover,
\[
X_t(e_k) - X_s(e_k) = (e^{-k^2(t-s)} - 1) X_s(e_k) - ik \int_s^t e^{-k^2(t-r)}\,\mathrm{d}\beta_r(k),
\]
leading to
\begin{align*}
\mathbb{E}|X_t(e_k) - X_s(e_k)|^2 &= (e^{-k^2(t-s)} - 1)^2\, \mathbb{E}|X_s(e_k)|^2 + k^2 \int_s^t e^{-2k^2(t-r)}\,\mathrm{d}r \\
&= (e^{-k^2(t-s)} - 1)^2 e^{-2k^2 s} |X_0(e_k)|^2 + (e^{-k^2(t-s)} - 1)^2 k^2 \int_0^s e^{-2k^2(s-r)}\,\mathrm{d}r \\
&\quad + k^2 \int_s^t e^{-2k^2(t-r)}\,\mathrm{d}r \\
&= (e^{-k^2 t} - e^{-k^2 s})^2 |X_0(e_k)|^2 + \tfrac12 (e^{-k^2(t-s)} - 1)^2 (1 - e^{-2k^2 s}) \\
&\quad + \tfrac12 (1 - e^{-2k^2(t-s)}).
\end{align*}


For any $\kappa \in [0,1]$ and $k \neq 0$, we thus have
\[
\mathbb{E}|X_t(e_k) - X_s(e_k)|^2 \lesssim (k^2(t-s))^\kappa\, (|X_0(e_k)|^2 + 1),
\]
while for $k = 0$ we have $\mathbb{E}|X_t(e_0) - X_s(e_0)|^2 = 0$. Let us introduce the notation $\mathbb{Z}_0 = \mathbb{Z} \setminus \{0\}$. Therefore,
\begin{align*}
\mathbb{E}\|X_t - X_s\|_{H^\alpha}^{2p} &\lesssim \sum_{k_1, \dots, k_p \in \mathbb{Z}_0} \prod_{i=1}^p (1 + |k_i|^2)^\alpha \prod_{i=1}^p \mathbb{E}|X_t(e_{k_i}) - X_s(e_{k_i})|^2 \\
&\lesssim (t-s)^{\kappa p} \sum_{k_1, \dots, k_p \in \mathbb{Z}_0} \prod_{i=1}^p (1 + |k_i|^2)^\alpha (k_i^2)^\kappa (|X_0(e_{k_i})|^2 + 1) \\
&\lesssim (t-s)^{\kappa p} \Big[ \sum_{k \in \mathbb{Z}_0} (1 + |k|^2)^\alpha (k^2)^\kappa (|X_0(e_k)|^2 + 1) \Big]^p \\
&\lesssim (t-s)^{\kappa p} \Big( \|X_0\|_{H^{\alpha+\kappa}(\mathbb{T})}^{2p} + \Big[ \sum_{k \in \mathbb{Z}_0} (1 + |k|^2)^\alpha (k^2)^\kappa \Big]^p \Big).
\end{align*}
If $\alpha < -1/2 - \kappa$, the sum on the right hand side is finite and we obtain an estimate for the modulus of continuity of $t \mapsto X_t$ in $L^{2p}(\Omega; H^\alpha)$:
\[
\mathbb{E}\|X_t - X_s\|_{H^\alpha}^{2p} \lesssim (t-s)^{\kappa p} \big[ 1 + \|X_0\|_{H^{\alpha+\kappa}}^{2p} \big].
\]
Now Kolmogorov's continuity criterion allows us to conclude that almost surely $X \in C H^\alpha$ whenever $X_0 \in H^{\alpha+\kappa}$. $\square$

Now note that the regularity of the Ornstein–Uhlenbeck process does not allow us to form the quantity $X_t^2$ pointwise in time, since by Fourier inversion $X_t = \sum_k X_t(e_k)\, e_k^*$, and therefore we should have
\[
X_t^2(e_k) = (2\pi)^{-1/2} \sum_{\ell + m = k} X_t(e_\ell)\, X_t(e_m).
\]

Of course, at the moment this expression is purely formal, since we cannot guarantee that the infinite sum converges. A reasonable thing to try is to approximate the square by regularizing the distribution, taking the square, and then trying to remove the regularization. Let $\Pi_N$ be the projector of a distribution onto a finite number of Fourier modes:
\[
(\Pi_N \rho)(x) = \sum_{|k| \le N} \rho(e_k)\, e_k^*(x).
\]
Then $\Pi_N X_t(x)$ is a smooth function of $x$ and we can consider $(\Pi_N X_t)^2$, which satisfies
\[
(\Pi_N X_t)^2(e_k) = (2\pi)^{-1/2} \sum_{\ell + m = k} \mathbb{1}_{|\ell| \le N, |m| \le N}\, X_t(e_\ell)\, X_t(e_m).
\]
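The convolution identity for the coefficients of the square is a deterministic fact about trigonometric polynomials and can be checked directly. A sketch (our own illustration, not from the text), using the normalization $e_k = e^{-ikx}/\sqrt{2\pi}$:

```python
import numpy as np

# Sketch (our own): for a trigonometric polynomial rho = Pi_N rho, verify
# rho^2(e_k) = (2 pi)^(-1/2) * sum_{l+m=k} rho(e_l) rho(e_m)
# by comparing a quadrature Fourier coefficient with the coefficient convolution.
N = 2
ls = np.arange(-N, N + 1)
c = np.array([1 - 1j, 0.5j, 2.0, -0.5j, 1 + 1j])   # rho(e_l), Hermitian -> rho real
M = 256
x = 2 * np.pi * np.arange(M) / M
dx = 2 * np.pi / M
rho = sum(c[i] * np.exp(1j * ls[i] * x) for i in range(len(ls))) / np.sqrt(2 * np.pi)
assert np.allclose(rho.imag, 0, atol=1e-12)

k = 1
# left: Fourier coefficient of rho^2 by quadrature, <rho^2, e_k>
lhs = np.sum(rho.real**2 * np.exp(-1j * k * x)) / np.sqrt(2 * np.pi) * dx
# right: convolution of the coefficients
rhs = sum(c[i] * c[j] for i in range(len(ls)) for j in range(len(ls))
          if ls[i] + ls[j] == k) / np.sqrt(2 * np.pi)
assert np.allclose(lhs, rhs, atol=1e-10)
```

What fails for $X_t$ is not this identity but the convergence of the convolution sum as $N \to \infty$, as the next computation shows.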


We would then like to take the limit $N \to +\infty$. For convenience, we will perform the computations below in the limit $N = +\infty$, but one has to come back to the case of finite $N$ in order to make them rigorous.

Then
\begin{align*}
\mathbb{E}[X_t^2(e_k)] &= (2\pi)^{-1/2}\, \delta_{k=0} \sum_{m \in \mathbb{Z}_0} \mathbb{E}[X_t(e_{-m})\, X_t(e_m)] \\
&= (2\pi)^{-1/2}\, \delta_{k=0} \sum_{m \in \mathbb{Z}_0} e^{-2m^2 t} |X_0(e_m)|^2 + (2\pi)^{-1/2}\, \delta_{k=0} \sum_{m \in \mathbb{Z}_0} m^2 \int_0^t e^{-2m^2(t-s)}\,\mathrm{d}s
\end{align*}
and
\[
\sum_{m \in \mathbb{Z}_0} m^2 \int_0^t e^{-2m^2(t-s)}\,\mathrm{d}s = \frac12 \sum_{m \in \mathbb{Z}_0} (1 - e^{-2m^2 t}) = +\infty.
\]

This is not really a problem, since in Burgers' equation only the components of $u_t^2(e_k)$ with $k \neq 0$ appear (due to the presence of the derivative). However, $X_t^2(e_k)$ is not even a well-defined random variable. For the remainder of this subsection let us assume that $X_0 = 0$, which will slightly simplify the computation. If $k \neq 0$, we have
\[
\mathbb{E}[|X_t^2(e_k)|^2] = \mathbb{E}[X_t^2(e_k)\, X_t^2(e_{-k})] = (2\pi)^{-1} \sum_{\ell + m = k} \sum_{\ell' + m' = -k} \mathbb{E}[X_t(e_\ell) X_t(e_m) X_t(e_{\ell'}) X_t(e_{m'})].
\]
By Wick's theorem (see [22], Theorem 1.28), the expectation can be computed in terms of the covariances of all possible pairings of the four Gaussian random variables (3 possible combinations):
\begin{align*}
\mathbb{E}[X_t(e_\ell) X_t(e_m) X_t(e_{\ell'}) X_t(e_{m'})] &= \mathbb{E}[X_t(e_\ell) X_t(e_m)]\, \mathbb{E}[X_t(e_{\ell'}) X_t(e_{m'})] \\
&\quad + \mathbb{E}[X_t(e_\ell) X_t(e_{\ell'})]\, \mathbb{E}[X_t(e_m) X_t(e_{m'})] \\
&\quad + \mathbb{E}[X_t(e_\ell) X_t(e_{m'})]\, \mathbb{E}[X_t(e_m) X_t(e_{\ell'})].
\end{align*}
Since $k \neq 0$, we have $\ell + m \neq 0$ and $\ell' + m' \neq 0$, which allows us to neglect the first term since it is zero. By symmetry of the summations, the two other terms give the same contribution and we remain with
\begin{align*}
\mathbb{E}[|X_t^2(e_k)|^2] &= \frac{1}{\pi} \sum_{\ell + m = k} \sum_{\ell' + m' = -k} \mathbb{E}[X_t(e_\ell) X_t(e_{\ell'})]\, \mathbb{E}[X_t(e_m) X_t(e_{m'})] \tag{2.2} \\
&= \frac{1}{\pi} \sum_{\ell + m = k} \mathbb{E}[X_t(e_\ell) X_t(e_{-\ell})]\, \mathbb{E}[X_t(e_m) X_t(e_{-m})] \\
&= \frac{1}{4\pi} \sum_{\ell + m = k} (1 - e^{-2\ell^2 t})(1 - e^{-2m^2 t}) = +\infty.
\end{align*}
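The divergence is visible already at moderate truncation levels: since each summand tends to $1$, the partial sums grow linearly in the cutoff. A sketch (our own check, not from the text):

```python
import numpy as np

# Sketch (our own): truncated version of the last sum,
# S_N = (4 pi)^(-1) * sum_{|l| <= N, m = k - l} (1 - e^{-2 l^2 t})(1 - e^{-2 m^2 t}).
# Each term tends to 1, so S_N grows linearly in N: the variance is infinite.
def S(N, k=1, t=1.0):
    l = np.arange(-N, N + 1).astype(float)
    m = k - l
    return np.sum((1 - np.exp(-2 * l**2 * t))
                  * (1 - np.exp(-2 * m**2 * t))) / (4 * np.pi)

assert S(800) > 1.9 * S(400) > 0     # roughly doubles when N doubles
```

This is the quantitative counterpart of the statement that $X_t^2(e_k)$ cannot be defined as an $L^2(\Omega)$ random variable.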


This shows that even when tested against smooth test functions, $X_t^2$ is not in $L^2(\Omega)$. This indicates that there are problems with $X_t^2$, and indeed one can show that $X_t^2(e_k)$ does not make sense as a random variable.

To understand this better, observe that the Ornstein–Uhlenbeck process can be decomposed as
\[
X_t(e_k) = ik \int_{-\infty}^t e^{-k^2(t-s)}\,\mathrm{d}\beta_s(k) - ik\, e^{-k^2 t} \int_{-\infty}^0 e^{k^2 s}\,\mathrm{d}\beta_s(k),
\]
where we extended the Brownian motions $(\beta_s(k))_{s \ge 0}$ to two-sided complex Brownian motions by considering independent copies. The interest of this decomposition lies in the fact that it is not difficult to show that the second term gives rise to a smooth function if $t > 0$, so all the irregularity of $X_t$ is described by the first term, which we call $Y_t(e_k)$ and which is stationary in time. Note that $Y_t(e_k) \sim \mathscr{N}_{\mathbb{C}}(0, 1/2)$ for all $k \in \mathbb{Z}_0$ and $t \in \mathbb{R}$, where we write
\[
U \sim \mathscr{N}_{\mathbb{C}}(0, \sigma^2)
\]
if $U = V + iW$, where $V$ and $W$ are independent random variables with distribution $\mathscr{N}(0, \sigma^2/2)$. The random distribution $Y_t$ then satisfies $Y_t(\varphi) \sim \mathscr{N}(0, \|\varphi\|_{L^2(\mathbb{T})}^2/2)$, and moreover it is ($1/\sqrt{2}$ times) the white noise on $\mathbb{T}$. It is also possible to deduce that the white noise on $\mathbb{T}$ is indeed the invariant measure of the Ornstein–Uhlenbeck process, that it is the only one, and that it is approached quite fast [23].

So we should expect that, at fixed time, the regularity of the Ornstein–Uhlenbeck process is like that of the spatial white noise, and this is one way of understanding our difficulties in defining $X_t^2$, since this will be, modulo smooth terms, the square of the spatial white noise.

A different matter is to make sense of the time integral of $\partial_x X_t^2$. Let us give it a name and call it $J_t(\varphi) = \int_0^t \partial_x X_s^2(\varphi)\,\mathrm{d}s$. For $J_t(e_k)$, the computation of the variance gives a quite different result.

Lemma 2. Almost surely, $J \in C^{1/2-} H^{-1/2-}$.

Proof. Proceeding as in (2.2), we now have
\[
\mathbb{E}[|J_t(e_k)|^2] = \frac{1}{\pi} k^2 \int_0^t \int_0^t \sum_{\ell + m = k} \mathbb{E}[X_s(e_\ell)\, X_{s'}(e_{-\ell})]\, \mathbb{E}[X_s(e_m)\, X_{s'}(e_{-m})]\,\mathrm{d}s\,\mathrm{d}s'.
\]
If $s > s'$, we have
\[
\mathbb{E}[X_s(e_\ell)\, X_{s'}(e_{-\ell})] = \frac12\, e^{-\ell^2(s - s')}(1 - e^{-2\ell^2 s'}),
\]


and therefore
\begin{align*}
\mathbb{E}[|J_t(e_k)|^2] &= \frac{k^2}{4\pi} \int_0^t \int_0^t \sum_{\ell+m=k} e^{-(\ell^2+m^2)|s-s'|}\, (1 - e^{-2\ell^2(s' \wedge s)})(1 - e^{-2m^2(s' \wedge s)})\,\mathrm{d}s\,\mathrm{d}s' \\
&\le \frac{k^2}{4\pi} \int_0^t \int_0^t \sum_{\ell+m=k} e^{-(\ell^2+m^2)|s-s'|}\,\mathrm{d}s\,\mathrm{d}s' \\
&\le \frac{1}{2\pi} k^2 t \sum_{\ell+m=k} \int_0^\infty e^{-(\ell^2+m^2) r}\,\mathrm{d}r = \frac{1}{2\pi} k^2 t \sum_{\ell+m=k} \frac{1}{\ell^2 + m^2}.
\end{align*}
Now for $k \neq 0$,
\[
\sum_{\ell+m=k} \frac{1}{\ell^2 + m^2} \lesssim \int_{\mathbb{R}} \frac{\mathrm{d}x}{x^2 + (k-x)^2} \lesssim \frac{1}{|k|}.
\]
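The uniform bound behind this comparison is easy to test numerically. A sketch (our own check, not from the text):

```python
import numpy as np

# Sketch (our own): check that |k| * sum_{l+m=k} 1/(l^2 + m^2) is bounded in k,
# the estimate used to get E[|J_t(e_k)|^2] <~ |k| t. (The continuum analogue
# integral evaluates to pi/|k|, so values near pi are expected.)
def weighted_sum(k, N=100_000):
    l = np.arange(-N, N + 1).astype(float)
    m = k - l
    return abs(k) * np.sum(1.0 / (l**2 + m**2))

vals = [weighted_sum(k) for k in (1, 10, 100, 1000)]
assert max(vals) < 10.0              # bounded uniformly in k
```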

So finally $\mathbb{E}[|J_t(e_k)|^2] \lesssim |k| t$, from which it is easy to conclude that at fixed $t$ the random field $J_t$ belongs almost surely to $H^{-1/2-}$. Redoing a similar computation for $J_t(e_k) - J_s(e_k)$, we obtain $\mathbb{E}[|J_t(e_k) - J_s(e_k)|^2] \lesssim |k| \times |t-s|$. To go from this estimate to a pathwise regularity result for the distribution $(J_t)_t$, following the line of reasoning of Lemma 1, we need to estimate the $p$-th moment of $J_t(e_k) - J_s(e_k)$. We already used in the proof of Lemma 1 that all moments of a Gaussian random variable are comparable. By Gaussian hypercontractivity (see Theorem 3.50 of [22]), this also holds for polynomials of Gaussian random variables, so that
\[
\mathbb{E}[|J_t(e_k) - J_s(e_k)|^{2p}] \lesssim_p \big(\mathbb{E}[|J_t(e_k) - J_s(e_k)|^2]\big)^p.
\]
From here we easily derive that almost surely $J \in C^{1/2-} H^{-1/2-}$, which is the space of $(1/2-)$-Hölder continuous functions with values in $H^{-1/2-}$. $\square$

This shows that $\partial_x X_t^2$ exists as a space–time distribution, but not as a continuous function of time with values in distributions in space. The key point in the proof of Lemma 2 is the fact that the correlation $\mathbb{E}[X_s(e_\ell)\, X_{s'}(e_{-\ell})]$ of the Ornstein–Uhlenbeck process decays quite rapidly in time.

The construction of the process $J$ does not solve our problem of constructing $\int_0^t \partial_x u_s^2\,\mathrm{d}s$, since we need similar properties for the full solution $u$ of the non-linear dynamics (or for some approximations thereof), and all we have done so far relies on explicit computations and the specific Gaussian features of the Ornstein–Uhlenbeck process. But at least this gives us a hint that there could indeed exist a way of making sense of the term $\partial_x u(t,x)^2$, even if only as a space–time distribution, and that in doing so we should exploit some decorrelation properties of the dynamics.


So when dealing with the full solution $u$, we need a replacement for the Gaussian computations based on the explicit distribution of $X$ that we used above. This will be provided, in the current setting, by stochastic calculus along the time direction. Indeed, note that for each $\varphi \in \mathscr{S}$ the process $(X_t(\varphi))_{t \ge 0}$ is a semimartingale in the filtration $(\mathcal{F}_t)_{t \ge 0}$.

Before proceeding with these computations, we need to develop some tools to describe the Itô formula for functions of the Ornstein–Uhlenbeck process. This will also serve us as an opportunity to set up some analysis on Gaussian spaces.

2.4 Gaussian computations

For cylindrical functions $F \colon \mathscr{S}' \to \mathbb{R}$ of the form $F(\rho) = f(\rho(\varphi_1), \dots, \rho(\varphi_n))$ with $\varphi_1, \dots, \varphi_n \in \mathscr{S}$ and $f \colon \mathbb{R}^n \to \mathbb{R}$ at least $C^2_b$, we have by Itô's formula
\[
\mathrm{d}_t F(X_t) = \sum_{i=1}^n F_i(X_t)\,\mathrm{d}X_t(\varphi_i) + \frac12 \sum_{i,j=1}^n F_{i,j}(X_t)\,\mathrm{d}\langle X(\varphi_i), X(\varphi_j)\rangle_t,
\]
where $\langle \cdot, \cdot \rangle_t$ denotes the quadratic covariation of two continuous semimartingales, and where $F_i(\rho) = \partial_i f(\rho(\varphi_1), \dots, \rho(\varphi_n))$ and $F_{i,j}(\rho) = \partial^2_{i,j} f(\rho(\varphi_1), \dots, \rho(\varphi_n))$, with $\partial_i$ denoting the derivative with respect to the $i$-th argument. Now recall that $\mathrm{d}X_t(\varphi_i) = X_t(\Delta\varphi_i)\,\mathrm{d}t + \mathrm{d}M_t(\varphi_i)$ is a continuous semimartingale, and therefore
\[
\mathrm{d}\langle X(\varphi_i), X(\varphi_j)\rangle_t = \mathrm{d}\langle M(\varphi_i), M(\varphi_j)\rangle_t = \langle \partial_x \varphi_i, \partial_x \varphi_j\rangle_{L^2(\mathbb{T})}\,\mathrm{d}t,
\]
and then
\[
\mathrm{d}_t F(X_t) = \sum_{i=1}^n F_i(X_t)\,\mathrm{d}M_t(\varphi_i) + L_0 F(X_t)\,\mathrm{d}t,
\]
where $L_0$ is the second-order differential operator defined on cylindrical functions $F$ as
\[
L_0 F(\rho) = \sum_{i=1}^n F_i(\rho)\, \rho(\Delta\varphi_i) + \sum_{i,j=1}^n \frac12 F_{i,j}(\rho)\, \langle \partial_x \varphi_i, \partial_x \varphi_j\rangle_{L^2(\mathbb{T})}. \tag{2.3}
\]
Another way to describe the generator $L_0$ is to give its value on the functions $\rho \mapsto \exp(\rho(\psi))$ for $\psi \in \mathscr{S}$, which is
\[
L_0\, e^{\rho(\psi)} = e^{\rho(\psi)} \big( \rho(\Delta\psi) - \tfrac12 \langle \psi, \Delta\psi\rangle_{L^2(\mathbb{T})} \big).
\]
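As a one-line consistency check (our own computation, not spelled out in the text), apply (2.3) with $n = 1$, $\varphi_1 = \psi$ and $f(x) = e^x$, so that $F_1 = F_{1,1} = e^{\rho(\psi)}$:

```latex
L_0\, e^{\rho(\psi)}
  = e^{\rho(\psi)}\,\rho(\Delta\psi)
    + \tfrac12\, e^{\rho(\psi)}\, \langle \partial_x\psi, \partial_x\psi\rangle_{L^2(\mathbb{T})}
  = e^{\rho(\psi)}\big(\rho(\Delta\psi) - \tfrac12\, \langle \psi, \Delta\psi\rangle_{L^2(\mathbb{T})}\big),
```

using the integration by parts $\langle \partial_x\psi, \partial_x\psi\rangle_{L^2(\mathbb{T})} = -\langle \psi, \Delta\psi\rangle_{L^2(\mathbb{T})}$ on the torus, which recovers the displayed formula.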


If $F, G$ are two cylindrical functions (which we can take of the form $F(\rho) = f(\rho(\varphi_1), \dots, \rho(\varphi_n))$ and $G(\rho) = g(\rho(\varphi_1), \dots, \rho(\varphi_n))$ for the same $\varphi_1, \dots, \varphi_n \in \mathscr{S}$), we can check that
\[
L_0(FG) = (L_0 F) G + F (L_0 G) + \mathcal{E}(F, G), \tag{2.4}
\]
where the quadratic form $\mathcal{E}$ is given by
\[
\mathcal{E}(F, G)(\rho) = \sum_{i,j} F_i(\rho)\, G_j(\rho)\, \langle \partial_x \varphi_i, \partial_x \varphi_j\rangle_{L^2(\mathbb{T})}. \tag{2.5}
\]
In particular, the quadratic variation of the martingale obtained in the Itô formula for $F$ is given by
\[
\mathrm{d}\Big\langle \int_0^\cdot \sum_{i=1}^n F_i(X_s)\,\mathrm{d}M_s(\varphi_i) \Big\rangle_t = \mathcal{E}(F, F)(X_t)\,\mathrm{d}t.
\]

Lemma 3 (Gaussian integration by parts). Let $(Z_i)_{i=1,\dots,M}$ be an $M$-dimensional Gaussian vector with zero mean and covariance $(C_{i,j})_{i,j=1,\dots,M}$. Then for all $g \in C^1_b(\mathbb{R}^M)$ we have
\[
\mathbb{E}[Z_k\, g(Z)] = \sum_{\ell} C_{k,\ell}\, \mathbb{E}\Big[ \frac{\partial g(Z)}{\partial Z_\ell} \Big].
\]

Proof. Use that $\mathbb{E}[e^{i\langle Z, \lambda\rangle}] = e^{-\langle \lambda, C\lambda\rangle/2}$, and moreover that
\[
\mathbb{E}[Z_k\, e^{i\langle Z, \lambda\rangle}] = (-i) \frac{\partial}{\partial \lambda_k} \mathbb{E}[e^{i\langle Z, \lambda\rangle}] = (-i) \frac{\partial}{\partial \lambda_k} e^{-\langle \lambda, C\lambda\rangle/2} = i (C\lambda)_k\, e^{-\langle \lambda, C\lambda\rangle/2} = i \sum_\ell C_{k,\ell}\, \lambda_\ell\, \mathbb{E}[e^{i\langle Z, \lambda\rangle}] = \sum_\ell C_{k,\ell}\, \mathbb{E}\Big[ \frac{\partial}{\partial Z_\ell} e^{i\langle Z, \lambda\rangle} \Big].
\]
The relation is thus true for trigonometric functions, and taking Fourier transforms we see that it holds for all $g \in \mathscr{S}$. It is then a matter of taking limits to show that we can extend it to any $g \in C^1_b(\mathbb{R}^M)$. $\square$
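Lemma 3 is also easy to test by Monte Carlo. A sketch (our own illustration, not from the text) in dimension $M = 2$ with a correlated Gaussian vector and $g(z) = \sin(z_1)\cos(z_2)$:

```python
import numpy as np

# Monte Carlo sketch (our own) of Gaussian integration by parts:
# E[Z_1 g(Z)] = C_{11} E[d g/d Z_1] + C_{12} E[d g/d Z_2]
# for g(z) = sin(z_1) cos(z_2) and Z ~ N(0, C).
rng = np.random.default_rng(7)
C = np.array([[1.0, 0.5], [0.5, 2.0]])
Z = rng.multivariate_normal(np.zeros(2), C, size=400_000)
z1, z2 = Z[:, 0], Z[:, 1]

lhs = np.mean(z1 * np.sin(z1) * np.cos(z2))
rhs = C[0, 0] * np.mean(np.cos(z1) * np.cos(z2)) \
    + C[0, 1] * np.mean(-np.sin(z1) * np.sin(z2))
assert abs(lhs - rhs) < 0.01
```

The same identity, applied with the covariance of $\eta$, drives the computations that follow.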

As a first application of this formula, let us show that $\mathbb{E}[L_0 F(\eta)] = 0$ for every cylindrical function $F$, where $\eta$ is a spatial white noise with mean zero, i.e. $\eta(\varphi) \sim \mathscr{N}(0, \|\varphi\|_{L^2(\mathbb{T})}^2/2)$ for all $\varphi \in L^2_0(\mathbb{T})$, and $\eta(1) = 0$. Here we write $L^2_0(\mathbb{T})$ for the subspace of all $\varphi \in L^2(\mathbb{T})$ with $\int_{\mathbb{T}} \varphi\,\mathrm{d}x = 0$. Indeed, note that by polarization $\mathbb{E}[\eta(\varphi_i)\eta(\Delta\varphi_j)] = \frac12 \langle \varphi_i, \Delta\varphi_j\rangle_{L^2(\mathbb{T})}$, leading to
\begin{align*}
\mathbb{E}\sum_{i,j=1}^n \frac12 F_{i,j}(\eta)\, \langle \partial_x\varphi_i, \partial_x\varphi_j\rangle_{L^2(\mathbb{T})} &= -\mathbb{E}\sum_{i,j=1}^n \frac12 F_{i,j}(\eta)\, \langle \varphi_i, \Delta\varphi_j\rangle_{L^2(\mathbb{T})} \\
&= -\frac12 \sum_{i,j=1}^n \langle \varphi_i, \Delta\varphi_j\rangle_{L^2(\mathbb{T})}\, \mathbb{E}\frac{\partial}{\partial \eta(\varphi_i)} F_j(\eta) \\
&= -\sum_{j=1}^n \mathbb{E}[\eta(\Delta\varphi_j)\, F_j(\eta)],
\end{align*}

so that $\mathbb{E}[L_0 F(\eta)] = 0$ (here we interpreted $\partial_j f$ as a function of $n+1$ variables, with trivial dependence on the $(n+1)$-th one). In combination with Itô's formula, this indicates that the white noise law should indeed be a stationary distribution for $X$ (convince yourself of it!). From now on we fix the initial distribution $X_0 \sim \eta$, which means that $X_t \sim \eta$ for all $t > 0$.
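This infinitesimal invariance can also be observed numerically. As a sanity check of ours (with the hypothetical choices $k = 3$ and $f(z) = z^3$): for $F(\rho) = f(\rho(\varphi))$ with $\varphi$ a normalized eigenfunction of $\Delta$, say $\Delta\varphi = -k^2\varphi$, the generator reduces to $L_0F(\eta) = -k^2 z f'(z) + \tfrac{k^2}{2} f''(z)$ with $z = \eta(\varphi) \sim \mathcal{N}(0, 1/2)$, and its mean vanishes.

```python
import numpy as np

# E[L_0 F(eta)] = 0 for a cylindrical F(rho) = f(rho(phi)) with
# Delta phi = -k^2 phi and ||phi||_{L^2} = 1, so that
#   L_0 F(eta) = -k^2 z f'(z) + (k^2/2) f''(z),  z = eta(phi) ~ N(0, 1/2).
rng = np.random.default_rng(1)
z = rng.normal(0.0, np.sqrt(0.5), size=4_000_000)

k2 = 9.0                       # k = 3 (hypothetical choice)
fp = 3 * z**2                  # f(z) = z^3, so f'(z) = 3 z^2
fpp = 6 * z                    # f''(z) = 6 z
mean_L0F = np.mean(-k2 * z * fp + 0.5 * k2 * fpp)
print(mean_L0F)                # ≈ 0 up to Monte Carlo error
```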

As another application of the Gaussian integration by parts formula, we get
\[
\frac12\mathbb{E}[\mathcal{E}(F,G)(\eta)] = -\frac12\sum_{i,j}\mathbb{E}[F_i(\eta)G_j(\eta)]\langle\varphi_i,\Delta\varphi_j\rangle_{L^2(\mathbb{T})}
\]
\[
= -\frac12\sum_{i,j}\mathbb{E}[(F(\eta)G_j(\eta))_i]\langle\varphi_i,\Delta\varphi_j\rangle_{L^2(\mathbb{T})} + \frac12\sum_{i,j}\mathbb{E}[F(\eta)G_{ij}(\eta)]\langle\varphi_i,\Delta\varphi_j\rangle_{L^2(\mathbb{T})}
\]
\[
= -\sum_j \mathbb{E}[F(\eta)G_j(\eta)\eta(\Delta\varphi_j)] + \frac12\sum_{i,j}\mathbb{E}[F(\eta)G_{ij}(\eta)]\langle\varphi_i,\Delta\varphi_j\rangle_{L^2(\mathbb{T})} = -\mathbb{E}[(F L_0 G)(\eta)].
\]
Combining this with (2.4) and with $\mathbb{E}[L_0(FG)(\eta)] = 0$, we obtain $\mathbb{E}[(F L_0 G)(\eta)] = \mathbb{E}[(G L_0 F)(\eta)]$. That is, $L_0$ is a symmetric operator with respect to the law of $\eta$.

Consider now the operator $D$, defined on cylindrical functions $F$ by
\[
DF(\rho) = \sum_i F_i(\rho)\varphi_i, \tag{2.6}
\]
so that $DF$ takes values in $\mathscr{S}'$, the continuous linear functionals on $\mathscr{S}$.

Exercise 5. Show that $D$ is independent of the specific representation of $F$, that is, if
\[
F(\rho) = f(\rho(\varphi_1),\dots,\rho(\varphi_n)) = g(\rho(\psi_1),\dots,\rho(\psi_m))
\]
for all $\rho \in \mathscr{S}'$, then
\[
\sum_i \partial_i f(\rho(\varphi_1),\dots,\rho(\varphi_n))\varphi_i = \sum_j \partial_j g(\rho(\psi_1),\dots,\rho(\psi_m))\psi_j.
\]
Hint: one possible strategy is to show that for all $\theta \in \mathscr{S}$,
\[
\langle DF(\rho), \theta\rangle = \frac{\mathrm{d}}{\mathrm{d}\varepsilon} F(\rho + \varepsilon\theta)\Big|_{\varepsilon=0}.
\]

By Gaussian integration by parts we get
\[
\mathbb{E}[F(\eta)\langle\psi, DG(\eta)\rangle] + \mathbb{E}[G(\eta)\langle\psi, DF(\eta)\rangle] = \sum_i \mathbb{E}[(FG)_i(\eta)\langle\psi,\varphi_i\rangle] = 2\mathbb{E}[\eta(\psi)(FG)(\eta)],
\]
and therefore
\[
\mathbb{E}[F(\eta)\langle\psi, DG(\eta)\rangle] = \mathbb{E}[G(\eta)\langle\psi, -DF(\eta) + 2\eta F(\eta)\rangle].
\]
So if we consider the space $L^2(\mathrm{law}(\eta))$ with inner product $\mathbb{E}[F(\eta)G(\eta)]$, then the adjoint of $D$ is given by $D^*F(\rho) = -DF(\rho) + 2\rho F(\rho)$. Let $D_\psi F(\rho) = \langle\psi, DF(\rho)\rangle$, and similarly $D^*_\psi F(\rho) = -D_\psi F(\rho) + 2\rho(\psi)F(\rho)$.

Exercise 6. Let $(e_n)_{n\geq 1}$ be an orthonormal basis of $L^2(\mathbb{T})$. Show that
\[
L_0 = \frac12 \sum_n D^*_{e_n} D_{\Delta e_n}.
\]

Recall that the commutator between two operators $A$ and $B$ is defined as $[A,B] := AB - BA$. In our case we have
\[
[D_\theta, D^*_\psi]F(\rho) = (D_\theta D^*_\psi - D^*_\psi D_\theta)F(\rho) = 2\langle\psi,\theta\rangle_{L^2(\mathbb{T})}F(\rho),
\]
whereas $[D^*_\theta, D^*_\psi] = 0$. Therefore,
\[
[L_0, D^*_\psi] = \frac12\sum_n [D^*_{e_n} D_{\Delta e_n}, D^*_\psi] = \frac12\sum_n D^*_{e_n}[D_{\Delta e_n}, D^*_\psi] + \frac12\sum_n [D^*_{e_n}, D^*_\psi]D_{\Delta e_n} = \sum_n D^*_{e_n}\langle\psi,\Delta e_n\rangle_{L^2(\mathbb{T})} = D^*_{\Delta\psi}.
\]
So if $\psi$ is an eigenvector of $\Delta$ with eigenvalue $\lambda$, then $[L_0, D^*_\psi] = \lambda D^*_\psi$. Let now $(\psi_n)_{n\in\mathbb{N}}$ be an orthonormal eigenbasis of $\Delta$ with eigenvalues $\Delta\psi_n = \lambda_n\psi_n$, and consider the functions
\[
H(\psi_{i_1},\dots,\psi_{i_n})\colon \mathscr{S}' \to \mathbb{R}, \qquad H(\psi_{i_1},\dots,\psi_{i_n})(\rho) = (D^*_{\psi_{i_1}}\cdots D^*_{\psi_{i_n}}1)(\rho).
\]

Then
\[
L_0 H(\psi_{i_1},\dots,\psi_{i_n}) = L_0 D^*_{\psi_{i_1}}\cdots D^*_{\psi_{i_n}}1 = D^*_{\psi_{i_1}} L_0 D^*_{\psi_{i_2}}\cdots D^*_{\psi_{i_n}}1 + \lambda_{i_1} D^*_{\psi_{i_1}}\cdots D^*_{\psi_{i_n}}1 \tag{2.7}
\]
\[
= \cdots = (\lambda_{i_1} + \cdots + \lambda_{i_n})\, H(\psi_{i_1},\dots,\psi_{i_n}),
\]
where we used that $L_0 1 = 0$. So these functions are eigenfunctions of $L_0$, and the eigenvalues are all the possible combinations $\lambda_{i_1} + \cdots + \lambda_{i_n}$ for $i_1,\dots,i_n \in \mathbb{N}$. It follows immediately that for different $n$ these functions are orthogonal in $L^2(\mathrm{law}(\eta))$. They are actually orthogonal as soon as the index families differ: in that case there is an index $j$ which appears in one but not in the other, and using the fact that $D^*_{\psi_j}$ is adjoint to $D_{\psi_j}$ and that $D_{\psi_j}G = 0$ if $G$ does not depend on $\psi_j$, we get the orthogonality. The functions $H(\psi_{i_1},\dots,\psi_{i_n})$ are polynomials, and they are called Wick polynomials.

Lemma 4. For all $\psi \in \mathscr{S}$, almost surely
\[
(e^{D^*_\psi}1)(\eta) = e^{2\eta(\psi) - \|\psi\|^2}.
\]
Proof. If $F$ is a cylindrical function of the form $F(\rho) = f(\rho(\varphi_1),\dots,\rho(\varphi_m))$ with $f \in \mathscr{S}(\mathbb{R}^m)$, then
\[
\mathbb{E}[F(\eta)(e^{D^*_\psi}1)(\eta)] = \mathbb{E}[(e^{D_\psi}F)(\eta)] = \mathbb{E}[F(\eta + \psi)] = \mathbb{E}[F(\eta)e^{2\eta(\psi) - \|\psi\|^2}],
\]
where the second step follows from the fact that if we set $\Phi_t(\eta) = F(\eta + t\psi)$ (note that every $\psi \in \mathscr{S}$ can be interpreted as an element of $\mathscr{S}'$), then $\partial_t\Phi_t(\eta) = D_\psi\Phi_t(\eta)$ and $\Phi_0(\eta) = F(\eta)$, so that $\Phi_t(\eta) = (e^{tD_\psi}F)(\eta)$ for all $t \geq 0$ and in particular for $t = 1$. The last step is simply a Gaussian change of variables. Indeed, if we take $\varphi_1 = \psi$ and $\varphi_k \perp \psi$ for $k \geq 2$, we have
\[
\mathbb{E}[F(\eta + \psi)] = \mathbb{E}[f(\eta(\psi) + \langle\psi,\psi\rangle, \eta(\varphi_2),\dots,\eta(\varphi_m))],
\]
since $(\eta+\psi)(\varphi_k) = \eta(\varphi_k)$ for $k \geq 2$. Now observe that $\eta(\psi)$ is independent of $(\eta(\varphi_2),\dots,\eta(\varphi_m))$, so that
\[
\mathbb{E}[f(\eta(\psi) + \langle\psi,\psi\rangle, \eta(\varphi_2),\dots,\eta(\varphi_m))] = \int_{\mathbb{R}} \frac{e^{-z^2/\|\psi\|^2}}{\sqrt{\pi\|\psi\|^2}}\, \mathbb{E}[f(z + \langle\psi,\psi\rangle, \eta(\varphi_2),\dots,\eta(\varphi_m))]\,\mathrm{d}z
\]
\[
= \int_{\mathbb{R}} \frac{e^{-z^2/\|\psi\|^2}}{\sqrt{\pi\|\psi\|^2}}\, e^{2z - \|\psi\|^2}\, \mathbb{E}[f(z, \eta(\varphi_2),\dots,\eta(\varphi_m))]\,\mathrm{d}z = \mathbb{E}[F(\eta)e^{2\eta(\psi) - \|\psi\|^2}].
\]
To conclude the proof, it suffices to note that $\mathbb{E}[F(\eta)(e^{D^*_\psi}1)(\eta)] = \mathbb{E}[F(\eta)e^{2\eta(\psi) - \|\psi\|^2}]$ for all cylindrical functions $F$ implies that $(e^{D^*_\psi}1)(\eta) = e^{2\eta(\psi) - \|\psi\|^2}$. $\square$
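The Gaussian change of variables in the last step can be tested in one dimension: with $z = \eta(\psi) \sim \mathcal{N}(0, s^2/2)$ for $s^2 = \|\psi\|^2$, it reads $\mathbb{E}[f(z + s^2)] = \mathbb{E}[f(z)e^{2z - s^2}]$. A quick Monte Carlo sketch of ours (the values of $s^2$ and $f$ are arbitrary):

```python
import numpy as np

# One-dimensional change of variables behind Lemma 4:
#   E[f(z + s^2)] = E[f(z) exp(2 z - s^2)]  for z ~ N(0, s^2 / 2).
rng = np.random.default_rng(2)
s2 = 0.7                                 # plays the role of ||psi||^2
z = rng.normal(0.0, np.sqrt(s2 / 2), size=4_000_000)

f = np.cos                               # any bounded test function
lhs = np.mean(f(z + s2))
rhs = np.mean(f(z) * np.exp(2 * z - s2))
print(lhs, rhs)                          # the two means coincide
```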

Theorem 1. The Wick polynomials $\{H(\psi_{i_1},\dots,\psi_{i_n})(\eta) : n \geq 0,\; i_1,\dots,i_n \in \mathbb{N}\}$ form an orthogonal basis of $L^2(\mathrm{law}(\eta))$.

Proof. Taking $\psi = \sum_i \sigma_i\psi_i$ in Lemma 4, we get
\[
e^{2\sum_i \sigma_i\eta(\psi_i) - \sum_i \sigma_i^2\|\psi_i\|^2} = (e^{D^*_\psi}1)(\eta) = \sum_{n\geq 0} \frac{((D^*_\psi)^n 1)(\eta)}{n!} = \sum_{n\geq 0} \sum_{i_1,\dots,i_n} \frac{\sigma_{i_1}\cdots\sigma_{i_n}}{n!}\, H(\psi_{i_1},\dots,\psi_{i_n})(\eta),
\]
which is enough to show that any random variable in $L^2$ can be expanded in a series of Wick polynomials, showing that the Wick polynomials are an orthogonal basis of $L^2(\mathrm{law}(\eta))$ (but they are still not normalized). Indeed, assume that $Z \in L^2(\mathrm{law}(\eta))$ but $Z \perp H(\psi_{i_1},\dots,\psi_{i_n})(\eta)$ for all $n \geq 0$, $i_1,\dots,i_n \in \mathbb{N}$. Then
\[
0 = e^{\sum_i \sigma_i^2\|\psi_i\|^2}\, \mathbb{E}[Z (e^{D^*_\psi}1)(\eta)] = e^{\sum_i \sigma_i^2\|\psi_i\|^2}\, \mathbb{E}\big[Z e^{2\sum_i \sigma_i\eta(\psi_i) - \sum_i \sigma_i^2\|\psi_i\|^2}\big] = \mathbb{E}\big[Z e^{2\sum_i \sigma_i\eta(\psi_i)}\big].
\]
Since the $\sigma_i$ are arbitrary, this means that $Z$ is orthogonal to any polynomial in $\eta$ (consider the derivatives at $\sigma \equiv 0$), and then that it is also orthogonal to $\exp(i\sum_i \sigma_i\eta(\psi_i))$. So let $f \in \mathscr{S}(\mathbb{R}^m)$ and $\sigma_i = 0$ for $i > m$, and observe that
\[
0 = (2\pi)^{-m/2} \int \mathrm{d}\sigma_1\cdots\mathrm{d}\sigma_m\, \mathcal{F}f(\sigma_1,\dots,\sigma_m)\, \mathbb{E}\big[Z e^{i\sum_j \sigma_j\eta(\psi_j)}\big] = \mathbb{E}[Z f(\eta(\psi_1),\dots,\eta(\psi_m))],
\]
which means that $Z$ is orthogonal to all random variables in $L^2$ which are measurable with respect to the $\sigma$-field generated by $(\eta(\psi_n))_{n\geq 0}$. This implies $Z = 0$. That is, the Wick polynomials form a basis of $L^2(\mathrm{law}(\eta))$. $\square$

Example 2. The first few (un-normalized) Wick polynomials are
\[
H(\psi_i)(\rho) = D^*_{\psi_i}1(\rho) = 2\rho(\psi_i),
\]
\[
H(\psi_i,\psi_j)(\rho) = D^*_{\psi_i}D^*_{\psi_j}1 = 2D^*_{\psi_i}\rho(\psi_j) = -2\delta_{i=j} + 4\rho(\psi_i)\rho(\psi_j),
\]
and
\[
H(\psi_i,\psi_j,\psi_k)(\rho) = D^*_{\psi_i}\big(-2\delta_{j=k} + 4\rho(\psi_j)\rho(\psi_k)\big) = -4\delta_{j=k}\rho(\psi_i) - 4\delta_{i=j}\rho(\psi_k) - 4\delta_{i=k}\rho(\psi_j) + 8\rho(\psi_i)\rho(\psi_j)\rho(\psi_k).
\]
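Since for orthonormal $\psi_i$ the variables $z_i = \eta(\psi_i)$ are i.i.d. $\mathcal{N}(0, 1/2)$, the polynomials of Example 2 can be checked by Monte Carlo (our own sketch): polynomials of different degree are orthogonal, and the squared norms match the formula $\sum_{\sigma\in S_n}\prod_r 2\langle\psi_{i_r},\psi_{i_{\sigma(r)}}\rangle$ computed later in this section (which gives $4$ for $H(\psi_1,\psi_2)$ and $8$ for $H(\psi_1,\psi_1)$).

```python
import numpy as np

# Monte Carlo check of the low order Wick polynomials of Example 2,
# with z_i = eta(psi_i) i.i.d. N(0, 1/2) for orthonormal psi_i.
rng = np.random.default_rng(3)
z1, z2 = rng.normal(0.0, np.sqrt(0.5), size=(2, 4_000_000))

H1 = 2 * z1                          # H(psi_1)
H2 = 4 * z1 * z2                     # H(psi_1, psi_2)
H2d = -2 + 4 * z1**2                 # H(psi_1, psi_1)
H3d = -12 * z1 + 8 * z1**3           # H(psi_1, psi_1, psi_1)

print(np.mean(H1 * H2d), np.mean(H1 * H3d))   # ≈ 0: orthogonality
print(np.mean(H2**2), np.mean(H2d**2))        # ≈ 4 and 8: squared norms
```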

Some other properties of Wick polynomials can be derived from the commutation relations between $D$ and $D^*$. By linearity $D^*_{\varphi+\psi} = D^*_\varphi + D^*_\psi$, so that using the symmetry of $H$ we get
\[
H_n(\varphi+\psi) := H(\underbrace{\varphi+\psi,\dots,\varphi+\psi}_{n}) = \sum_{0\leq k\leq n} \binom{n}{k}\, H(\underbrace{\varphi,\dots,\varphi}_{k},\underbrace{\psi,\dots,\psi}_{n-k}).
\]
Then note that by Lemma 4 we have
\[
(e^{D^*_\varphi}1)(\eta)(e^{D^*_\psi}1)(\eta) = e^{2\eta(\varphi) - \|\varphi\|^2}\, e^{2\eta(\psi) - \|\psi\|^2} = e^{2\eta(\varphi+\psi) - \|\varphi+\psi\|^2 + 2\langle\varphi,\psi\rangle} = (e^{D^*_{\varphi+\psi}}1)(\eta)\, e^{2\langle\varphi,\psi\rangle}.
\]

Expanding the exponentials,
\[
\sum_{m,n} \frac{H_m(\varphi)}{m!}\frac{H_n(\psi)}{n!} = \sum_{r,\ell} \frac{H_r(\varphi+\psi)}{r!}\frac{(2\langle\varphi,\psi\rangle)^\ell}{\ell!} = \sum_{p,q,\ell} \frac{H(\overbrace{\varphi,\dots,\varphi}^{p},\overbrace{\psi,\dots,\psi}^{q})}{p!\,q!}\frac{(2\langle\varphi,\psi\rangle)^\ell}{\ell!},
\]
and identifying the terms of the same homogeneity in $\varphi$ and $\psi$ respectively, we get
\[
H_m(\varphi)H_n(\psi) = \sum_{p+\ell=m}\;\sum_{q+\ell=n} \frac{m!\,n!}{p!\,q!\,\ell!}\, H(\overbrace{\varphi,\dots,\varphi}^{p},\overbrace{\psi,\dots,\psi}^{q})\,(2\langle\varphi,\psi\rangle)^\ell. \tag{2.8}
\]
This gives a general formula for such products. By polarization of this multilinear form, we can also get a general formula for products of general Wick polynomials. Indeed, taking $\varphi = \sum_{i=1}^m \kappa_i\varphi_i$ and $\psi = \sum_{j=1}^n \lambda_j\psi_j$ for arbitrary real coefficients $\kappa_1,\dots,\kappa_m$ and $\lambda_1,\dots,\lambda_n$, we have
\[
H_m\Big(\sum_{i=1}^m \kappa_i\varphi_i\Big)\, H_n\Big(\sum_{j=1}^n \lambda_j\psi_j\Big) = \sum_{i_1,\dots,i_m}\sum_{j_1,\dots,j_n} \kappa_{i_1}\cdots\kappa_{i_m}\lambda_{j_1}\cdots\lambda_{j_n}\, H(\varphi_{i_1},\dots,\varphi_{i_m})H(\psi_{j_1},\dots,\psi_{j_n}).
\]
Differentiating with respect to all the $\kappa, \lambda$ parameters and setting them to zero, we single out the term
\[
\sum_{\sigma\in S_m,\,\omega\in S_n} H(\varphi_{\sigma(1)},\dots,\varphi_{\sigma(m)})\, H(\psi_{\omega(1)},\dots,\psi_{\omega(n)}) = m!\,n!\, H(\varphi_1,\dots,\varphi_m)\, H(\psi_1,\dots,\psi_n),
\]

where $S_k$ denotes the symmetric group on $\{1,\dots,k\}$, and where we used the symmetry of the Wick polynomials. Doing the same for the right hand side of (2.8), we get
\[
H(\varphi_1,\dots,\varphi_m)\, H(\psi_1,\dots,\psi_n) = \sum_{p+\ell=m}\;\sum_{q+\ell=n} \frac{1}{p!\,q!\,\ell!} \sum_{i,j} H(\overbrace{\varphi_{i_1},\dots,\varphi_{i_p}}^{p},\overbrace{\psi_{j_1},\dots,\psi_{j_q}}^{q})\, \prod_{r=1}^\ell \big(2\langle\varphi_{i_{p+r}},\psi_{j_{q+r}}\rangle\big),
\]
where the sum over $i, j$ runs over all permutations $i_1,\dots,i_m$ of $1,\dots,m$ and similarly for $j_1,\dots,j_n$. Since $H(\varphi_{i_1},\dots,\varphi_{i_p},\psi_{j_1},\dots,\psi_{j_q})(\eta)$ is orthogonal to $1$ whenever $p+q > 0$, we obtain in particular
\[
\mathbb{E}[H(\psi_1,\dots,\psi_n)(\eta)\, H(\psi_1,\dots,\psi_n)(\eta)] = \frac{1}{n!}\sum_{i,j}\prod_{r=1}^n \big(2\langle\psi_{i_r},\psi_{j_r}\rangle\big) = \sum_{\sigma\in S_n}\prod_{r=1}^n \big(2\langle\psi_r,\psi_{\sigma(r)}\rangle\big).
\]

In conclusion, we have shown that the family
\[
\Big\{\Big(\sum_{\sigma\in S_n}\prod_{r=1}^n 2\langle\psi_{i_r},\psi_{i_{\sigma(r)}}\rangle\Big)^{-1/2}\, H(\psi_{i_1},\dots,\psi_{i_n})(\eta) : n \geq 0,\; i_1,\dots,i_n \in \mathbb{N}\Big\}
\]
is an orthonormal basis of $L^2(\mathrm{law}(\eta))$.

Remark 1. In our problem it will be convenient to use the Fourier basis in the above computations. Let $e_k(x) = \exp(ikx)/\sqrt{2\pi} = a_k(x) + i b_k(x)$, where $(\sqrt2\, a_k)_{k\in\mathbb{N}}$ and $(\sqrt2\, b_k)_{k\in\mathbb{N}}$ together form a real valued orthonormal basis of $L^2(\mathbb{T})$. Then $\rho(e_k)^* = \rho(e_{-k})$ whenever $\rho$ is real valued, and we will denote $D_k = D_{e_k} = D_{a_k} + iD_{b_k}$, and similarly $D^*_k = D^*_{a_k} - iD^*_{b_k} = -D_{-k} + 2\rho(e_{-k})$. In this way, $D^*_k$ is the adjoint of $D_k$ with respect to the Hermitian scalar product on $L^2(\Omega;\mathbb{C})$, and the Ornstein–Uhlenbeck generator takes the form
\[
L_0 = -\sum_{k\in\mathbb{N}} \big(D^*_{\partial_x a_k} D_{\partial_x a_k} + D^*_{\partial_x b_k} D_{\partial_x b_k}\big) = -\frac12\sum_{k\in\mathbb{Z}} k^2\, D^*_k D_k \tag{2.9}
\]
(convince yourself of the last identity by observing that $D^*_k D_k + D^*_{-k} D_{-k} = 2(D^*_{a_k} D_{a_k} + D^*_{b_k} D_{b_k})$!). Similarly,
\[
\mathcal{E}(F,G) = \sum_{k\in\mathbb{Z}} k^2\, (D_k F)^* (D_k G). \tag{2.10}
\]

2.5 The Itô trick

We are now ready to start our computations. Recall that we want to analyse $J_t(\varphi) = \int_0^t \partial_x X_s^2(\varphi)\,\mathrm{d}s$ using Itô calculus with respect to the Ornstein–Uhlenbeck process. We want to understand $J_t$ as a correction term in Itô's formula: if we can find a function $G$ such that $L_0 G(X_t) = \partial_x X_t^2$, then we get from Itô's formula
\[
\int_0^t \partial_x X_s^2\,\mathrm{d}s = G(X_t) - G(X_0) - M_{G,t},
\]
where $M_G$ is a martingale depending on $G$. Of course, $G$ will not be a cylindrical function, but we only defined $L_0$ on cylindrical functions. So to make the following calculations rigorous we would again have to replace $\partial_x X_t^2$ by $\partial_x \Pi_N X_t^2$ and then pass to the limit; see the paper [15] for details. As before, we perform the calculations directly in the limit $N = +\infty$, in order to simplify the computations and not obscure the ideas with technicalities. The next problem is that the pointwise evaluation $\int_0^t \partial_x X_s^2(x)\,\mathrm{d}s$ does not make any sense, because the integral will only be defined as a space distribution. So we will consider $G\colon \mathscr{S}' \to \mathscr{S}'$ instead of $G\colon \mathscr{S}' \to \mathbb{C}$. Note however that we can reduce every such $G$ to a function from $\mathscr{S}'$ to $\mathbb{C}$ by considering $\rho \mapsto G(\rho)(e_k)$ for all $k$.

Now, for a fixed $k$, we have
\[
\partial_x X_t^2(e_k) = \frac{ik}{\sqrt{2\pi}} \sum_{\ell+m=k} X_t(e_\ell)X_t(e_m) = \frac{ik}{\sqrt{2\pi}} \sum_{\ell+m=k} H_{\ell,m}(X_t), \tag{2.11}
\]
where $H_{\ell,m}(\rho) = \frac14(D^*_{-\ell}D^*_{-m}1)(\rho) = \rho(e_\ell)\rho(e_m) - \frac12\delta_{\ell+m=0}$ is a second order Wick polynomial, so that $L_0 H_{\ell,m} = -(\ell^2+m^2)H_{\ell,m}$ by (2.7). Therefore it is enough to take
\[
G(X_t)(e_k) = -ik \sum_{\ell+m=k} \frac{H_{\ell,m}(X_t)}{\ell^2+m^2}. \tag{2.12}
\]
This corresponds to the distribution $G(X_t)(\varphi) = -\int_0^\infty \partial_x (e^{s\Delta}X_t)^2(\varphi)\,\mathrm{d}s$ (check it!). Then
\[
G(X_t)(\varphi) = G(X_0)(\varphi) + M_{G,t}(\varphi) + J_t(\varphi),
\]
where $M_{G,t}(\varphi)$ is a martingale with quadratic variation
\[
\mathrm{d}\langle M_{G,\cdot}(\varphi), M_{G,\cdot}(\varphi)\rangle_t = \mathcal{E}(G(\cdot)(\varphi), G(\cdot)(\varphi))(X_t)\,\mathrm{d}t.
\]

We can estimate
\[
\mathbb{E}[|J_t(\varphi) - J_s(\varphi)|^{2p}] \lesssim_p \mathbb{E}[|M_{G,t}(\varphi) - M_{G,s}(\varphi)|^{2p}] + \mathbb{E}[|G(X_t)(\varphi) - G(X_s)(\varphi)|^{2p}].
\]
To bound the martingale expectation, we will use the following Burkholder inequality.

Lemma 5. Let $m$ be a continuous local martingale with $m_0 = 0$. Then for all $T \geq 0$ and $p \geq 1$,
\[
\mathbb{E}\big[\sup_{t\leq T}|m_t|^{2p}\big] \leq C_p\, \mathbb{E}[\langle m\rangle_T^p].
\]
Proof. Start by assuming that $m$ and $\langle m\rangle$ are bounded. Itô's formula yields
\[
\mathrm{d}|m_t|^{2p} = 2p\,|m_t|^{2p-1}\,\mathrm{d}m_t + \frac12(2p)(2p-1)|m_t|^{2p-2}\,\mathrm{d}\langle m\rangle_t,
\]
and therefore
\[
\mathbb{E}[|m_T|^{2p}] = C_p\, \mathbb{E}\Big[\int_0^T |m_s|^{2p-2}\,\mathrm{d}\langle m\rangle_s\Big] \leq C_p\, \mathbb{E}\big[\sup_{t\leq T}|m_t|^{2p-2}\,\langle m\rangle_T\big].
\]
By Hölder's inequality (with exponents $2p/(2p-2)$ and $p$) we get
\[
\mathbb{E}[|m_T|^{2p}] \leq C_p\, \mathbb{E}\big[\sup_{t\leq T}|m_t|^{2p}\big]^{(2p-2)/2p}\, \mathbb{E}[\langle m\rangle_T^p]^{1/p}.
\]
But now Doob's $L^p$ inequality yields $\mathbb{E}[\sup_{t\leq T}|m_t|^{2p}] \leq C'_p\, \mathbb{E}[|m_T|^{2p}]$, and this implies the claim in the bounded case. The unbounded case can be treated with a localization argument. $\square$

Applying Burkholder’s inequality, we obtain

E[|Jt(Ï) ≠ Js(Ï)|2p] .p EË---

⁄ t

sE(G(ú)(Ï), G(ú)(Ï))(Xr)dr

---pÈ

+ E[|G(Xt)(Ï) ≠ G(Xs)(Ï)|2p]

6 (t ≠ s)p≠1⁄ t

sE[|E(G(ú)(Ï), G(ú)(Ï))(Xr)|p]dr

+ E[|G(Xt)(Ï) ≠ G(Xs)(Ï)|2p]= (t ≠ s)pE[|E(G(ú)(Ï), G(ú)(Ï))(÷)|p]

+ E[|G(Xt)(Ï) ≠ G(Xs)(Ï)|2p],

using that Xr ≥ ÷. Now

DmG(fl)(ek) = ≠2ikfl(ek≠m)

(k ≠ m)2 + m2 ,

and therefore
\[
\mathcal{E}(G(\cdot)(e_k), G(\cdot)(e_k))(\rho) = \sum_m m^2\, D_{-m}G(\rho)(e_{-k})\, D_m G(\rho)(e_k) = 4k^2 \sum_{\ell+m=k} \frac{m^2\, |\rho(e_\ell)|^2}{(\ell^2+m^2)^2} \lesssim k^2 \sum_{\ell+m=k} \frac{|\rho(e_\ell)|^2}{\ell^2+m^2},
\]
which implies that
\[
\mathbb{E}[|\mathcal{E}(G(\cdot)(e_k), G(\cdot)(e_k))(\eta)|] \lesssim k^2\, \mathbb{E}\Big[\sum_{\ell+m=k} \frac{|\eta(e_\ell)|^2}{\ell^2+m^2}\Big] \lesssim k^2 \sum_{\ell+m=k} \frac{1}{\ell^2+m^2} \lesssim |k|.
\]
A similar computation also gives
\[
\mathbb{E}[|\mathcal{E}(G(\cdot)(e_k), G(\cdot)(e_k))(\eta)|^p] \lesssim |k|^p.
\]
Further, we have
\[
\mathbb{E}[|G(X_t)(e_k) - G(X_s)(e_k)|^2] \lesssim k^2 \sum_{\ell+m=k} \mathbb{E}\Big[\frac{|H_{\ell,m}(X_t) - H_{\ell,m}(X_s)|^2}{(\ell^2+m^2)^2}\Big] \lesssim k^2\, |t-s| \sum_{\ell+m=k} \frac{m^2}{(\ell^2+m^2)^2} \lesssim |k|\,|t-s|.
\]
And finally, since $G$ is a second order polynomial of a Gaussian process, we can apply Gaussian hypercontractivity once more to obtain
\[
\mathbb{E}[|J_t(e_k) - J_s(e_k)|^{2p}] \lesssim_p (t-s)^p\, |k|^p.
\]

The advantage of the Itô trick over the explicit Gaussian computation is that it carries over to the non-Gaussian case. Indeed, note that while the boundary term $G(X_t)(\varphi) - G(X_s)(\varphi)$ was estimated using a lot of the Gaussian information about $X$, we only used the law at a fixed time to handle the term $\int_s^t \mathcal{E}(G(\cdot)(\varphi), G(\cdot)(\varphi))(X_r)\,\mathrm{d}r$.

In order to carry over these computations to the solution of the non-linear dynamics $u$, we need to replace the generator of $X$ with that of $u$, and we need a way to handle the boundary terms. The idea is now to reverse the Markov process $u$ in time, which will allow us to kill the antisymmetric part of the generator and at the same time kill the boundary terms. Indeed, observe that if $u$ solves the stochastic Burgers equation, then formally we have the Itô formula

\[
\mathrm{d}_t F(u_t) = \sum_{i=1}^n F_i(u_t)\,\mathrm{d}M_t(\varphi_i) + LF(u_t)\,\mathrm{d}t,
\]
where $L$ is now the full generator of the non-linear dynamics, given by
\[
LF(\rho) = L_0 F(\rho) + \sum_i F_i(\rho)\langle\partial_x\rho^2, \varphi_i\rangle = L_0 F(\rho) + BF(\rho),
\]
where
\[
BF(\rho) = \sum_k (\partial_x\rho^2)(e_k)\, D_k F(\rho).
\]

Formally, the non-linear term is antisymmetric with respect to the invariant measure of $L_0$. Indeed, since $B$ is a first order operator,
\[
\mathbb{E}[(BF(\eta))G(\eta)] = \mathbb{E}[B(FG)(\eta)] - \mathbb{E}[F(\eta)(BG(\eta))] = -\mathbb{E}[F(\eta)(BG(\eta))] \tag{2.13}
\]
provided that $\mathbb{E}[BF(\eta)] = 0$ for every cylindrical function $F$. Let us show this. We have
\[
\mathbb{E}[BF(\eta)] = \sum_k \mathbb{E}[(\partial_x\eta^2)(e_k)\, D_k F(\eta)] = -\sum_k \mathbb{E}[(D_k(\partial_x\eta^2)(e_k))F(\eta)] + \sum_k \mathbb{E}[D_k[(\partial_x\eta^2)(e_k)F(\eta)]].
\]
But now we get from (2.11)
\[
D_k(\partial_x\eta^2)(e_k) = \sqrt2\, ik\, \eta(e_0) = \pi^{-1/2}\, ik\, \langle\eta, 1\rangle = 0,
\]
where we used that $\langle\eta, 1\rangle = 0$. Gaussian integration by parts then formally gives
\[
\mathbb{E}[BF(\eta)] = \sum_k \mathbb{E}[D_k[(\partial_x\eta^2)(e_k)F(\eta)]] = \sum_k \mathbb{E}[\eta(e_k)(\partial_x\eta^2)(e_k)F(\eta)] = \mathbb{E}[\langle\eta, \partial_x\eta^2\rangle F(\eta)] = \frac13\mathbb{E}[\langle 1, \partial_x\eta^3\rangle F(\eta)] = 0,
\]
since $\langle 1, \partial_x\eta^3\rangle = -\langle\partial_x 1, \eta^3\rangle = 0$ (but of course $\langle\eta, \partial_x\eta^2\rangle$ is not well defined).

The dynamics of $u$ backwards in time has a Markovian description, which is the subject of the next exercise.

Exercise 7. Let $(y_t)_{t\geq 0}$ be a stationary Markov process on a Polish space, with semigroup $(P_t)_{t\geq 0}$ and stationary distribution $\mu$. Show that if $P^*_t$ is the adjoint of $P_t$ in $L^2(\mu)$, then $(P^*_t)$ is a semigroup of operators on $L^2(\mu)$ (that is, $P^*_0 = \mathrm{id}$ and $P^*_{s+t} = P^*_s P^*_t$ as operators on $L^2(\mu)$). Show that if $y_0 \sim \mu$, then for all $T > 0$ the process $\hat y_t = y_{T-t}$, $t \in [0,T]$, is also Markov, with semigroup $(P^*_t)_{t\in[0,T]}$, and that $\mu$ is also an invariant distribution for $(P^*_t)$. Show also that if $(P_t)$ has generator $L$, then $(P^*_t)$ has generator $L^*$, the adjoint of $L$ with respect to $L^2(\mu)$.

Now, if we reverse the process in time by setting $\hat u_t = u_{T-t}$, we have by stationarity
\[
\mathbb{E}[F(\hat u_t)G(\hat u_0)] = \mathbb{E}[F(u_{T-t})G(u_T)] = \mathbb{E}[F(u_0)G(u_t)].
\]
So if we denote by $\hat L$ the generator of $\hat u$:
\[
\mathbb{E}[\hat L F(\hat u_0)G(\hat u_0)] = \frac{\mathrm{d}}{\mathrm{d}t}\Big|_{t=0} \mathbb{E}[F(\hat u_t)G(\hat u_0)] = \frac{\mathrm{d}}{\mathrm{d}t}\Big|_{t=0} \mathbb{E}[F(u_0)G(u_t)] = \mathbb{E}[LG(u_0)F(u_0)],
\]
which means that $\hat L$ is the adjoint of $L$, that is,
\[
\hat L F(\rho) = L_0 F(\rho) - BF(\rho) = L_0 F(\rho) - \sum_k (\partial_x\rho^2)(e_k)\, D_k F(\rho).
\]

In other words, the reversed process solves
\[
\hat u_t(\varphi) = \hat u_0(\varphi) + \int_0^t \hat u_s(\Delta\varphi)\,\mathrm{d}s + \int_0^t \langle\hat u_s^2, \partial_x\varphi\rangle\,\mathrm{d}s - \int_0^t \hat\xi_s(\partial_x\varphi)\,\mathrm{d}s
\]
for a different space-time white noise $\hat\xi$. Then Itô's formula for $\hat u$ gives
\[
\mathrm{d}_t F(\hat u_t) = \sum_{i=1}^n F_i(\hat u_t)\,\mathrm{d}\hat M_t(\varphi_i) + \hat L F(\hat u_t)\,\mathrm{d}t,
\]
where for all test functions $\varphi$ the process $\hat M(\varphi)$ is a martingale in the filtration of $\hat u$ with covariance
\[
\mathrm{d}\langle\hat M(\varphi), \hat M(\psi)\rangle_t = \langle\partial_x\varphi, \partial_x\psi\rangle_{L^2(\mathbb{T})}\,\mathrm{d}t.
\]

Combining the Itô formulas for $u$ and $\hat u$, we get
\[
F(u_T)(\varphi) = F(u_0)(\varphi) + M_{F,T}(\varphi) + \int_0^T LF(u_s)(\varphi)\,\mathrm{d}s
\]
and
\[
F(u_0)(\varphi) = F(\hat u_T)(\varphi) = F(\hat u_0)(\varphi) + \hat M_{F,T}(\varphi) + \int_0^T \hat L F(\hat u_s)(\varphi)\,\mathrm{d}s = F(u_T)(\varphi) + \hat M_{F,T}(\varphi) + \int_0^T \hat L F(u_s)(\varphi)\,\mathrm{d}s,
\]
and summing up these two equalities gives
\[
0 = M_{F,T}(\varphi) + \hat M_{F,T}(\varphi) + \int_0^T (L + \hat L)F(u_s)(\varphi)\,\mathrm{d}s,
\]
that is,
\[
2\int_0^T L_0 F(u_s)(\varphi)\,\mathrm{d}s = -M_{F,T}(\varphi) - \hat M_{F,T}(\varphi).
\]
An added benefit of this forward-backward representation is that the only term which required a lot of information about $X$, namely the boundary term $F(X_t)(\varphi) - F(X_s)(\varphi)$, does not appear at all now. As above, if $2L_0 F(\rho) = \partial_x\rho^2$, we end up with
\[
\int_0^T \partial_x u_s^2(\varphi)\,\mathrm{d}s = -M_{F,T}(\varphi) - \hat M_{F,T}(\varphi). \tag{2.14}
\]

Exercise 8. Perform a formal calculation similar to (2.13) to see that $\mathbb{E}[LF(\eta)] = 0$ for all cylindrical functions $F$, so that $\eta$ should also be invariant for the stochastic Burgers equation. Combine this with (2.14) to show that, setting $\mathcal{N}^N_t(\varphi) = \int_0^t \partial_x(\Pi_N u_s)^2(\varphi)\,\mathrm{d}s$, we have
\[
\mathbb{E}[|\mathcal{N}^N_t(e_k) - \mathcal{N}^N_s(e_k)|^{2p}] \lesssim_p (t-s)^p\, |k|^p,
\]
and letting $\mathcal{N}^{N,M}_t = \mathcal{N}^N_t - \mathcal{N}^M_t$ we get
\[
\mathbb{E}[|\mathcal{N}^{N,M}_t(e_k) - \mathcal{N}^{N,M}_s(e_k)|^{2p}] \lesssim_p (|k|/N)^{\varepsilon p}(t-s)^p\, |k|^p
\]
for all $1 \leq N \leq M$. Use this to derive that
\[
\big(\mathbb{E}[\|\mathcal{N}^{N,M}_t - \mathcal{N}^{N,M}_s\|^{2p}_{H^\alpha}]\big)^{1/2p} \lesssim_{p,\alpha} N^{-\varepsilon/2}(t-s)^{1/2}
\]
for all $\alpha < -1-\varepsilon$, and realize that this estimate allows you to prove compactness of the approximations $\mathcal{N}^N$ and then convergence to a limit $\mathcal{N}$ in $L^{2p}(\Omega; C^{1/2-}H^{-1-})$.

2.6 Controlled distributions

Let us cook up a definition which will allow us to rigorously perform the formal computations above in a general setting.

Definition 2. Let $u, A\colon \mathbb{R}_+ \to \mathscr{S}'(\mathbb{T})$ be a pair of generalized (i.e. distribution-valued) processes such that

i. for all $\varphi \in \mathscr{S}(\mathbb{T})$ the process $t \mapsto u_t(\varphi)$ is a continuous semimartingale satisfying
\[
u_t(\varphi) = u_0(\varphi) + \int_0^t u_s(\Delta\varphi)\,\mathrm{d}s + A_t(\varphi) + M_t(\varphi),
\]
where $t \mapsto M_t(\varphi)$ is a martingale with quadratic covariation $\langle M(\varphi), M(\psi)\rangle_t = \langle\partial_x\varphi, \partial_x\psi\rangle_{L^2(\mathbb{T})}\,t$, and $t \mapsto A_t(\varphi)$ is a finite variation process with $A_0(\varphi) = 0$;

ii. for all $t \geq 0$ the random distribution $\varphi \mapsto u_t(\varphi)$ is a zero mean space white noise with variance $\|\varphi\|^2_{L^2_0}/2$;

iii. for any $T > 0$ the reversed process $\hat u_t = u_{T-t}$ again has properties i and ii, with martingale $\hat M$ and finite variation part $\hat A$ such that $\hat A_t(\varphi) = -(A_T(\varphi) - A_{T-t}(\varphi))$.

Any pair of processes $(u, A)$ satisfying these conditions will be called controlled by the Ornstein–Uhlenbeck process, and we will denote the set of all such pairs by $\mathcal{Q}_{\mathrm{ou}}$.

Theorem 2 ([15], Lemma 1). Assume that $(u, A) \in \mathcal{Q}_{\mathrm{ou}}$, and for any $N \geq 1$, $t \geq 0$, $\varphi \in \mathscr{S}$ let
\[
\mathcal{N}^N_t(\varphi) = \int_0^t \partial_x(\Pi_N u_s)^2(\varphi)\,\mathrm{d}s.
\]
Then for any $p \geq 1$, $(\mathcal{N}^N)_{N\geq 1}$ converges in $L^p(\Omega)$ to a space-time distribution $\mathcal{N} \in C^{1/2-}H^{-1-}$.

We are now at a point where we can give a meaning to our original equation.

Definition 3. A pair of random distributions $(u, A) \in \mathcal{Q}_{\mathrm{ou}}$ is an energy solution of the stochastic Burgers equation if it satisfies
\[
u_t(\varphi) = u_0(\varphi) + \int_0^t u_s(\Delta\varphi)\,\mathrm{d}s + \mathcal{N}_t(\varphi) + M_t(\varphi)
\]
for all $t \geq 0$ and $\varphi \in \mathscr{S}$, that is, if $A = \mathcal{N}$.

Now we are in the relatively standard setting of needing to prove existence and uniqueness of such energy solutions. Note that in general the solutions are pairs of processes $(u, A)$.

Remark 2. The notion of energy solution was introduced (in a slightly different way) in the work of Gonçalves and Jara [11] on macroscopic universal fluctuations of weakly asymmetric interacting particle systems.

2.7 Existence of solutions

For existence, the way to proceed is quite standard: we approximate the equation, construct approximate solutions, and then try to have enough compactness to obtain limit points, which then naturally satisfy the requirements for energy solutions. For any $N \geq 1$ consider the solution $u^N$ of
\[
\partial_t u^N = \Delta u^N + \partial_x\Pi_N(\Pi_N u^N)^2 + \partial_x\xi.
\]
These are generalized functions such that
\[
\mathrm{d}u^N_t(e_k) = -k^2 u^N_t(e_k)\,\mathrm{d}t + [\partial_x\Pi_N(\Pi_N u^N)^2](e_k)\,\mathrm{d}t + ik\,\mathrm{d}\beta_t(k)
\]
for $k \in \mathbb{Z}$ and $t \geq 0$. We take $u_0$ to be the white noise with covariance $u_0(\varphi) \sim \mathcal{N}(0, \|\varphi\|^2/2)$. The point of our choice of the non-linearity is that this (infinite-dimensional) system of equations decomposes into a finite-dimensional system for $(v^N(k) = \Pi_N u^N(e_k))_{k: |k|\leq N}$ and an infinite number of one-dimensional equations for each $u^N(e_k)$ with $|k| > N$. Indeed, if $|k| > N$ we have $[\partial_x\Pi_N(\Pi_N u^N)^2](e_k) = 0$, so $u_t(e_k) = X_t(e_k)$ is the Ornstein–Uhlenbeck process with initial condition $X_0(e_k) = u_0(e_k)$, which renders it stationary in time (check it). The equation for $(v^N(k))_{|k|\leq N}$ reads
\[
\mathrm{d}v^N_t(k) = -k^2 v^N_t(k)\,\mathrm{d}t + b_k(v^N_t)\,\mathrm{d}t + ik\,\mathrm{d}\beta_t(k), \qquad |k| \leq N,\; t \geq 0,
\]
where
\[
b_k(v^N_t) = ik \sum_{\ell+m=k} \mathbb{1}_{|\ell|,|k|,|m|\leq N}\, v^N_t(\ell)v^N_t(m).
\]

This is a standard finite-dimensional SDE with global solutions for all initial conditions, which gives rise to a nice Markov process. The fact that solutions do not blow up even though the interaction is quadratic can be seen by computing the evolution of the norm
\[
A_t = \sum_{|k|\leq N} |v^N_t(k)|^2
\]
and by showing that
\[
\mathrm{d}A_t = 2\sum_{|k|\leq N} v^N_t(-k)\,\mathrm{d}v^N_t(k) = -2\sum_{|k|\leq N} k^2|v^N_t(k)|^2\,\mathrm{d}t + 2\sum_{|k|\leq N} v^N_t(-k)b_k(v^N_t)\,\mathrm{d}t + 2\sum_{|k|\leq N} ik\, v^N_t(-k)\,\mathrm{d}\beta_t(k).
\]
Since $A$ is nonnegative, we increase its absolute value by omitting the first contribution. But now
\[
\sum_{|k|\leq N} v^N_t(-k)b_k(v^N_t) = 2i \sum_{k,\ell,m:\,\ell+m=k} \mathbb{1}_{|\ell|,|k|,|m|\leq N}\, k\, v^N_t(\ell)v^N_t(m)v^N_t(-k) = -2i \sum_{k,\ell,m:\,\ell+m+k=0} \mathbb{1}_{|\ell|,|k|,|m|\leq N}\, k\, v^N_t(\ell)v^N_t(m)v^N_t(k),
\]
and by symmetry of this expression it is equal to
\[
-\frac23 i \sum_{k,\ell,m:\,\ell+m+k=0} \mathbb{1}_{|\ell|,|k|,|m|\leq N}\, (k+\ell+m)\, v^N_t(\ell)v^N_t(m)v^N_t(k) = 0,
\]
so $|A_t| \leq |A_0 + M_t|$, where $\mathrm{d}M_t = 2\sum_{|k|\leq N} ik\, v^N_t(-k)\,\mathrm{d}\beta_t(k)$.
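The cancellation $\sum_{|k|\leq N} v(-k)b_k(v) = 0$ is a purely algebraic identity (symmetrize the summand in $(k,\ell,m)$ on the set $\ell+m+k=0$), so it can be verified exactly, up to floating point round-off, for arbitrary coefficients. A small sketch of ours:

```python
import numpy as np

# Exact check of the conservation property sum_{|k|<=N} v(-k) b_k(v) = 0,
# where b_k(v) = i k sum_{l+m=k, |l|,|m|<=N} v(l) v(m).
rng = np.random.default_rng(4)
N = 8
ks = range(-N, N + 1)
v = {k: rng.normal() + 1j * rng.normal() for k in range(0, N + 1)}
for k in range(1, N + 1):
    v[-k] = np.conj(v[k])          # reality of the field: v(-k) = conj(v(k))
v[0] = v[0].real + 0j

def b(k):
    return 1j * k * sum(v[l] * v[k - l] for l in ks if abs(k - l) <= N)

total = sum(v[-k] * b(k) for k in ks)
print(abs(total))                  # vanishes up to floating point error
```

The same cancellation holds without the reality constraint; it only uses the symmetry of the constraint $|\ell|, |k|, |m| \leq N$ under permutations of $(k, \ell, m)$.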

Now
\[
\mathbb{E}[M_T^2] \lesssim \int_0^T \sum_{|k|\leq N} k^2\, \mathbb{E}[|v^N_t(k)|^2]\,\mathrm{d}t \lesssim N^2 \int_0^T \mathbb{E}[A_t]\,\mathrm{d}t,
\]
and then by martingale inequalities
\[
\mathbb{E}\big[\sup_{t\in[0,T]} A_t^2\big] \leq 2\mathbb{E}[A_0^2] + 2\mathbb{E}\big[\sup_{t\in[0,T]} M_t^2\big] \leq 2\mathbb{E}[A_0^2] + 8\mathbb{E}[M_T^2] \leq 2\mathbb{E}[A_0^2] + CN^2\int_0^T \mathbb{E}[A_t]\,\mathrm{d}t.
\]
Now Gronwall's inequality gives
\[
\mathbb{E}\big[\sup_{t\in[0,T]} A_t^2\big] \lesssim e^{CN^2T}\, \mathbb{E}[A_0^2],
\]
from which we can deduce (by a continuation argument) that almost surely there is no blow-up in finite time for the dynamics. The generator $L_N$ of the Galerkin dynamics is given by

\[
L_N F(\rho) = L_0 F(\rho) + B_N F(\rho),
\]
where
\[
B_N F(\rho) = \sum_k \mathbb{1}_{|k|\leq N}\, (\partial_x\rho^2)(e_k)\, D_k F(\rho).
\]
And again the non-linear drift $B_N$ is antisymmetric with respect to the invariant measure of $L_0$, by a computation similar to that for the full drift $B$. Next, using Echeverría's criterion [7] we can obtain the invariance of the white noise from its infinitesimal invariance, which can be checked at the level of the generator $L_N$. Finally, it is also possible to rigorously show that the reversed process is a Markov process with generator
\[
\hat L_N F(\rho) = L_0 F(\rho) - B_N F(\rho),
\]
thus proving that the reversed non-linear drift is the opposite of the forward one. Taking
\[
A^N_t(e_k) = \int_0^t b_k(v^N_s)\,\mathrm{d}s,
\]
we obtain that $(v^N, A^N) \in \mathcal{Q}_{\mathrm{ou}}$. Note that this result depends on the fact that we kept the full linear part $L_0$ of the generator. A more standard Galerkin truncation would have led us to a process which is controlled by the Galerkin-truncated OU process. The estimates would have come out in a similar way, but our setup is simpler.

Given that $(v^N, A^N)$ is controlled by the OU process, the Itô trick applied to $A^N$ provides enough compactness to pass to the limit as $N \to \infty$ and build an energy solution of the stochastic Burgers equation. See [15] for additional details on the limiting procedure and [30] for details on how to implement the Itô trick at the level of diffusions.

Remark 3. There is however one small catch: for a controlled distribution $(u, A)$ we required $A(\varphi)$ to be of finite variation for every test function $\varphi$. The solution $(v^N, A^N)$ of the truncated equation satisfies this, but in the limit $A(\varphi)$ will only have vanishing quadratic variation, and it will not be of finite variation (in other words, $u(\varphi)$ is a Dirichlet process and not a semimartingale). Luckily, in this setting it is still possible to derive an Itô formula, and everything goes through as described above; see [15] for details.

Chapter 3

Besov spaces

Here we collect some classical results from harmonic analysis which we will need in the following. We concentrate on distributions and SPDEs on the torus, but everything in this section applies mutatis mutandis on the full space $\mathbb{R}^d$, see [14]. The only problem is that then the stochastic terms will no longer be in the Besov spaces $\mathscr{C}^\alpha$ which we encounter below, but rather in weighted Besov spaces. Handling SPDEs in weighted function spaces is more delicate, and we prefer here to concentrate on the simpler situation of the torus.

We will use Littlewood–Paley blocks to obtain a decomposition of distributions into an infinite series of smooth functions. Of course, we already have such a decomposition at our disposal: $f = \sum_k \hat f(k)e^*_k$. But it turns out to be convenient not to consider each Fourier coefficient separately, and to work instead with projections on dyadic Fourier blocks.

Definition 4. A dyadic partition of unity $(\chi, \rho)$ consists of two nonnegative radial functions $\chi, \rho \in C^\infty(\mathbb{R}^d, \mathbb{R})$, where $\chi$ is supported in a ball $\mathscr{B} = \{|x| \leq c\}$ and $\rho$ is supported in an annulus $\mathscr{A} = \{a \leq |x| \leq b\}$ for suitable $a, b, c > 0$, such that

1. $\chi + \sum_{j\geq 0} \rho(2^{-j}\cdot) \equiv 1$, and

2. $\mathrm{supp}(\chi) \cap \mathrm{supp}(\rho(2^{-j}\cdot)) = \emptyset$ for $j \geq 1$, and $\mathrm{supp}(\rho(2^{-i}\cdot)) \cap \mathrm{supp}(\rho(2^{-j}\cdot)) = \emptyset$ for all $i, j \geq 0$ with $|i-j| > 1$.

We will often write $\rho_{-1} = \chi$ and $\rho_j = \rho(2^{-j}\cdot)$ for $j \geq 0$.

Dyadic partitions of unity exist, see [1]. From now on we fix a dyadic partition of unity $(\chi, \rho)$ and define the dyadic blocks
\[
\Delta_j f = \rho_j(D)f = \mathcal{F}^{-1}(\rho_j \hat f), \qquad j \geq -1,
\]
where here and in the following we use that every function on $\mathbb{R}^d$ can be naturally interpreted as a function on $\mathbb{Z}^d$. We also use the notation
\[
S_j f = \sum_{i\leq j-1} \Delta_i f
\]
as well as $K_i = (2\pi)^{d/2}\mathcal{F}^{-1}\rho_i$, so that
\[
K_i * f = \mathcal{F}^{-1}(\rho_i \mathcal{F}f) = \Delta_i f.
\]
From this representation we can also see the reason for considering smooth partitions rather than indicator functions: from Young's inequality we only get $\|\mathbb{1}_{[2^j, 2^{j+1})}(|D|)f\|_{L^\infty} \leq \|\mathcal{F}^{-1}\mathbb{1}_{[2^j, 2^{j+1})}\|_{L^1}\|f\|_{L^\infty} \lesssim j\|f\|_{L^\infty}$ for $f \in L^\infty$, whereas $\|\rho_j(D)f\|_{L^\infty} \lesssim \|f\|_{L^\infty}$ uniformly in $j$.

Every dyadic block has a compactly supported Fourier transform and is therefore in $\mathscr{S}$. It is easy to see that $f = \sum_{j\geq -1}\Delta_j f = \lim_{j\to\infty} S_j f$ for all $f \in \mathscr{S}'$.

For $\alpha \in \mathbb{R}$, the Hölder–Besov space $\mathscr{C}^\alpha$ is given by $\mathscr{C}^\alpha = B^\alpha_{\infty,\infty}(\mathbb{T}^d, \mathbb{R})$, where for $p, q \in [1,\infty]$ we define
\[
B^\alpha_{p,q} = B^\alpha_{p,q}(\mathbb{T}^d, \mathbb{R}) = \Big\{f \in \mathscr{S}' : \|f\|_{B^\alpha_{p,q}} = \Big(\sum_{j\geq -1}\big(2^{j\alpha}\|\Delta_j f\|_{L^p}\big)^q\Big)^{1/q} < \infty\Big\},
\]
with the usual interpretation as $\ell^\infty$ norm if $q = \infty$. Then $B^\alpha_{p,q}$ is a Banach space, and while the norm $\|\cdot\|_{B^\alpha_{p,q}}$ depends on $(\chi, \rho)$, the space $B^\alpha_{p,q}$ does not, and any other dyadic partition of unity corresponds to an equivalent norm (for $(p,q) = (\infty,\infty)$ this follows from Lemma 10 below; for the general case see [1], Lemma 2.69). We write $\|\cdot\|_\alpha$ instead of $\|\cdot\|_{B^\alpha_{\infty,\infty}}$.
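The decomposition is easy to experiment with using the FFT. The sketch below is our own; it uses sharp Fourier cut-offs instead of the smooth partition $(\chi, \rho)$, which, as noted above, is not uniformly bounded on $L^\infty$, but is fine for a resummation check. For the $1/2$-Hölder function $|\sin x|^{1/2}$ on a grid, the blocks resum to $f$ and $2^{j/2}\|\Delta_j f\|_{L^\infty}$ stays bounded.

```python
import numpy as np

# Discrete Littlewood-Paley blocks on the torus via FFT, with sharp cut-offs
# on the dyadic frequency bands {2^j <= |k| < 2^(j+1)}.
n = 1024
x = 2 * np.pi * np.arange(n) / n
f = np.abs(np.sin(x)) ** 0.5                  # a (1/2)-Hoelder function
fhat = np.fft.fft(f)
freqs = np.abs(np.fft.fftfreq(n, d=1.0 / n))  # integer frequencies |k|

blocks = []
jmax = int(np.log2(n // 2))                   # highest dyadic band needed
for j in range(-1, jmax + 1):
    if j == -1:
        mask = freqs < 1                      # the chi(D) block: only k = 0
    else:
        mask = (freqs >= 2**j) & (freqs < 2 ** (j + 1))
    blocks.append(np.real(np.fft.ifft(fhat * mask)))

recon = np.sum(blocks, axis=0)
err = np.max(np.abs(recon - f))
norms = [2 ** (0.5 * j) * np.max(np.abs(u)) for j, u in enumerate(blocks[1:])]
print(err)      # ≈ 0: the blocks resum to f
print(norms)    # 2^(j/2) * sup|Delta_j f| stays bounded in j
```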

Exercise 9. Let $\delta_0$ denote the Dirac delta at $0$. Show that $\delta_0 \in \mathscr{C}^{-d}$.

If $\alpha \in (0,\infty)\setminus\mathbb{N}$, then $\mathscr{C}^\alpha$ is the space of $\lfloor\alpha\rfloor$ times differentiable functions whose partial derivatives of order $\lfloor\alpha\rfloor$ are $(\alpha - \lfloor\alpha\rfloor)$-Hölder continuous (see page 99 of [1]). Note however that for $k \in \mathbb{N}$ the space $\mathscr{C}^k$ is strictly larger than $C^k$, the space of $k$ times continuously differentiable functions. Below we will give the proof for $\alpha \in (0,1)$, but before that we still need some tools.

Recall that Schwartz functions on $\mathbb{R}^d$ are functions $f \in C^\infty(\mathbb{R}^d)$ such that for every multi-index $\mu$ and all $n \geq 0$ we have
\[
\sup_{x\in\mathbb{R}^d} (1+|x|)^n\, |\partial^\mu f(x)| < \infty.
\]

Lemma 6 (Poisson summation). Let $\varphi\colon \mathbb{R}^d \to \mathbb{C}$ be a Schwartz function. Then
\[
\mathcal{F}^{-1}\varphi(x) = \sum_{k\in\mathbb{Z}^d} \mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(x + 2\pi k)
\]
for all $x \in \mathbb{T}^d$, where $\mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(x) = (2\pi)^{-d/2}\int_{\mathbb{R}^d} \varphi(y)e^{i\langle x,y\rangle}\,\mathrm{d}y$.

Proof. Let $g(x) = \sum_{k\in\mathbb{Z}^d} \mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(x + 2\pi k)$. The function $\mathcal{F}^{-1}_{\mathbb{R}^d}\varphi$ is of rapid decay since $\varphi \in \mathscr{S}$, so the sum converges absolutely and defines a continuous function $g\colon \mathbb{R}^d \to \mathbb{R}$ which is periodic of period $2\pi$ in every direction. The Fourier transform over the torus $\mathbb{T}^d$ of this function is
\[
\mathcal{F}g(y) = \int_{\mathbb{T}^d} e^{-i\langle x,y\rangle} g(x)\,\frac{\mathrm{d}x}{(2\pi)^{d/2}} = \int_{\mathbb{T}^d} \sum_{k\in\mathbb{Z}^d} \mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(x + 2\pi k)\, e^{-i\langle x+2\pi k, y\rangle}\,\frac{\mathrm{d}x}{(2\pi)^{d/2}},
\]
since $e^{-i\langle 2\pi k, y\rangle} = 1$ for all $y \in \mathbb{Z}^d$. By dominated convergence, the sum and the integral can be combined into an overall integration over $\mathbb{R}^d$:
\[
\mathcal{F}g(y) = \int_{\mathbb{R}^d} \mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(x)\, e^{-i\langle x,y\rangle}\,\frac{\mathrm{d}x}{(2\pi)^{d/2}} = \mathcal{F}_{\mathbb{R}^d}\mathcal{F}^{-1}_{\mathbb{R}^d}\varphi(y) = \varphi(y),
\]
where $\mathcal{F}_{\mathbb{R}^d}f(x) = \mathcal{F}^{-1}_{\mathbb{R}^d}f(-x)$. So we deduce that $g(x) = \mathcal{F}^{-1}\varphi(x)$. $\square$
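A classical special case, obtained by applying Poisson summation to a Gaussian, is Jacobi's theta identity $\sum_{n\in\mathbb{Z}} e^{-tn^2} = \sqrt{\pi/t}\,\sum_{k\in\mathbb{Z}} e^{-\pi^2 k^2/t}$, which is easy to confirm numerically (our sketch; both series converge so fast that a few dozen terms suffice):

```python
import math

# Jacobi's theta identity, a special case of Poisson summation:
#   sum_n exp(-t n^2) = sqrt(pi / t) * sum_k exp(-pi^2 k^2 / t).
def lhs(t, K=50):
    return sum(math.exp(-t * n * n) for n in range(-K, K + 1))

def rhs(t, K=50):
    return math.sqrt(math.pi / t) * sum(
        math.exp(-math.pi**2 * k * k / t) for k in range(-K, K + 1))

for t in (0.5, 1.0, 3.0):
    print(t, lhs(t), rhs(t))   # the two sides agree to machine precision
```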

Exercise 10. Show that $\|\cdot\|_\alpha \leq \|\cdot\|_\beta$ for $\alpha \leq \beta$, that $\|\cdot\|_{L^\infty} \lesssim \|\cdot\|_\alpha$ for $\alpha > 0$, that $\|\cdot\|_\alpha \lesssim \|\cdot\|_{L^\infty}$ for $\alpha \leq 0$, and that $\|S_j \cdot\|_{L^\infty} \lesssim 2^{-j\alpha}\|\cdot\|_\alpha$ for $\alpha < 0$. These inequalities will be very important for us in the following, and we will often use them without mentioning it specifically.

Hint: when proving $\|\cdot\|_\alpha \lesssim \|\cdot\|_{L^\infty}$ for $\alpha \leq 0$, you might need Poisson's summation formula.

The following Bernstein inequality is extremely useful when dealing with functions whose Fourier transform has compact support.

Lemma 7 (Bernstein inequality). Let $\mathscr{B}$ be a ball and $k \in \mathbb{N}_0$. For any $\lambda \geq 1$, $1 \leq p \leq q \leq \infty$, and $f \in L^p$ with $\mathrm{supp}(\mathcal{F}f) \subseteq \lambda\mathscr{B}$, we have
\[
\max_{\mu\in\mathbb{N}^d:\,|\mu|=k} \|\partial^\mu f\|_{L^q} \lesssim_{k,\mathscr{B}} \lambda^{k+d(\frac1p - \frac1q)}\,\|f\|_{L^p}.
\]

Proof. Let $\psi$ be a compactly supported $C^\infty$ function on $\mathbb{R}^d$ with $\psi \equiv 1$ on $\mathscr{B}$, and write $\psi_\lambda(x) = \psi(\lambda^{-1}x)$. Then
\[
\partial^\mu f(x) = \partial^\mu \mathcal{F}^{-1}(\psi_\lambda \mathcal{F}f)(x) = (2\pi)^{d/2}\langle f, \partial^\mu(\mathcal{F}^{-1}\psi_\lambda)(x - \cdot)\rangle = (2\pi)^{d/2}(f * \partial^\mu(\mathcal{F}^{-1}\psi_\lambda))(x).
\]
By Young's inequality we get
\[
\|\partial^\mu f\|_{L^q} \lesssim \|f\|_{L^p}\,\|\partial^\mu(\mathcal{F}^{-1}\psi_\lambda)\|_{L^r},
\]
where $1 + 1/q = 1/p + 1/r$. Now it is a short exercise to verify $\|\cdot\|_{L^r} \leq \|\cdot\|_{L^1}^{1/r}\|\cdot\|_{L^\infty}^{1-1/r}$, and
\[
\|\partial^\mu(\mathcal{F}^{-1}\psi_\lambda)\|_{L^1} = \int_{\mathbb{T}^d}\Big|\sum_k \partial^\mu(\mathcal{F}^{-1}_{\mathbb{R}^d}\psi_\lambda)(x + 2\pi k)\Big|\,\mathrm{d}x \leq \int_{\mathbb{R}^d}|\partial^\mu(\mathcal{F}^{-1}_{\mathbb{R}^d}\psi_\lambda)(x)|\,\mathrm{d}x = \lambda^{|\mu|}\int_{\mathbb{R}^d}\lambda^d\,|(\partial^\mu\mathcal{F}^{-1}_{\mathbb{R}^d}\psi)(\lambda x)|\,\mathrm{d}x \simeq \lambda^{|\mu|},
\]
whereas
\[
\sup_{x\in\mathbb{T}^d}\Big|\sum_k \partial^\mu(\mathcal{F}^{-1}_{\mathbb{R}^d}\psi_\lambda)(x + 2\pi k)\Big| = \lambda^{d+|\mu|}\sup_{x\in\mathbb{T}^d}\Big|\sum_k (\partial^\mu\mathcal{F}^{-1}_{\mathbb{R}^d}\psi)(\lambda(x + 2\pi k))\Big| \lesssim \lambda^{d+|\mu|}\sup_{x\in\mathbb{T}^d}\sum_k (1 + \lambda|x + 2\pi k|)^{-2d} \lesssim \lambda^{d+|\mu|}\sup_{x\in\mathbb{T}^d}\sum_k (1 + |x + 2\pi k|)^{-2d} \lesssim \lambda^{d+|\mu|}.
\]
We end up with
\[
\|\partial^\mu f\|_{L^q} \lesssim \|f\|_{L^p}\,\|\partial^\mu(\mathcal{F}^{-1}\psi_\lambda)\|_{L^r} \lesssim \|f\|_{L^p}\,\lambda^{|\mu|/r}\lambda^{(d+|\mu|)(1-1/r)} = \|f\|_{L^p}\,\lambda^{|\mu| + d(1/p - 1/q)}. \qquad\square
\]
It then follows immediately that for $\alpha \in \mathbb{R}$, $f \in \mathscr{C}^\alpha$, $\mu \in \mathbb{N}^d_0$, we have $\partial^\mu f \in \mathscr{C}^{\alpha - |\mu|}$. Another simple application of the Bernstein inequality is the Besov embedding theorem, the proof of which we leave as an exercise.
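Before moving on, the $k = 1$, $p = q = \infty$ case of Lemma 7 can be illustrated on the torus via the classical Bernstein inequality for trigonometric polynomials: if $\hat f$ is supported in $\{|k| \leq \lambda\}$, then $\|f'\|_{L^\infty} \leq \lambda\|f\|_{L^\infty}$ (with constant exactly $1$ in this classical case). A quick numerical illustration of ours on a random trigonometric polynomial:

```python
import numpy as np

# Bernstein for trigonometric polynomials: if f has frequencies |k| <= lam,
# then max|f'| <= lam * max|f|.
rng = np.random.default_rng(5)
lam = 16
x = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)

f = np.zeros_like(x)
df = np.zeros_like(x)
for k in range(lam + 1):
    a, b = rng.normal(size=2)
    f += a * np.cos(k * x) + b * np.sin(k * x)
    df += -a * k * np.sin(k * x) + b * k * np.cos(k * x)

ratio = np.max(np.abs(df)) / np.max(np.abs(f))
print(ratio, lam)      # the ratio does not exceed lam
```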

Lemma 8 (Besov embedding). Let $1 \leq p_1 \leq p_2 \leq \infty$ and $1 \leq q_1 \leq q_2 \leq \infty$, and let $\alpha \in \mathbb{R}$. Then $B^\alpha_{p_1,q_1}$ is continuously embedded into $B^{\alpha - d(1/p_1 - 1/p_2)}_{p_2,q_2}$.

Exercise 11. In the setting of Exercise 2, use Besov embedding to show that $\mathbb{E}[\|\tilde\xi\|^p_{-d/2-\varepsilon}] < \infty$ for all $p \geq 1$ and $\varepsilon > 0$ (in particular, $\tilde\xi \in \mathscr{C}^{-d/2-}$ almost surely).

Hint: Estimate $\mathbb{E}[\|\tilde\xi\|^{2p}_{B^\alpha_{2p,2p}}]$ using Gaussian hypercontractivity (equivalence of moments).

As another application of the Bernstein inequality, let us show that $\mathscr{C}^\alpha = C^\alpha$ for $\alpha \in (0,1)$.

Lemma 9. For $\alpha \in (0,1)$ we have $\mathscr{C}^\alpha = C^\alpha$, the space of $\alpha$-Hölder continuous functions, and
\[
\|f\|_\alpha \simeq \|f\|_{C^\alpha} = \|f\|_{L^\infty} + \sup_{x\neq y}\frac{|f(x) - f(y)|}{d_{\mathbb{T}^d}(x,y)^\alpha},
\]
where $d_{\mathbb{T}^d}(x,y)$ denotes the canonical distance on $\mathbb{T}^d$.

Proof. Start by noting that for f œ C – we have ÎfÎLŒ 6q

j Î�jfÎLŒ 6qj 2≠j–ÎfÎ– . ÎfÎ–. Let now x ”= y œ Td and choose j0 with

2≠j0 ƒ dTd(x, y). For j 6 j0 we use Bernstein’s inequality to obtain

|�jf(x) ≠ �jf(y)| . ÎD�jfÎLŒdTd(x, y). 2jÎ�jfÎLŒdTd(x, y)6 2j(1≠–)ÎfÎ–dTd(x, y),

whereas for j > j0

we simply estimate

|�jf(x) ≠ �jf(y)| . Î�jfÎLŒ . 2≠j–ÎfÎ–.

Summing over j, we get

|f(x) ≠ f(y)| 6ÿ

j6j02j(1≠–)ÎfÎ–dTd(x, y) +

ÿ

j>j0

2≠j–ÎfÎ–

ƒ ÎfÎ–(2j0(1≠–)dTd(x, y) + 2≠j0–) ƒ ÎfÎ–dTd(x, y)–.

Conversely, if $f\in C^\alpha$, then we estimate $\|\Delta_{-1}f\|_{L^\infty} \lesssim \|f\|_{L^\infty}$. For $j\ge 0$, the function $\rho_j$ satisfies $\int(\mathscr{F}^{-1}\rho_j)(x)\,\mathrm{d}x = 0$, and therefore
\begin{align*}
|\Delta_jf(x)| &= \Big|\int_{\mathbb{T}^d}\mathscr{F}^{-1}\rho_j(x-y)(f(y)-f(x))\,\mathrm{d}y\Big| \\
&= \Big|\int_{\mathbb{T}^d}\sum_k \mathscr{F}^{-1}_{\mathbb{R}^d}\rho_j(x-y+2\pi k)(f(y)-f(x))\,\mathrm{d}y\Big| \\
&= \Big|\int_{\mathbb{R}^d}\mathscr{F}^{-1}_{\mathbb{R}^d}\rho_j(x-y)(f(y)-f(x))\,\mathrm{d}y\Big|.
\end{align*}
Now $|f(y)-f(x)| \le \|f\|_{C^\alpha}\,d_{\mathbb{T}^d}(x,y)^\alpha \le \|f\|_{C^\alpha}|x-y|^\alpha$, and thus we end up with
\begin{align*}
|\Delta_jf(x)| &\le \|f\|_{C^\alpha}\,2^{jd}\int_{\mathbb{R}^d}|(\mathscr{F}^{-1}_{\mathbb{R}^d}\rho)(2^j(x-y))|\,|x-y|^\alpha\,\mathrm{d}y \\
&= \|f\|_{C^\alpha}\,2^{-j\alpha}\,2^{jd}\int_{\mathbb{R}^d}|(\mathscr{F}^{-1}_{\mathbb{R}^d}\rho)(2^j(x-y))|\,|2^j(x-y)|^\alpha\,\mathrm{d}y
\lesssim \|f\|_{C^\alpha}\,2^{-j\alpha}. \qquad\Box
\end{align*}
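As a quick numerical illustration of Lemma 9 (not part of the original development), we can compute dyadic Fourier blocks of a Weierstrass-type function and observe that $2^{j\alpha}\|\Delta_j f\|_{L^\infty}$ stays bounded. The sketch below uses sharp Fourier cutoffs in place of the smooth partition $(\rho_j)$; the test function, resolution, and block range are our own arbitrary choices.

```python
import numpy as np

# Weierstrass-type function f(x) = sum_j 2^{-j*alpha} cos(2^j x):
# alpha-Hölder continuous and no better, so 2^{j*alpha} ||Delta_j f||_inf
# should stay of order one across all dyadic blocks.
alpha = 0.5
N = 2**14
x = 2*np.pi*np.arange(N)/N
f = sum(2.0**(-j*alpha)*np.cos(2**j*x) for j in range(12))

fhat = np.fft.fft(f)/N
k = np.fft.fftfreq(N, d=1.0/N)        # integer frequencies on the torus

norms = []
for j in range(1, 11):
    # sharp annulus 2^{j-1} <= |k| < 2^j as a stand-in for the smooth block rho_j
    mask = (np.abs(k) >= 2**(j - 1)) & (np.abs(k) < 2**j)
    block = np.real(np.fft.ifft(fhat*mask))*N
    norms.append(2.0**(j*alpha)*np.abs(block).max())

print(norms)   # every entry is close to 2**alpha: the Besov norm is finite
```

Here each annulus captures exactly one lacunary mode of $f$, so the rescaled block norms are all equal to $2^{\alpha}$, a finite $\mathscr{C}^{1/2}$ norm, in agreement with the lemma.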


The following lemma, a characterization of Besov regularity for functionsthat can be decomposed into pieces which are localized in Fourier space,will be immensely useful in what follows.

Lemma 10.

1. Let $\mathscr{A}$ be an annulus, let $\alpha\in\mathbb{R}$, and let $(u_j)$ be a sequence of smooth functions such that $\mathscr{F}u_j$ has its support in $2^j\mathscr{A}$, and such that $\|u_j\|_{L^\infty} \lesssim 2^{-j\alpha}$ for all $j$. Then
\[
u = \sum_{j\ge-1}u_j \in \mathscr{C}^\alpha \quad\text{and}\quad \|u\|_\alpha \lesssim \sup_{j\ge-1}\{2^{j\alpha}\|u_j\|_{L^\infty}\}.
\]

2. Let $\mathscr{B}$ be a ball, let $\alpha > 0$, and let $(u_j)$ be a sequence of smooth functions such that $\mathscr{F}u_j$ has its support in $2^j\mathscr{B}$, and such that $\|u_j\|_{L^\infty} \lesssim 2^{-j\alpha}$ for all $j$. Then
\[
u = \sum_{j\ge-1}u_j \in \mathscr{C}^\alpha \quad\text{and}\quad \|u\|_\alpha \lesssim \sup_{j\ge-1}\{2^{j\alpha}\|u_j\|_{L^\infty}\}.
\]

Proof. If $\mathscr{F}u_j$ is supported in $2^j\mathscr{A}$, then $\Delta_i u_j \ne 0$ only for $i\sim j$. Hence, we obtain
\[
\|\Delta_i u\|_{L^\infty} \le \sum_{j:\,j\sim i}\|\Delta_i u_j\|_{L^\infty}
\le \sup_{k\ge-1}\{2^{k\alpha}\|u_k\|_{L^\infty}\}\sum_{j:\,j\sim i}2^{-j\alpha}
\simeq \sup_{k\ge-1}\{2^{k\alpha}\|u_k\|_{L^\infty}\}\,2^{-i\alpha}.
\]
If $\mathscr{F}u_j$ is supported in $2^j\mathscr{B}$, then $\Delta_i u_j \ne 0$ only for $i \lesssim j$. Therefore,
\[
\|\Delta_i u\|_{L^\infty} \le \sum_{j:\,j\gtrsim i}\|\Delta_i u_j\|_{L^\infty}
\le \sup_{k\ge-1}\{2^{k\alpha}\|u_k\|_{L^\infty}\}\sum_{j:\,j\gtrsim i}2^{-j\alpha}
\lesssim \sup_{k\ge-1}\{2^{k\alpha}\|u_k\|_{L^\infty}\}\,2^{-i\alpha},
\]

using $\alpha > 0$ in the last step. $\Box$

When solving SPDEs, we will need the smoothing properties of the heat semigroup. For $\alpha\in(0,2)$ we define $\mathscr{L}^\alpha = C\mathscr{C}^\alpha \cap C^{\alpha/2}L^\infty$. For $T>0$ we set $\mathscr{L}^\alpha_T = C_T\mathscr{C}^\alpha \cap C^{\alpha/2}_T L^\infty$, and we equip $\mathscr{L}^\alpha_T$ with the norm
\[
\|\cdot\|_{\mathscr{L}^\alpha_T} = \max\{\|\cdot\|_{C_T\mathscr{C}^\alpha},\,\|\cdot\|_{C^{\alpha/2}_T L^\infty}\}.
\]


The notation $\mathscr{L}^\alpha$ is chosen to be reminiscent of the operator $\mathscr{L} = \partial_t - \Delta$, and indeed the parabolic spaces $\mathscr{L}^\alpha$ are adapted to $\mathscr{L}$ in the sense that temporal regularity "counts twice". This is due to the fact that $\mathscr{L}$ contains a first order temporal but a second order spatial derivative. If we replaced $\Delta$ by a fractional Laplacian $-(-\Delta)^\sigma$, we would have to consider the space $C\mathscr{C}^\alpha \cap C^{\alpha/(2\sigma)}L^\infty$ instead of $\mathscr{L}^\alpha$.

We have the following Schauder estimate on the scale of spaces $(\mathscr{L}^\alpha)_\alpha$:

Lemma 11. Let $\alpha\in(0,2)$ and let $(P_t)_{t\ge0}$ be the semigroup generated by the periodic Laplacian, $\mathscr{F}(P_tf)(k) = e^{-t|k|^2}\mathscr{F}f(k)$. For $f\in C\mathscr{C}^{\alpha-2}$ define $Jf(t) = \int_0^t P_{t-s}f_s\,\mathrm{d}s$. Then $Jf$ is the solution to $\mathscr{L}Jf = f$, $Jf(0) = 0$, and we have
\[
\|Jf\|_{\mathscr{L}^\alpha_T} \lesssim (1+T)\|f\|_{C_T\mathscr{C}^{\alpha-2}}
\]
for all $T>0$. If $u\in\mathscr{C}^\alpha$, then $t\mapsto P_tu$ is the solution to $\mathscr{L}P_\cdot u = 0$, $P_0u = u$, and we have
\[
\|t\mapsto P_tu\|_{\mathscr{L}^\alpha_T} \lesssim \|u\|_\alpha.
\]
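The objects in Lemma 11 are easy to realize as Fourier multipliers. The sketch below (our own illustration; the grid size and test function are arbitrary) implements $P_t$ on the one-dimensional torus via FFT and checks the two defining properties: the semigroup law $P_tP_s = P_{t+s}$ and the heat equation $\partial_t P_tu = \Delta P_tu$.

```python
import numpy as np

N = 256
x = 2*np.pi*np.arange(N)/N
k = np.fft.fftfreq(N, d=1.0/N)            # integer Fourier modes

def P(f, t):
    """Heat semigroup on the torus: damp the k-th mode by exp(-t*k**2)."""
    return np.real(np.fft.ifft(np.exp(-t*k**2)*np.fft.fft(f)))

u = np.cos(3*x) + 0.5*np.sin(7*x)

# semigroup law P_t P_s = P_{t+s}
semigroup_err = np.abs(P(P(u, 0.1), 0.2) - P(u, 0.3)).max()

# P_t u solves the heat equation: time difference quotient vs. Laplacian
t, h = 0.1, 1e-5
dPdt = (P(u, t + h) - P(u, t - h))/(2*h)
lap = np.real(np.fft.ifft(-(k**2)*np.fft.fft(P(u, t))))
heat_err = np.abs(dPdt - lap).max()
print(semigroup_err, heat_err)            # both tiny
```

The Duhamel integral $Jf$ can be discretized the same way; the multiplier $e^{-t|k|^2}$ makes the $2^{2j}$ gain of regularity on the $j$-th block (and hence the Schauder estimate) transparent.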

Bibliographic notes. For a gentle introduction to Littlewood–Paleytheory and Besov spaces see the recent monograph [1], where most of ourresults are taken from. There the case of tempered distributions on Rd isconsidered. The theory on the torus is developed in [31]. The Schauderestimates for the heat semigroup are classical and can be found in [14, 16].

Chapter 4

Diffusion in a random environment

Let us consider the following $d$-dimensional homogenization problem. Fix $\varepsilon>0$ and let $u_\varepsilon : \mathbb{R}_+\times\mathbb{T}^d \to \mathbb{R}$ be the solution to the Cauchy problem
\[
\partial_t u_\varepsilon(t,x) = \Delta u_\varepsilon(t,x) + \varepsilon^{-\alpha}V(x/\varepsilon)u_\varepsilon(t,x), \qquad u_\varepsilon(0) = u_0, \tag{4.1}
\]

where $V : \mathbb{T}^d_\varepsilon \to \mathbb{R}$ is a random field defined on the rescaled torus $\mathbb{T}^d_\varepsilon = (\mathbb{R}/(2\pi\varepsilon^{-1}\mathbb{Z}))^d$. This model describes the diffusion of particles in a random medium (replacing $\partial_t$ by $i\partial_t$ gives the Schrödinger equation of a quantum particle evolving in a random potential). For a review of related results, the reader may have a look at the recent paper of Bal and Gu [2]. The limit $\varepsilon\to 0$ corresponds to looking at the large scale behavior of the model, since (4.1) can be understood as the equation for the macroscopic density $u_\varepsilon(t,x) = u(t/\varepsilon^2, x/\varepsilon)$, which corresponds to a microscopic density $u : \mathbb{R}_+\times\mathbb{T}^d_\varepsilon \to \mathbb{R}$ evolving according to the parabolic equation
\[
\partial_t u(t,x) = \Delta u(t,x) + \varepsilon^{2-\alpha}V(x)u(t,x), \qquad u(0,\cdot) = u_0(\varepsilon\,\cdot).
\]
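The macroscopic/microscopic correspondence is a pure chain-rule computation, which can be verified symbolically. Here is a sketch with sympy (our own check, for $d=1$ and an arbitrarily chosen concrete profile $U$; the potential $V$ stays symbolic):

```python
import sympy as sp

t, x, s, y = sp.symbols('t x s y')
eps, a = sp.symbols('epsilon alpha', positive=True)
V = sp.Function('V')

# an arbitrary concrete microscopic profile U(s, y), just to exercise the chain rule
U = sp.exp(-s)*sp.cos(y) + s**2*sp.sin(2*y)

# microscopic residual: dU/ds - d^2U/dy^2 - eps^(2-alpha) V(y) U
micro = sp.diff(U, s) - sp.diff(U, y, 2) - eps**(2 - a)*V(y)*U

# macroscopic density u_eps(t, x) = U(t/eps^2, x/eps)
ue = U.subs({s: t/eps**2, y: x/eps})

# macroscopic residual: du_eps/dt - d^2u_eps/dx^2 - eps^(-alpha) V(x/eps) u_eps
macro = sp.diff(ue, t) - sp.diff(ue, x, 2) - eps**(-a)*V(x/eps)*ue

# the two residuals agree after rescaling by eps^2
check = sp.simplify(eps**2*macro - micro.subs({s: t/eps**2, y: x/eps}))
print(check)   # 0
```

So a solution of the microscopic equation, viewed through the parabolic rescaling, solves (4.1) exactly.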

Slightly abusing notation, we do not index $u$ or $V$ by $\varepsilon$, although of course they depend on it. We assume that $V : \mathbb{T}^d_\varepsilon \to \mathbb{R}$ is Gaussian with mean zero and homogeneous correlation function $C_\varepsilon$ given by
\[
C_\varepsilon(x-y) = \mathbb{E}[V(x)V(y)] = (\varepsilon/\sqrt{2\pi})^d \sum_{k\in\varepsilon\mathbb{Z}^d} e^{i\langle x-y,k\rangle}R(k).
\]

On $R : \mathbb{R}^d \to \mathbb{R}_+$ we make the following hypothesis: for some $\beta\in(0,d]$ we have $R(k) = |k|^{\beta-d}\tilde R(k)$, where $\tilde R\in\mathscr{S}(\mathbb{R}^d)$ is a smooth radial function of rapid decay. For $\beta < d$ it would be equivalent to require that spatial correlations (in the limit $\varepsilon\to 0$) decay as $|x|^{-\beta}$. For $\beta = d$ this hypothesis


means that spatial correlations are of rapid decay. Indeed, by dominated convergence
\[
\lim_{\varepsilon\to0}C_\varepsilon(x)
= \int_{\mathbb{R}^d}\frac{\mathrm{d}k}{(2\pi)^{d/2}}\,e^{i\langle x,k\rangle}R(k)
= \int_{\mathbb{R}^d}\frac{\mathrm{d}k}{(2\pi)^{d/2}}\,e^{i\langle x,k\rangle}|k|^{\beta-d}\tilde R(k)
= (2\pi)^{d/2}\big(\mathscr{F}^{-1}_{\mathbb{R}^d}(|\cdot|^{\beta-d}) * \mathscr{F}^{-1}_{\mathbb{R}^d}(\tilde R)\big)(x).
\]
Here we applied the formula of Exercise 3, which also holds for the Fourier transform on $\mathbb{R}^d$. Now $\mathscr{F}^{-1}_{\mathbb{R}^d}(\tilde R)\in\mathscr{S}(\mathbb{R}^d)$ and $\mathscr{F}^{-1}_{\mathbb{R}^d}(|\cdot|^{\beta-d})(x) \simeq |x|^{-\beta}$ if $0<\beta<d$ (see for example Proposition 1.29 of [1]), so $\lim_{\varepsilon\to0}|C_\varepsilon(x)| \lesssim |x|^{-\beta}$ as $|x|\to+\infty$.

Let us write $V_\varepsilon(x) = \varepsilon^{-\alpha}V(x/\varepsilon)$, so that (4.1) can be rewritten as $\partial_t u_\varepsilon = \Delta u_\varepsilon + V_\varepsilon u_\varepsilon$, and let us compute the variance of the Littlewood–Paley blocks of $V_\varepsilon$.

To perform the computations more easily, we introduce a family of centered complex Gaussian random variables $\{g(k)\}_{k\in\varepsilon\mathbb{Z}^d}$ such that $g(k)^* = g(-k)$ and $\mathbb{E}[g(k)g(k')] = \mathbb{1}_{k+k'=0}$, and represent $V_\varepsilon(x)$ as
\[
V_\varepsilon(x) = \frac{\varepsilon^{d/2-\alpha}}{(\sqrt{2\pi})^{d/2}}\sum_{k\in\varepsilon\mathbb{Z}^d}e^{i\langle x,k/\varepsilon\rangle}\sqrt{R(k)}\,g(k).
\]
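One can sanity-check this spectral representation numerically: sampling the $g(k)$ and averaging $V_\varepsilon(x)V_\varepsilon(y)$ over many realizations should reproduce $\varepsilon^{-2\alpha}C_\varepsilon((x-y)/\varepsilon)$. A Monte Carlo sketch for $d=1$ (our own choices throughout: $R(k)=e^{-k^2}$, i.e. $\beta=d$, an arbitrary $\alpha$ and $\varepsilon$, a lattice truncation where $R$ is negligible, and the convention that the $k=0$ mode is a real standard normal):

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha, eps = 1, 0.4, 0.5
R = lambda k: np.exp(-k**2)              # beta = d: rapidly decaying correlations
kpos = eps*np.arange(1, 41)              # positive half of eps*Z, truncated where R is tiny
n = 50_000                               # number of field realizations

# complex Gaussians with g(-k) = g(k)*, g(0) real, E[g(k)g(k')] = 1_{k+k'=0}
gp = (rng.standard_normal((n, kpos.size))
      + 1j*rng.standard_normal((n, kpos.size)))/np.sqrt(2)
g0 = rng.standard_normal(n)
c = eps**(d/2 - alpha)/(2*np.pi)**(d/4)

def field(x):
    # V_eps(x) = c * sum_{k in eps*Z} e^{i x k/eps} sqrt(R(k)) g(k); pair k with -k
    osc = np.exp(1j*x*kpos/eps)*np.sqrt(R(kpos))
    return c*(np.sqrt(R(0.0))*g0 + 2*np.real(gp @ osc))

x1, x2 = 0.3, 1.0
emp = np.mean(field(x1)*field(x2))       # Monte Carlo E[V_eps(x1) V_eps(x2)]

kfull = eps*np.arange(-40, 41)
target = eps**(-2*alpha)*(eps/np.sqrt(2*np.pi))**d \
    * np.sum(np.cos((x1 - x2)/eps*kfull)*R(kfull))
print(emp, target)                       # close up to Monte Carlo error
```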

Lemma 12. Assume $\beta - 2\alpha > 0$. For any $\varepsilon > 0$, $i \ge 0$, and any $0 \le \kappa \le \beta-2\alpha$ we have
\[
\mathbb{E}[|\Delta_i V_\varepsilon(x)|^2] \lesssim 2^{(2\alpha+\kappa)i}\varepsilon^\kappa.
\]
This estimate implies that if $\beta > 2\alpha$, then for all $\delta > 0$ we have $V_\varepsilon \to 0$ in $L^2(\Omega; B^{-\alpha-\delta}_{2,2}(\mathbb{T}^d))$ as $\varepsilon\to 0$.

Proof. A spectral computation gives
\[
\Delta_i V_\varepsilon(x) = \frac{\varepsilon^{d/2-\alpha}}{(\sqrt{2\pi})^{d/2}}\sum_{k\in\varepsilon\mathbb{Z}^d}e^{i\langle x,k/\varepsilon\rangle}\rho_i(k/\varepsilon)\sqrt{R(k)}\,g(k),
\]
so
\[
\mathbb{E}[|\Delta_i V_\varepsilon(x)|^2]
= \varepsilon^d(\sqrt{2\pi})^{-d}\varepsilon^{-2\alpha}\sum_{k\in\varepsilon\mathbb{Z}^d}\rho_i(k/\varepsilon)^2R(k)
= (\sqrt{2\pi})^{-d}\varepsilon^{d-2\alpha}\sum_{k\in\varepsilon\mathbb{Z}^d}\rho(k/(\varepsilon2^i))^2R(k)
\lesssim \varepsilon^{d-2\alpha}2^{id}\sup_{k\in\varepsilon2^i\mathscr{A}}R(k), \tag{4.2}
\]
where $\mathscr{A}$ is the annulus in which $\rho$ is supported. Now recall that $\beta \le d$, so that $(\varepsilon2^i)^{\beta-d} \ge 1$ whenever $\varepsilon2^i \le 1$, which leads to $\mathbb{E}[|\Delta_i V_\varepsilon(x)|^2] \lesssim 2^{id}\varepsilon^{d-2\alpha}(\varepsilon2^i)^{\beta-d} = \varepsilon^{\beta-2\alpha}2^{i\beta}$ in that case. The assumption $\beta-2\alpha>0$ then implies $\mathbb{E}[|\Delta_i V_\varepsilon(x)|^2] \lesssim 2^{(2\alpha+\kappa)i}\varepsilon^\kappa$ for any $0\le\kappa\le\beta-2\alpha$. In the case $\varepsilon2^i > 1$ we use that $\int_{\mathbb{R}^d}R(k)\,\mathrm{d}k < +\infty$ to estimate
\[
\varepsilon^d\sum_{k\in\varepsilon\mathbb{Z}^d}\rho(k/(\varepsilon2^i))^2R(k) \le \varepsilon^d\sum_{k\in\mathbb{Z}^d}R(\varepsilon k) \lesssim \int_{\mathbb{R}^d}R(k)\,\mathrm{d}k < +\infty,
\]
and then $\mathbb{E}[|\Delta_i V_\varepsilon(x)|^2] \lesssim \varepsilon^{-2\alpha} \lesssim 2^{2\alpha i}(\varepsilon2^i)^\kappa$ for any small $\kappa > 0$. $\Box$
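The bound of Lemma 12 can be probed numerically without any sampling, since $\mathbb{E}[|\Delta_iV_\varepsilon(x)|^2]$ is an explicit lattice sum. A sketch for $d=1$, with a sharp annulus in place of $\rho_i$ and our own choices $\beta = 0.8$, $\alpha = 0.2$, $\tilde R(k) = e^{-k^2}$:

```python
import numpy as np

d, alpha, beta = 1, 0.2, 0.8
kappa = beta - 2*alpha                    # largest admissible kappa
R = lambda k: np.abs(k)**(beta - d)*np.exp(-k**2)

def block_variance(i, eps):
    # E[|Delta_i V_eps(x)|^2] = eps^(d-2a) (2 pi)^(-d/2) sum_{k in eps*Z_0} rho_i(k/eps)^2 R(k),
    # with rho_i replaced by the sharp annulus 2^(i-1) <= |m| < 2^i, k = eps*m
    m = np.arange(2**(i - 1), 2**i)
    return eps**(d - 2*alpha)/np.sqrt(2*np.pi)*2*np.sum(R(eps*m))

ratios = [block_variance(i, eps)/(2**((2*alpha + kappa)*i)*eps**kappa)
          for i in range(1, 11) for eps in [2.0**(-12), 2.0**(-14)]
          if eps*2**i <= 1]
print(max(ratios))   # stays bounded, of order one, as the lemma predicts
```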

Remark 4. Using Gaussian hypercontractivity, we get from Lemma 12 that
\[
\mathbb{E}[|\Delta_i V_\varepsilon(x)|^{2p}] \lesssim \mathbb{E}[|\Delta_i V_\varepsilon(x)|^2]^p \lesssim 2^{(2\alpha+\kappa)pi}\varepsilon^{\kappa p}
\]
whenever $p\ge1$, and therefore
\[
\lim_{\varepsilon\to0}\mathbb{E}[\|V_\varepsilon\|^{2p}_{B^{-\alpha-\delta}_{2p,2p}}]
= \lim_{\varepsilon\to0}\sum_{i\ge-1}2^{i(-\alpha-\delta)2p}\int_{\mathbb{T}^d}\mathbb{E}[|\Delta_i V_\varepsilon(x)|^{2p}]\,\mathrm{d}x = 0
\]
whenever $\delta > 0$. By the Besov embedding theorem, this shows that for all $p,\delta>0$
\[
\lim_{\varepsilon\to0}\mathbb{E}[\|V_\varepsilon\|^p_{\mathscr{C}^{-\alpha-\delta}}] = 0.
\]
Slightly improving the computation carried out in (4.2), we can also see that if $\beta-2\alpha<0$, then $V_\varepsilon$ essentially does not converge in any reasonable sense, since the variance of its Littlewood–Paley blocks explodes.

Remark 5. The same calculation as in (4.2) shows that $\mathbb{E}[\Delta_i V_\varepsilon(x)\Delta_j V_\varepsilon(x)] = 0$ whenever $|i-j| > 1$, because in that case $\rho_i\rho_j \equiv 0$.

The previous analysis shows that it is reasonable to take $\alpha \le \beta/2$ in order to have some hope of obtaining a well defined limit as $\varepsilon\to0$. In this case $V_\varepsilon$ stays bounded in probability (at least) in spaces of distributions of regularity $-\alpha-$. This brings us to the problem of obtaining estimates for the parabolic PDE
\[
\mathscr{L}u_\varepsilon(t,x) = (\partial_t-\Delta)u_\varepsilon(t,x) = V_\varepsilon(x)u_\varepsilon(t,x), \qquad (t,x)\in[0,T]\times\mathbb{T}^d,
\]
depending only on negative regularity norms of $V_\varepsilon$. On the one hand, the regularity of $u_\varepsilon$ is limited by the regularity of the right hand side, which cannot be better than that of $V_\varepsilon$. On the other hand, the product of $V_\varepsilon$ with $u_\varepsilon$ can cause problems, since we try to multiply an (a priori) irregular object with one of limited regularity.

Assume that $V_\varepsilon$ converges to zero in $\mathscr{C}^{\gamma-2}$ for some $\gamma>0$. It is then reasonable to expect that also $V_\varepsilon u_\varepsilon \in C_T\mathscr{C}^{\gamma-2}$, uniformly in $\varepsilon>0$, and that $u_\varepsilon\in C_T\mathscr{C}^\gamma$ as a consequence of the regularizing effect of the heat operator (Lemma 11). We will see in Section 5.1 below that the product $V_\varepsilon u_\varepsilon$ is under control only if $\gamma+\gamma-2>0$, that is if $\gamma>1$. If $V_\varepsilon\to0$ in $\mathscr{C}^{-1+}$, it is not difficult to show that $u_\varepsilon$ converges as $\varepsilon\to0$ to the solution $u$ of the linear equation $\mathscr{L}u=0$ (for example this will follow from our analysis below, but in fact it is much simpler to show). In this case the random potential has no effect in the limit.

The interesting situation is therefore $\gamma \le 1$. To understand what can happen in this case, let us use a simple transformation of the solution. Write $u_\varepsilon = \exp(X_\varepsilon)v_\varepsilon$, where $X_\varepsilon$ satisfies the equation $\mathscr{L}X_\varepsilon = V_\varepsilon$ with initial condition $X_\varepsilon(0,\cdot)=0$. Then
\[
\mathscr{L}u_\varepsilon = \exp(X_\varepsilon)\big(v_\varepsilon\mathscr{L}X_\varepsilon + \mathscr{L}v_\varepsilon - v_\varepsilon|\partial_x X_\varepsilon|^2 - 2\langle\partial_x X_\varepsilon,\partial_x v_\varepsilon\rangle_{\mathbb{R}^d}\big) = \exp(X_\varepsilon)v_\varepsilon V_\varepsilon.
\]
Since $\exp(X_\varepsilon) > 0$ on $[0,T]\times\mathbb{T}^d$, this implies that $v_\varepsilon$ satisfies
\[
\mathscr{L}v_\varepsilon - v_\varepsilon|\partial_x X_\varepsilon|^2 - 2\langle\partial_x X_\varepsilon,\partial_x v_\varepsilon\rangle_{\mathbb{R}^d} = 0, \qquad (t,x)\in[0,T]\times\mathbb{T}^d.
\]
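The algebra behind this exponential (Cole–Hopf-type) transformation is easy to verify symbolically. A sketch in one space dimension with sympy (our own check; $X$ and $v$ are generic functions):

```python
import sympy as sp

t, x = sp.symbols('t x')
X = sp.Function('X')(t, x)
v = sp.Function('v')(t, x)

L = lambda f: sp.diff(f, t) - sp.diff(f, x, 2)   # heat operator L = d_t - d_x^2

u = sp.exp(X)*v
# claimed identity: L(e^X v) = e^X (v L X + L v - v (d_x X)^2 - 2 d_x X d_x v)
rhs = sp.exp(X)*(v*L(X) + L(v) - v*sp.diff(X, x)**2
                 - 2*sp.diff(X, x)*sp.diff(v, x))
residual = sp.simplify(sp.expand(L(u) - rhs))
print(residual)   # 0
```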

Our Schauder estimates imply that $X_\varepsilon = JV_\varepsilon \in C_T\mathscr{C}^\gamma$ with uniform bounds in $\varepsilon>0$, so the problematic term is $|\partial_x X_\varepsilon|^2$, whose existence this estimate does not guarantee.

Note that $J(e^{i\langle\cdot,k\rangle})(t,x) = e^{i\langle x,k\rangle}(1-e^{-t|k|^2})/|k|^2$, which yields
\[
\partial_x X_\varepsilon(t,x) = \frac{\varepsilon^{d/2-\alpha}}{(\sqrt{2\pi})^{d/2}}\sum_{k\in\varepsilon\mathbb{Z}^d_0}e^{i\langle x,k/\varepsilon\rangle}G_\varepsilon(t,k)g(k), \tag{4.3}
\]
where $\mathbb{Z}^d_0 = \mathbb{Z}^d\setminus\{0\}$ and
\[
G_\varepsilon(t,k) = i\,\frac{k}{\varepsilon}\,\frac{1-e^{-t|k/\varepsilon|^2}}{|k/\varepsilon|^2}\,\sqrt{R(k)}.
\]

Lemma 13. Assume that
\[
\sigma^2 = (\sqrt{2\pi})^{-d}\int_{\mathbb{R}^d}\frac{R(k)}{|k|^2}\,\mathrm{d}k < +\infty.
\]
Then for $\alpha = 1$ and $t>0$ we have
\[
\lim_{\varepsilon\to0}\mathbb{E}[|\partial_x X_\varepsilon|^2(t,x)] = \sigma^2,
\]
while for $\alpha < 1$ and $t>0$
\[
\lim_{\varepsilon\to0}\mathbb{E}[|\partial_x X_\varepsilon|^2(t,x)] = 0.
\]
Moreover,
\[
\mathrm{Var}[\Delta_q(|\partial_x X_\varepsilon|^2)(t,x)] \lesssim \varepsilon^{4-4\alpha}\min\big(\sigma^4,\,(\varepsilon2^q)^{\beta-2}\|\tilde R\|_\infty\sigma^2\big).
\]


Proof. A computation similar to the one leading to (4.2) gives
\begin{align*}
\mathbb{E}[|\partial_x X_\varepsilon|^2(t,x)]
&= \varepsilon^d(\sqrt{2\pi})^{-d}\varepsilon^{-2\alpha}\sum_{k\in\varepsilon\mathbb{Z}^d_0}|k/\varepsilon|^2\Big[\int_0^t e^{-(t-s)|k/\varepsilon|^2}\,\mathrm{d}s\Big]^2R(k) \\
&= \varepsilon^d(\sqrt{2\pi})^{-d}\varepsilon^{2-2\alpha}\sum_{k\in\varepsilon\mathbb{Z}^d_0}\frac{[1-e^{-t|k/\varepsilon|^2}]^2}{|k|^2}R(k),
\end{align*}
which for $\varepsilon\to0$, $t>0$, and $\alpha\le1$ tends to
\[
\lim_{\varepsilon\to0}\mathbb{E}[|\partial_x X_\varepsilon|^2(t,x)] = \mathbb{1}_{\alpha=1}(\sqrt{2\pi})^{-d}\int_{\mathbb{R}^d}\frac{R(k)}{|k|^2}\,\mathrm{d}k = \mathbb{1}_{\alpha=1}\sigma^2.
\]

Let us now study the variance of $|\partial_x X_\varepsilon|^2(t,x)$. Using (4.3), we have
\[
\Delta_q(|\partial_x X_\varepsilon|^2)(t,x)
= \frac{\varepsilon^{d-2\alpha}}{(2\pi)^{d/2}}\sum_{k_1,k_2\in\varepsilon\mathbb{Z}^d_0}e^{i\langle k_1+k_2,x/\varepsilon\rangle}\rho_q((k_1+k_2)/\varepsilon)\,G_\varepsilon(t,k_1)\cdot G_\varepsilon(t,k_2)\,g(k_1)g(k_2).
\]
By Wick's theorem ([22], Theorem 1.28),
\begin{align*}
\mathrm{Cov}(g(k_1)g(k_2),\,g(k_1')g(k_2'))
&= \mathbb{E}[g(k_1)g(k_1')]\mathbb{E}[g(k_2)g(k_2')] + \mathbb{E}[g(k_1)g(k_2')]\mathbb{E}[g(k_2)g(k_1')] \\
&= \mathbb{1}_{k_1+k_1'=k_2+k_2'=0} + \mathbb{1}_{k_1+k_2'=k_2+k_1'=0},
\end{align*}
which implies
\[
\mathrm{Var}[\Delta_q(|\partial_x X_\varepsilon|^2)(t,x)]
= \frac{2\varepsilon^{2d-4\alpha}}{(2\pi)^d}\sum_{k_1,k_2\in\varepsilon\mathbb{Z}^d_0}\rho_q((k_1+k_2)/\varepsilon)^2\,|G_\varepsilon(t,k_1)|^2|G_\varepsilon(t,k_2)|^2.
\]

For any $q \ge 0$ (the case $q=-1$ is left to the reader), the variables $k_1$ and $k_2$ are bounded away from $0$ and we have
\[
\mathrm{Var}[\Delta_q(|\partial_x X_\varepsilon|^2)(t,x)] \lesssim \varepsilon^{2d+4-4\alpha}\sum_{k_1,k_2\in\varepsilon\mathbb{Z}^d_0}\rho_q((k_1+k_2)/\varepsilon)^2\,\frac{|R(k_1)|\,|R(k_2)|}{|k_1|^2\,|k_2|^2}.
\]
A first estimate is obtained by simply dropping the factor $\rho_q((k_1+k_2)/\varepsilon)^2$, which results in the bound
\[
\mathrm{Var}[\Delta_q(|\partial_x X_\varepsilon|^2)(t,x)] \lesssim \varepsilon^{2d+4-4\alpha}\sum_{k_1,k_2\in\varepsilon\mathbb{Z}^d_0}\frac{|R(k_1)|\,|R(k_2)|}{|k_1|^2\,|k_2|^2} \lesssim \varepsilon^{4-4\alpha}\sigma^4.
\]


Another estimate is obtained by taking into account the constraint coming from the support of $\rho_q((k_1+k_2)/\varepsilon)$. In order to have $k_1+k_2 \sim \varepsilon2^q$ we must have $|k_2| \lesssim |k_1| \sim \varepsilon2^q$ or $\varepsilon2^q \lesssim |k_1| \sim |k_2|$. In the first case
\[
\varepsilon^{2d+4-4\alpha}\sum_{k_1,k_2\in\varepsilon\mathbb{Z}^d_0}\mathbb{1}_{|k_2|\lesssim|k_1|\sim\varepsilon2^q}\,\frac{|R(k_1)|\,|R(k_2)|}{|k_1|^2\,|k_2|^2}
\lesssim 2^{q(\beta-2)}\varepsilon^{d+\beta+2-4\alpha}\|\tilde R\|_\infty\sum_{k_2\in\varepsilon\mathbb{Z}^d_0}\mathbb{1}_{|k_2|\lesssim\varepsilon2^q}\,\frac{|R(k_2)|}{|k_2|^2}
\]
