MSc Mathematics

Master Thesis

Wellposedness of Stochastic Differential Equations in Infinite Dimensions

Author: Colin Groot
Supervisor: dr. S. G. Cox

Examination date: August 16, 2017

Korteweg-de Vries Institute for Mathematics


Abstract

We investigate the wellposedness of stochastic differential equations in infinite dimensions, following the variational approach given in Liu and Röckner (Stochastic Partial Differential Equations: An Introduction, Springer, 2015). We look at the existence and uniqueness of (variational) solutions to stochastic differential equations driven by an infinite-dimensional standard cylindrical Wiener process.

The results we prove require some preliminary knowledge on Bochner integrals and probability and martingale theory in infinite-dimensional Banach spaces, amongst other results. We will also introduce the stochastic integral with respect to a Q-Wiener process and a standard cylindrical Wiener process.

After that, we sketch the setting for the main result: we discuss the Gelfand triple and sketch the general setting of the existence and uniqueness result. We impose conditions on the coefficients of the stochastic differential equation, namely hemicontinuity, boundedness, coercivity and weak monotonicity.

The proof of existence relies on the existence of strong solutions for finite-dimensional SDEs and weak convergence results. The uniqueness follows from an integration-by-parts argument.

Title: Wellposedness of Stochastic Differential Equations in Infinite Dimensions
Author: Colin Groot, [email protected], 10283781
Supervisor: dr. S. G. Cox
Second Examiner: prof. dr. P. J. C. Spreij
Examination date: August 16, 2017

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park 105-107, 1098 XG Amsterdam
http://kdvi.uva.nl


Contents

Introduction 5

1 Preliminaries 8
1.1 Functional analysis 8
1.1.1 Hilbert spaces 9
1.1.1.1 Properties of Hilbert spaces 10
1.1.1.2 Symmetric and nonnegative operators 11
1.1.2 Hilbert-Schmidt and finite trace operators, eigenvectors and eigenvalues 11
1.1.2.1 Finite trace operators 13
1.1.2.2 Eigenvectors and eigenvalues 14
1.1.3 Reflexivity, weak convergence and weak* convergence 15
1.1.3.1 Weak topology and weak convergence 16
1.1.3.2 Weak* topology and weak* convergence 19
1.1.3.3 Strong operator topology on L(X) 20
1.2 Bochner spaces and Bochner integral 20
1.2.1 Bochner integral 20
1.2.2 Bochner spaces 23
1.3 A short overview of results in measure, probability and real-valued (local) martingale theory 24
1.4 A brief summary of properties of functions 25
1.4.1 Lower semi-continuity 26
1.4.2 Functions of bounded variation 26

2 Stochastic integration in Hilbert spaces 27
2.1 Probability and martingale theory in Banach spaces 27
2.1.1 Probability theory: Gaussian measures on Hilbert spaces 27
2.1.2 Random variables and stochastic processes 28
2.1.2.1 Random variables 28
2.1.2.2 Stochastic processes 28
2.1.2.3 Progressive measurability 29
2.1.3 Martingale theory in Banach spaces 30
2.1.3.1 Conditional expectation of Banach space-valued random variables 30
2.1.3.2 Martingale theory in Banach spaces 30
2.2 Q-Wiener processes 31
2.3 Stochastic integral with respect to a Q-Wiener process and properties of the stochastic integral 33
2.3.1 Stochastic integral with respect to a Q-Wiener process 33
2.3.2 Properties of the stochastic integral 37
2.4 Q-cylindrical Wiener processes 39
2.4.1 Stochastic integral with respect to standard cylindrical Wiener processes 40
2.5 A very important remark regarding the integrands of the stochastic integral 43

3 Main result 44
3.1 General setting of main result 44
3.1.1 Gelfand triple 44
3.1.2 Setting and conditions on the coefficients 46
3.2 Formulation of the main result (Theorem 3.2.3) 49
3.3 Auxiliary results 50
3.3.1 Approximation of a stochastic process (Lemma 3.3.1) 50
3.3.2 A special case of Itô's formula (Theorem 3.3.17) 57
3.4 Existence and uniqueness of strong solutions to SDEs in finite dimensions 80
3.5 Proof of the main result 81
3.5.1 Preparations for the proof of Theorem 3.2.3 82
3.5.2 Weak convergence results 89
3.5.3 Proof of Theorem 3.2.3 92
3.6 Examples 112

Populaire samenvatting (popular summary, in Dutch) 114


Introduction

A historical context on stochastic calculus (up to around 1970) can be found in [19]. An overview of the results on stochastic calculus from 1950 onwards is given in [25].

The foundation of stochastic calculus is Brownian motion. This process is named after the botanist Robert Brown, who first observed the erratic movement of pollen grains suspended in water in 1827 ([32], p. 1). Back in 1900, Louis Jean-Baptiste Alphonse Bachelier, who "is seen by many as the founder of modern Mathematical Finance" ([19], p. 1), created a model of Brownian motion in his thesis Théorie de la Spéculation to describe the dynamic behavior of the Paris stock market.

The first person to provide a mathematical foundation of Brownian motion was Norbert Wiener in 1923. In recognition of his work, his construction of Brownian motion is often referred to as the Wiener process ([19], p. 2). Later on in this thesis, we will also encounter processes that are named after him (see Chapter 2). Similarly, one cannot discuss stochastic integration without mentioning Kiyosi Itô, "the father of stochastic integration" ([19], p. 3), who developed the theory of stochastic differential equations. Or, as it is put in [25], page 7,

Itô's most important contribution is not to have defined stochastic integrals [...] but to have developed their calculus (this is the famous "Itô's formula", which expresses how this integral differs from the ordinary integral) and especially to have used them to develop a very complete theory of stochastic differential equations – in a style so luminous by the way that these old articles have not aged.

We will later see that the famous Itô isometry also holds, in a slightly modified version, for stochastic integrals taking values in Hilbert spaces (Chapter 2, Proposition 2.3.3). Moreover, we will derive a special case of Itô's formula (Theorem 3.3.17).

Stochastic (partial) differential equations (from now on: S(P)DEs) are used to model the dynamics in man-made systems and in nature. One can think of the famous Black-Scholes formula, which estimates the right price for a European call option, or the stochastic heat equation, describing the distribution of heat in a given region over time, see Example 3.6.4 below.

S(P)DEs in R and Rn have been extensively studied and are known to admit solutions when the coefficients satisfy, for example, some Lipschitz continuity conditions. The proof in that case hinges on a fixed-point argument; compare Theorem 10.2 in [33].
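Although the existence proofs in this thesis are functional-analytic, the finite-dimensional Lipschitz case can at least be illustrated numerically. The following sketch (our own; the helper `euler_maruyama` and the Ornstein-Uhlenbeck-type coefficients are illustrative choices, not part of the thesis) discretizes dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme:

```python
import math
import random

def euler_maruyama(a, b, x0, T=1.0, n=1000, seed=0):
    """Simulate one path of dX = a(X) dt + b(X) dW on [0, T]
    with n Euler-Maruyama steps (illustrative only)."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    path = [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dw
        path.append(x)
    return path

# Globally Lipschitz coefficients, so a (strong) solution exists.
path = euler_maruyama(a=lambda x: -x, b=lambda x: 0.5, x0=1.0)
```

For Lipschitz coefficients this scheme converges to the strong solution as the step size shrinks; the infinite-dimensional setting below requires different tools.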

In this thesis, we will discuss S(P)DEs in infinite dimensions where the coefficients generally do not satisfy a Lipschitz continuity condition, so we need to expand our toolbox to show that, under suitable conditions, S(P)DEs admit a solution. We follow the approach of Liu and Röckner [23].

However, before we can derive the existence and uniqueness of solutions, we need to develop the framework to do so. We will now discuss the structure of this thesis. In Chapter 1, we will discuss preliminary knowledge from different branches of mathematics. The approach we take in proving the existence of solutions depends heavily on functional analysis, so a large part of Chapter 1 will entail a review of results from functional analysis. Topics that we will encounter are weak convergence, the Banach-Alaoglu Theorem and Hilbert-Schmidt operators.

Moreover, we will introduce the Bochner integral in Chapter 1 (see Section 1.2), which extends the notion of the Lebesgue integral to (separable) Banach spaces. We give a short overview of results in measure and probability theory, recall some necessary tools regarding stochastic calculus (Section 1.3), and end Chapter 1 with results on lower semi-continuous functions and functions of bounded variation.


In Chapter 2, we develop probability and martingale theory in Banach spaces (Section 2.1) and infinite-dimensional Wiener processes. After this, we will introduce infinite-dimensional stochastic calculus.

We continue with the definition of a cylindrical Wiener process and develop the stochastic integral with respect to such a process (Section 2.4), using the construction of the stochastic integral with respect to Q-Wiener processes we developed earlier. We end Chapter 2 with an important remark regarding the integrands of the stochastic integral.

The main topic of this thesis will be discussed in Chapter 3. Let us quickly sketch the setting of the main result, which will be discussed in further detail in Section 3.1. We work on a filtered probability space (Ω, F, {F_t : t ∈ [0, T]}, P) and assume that the underlying filtration is normal, i.e. satisfies the usual conditions. Consider a separable Banach space V with dual space V∗ and a separable Hilbert space H. Suppose that the embeddings

V → H : v ↦ v,  H → V∗ : h ↦ h∗|V

are dense and continuous; here, we denote by h∗ the map h∗ : H → R : x ↦ 〈h, x〉H. With these two embeddings, we have "V ⊂ H ⊂ V∗", which is known as a Gelfand triple.

Let U be another separable Hilbert space and denote by L2(U,H) the space of Hilbert-Schmidt operators from U to H (this will be discussed at greater length in Section 1.1.2). Consider the following maps (compare (3.6) and (3.7) on page 46 below)

A : [0, T] × V × Ω → V∗,

B : [0, T] × V × Ω → L2(U, H).

These maps satisfy certain conditions that can be found in Condition 3.1.3. We consider the SDE, which we will label with (3.14) to correspond to the same SDE (3.14) discussed later on,

dX(t) = A(t, X(t)) dt + B(t, X(t)) dW(t). (3.14)

Here, the process W is a cylindrical Wiener process, which will be defined in Section 2.4 as mentioned above. We define a solution to this SDE as follows; compare Definition 3.2.1. The constant α we will encounter in the definition is due to Condition (H3) on page 46.

Definition. A process X = {X(t) : t ∈ [0, T]} is called a solution of (3.14) if X is a continuous H-valued, F-adapted process that coincides λ ⊗ P-almost everywhere with a progressively measurable V-valued process X̄ ∈ Lα([0, T] × Ω; V) ∩ L2([0, T] × Ω; H) such that for all t ∈ [0, T]

X(t) = X(0) + ∫₀ᵗ A(s, X̄(s)) ds + ∫₀ᵗ B(s, X̄(s)) dW(s)   P-almost surely.

The main topic of this thesis concerns Theorem 3.2.3: we will show that the SDE (3.14) has a solution in the sense defined above. In short, we prove the following statement.

Theorem. Let the maps A and B defined above satisfy the conditions in Condition 3.1.3. Then, there exists a solution X to the SDE (3.14). This solution is unique up to P-indistinguishability.

The main result, Theorem 3.2.3, also contains an integrability condition. Again, this is discussed later on.

In proving Theorem 3.2.3, we make use of the auxiliary results we derive in Section 3.3, with Theorem 3.3.17 in particular. The latter theorem is a special case of Itô's formula.

We will dedicate considerable space to the proofs of the auxiliary results and to the proof of our main result, using [23] as a backbone for our arguments. The arguments therein are compact and some of them deserve a deeper exploration; we provide the reader with details to help understand these compact arguments.

We will also briefly sketch the proof of the main result. The uniqueness of the solution follows from a straightforward integration-by-parts argument. The proof of the existence of a solution to the SDE (3.14) in the sense of Definition 3.2.1 essentially boils down to three points.


• SDEs admit strong solutions in finite dimensions (see Section 3.4). Since H is a separable Hilbert space, it admits a countable basis (the definition of a basis can be found in Section 1.1). We will then project the coefficients of the SDE onto the first n basis vectors and use an identification argument to obtain a strong solution for the "projected" SDE for every n ∈ N; see Section 3.5.1 and in particular Lemma 3.5.5 for an in-depth discussion.

• The sequence of strong solutions we obtain from the "projected" SDEs is uniformly bounded in a sense made more formal in Lemma 3.5.11 below. Then, we can apply (a consequence of the) Banach-Alaoglu Theorem to obtain essential weak convergence results; see Section 3.5.2 and Corollary 3.5.14 in particular.

• The inequalities (3.117) in Lemma 3.5.25 and (3.119) in Lemma 3.5.27.

We end this thesis with a few examples (Section 3.6), mostly due to [23], that fit the framework we will discuss here. One of these examples is the stochastic porous medium equation, see Example 3.6.6, describing the time evolution of the density of a substance in a porous medium.


1 Preliminaries

Before we tackle our main topic, we need to familiarize ourselves with some preliminary knowledge from different fields of mathematics, which we will do in this chapter. We will start with an assumption.

Assumption 1.0.1. All vector spaces we will encounter are assumed to be real.

1.1 Functional analysis

Let (X, ‖ · ‖X) and (Y, ‖ · ‖Y) be (real) normed vector spaces. We will write B(X) for the Borel σ-algebra on X. In addition, we will denote the collection of bounded linear operators from X to Y by L(X,Y). We will write L(X) for L(X,X) and say that A is a bounded linear operator on X if A ∈ L(X). If Y is complete, then L(X,Y) is complete as well under the usual operator norm, given by the following equivalent definitions

‖A‖L(X,Y) := sup{‖A(x)‖Y : x ∈ X, ‖x‖X ≤ 1} = sup{‖A(x)‖Y : x ∈ X, ‖x‖X = 1} = sup{‖A(x)‖Y / ‖x‖X : x ∈ X \ {0X}}

for each A ∈ L(X,Y). The dual space of X (or simply the dual), consisting of all bounded linear operators from X to the underlying scalar field R, will be denoted by X∗. As R is a Banach space itself (with respect to the absolute value on R), the dual of any normed vector space is complete. Moreover, we can express the norm of an element x ∈ X by using the dual space.

Theorem 1.1.1 (Theorem 4.3(b) in [30]). Define BX∗ := {f ∈ X∗ : ‖f‖X∗ ≤ 1}. Then, we have that ‖x‖X = sup{|f(x)| : f ∈ BX∗} for all x ∈ X.
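In finite dimensions, Theorem 1.1.1 can be seen concretely: in R², every functional of norm at most one is the inner product with a vector in the closed unit ball, so the supremum over the dual unit ball recovers the Euclidean norm. A small illustrative check (the helper names `euclid` and `dual_norm_estimate` are ours):

```python
import math
import random

def euclid(v):
    return math.sqrt(sum(c * c for c in v))

def dual_norm_estimate(x, trials=20000, seed=1):
    """Estimate sup{|f(x)| : ‖f‖ ≤ 1} in R^2, where each functional f
    acts as f(y) = <u, y> for a unit vector u (Riesz representation)."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        u = (math.cos(ang), math.sin(ang))  # unit functional
        best = max(best, abs(u[0] * x[0] + u[1] * x[1]))
    return best

x = (3.0, 4.0)
# euclid(x) and dual_norm_estimate(x) should both be close to 5.
```

The supremum is attained at f = 〈x/‖x‖, ·〉, which is why sampling the unit circle approaches ‖x‖.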

In addition, if we denote the dimension of a normed vector space X by dim(X), we have the following result.

Theorem 1.1.2 (Theorem 5.1 in [31]). Let X be a finite-dimensional normed vector space. Then, we have dim(X) = dim(X∗).

Later on, we will employ the notation

〈A, x〉X∗×X := A(x), (1.1)

for any A ∈ X∗ and any x ∈ X. This is called the dual pairing or the dual pair. Furthermore, as is customary, we will sometimes write Ax instead of A(x), just like we will sometimes denote the composition of two bounded linear operators S and T as ST instead of S ∘ T.

In a normed vector space (X, ‖ · ‖X), we will denote the norm closure of a subset B ⊂ X by B̄. For a subset B ⊂ X, we define Sp(B), the linear span of B, to be

Sp(B) := { λ1x1 + · · · + λnxn : n ∈ N, xi ∈ B and λi ∈ R for all i = 1, . . . , n }.

A collection B ⊂ X is called a basis when its linear span Sp(B) is dense in X, i.e. when the norm closure of Sp(B) equals X, and when this collection is linearly independent.

A normed vector space (X, ‖ · ‖X) is called separable if X contains a countable, dense subset. We have a useful characterization of a dense set in a Banach space, which is a consequence of the celebrated Hahn-Banach Theorem.

We have a useful characterization of a dense set in a Banach space, which is a consequence ofthe celebrated Hahn-Banach Theorem.

Proposition 1.1.3. Let X be a Banach space and let A ⊂ X. Then, the set A is dense in X if and only if every functional f ∈ X∗ that vanishes on A is the zero functional.

In addition, the separability of a normed vector space follows from the separability of its dual space.

Theorem 1.1.4 (Theorem 5.24 in [31]). Let X be a normed vector space. If X∗ is separable, then so is X.

We end this section by listing four useful results.

Lemma 1.1.5. Let S and T be bounded linear operators on a Banach space X. Suppose that S and T commute and that T is invertible. Then, the operators S and T⁻¹ commute as well.

Lemma 1.1.6. Let X and Y be Banach spaces. Suppose that the inclusion map X → Y : x ↦ x is continuous, i.e., there exists a constant α > 0 such that ‖x‖Y ≤ α‖x‖X for all x ∈ X. Let {xn : n ∈ N} ⊂ X be a sequence that is convergent in X and in Y. Then, the limits in the respective spaces coincide.

Theorem 1.1.7. Let X0 and X1 be Banach spaces. Suppose that X0 and X1 are both linear subspaces of a normed vector space X and assume that the inclusion map ιk : Xk → X : x ↦ x is continuous for k = 0, 1. Then, the intersection space X0 ∩ X1 is a Banach space when endowed with the norm

‖x‖X0∩X1 := max{ ‖x‖X0 , ‖x‖X1 }

for all x ∈ X0 ∩ X1.

Proof. See Theorem 1.3 in Chapter 3.1 (page 97) in [3].

Proposition 1.1.8. Let X and Y be Banach spaces. Then, the product space X × Y is a Banach space when endowed with the norm

‖(x, y)‖X×Y := max{ ‖x‖X , ‖y‖Y }

for all (x, y) ∈ X × Y.

1.1.1 Hilbert spaces

A Hilbert space H is a vector space endowed with an inner product 〈·, ·〉H that is complete under its inner product-induced norm, given by

‖h‖H := √〈h, h〉H

for every h ∈ H. This additional inner product structure gives Hilbert spaces useful properties.


1.1.1.1 Properties of Hilbert spaces

The previous definitions of basis and separability come together nicely when we consider a Hilbert space and want to determine whether this space is separable. Before we do that, however, we need two more definitions.

A collection E ⊂ H is called orthogonal if 〈h, h′〉H = 0 for all h, h′ ∈ E with h ≠ h′. Moreover, this collection is called orthonormal if E is orthogonal and if ‖h‖H = 1 for every h ∈ E. As a consequence of Zorn's lemma, every Hilbert space has an orthonormal basis. From the cardinality of this basis, we can determine whether a Hilbert space is separable.

Proposition 1.1.9. A Hilbert space (H, 〈·, ·〉H) is separable if and only if it contains an at most countable orthonormal basis.

Proof. See Proposition 3.4.7 in [20].

Before we proceed, let us make the following assumption.

Assumption 1.1.10. All Hilbert spaces that we will consider are assumed to be separable, unless specified otherwise.

As a consequence of Proposition 1.1.9, every finite-dimensional Hilbert space is separable. In addition, we have the next result.

Corollary 1.1.11. Let (H, 〈·, ·〉H) be a separable Hilbert space. Then, every orthogonal set in H is at most countable.

The following proposition can be shown by the Gram-Schmidt orthonormalization procedure.

Proposition 1.1.12. Let (H, 〈·, ·〉H) be a separable Hilbert space and let X ⊂ H be a dense linear subspace. Then, H has an orthonormal basis consisting of elements in X.
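The Gram-Schmidt procedure mentioned above is easy to sketch in coordinates. The following illustrative implementation (ours, not from the thesis) orthonormalizes three linearly independent vectors in R³:

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in R^n:
    subtract projections onto earlier basis vectors, then normalize."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coeff = sum(wi * bi for wi, bi in zip(w, b))  # <w, b>
            w = [wi - coeff * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# B is an orthonormal basis spanning the same subspace.
```

Applied to a countable dense subset, with a limiting argument, the same idea yields the orthonormal basis of Proposition 1.1.12.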

Every separable Hilbert space (H, 〈·, ·〉H) has an orthonormal basis {hn : n ∈ N} ⊂ H; using this basis, we can express every element h ∈ H in terms of elements from the orthonormal basis through

h = ∑n∈N 〈h, hn〉H hn. (1.2)

Note that it is not entirely clear beforehand that the series on the right-hand side converges and, when it does, that it converges to h. Luckily, it does, and a justification can be found in Section 3.4 in [31]. From this expression (1.2) follows a well-known formula, Parseval's identity.

Proposition 1.1.13 (Parseval's identity). Let (H, 〈·, ·〉H) be a separable Hilbert space and let {hn : n ∈ N} ⊂ H be an orthonormal basis for H. Then, we have that

‖h‖²H = ∑n∈N |〈h, hn〉H|² (1.3)

for any h ∈ H.
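In a finite-dimensional example, Parseval's identity (1.3) can be checked directly; here we use a rotated orthonormal basis of R² (an illustrative computation, not part of the text):

```python
import math

# Orthonormal basis of R^2: the standard basis rotated by 30 degrees.
t = math.pi / 6
basis = [(math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))]

h = (2.0, -1.0)
coeffs = [b[0] * h[0] + b[1] * h[1] for b in basis]   # <h, h_n>
lhs = h[0] ** 2 + h[1] ** 2                           # ‖h‖²
rhs = sum(c * c for c in coeffs)                      # Σ |<h, h_n>|²
# lhs and rhs both equal 5 up to rounding.
```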

This inner product-induced norm has some other special properties.

Lemma 1.1.14. Let (H, 〈·, ·〉H) be a real inner product space. Then, we have the following results.

(i) The parallelogram rule holds:

‖x + y‖²H + ‖x − y‖²H = 2 (‖x‖²H + ‖y‖²H) (1.4)

for all x, y ∈ H.


(ii) The polarization identity holds:

〈x, y〉H = 1/4 (‖x + y‖²H − ‖x − y‖²H) (1.5)

for all x, y ∈ H.

(iii) The following identity holds:

‖x − y‖²H = ‖x‖²H + ‖y‖²H − 2 〈x, y〉H (1.6)
          = ‖x‖²H − ‖y‖²H − 2 〈x − y, y〉H (1.7)

for all x, y ∈ H.
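All three identities are exact consequences of the bilinearity of the inner product, and can be sanity-checked in R³ (an illustrative computation of ours):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def nsq(x):  # squared norm ‖x‖²
    return inner(x, x)

x, y = (1.0, 2.0, -1.0), (3.0, 0.0, 4.0)
add = [a + b for a, b in zip(x, y)]
sub = [a - b for a, b in zip(x, y)]

parallelogram = nsq(add) + nsq(sub) == 2 * (nsq(x) + nsq(y))   # (1.4)
polarization = inner(x, y) == 0.25 * (nsq(add) - nsq(sub))     # (1.5)
expansion = nsq(sub) == nsq(x) + nsq(y) - 2 * inner(x, y)      # (1.6)
print(parallelogram, polarization, expansion)  # True True True
```

(The integer-valued inputs make the floating-point comparisons exact here.)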

1.1.1.2 Symmetric and nonnegative operators

Using the inner product 〈·, ·〉H on a Hilbert space H, we can classify bounded linear operators in L(H). We say that a bounded linear operator A ∈ L(H) is symmetric if

〈Ax, y〉H = 〈x,Ay〉H

for all x, y ∈ H. We say that a bounded linear operator A ∈ L(H) is nonnegative if

〈Ax, x〉H ∈ [0,∞)

for every x ∈ H. If A satisfies both criteria, then we have the following result, see e.g. Proposition 2.3.4 in [27]: a symmetric nonnegative operator A admits a "square root".

Theorem 1.1.15. Let (H, 〈·, ·〉H) be a Hilbert space and let A ∈ L(H). If A is symmetric and nonnegative, then there exists a unique symmetric and nonnegative bounded linear operator, denoted A^{1/2}, such that A^{1/2} A^{1/2} = A.

1.1.2 Hilbert-Schmidt and finite trace operators, eigenvectors and eigenvalues

Let (H, 〈·, ·〉H) and (K, 〈·, ·〉K) be Hilbert spaces and let A ∈ L(H,K). Then, there exists a unique bounded linear operator from K to H, denoted A∗, such that

〈Ax, y〉K = 〈x, A∗y〉H

for all x ∈ H and y ∈ K. This bounded linear operator A∗ is called the adjoint of the operator A ∈ L(H,K), or just the adjoint operator. The existence of such an operator is not clear beforehand — it can be shown using the Riesz-Fréchet Theorem (see Theorem 6.1 in [31]) — but the uniqueness follows directly. The following properties hold for adjoint operators, see Section 12.9 in [30].

Lemma 1.1.16. Let G, H and K be separable Hilbert spaces. Let S ∈ L(G,H) and T ∈ L(H,K). Then, the following relations hold:

(i) ‖S‖L(G,H) = ‖S∗‖L(H,G),

(ii) (TS)∗ = S∗T ∗,

(iii) (S∗)∗ = S,

(iv) (αS + βT)∗ = αS∗ + βT∗ for all α, β ∈ R (where, for this item, S and T are bounded linear operators between the same pair of spaces).
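For real matrices, the adjoint is simply the transpose, so properties such as (TS)∗ = S∗T∗ can be verified by direct computation (illustrative code of ours, not part of the thesis):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def adjoint(A):  # for real matrices the adjoint is the transpose
    return [list(row) for row in zip(*A)]

S = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]  # S : R^2 -> R^3
T = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0]]     # T : R^3 -> R^2

lhs = adjoint(matmul(T, S))            # (TS)*
rhs = matmul(adjoint(S), adjoint(T))   # S* T*
print(lhs == rhs)  # True
```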

The introduction of the adjoint operator allows us to show a result that we will use in defining Hilbert-Schmidt operators.


Lemma 1.1.17 (Lemma 3.5.27 in [20]). Let (H, 〈·, ·〉H) and (K, 〈·, ·〉K) be separable Hilbert spaces. Let {hn : n ∈ N} be an orthonormal basis of H and let {kn : n ∈ N} be an orthonormal basis of K. Let A ∈ L(H,K). Then, we have that

∑n∈N ‖Ahn‖²K = ∑n∈N ‖A∗kn‖²H,

where both series could equal infinity.

When we consider the orthonormal basis {hn : n ∈ N} of H as in the preceding Lemma, we can look at the infinite series ∑n∈N ‖Ahn‖²K. Does this expression depend on our choice of basis? Suppose that {h′n : n ∈ N} is another orthonormal basis of H; then Lemma 1.1.17 guarantees that we have

∑n∈N ‖Ahn‖²K = ∑n∈N ‖A∗kn‖²H = ∑n∈N ‖Ah′n‖²K,

so the expression does not depend on our choice of orthonormal basis. This means that we can now give the definition of a Hilbert-Schmidt operator.

Definition 1.1.18 (Hilbert-Schmidt operator). Let (H, 〈·, ·〉H) and (K, 〈·, ·〉K) be separable Hilbert spaces and let {hn : n ∈ N} be an orthonormal basis of H. Then, we say that a bounded linear operator A ∈ L(H,K) is a Hilbert-Schmidt operator if

( ∑n∈N ‖Ahn‖²K )^{1/2} (1.8)

is finite. As remarked before, note that this number does not depend on the choice of basis.

We will denote by L2(H,K) the set of all Hilbert-Schmidt operators from the separable Hilbert space H to the separable Hilbert space K. When we have two Hilbert-Schmidt operators S and T, Minkowski's inequality guarantees that the sum S + T of the two operators also satisfies (1.8), turning this set into a vector space. We can even endow this space with the following inner product: let {hn : n ∈ N} be an orthonormal basis of H and take arbitrary S, T ∈ L2(H,K). Then, we define the inner product 〈·, ·〉L2(H,K) by

〈S, T〉L2(H,K) := ∑n∈N 〈Shn, Thn〉K. (1.9)

The Cauchy-Schwarz inequality, combined with Hölder's inequality and the Hilbert-Schmidt property of the bounded linear operators S and T, guarantees that the infinite series on the right-hand side converges. (One can check that this indeed defines an inner product on the space of Hilbert-Schmidt operators from H to K.) However, just as before, one might wonder whether the definition of this inner product depends on the choice of basis in H. Luckily, this is not the case: by the polarization identity (1.5), we have

〈S, T〉L2(H,K) = 1/4 ( [∑n∈N ‖(S + T)hn‖²K] − [∑n∈N ‖(S − T)hn‖²K] )

and we already established that these infinite series on the right-hand side do not depend on our choice of basis. From the definition of the inner product 〈·, ·〉L2(H,K) on L2(H,K), it follows directly that (1.8) is the inner product-induced norm on this space. We will denote this norm by ‖ · ‖L2(H,K). Looking back at Lemma 1.1.17, the next result will not come as a surprise.

Corollary 1.1.19. Let H and K be separable Hilbert spaces and let A : H → K be a Hilbert-Schmidt operator. Then, the adjoint operator A∗ is a Hilbert-Schmidt operator from K to H and we have ‖A‖L2(H,K) = ‖A∗‖L2(K,H).


Moreover, this inner product turns L2(H,K) into a separable Hilbert space, under the condition that both H and K are separable.

Proposition 1.1.20 (Proposition B.0.7 in [27]). Let H and K be separable Hilbert spaces. Let {hi : i ∈ N} be an orthonormal basis of H and let {kj : j ∈ N} be an orthonormal basis of K. Define, for i, j ∈ N, the bounded linear operator Ai,j : H → K : h ↦ 〈h, hi〉H kj. Then, the collection {Ai,j : i ∈ N, j ∈ N} is an orthonormal basis of L2(H,K).

Finally, when we have a Hilbert-Schmidt operator at hand, we can use it to construct many more Hilbert-Schmidt operators.

Lemma 1.1.21. Let G1, G2, H and K be separable Hilbert spaces and let T : H → K be a Hilbert-Schmidt operator. In addition, let S ∈ L(G1,H) and R ∈ L(K,G2). Then, the map R ∘ T ∘ S is a Hilbert-Schmidt operator from G1 to G2 and the inequality

‖R ∘ T ∘ S‖L2(G1,G2) ≤ ‖R‖L(K,G2) · ‖T‖L2(H,K) · ‖S‖L(G1,H)

holds.

Proof. See Remark B.0.6(iii) in [23].
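The ideal-type inequality of Lemma 1.1.21 can be sanity-checked with small matrices, approximating the operator norms by sampling unit vectors (illustrative code of ours; the sampling slightly underestimates the true operator norm, which only makes the check more conservative on the left side):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def frobenius(A):  # Hilbert-Schmidt norm in finite dimensions
    return math.sqrt(sum(x * x for row in A for x in row))

def op_norm(A, samples=3600):
    """Approximate operator norm of a 2x2 matrix over the unit circle."""
    best = 0.0
    for k in range(samples):
        ang = 2.0 * math.pi * k / samples
        x = (math.cos(ang), math.sin(ang))
        Ax = (A[0][0] * x[0] + A[0][1] * x[1],
              A[1][0] * x[0] + A[1][1] * x[1])
        best = max(best, math.hypot(*Ax))
    return best

R = [[0.0, 1.0], [1.0, 1.0]]
T = [[2.0, 0.0], [1.0, 1.0]]
S = [[1.0, -1.0], [0.0, 2.0]]
lhs = frobenius(matmul(R, matmul(T, S)))   # ‖R∘T∘S‖ in the HS norm
rhs = op_norm(R) * frobenius(T) * op_norm(S)
print(lhs <= rhs)  # True
```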

1.1.2.1 Finite trace operators

In the previous section, we introduced Hilbert-Schmidt operators: the norm on the space of Hilbert-Schmidt operators is given by (1.8). Now, suppose that we have a symmetric and nonnegative bounded linear operator A on H. Then, by Theorem 1.1.15, we know that this operator A admits a square root A^{1/2}.

As H is separable, there exists an orthonormal basis {hn : n ∈ N} of H. Let A be a symmetric and nonnegative bounded linear operator on H. Then, we define its trace as

tr(A) := ‖A^{1/2}‖²L2(H). (1.10)

If we expand the right-hand side, we find

‖A^{1/2}‖²L2(H) = ∑n∈N ‖A^{1/2}hn‖²H = ∑n∈N 〈A^{1/2}hn, A^{1/2}hn〉H.

Let us remark again that this expression does not depend on the chosen orthonormal basis. As the bounded linear operator A^{1/2} is symmetric and has the property that A^{1/2} A^{1/2} = A by Theorem 1.1.15, we see that 〈A^{1/2}hn, A^{1/2}hn〉H = 〈Ahn, hn〉H for every n ∈ N. This yields

tr(A) = ∑n∈N 〈Ahn, hn〉H. (1.11)

This expression is often used as the definition of the trace of an arbitrary bounded linear operator on a Hilbert space, but only if the expression in (1.11) is finite — see for example Definition B.0.3 in [27]. These definitions coincide when a bounded linear operator is symmetric and nonnegative.

We say that a bounded linear operator A on H has finite trace if the expression in (1.11) is finite. Also note that the trace operation is linear on the space of all finite trace operators.

We end this section by remarking that, in the case where an operator A is symmetric and nonnegative, its square root is a Hilbert-Schmidt operator if A has finite trace; this follows directly from (1.10) being finite and Definition 1.1.18.


1.1.2.2 Eigenvectors and eigenvalues

If the symmetric and nonnegative bounded linear operator A has finite trace and the underlying Hilbert space H is separable, we can relate the trace of A to the eigenvalues of A. We will start with defining eigenvalues and eigenvectors for a symmetric and nonnegative bounded linear operator A with finite trace.

A number λ ∈ R is called an eigenvalue of the operator A if A − λ id_H is not injective, or, equivalently, when the equation Ah = λh has a non-trivial solution h ∈ H. In that case, we call the vector h an eigenvector of A with corresponding eigenvalue λ. Here, we implicitly used the result that all eigenvalues of a symmetric operator are real-valued (Proposition 5.8 in [6]). Moreover, it is necessary to remark that the set of eigenvalues of A, known as the point spectrum of A and denoted by σ_p(A), is non-empty, as ‖A‖_{L(H)} ∈ σ_p(A); see Lemma 5.9 in Chapter II in [6].

Note that every eigenvalue of a symmetric and nonnegative bounded linear operator is non-negative. In addition, just as is the case with symmetric matrices, one can show that eigenvectors corresponding to two distinct eigenvalues are necessarily orthogonal. As remarked before in Corollary 1.1.11, we know that every orthogonal set in a separable Hilbert space is at most countable. In particular, this means that the operator A only has countably many distinct eigenvalues, i.e. the set σ_p(A) is countable. Even more is true, as we will see next in what is known as Lidskii's (Trace) Theorem.

Theorem 1.1.22 (Lidskii, 1959). Let A be a symmetric, non-negative operator with finite trace on a separable Hilbert space H. Then, we have that

∑_{λ∈σ_p(A)} λ = tr(A) < ∞.

Proof. See, for example, Theorem 6.1 in Chapter VI.6 (page 63) of [16].

In particular, this theorem implies that every distinct non-zero eigenvalue λ only has finite multiplicity. This makes it possible to order all eigenvalues in non-increasing order, as the only possible eigenvalue with infinite multiplicity is 0.

One can use the symmetric and nonnegative bounded linear operator A with finite trace to construct an orthonormal basis of H that consists of eigenvectors of A.

Proposition 1.1.23. Let A be a symmetric, non-negative operator with finite trace on a separable Hilbert space H. Let {λ_n : n ≥ 1} be the sequence of eigenvalues of A, counted with multiplicity and given in non-increasing order. Then there exists an orthonormal basis {h_n : n ∈ N} of H such that

A h_n = λ_n h_n

for every n ∈ N.

Proof. See Proposition 2.1.5 in [27].

Moreover, the symmetric, non-negative operator A with finite trace admits a "square root" operator A^{1/2} by Theorem 1.1.15 and we can relate the eigenvalues and eigenvectors of these two bounded linear operators, which is the topic of the next two propositions.

Proposition 1.1.24. Let A ∈ L(H) be a symmetric, non-negative operator with finite trace. Let {h_n : n ∈ N} be an orthonormal basis of H, consisting of eigenvectors of A with corresponding eigenvalues {λ_n : n ∈ N}. Then, if λ_n is a positive eigenvalue of A for some n ∈ N with corresponding eigenvector h_n, it holds that (√λ_n, h_n) is an eigenpair of A^{1/2}.
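Propositions 1.1.23 and 1.1.24 can be illustrated in finite dimensions with numpy (a sketch of our own; the matrix A is an arbitrary example): `eigh` produces an orthonormal eigenbasis, and each eigenvector of A is also an eigenvector of A^{1/2}, with the square-rooted eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric, nonnegative operator on R^5 (finite-dimensional stand-in for H).
M = rng.standard_normal((5, 5))
A = M @ M.T

# eigh returns an orthonormal eigenbasis, as in Proposition 1.1.23 (eigenvalues
# come in non-decreasing order here, the reverse of the thesis' convention).
lam, basis = np.linalg.eigh(A)

# Spectral square root: A^{1/2} = U diag(sqrt(lambda)) U^T.
A_half = basis @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ basis.T

# Each eigenvector h_n of A with eigenvalue lambda_n is an eigenvector of
# A^{1/2} with eigenvalue sqrt(lambda_n) (Proposition 1.1.24 / Corollary 1.1.26).
for n in range(5):
    h_n = basis[:, n]
    assert np.allclose(A @ h_n, lam[n] * h_n)
    assert np.allclose(A_half @ h_n, np.sqrt(lam[n]) * h_n)
```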


Proof. See page 34 in [23].

Proposition 1.1.25. Let A be a symmetric, non-negative operator with finite trace on a separable Hilbert space H. Then, the null space of A coincides with the null space of A^{1/2}.

Proof. It follows almost directly that the null space of A contains the null space of A^{1/2}. For the other inclusion, note that if h ∈ ker(A), we have 0 = ⟨Ah, h⟩_H = ⟨A^{1/2}h, A^{1/2}h⟩_H, so it must follow that A^{1/2}h = 0_H.

Together, these two propositions yield the next result.

Corollary 1.1.26. Let A be a symmetric, non-negative operator with finite trace on a separable Hilbert space H. Let {h_n : n ∈ N} be an orthonormal basis of H, consisting of eigenvectors of A, with corresponding eigenvalues {λ_n : n ∈ N}. Then, for every n ∈ N, the vector h_n is an eigenvector of A^{1/2} with corresponding eigenvalue √λ_n.

This corollary can be used to introduce the last result of this section: an important linear subspace of H, namely the image of A^{1/2}, which is also a separable Hilbert space when equipped with an appropriately chosen inner product.

Proposition 1.1.27. Let A be a symmetric, non-negative operator with finite trace on a separable Hilbert space H. Let {h_n : n ≥ 1} be an orthonormal basis of H, consisting of eigenvectors of A with corresponding eigenvalues {λ_n : n ∈ N}, counted with multiplicity and given in non-increasing order. Define Λ := {n ∈ N : λ_n > 0}, the index set of non-zero eigenvalues. Then, we have

A^{1/2}(H) = ker(A^{1/2})^⊥ ∩ { y ∈ H : ∑_{n∈Λ} (1/λ_n) |⟨y, h_n⟩_H|² < +∞ }.

This space A^{1/2}(H) is a separable Hilbert space when endowed with the ⟨·, ·⟩₀-inner product, defined by

⟨g, h⟩₀ := ∑_{n∈Λ} (1/λ_n) ⟨g, h_n⟩_H ⟨h, h_n⟩_H

for all g, h ∈ A^{1/2}(H), with orthonormal basis {√λ_n h_n : n ∈ Λ} = {A^{1/2} h_n : n ∈ Λ}.

Proof. The first assertion follows by element tracing (also, compare Exercise 3 in Section II.5 in [6], page 49). One can verify that ⟨·, ·⟩₀ is indeed an inner product on A^{1/2}(H). For the separability and the completeness of the inner product space (A^{1/2}(H), ⟨·, ·⟩₀), we refer to e.g. page 23 in [15] or page 96 in [8].
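As a concrete illustration of Proposition 1.1.27 (our own example, not taken from the cited references), take H = ℓ² and let A be the diagonal operator with eigenvalues λ_n = n⁻² on the canonical basis {e_n : n ≥ 1}:

```latex
\[
  A e_n = n^{-2}\, e_n, \qquad
  \operatorname{tr}(A) = \sum_{n \ge 1} n^{-2} = \frac{\pi^2}{6} < \infty,
  \qquad \ker\bigl(A^{1/2}\bigr) = \{0\},
\]
\[
  A^{1/2}(\ell^2)
  = \Bigl\{\, y \in \ell^2 : \sum_{n \ge 1} n^2 \lvert y_n \rvert^2 < \infty \Bigr\},
  \qquad
  \langle g, h \rangle_0 = \sum_{n \ge 1} n^2\, g_n h_n .
\]
```

So A^{1/2}(ℓ²) consists exactly of the sequences with fast-decaying tails, and {n⁻¹ e_n : n ≥ 1} is an orthonormal basis for ⟨·, ·⟩₀.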

1.1.3 Reflexivity, weak convergence and weak* convergence

Let (X, ‖ · ‖X) be a normed vector space. Then, we already saw that its dual space X∗ is a Banach space. We can even go one step further and consider the dual space of X∗, which we will denote by X∗∗ and call the second dual or double dual of X. This again is a Banach space, when endowed with the usual operator norm.

For any element x ∈ X, we can define the mapping

F_x : X∗ → R : f ↦ f(x), (1.12)

hence F_x(f) = f(x) for all f ∈ X∗. Then, the map F_x is an element of X∗∗ for any x ∈ X. Moreover, it holds that ‖F_x‖_{X∗∗} = ‖x‖_X for every x ∈ X.


Having introduced the map F_x, we can also define the function J_X from X to its second dual X∗∗ by

J_X : X → X∗∗ : x ↦ F_x. (1.13)

This mapping J_X is linear and it is even isometric, since ‖F_x‖_{X∗∗} = ‖x‖_X for every x ∈ X. Now, we call a normed vector space X reflexive if J_X(X) = X∗∗. Examples of reflexive Banach spaces are the Lebesgue L^p-spaces for each p ∈ (1, ∞), every finite-dimensional normed vector space and every Hilbert space.

Note that for reflexive spaces the map J_X is an isometric isomorphism; it follows that X must be a complete space and therefore, only Banach spaces can be reflexive.

This does, however, not mean that every Banach space is reflexive. Moreover, reflexivity of a Banach space X is only defined through the "natural map" J_X: there exists a non-reflexive Banach space that is isometrically isomorphic to its second dual, see [18]. We will end this discussion on reflexivity by listing some results on reflexive spaces.

Lemma 1.1.28 (Theorem 5.23 in [31]). A Banach space X is reflexive if and only if its dual X∗

is reflexive.

Lemma 1.1.29 (Corollary 5.56 in [31]). If normed vector spaces X and Y are isomorphic, then X is reflexive if and only if Y is reflexive.

Lemma 1.1.30 (Theorem 5.44 in [31]). If a normed vector space X is reflexive and Y is a closed linear subspace of X, then Y is reflexive.

Lemma 1.1.31 (Theorem 20.4 in [26]). If normed vector spaces X and Y are reflexive, then the product space X × Y is reflexive.

1.1.3.1 Weak topology and weak convergence

Every element of the dual of a normed vector space (X, ‖ · ‖X) is continuous with respect to the norm topology that is generated by the norm on X. This means that the norm topology on X might not be the coarsest topology on the vector space that makes all the elements of the dual space continuous. In this section, we will construct this topology, called the weak topology on X.

Suppose that we have a vector space Y, endowed with a seminorm p. The collection of open balls around elements of Y with respect to the seminorm p forms the basis for a topology on Y. In this fashion, every bounded linear functional f ∈ X∗ induces a seminorm ‖ · ‖_f on X, given by ‖x‖_f := |f(x)| for each x ∈ X, and therefore generates a topology, say T_f, on X. Then, we define the weak topology on X to be the coarsest topology on X that contains each T_f. This weak topology on X also characterizes weak convergence, by virtue of the next proposition, based on Exercise 1.9.3 from [34]. We will only give an outline of the proof.

Proposition 1.1.32. Let V be a vector space and let {F_α : α ∈ A} be a non-empty (possibly infinite) family of topologies on V. Let F be the coarsest topology on V that contains F_α for every α ∈ A. Then, a sequence (v_n)_{n∈N} ⊂ V converges to v ∈ V in F if and only if v_n → v in F_α for every α ∈ A.

Outline of the proof. Firstly, one can show that the collection

B = { F_1 ∩ · · · ∩ F_n : n ∈ N, α_i ∈ A and F_i ∈ F_{α_i} for all i = 1, . . . , n }

is a basis that generates F. Then, to prove the statement concerning the convergence, note that the "only if"-part follows immediately, as every open neighborhood in F_α is also an open


neighborhood in F, as F contains every F_α. For the "if"-part, it suffices to check that the convergence criterion holds for open neighborhoods in B: this follows quickly, since every such neighborhood is a finite intersection of sets for which we know the convergence already happens.

From Proposition 1.1.32, we can look at the weak topology, the smallest topology on the normed vector space X that contains the seminorm-induced topology T_f for every f ∈ X∗; a normed vector space X equipped with the weak topology is a Hausdorff space, by virtue of the Hahn-Banach Theorem.

Using Proposition 1.1.32 again, we can give the definition of weak convergence: a sequence (x_n)_{n∈N} converges to x ∈ X in the weak topology if and only if x_n → x in T_f for every f ∈ X∗. The latter condition is equivalent to ‖x_n − x‖_f → 0 for every f ∈ X∗, which, in turn, by the linearity of each f ∈ X∗, is equivalent to f(x_n) → f(x) for every bounded linear functional f ∈ X∗. This last characterization will be used to define weak convergence of a sequence (x_n)_{n∈N} to an element x ∈ X. The following notation will be used to denote weak convergence,

x_n w→ x,

or, sometimes, we will just write "x_n → x weakly". Weak limits are necessarily unique. To distinguish between this notion of convergence and the usual convergence on X, we will refer to the latter as "norm convergence" where confusion could arise and, sometimes, we will call the usual topology on X that is induced by the norm the norm topology.

Remark 1.1.33. Other authors take another route to define the weak topology, by defining it as the coarsest topology that makes all elements of the dual space continuous. One can show that the two definitions coincide, by checking that T_f coincides with the topology generated by the inverse images of open sets with respect to f for every f ∈ X∗. A similar approach is found on page 161 in [13].

We can use the uniform boundedness principle to show that every weakly convergent sequence is bounded. Moreover, we have the following bounds on a weakly convergent sequence.

Lemma 1.1.34. Let X be a Banach space and suppose that the sequence (x_n)_{n∈N} ⊂ X converges weakly to x ∈ X. Then, the inequalities

‖x‖_X ≤ lim inf_{n→∞} ‖x_n‖_X ≤ sup_{n∈N} ‖x_n‖_X

hold.

The continuity of a linear functional f ∈ X∗ implies that if a sequence (x_n)_{n∈N} converges in norm to x ∈ X, the sequence converges weakly to x as well. The converse of this statement does not hold in general: we will give an example in Example 1.1.38 below, where we connect weak convergence in Hilbert spaces to the inner product structure as described in Proposition 1.1.37. We would like to point out that the notions of norm and weak convergence do however coincide when the underlying normed vector space is finite-dimensional.

Lemma 1.1.35 (Lemma 5.70(c) in [31]). Norm convergence and weak convergence are equivalent in finite-dimensional normed vector spaces.

Regarding continuity with respect to the weak topology, we have the following well-known result that we will use later on.

Theorem 1.1.36. Let X and Y be normed vector spaces. Then, a linear map T : X → Y is continuous with respect to the norm topologies on X and Y (norm-to-norm continuous) if and only if T is continuous with respect to the weak topologies on X and Y (weak-to-weak continuous).


For Hilbert spaces, we can define weak convergence in a different, equivalent manner.

Proposition 1.1.37 (Equivalent definition of weak convergence in Hilbert spaces). Let H be a Hilbert space. A sequence {h_n : n ≥ 1} converges weakly to h ∈ H if and only if ⟨h′, h_n⟩_H → ⟨h′, h⟩_H for all h′ ∈ H.

Proof. Follows from the Riesz–Fréchet Theorem.

We can use this proposition to show that weak convergence does not generally imply norm convergence.

Example 1.1.38. Consider the Hilbert space ℓ² of square-summable sequences. Let {e^(n) : n ∈ N} be the canonical basis of ℓ² and let 0 = (0, 0, . . .) denote the all-zeros sequence in ℓ². For every sequence a = (a_1, a_2, . . .) ∈ ℓ², the coordinates a_n converge to zero, as a is square-summable; this means that

⟨a, e^(n)⟩_{ℓ²} = ∑_{k∈N} a_k e^(n)_k = a_n → 0 = ⟨a, 0⟩_{ℓ²},

hence e^(n) w→ 0. The norm convergence of {e^(n) : n ∈ N} to 0 cannot hold, as ‖e^(n) − 0‖_{ℓ²} = 1 for every n ∈ N, which proves that the converse does not hold in general.
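A truncated numerical version of this example (our own sketch; the vector a and the truncation level N are arbitrary choices) shows the same phenomenon in R^N: the pairings against any fixed element shrink, while the norms of the basis vectors stay equal to 1.

```python
import numpy as np

N = 1000  # truncation level: we work in R^N as a stand-in for ell^2

# A fixed square-summable sequence a, truncated: a_k = 1/k.
a = 1.0 / np.arange(1, N + 1)

def e(n):
    """Truncated canonical basis vector e^(n) (0-indexed)."""
    v = np.zeros(N)
    v[n] = 1.0
    return v

# <a, e^(n)> = a_n -> 0: the pairings against the fixed element a shrink ...
pairings = [float(a @ e(n)) for n in (10, 100, 999)]
assert pairings[0] > pairings[1] > pairings[2] > 0

# ... while the norms do not: ||e^(n) - 0|| = 1 for every n.
assert all(np.linalg.norm(e(n)) == 1.0 for n in (10, 100, 999))
```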

Later on, we will encounter functions that we will call weakly continuous into a Banach space.

Definition 1.1.39. Let [0, T] ⊂ R be an interval and let X be a normed vector space. A function f : [0, T] → X is said to be weakly continuous into X if t_n → t implies f(t_n) w→ f(t).

Regarding these kinds of functions, we have the following result.

Proposition 1.1.40. Let Y be a Banach space and let X ⊂ Y be a linear subspace that is dense in Y. Let f : [0, T] → Y be continuous. Then, if the map f only takes values in X, it is weakly continuous as a map from [0, T] into X.

Proof. We will check that for every S ∈ X∗, it holds that S(f(t_n)) → S(f(t)) when t_n → t. For every A ∈ Y∗, we have A(f(t_n)) → A(f(t)) when t_n → t, as both f : [0, T] → Y and A : Y → R are continuous maps. Let S ∈ X∗ be arbitrary. Then, since X is a dense linear subspace of Y, the functional S can be extended to a functional S̄ ∈ Y∗ that coincides with S on X by the Hahn-Banach Theorem. As S̄(f(t_n)) → S̄(f(t)), since S̄ ∈ Y∗, and f is X-valued, it follows that S(f(t_n)) = S̄(f(t_n)) → S̄(f(t)) = S(f(t)).

We end the discussion on the weak topology with two useful weak convergence results.

Proposition 1.1.41. Let X0 and X1 be Banach spaces. Suppose that X0 and X1 both are linear subspaces of a normed vector space X and assume that the inclusion map ι_k : X_k → X : x ↦ x is continuous for k = 0, 1. Consider the Banach space X0 ∩ X1 defined in Theorem 1.1.7. If a sequence (x_n)_{n∈N} ⊂ X0 ∩ X1 converges weakly to x ∈ X0 ∩ X1 in the intersection space X0 ∩ X1, then x_n w→ x in both X0 and X1.

Proof. Restrict functionals on X0 (respectively X1) to X0 ∩ X1; the continuity of these restricted functionals is dealt with by the norm on the intersection.

Using a similar restriction argument, we obtain a comparable result.

Proposition 1.1.42. Let X be a linear subspace of Y and suppose that (x_n) ⊂ X converges weakly in X to x ∈ X. Then, it also holds that x_n w→ x in Y.


1.1.3.2 Weak* topology and weak* convergence

We can also define a topology on the dual space X∗ of a normed vector space X that is different from the norm topology on the dual generated by the operator norm. The topology we will construct on the dual space is called the weak* topology (on X∗); the construction of this topology will be similar to that of the weak topology we saw earlier.

Just like before, we will construct seminorms, but in this case, they will be seminorms on the dual space X∗. For every x ∈ X, the function ‖ · ‖_x, given by ‖f‖_x := |f(x)| for all f ∈ X∗, is a seminorm on the dual space. In contrast with the seminorms that generated the weak topology, here, the input is a function, an element of the dual space, and not a point of X.

Every seminorm ‖ · ‖_x generates a topology on X∗. Then, we define the weak* topology to be the coarsest topology on X∗ that contains every topology generated by some ‖ · ‖_x. Equivalently, it is also the coarsest topology that makes the evaluation map F_x : X∗ → R : f ↦ f(x) continuous for every x ∈ X, which can be shown in a similar fashion as with weak convergence (see Remark 1.1.33). This topology turns the dual space X∗ into a Hausdorff space.

We can again invoke Proposition 1.1.32 to characterize weak* convergence: we say that a sequence (f_n)_{n∈N} ⊂ X∗ converges to a functional f ∈ X∗ in weak* sense if f_n(x) → f(x) for all x ∈ X.

It is known that, in general, the weak* topology on X∗ is coarser than the weak topology on the dual space X∗ (which is in this case defined to be the smallest topology on X∗ that makes every map S ∈ X∗∗ continuous). This means that when we have a sequence (f_n)_{n∈N} ⊂ X∗ that converges weakly to an element f ∈ X∗, we also have weak* convergence of this sequence to f ∈ X∗, but the converse need not be true. However, when the normed vector space is reflexive, the notions of weak and weak* convergence on X∗ coincide.

Proposition 1.1.43. Let (X, ‖ · ‖X) be a reflexive Banach space. Let (f_n)_{n∈N} be a sequence in X∗ and let f ∈ X∗. Then, we have weak convergence of the sequence (f_n)_{n∈N} to f if and only if (f_n)_{n∈N} converges to f in weak* sense.

In the context of the weak* topology and weak* convergence, we have the following important result.

Theorem 1.1.44 (Banach-Alaoglu). Let (X, ‖ · ‖X) be a normed vector space. Then, the norm-closed unit ball in X∗,

{ f ∈ X∗ : ‖f‖_{X∗} ≤ 1 },

is compact in the weak* topology.

Proof. See Theorem 5.18 in [13].

In contrast: if X is a normed vector space with an infinite-dimensional dual space X∗, the norm-closed unit ball in X∗ is not compact in the operator norm topology by Riesz' lemma; see also Theorem 2.26 in [31].

We would like to end here with an important consequence of the Banach-Alaoglu Theorem in reflexive Banach spaces.

Theorem 1.1.45. Let (X, ‖ · ‖X) be a reflexive Banach space. If (x_n)_{n∈N} is a bounded sequence in X, it has a weakly convergent subsequence.

Proof. See Theorem 5.73 in [31].


1.1.3.3 Strong operator topology on L(X)

This section will be a short one, mentioning the strong operator topology and convergence in said topology. Fix a normed vector space X. When we look at L(X), we see that every x ∈ X generates a seminorm ‖ · ‖_x : L(X) → [0, ∞) : T ↦ ‖Tx‖_X and therefore also generates a topology T_x. Then, the coarsest topology on L(X) that contains every topology T_x is called the strong operator topology. One can show that this topology coincides with the smallest topology that makes the evaluation maps T ↦ Tx continuous for every x ∈ X. We end here with the next observation, which is a consequence of Proposition 1.1.32.

Lemma 1.1.46. A sequence {T_n : n ≥ 1} ⊂ L(X) converges in the strong operator topology to T ∈ L(X) if and only if ‖T_n x − Tx‖_X → 0 for each x ∈ X.
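Strong operator convergence is strictly weaker than convergence in the operator norm. A finite-dimensional sketch of our own (the truncation level N and test vectors are arbitrary choices): the coordinate projections T_n converge to the identity strongly on every fixed vector, while the operator-norm distance stays equal to 1.

```python
import numpy as np

N = 500  # truncation of ell^2 to R^N, purely for illustration

def project(x, n):
    """T_n x: keep the first n coordinates of x, zero out the rest."""
    y = x.copy()
    y[n:] = 0.0
    return y

# Strong operator convergence: for each fixed x, ||T_n x - x|| -> 0.
x = 1.0 / np.arange(1, N + 1)  # a fixed vector with a decaying tail
errors = [np.linalg.norm(project(x, n) - x) for n in (5, 50, 499)]
assert errors[0] > errors[1] > errors[2]

# No operator-norm convergence: ||T_n - id|| = 1, witnessed by e_{n+1},
# which (T_n - id) maps to its negative.
e = np.zeros(N)
e[50] = 1.0
assert np.linalg.norm(project(e, 50) - e) == 1.0
```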

1.2 Bochner spaces and Bochner integral

We can extend our notion of L^p-spaces, which consist of functions from some measure space into R, to functions taking values in a more general Banach space (E, ‖ · ‖E). This section is based on Sections 1.2 and 1.3 in [36].

Before we can define Bochner spaces and Bochner integrals, we have to introduce a different notion of measurability: µ-strong measurability, which we can formalize through simple functions. Let (A, A, µ) be a measure space.

A function s : A → E is called a simple function when s can be written in the form

s = ∑_{k=1}^{m} 1_{A_k} e_k, (1.14)

where A_1, . . . , A_m ∈ A are disjoint sets and e_1, . . . , e_m ∈ E are distinct elements of E.

Now, we say that a function f : A → E is µ-strongly measurable if there exists a sequence (f_n)_{n∈N} of simple functions that converges pointwise µ-almost everywhere to f.

The collection of µ-strongly measurable functions is easily seen to be a vector space and every simple function s is µ-strongly measurable (just take f_n = s for every n ∈ N). Regarding (strong) measurability, we have the following observations.

Observation 1.2.1. If the measure space (A, A, µ) is complete, every µ-strongly measurable function is measurable as well.

Observation 1.2.2 (Consequence of Proposition 1.8 in [36]). Let (A, A, µ) be a finite measure space and let E be a Banach space. If E is separable, the notions of µ-strong measurability and measurability coincide.

The following result will be used to define the Bochner integral.

Lemma 1.2.3. Let (A, A, µ) be a finite measure space and let E and F be Banach spaces. If the map f : A → E is µ-strongly measurable and the function φ : E → F is continuous, the composition φ ∘ f : A → F is µ-strongly measurable. If the finite measure space (A, A, µ) is complete, the composition φ ∘ f : A → F is measurable as well.

Proof. The first statement is Corollary 1.13 in [36]. The second statement follows immediately from the observation we made earlier.

1.2.1 Bochner integral

The Bochner integral is a generalisation of the Lebesgue integral to vector-valued functions. This section is based on Section 1.3 in [36]. As was the case with Lebesgue integration, we will do this through simple functions. The construction is pretty straightforward and we will dive right into it.


Throughout this section, we let (A, A, µ) be a finite measure space and let (E, ‖ · ‖E) be a Banach space. We say that a function f : A → E is µ-Bochner integrable if there exists a sequence (f_n)_{n∈N} of simple functions from A to E such that the following two conditions hold:

(i) the sequence (f_n)_{n∈N} converges pointwise to f µ-almost everywhere;

(ii) lim_{n→∞} ∫_A ‖f_n − f‖_E dµ = 0.

Note that a µ-Bochner integrable function f is µ-strongly measurable. In addition, note that in the second condition, the function f_n − f is µ-strongly measurable, so that ‖f_n − f‖_E is a measurable R-valued function for every n ∈ N by Observation 1.2.2; this means that the integral in Condition (ii) is to be interpreted in the ordinary Lebesgue sense. Every simple function is easily seen to be µ-Bochner integrable.

As with the construction of the Lebesgue integral, we define the Bochner integral of a simple function s to be

∫_A s dµ := ∑_{k=1}^{m} µ(A_k) e_k

and one can verify that this definition is independent of our chosen representation. On the vector space of simple functions, the Bochner integral is linear and satisfies the following inequality

‖∫_A s dµ‖_E ≤ ∫_A ‖s‖_E dµ. (1.15)
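For a concrete finite example (hypothetical data, our own sketch), the Bochner integral of a simple function with values in E = R² and inequality (1.15) look as follows:

```python
import numpy as np

# A simple function s = sum_k 1_{A_k} e_k on a finite measure space, with
# values in E = R^2; the measures mu(A_k) and vectors e_k are hypothetical.
parts = [
    (0.5, np.array([1.0, 0.0])),  # mu(A_1) = 0.5, e_1 = (1, 0)
    (0.3, np.array([0.0, 2.0])),  # mu(A_2) = 0.3, e_2 = (0, 2)
    (0.2, np.array([1.0, 1.0])),  # mu(A_3) = 0.2, e_3 = (1, 1)
]

# Bochner integral of s: sum_k mu(A_k) e_k.
integral = sum(mu_k * e_k for mu_k, e_k in parts)

# Inequality (1.15): || int_A s dmu ||_E <= int_A ||s||_E dmu.
lhs = np.linalg.norm(integral)
rhs = sum(mu_k * np.linalg.norm(e_k) for mu_k, e_k in parts)
assert lhs <= rhs

print(integral)  # -> [0.7 0.8]
```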

Using this inequality, we can define the Bochner integral of a µ-Bochner integrable function f : A → E. Note that the sequence { ∫_A f_n dµ : n ∈ N } is Cauchy by (1.15), since

sup_{n≥m} ‖ ∫_A f_n dµ − ∫_A f_m dµ ‖_E = sup_{n≥m} ‖ ∫_A (f_n − f_m) dµ ‖_E ≤ sup_{n≥m} ∫_A ‖f_n − f_m‖_E dµ ≤ 2 sup_{n≥m} ∫_A ‖f_n − f‖_E dµ → 0

as m → ∞ by Condition (ii). Hence, the limit

∫_A f dµ := lim_{n→∞} ∫_A f_n dµ (1.16)

exists in E and can again be shown to be independent of the chosen approximating sequence of simple functions by merging two approximation sequences into one. Also, the Bochner integral is linear, as is readily verified.

The next result gives an alternative characterization of µ-Bochner integrable functions.

Proposition 1.2.4 (Proposition 1.16 in [36]). A µ-strongly measurable function f : A → E is µ-Bochner integrable if and only if

∫_A ‖f‖_E dµ < ∞,

in which case Bochner's inequality

‖∫_A f dµ‖_E ≤ ∫_A ‖f‖_E dµ (1.17)

holds.

The Bochner integral is in some way "compatible" with the dual space of E.


Lemma 1.2.5 (Proposition A.2.2 in [23]). Let (A, A, µ) be a finite measure space and let (E, ‖ · ‖E) be a Banach space. Let f : A → E be a µ-Bochner integrable function. Let ℓ ∈ E∗. Then, we have

ℓ( ∫_A f dµ ) = ∫_A (ℓ ∘ f) dµ. (1.18)

An important consequence of the previous lemma is that in Hilbert spaces, the inner product commutes with the Bochner integral.

There also exists a similar statement regarding E∗, the dual space of E. As the dual space is a Banach space as well, the Bochner integral exists for µ-Bochner integrable functions g : A → E∗ and the Bochner integral of g is again an element of E∗. Then, one has the following result: the Bochner integral of E∗-valued (Bochner integrable) functions commutes with the pairing with elements from E.

Lemma 1.2.6. Let (A, A, µ) be a finite measure space and let E∗ be the dual of a Banach space E. Let g : A → E∗ be a µ-Bochner integrable function. Then, we have

( ∫_A g dµ )(e) = ∫_A g(e) dµ

for all e ∈ E.

As noted on page 10 in [36], "results from the theory of Lebesgue integration carry over to the Bochner integral as long as there are no non-negativity assumptions involved." This means that there are, in general, no analogues of Fatou's lemma and the Monotone Convergence Theorem, but there does exist a Dominated Convergence Theorem for Bochner integrals and a version of Fubini's theorem.

Theorem 1.2.7 (Dominated Convergence Theorem for Bochner integrals). Let (A, A, µ) be a finite measure space and let (E, ‖ · ‖E) be a Banach space. Let {f_n : n ≥ 1} be a sequence of µ-Bochner integrable functions. Suppose that there exists a function f : A → E and a non-negative Lebesgue integrable function g : A → R such that the sequence {f_n : n ≥ 1} converges µ-almost everywhere to f and such that the inequality ‖f_n‖_E ≤ g holds µ-almost everywhere for every n ∈ N.

Then the function f is µ-Bochner integrable and

∫_A ‖f − f_n‖_E dµ → 0

as n tends to infinity.

Proof. See Proposition 1.18 in [36].

Theorem 1.2.8 (Fubini’s theorem). Let (A1,A1, µ) and (A2,A2, ν) be σ-finite measure spacesand let (E, ‖ · ‖E) be a Banach space. Suppose that the map f : A1×A2 → E is µ⊗ ν-Bochnerintergrable. Then, the following assertions hold.

(1) For µ-almost all a1 ∈ A1, the function a2 7→ f(a1, a2) is ν-Bochner integrable.

(2) For ν-almost all a2 ∈ A2, the function a1 7→ f(a1, a2) is µ-Bochner integrable.

(3) The function a1 7→∫A2f(a1, a2) dν(a2), respectively a2 7→

∫A1f(a1, a2) dµ(a1), is µ-

Bochner integrable, respectively ν-Bochner integrable, and∫A1×A2

f(a1, a2) d(µ⊗ ν)(a1, a2) =

∫A1

(∫A2

f(a1, a2) dν(a2)

)dµ(a1)

=

∫A2

(∫A1

f(a1, a2) dµ(a1)

)dν(a2).

Proof. Proposition 1.2.7 in [17].


1.2.2 Bochner spaces

When one is familiar with the "ordinary" ℒ^p- and L^p-spaces, the construction of Bochner spaces will come as no surprise.

Let (A, A, µ) be a finite measure space and let (E, ‖ · ‖E) be a Banach space. Let p ≥ 1 be a real number. Then, we let ℒ^p(A; E) consist of all µ-strongly measurable functions f : A → E for which

∫_A ‖f‖^p_E dµ < ∞.

We call two functions equivalent if they are equal µ-almost everywhere: from this equivalence relation, we obtain the space L^p(A; E), the space of µ-equivalence classes of functions from A to E. If it is of particular importance, we will explicitly mention the underlying σ-algebra and measure. As usual, we will use the accepted abuse of notation and write f for its corresponding µ-equivalence class [f] in L^p(A; E) and refer to these µ-equivalence classes as functions or processes. This is a vector space that is complete under the norm

‖f‖_{L^p(A;E)} := ( ∫_A ‖f‖^p_E dµ )^{1/p}, (1.19)

as the proof for the ordinary L^p-spaces carries over to this setting; see page 12 in [36]. By construction, the E-valued simple functions are dense in L^p(A; E) with respect to the ‖ · ‖_{L^p(A;E)}-norm.

If F ⊂ E is a dense subset, the F-valued simple functions are dense in Lp(A;E) as well.

Lemma 1.2.9. Let p ∈ [1, ∞). Consider a complete finite measure space (A, A, µ) and suppose that F is a dense subset of the Banach space E. Then, the F-valued simple functions are dense in L^p(A; E).

Proof. Approximate an E-valued simple function with an F-valued simple function.

Moreover, if the Banach space E is reflexive, the same is true for the Bochner space L^p(A; E).

Proposition 1.2.10 (Proposition 2.2.3(c) in [14]). Let p ∈ (1, ∞), let (A, A, µ) be a complete finite measure space and let (E, ‖ · ‖E) be a reflexive Banach space. Then, L^p(A; E) is reflexive.

Again, to draw comparison to the ordinary L^p-spaces, the dual of the Bochner space L^p(A; E) can, under certain conditions, be identified with the conjugate Bochner space of E∗-valued functions.

Theorem 1.2.11. Let (A, A, µ) be a complete finite measure space and let (E, ‖ · ‖E) be a Banach space. Let p ∈ [1, ∞). Then, if the dual space E∗ is reflexive or separable, the map

L^{p/(p−1)}(A; E∗) → L^p(A; E)∗ : g ↦ ∫_A ⟨g, ·⟩_{E∗×E} dµ

is an isometric isomorphism. In particular, this map is an isometric isomorphism if E is reflexive.

Proof/remark. The stated conditions are sufficient, but not necessary, as this result holds for all Banach spaces for which the dual has the Radon-Nikodym property with respect to (A, A, µ). It is known that reflexive and separable spaces possess the Radon-Nikodym property. For a proof of the first statement, we refer to Theorem 1.3.10 in [17].

For the "In particular" part, we note that the reflexivity of E implies the reflexivity of the dual space E∗, see Lemma 1.1.28, and so the assertion follows directly.

A consequence of this is the following result.


Corollary 1.2.12. Let (A, A, µ) be a complete finite measure space and let (E, ‖ · ‖E) be a reflexive Banach space. Let p ∈ [1, ∞). Then,

I : L^{p/(p−1)}([0, T] × Ω; E) → (L^p([0, T] × Ω; E∗))∗ : Φ ↦ E[ ∫_0^T ⟨·, Φ(t)⟩_{E∗×E} dt ]

is an isometric isomorphism.

The last statement we will discuss here will be used regularly.

Proposition 1.2.13 (Proposition 1.2.24 in [17]). Let (A_1, A_1, µ_1) and (A_2, A_2, µ_2) be finite measure spaces, let (E, ‖ · ‖E) be a Banach space and let p ∈ [1, ∞). Suppose that F ∈ L^p(A_1 × A_2; E). Then, for µ_1-almost all x ∈ A_1, the map y ↦ F(x, y) belongs to L^p(A_2; E).

When we replace the Banach space $(E,\|\cdot\|_E)$ with a Hilbert space $(H,\langle\cdot,\cdot\rangle_H)$ and look at the space $L^2(A;H)$, this is an inner product space, with the inner product given by
\[
\langle f,g\rangle_{L^2(A;H)} = \int_A \langle f,g\rangle_H\,d\mu
\]
for all ($\mu$-equivalence classes of) functions $f,g \in L^2(A;H)$. From this inner product, one immediately recognizes that the induced norm is the same as the norm in (1.19): this makes the inner product space $L^2(A;H)$ complete, hence a Hilbert space.

There also exists an $L^\infty(A;E)$ Bochner space; we will not use this Bochner space here and will therefore refer the interested reader to Section 1.3.2 in [36].

1.3 A short overview of results in measure, probability and real-valued (local) martingale theory

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $[0,T] \subset \mathbb R$ be a finite interval. A filtration $\mathbb F := \{\mathcal F_t : t \in [0,T]\}$ is said to be normal if $\mathcal F_0$ contains all $\mathbb P$-null sets and if the filtration is right-continuous, that is, if $\mathcal F_t$ is equal to
\[
\mathcal F_t^+ := \bigcap_{t<u\le T} \mathcal F_u
\]
for every $t \in [0,T)$.

A sequence of stopping times $(\tau_n)_{n\in\mathbb N}$ on the probability space $(\Omega,\mathcal F,\mathbb P)$ is said to be a localizing sequence of stopping times if $\tau_n \le \tau_{n+1}$ for all $n \in \mathbb N$ and if $\tau_n \uparrow T$ $\mathbb P$-almost surely. When the probability space $(\Omega,\mathcal F,\mathbb P)$ is endowed with a normal filtration $\mathbb F$, we say that a right-continuous, $\mathbb F$-adapted, real-valued process $X = \{X(t) : t \le T\}$ is a local martingale if there exists a localizing sequence of stopping times $\{\tau_n : n \ge 1\}$ such that for every $n \in \mathbb N$, the stopped process $X^{\tau_n} := \{X(\tau_n \wedge t) : t \le T\}$ is a uniformly integrable martingale. When $X$ is continuous, even more is true.

Proposition 1.3.1 (Proposition 4.5 in [33]). If $X$ is a continuous local martingale starting at zero $\mathbb P$-almost surely, then there exists a localizing sequence $\{\tau_n : n \in \mathbb N\}$ of stopping times such that the processes $X^{\tau_n}$ are bounded martingales.

Denote by $\langle X\rangle$ the quadratic variation process of a real-valued continuous local martingale $X$ starting at zero. That is, $\langle X\rangle$ is the natural increasing process, unique up to $\mathbb P$-indistinguishability, turning $X^2 - \langle X\rangle$ into a martingale.

Then, the Burkholder-Davis-Gundy inequality holds, a result that relates the maximum of alocal martingale to its quadratic variation.


Proposition 1.3.2 (Burkholder-Davis-Gundy inequality). For every $p > 0$, there exists a universal constant $C_p$, only depending on $p$, such that for all continuous local martingales $M = \{M(t) : t \le T\}$ starting at zero with respect to a normal filtration $\mathbb F$,
\[
\mathbb E\left[\sup_{t\le\tau}|M(t)|^{2p}\right] \le C_p\,\mathbb E\left[\langle M\rangle_\tau^p\right] \tag{1.20}
\]
holds for every finite stopping time $\tau$.

Proof. See Theorem 3.28 (page 166) in [21].
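As an informal numerical companion to inequality (1.20) (not part of the formal development), the following Python sketch estimates both sides for $p = 1$ and $M$ a standard real-valued Brownian motion on $[0,1]$, for which $\langle M\rangle_t = t$; the path count and step size are arbitrary illustration choices.

```python
import numpy as np

# Monte Carlo sanity check of the Burkholder-Davis-Gundy inequality (1.20)
# for p = 1 and M = standard Brownian motion on [0, T], where <M>_t = t,
# so the right-hand side is C_1 * T. All parameters are illustrative only.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1_000, 2_000
dt = T / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)      # B(t) sampled on the time grid

lhs = np.mean(np.max(paths ** 2, axis=1))  # estimates E[sup_{t<=T} |B(t)|^2]
rhs = T                                    # E[<B>_T] = T
ratio = lhs / rhs                          # Doob's L^2 inequality gives C_1 = 4
```

With these (arbitrary) parameters the empirical ratio should stay below the explicit Doob constant 4, illustrating that a finite universal $C_1$ suffices.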

From the Burkholder-Davis-Gundy inequality, we obtain the following auxiliary results.

Corollary 1.3.3. Let $\delta,\varepsilon \in (0,\infty)$. Let $M = \{M(t) : t \le T\}$ be a real-valued continuous local martingale starting at zero with respect to a normal filtration $\mathbb F$ on a probability space $(\Omega,\mathcal F,\mathbb P)$. Then, we have
\[
\mathbb P\left(\sup_{t\le T}|M(t)| \ge \varepsilon\right) \le \frac{3}{\varepsilon}\,\mathbb E\left[\langle M\rangle_T^{1/2}\wedge\delta\right] + \mathbb P\left(\langle M\rangle_T > \delta^2\right).
\]

Proof. See Corollary D.0.2 in [23].

Proposition 1.3.4. Let $M = \{M(t) : t \le T\}$ be a real-valued continuous local martingale starting at zero with respect to a normal filtration $\mathbb F$ on a probability space $(\Omega,\mathcal F,\mathbb P)$. Suppose that $\mathbb E\big[\langle M\rangle_T^{1/2}\big] < \infty$. Then, the local martingale $M$ is a martingale starting at zero.

Proof. See Proposition D.0.1(ii) in [23].

We end with two measure theoretic results.

Lemma 1.3.5 (Lemma 2.12.5 in [4]). Let $(X,\mathcal A_X)$, $(Y_1,\mathcal A_1),\dots,(Y_k,\mathcal A_k)$ be measurable spaces. Let the space $Y := Y_1\times\cdots\times Y_k$ be equipped with the $\sigma$-algebra $\mathcal A_Y := \mathcal A_1\otimes\cdots\otimes\mathcal A_k$. Then, the mapping $F := (F_1,\dots,F_k) : X \to Y$ is $\mathcal A_X/\mathcal A_Y$-measurable if and only if every $F_i$ is $\mathcal A_X/\mathcal A_i$-measurable.

Lemma 1.3.6. Let $(A_1,\mathcal A_1,\mu)$ and $(A_2,\mathcal A_2,\nu)$ be finite measure spaces. If $\tilde\mu$ is a measure on $(A_1,\mathcal A_1)$ that is absolutely continuous with respect to $\mu$, then we have $\tilde\mu\otimes\nu \ll \mu\otimes\nu$ on the measurable space $(A_1\times A_2,\mathcal A_1\otimes\mathcal A_2)$.

Proof. The assertion is valid on the $\pi$-system $\{A_1\times A_2 : A_1 \in \mathcal A_1,\ A_2 \in \mathcal A_2\}$ that generates the product $\sigma$-algebra $\mathcal A_1\otimes\mathcal A_2$.

1.4 A brief summary of properties of functions

Later on in this thesis, we will need some results on lower semi-continuous functions and functions of bounded variation. This topic is not a main focus of this thesis and we will therefore only list the necessary definitions and results.


1.4.1 Lower semi-continuity

This section is based on Section 2.10 in [2].

Definition 1.4.1 (Lower semi-continuous function). Let $X$ be a topological space. Then, a function $f : X \to [-\infty,\infty]$ is called lower semi-continuous if the set $f^{-1}[[-\infty,c]]$ is closed in $X$ for each $c \in \mathbb R$.

In particular, this means that every lower semi-continuous function is measurable and that every continuous function is lower semi-continuous as well. When $X$ is a metric space, we have the following results.

Lemma 1.4.2 (Lemma 2.42 in [2]). Let $(X,d)$ be a metric space. A function $f : X \to [-\infty,\infty]$ is lower semi-continuous if and only if $\liminf_{x\to x_0} f(x) \ge f(x_0)$ for all $x_0 \in X$.
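To make the $\liminf$ characterization concrete, here is a small Python illustration (a toy example, not from the thesis) with the step functions $f = 1_{(0,\infty)}$, which is lower semi-continuous, and $g = 1_{[0,\infty)}$, which is not.

```python
# Toy illustration of the lim inf criterion (assumed example, not from the
# thesis): f = 1_{(0,oo)} is lower semi-continuous at 0, while g = 1_{[0,oo)}
# is not, because lim inf_{x -> 0} g(x) = 0 < 1 = g(0).
def f(x):
    return 1.0 if x > 0 else 0.0

def g(x):
    return 1.0 if x >= 0 else 0.0

def liminf_at(h, x0, radii=(1e-1, 1e-2, 1e-3, 1e-4), n=1000):
    # crude numerical lim inf over shrinking punctured neighbourhoods of x0
    vals = []
    for r in radii:
        xs = [x0 + r * (2 * i / n - 1) for i in range(n + 1) if i != n // 2]
        vals.append(min(h(x) for x in xs))
    return max(vals)  # the infima are non-decreasing as the radius shrinks

f_is_lsc_at_0 = liminf_at(f, 0.0) >= f(0.0)   # True
g_is_lsc_at_0 = liminf_at(g, 0.0) >= g(0.0)   # False
```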

Lemma 1.4.3. Let $(X,d)$ be a metric space and let $f : X \to [0,\infty]$ be a non-negative lower semi-continuous function. If the set $A \subset X$ is dense in $X$, it holds that $\sup_{x\in A} f(x) = \sup_{x\in X} f(x)$.

Proof. One inequality is obvious from $A \subset X$, whereas the other inequality can be shown with Lemma 1.4.2.

We end with two useful results on lower semi-continuity.

Lemma 1.4.4 (Lemma 2.41 in [2]). The pointwise supremum of lower semi-continuous functions is lower semi-continuous.

Lemma 1.4.5. Let $X$ and $Y$ be topological spaces. Suppose that the function $\psi : X \to Y$ is continuous and that the function $f : Y \to [-\infty,\infty]$ is lower semi-continuous. Then, the map $f\circ\psi : X \to [-\infty,\infty]$ is lower semi-continuous as well.

Proof. The set $\psi^{-1}\big[f^{-1}[[-\infty,c]]\big]$ is closed in $X$ for every $c \in \mathbb R$.

1.4.2 Functions of bounded variation

We will list some results on functions of bounded variation.

Proposition 1.4.6 (Jordan decomposition for functions of bounded variation). Let $[a,b] \subset \mathbb R$ be a finite interval. A function $f : [a,b] \to \mathbb R$ is of bounded variation if and only if $f$ is the difference of two monotonically non-decreasing real-valued functions on $[a,b]$.

Proof. See [29], Theorem 5 in Part 1, Chapter 5, Section 2 (p. 103).

We call a function $f : [a,b] \to \mathbb R$ monotone if it is either monotonically non-decreasing or monotonically non-increasing.

Corollary 1.4.7. If $f : [a,b] \to \mathbb R$ is monotone, it is of bounded variation.
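The corollary is easy to see numerically: for a monotone function, every term in the variation sum has the same sign, so the sum telescopes to $|f(b)-f(a)|$. The Python sketch below uses assumed toy data (not from the thesis).

```python
# For a monotone f, the variation sum over any partition telescopes to
# |f(b) - f(a)|, so f is trivially of bounded variation (toy example).
def total_variation(func, a, b, n=10_000):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(func(xs[i + 1]) - func(xs[i])) for i in range(n))

f = lambda x: x ** 3 + 2.0 * x       # monotonically non-decreasing on [0, 1]
tv = total_variation(f, 0.0, 1.0)
endpoint_gap = abs(f(1.0) - f(0.0))  # = 3.0
```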

Lemma 1.4.8. Let $g : [a,b] \to \mathbb R$ be integrable and let $c \in \mathbb R$. Then, the function $f : [a,b] \to \mathbb R : x \mapsto c + \int_a^x g(s)\,ds$ is a continuous function of bounded variation.

Proof. Follows from Lemma 7 in Part 1, Chapter 5, Section 3 (p. 105) in [29].

We will later employ the 'integration by parts'-formula for continuous functions of bounded variation.

Proposition 1.4.9 (Integration by parts). Let $f,g : [a,b] \to \mathbb R$ be continuous functions of bounded variation. Then, the following 'integration by parts'-formula holds:
\[
\int_a^b f(x)\,dg(x) + \int_a^b g(x)\,df(x) = f(b)g(b) - f(a)g(a).
\]

Proof. See Proposition C.6 in [33].
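A crude Riemann-Stieltjes discretization in Python (an illustration with assumed smooth data, not part of the thesis) makes the formula tangible for $f(x) = x^2$ and $g(x) = \sin x$ on $[0,1]$:

```python
import math

# Left-point Riemann-Stieltjes sums for int f dg and int g df on [0, 1];
# their sum should approach f(b)g(b) - f(a)g(a) = sin(1) (toy example).
a, b, n = 0.0, 1.0, 100_000
f = lambda x: x * x
g = math.sin

xs = [a + (b - a) * i / n for i in range(n + 1)]
int_f_dg = sum(f(xs[i]) * (g(xs[i + 1]) - g(xs[i])) for i in range(n))
int_g_df = sum(g(xs[i]) * (f(xs[i + 1]) - f(xs[i])) for i in range(n))

lhs = int_f_dg + int_g_df
rhs = f(b) * g(b) - f(a) * g(a)
```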


2 Stochastic integration in Hilbert spaces

Anyone who is familiar with real-valued stochastic integrals knows what the ingredients are for the construction of a stochastic integral. In a nutshell, one has to introduce Brownian motion, also known as a Wiener process, consider simple processes and then approximate a, say, continuous process with these simple processes. From this approximation follows the definition of a stochastic integral.

The approach that we take for stochastic integrals in Hilbert spaces is not that much different, but we need to translate the real-valued setting into a Hilbert space setting, and that requires some effort. Luckily, many properties of real-valued stochastic integrals carry over, such as the Itô isometry, albeit in a slightly different form, and the optional sampling property.

In this chapter, we fix two separable (real) Hilbert spaces $U$ and $H$. The approach we take is more or less the same as in Chapter 2 in [27] and in Chapter 2 in [23], and we will mostly omit the proofs of the statements posed in this chapter by referring the reader to the corresponding statements in the literature.

Before we can define the stochastic integral, we need to introduce the Hilbert space equivalent of Brownian motion, known as a $Q$-Wiener process, which depends on a non-negative, symmetric finite-trace operator $Q$ on $U$; see Section 2.2. Just like in the real-valued setting, this is a continuous process with independent, Gaussian increments (see Definition 2.2.1). However, we first need to define what "Gaussian" means in this context and introduce more probability and martingale theory in Banach and Hilbert spaces: we will therefore start this chapter with a section on that topic (Section 2.1).

We will also encounter cylindrical Wiener processes in Section 2.4 and construct the stochastic integral with respect to a standard cylindrical Wiener process; more on that later.

2.1 Probability and martingale theory in Banach spaces

We will introduce Gaussian measures on Hilbert spaces and give a summary of the general theory of $E$-valued random variables and stochastic processes.

2.1.1 Probability theory: Gaussian measures on Hilbert spaces

The Riesz-Fréchet theorem allows us to identify every element of $U^*$ with a unique element of $U$. We will denote the Riesz-Fréchet isomorphism by $\Phi$. Recall that we write $\mathcal B(U)$ for the Borel $\sigma$-algebra on $U$. Let $\mu$ be a measure on $(U,\mathcal B(U))$. Since every element in $U^*$ is continuous, it is measurable with respect to the Borel $\sigma$-algebra $\mathcal B(U)$ on $U$, which allows us to define the push-forward measure $\mu_u$ for each $u \in U$ by
\[
\mu_u : \mathcal B(\mathbb R) \to [0,\infty) : A \mapsto \mu\big((\Phi(u))^{-1}[A]\big).
\]

This measure $\mu$ is called Gaussian if the pushforward measure $\mu_u$ admits a Gaussian law for every $u \in U$, that is, for every $u \in U$, there exist $m_u \in \mathbb R$ and $\sigma_u \in [0,\infty)$ such that $\mu_u \sim N(m_u,\sigma_u^2)$, i.e.,
\[
\mu_u(A) = \frac{1}{\sigma_u\sqrt{2\pi}}\int_A e^{-\frac{(x-m_u)^2}{2\sigma_u^2}}\,dx
\]
for all $A \in \mathcal B(\mathbb R)$ if $\sigma_u > 0$, or that, if $\sigma_u = 0$, we have $\mu_u(A) = \delta_{m_u}(A)$ for all $A \in \mathcal B(\mathbb R)$. We thus allow the pushforward measures $\mu_u$ to be degenerate.


There is another way we can determine whether a measure µ on (U,B(U)) is Gaussian.

Theorem 2.1.1. A measure $\mu$ on $(U,\mathcal B(U))$ is Gaussian if and only if there exist an $m \in U$ and a nonnegative, symmetric $Q \in L(U)$ with finite trace such that
\[
\hat\mu(u) := \int_U e^{i\langle u,v\rangle_U}\,d\mu(v) = e^{i\langle m,u\rangle_U - \frac12\langle Qu,u\rangle_U}
\]
for all $u \in U$. Moreover, this $\mu$ is uniquely determined by $m$ and $Q$.

Proof. See Theorem 2.1.2 in [27].

The element $m \in U$ in Theorem 2.1.1 is called the mean of $\mu$ and the operator $Q \in L(U)$ is called the covariance (operator). If $\mu$ has mean $m$ and covariance $Q$, we will write $\mu \sim N(m,Q)$.

A Gaussian measure µ on (U,B(U)) has the following properties.

Lemma 2.1.2. Let $\mu$ be a Gaussian measure on $(U,\mathcal B(U))$ with mean $m$ and covariance operator $Q$. Then, the following properties hold.

(i) For all $u \in U$, we have $\int_U \langle x,u\rangle_U\,d\mu(x) = \langle m,u\rangle_U$.

(ii) For all $u,v \in U$, we have $\int_U \langle x-m,u\rangle_U\,\langle x-m,v\rangle_U\,d\mu(x) = \langle Qu,v\rangle_U$.

(iii) The identity $\int_U \|x-m\|_U^2\,d\mu(x) = \operatorname{tr}(Q)$ holds.

Proof. See Theorem 2.1.2 in [27].
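A Monte Carlo sketch in Python (assumed toy data in $U = \mathbb R^3$ with a diagonal covariance operator, not from the thesis) illustrates properties (ii) and (iii):

```python
import numpy as np

# Monte Carlo check of Lemma 2.1.2(ii)-(iii) in U = R^3 with Q = diag(q)
# (toy data): the empirical covariance recovers <Qu, v>, and the empirical
# second moment of ||x - m|| recovers tr(Q).
rng = np.random.default_rng(1)
m = np.array([1.0, -2.0, 0.5])
q = np.array([0.5, 0.3, 0.1])                # eigenvalues of Q; tr(Q) = 0.9
x = m + rng.normal(size=(200_000, 3)) * np.sqrt(q)

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])

cov_uv = np.mean(((x - m) @ u) * ((x - m) @ v))    # should approach <Qu, v>
qu_v = np.dot(q * u, v)                            # = 0.2
trace_est = np.mean(np.sum((x - m) ** 2, axis=1))  # should approach tr(Q)
```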

2.1.2 Random variables and stochastic processes

We will summarize the general theory of $E$-valued random variables and stochastic processes. In this section, we will assume throughout that $E$ is a separable Banach space.

2.1.2.1 Random variables

A map $X$ from $\Omega$ into $E$ is called a random variable if it is measurable. Normally, we would require this map to be $\mathbb P$-strongly measurable, but since $E$ is separable, measurability is sufficient, see Observation 1.2.2. The random variable $X$ is called $p$-(Bochner) integrable if $\mathbb E\left[\|X\|_E^p\right] < \infty$. When $p = 2$, we say that $X$ is square-integrable and when $p = 1$, we will just say that $X$ is integrable.

If a random variable $X$ on a probability space $(\Omega,\mathcal F,\mathbb P)$ takes values in a separable Hilbert space $U$, we say that $X$ is Gaussian if its law
\[
\mathbb P_X := \mathbb P\circ X^{-1} : \mathcal B(U) \to [0,1] : A \mapsto \mathbb P\left(X^{-1}[A]\right)
\]
is Gaussian, hence, there exist an element $m \in U$ and a nonnegative, symmetric, finite-trace operator $Q \in L(U)$ such that $\mathbb P_X \sim N(m,Q)$.

2.1.2.2 Stochastic processes

We say that a family $X := \{X_t : t \in [0,T]\}$ is an $E$-valued (stochastic) process if $X_t$ is an $E$-valued random variable for every $t \in [0,T]$. We will sometimes refer to an element $t \in [0,T]$ as a time point.

Comparable to the real-valued case, we say that a process $X$ is continuous if all sample paths $[0,T] \to E : t \mapsto X_t(\omega)$ are continuous, and we call a process $X$ $\mathbb P$-almost surely continuous if the sample paths are continuous $\mathbb P$-almost surely. In addition, a process $X$ is said to be $p$-integrable if $X_t$ is $p$-integrable for each $t \in [0,T]$, with natural definitions for integrable and square-integrable processes.

28

Page 29: Wellposedness of Stochastic Di erential Equations in In ... · to develop a very complete theory of stochastic di erential equations { in a style so luminous by the way that these

SECTION 2.1 CHAPTER 2. STOCHASTIC INTEGRATION IN HILBERT SPACES

Two $E$-valued processes $X$ and $Y$ are called modifications (of each other) if, for every $t \in [0,T]$, the random variables $X_t$ and $Y_t$ coincide on a set of full probability. We call two processes $X$ and $Y$ $\mathbb P$-indistinguishable if the set $\{\omega \in \Omega : X_t(\omega) = Y_t(\omega)\ \text{for all}\ t \in [0,T]\}$ contains a set of full probability. The latter property obviously implies the former.

Furthermore, if the probability space $(\Omega,\mathcal F,\mathbb P)$ is endowed with a filtration $\mathbb F := \{\mathcal F_t : t \in [0,T]\}$, a process $X$ is called $\mathbb F$-adapted (or just adapted) if the random variable $X_t$ is $\mathcal F_t$-measurable for each $t \in [0,T]$.

We list some useful results.

Proposition 2.1.3. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space endowed with a filtration $\mathbb F$. Let $X$ and $Y$ be two continuous $E$-valued processes. If $X$ and $Y$ are modifications, then the processes are $\mathbb P$-indistinguishable.

Proof. Follows from Lemma 3.2.10 in [5].

Proposition 2.1.4. Let $X$ and $Y$ be two $E$-valued processes that are modifications of each other. Suppose that $X$ is adapted to the filtration $\mathbb F$. The process $Y$ is adapted if the underlying filtration $\mathbb F$ is normal.

There is another way to determine whether two processes $X$ and $Y$ are equal. We say that an $E$-valued process $X$, seen as a mapping from the product space $[0,T]\times\Omega$ to $E$, is (jointly) measurable if it is $\mathcal B([0,T])\otimes\mathcal F/\mathcal B(E)$-measurable. Two measurable $E$-valued processes are called $\lambda\otimes\mathbb P$-versions (of each other) if they coincide $\lambda\otimes\mathbb P$-almost everywhere.

We end with a measurability result regarding continuous processes.

Proposition 2.1.5. Let X be a continuous E-valued process. Then, it is measurable.

2.1.2.3 Progressive measurability

Progressive measurability is a desirable property for a stochastic process to possess. We start with a definition.

Definition 2.1.6 (Progressively measurable process). A process $X$ is called progressively measurable if the map $[0,t]\times\Omega \to E : (s,\omega) \mapsto X_s(\omega)$ is $\mathcal B([0,t])\otimes\mathcal F_t$-measurable for all $t \in [0,T]$.

However, there is an equivalent way to characterize a stochastic process as progressively measurable, due to [28, Definition 4.8], which will prove useful later.

Definition 2.1.7 (Progressive $\sigma$-algebra, progressively measurable process). A set $A \subset [0,T]\times\Omega$ is called progressively measurable, or just progressive, if the process $1_A$ is a progressively measurable process. The collection of progressive sets is a $\sigma$-algebra on $[0,T]\times\Omega$, called the progressive $\sigma$-algebra and denoted by $\mathcal M_T$.

A process $X = \{X_t : t \in [0,T]\}$ is progressively measurable if and only if $X$ is measurable with respect to $\mathcal M_T$.

We end this discussion on progressive measurability with three results. The proofs can be found in Part 2, Section 2.1.2 "Progressive Measurability" (pp. 388-390) in [10].

Proposition 2.1.8. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space equipped with a filtration $\mathbb F$. Let $X$ be a continuous, $E$-valued, $\mathbb F$-adapted process. Then, the process $X$ is progressively measurable.

Proposition 2.1.9. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space equipped with a filtration $\mathbb F$. Suppose that $X$ is a progressively measurable $E$-valued process. Then, it is adapted and measurable.

Proposition 2.1.10. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space equipped with a filtration $\mathbb F$. Suppose that $X$ is a progressively measurable $E$-valued process. Let $F$ be another Banach space and let $f : E \to F$ be measurable. Then, the $F$-valued process $\{f(X(t)) : t \in [0,T]\}$ is progressively measurable.


2.1.3 Martingale theory in Banach spaces

In this section, we fix a separable Banach space $E$. We start by looking at the existence and uniqueness of the conditional expectation for integrable $E$-valued random variables, which will allow us to define martingales.

2.1.3.1 Conditional expectation of Banach space-valued random variables

We will waste no time and directly start with the existence result on the conditional expectation of any integrable random variable taking values in $E$.

Proposition 2.1.11 (Existence of conditional expectation). Let $(E,\|\cdot\|_E)$ be a separable Banach space and let $(\Omega,\mathcal F,\mathbb P)$ be a probability space. Let $X$ be an integrable $E$-valued random variable. Let $\mathcal G \subset \mathcal F$ be a sub-$\sigma$-algebra of $\mathcal F$.

Then there exists an integrable, $E$-valued random variable $Z$, measurable with respect to $\mathcal G$ and unique up to a $\mathbb P$-null set, such that the following equality of Bochner integrals
\[
\int_A X\,d\mathbb P = \int_A Z\,d\mathbb P
\]
holds for all $A \in \mathcal G$.

Proof. See Proposition 2.2.1 in [27].
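On a finite probability space the conditional expectation can be computed by hand, which may help intuition. The Python sketch below (an assumed toy model, not from the thesis) averages a vector-valued $X$ over the atoms of $\mathcal G$ and checks the defining Bochner-integral identity on one atom.

```python
import numpy as np

# Conditional expectation on a finite probability space (toy example):
# Omega = {0,...,5} with uniform P, X an R^2-valued random variable, and
# G generated by the partition {0,1,2} | {3,4,5}. E[X | G] is constant on
# each atom, equal to the probability-weighted average of X there.
X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0],
              [1.0, 1.0], [1.0, 3.0], [4.0, 2.0]])
atoms = [[0, 1, 2], [3, 4, 5]]
p = np.full(6, 1.0 / 6.0)

Z = np.empty_like(X)
for atom in atoms:
    w = p[atom] / p[atom].sum()
    Z[atom] = (w[:, None] * X[atom]).sum(axis=0)   # atom-wise average

# Defining property: int_A X dP = int_A Z dP for the atom A = {0, 1, 2}.
A = [0, 1, 2]
lhs = (p[A][:, None] * X[A]).sum(axis=0)
rhs = (p[A][:, None] * Z[A]).sum(axis=0)
```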

As always, we will denote the $\mathcal G$-measurable $Z$ in Proposition 2.1.11 by $\mathbb E[X \mid \mathcal G]$ and refer to this random variable as the conditional expectation of $X$ given $\mathcal G$. Moreover, the following result holds.

Lemma 2.1.12. Let $(E,\|\cdot\|_E)$ be a separable Banach space and let $(\Omega,\mathcal F,\mathbb P)$ be a probability space. Let $X$ be an integrable $E$-valued random variable. Let $\mathcal G \subset \mathcal F$ be a sub-$\sigma$-algebra of $\mathcal F$. Let $\mathbb E[X \mid \mathcal G]$ be the conditional expectation of $X$ given $\mathcal G$. Then, the inequality
\[
\|\mathbb E[X \mid \mathcal G]\|_E \le \mathbb E\left[\|X\|_E \mid \mathcal G\right] \tag{2.1}
\]
holds.

Proof. See Proposition 2.2.1 in [27].

Combining inequality (2.1) with Jensen’s inequality, we have the following result.

Corollary 2.1.13. Let $p \ge 1$. Let $(E,\|\cdot\|_E)$ be a separable Banach space and let $(\Omega,\mathcal F,\mathbb P)$ be a probability space. Let $\mathcal G \subset \mathcal F$ be a sub-$\sigma$-algebra of $\mathcal F$. Suppose that $X$ is a $p$-integrable random variable. Then, the inequality
\[
\|\mathbb E[X \mid \mathcal G]\|_E^p \le \mathbb E\left[\|X\|_E^p \mid \mathcal G\right]
\]
holds.

2.1.3.2 Martingale theory in Banach spaces

The existence of the conditional expectation allows us to define an $E$-valued martingale.

Definition 2.1.14. Let $M := \{M(t) : t \le T\}$ be a stochastic process on a probability space $(\Omega,\mathcal F,\mathbb P)$ that takes values in a separable Banach space $(E,\|\cdot\|_E)$. Let $\mathbb F := \{\mathcal F_t : t \le T\}$ be a filtration on $(\Omega,\mathcal F,\mathbb P)$.

The process $M$ is said to be a $p$-integrable martingale with respect to $\mathbb F$ if

(i) $\mathbb E\left[\|M(t)\|_E^p\right] < \infty$ for all $t \le T$;


(ii) the random variable $M(t)$ is $\mathcal F_t$-measurable for every $t \le T$;

(iii) the identity $\mathbb E[M(t) \mid \mathcal F_s] = M(s)$ holds $\mathbb P$-a.s. for all $0 \le s \le t \le T$.

Then, when we look back at Corollary 2.1.13, we have the following consequence.

Corollary 2.1.15 (Maximal inequality). Let $p > 1$ and suppose that $M = \{M(t) : t \le T\}$ is a continuous, $E$-valued $p$-integrable martingale. Then, the process $\{\|M(t)\|_E^p : t \le T\}$ is a real-valued, non-negative, continuous $\mathbb F$-submartingale. Hence, Doob's $L^p$-inequality applies to $\{\|M(t)\|_E^p : t \le T\}$ and we find that the inequality
\[
\left(\mathbb E\left[\Big(\sup_{t\le T}\|M(t)\|_E\Big)^p\right]\right)^{\frac1p} \le \frac{p}{p-1}\cdot\left(\mathbb E\left[\|M(T)\|_E^p\right]\right)^{\frac1p} \tag{2.2}
\]
holds.

Now, we will introduce a space that will play a crucial role in defining the stochastic integral.

Definition 2.1.16 ($\mathcal M_T^2(E)$). Let $\mathcal M_T^2(E)$ denote the space of continuous, square-integrable $E$-valued martingales $M := \{M(t) : t \le T\}$. This is a Banach space when equipped with the norm
\[
\|M\|_{\mathcal M_T^2(E)} := \left(\mathbb E\left[\|M(T)\|_E^2\right]\right)^{1/2},
\]
see Proposition 2.2.9 in [27] for a proof.

Moreover, when we replace $E$ with a Hilbert space $H$, the space $\mathcal M_T^2(H)$ is a Hilbert space when endowed with the inner product
\[
\langle M,N\rangle_{\mathcal M_T^2(H)} := \mathbb E\left[\langle M(T),N(T)\rangle_H\right]
\]
for all $M,N \in \mathcal M_T^2(H)$.

A consequence of the norm on $\mathcal M_T^2(E)$ and inequality (2.2) is the following result.

Corollary 2.1.17. The map
\[
\Pi_t : \mathcal M_T^2(E) \to L^2(\Omega,\mathcal F,\mathbb P;E) : M \mapsto M(t)
\]
is continuous for every $t \in [0,T]$.

Moreover, the definition of a martingale also allows us to define local martingales.

Definition 2.1.18 (Local martingale). A process $M = \{M(t) : t \in [0,T]\}$ is said to be a local martingale if there exists a localizing sequence of stopping times $\{\tau_n : n \in \mathbb N\}$ such that for every $n \in \mathbb N$, the stopped process
\[
M^{\tau_n} := \{M(\tau_n \wedge t) : t \in [0,T]\}
\]
is a martingale.

2.2 Q-Wiener processes

The previous section will allow us to introduce a Wiener process: the generalization of Brownian motion taking values in the separable real Hilbert space $U$.

Definition 2.2.1 ($Q$-Wiener process). Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $Q \in L(U)$ be a symmetric, nonnegative bounded linear operator with finite trace. A $U$-valued process $W := \{W(t) : t \in [0,T]\}$ is called a $Q$-Wiener process if


(i) $W(0) = 0_U$;

(ii) $W$ has continuous trajectories $\mathbb P$-almost surely;

(iii) for any collection $0 \le t_1 < t_2 < \cdots < t_n \le T$, the random variables
\[
W(t_1),\ W(t_2)-W(t_1),\ \dots,\ W(t_n)-W(t_{n-1})
\]
are independent;

(iv) the increments have the following Gaussian laws: for all $0 \le s \le t \le T$, we have that $\mathbb P_{W(t)-W(s)} \sim N(0_U,(t-s)Q)$.

Regarding the continuity of the sample paths, we have the following assumption.

Assumption 2.2.2. We will henceforth assume that all $Q$-Wiener processes and Brownian motions have continuous paths. For a justification of this assumption regarding $Q$-Wiener processes, see the comment on page 99 in [12].

One might wonder if such a $Q$-Wiener process even exists. This will follow as a result of the next proposition, which will also give a little more intuition behind $Q$-Wiener processes.

Proposition 2.2.3. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $Q \in L(U)$ be a non-negative, symmetric, finite-trace operator on $U$. Let $\{e_k : k \in \mathbb N\}$ be the orthonormal basis of $U$ given by the eigenvectors of $Q$, with corresponding non-negative eigenvalues $\{\lambda_k : k \in \mathbb N\}$, counted with multiplicity and numbered in non-increasing order; see Proposition 1.1.23. Let $\Lambda := \{n \in \mathbb N : \lambda_n > 0\}$ be the index set of non-zero eigenvalues. In that case, a $U$-valued stochastic process $W := \{W(t) : t \in [0,T]\}$ is a $Q$-Wiener process if and only if there exists a sequence of independent real-valued continuous Brownian motions $\{\beta_k : k \in \Lambda\}$ on $(\Omega,\mathcal F,\mathbb P)$ such that
\[
W(t) = \sum_{k\in\Lambda}\sqrt{\lambda_k}\,\beta_k(t)\,e_k
\]
for every $t \in [0,T]$.

The series on the right-hand side converges in $L^2\left((\Omega,\mathcal F,\mathbb P);C([0,T];U)\right)$, where $C([0,T];U)$ is equipped with the sup-norm.

Proof/remark. For a proof, see Proposition 2.1.10 in [23] or Proposition 2.2 in Chapter 6 in [37]. We would like to point out that some authors take this decomposition as their definition for a $Q$-Wiener process, see e.g. Definition 2.1.6 in [15].
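To see the expansion in action, the following Python sketch (assumed toy data: $U = \mathbb R^8$ with the standard basis as eigenvectors and $\lambda_k = 2^{-k}$, not from the thesis) simulates a $Q$-Wiener path via the series above and checks $\mathbb E\|W(T)\|^2 = T\operatorname{tr}(Q)$, in line with Lemma 2.1.2(iii).

```python
import numpy as np

# Simulate W(t) = sum_k sqrt(lambda_k) beta_k(t) e_k on a time grid, with
# U = R^d, e_k the standard basis and lambda_k = 2^{-k} (toy data); the
# grid and truncation level are arbitrary illustration choices.
rng = np.random.default_rng(2)
T, n_steps, d = 1.0, 500, 8
dt = T / n_steps
lam = 0.5 ** np.arange(1, d + 1)   # summable eigenvalues, so tr(Q) < oo

beta = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_steps, d)), axis=0)
W = beta * np.sqrt(lam)            # one sampled path, in the eigenbasis of Q

# Sanity check: E||W(T)||^2 = T * tr(Q) (cf. Lemma 2.1.2(iii)).
n_mc = 20_000
WT = rng.normal(0.0, np.sqrt(T), size=(n_mc, d)) * np.sqrt(lam)
second_moment = np.mean(np.sum(WT ** 2, axis=1))
trace_Q = lam.sum()
```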

This proposition has the following useful consequence, when combined with Corollary 1.1.26.

Corollary 2.2.4. Sample paths of a $Q$-Wiener process lie completely in $Q^{\frac12}(U)$.

Sometimes, we endow the underlying probability space with a filtration and we want the $Q$-Wiener process to "behave appropriately" in conjunction with the filtration. This is laid out in the next definition.

Definition 2.2.5 ($Q$-Wiener process with respect to a filtration). Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, let $\mathbb F = \{\mathcal F_t : t \le T\}$ be a filtration on $(\Omega,\mathcal F,\mathbb P)$ and let $Q \in L(U)$ be a symmetric, nonnegative bounded linear operator with finite trace. A $U$-valued process $W := \{W(t) : t \in [0,T]\}$ is called a $Q$-Wiener process with respect to the filtration $\mathbb F$ if it satisfies (i), (ii) and (iv) in Definition 2.2.1 and if

• $W(t)$ is $\mathcal F_t$-measurable for every $t \in [0,T]$;

• for any $0 \le s \le t \le T$, the random variable $W(t)-W(s)$ is independent of $\mathcal F_s$.


Luckily, we can also turn this around: given a $Q$-Wiener process $W$ on a probability space $(\Omega,\mathcal F,\mathbb P)$, we can construct a filtration $\mathbb F$ so that $W$ is a $Q$-Wiener process with respect to $\mathbb F$.

Construction 2.2.6. Let $W := \{W(t) : t \in [0,T]\}$ be a $Q$-Wiener process on a probability space $(\Omega,\mathcal F,\mathbb P)$. Let $\mathcal N$ be the collection of all $\mathbb P$-null sets; define $\mathcal F_t^0 := \sigma(W(s) : s \le t)$ and let $\tilde{\mathcal F}_t := \sigma\big(\mathcal N\cup\mathcal F_t^0\big)$. If we then let
\[
\mathcal F_t := \bigcap_{t<u\le T}\tilde{\mathcal F}_u
\]
for all $t < T$ and $\mathcal F_T := \tilde{\mathcal F}_T$, then $\mathbb F$ is a normal filtration per construction and $W$ is a $Q$-Wiener process with respect to the filtration $\mathbb F = \{\mathcal F_t : t \in [0,T]\}$, see Proposition 2.1.13 in [27]. The filtration constructed here is called the normal filtration associated to $W$.

We end this section on $Q$-Wiener processes with a result that bears a resemblance to Brownian motion, namely that a $Q$-Wiener process is a square-integrable martingale.

Proposition 2.2.7. Let $T > 0$ and let $U$ be a separable Hilbert space. Let $W = \{W(t) : t \le T\}$ be a $U$-valued $Q$-Wiener process with respect to a normal filtration $\mathbb F := \{\mathcal F_t : t \le T\}$ on a probability space $(\Omega,\mathcal F,\mathbb P)$. Then, $W := \{W(t) : t \le T\}$ belongs to $\mathcal M_T^2(U)$.

Proof. See Proposition 2.2.10 in [23].

In the next section, we will construct the stochastic integral with respect to a $Q$-Wiener process.

2.3 Stochastic integral with respect to a Q-Wiener process and properties of the stochastic integral

We take the construction of the stochastic integral with respect to a $Q$-Wiener process step by step, by following the approach of Chapter 2 in [23], Section 4.2 in [8] and Section 2.2 in [15]. Just like with stochastic integrals with respect to finite-dimensional Brownian motion, we will first define the stochastic integral for processes that are reasonably easy to digest and take it from there.

2.3.1 Stochastic integral with respect to a Q-Wiener process

In this section, we let $(\Omega,\mathcal F,\mathbb P)$ be a complete probability space that is endowed with a normal filtration $\mathbb F = \{\mathcal F_t : t \in [0,T]\}$. Also, let $U$ and $H$ be two separable Hilbert spaces. We will endow the interval $[0,T]$ with the Lebesgue measure $\lambda$. Let $Q \in L(U)$ be a symmetric, nonnegative bounded linear operator with finite trace and let $W$ be a $Q$-Wiener process that takes values in $U$. Since $Q$ is symmetric and nonnegative, it admits a square root operator $Q^{\frac12}$. Let $\{u_n : n \in \mathbb N\}$ and $\{\lambda_n : n \in \mathbb N\}$ be the sequences of eigenvectors of $Q^{\frac12}$ with corresponding eigenvalues that we obtain from Corollary 1.1.26. Denote the index set of non-zero eigenvalues of $Q^{\frac12}$ by $\Lambda$.

Before we define the stochastic integral with respect to a $Q$-Wiener process, we would like to recall Proposition 1.1.27 and Corollary 2.2.4. From the latter, we obtain that the paths of the $Q$-Wiener process $W$ lie completely in $Q^{\frac12}(U)$. We will henceforth refer to $Q^{\frac12}(U)$ as $U_0$. Together with the $\langle\cdot,\cdot\rangle_0$-inner product defined in Proposition 1.1.27, the space $U_0$ forms a separable Hilbert space with orthonormal basis $\{\sqrt{\lambda_n}u_n : n \in \Lambda\}$. We will, unless explicitly mentioned otherwise, always assume that $U_0$ is endowed with the $\langle\cdot,\cdot\rangle_0$-inner product. In addition, we define $L_2^0(H) := L_2(U_0,H)$. Note that this is a separable Hilbert space, see Proposition 1.1.20, by the separability of $U_0$ and $H$.

We start by defining the stochastic integral for elementary processes, which we will introduce next.


An $L_2^0(H)$-valued process $\Phi = \{\Phi(t) : t \in [0,T]\}$ is said to be an elementary process if, for some $m \in \mathbb N$, there exist a partition $0 = t_0 < t_1 < \cdots < t_m = T$ of the interval $[0,T]$ and a collection of $L_2^0(H)$-valued random variables $\phi_0,\dots,\phi_{m-1}$, where every random variable $\phi_k : \Omega \to L_2^0(H)$ is $\mathcal F_{t_k}$-measurable and takes finitely many values, such that
\[
\Phi(t) = \sum_{k=0}^{m-1}\phi_k\,1_{(t_k,t_{k+1}]}(t) \tag{2.3}
\]
for every $t \in [0,T]$. We will denote the class of $L_2^0(H)$-valued elementary processes by $\mathcal E$.

Let $\Phi \in \mathcal E$ be given by (2.3). Then, for each $t \in [0,T]$, we define
\[
\int_0^t\Phi(s)\,dW(s) := \sum_{k=0}^{m-1}\phi_k\left(W(t_{k+1}\wedge t) - W(t_k\wedge t)\right). \tag{2.4}
\]
Then, we define the stochastic integral of $\Phi$ (with respect to $W$) to be the stochastic process given by
\[
\left\{\int_0^t\Phi(s)\,dW(s) : t \in [0,T]\right\} =: \int_0^\cdot\Phi(s)\,dW(s).
\]

One can verify that this definition is independent of the chosen representation of the elementary process $\Phi$. In addition, the stochastic integral $\int_0^\cdot\Phi(s)\,dW(s)$ is a square-integrable $H$-valued martingale.

Proposition 2.3.1. Let $\Phi$ be an elementary $L_2^0(H)$-valued process. Then, the stochastic integral $\int_0^\cdot\Phi(s)\,dW(s)$ belongs to $\mathcal M_T^2(H)$.

Proof. See Proposition 2.3.2 in [23].

The "square root" operator $Q^{\frac12}$ is a bounded linear operator, even a Hilbert-Schmidt operator on $U$, but when viewed as an operator from $U$ to $U_0$, one could lose this continuity through the different norm on the codomain. Luckily, this is not the case. As $\{\sqrt{\lambda_n}u_n : n \in \Lambda\}$ is an orthonormal basis of $U_0$ and since $\{u_n : n \in \Lambda\}$ is an orthonormal set in $U$ and a collection of eigenvectors of $Q^{\frac12}$, we find by Parseval's identity and the symmetry of $Q^{\frac12}$ with respect to the $U$-inner product that
\[
\left\|Q^{\frac12}u\right\|_0^2 = \sum_{n\in\Lambda}\left\langle Q^{\frac12}u,\sqrt{\lambda_n}u_n\right\rangle_0^2 = \sum_{n\in\Lambda}\left(\sum_{k\in\Lambda}\frac{1}{\lambda_k}\left\langle Q^{\frac12}u,u_k\right\rangle_U\left\langle\sqrt{\lambda_n}u_n,u_k\right\rangle_U\right)^2
\]
\[
= \sum_{n\in\Lambda}\left(\frac{1}{\lambda_n}\left\langle u,Q^{\frac12}u_n\right\rangle_U\left\langle\sqrt{\lambda_n}u_n,u_n\right\rangle_U\right)^2 = \sum_{n\in\Lambda}\langle u,u_n\rangle_U^2 \le \|u\|_U^2
\]
for all $u \in U$. This means that for every $A \in L_2^0(H)$, we have $A\circ Q^{\frac12} \in L_2(U,H)$ by Lemma 1.1.21. As an elementary process $\Phi \in \mathcal E$ is $L_2^0(H)$-valued, it follows that $\Phi(t)\circ Q^{\frac12} \in L_2(U,H)$ for every $t \in [0,T]$. This allows us to endow $\mathcal E$ with a seminorm.
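The chain of identities above can be checked directly in finite dimensions. The Python sketch below uses assumed toy data (not from the thesis): $U = \mathbb R^4$, $Q = \operatorname{diag}(\lambda)$ with all eigenvalues positive (so the inequality is an equality), and $\langle x,y\rangle_0 = \sum_k x_k y_k/\lambda_k$.

```python
import numpy as np

# Finite-dimensional check that ||Q^{1/2} u||_0^2 = sum_n <u, u_n>^2 <= ||u||^2
# (toy data; with all eigenvalues positive the inequality is an equality).
lam = np.array([0.4, 0.3, 0.2, 0.1])    # positive eigenvalues of Q
u = np.array([1.0, -2.0, 0.0, 3.0])

q_half_u = np.sqrt(lam) * u             # Q^{1/2} u in the eigenbasis of Q
norm0_sq = np.sum(q_half_u ** 2 / lam)  # uses <x, y>_0 = sum x_k y_k / lam_k
norm_u_sq = np.sum(u ** 2)              # ||u||_U^2 = 14
```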

Definition 2.3.2. The function $\|\cdot\|_T$, given by
\[
\|\Phi\|_T := \left(\mathbb E\left[\int_0^T\left\|\Phi(s)\circ Q^{\frac12}\right\|_{L_2(U,H)}^2\,ds\right]\right)^{\frac12}
\]
for every $\Phi \in \mathcal E$, is a seminorm on $\mathcal E$.


We can rewrite this seminorm using Proposition 1.1.27. We have
\[
\left\|A\circ Q^{\frac12}\right\|_{L_2(U,H)}^2 = \sum_{n\in\Lambda}\left\|A\left(Q^{\frac12}u_n\right)\right\|_H^2 + \sum_{n\notin\Lambda}\left\|A\left(Q^{\frac12}u_n\right)\right\|_H^2 = \|A\|_{L_2^0(H)}^2 \tag{2.5}
\]
for every $A \in L_2^0(H)$, as $\left\{Q^{\frac12}u_n : n \in \Lambda\right\}$ is an orthonormal basis of $U_0$ and $Q^{\frac12}u_n = 0_U$ for all $n \notin \Lambda$. Hence,
\[
\|\Phi\|_T = \left(\mathbb E\left[\int_0^T\|\Phi(s)\|_{L_2^0(H)}^2\,ds\right]\right)^{\frac12} \tag{2.6}
\]

for every $\Phi \in \mathcal E$.

For this seminorm to be a genuine norm, we consider two elementary processes $\Phi$ and $\Psi$ to be equivalent if and only if $\|\Phi-\Psi\|_T = 0$; we denote the equivalence class of an elementary process $\Phi$ by $[\Phi]$. Using this equivalence relation, we denote by $\mathcal E$ also the space of equivalence classes, on which $\|\cdot\|_T$ is a norm. As always, we will write $\Phi$ for its corresponding equivalence class $[\Phi]$ in $\mathcal E$. With this norm on $\mathcal E$, the (linear) mapping

int : E →M 2T (H) : Φ 7→

∫ ·0

Φ(s) dW (s)

becomes an isometry, by virtue of the next proposition.

Proposition 2.3.3 ("Itô isometry"). We have
\[
\|\Phi\|_T
= \biggl\| \int_0^{\,\cdot} \Phi(s) \, dW(s) \biggr\|_{\mathcal{M}_T^2(H)}
= \biggl( \mathbb{E}\biggl[ \Bigl\| \int_0^T \Phi(s) \, dW(s) \Bigr\|_H^2 \biggr] \biggr)^{\frac12} \tag{2.7}
\]
for all Φ ∈ E.

Proof. See Proposition 2.3.5 in [23].
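Although the proof is only cited, the content of the Itô isometry is easy to probe numerically. The following is a minimal sketch (illustrative, not part of the thesis) in the simplest possible setting U = H = R with Q = 1 and the deterministic integrand Φ(s) = s, so that ‖Φ‖_T² = ∫₀ᵀ s² ds = T³/3 should be close to the Monte Carlo estimate of E[|∫₀ᵀ Φ(s) dW(s)|²]; all function names are illustrative.

```python
import math
import random

random.seed(0)

def ito_integral_sample(phi, T, n_steps):
    """One forward-Euler sample of int_0^T phi(s) dW(s) for a scalar Brownian motion."""
    dt = T / n_steps
    total = 0.0
    for i in range(n_steps):
        # Brownian increment over [i*dt, (i+1)*dt] has law N(0, dt).
        total += phi(i * dt) * random.gauss(0.0, math.sqrt(dt))
    return total

T = 1.0
lhs = T ** 3 / 3  # ||Phi||_T^2 for Phi(s) = s
samples = [ito_integral_sample(lambda s: s, T, 100) for _ in range(5000)]
rhs = sum(x * x for x in samples) / len(samples)  # Monte Carlo estimate of E[|integral|^2]
print(lhs, rhs)
```

The two printed numbers agree up to Monte Carlo and time-discretization error, as (2.7) predicts.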

As int is a bounded linear operator between the normed vector space (E, ‖·‖_T) and the Hilbert space M_T^2(H), we can extend it uniquely and isometrically to the closure of E with respect to the ‖·‖_T-norm, which we will denote by Ē. Before we can give a characterization of Ē, we introduce another concept, the predictable σ-algebra, a σ-algebra on [0,T] × Ω that we will denote by P_T. It is constructed from the underlying normal filtration F by
\[
\mathcal{P}_T := \sigma\Bigl( \bigl\{ (s,t] \times F_s : 0 \le s < t \le T \text{ and } F_s \in \mathcal{F}_s \bigr\} \cup \bigl\{ \{0\} \times F_0 : F_0 \in \mathcal{F}_0 \bigr\} \Bigr).
\]
Processes that map into a separable Hilbert space K and are P_T/B(K)-measurable are called predictable, or sometimes K-predictable if we want to stress the state space of the process. Every K-valued process that is continuous and F-adapted is predictable. Moreover, it is known that the predictable σ-algebra on [0,T] × Ω is contained in the progressive σ-algebra we defined in Definition 2.1.7, i.e. P_T ⊂ M_T. Both results are consequences of Proposition 2.2.3 in [24].

The next statement tells us what kind of processes belong to Ē.

Proposition 2.3.4. Let Φ ∈ E be (the equivalence class with respect to ‖·‖_T of) an elementary L_2^0(H)-valued process; then Φ is L_2^0(H)-predictable. Moreover, if Φ is (the equivalence class with respect to ‖·‖_T of) a predictable L_2^0(H)-valued process with ‖Φ‖_T < ∞, there exists a sequence {Φ^{(n)} : n ∈ N} ⊂ E of (equivalence classes of) elementary L_2^0(H)-valued processes such that ‖Φ − Φ^{(n)}‖_T → 0 as n → ∞.


Proof. See Proposition 4.22 in [8].

We will denote the collection of (equivalence classes of) predictable L_2^0(H)-valued processes Φ with ‖Φ‖_T < ∞ by N_W^2(T; L_2^0(H)). By the previous proposition, we know that E is a dense subset of N_W^2(T; L_2^0(H)). This new space can be equipped with an inner product, given by
\[
\langle \Phi, \Psi \rangle_{\mathcal{N}_W^2(T; L_2^0(H))} := \mathbb{E}\biggl[ \int_0^T \langle \Phi(s), \Psi(s) \rangle_{L_2^0(H)} \, ds \biggr],
\]
and it is complete with respect to the norm induced by this inner product (see page 94 in [8]), which is easily seen to coincide with the ‖·‖_T-norm defined in (2.6); hence, Ē = N_W^2(T; L_2^0(H)). As a result, Proposition 2.3.3 extends to elements of N_W^2(T; L_2^0(H)), and the stochastic integral of a predictable L_2^0(H)-valued process with finite ‖·‖_T-norm exists and is a continuous square-integrable martingale as well.

To get a better understanding of the stochastic integral for elements of N_W^2(T; L_2^0(H)), recall that, by Proposition 2.2.3, a Q-Wiener process can be understood as an infinite series of (not necessarily standard) Brownian motions, moving in each orthogonal direction. A similar interpretation is possible for the stochastic integral.

Proposition 2.3.5 (Proposition 2.4.5 in [23]). Let (Ω, F, P) be a probability space that is endowed with a normal filtration F = {F_t : t ∈ [0,T]}. Let W be a Q-Wiener process on a separable Hilbert space U. Let {e_k : k ∈ N} be the orthonormal basis of U given by the eigenvectors of Q, with the corresponding non-negative eigenvalues {λ_k : k ∈ N}, counted with multiplicity and given in non-increasing order; define Λ := {k ∈ N : λ_k > 0} as the index set of non-zero eigenvalues. Let {β_k : k ∈ Λ} be a sequence of independent standard Brownian motions and let Φ ∈ N_W^2(T; L_2^0(H)). Then, for every t ∈ [0,T], the identity
\[
\int_0^t \Phi(s) \, dW(s) = \sum_{k\in\Lambda} \sqrt{\lambda_k} \int_0^t \Phi(s) e_k \, d\beta_k(s)
\]
holds P-almost surely; the series on the right-hand side converges in L^2((Ω, F, P); C([0,T]; H)), where C([0,T]; H) is equipped with the usual sup-norm.
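This series picture can be made concrete in a small truncated example. The sketch below (illustrative, not part of the thesis) builds W from finitely many independent standard Brownian motions with toy eigenvalues λ_k = 2^{-k} on R^K and checks the covariance identity E[⟨W(s), u⟩⟨W(t), v⟩] = (s ∧ t)⟨Qu, v⟩ by Monte Carlo; all names and parameter values are assumptions for the sketch.

```python
import math
import random

random.seed(1)

# Truncated series W(t) = sum_k sqrt(lambda_k) beta_k(t) e_k with toy
# eigenvalues lambda_k = 2^{-k}, k = 1, ..., K, in coordinates w.r.t. the
# eigenbasis of Q.
K = 5
lam = [2.0 ** -(k + 1) for k in range(K)]
u = [1.0] * K  # test vectors (coordinates w.r.t. the eigenbasis)
v = [1.0] * K
s, t = 0.5, 1.0

def inner_products_sample():
    """Sample (<W(s),u>, <W(t),v>) from the truncated series representation."""
    ws, wt = 0.0, 0.0
    for k in range(K):
        bs = random.gauss(0.0, math.sqrt(s))           # beta_k(s)
        bt = bs + random.gauss(0.0, math.sqrt(t - s))  # beta_k(t)
        ws += math.sqrt(lam[k]) * bs * u[k]
        wt += math.sqrt(lam[k]) * bt * v[k]
    return ws, wt

n = 20000
est = sum(ws * wt for ws, wt in (inner_products_sample() for _ in range(n))) / n
target = min(s, t) * sum(l * a * b for l, a, b in zip(lam, u, v))  # (s ^ t) <Qu, v>
print(est, target)
```

The Monte Carlo estimate agrees with (s ∧ t)⟨Qu, v⟩ up to sampling error, matching the defining covariance structure of a Q-Wiener process.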

We are not done yet: we can define the stochastic integral for an even richer class of (equivalence classes of) processes. We will denote by N_W(T; L_2^0(H)) the linear space of (equivalence classes of) L_2^0(H)-valued predictable processes Φ such that
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(s) \|_{L_2^0(H)}^2 \, ds < \infty \biggr) = 1.
\]
Elements of N_W(T; L_2^0(H)) are called stochastically integrable processes on [0,T]. To extend the definition of the stochastic integral to these processes, we will make use of the following lemma.

Lemma 2.3.6. Assume that Φ ∈ N_W^2(T; L_2^0(H)) and that τ is an F-stopping time such that P(τ ≤ T) = 1. Then, there exists a set A_{Φ,τ} of full probability, depending only on the process Φ and the stopping time τ, on which
\[
\int_0^t \bigl( \mathbf{1}_{(0,\tau]} \Phi \bigr)(s) \, dW(s) = \int_0^{\tau \wedge t} \Phi(s) \, dW(s)
\]
for all t ∈ [0,T].

Proof. See Lemma 4.9 in [8] or Lemma 2.3.9 in [23].


Let Φ be a stochastically integrable process and let {τ_n : n ∈ N} be a localizing sequence of stopping times such that 1_{(0,τ_n]} Φ belongs to N_W^2(T; L_2^0(H)) for every n ∈ N. Note that such a localizing sequence exists: it can be constructed in the same way as in the finite-dimensional Brownian motion setting.

Then, we define the stochastic integral of a stochastically integrable process Φ by setting, for every t ∈ [0,T],
\[
\int_0^t \Phi(s) \, dW(s) := \int_0^t \bigl( \mathbf{1}_{(0,\tau_n]} \Phi \bigr)(s) \, dW(s)
\]
on the set {τ_n ≥ t}. Using Lemma 2.3.6, one may show that this definition is consistent. Note that the stochastic integral ∫_0^· Φ(s) dW(s) of a stochastically integrable process is a continuous local martingale with localizing sequence {τ_n : n ∈ N}; see also Proposition 1.106 in [12].

Before we discuss the properties of the stochastic integral, we end this section by remarking that the construction of the stochastic integral for stochastically integrable processes does not depend on the chosen localizing sequence of stopping times (σ_n)_{n∈N}, provided (σ_n)_{n∈N} satisfies 1_{(0,σ_n]} Φ ∈ N_W^2(T; L_2^0(H)) for all n ∈ N. Moreover, Lemma 2.3.6 holds for stochastically integrable processes as well. The proofs of these results can be found in Remark 2.3.10 in [23].

2.3.2 Properties of the stochastic integral

Having defined the stochastic integral with respect to Q-Wiener processes in the previous section, we will now develop a toolbox for working with these stochastic integrals.

Lemma 2.3.7 (Lemma 2.4.1 in [23]). Let Φ ∈ N_W(T; L_2^0(H)), let K be another separable Hilbert space and let A ∈ L(H,K). Then, the (equivalence class with respect to ‖·‖_T of the) process {A(Φ(t)) : t ∈ [0,T]} is an element of N_W(T; L_2^0(K)) and the identity
\[
A\biggl( \int_0^T \Phi(t) \, dW(t) \biggr) = \int_0^T A(\Phi(t)) \, dW(t)
\]
holds P-almost surely.

The next tool we acquire is very useful in showing the main result. Before we introduce it,recall the definition of the adjoint operator we introduced in Section 1.1.2.

Lemma 2.3.8 (Lemma 2.4.2 in [23]). Let Φ ∈ N_W(T; L_2^0(H)) and let Ξ be an H-valued process. Define, for every (t,ω) ∈ [0,T] × Ω, the map
\[
\Phi^{\Xi}(t,\omega) : U_0 \to \mathbb{R} : u \mapsto \langle \Xi(t,\omega), \Phi(t,\omega) u \rangle_H. \tag{2.8}
\]
The following (in)equalities hold:
\[
\bigl\| (\Phi(t,\omega))^* (\Xi(t,\omega)) \bigr\|_0^2 = \bigl\| \Phi^{\Xi}(t,\omega) \bigr\|_{L_2^0(\mathbb{R})}^2 \le \| \Phi(t,\omega) \|_{L_2^0(H)}^2 \cdot \| \Xi(t,\omega) \|_H^2 \tag{2.9}
\]
for all (t,ω) ∈ [0,T] × Ω. Suppose that Ξ satisfies the integrability condition
\[
\int_0^T \bigl\| (\Phi(t))^* (\Xi(t)) \bigr\|_0^2 \, dt < \infty \tag{2.10}
\]
P-almost surely. Then, the process Φ^Ξ := {Φ^Ξ(t) : t ∈ [0,T]} is a predictable L_2^0(R)-valued process satisfying (2.10), meaning that Φ^Ξ is a stochastically integrable L_2^0(R)-valued process. Hence, when we define
\[
\int_0^t \langle \Xi(s), \Phi(s) \, dW(s) \rangle_H := \int_0^t \Phi^{\Xi}(s) \, dW(s), \tag{2.11}
\]
the stochastic integral ∫_0^· ⟨Ξ(s), Φ(s) dW(s)⟩_H is a well-defined real-valued continuous local martingale that is linear in both Ξ and Φ.


This has the following consequence.

Corollary 2.3.9. Let Φ ∈ N_W(T; L_2^0(H)) and let Ξ be an H-valued process. The inequality
\[
\int_0^T \bigl\| \Phi^{\Xi}(t) \bigr\|_{L_2^0(\mathbb{R})}^2 \, dt
\le \Bigl[ \sup_{t\in[0,T]} \| \Xi(t) \|_H^2 \Bigr] \cdot \int_0^T \| \Phi(t) \|_{L_2^0(H)}^2 \, dt \tag{2.12}
\]
holds by (2.9). If the right-hand side is finite P-almost surely, the process Φ^Ξ belongs to N_W(T; L_2^0(R)) and the stochastic integral ∫_0^· ⟨Ξ(s), Φ(s) dW(s)⟩_H is well-defined. In particular, this is the case if Ξ is a continuous F-adapted H-valued process.

Later, we will use the next “neat trick”.

Corollary 2.3.10. Let 0 ≤ s ≤ t ≤ T, let Φ ∈ N_W(T; L_2^0(H)) and let X be an H-valued F_s-measurable integrable random variable. Then, we have
\[
\biggl\langle X, \int_s^t \Phi(r) \, dW(r) \biggr\rangle_H = \int_s^t \langle X, \Phi(r) \, dW(r) \rangle_H \quad \text{P-almost surely.} \tag{2.13}
\]

Proof. Let Φ be an elementary L_2^0(H)-valued process as given in (2.3) on page 34. (We may assume that both s and t belong to the partition of [0,T]; if not, we add them.) Expand ∫_s^t Φ(u) dW(u) := ∫_0^t Φ(u) dW(u) − ∫_0^s Φ(u) dW(u) using (2.4) on page 34. Plugging this into the left-hand side of (2.13) shows that the assertion holds for elementary processes. Then, approximation with a sequence of elementary processes yields the statement for all Φ ∈ N_W(T; L_2^0(H)).

The next lemma allows us to define an analogue of quadratic variation for a stochastic integralwith respect to a Q-Wiener process.

Lemma 2.3.11. Let Φ ∈ N_W(T; L_2^0(H)). Write M := ∫_0^· Φ(s) dW(s). Define, for each t ∈ [0,T], the real-valued random variable
\[
\langle M \rangle_t := \int_0^t \| \Phi(s) \|_{L_2^0(H)}^2 \, ds.
\]
Then the process ⟨M⟩ is the unique continuous, increasing, F-adapted process starting at zero such that the process {‖M(t)‖_H^2 − ⟨M⟩_t : t ≤ T} is a real-valued continuous local martingale. Here, uniqueness is meant up to P-indistinguishability. We call ⟨M⟩ the quadratic variation of the stochastic integral ∫_0^· Φ(s) dW(s).

If, moreover, Φ ∈ N_W^2(T; L_2^0(H)), then for any nested sequence {Π_n : n ≥ 1} of partitions Π_n := {0 = t_0^n < t_1^n < ⋯ < t_{k_n}^n = T} of the interval [0,T] whose mesh tends to zero, it holds, for each t ∈ [0,T], that
\[
\mathbb{E}\Biggl[ \biggl| \langle M \rangle_t - \sum_{\substack{k\in\mathbb{N}\cup\{0\}:\\ t_{k+1}^n \in \Pi_n \cap [0,t]}} \bigl\| M(t_{k+1}^n) - M(t_k^n) \bigr\|_H^2 \biggr| \Biggr] \to 0
\]
as n → ∞.

Proof. See Lemma 2.4.4 in [23].
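The convergence of the discrete quadratic sums can be observed on a single simulated path. The sketch below (illustrative, not part of the thesis) works in the scalar case U = H = R with Q = 1 and Φ(s) = s, comparing Σ_k |M(t_{k+1}) − M(t_k)|² along a fine partition with ⟨M⟩_T = ∫₀ᵀ s² ds = 1/3.

```python
import math
import random

random.seed(3)

# One path of M(t) = int_0^t s dW(s): its increment over [t_k, t_{k+1}] is
# approximated by Phi(t_k) * (W(t_{k+1}) - W(t_k)) on a fine grid.
n, T = 20000, 1.0
dt = T / n
qv = 0.0
for k in range(n):
    dM = (k * dt) * random.gauss(0.0, math.sqrt(dt))
    qv += dM * dM  # discrete quadratic variation sum
print(qv, T ** 3 / 3)
```

For a fine partition the sum concentrates around ⟨M⟩_T, in line with the L¹-convergence stated in the lemma.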


Corollary 2.3.12. Let Φ ∈ N_W(T; L_2^0(H)) and let Ξ be an H-valued process such that (2.10) in Lemma 2.3.8 is satisfied. Define M := ∫_0^· Φ(s) dW(s). Then, the quadratic variation of the stochastic integral ∫_0^· ⟨Ξ(s), Φ(s) dW(s)⟩_H is given by ∫_0^· ‖Φ^Ξ(s)‖²_{L_2^0(R)} ds. Moreover, in view of (2.12), the inequality
\[
\int_0^T \bigl\| \Phi^{\Xi}(t) \bigr\|_{L_2^0(\mathbb{R})}^2 \, dt \le \Bigl[ \sup_{t\in[0,T]} \| \Xi(t) \|_H^2 \Bigr] \cdot \langle M \rangle_T \tag{2.14}
\]
holds.

The last statement of this section will be used frequently.

Proposition 2.3.13. Let Φ ∈ N_W(T; L_2^0(H)) and let Ξ be an H-valued process satisfying the integrability condition (2.10) in Lemma 2.3.8. Let δ, ε ∈ (0,∞) and N ∈ N be arbitrary. Then, the inequality
\[
\mathbb{P}\biggl( \sup_{t\le T} \Bigl| \int_0^t \langle \Xi(s), \Phi(s) \, dW(s) \rangle_H \Bigr| \ge \varepsilon \biggr)
\le \frac{3\delta}{\varepsilon} + \mathbb{P}\biggl( \sup_{t\in[0,T]} \| \Xi(t) \|_H > N \biggr) + \frac{N}{\delta^2}\, \mathbb{E}\biggl[ \int_0^T \| \Phi(s) \|_{L_2^0(H)}^2 \, ds \biggr]
\]
holds.

Proof. Use Lemma 2.3.11, Corollary 1.3.3 and Markov’s inequality.

2.4 Q-cylindrical Wiener processes

In this section, we will introduce Q-cylindrical Wiener processes, where Q is again a symmetric, non-negative bounded linear operator, but not necessarily of finite trace. Later, we will construct the stochastic integral with respect to such a process, but only for a particular choice of Q.

Let us fix a complete probability space (Ω, F, P), endowed with a normal filtration F = {F_t : t ∈ [0,T]}. Moreover, let U and H be separable Hilbert spaces. We will start by giving the definition of a Q-cylindrical Wiener process.

Definition 2.4.1. Let Q be a symmetric, non-negative bounded linear operator on U. A family 𝒲 := {𝒲(t) : t ≤ T} of maps from U to L²(Ω; R) is called a Q-cylindrical Wiener process (on U) if it satisfies the following conditions:

(i) for every t ∈ [0,T], the map 𝒲(t) : U → L²(Ω; R) is linear;

(ii) for every u ∈ U, the collection {𝒲(t)(u) : t ≤ T} is an F-adapted Brownian motion with Var(𝒲(t)(u)) = t⟨Qu, u⟩_U for all t ∈ [0,T];

(iii) for all s, t ∈ [0,T] and all u, v ∈ U, we have E[𝒲(s)(u) · 𝒲(t)(v)] = (s ∧ t)⟨Qu, v⟩_U.

The operator Q is referred to as the covariance operator of W.

We call a Q-cylindrical Wiener process a standard cylindrical Wiener process if Q = idU. Inthat setting, the next result holds.

Corollary 2.4.2. Let 𝒲 be a standard cylindrical Wiener process on U and let {u_k : k ∈ N} be an orthonormal basis of U. Define for every k ∈ N the process W^{[k]} := {𝒲(t)(u_k) : t ∈ [0,T]}. Then, it follows from Condition (ii) in Definition 2.4.1 that W^{[k]} is a standard Brownian motion. Furthermore, in combination with the previous observation, Condition (iii) implies that {W^{[n]} : n ∈ N} is a collection of independent Brownian motions.


We can combine Corollary 2.4.2 with Proposition 2.2.3 to obtain the following result.

Proposition 2.4.3. Let 𝒲 be a standard cylindrical Wiener process on a separable Hilbert space U. Let {u_k : k ∈ N} be an orthonormal basis of U. Consider arbitrary n ∈ N and define U_n := Sp(u_1, …, u_n). Then, the process
\[
W^{(n)} := \Bigl\{ \sum_{k=1}^n \mathcal{W}(t)(u_k) \cdot u_k : t \le T \Bigr\} = \Bigl\{ \sum_{k=1}^n W^{[k]}(t) \cdot u_k : t \le T \Bigr\} \tag{2.15}
\]
is an id_{U_n}-Wiener process on U_n.

When we look back at Definition 2.4.1, we do not require the covariance operator Q to have finite trace, as was the case with a Q-Wiener process. However, if it does, we can associate the Q-cylindrical Wiener process 𝒲 to a Wiener process on U with the same covariance operator Q.

Theorem 2.4.4 (Theorem 3.2 in [9]). Let U be a separable Hilbert space and let 𝒲 be a Q-cylindrical Wiener process on U. Then, the following three conditions are equivalent.

1. The Q-cylindrical Wiener process 𝒲 can be associated to a Q-Wiener process W on U, in the sense that, for every u ∈ U, we have ⟨W(t), u⟩_U = 𝒲(t)(u) for all t ∈ [0,T].

2. For every t ∈ [0,T], the map u ↦ 𝒲(t)(u) defines a Hilbert-Schmidt operator from U to L²(Ω; R).

3. The covariance operator Q has finite trace.

In the next section, we want to construct the stochastic integral with respect to a standard cylindrical Wiener process 𝒲. But, when the Hilbert space U is infinite-dimensional, the identity operator on U does not have finite trace. So, if we were to construct the stochastic integral with respect to 𝒲 using the associated Wiener process, we are out of luck in view of Theorem 2.4.4. Luckily, we can patch this, as we will see in the next section, where we will define the stochastic integral with respect to a standard cylindrical Wiener process.

A further discussion of the properties of cylindrical Wiener processes is beyond the scope of this thesis. The interested reader may consult Chapter 2 in [15] or [9] as a whole.

2.4.1 Stochastic integral with respect to standard cylindrical Wiener processes

In this section, we fix a standard cylindrical Wiener process 𝒲 on a separable Hilbert space U and let H be another separable Hilbert space. To construct the stochastic integral with respect to 𝒲, let J be an arbitrary injective Hilbert-Schmidt operator on U. It is important to note that such an operator J always exists, as the next example shows.

Example 2.4.5. Let {u_n : n ∈ N} be an orthonormal basis of U; then the operator
\[
J : U \to U : u \mapsto \sum_{n\in\mathbb{N}} \frac{1}{n} \langle u, u_n \rangle_U \, u_n
\]
is an injective Hilbert-Schmidt operator on U; compare Remark 2.5.1 in [23].
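The Hilbert-Schmidt property of this J amounts to nothing more than the convergence of Σ_n ‖Ju_n‖² = Σ_n 1/n² = π²/6; a truncated computation (an illustrative sketch, not part of the thesis) makes the finiteness concrete.

```python
import math

# The operator J u = sum_n (1/n) <u, u_n> u_n maps the basis vector u_n to
# (1/n) u_n, so its squared Hilbert-Schmidt norm is sum_n 1/n^2 = pi^2/6.
hs_norm_sq = sum(1.0 / (n * n) for n in range(1, 100001))
print(hs_norm_sq, math.pi ** 2 / 6)
```

The tail of the series beyond N is of order 1/N, so the truncated sum is already very close to π²/6.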

Using such an operator J, we can construct a Q-Wiener process on U; recall the notation J* for the adjoint of J.


Proposition 2.4.6. Let U be a separable Hilbert space, let {u_n : n ∈ N} be an orthonormal basis of U and let J be an arbitrary injective Hilbert-Schmidt operator on U. Then, the operator Q := J ∘ J* is a non-negative, symmetric operator on U with finite trace. Let {W^{[n]} : n ∈ N} be the collection of independent Brownian motions we obtain from Corollary 2.4.2. Then, the process W := {W(t) : t ≤ T} given by
\[
W(t) := \sum_{n\in\mathbb{N}} W^{[n]}(t) \cdot J(u_n) \tag{2.16}
\]
defines a Q-Wiener process on U, where the series on the right converges in M_T^2(U). Define U_0 := Q^{1/2}(U) and let ⟨·,·⟩_0 be the inner product on U_0 defined in Proposition 1.1.27.

Then, we have U_0 = J(U), and for all u ∈ U it holds that ‖u‖_U = ‖Ju‖_0. Hence, the operator J : U → U_0 is an isometry.

Proof. See Proposition 2.5.2 in [23].

To construct the stochastic integral with respect to 𝒲, we first repeat the results we obtained in the previous section: if we consider the Q-Wiener process constructed in Proposition 2.4.6, we can integrate L_2^0(H)-predictable processes Φ for which
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2^0(H)}^2 \, dt < \infty \biggr) = 1
\]
with respect to this Q-Wiener process. We will relate this to the class of processes that are integrable with respect to 𝒲.

Lemma 2.4.7. Under the assumptions of Proposition 2.4.6, the collection {Ju_n : n ∈ N} forms an orthonormal basis of (U_0, ⟨·,·⟩_0).

Proof. Since {u_n : n ∈ N} is an orthonormal basis of U, it follows from the polarization identity (1.5) and the linearity of the isometric isomorphism J : (U, ⟨·,·⟩_U) → (U_0, ⟨·,·⟩_0) that
\[
\langle u, v \rangle_U = \tfrac14 \bigl( \| u + v \|_U^2 - \| u - v \|_U^2 \bigr) = \tfrac14 \bigl( \| Ju + Jv \|_0^2 - \| Ju - Jv \|_0^2 \bigr) = \langle Ju, Jv \rangle_0
\]
for all u, v ∈ U, establishing that {Ju_n : n ∈ N} is an orthonormal set in U_0. It follows from this property, the linearity of J and the fact that J maps U onto U_0 that {Ju_n : n ∈ N} forms an orthonormal basis of U_0.

Lemma 2.4.8. The map
\[
K_J : L_2(U,H) \to L_2^0(H) : A \mapsto A \circ J^{-1}
\]
is an isometric isomorphism with inverse K_J^{-1} : L_2^0(H) → L_2(U,H) : S ↦ S ∘ J.

Proof. Note that, since J : U → U_0 is an invertible bounded linear operator, its inverse J^{-1} is also continuous. Let A ∈ L_2(U,H) be arbitrary. Then, we know that A ∘ J^{-1} : U_0 → H is a continuous linear operator, and that, since J : U → U_0 is invertible and {Ju_n : n ∈ N} forms an orthonormal basis of U_0 by Lemma 2.4.7,
\[
\| A \|_{L_2(U,H)}^2 = \sum_{n\in\mathbb{N}} \| A u_n \|_H^2 = \sum_{n\in\mathbb{N}} \bigl\| (A \circ J^{-1})(J u_n) \bigr\|_H^2 = \bigl\| A \circ J^{-1} \bigr\|_{L_2^0(H)}^2; \tag{2.17}
\]
hence, A ∘ J^{-1} : U_0 → H is a Hilbert-Schmidt operator. We can conclude that K_J is an isometry. The surjectivity of K_J follows from the invertibility of J and (2.17). This shows that K_J is even an isometric isomorphism. It is clear that K_J^{-1} is the inverse of K_J. This concludes the proof.


To define the stochastic integral with respect to a cylindrical Wiener process 𝒲, take an L_2(U,H)-predictable process Φ such that
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2(U,H)}^2 \, dt < \infty \biggr) = 1
\]
and consider the process K_J(Φ) := {K_J(Φ(t)) : t ≤ T}. By Lemma 2.4.8, this process K_J(Φ) is L_2^0(H)-valued and satisfies
\[
\mathbb{P}\biggl( \int_0^T \| K_J(\Phi(t)) \|_{L_2^0(H)}^2 \, dt < \infty \biggr) = \mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2(U,H)}^2 \, dt < \infty \biggr) = 1.
\]
Moreover, the process K_J(Φ) is L_2^0(H)-predictable, as follows from the next lemma.

Lemma 2.4.9. Let Φ be an L_2(U,H)-predictable process such that
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2(U,H)}^2 \, dt < \infty \biggr) = 1.
\]
Then, it follows that K_J(Φ) is L_2^0(H)-predictable.

Proof. Take arbitrary F ∈ B(L_2^0(H)); it follows that
\[
(K_J(\Phi))^{-1}[F] = \Phi^{-1}\bigl[ K_J^{-1}[F] \bigr] \in \mathcal{P}_T,
\]
since K_J : L_2(U,H) → L_2^0(H) is continuous, hence measurable, and Φ is an L_2(U,H)-predictable process.

With these observations, we reach the following conclusion.

Conclusion 2.4.10. If Φ is an L_2(U,H)-predictable process such that
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2(U,H)}^2 \, dt < \infty \biggr) = 1,
\]
then K_J(Φ) ∈ N_W(T; L_2^0(H)).

In the spirit of the previous section, let us denote the class of L_2(U,H)-predictable processes Φ such that
\[
\mathbb{P}\biggl( \int_0^T \| \Phi(t) \|_{L_2(U,H)}^2 \, dt < \infty \biggr) = 1
\]
by N_𝒲(T; L_2(U,H)).

After these preparations, we are able to define the stochastic integral with respect to a cylindrical Wiener process 𝒲. Let Φ ∈ N_𝒲(T; L_2(U,H)); then, we define
\[
\int_0^t \Phi(s) \, d\mathcal{W}(s) := \int_0^t K_J(\Phi(s)) \, dW(s) = \int_0^t \Phi(s) \circ J^{-1} \, dW(s) \tag{2.18}
\]
for every t ∈ [0,T].

One can verify that the stochastic integral ∫_0^· Φ(s) d𝒲(s) does not depend on the choice of J; compare Remark 2.5.3 (part 1) in [23] and Remark 3.9 in [9]. We will therefore henceforth use the Hilbert-Schmidt operator J constructed in Example 2.4.5.

Assumption 2.4.11. We will use the Hilbert-Schmidt operator J given in Example 2.4.5 to construct the Q-Wiener process that is associated with a standard cylindrical Wiener process 𝒲 in Proposition 2.4.6.

With this definition of the stochastic integral with respect to a standard cylindrical Wiener process, all properties we derived for stochastic integrals with respect to Q-Wiener processes carry over to this setting if we apply the necessary natural adjustments; oftentimes, this just means replacing L_2^0(H) with L_2(U,H) or U_0 with U.


2.5 A very important remark regarding the integrands of the stochastic integral

We will end this chapter with a very important remark, which can be found on page 53 in [23], that extends the definition of the stochastic integral to progressively measurable processes.

"Finally, we note that since the stochastic integrals in this chapter all have a standard Wiener process as integrator [which is a continuous process], we can drop the predictability assumption on Φ ∈ N_W(T; L_2^0(H)) and (as we shall do in subsequent chapters) just assume progressive measurability, i.e. Φ|_{[0,t]×Ω} is B([0,t]) ⊗ F_t / B(L_2^0(H))-measurable for all t ∈ [0,T], at least if (Ω, F, P) is complete (otherwise we consider its completion)."

The argument why this is possible is a usual shift argument; compare Theorem 6.3.1 in [38] and Lemma 5.12 in [33].


3 Main result

In this chapter, we will focus on proving the wellposedness of SDEs. Before we do so, we will sketch the setting in which we will work. The approach we take here is the one depicted in Chapter 4 in [23] and Chapter 4 in [27], and is comparable to Chapter 4 in [15].

3.1 General setting of main result

First, we study the concept of a Gelfand triple. After that, we will impose conditions on the coefficients in the SDE (3.14) defined below.

3.1.1 Gelfand triple

Let (H, ⟨·,·⟩_H) be a separable Hilbert space. Let (V, ‖·‖_V) be a reflexive Banach space that is a linear subspace of H and is dense in H with respect to the inner product norm ‖·‖_H; as an example, one could think of L^p([0,1], λ), for an arbitrary p ∈ (2,∞), as a subspace of L²([0,1], λ).

Also, assume that the inclusion map
\[
\iota : V \to H : v \mapsto v \tag{3.1}
\]
is a continuous embedding, i.e., there exists a real number C > 0 such that ‖v‖_H ≤ C‖v‖_V for all v ∈ V.
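For the example V = L^p([0,1], λ) inside H = L²([0,1], λ) with p > 2, the embedding constant can be taken to be C = 1: on a probability space such as ([0,1], λ), Hölder's inequality (equivalently, the power-mean inequality) gives ‖v‖_{L²} ≤ ‖v‖_{L^p}. A quick numerical sketch (illustrative, not part of the thesis; step functions stand in for general v) checks this.

```python
import random

random.seed(2)

def lp_norm(vals, p):
    """L^p norm of a step function on [0,1] built from len(vals) equal-width pieces."""
    return (sum(abs(x) ** p for x in vals) / len(vals)) ** (1.0 / p)

p = 4  # any p > 2 works the same way
for _ in range(200):
    vals = [random.uniform(-5.0, 5.0) for _ in range(50)]
    # ||v||_H = ||v||_{L^2} <= ||v||_{L^p} = ||v||_V, i.e. (3.1) holds with C = 1.
    assert lp_norm(vals, 2) <= lp_norm(vals, p) + 1e-12
print("||v||_H <= ||v||_V verified on random step functions")
```

Note that this direction of the inequality relies on the underlying measure space having total mass one; on a space of infinite measure, L^p is in general not contained in L².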

The Riesz-Fréchet Theorem asserts that the Hilbert space H is isometrically isomorphic to its dual H*; let us write Φ : H → H* for this isometric isomorphism. In addition, define the map R : H* → V* : g ↦ g|_V, which restricts linear functionals on H to V. This map is continuous from H* to V*, as, for every g ∈ H*,
\[
\| R(g) \|_{V^*}
= \sup_{\substack{v\in V \\ v\neq 0_V}} \frac{\bigl| g|_V(v) \bigr|}{\| v \|_V}
= \sup_{\substack{v\in V \\ v\neq 0_V}} \frac{| g(v) |}{\| v \|_V}
= C \sup_{\substack{v\in V \\ v\neq 0_V}} \frac{| g(v) |}{C \| v \|_V}
\le C \sup_{\substack{v\in V \\ v\neq 0_V}} \frac{| g(v) |}{\| v \|_H}
= C \sup_{\substack{h\in H \\ h\neq 0_H}} \frac{| g(h) |}{\| h \|_H}
= C \| g \|_{H^*}, \tag{3.2}
\]
where the inequality follows from ι being a continuous embedding and the second-to-last equality follows from the denseness of V in H with respect to the inner product H-norm. From the same denseness argument, it also follows that R is injective. Then, if we define R' : (H*, ‖·‖_{H*}) → (R(H*), ‖·‖_{V*}) : g ↦ R(g), which is just the operator R with its codomain restricted to its own image, the map R' is trivially invertible by the Open Mapping Theorem. Hence, the map
\[
\mathcal{J} := R' \circ \Phi : (H, \|\cdot\|_H) \to (R(H^*), \|\cdot\|_{V^*}) : h \mapsto \Phi(h)|_V \tag{3.3}
\]
is an isomorphism. Moreover, 𝒥(H) is dense in V*: let ζ ∈ V** be a functional that vanishes on 𝒥(H) and recall the evaluation map J_V defined in (1.13). Then, since V is reflexive, there exists a v ∈ V such that ζ = J_V(v). Then, we find, for all h ∈ H,
\[
0 = \zeta(\mathcal{J}(h)) = (J_V(v))(\mathcal{J}(h)) = (\mathcal{J}(h))(v) = (\Phi(h))(v) = \langle h, v \rangle_H,
\]


where the second-to-last equality holds since the functionals Φ(h) and 𝒥(h) coincide on V. This implies that v = 0_H = 0_V. As a consequence, we have ζ = J_V(0_V) = 0_{V**}. Applying Proposition 1.1.3 yields that 𝒥(H) is dense in V*. Then, since H is a separable Hilbert space and the map 𝒥 is injective with a dense image in V*, it follows that V* is separable and, consequently, this means that V is separable with respect to ‖·‖_V as well, by Theorem 1.1.4.

The Hilbert space H is isomorphic to 𝒥(H) ⊂ V*, so we can identify these spaces with each other. By doing so, we obtain the following identities for all h ∈ H and v ∈ V:
\[
\langle h, v \rangle_H = (\Phi(h))(v) = (\mathcal{J}(h))(v) = \langle \mathcal{J}(h), v \rangle_{V^*\times V} = \langle h, v \rangle_{V^*\times V}, \tag{3.4}
\]
where the last equality follows from the identification of h with 𝒥(h).

From now on, we will mostly omit mention of 𝒥 and consider H ≅ 𝒥(H) ⊂ V* as a linear subspace of V*, sometimes writing H ⊂ V*, where we have to keep the isomorphism 𝒥 in the back of our mind. Hence, together with V, we have "V ⊂ H ⊂ V*", which is known under a few different names: in mathematics, it is called a Gelfand triple, a Hilbert triple or an evolution triple, whereas in physics, it is sometimes referred to as a rigged Hilbert space. The Hilbert space H is often called a pivot space.

There exists a lower semi-continuous function on V* that allows us to extend the H-norm on H to V*.

Proposition 3.1.1. Define the function
\[
\alpha_H : V^* \to [0,\infty] : \varphi \mapsto \sup\bigl\{ \bigl| \langle \varphi, v \rangle_{V^*\times V} \bigr| : v \in V,\ \| v \|_H \le 1 \bigr\}. \tag{3.5}
\]
This function is lower semi-continuous. Moreover, let φ ∈ V* be arbitrary. Then, we have φ ∈ H if and only if α_H(φ) < ∞. In addition, we have α_H(φ) = ‖φ‖_H for all φ ∈ H.

Proof. The map V* → [0,∞] : φ ↦ |⟨φ, v⟩_{V*×V}| is continuous, and therefore lower semi-continuous, for every v ∈ V, so in particular for every v ∈ V with ‖v‖_H ≤ 1. This implies that α_H is lower semi-continuous by Lemma 1.4.4.

The "only if"-part of the second statement follows directly from (3.4) and the Cauchy-Schwarz inequality. For the "if"-part of the second statement, let φ ∈ V* and suppose that α_H(φ) < ∞. Then, using (3.4), we see that the map φ : v ↦ ⟨φ, v⟩_{V*×V} = φ(v) is a bounded linear functional on the normed linear space (V, ‖·‖_H), which is dense in H. We can therefore extend this functional φ to a bounded linear functional φ̄ on H using continuity. If we denote the Riesz-Fréchet isomorphism by Φ, there exists a unique h_φ ∈ H such that φ̄ = Φ(h_φ). Consequently, we have φ = 𝒥(h_φ) and so φ ∈ H, as desired.

The last statement is a consequence of (3.4), the Riesz-Fréchet Theorem, the denseness of V in H and Theorem 1.1.1.

The previous proposition allows us to define an extended norm on V*.

Construction 3.1.2. Using the lower semi-continuous map α_H from (3.5) above, we define an extended norm on V*, which we will denote by ‖·‖_H, by setting ‖φ‖_H := α_H(φ) for all φ ∈ V*. Since α_H(φ) = ‖φ‖_H for all φ ∈ H, there is no confusion between ‖·‖_H seen as an extended norm on V* and as the inner product-induced norm on the Hilbert space H. It also follows from Proposition 3.1.1 that ‖φ‖_H < ∞ if and only if φ ∈ H ⊂ V*, so H = {φ ∈ V* : ‖φ‖_H < ∞}.

In a similar vein as with H, we can also consider V as a subspace of V*, since 𝒥 ∘ ι is an isomorphism onto its own image in V*. Just as with H, there exists a lower semi-continuous function on V* that allows us to extend the V-norm on V to V*. Using the lower semi-continuous map
\[
\alpha_V : V^* \to [0,\infty] : \varphi \mapsto \sup\bigl\{ \bigl| \langle \varphi, v \rangle_{V^*\times V} \bigr| : v \in V,\ \| v \|_V \le 1 \bigr\},
\]


one can again verify that α_V(φ) = ‖φ‖_V for all φ ∈ V and that α_V(φ) = ∞ if φ ∉ V. In this scenario, we then define the extended norm ‖·‖_V on V* by setting ‖φ‖_V := α_V(φ) for all φ ∈ V*.

3.1.2 Setting and conditions on the coefficients

Here, we will lay out the setting in which we will prove the main result of this thesis, Theorem 3.2.3 below. Throughout this chapter, we consider a Gelfand triple V ⊂ H ⊂ V* as discussed in Section 3.1. We will also be working on a probability space (Ω, F, P) that is equipped with a normal filtration F = {F_t : t ∈ [0,T]}. Moreover, we let U be another separable Hilbert space and let 𝒲 be a standard cylindrical Wiener process on U (Section 2.4). Since both U and H are separable Hilbert spaces, we know from Section 1.1.2 that L_2(U,H) is a separable Hilbert space as well.

In addition, we will consider the following maps:
\[
A : [0,T] \times V \times \Omega \to V^*, \tag{3.6}
\]
\[
B : [0,T] \times V \times \Omega \to L_2(U,H). \tag{3.7}
\]
We will assume throughout that these maps satisfy the conditions in Condition 3.1.3 below. Notation-wise, when we write A(t,v), we mean the random variable ω ↦ A(t,v,ω); we use similar notation for B.

Condition 3.1.3 (Conditions on the maps A and B). We require that the maps A and B are progressively measurable.

(P) The maps A|_{[0,t]×V×Ω} and B|_{[0,t]×V×Ω} are B([0,t]) ⊗ B(V) ⊗ F_t-measurable for every t ∈ [0,T].

Moreover, we assume that the maps A and B satisfy the following conditions.

(H1) Hemicontinuity: let u, v, w ∈ V be arbitrary, and consider arbitrary ω ∈ Ω and t ∈ [0,T]. Then, we impose that the map
\[
\mathbb{R} \to \mathbb{R} : \lambda \mapsto \langle A(t, u + \lambda v, \omega), w \rangle_{V^*\times V}
\]
is continuous.

(H2) Weak monotonicity: there exists a c ∈ R such that for all u, v ∈ V, the inequality
\[
2 \langle A(t,u,\omega) - A(t,v,\omega), u - v \rangle_{V^*\times V} + \| B(t,u,\omega) - B(t,v,\omega) \|_{L_2(U,H)}^2 \le c\, \| u - v \|_H^2
\]
holds for all t ∈ [0,T] and all ω ∈ Ω.

(H3) Coercivity: there exist constants α ∈ (1,∞), c_1 ∈ R, c_2 ∈ (0,∞) and an F-adapted f ∈ L¹([0,T] × Ω; R) such that for all v ∈ V and all t ∈ [0,T], the inequality
\[
2 \langle A(t,v,\omega), v \rangle_{V^*\times V} + \| B(t,v,\omega) \|_{L_2(U,H)}^2 \le c_1 \| v \|_H^2 - c_2 \| v \|_V^{\alpha} + f(t,\omega)
\]
holds for all ω ∈ Ω.

(H4) Boundedness: let α be as in Condition (H3). Then, we assume that there exist a constant c_3 ≥ 0 and an F-adapted process g ∈ L^{α/(α−1)}([0,T] × Ω; R) such that for arbitrary v ∈ V and t ∈ [0,T], the inequality
\[
\| A(t,v,\omega) \|_{V^*} \le g(t,\omega) + c_3 \| v \|_V^{\alpha-1}
\]
holds for all ω ∈ Ω.
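A toy example may help to see that these conditions are satisfiable. In the degenerate Gelfand triple V = H = V* = R (with U = R), the time- and ω-independent coefficients A(v) = −v³ and B(v) = v satisfy (H1)-(H4) with c = c₁ = c₃ = 1, c₂ = 2, α = 4 and f = g = 0; (H1) holds because λ ↦ ⟨A(u + λv), w⟩ is a polynomial in λ. The sketch below (illustrative, not part of the thesis; all choices are assumptions for the toy case) checks (H2)-(H4) on random samples.

```python
import random

random.seed(4)

# Toy coefficients on V = H = V* = R: A(v) = -v^3 (e.g. a cubic reaction term)
# and B(v) = v, with constants chosen as above.
A = lambda v: -v ** 3
B = lambda v: v
c, c1, c2, c3, alpha = 1.0, 1.0, 2.0, 1.0, 4

for _ in range(1000):
    u = random.uniform(-10.0, 10.0)
    v = random.uniform(-10.0, 10.0)
    # (H2) weak monotonicity: -2(u^3 - v^3)(u - v) <= 0, so the bound holds with c = 1.
    assert 2 * (A(u) - A(v)) * (u - v) + (B(u) - B(v)) ** 2 <= c * (u - v) ** 2 + 1e-6
    # (H3) coercivity: -2v^4 + v^2 <= c1 v^2 - c2 |v|^alpha (equality here).
    assert 2 * A(v) * v + B(v) ** 2 <= c1 * v ** 2 - c2 * abs(v) ** alpha + 1e-6
    # (H4) boundedness: |A(v)| = |v|^3 = c3 |v|^(alpha - 1).
    assert abs(A(v)) <= c3 * abs(v) ** (alpha - 1) + 1e-6
print("toy coefficients satisfy (H2)-(H4)")
```

The small tolerances only absorb floating-point rounding; the inequalities themselves hold exactly for this choice of coefficients.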


We can combine these conditions to find useful bounds on A and B (see Corollary 3.1.4), which in turn will be applied to derive two more inequalities.

Corollary 3.1.4. Assume that the maps A and B satisfy the conditions in Condition 3.1.3. Then, it follows from Conditions (H3) and (H4) that for arbitrary v ∈ V and t ∈ [0,T], the inequalities
\[
2 \langle A(t,v,\omega), v \rangle_{V^*\times V} + \| B(t,v,\omega) \|_{L_2(U,H)}^2 \le \bigl( |c_1| + |f(t,\omega)| \bigr) \bigl( 1 + \| v \|_H^2 \bigr) \tag{3.8}
\]
and
\[
\| A(t,v,\omega) \|_{V^*} \le |g(t,\omega)| + c_3 \| v \|_V^{\alpha-1} \tag{3.9}
\]
hold for all ω ∈ Ω. Moreover, Conditions (H3) and (H4) imply that for each v ∈ V and each t ∈ [0,T], the inequality
\[
\| B(t,v,\omega) \|_{L_2(U,H)}^2 \le |c_1| \| v \|_H^2 + |f(t,\omega)| + 2 \| v \|_V |g(t,\omega)| + |2c_3 - c_2| \| v \|_V^{\alpha} \tag{3.10}
\]
holds for every ω ∈ Ω.

Furthermore, Condition (H2) implies that for all u, v ∈ V, the inequality
\[
\| B(t,u,\omega) - B(t,v,\omega) \|_{L_2(U,H)}^2 \le c \| u - v \|_H^2 + 2 \| A(t,u,\omega) - A(t,v,\omega) \|_{V^*} \| u - v \|_V
\]
holds for every t ∈ [0,T] and ω ∈ Ω. In particular, we obtain from the continuity of the inclusion map ι as defined in (3.1) that ‖v‖_H ≤ C‖v‖_V for all v ∈ V. Hence, for all u, v ∈ V, the inequality
\[
\| B(t,u,\omega) - B(t,v,\omega) \|_{L_2(U,H)}^2 \le c \cdot C^2 \cdot \| u - v \|_V^2 + 2 \| A(t,u,\omega) - A(t,v,\omega) \|_{V^*} \| u - v \|_V \tag{3.11}
\]
holds for every t ∈ [0,T] and ω ∈ Ω.

Lemma 3.1.5. Assume that the maps A and B satisfy the conditions in Condition 3.1.3. Let $\Xi$ be a V-valued process and let $\alpha > 1$ be as in Condition (H3). Then, the inequality
\[ \Bigl(\mathbb{E}\Bigl[\int_0^T \|A(s,\Xi(s))\|_{V^*}^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \le \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} + c_3\,\Bigl(\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^{\alpha}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \tag{3.12} \]
holds, where $g$ and $c_3$ are as in Condition (H4). In addition, the right-hand side is finite if $\Xi \in L^\alpha([0,T]\times\Omega;V)$.

Proof. By (3.9), the inequality $\|A(s,\Xi(s))\|_{V^*} \le |g(s)| + c_3\,\|\Xi(s)\|_V^{\alpha-1}$ holds on $\Omega$. Hence,
\[ \Bigl(\mathbb{E}\Bigl[\int_0^T \|A(s,\Xi(s))\|_{V^*}^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \le \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl(|g(s)| + c_3\,\|\Xi(s)\|_V^{\alpha-1}\bigr)^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}. \]
Then, applying Minkowski's inequality in $L^{\alpha/(\alpha-1)}([0,T]\times\Omega;\mathbb{R})$ to the real-valued maps $(s,\omega) \mapsto |g(s,\omega)|$ and $(s,\omega) \mapsto c_3\,\|\Xi(s,\omega)\|_V^{\alpha-1}$ yields
\[
\begin{aligned}
\Bigl(\mathbb{E}\Bigl[\int_0^T \bigl(|g(s)| + c_3\,\|\Xi(s)\|_V^{\alpha-1}\bigr)^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
&\le \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} + \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl(c_3\,\|\Xi(s)\|_V^{\alpha-1}\bigr)^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \\
&= \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} + c_3\,\Bigl(\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^{\alpha}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}},
\end{aligned}
\]
as desired. This right-hand side is indeed finite if $\Xi \in L^\alpha([0,T]\times\Omega;V)$.
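The only analytic ingredient in this proof is Minkowski's inequality in $L^{\alpha/(\alpha-1)}$. The sketch below (plain Python; the sampled values standing in for $|g(s)|$ and $\|\Xi(s)\|_V$ are arbitrary, made-up data) verifies the resulting bound (3.12) on a discretized toy example.

```python
import random

random.seed(1)
alpha, T, n = 3.0, 1.0, 1000
p = alpha / (alpha - 1)          # conjugate exponent appearing in (3.12)
c3 = 0.7
dt = T / n
g = [random.uniform(0, 2) for _ in range(n)]    # stands in for |g(s)|
xi = [random.uniform(0, 2) for _ in range(n)]   # stands in for ||Xi(s)||_V

# pointwise bound (3.9): ||A(s, Xi(s))||_{V*} <= |g(s)| + c3 ||Xi(s)||^(alpha-1)
a = [g[i] + c3 * xi[i] ** (alpha - 1) for i in range(n)]

lhs = (sum(v ** p for v in a) * dt) ** (1 / p)
rhs = (sum(v ** p for v in g) * dt) ** (1 / p) \
    + c3 * (sum(v ** alpha for v in xi) * dt) ** (1 / p)
assert lhs <= rhs + 1e-12        # discrete version of (3.12)
```

Note that $(\alpha-1)\cdot\frac{\alpha}{\alpha-1} = \alpha$, which is exactly why the second term on the right carries the exponent $\alpha$ inside the integral.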


Lemma 3.1.6. Assume that the maps A and B satisfy the conditions in Condition 3.1.3. Let $\Xi$ be a V-valued process. Then, the inequality
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^T \|B(s,\Xi(s))\|_{L_2(U,H)}^2\,ds\Bigr]
&\le |c_1|\,\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_H^2\,ds\Bigr] + \mathbb{E}\Bigl[\int_0^T |f(s)|\,ds\Bigr] \\
&\quad + 2\Bigl(\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^\alpha\,ds\Bigr]\Bigr)^{\frac{1}{\alpha}} \cdot \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \\
&\quad + |2c_3 - c_2|\,\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^\alpha\,ds\Bigr]
\end{aligned}
\tag{3.13}
\]
holds, where the process $f$ and the constants $c_1$ and $c_2$ are as in Condition (H3) and the process $g$ and the constant $c_3$ are as in Condition (H4). Moreover, the right-hand side is finite if $\Xi \in L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$.

Proof. When we look at inequality (3.10), we see that
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^T \|B(s,\Xi(s))\|_{L_2(U,H)}^2\,ds\Bigr]
&\le \mathbb{E}\Bigl[\int_0^T |c_1|\,\|\Xi(s)\|_H^2 + |f(s)| + 2\,\|\Xi(s)\|_V\,|g(s)| + |2c_3 - c_2|\,\|\Xi(s)\|_V^\alpha\,ds\Bigr] \\
&= |c_1|\,\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_H^2\,ds\Bigr] + \mathbb{E}\Bigl[\int_0^T |f(s)|\,ds\Bigr] + 2\,\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V\,|g(s)|\,ds\Bigr] + |2c_3 - c_2|\,\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^\alpha\,ds\Bigr],
\end{aligned}
\]
where we may split the expectation on the right-hand side as all maps involved are non-negative. Applying Hölder's inequality to the third term on the right-hand side, we obtain
\[ \mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V\,|g(s)|\,ds\Bigr] \le \Bigl(\mathbb{E}\Bigl[\int_0^T \|\Xi(s)\|_V^\alpha\,ds\Bigr]\Bigr)^{\frac{1}{\alpha}} \cdot \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\,ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}. \]
Combining these displays yields the desired inequality (3.13). The right-hand side of (3.13) is indeed finite if $\Xi \in L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$.
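The Hölder step above uses the conjugate pair $\alpha$ and $\frac{\alpha}{\alpha-1}$ with respect to the product measure $ds \otimes d\mathbb{P}$. A small sanity check of that step (plain Python; the discretized samples for $\|\Xi(s)\|_V$ and $|g(s)|$ are arbitrary stand-ins, not objects from the thesis):

```python
import random

random.seed(2)
alpha, n = 3.0, 1000
dt = 1.0 / n
q = alpha / (alpha - 1)                          # conjugate exponent of alpha

xi = [random.uniform(0, 3) for _ in range(n)]    # stands in for ||Xi(s)||_V
g = [random.uniform(0, 3) for _ in range(n)]     # stands in for |g(s)|

# discrete Hoelder inequality: sum x*y*dt <= (sum x^alpha dt)^(1/alpha) * (sum y^q dt)^(1/q)
lhs = sum(x * y for x, y in zip(xi, g)) * dt
rhs = (sum(x ** alpha for x in xi) * dt) ** (1 / alpha) * \
      (sum(y ** q for y in g) * dt) ** (1 / q)
assert lhs <= rhs + 1e-12
```

This is the same inequality applied in the proof, with the sums playing the role of the double integral $\mathbb{E}[\int_0^T \cdot\, ds]$.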

We end this section with the following observations, based on Remark 4.1.1(2) and Exercise 4.2.3(3) in [23].

Lemma 3.1.7 (Remark 4.1.1(2) in [23]). Suppose that the Hilbert space $H$ in the Gelfand triple $V \subset H \subset V^*$ is finite-dimensional, with dimension, say, $d \in \mathbb{N}$. Then, since $V \subset H$, it follows that $V$ is finite-dimensional and consequently, we have by Theorem 1.1.2 that $\dim(V) = \dim(V^*)$. Since $H$ is "squeezed" in between $V$ and $V^*$, it follows that $\dim(V) = \dim(H) = \dim(V^*) = d$.

In addition, suppose that the map $A$ satisfies Condition (H1) in Condition 3.1.3 and satisfies the following boundedness condition.

(H) For every $R > 0$, there exists a constant $C_R \ge 0$ such that for all $u, v \in \{x \in V : \|x\|_V \le R\}$, the inequality
\[ 2\langle A(t,u,\omega) - A(t,v,\omega), u - v\rangle_{V^*\times V} \le C_R\,\|u - v\|_H^2 \]
holds for all $t \in [0,T]$ and $\omega \in \Omega$.


Then, the map $V \to V^* : u \mapsto A(t,u,\omega)$ is continuous for all $t \in [0,T]$ and $\omega \in \Omega$. In particular, the map $A$ satisfies (H) with constant $C_R = c$ for all $R > 0$ if the maps $A$ and $B$ satisfy Condition (H2).

Corollary 3.1.8. Suppose that the Hilbert space $H$ in the Gelfand triple $V \subset H \subset V^*$ is finite-dimensional and suppose that the maps $A$ and $B$ satisfy Conditions (H1) and (H2) in Condition 3.1.3. Then, we can conclude from Lemma 3.1.7 and (3.11) in Corollary 3.1.4 that the map $V \to L_2(U,H) : u \mapsto B(t,u,\omega)$ is continuous for all $t \in [0,T]$ and $\omega \in \Omega$.

Lemma 3.1.9 (Solution to Exercise 4.2.3(3) in [23]). If $X$ is a V-valued and progressively measurable process, the processes $A(\cdot, X(\cdot))$ and $B(\cdot, X(\cdot))$ are progressively measurable as well.

Proof. We will only establish this result for $A$, as the result with respect to $B$ follows analogously. Let $t \in [0,T]$ be arbitrary. We will show that the map
\[ A_t : [0,t]\times\Omega \to V^* : (s,\omega) \mapsto A(s, X(s,\omega), \omega) \]
is $\mathcal{B}([0,t]) \otimes \mathcal{F}_t/\mathcal{B}(V^*)$-measurable.

Recall Lemma 1.3.5. To abbreviate notation, we will employ similar notation as in Lemma 1.3.5: let $\mathbb{X} := [0,t]\times\Omega$ and $\mathcal{A}_{\mathbb{X}} := \mathcal{B}([0,t]) \otimes \mathcal{F}_t$, and let $(Y_1, \mathcal{A}_1) := ([0,t], \mathcal{B}([0,t]))$, $(Y_2, \mathcal{A}_2) := (V, \mathcal{B}(V))$ and $(Y_3, \mathcal{A}_3) := (\Omega, \mathcal{F}_t)$. In addition, we let $\mathbb{Y}$ and $\mathcal{A}_{\mathbb{Y}}$ be as in Lemma 1.3.5, i.e., $\mathbb{Y} := [0,t]\times V\times\Omega$ and $\mathcal{A}_{\mathbb{Y}} := \mathcal{B}([0,t]) \otimes \mathcal{B}(V) \otimes \mathcal{F}_t$.

Define $F_1 : \mathbb{X} \to Y_1 : (s,\omega) \mapsto s$. This map is $\mathcal{A}_{\mathbb{X}}/\mathcal{A}_1$-measurable as it is a projection onto the first coordinate. In similar fashion, we see that the map $F_3 : \mathbb{X} \to Y_3 : (s,\omega) \mapsto \omega$ is $\mathcal{A}_{\mathbb{X}}/\mathcal{A}_3$-measurable. The map $F_2 : \mathbb{X} \to Y_2 : (s,\omega) \mapsto X(s,\omega)$ is $\mathcal{A}_{\mathbb{X}}/\mathcal{A}_2$-measurable by progressive measurability. Together, this means that the map $F := (F_1, F_2, F_3) : \mathbb{X} \to \mathbb{Y}$ is $\mathcal{A}_{\mathbb{X}}/\mathcal{A}_{\mathbb{Y}}$-measurable by Lemma 1.3.5.

By Condition (P), we know that the map $A|_{[0,t]\times V\times\Omega} : [0,t]\times V\times\Omega \to V^*$ is measurable with respect to $\mathcal{B}([0,t]) \otimes \mathcal{B}(V) \otimes \mathcal{F}_t = \mathcal{A}_{\mathbb{Y}}$. Also, observe that $\mathbb{Y} = [0,t]\times V\times\Omega$. Then, as $A_t = A|_{[0,t]\times V\times\Omega} \circ F$, the assertion follows.

3.2 Formulation of the main result (Theorem 3.2.3)

In the previous section, we sketched the setting in which we work; here, we will formulate our main result. We will consider the following stochastic differential equation (which we will shorten to SDE in what follows)
\[ dX(t) = A(t, X(t))\,dt + B(t, X(t))\,dW(t), \tag{3.14} \]
where the maps $A$ and $B$ in (3.6) and (3.7) satisfy the conditions in Condition 3.1.3 and $W$ is a standard cylindrical Wiener process. In addition, let $X_0 \in L^2(\Omega, \mathcal{F}_0, \mathbb{P}; H)$ be arbitrary. This random variable $X_0$ is called the initial condition. We will show that under the conditions imposed on $A$ and $B$ by Condition 3.1.3 and the setting sketched in Section 3.1.2, we can "solve" this SDE, in the sense defined next.

Definition 3.2.1 (Solution). A process $X = \{X(t) : t \in [0,T]\}$ is called a solution of (3.14) with initial condition $X_0$ if $X(0)$ coincides $\mathbb{P}$-almost surely with $X_0$ and if $X$ is a continuous $H$-valued, $\mathbb{F}$-adapted process that coincides $\lambda\otimes\mathbb{P}$-almost everywhere with a progressively measurable V-valued process $\bar X \in L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$, with $\alpha$ as in Condition (H3), such that for all $t \in [0,T]$,
\[ X(t) = X(0) + \int_0^t A\bigl(s, \bar X(s)\bigr)\,ds + \int_0^t B\bigl(s, \bar X(s)\bigr)\,dW(s) \quad \text{$\mathbb{P}$-almost surely.} \tag{3.15} \]


When we look at Definition 3.2.1, a few things come to mind. First of all, one might wonder whether such a process $X$ even exists. Moreover, when we look at the definition of the map $A$, we see that the integral with respect to $ds$ is a priori $V^*$-valued. Luckily, all these seemingly problematic observations will be resolved when we show the main result of this thesis, Theorem 3.2.3 below.

Another problem arises: sometimes, we want to observe whether the process $X$ takes values in (a Borel-measurable subset of) $V$ or $H$. However, since $X$ is a priori a $V^*$-valued process, this might not even be possible, as such sets need not belong to $\mathcal{B}(V^*)$. The next theorem supplies us with a desirable result regarding this issue.

Theorem 3.2.2 (Lusin-Souslin). Let $X$ and $Y$ be Polish spaces and let $f : X \to Y$ be continuous. If $A \in \mathcal{B}(X)$ and if the function $f|_A : A \to Y$ is injective, then $f(A) \in \mathcal{B}(Y)$.

Proof. See Theorem 15.1 in [22].

This result allows us to conclude that $\mathcal{B}(V) \subset \mathcal{B}(H)$, since the embedding $\iota$ defined in (3.1) is injective and continuous. In similar vein, since $J$ defined in (3.3) is also continuous and injective, we have by the identification of $H$ with the subspace $J(H) \subset V^*$ that $\mathcal{B}(H) \subset \mathcal{B}(V^*)$. This resolves the measurability issue; compare also Lemma 1.17 in [12].

Now, on to the main result of this thesis.

Theorem 3.2.3. Let the maps $A$ and $B$ defined in (3.6) and (3.7) satisfy the conditions in Condition 3.1.3 and let $X_0 \in L^2(\Omega, \mathcal{F}_0, \mathbb{P}; H)$ be an initial condition. Then, there exists a solution $X$ to the SDE (3.14) in the sense of Definition 3.2.1, and this solution is unique up to $\mathbb{P}$-indistinguishability. Moreover, the following boundedness result holds:
\[ \mathbb{E}\Bigl[\sup_{t \le T} \|X(t)\|_H^2\Bigr] < \infty. \tag{3.16} \]
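Theorem 3.2.3 guarantees existence and uniqueness but is not constructive. To make the object in Definition 3.2.1 concrete, here is a minimal explicit Euler-Maruyama sketch for an SDE of the form (3.14) in the scalar case $V = H = U = \mathbb{R}$ (plain Python; the drift and diffusion passed in below are illustrative toy choices in the spirit of Conditions (H2)-(H4), not the abstract operators $A$ and $B$ of the theorem, and the function names are ours).

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, T=1.0, n=1000, seed=42):
    """One sample path of the explicit scheme
    X_{i+1} = X_i + drift(t_i, X_i) * dt + diffusion(t_i, X_i) * dW_i,
    where dW_i ~ N(0, dt) are independent Brownian increments."""
    rng = random.Random(seed)
    dt = T / n
    x, path = x0, [x0]
    for i in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(i * dt, x) * dt + diffusion(i * dt, x) * dW
        path.append(x)
    return path

# toy coefficients: weakly monotone (decreasing) drift, constant diffusion
path = euler_maruyama(lambda t, x: -x * abs(x) ** 2, lambda t, x: 0.5, x0=1.0)
assert len(path) == 1001 and path[0] == 1.0
```

This is a finite-dimensional illustration only; it is not part of the proof strategy of Theorem 3.2.3.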

3.3 Auxiliary results

In this section, the main topic will concern two statements: Lemma 3.3.1 and Theorem 3.3.17. Both results are important ingredients in the proof of the main result, Theorem 3.2.3. To be precise: we will first show Lemma 3.3.1, which we will then apply in the proof of Theorem 3.3.17. The latter result is more directly related to the proof of Theorem 3.2.3.

We will lay out how we structure the next two sections by giving the structure for Lemma 3.3.1; a similar approach will be taken in proving Theorem 3.3.17. The proof of Lemma 3.3.1 in the next section is split up into smaller pieces, in such a way that one can deduce the proof of Lemma 3.3.1 by combining the simpler results tackled first. In our opinion, this gives the reader with less time at hand the possibility to reach the desired conclusion without tracing through the full proof of the main statement at once.

We would like to point out that the intermediate results will only be used in the proof of Lemma 3.3.1, so there is no need to, as it is put in [1], "set aside some [...] mental resources to retain some facts which are of no further use". However, we will sometimes derive useful (in)equalities and tricks in the proof of an intermediate result: by our choice of breaking up the proof of the main statement into relatively small pieces, the reader can quickly see the mentioned (in)equality or trick, thanks to the brevity of the proof, when he or she is referred back to it later on.

3.3.1 Approximation of a stochastic process (Lemma 3.3.1)

The statement discussed here, Lemma 3.3.1, is our treatment of Lemma 4.2.6 in [23]. When comparing Lemma 3.3.1 below to Lemma 4.2.6 in [23], we added Remark 4.2.7(i) in [23] to the


statement of Lemma 3.3.1 instead of discussing it afterwards, as tracing back through the proof can sometimes be an unpleasant task for the reader. The proof presented here is largely based on that of Lemma 4.2.6 in [23]. Therefore, when we use parts of the proof of Lemma 4.2.6 in [23] in the proofs below, this will be mentioned accordingly.

We now introduce Lemma 3.3.1; after that, as laid out earlier, we will show several statements that, when combined, yield the assertions of Lemma 3.3.1. Before we start, recall the constant $\alpha > 1$ from Condition (H3).

Lemma 3.3.1. Let $X : [0,T]\times\Omega \to V^*$ be a $\mathcal{B}([0,T]) \otimes \mathcal{F}/\mathcal{B}(V^*)$-measurable map that coincides $\lambda\otimes\mathbb{P}$-almost everywhere with a V-valued map $\bar X$ that belongs to $L^\alpha([0,T]\times\Omega;V)$, and let $N \subset [0,T]$ be a Lebesgue-null set. Then, there exists an increasing sequence $\{I_l : l \ge 1\}$ of partitions $I_l := \{0 = t_0^l < t_1^l < \cdots < t_{k_l}^l = T\}$ of the interval $[0,T]$ whose mesh tends to zero, such that $X(t)$ coincides $\mathbb{P}$-almost surely with $\bar X(t)$ and $t \notin N$ for every $t \in I' := \bigl[\bigcup_{l\ge 1} I_l\bigr] \setminus \{0, T\}$.

For each $l \in \mathbb{N}$ and the corresponding partition $I_l$, define the processes
\[ X^l := \sum_{i=2}^{k_l} 1_{[t_{i-1}^l, t_i^l)}\, X\bigl(t_{i-1}^l\bigr) \quad\text{and}\quad X_l := \sum_{i=1}^{k_l-1} 1_{[t_{i-1}^l, t_i^l)}\, X\bigl(t_i^l\bigr). \tag{3.17} \]
These processes are $\lambda\otimes\mathbb{P}$-versions of the V-valued processes
\[ \bar X^l := \sum_{i=2}^{k_l} 1_{[t_{i-1}^l, t_i^l)}\, \bar X\bigl(t_{i-1}^l\bigr) \quad\text{and}\quad \bar X_l := \sum_{i=1}^{k_l-1} 1_{[t_{i-1}^l, t_i^l)}\, \bar X\bigl(t_i^l\bigr), \]
respectively. The latter two processes belong to $L^\alpha([0,T]\times\Omega;V)$ for all $l \in \mathbb{N}$. Moreover, we have the following convergence result:
\[ \lim_{l\to\infty} \mathbb{E}\Bigl[\int_0^T \Bigl(\bigl\|\bar X(t) - \bar X^l(t)\bigr\|_V^\alpha + \bigl\|\bar X(t) - \bar X_l(t)\bigr\|_V^\alpha\Bigr)\,dt\Bigr] = 0. \tag{3.18} \]
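The left-endpoint and right-endpoint approximants in (3.17) are easy to visualize on a single deterministic path. The sketch below (plain Python; the path $x(t) = t^2$ and the exponent are arbitrary illustrative choices, and the boundary conventions of (3.17) are simplified) shows the $L^\alpha$-type error of the left-endpoint approximant shrinking along dyadic partitions, which is the qualitative content of (3.18).

```python
alpha = 2.0  # stands in for the exponent from Condition (H3)

def left_approx_error(x, k, n=20_000):
    """Riemann-sum approximation of the integral over [0, 1] of
    |x(t) - x(t_left)|^alpha, where t_left is the left endpoint of the
    dyadic cell of length 2^-k containing t (a simplified version of X^l)."""
    h = 1.0 / n
    err = 0.0
    for j in range(n):
        t = j * h
        left = int(t * 2 ** k) / 2 ** k   # left endpoint of t's dyadic cell
        err += abs(x(t) - x(left)) ** alpha * h
    return err

errors = [left_approx_error(lambda t: t ** 2, k) for k in (2, 4, 6)]
assert errors[0] > errors[1] > errors[2] > 0.0   # error decreases with the mesh
```

For the stochastic statement one additionally integrates over $\Omega$, but the mechanism, piecewise-constant approximation along ever finer partitions, is the same.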

In the next setting, we will elaborate on the assumptions in Lemma 3.3.1 and use this to derive some fruitful conclusions and observations before the start of the proof of the lemma. In addition, we will introduce notation that we will employ in this section.

Setting 3.3.2. Let $X : [0,T]\times\Omega \to V^*$ be a $\mathcal{B}([0,T]) \otimes \mathcal{F}/\mathcal{B}(V^*)$-measurable map that coincides $\lambda\otimes\mathbb{P}$-almost everywhere with a V-valued map $\bar X$ that belongs to $L^\alpha([0,T]\times\Omega;V)$. We may assume that $T = 1$ by rescaling, and we will henceforth do so. By Proposition 1.2.13, there exists a set $\Omega'$ of full probability such that $\bar X(\cdot,\omega) \in L^\alpha([0,1];V)$ for all $\omega \in \Omega'$. Similarly, there also exists a set $I \subset [0,1]$ of full Lebesgue measure such that $\bar X(t) \in L^\alpha(\Omega;V)$ for all $t \in I$.

We extend $\bar X$ to $\mathbb{R}\times\Omega$ by setting it "zero" outside of $[0,1]\times\Omega$, i.e., $\bar X(t,\omega) := 0_V$ for all $(t,\omega) \in [0,1]^c\times\Omega$. Then, for all $\omega \in \Omega'$, the map $\bar X(\cdot,\omega) : \mathbb{R} \to V$ (is measurable and) belongs to $L^\alpha(\mathbb{R};V)$.

Given fixed $s \in [0,1)$, construct for every $k \in \mathbb{N}$ the partition $I_k(s)$ by defining $t_0^k(s) := 0$, $t_{2^k+1}^k(s) := 1$ and letting
\[ t_i^k(s) := \Bigl(s - \frac{\lfloor 2^k s\rfloor}{2^k}\Bigr) + \frac{i-1}{2^k} \tag{3.19} \]
for all $i \in \{1, \ldots, 2^k\}$. For fixed $k \in \mathbb{N}$, we see that the map $t_1^k : [0,1) \to [0,1]$ is the periodic extension of the identity map on $[0, 2^{-k})$ to the interval $[0,1)$. For every $i \in \{2, \ldots, 2^k\}$, the map $t_i^k$ is the map $t_1^k$ shifted up by $\frac{i-1}{2^k}$; see Figure 3.1 for a plot of $t_1^4$ and $t_3^4$.

Lastly, analogous to $I'$, we define $I'(s) := \bigl[\bigcup_{k\in\mathbb{N}} I_k(s)\bigr] \setminus \{0, T\}$.
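The partition (3.19) can be checked mechanically: every point of $I_k(s)$ other than the endpoints is of the form $s + m\,2^{-k}$ with $m \in \mathbb{Z}$, the point $s$ itself is always a partition point, and the mesh is at most $2^{-k}$. A short sketch (plain Python; the particular non-dyadic value $s = 0.37$ and the level $k = 4$ are arbitrary):

```python
import math

def partition(k, s):
    """The partition I_k(s) = {t_0, t_1, ..., t_{2^k}, t_{2^k + 1}} of [0, 1]
    from (3.19): t_0 = 0, t_{2^k+1} = 1, and
    t_i(s) = (s - floor(2^k s)/2^k) + (i - 1)/2^k for i = 1, ..., 2^k."""
    frac = s - math.floor(2 ** k * s) / 2 ** k       # lies in [0, 2^-k)
    return [0.0] + [frac + (i - 1) / 2 ** k for i in range(1, 2 ** k + 1)] + [1.0]

pts = partition(4, 0.37)
assert pts[0] == 0.0 and pts[-1] == 1.0
assert all(b > a for a, b in zip(pts, pts[1:]))      # strictly increasing (s non-dyadic)
assert max(b - a for a, b in zip(pts, pts[1:])) <= 2 ** -4 + 1e-12   # mesh <= 2^-k
assert any(abs(p - 0.37) < 1e-9 for p in pts)        # s itself is a partition point
```

For dyadic $s$ the first two points coincide ($t_1^k(s) = 0$), which is why the lemma works with a full-measure set of "good" shifts $s$.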


Figure 3.1: The functions $t_1^4$ and $t_3^4$ defined by (3.19).

Lemma 3.3.3. Let $S \subset [0,1]$ be a Lebesgue-null set. Define for each $k \in \mathbb{N}$ the set
\[ S_k := \bigl\{ s \in [0,1) : \{t_i^k(s) : i = 1, \ldots, 2^k\} \cap S \neq \emptyset \bigr\}. \]
Then, this set is a Lebesgue-null set for every $k \in \mathbb{N}$.

Proof. Fix arbitrary $k \in \mathbb{N}$. We have
\[ S_k \subset \bigcup_{i=1}^{2^k} \bigcup_{j=1}^{2^k} \bigl\{ s \in \bigl[(j-1)2^{-k}, j2^{-k}\bigr) : t_i^k(s) \in S \bigr\}. \]
Then, since $t_i^k$ is an injective translation on the interval $\bigl[(j-1)2^{-k}, j2^{-k}\bigr)$, each of these sets has Lebesgue measure at most $\lambda(S)$, and we find that $\lambda(S_k) \le 2^{2k}\lambda(S) = 0$. The assertion follows.

Corollary 3.3.4. If $S \subset [0,1]$ is a Lebesgue-null set, there exists a set $F \subset [0,1]$ of full Lebesgue measure such that $I'(s) \subset S^c$ for all $s \in F$.

Lemma 3.3.5. Under the assumptions in Setting 3.3.2, there exists a set $F_1 \subset [0,1]$ of full Lebesgue measure such that for all $s \in F_1$ and $k \in \mathbb{N}$, the V-valued processes $\bar X^{k,s}$ and $\bar X_{k,s}$ (defined as in (3.17), but with respect to the partition $I_k(s)$) are elements of $L^\alpha([0,1]\times\Omega;V)$.

Proof. Recall the set $I \subset [0,1]$ from Setting 3.3.2. It follows from Corollary 3.3.4 that there exists a set $F_1$ of full Lebesgue measure such that $I'(s) \subset I$ for all $s \in F_1$. In particular, we have for all $s \in F_1$ that $I_k(s) \subset I'(s) \subset I$ for all $k \in \mathbb{N}$. Using that $\bigl|\sum_{j=1}^N a_j\bigr|^p \le N^p \sum_{j=1}^N |a_j|^p$ for all $p \ge 0$, we have for every $s \in F_1$ and $k \in \mathbb{N}$,
\[
\begin{aligned}
\int_0^1 \mathbb{E}\bigl[\|\bar X^{k,s}(t)\|_V^\alpha\bigr]\,dt
&= \int_0^1 \mathbb{E}\Bigl[\Bigl\| \sum_{i=2}^{2^k+1} 1_{[t_{i-1}^k(s),\, t_i^k(s))}(t)\, \bar X\bigl(t_{i-1}^k(s)\bigr) \Bigr\|_V^\alpha\Bigr]\,dt \\
&\le \bigl(2^k\bigr)^\alpha \int_0^1 \mathbb{E}\Bigl[\sum_{i=2}^{2^k+1} 1_{[t_{i-1}^k(s),\, t_i^k(s))}(t)\,\bigl\|\bar X\bigl(t_{i-1}^k(s)\bigr)\bigr\|_V^\alpha\Bigr]\,dt \\
&= \bigl(2^k\bigr)^\alpha \sum_{i=2}^{2^k+1} \bigl(t_i^k(s) - t_{i-1}^k(s)\bigr)\,\mathbb{E}\bigl[\bigl\|\bar X\bigl(t_{i-1}^k(s)\bigr)\bigr\|_V^\alpha\bigr] < \infty,
\end{aligned}
\]
as $I_k(s) \subset I$. This shows that $\bar X^{k,s} \in L^\alpha([0,1]\times\Omega;V)$ for all $s \in F_1$ and $k \in \mathbb{N}$. A similar argument establishes the same result for $\bar X_{k,s}$.

Lemma 3.3.6. Suppose that the assumptions in Setting 3.3.2 hold. Let $\{\delta_k : k \ge 1\} \subset [-1,1]$ be a sequence that converges to zero and let $t \in \mathbb{R}$. Then, we have
\[ \lim_{k\to\infty} \int_{\mathbb{R}} \bigl\|\bar X(s + t + \delta_k, \omega) - \bar X(s + t, \omega)\bigr\|_V^\alpha\,ds = 0 \]
for every $\omega \in \Omega'$.

Proof. This can be shown using the standard machine or, if one wishes a reference, one can consult pages 92 and 93 in [23].
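The fact behind Lemma 3.3.6 is continuity of translation in $L^\alpha$. It can be seen exactly in the scalar case for an indicator function: $f = 1_{[0,1/2)}$ and its shift $f(\cdot + \delta)$ differ on a set of measure $2\delta$, so $\int_{\mathbb{R}} |f(s+\delta) - f(s)|^\alpha\,ds = 2\delta \to 0$. A numerical sketch (plain Python; Riemann-sum approximation and parameter choices are ours):

```python
alpha = 3.0

def shift_error(delta, n=50_000):
    # Riemann sum of |f(s + delta) - f(s)|^alpha over [-0.5, 1.5];
    # both terms vanish outside that window for |delta| <= 0.5
    f = lambda s: 1.0 if 0.0 <= s < 0.5 else 0.0
    h = 2.0 / n
    return sum(abs(f(-0.5 + j * h + delta) - f(-0.5 + j * h)) ** alpha
               for j in range(n)) * h

errs = [shift_error(d) for d in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2] >= 0.0
assert abs(errs[0] - 0.2) < 1e-3      # exact value is 2 * 0.1
```

The general case follows by approximating an $L^\alpha$ function by simple functions, which is what the "standard machine" reference amounts to.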

Now, we introduce notation that is used in the proofs of Lemma 3.3.8 and Lemma 3.3.9.

Notation 3.3.7. Define $\gamma_k(t) := 2^{-k}\lfloor 2^k t\rfloor$ and $\bar\gamma_k(t) := \gamma_k(t) + 2^{-k}$ for $k \in \mathbb{N}$ and $t \in \mathbb{R}$. Note that $\gamma_k(t)$ is the largest number of the form $\frac{m}{2^k}$ (where $m \in \mathbb{Z}$) not exceeding $t$.
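The maps $\gamma_k$ and $\bar\gamma_k$, and the dyadic identity $\gamma_k(u-s) + s = t_{i-1}^k(s)$ that is used in the proof of Lemma 3.3.9 for $u \in [t_{i-1}^k(s), t_i^k(s))$, can be verified mechanically. A sketch (plain Python; the specific values of $k$, $s$ and $u$ are arbitrary):

```python
import math

def gamma(k, t):
    # largest m / 2^k not exceeding t
    return math.floor(2 ** k * t) / 2 ** k

def gamma_bar(k, t):
    return gamma(k, t) + 2 ** -k

k, s = 3, 0.37
assert gamma(k, 0.9) <= 0.9 < gamma_bar(k, 0.9)

# identity from the proof of Lemma 3.3.9: for u in [t_{i-1}(s), t_i(s)),
# gamma_k(u - s) + s recovers the left partition point t_{i-1}(s) of I_k(s)
frac = s - math.floor(2 ** k * s) / 2 ** k
t2, t3 = frac + 1 / 2 ** k, frac + 2 / 2 ** k    # t_2(s) and t_3(s) from (3.19)
u = (t2 + t3) / 2                                # a point inside [t_2(s), t_3(s))
assert abs(gamma(k, u - s) + s - t2) < 1e-12
```

The companion identity $\bar\gamma_k(u-s) + s = t_i^k(s)$ follows by adding $2^{-k}$.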

Lemma 3.3.8. Let the assumptions in Setting 3.3.2 be in force. Then, it holds that
\[ \lim_{k\to\infty} \mathbb{E}\Bigl[\int_0^1 \int_0^1 \bigl\|\bar X(\gamma_k(u-s) + s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] = 0 \tag{3.20} \]
and
\[ \lim_{k\to\infty} \mathbb{E}\Bigl[\int_0^1 \int_0^1 \bigl\|\bar X(\bar\gamma_k(u-s) + s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] = 0. \tag{3.21} \]

Proof. Compare (4.33) and page 93 in [23].
For every $t \in \mathbb{R}$, the sequence $\{\gamma_k(t) - t : k \in \mathbb{N}\}$ converges to zero as $k \to \infty$. Hence,
\[ \lim_{k\to\infty} \int_{\mathbb{R}} \bigl\|\bar X(\gamma_k(t) + s, \omega) - \bar X(t + s, \omega)\bigr\|_V^\alpha\,ds = 0 \]
for every $\omega \in \Omega'$ and every $t \in \mathbb{R}$ by Lemma 3.3.6. As an auxiliary result, we find, since $\int_0^1 \|\bar X(\gamma_k(t)+s,\omega) - \bar X(t+s,\omega)\|_V^\alpha\,ds \le \int_{\mathbb{R}} \|\bar X(\gamma_k(t)+s,\omega) - \bar X(t+s,\omega)\|_V^\alpha\,ds$ for each $k \in \mathbb{N}$, that
\[ \lim_{k\to\infty} \int_0^1 \bigl\|\bar X(\gamma_k(t) + s, \omega) - \bar X(t + s, \omega)\bigr\|_V^\alpha\,ds = 0 \tag{3.22} \]
for every $\omega \in \Omega'$ and every $t \in \mathbb{R}$. As $\Omega'$ is a set of full probability, this convergence result holds $\mathbb{P}$-almost surely on $\Omega$, for every $t \in \mathbb{R}$.

We have, as $|a - b|^p \le (|a| + |b|)^p \le 2^{p-1}(|a|^p + |b|^p)$ when $p > 1$, for every $\omega \in \Omega'$,
\[ \int_0^1 \bigl\|\bar X(\gamma_k(t)+s,\omega) - \bar X(t+s,\omega)\bigr\|_V^\alpha\,ds \le 2^{\alpha-1}\Bigl(\int_0^1 \|\bar X(\gamma_k(t)+s,\omega)\|_V^\alpha\,ds + \int_0^1 \|\bar X(t+s,\omega)\|_V^\alpha\,ds\Bigr). \]
Note that if $t \notin [-2,2]$, then $t + s \notin [0,1]$ for all $s \in [0,1]$. Because, per construction, $\bar X(r,\omega) = 0_V$ for all $r \notin [0,1]$, it follows that
\[ \int_0^1 \|\bar X(t+s,\omega)\|_V^\alpha\,ds = 0 \]
for all $t \notin [-2,2]$. Also, if $t \notin [-2,2]$, then $\gamma_k(t) \notin [-2,2)$ for all $k \in \mathbb{N}$, and so $\gamma_k(t) + s \notin [0,1]$ for all $s \in [0,1]$; the same result as above follows if we replace $t$ by $\gamma_k(t)$.

On the other hand, if $t \in [-2,2]$, we have
\[ \int_0^1 \|\bar X(t+s,\omega)\|_V^\alpha\,ds \le \int_{\mathbb{R}} \|\bar X(t+s,\omega)\|_V^\alpha\,ds = \int_{\mathbb{R}} \|\bar X(u,\omega)\|_V^\alpha\,du = \int_0^1 \|\bar X(u,\omega)\|_V^\alpha\,du, \]
and we also have that $\int_0^1 \|\bar X(\gamma_k(t)+s,\omega)\|_V^\alpha\,ds \le \int_0^1 \|\bar X(u,\omega)\|_V^\alpha\,du$ by an analogous translation argument. Hence, for all $\omega \in \Omega'$, $t \in \mathbb{R}$ and $k \in \mathbb{N}$, we have
\[ \int_0^1 \bigl\|\bar X(\gamma_k(t)+s,\omega) - \bar X(t+s,\omega)\bigr\|_V^\alpha\,ds \le 2^\alpha \cdot 1_{[-2,2]}(t) \int_0^1 \|\bar X(u,\omega)\|_V^\alpha\,du. \]
Moreover, as $\bar X \in L^\alpha([0,T]\times\Omega;V)$, we have
\[ \int_{\mathbb{R}} 2^\alpha \cdot 1_{[-2,2]}(t) \cdot \mathbb{E}\Bigl[\int_0^1 \|\bar X(u)\|_V^\alpha\,du\Bigr]\,dt = 2^{\alpha+2}\,\mathbb{E}\Bigl[\int_0^1 \|\bar X(u)\|_V^\alpha\,du\Bigr] < \infty, \]
so the function $(t,\omega) \mapsto 2^\alpha \cdot 1_{[-2,2]}(t) \int_0^1 \|\bar X(u,\omega)\|_V^\alpha\,du$ is integrable on the product measure space $(\mathbb{R}\times\Omega, \mathcal{B}(\mathbb{R})\otimes\mathcal{F}, \lambda\otimes\mathbb{P})$.

This means that by Lebesgue's Dominated Convergence Theorem and (3.22), we have
\[ \lim_{k\to\infty} \mathbb{E}\Bigl[\int_{\mathbb{R}} \int_0^1 \bigl\|\bar X(\gamma_k(t)+s) - \bar X(t+s)\bigr\|_V^\alpha\,ds\,dt\Bigr] = 0. \tag{3.23} \]
By using Tonelli's theorem, we obtain for each $k \in \mathbb{N}$,
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_{\mathbb{R}}\int_0^1 \bigl\|\bar X(\gamma_k(t)+s) - \bar X(t+s)\bigr\|_V^\alpha\,ds\,dt\Bigr]
&= \mathbb{E}\Bigl[\int_0^1\int_{\mathbb{R}} \bigl\|\bar X(\gamma_k(t)+s) - \bar X(t+s)\bigr\|_V^\alpha\,dt\,ds\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^1\int_{\mathbb{R}} \bigl\|\bar X(\gamma_k(u-s)+s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] \\
&\ge \mathbb{E}\Bigl[\int_0^1\int_0^1 \bigl\|\bar X(\gamma_k(u-s)+s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr],
\end{aligned}
\]
where we applied the change of variables $t = u - s$ to the inner integral to obtain the second identity. From this inequality and (3.23), convergence result (3.20) follows, as desired.

Regarding convergence result (3.21), note that for every $t \in \mathbb{R}$, it holds that $\bar\gamma_k(t) - t \to 0$. Then, using a similar proof as above, we can validate convergence result (3.21).

Lemma 3.3.9. Suppose that the assumptions in Setting 3.3.2 are in force. Then, we have
\[ \lim_{k\to\infty} \int_0^1 \mathbb{E}\Bigl[\int_0^1 \bigl\|\bar X^{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\Bigr]\,ds = 0 \tag{3.24} \]
and
\[ \lim_{k\to\infty} \int_0^1 \mathbb{E}\Bigl[\int_0^1 \bigl\|\bar X_{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\Bigr]\,ds = 0. \tag{3.25} \]

Proof. Compare page 94 in [23].
Let $k \in \mathbb{N}$ and $s \in [0,1)$ be arbitrary. Per construction, we know that $I_k(s)$ is a partition of the interval $[0,1]$. Then, if $u \in [t_{i-1}^k(s), t_i^k(s))$ for some $2 \le i \le 2^k + 1$, it follows that
\[ u - s \in \Bigl[\frac{i - \lfloor 2^k s\rfloor - 2}{2^k}, \frac{i - \lfloor 2^k s\rfloor - 1}{2^k}\Bigr), \]
and so, per definition, the number $\gamma_k(u-s)$ is equal to the left end point of this interval. If we then add $s$, we see that $\gamma_k(u-s) + s = t_{i-1}^k(s)$. Therefore, we have for every $k \in \mathbb{N}$,
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^1\int_0^1 \bigl\|\bar X(\gamma_k(u-s)+s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr]
&= \mathbb{E}\Bigl[\int_0^1 \sum_{i=1}^{2^k+1} \int_{t_{i-1}^k(s)}^{t_i^k(s)} \bigl\|\bar X(\gamma_k(u-s)+s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] \\
&\ge \mathbb{E}\Bigl[\int_0^1 \sum_{i=2}^{2^k+1} \int_{t_{i-1}^k(s)}^{t_i^k(s)} \bigl\|\bar X\bigl(t_{i-1}^k(s)\bigr) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr].
\end{aligned}
\]
Note that for each $2 \le i \le 2^k+1$, we have $\bar X^{k,s}(u) = \bar X\bigl(t_{i-1}^k(s)\bigr)$ for all $u \in [t_{i-1}^k(s), t_i^k(s))$. Plugging this in yields, for every $k \in \mathbb{N}$,
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^1\int_0^1 \bigl\|\bar X(\gamma_k(u-s)+s) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr]
&\ge \mathbb{E}\Bigl[\int_0^1 \sum_{i=2}^{2^k+1} \int_{t_{i-1}^k(s)}^{t_i^k(s)} \bigl\|\bar X^{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^1 \int_{t_1^k(s)}^1 \bigl\|\bar X^{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] \ge 0.
\end{aligned}
\]
Then, convergence result (3.20) in Lemma 3.3.8 implies that
\[ \lim_{k\to\infty} \mathbb{E}\Bigl[\int_0^1 \int_{t_1^k(s)}^1 \bigl\|\bar X^{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] = 0. \tag{3.26} \]
Moreover, note that the process $\bar X^{k,s}$ vanishes on $[0, t_1^k(s))$ for every $k \in \mathbb{N}$ and $s \in [0,1)$, so
\[ \lim_{k\to\infty} \mathbb{E}\Bigl[\int_0^1 \int_0^{t_1^k(s)} \bigl\|\bar X^{k,s}(u) - \bar X(u)\bigr\|_V^\alpha\,du\,ds\Bigr] = \lim_{k\to\infty} \mathbb{E}\Bigl[\int_0^1 \int_0^{t_1^k(s)} \|\bar X(u)\|_V^\alpha\,du\,ds\Bigr] = \lim_{k\to\infty} \int_0^1 \mathbb{E}\Bigl[\int_0^{t_1^k(s)} \|\bar X(u)\|_V^\alpha\,du\Bigr]\,ds, \]
where the last equality holds by Tonelli's theorem. We have, for every $k \in \mathbb{N}$, the inequality
\[ \int_0^{t_1^k(s)} \|\bar X(u)\|_V^\alpha\,du \le \int_0^1 \|\bar X(u)\|_V^\alpha\,du, \]
and it holds that
\[ \int_0^1 \mathbb{E}\Bigl[\int_0^1 \|\bar X(u)\|_V^\alpha\,du\Bigr]\,ds = \mathbb{E}\Bigl[\int_0^1 \|\bar X(u)\|_V^\alpha\,du\Bigr] < \infty, \]
as $\bar X \in L^\alpha([0,T]\times\Omega;V)$. Hence, by Lebesgue's Dominated Convergence Theorem, we obtain
\[ \lim_{k\to\infty} \int_0^1 \mathbb{E}\Bigl[\int_0^{t_1^k(s)} \|\bar X(u)\|_V^\alpha\,du\Bigr]\,ds = \int_0^1 \mathbb{E}\Bigl[\lim_{k\to\infty} \int_0^{t_1^k(s)} \|\bar X(u)\|_V^\alpha\,du\Bigr]\,ds. \tag{3.27} \]
When we observe that
\[ \lim_{k\to\infty} \int_0^{t_1^k(s)} \|\bar X(u,\omega)\|_V^\alpha\,du = 0 \]
on $[0,1)\times\Omega'$, a set of full product measure (recall that $t_1^k(s) \in [0, 2^{-k})$), the proof of convergence result (3.24) is complete when we combine (3.26) and (3.27) and tacitly apply Tonelli's theorem.

Convergence result (3.25) holds as well. If we let $k \in \mathbb{N}$ and $s \in [0,1)$ be arbitrary and assume that $u \in [t_{i-1}^k(s), t_i^k(s))$ for some $2 \le i \le 2^k+1$, it follows from the first part of the proof that $\gamma_k(u-s) + s = t_{i-1}^k(s)$, and so
\[ \bar\gamma_k(u-s) + s = t_i^k(s). \]
Then, we can give a similar proof as the one above and invoke (3.21) instead of (3.20) where necessary to obtain the result.

The next result is a consequence of Lemma 3.3.9.

Corollary 3.3.10. Suppose that the assumptions in Setting 3.3.2 are in force. Then, there exists a set $F_2 \subset [0,1]$ of full Lebesgue measure and a subsequence $(k_\ell)_{\ell\ge 1}$ such that for every $s \in F_2$, it holds that
\[ \lim_{\ell\to\infty} \mathbb{E}\Bigl[\int_0^1 \Bigl(\bigl\|\bar X^{k_\ell,s}(u) - \bar X(u)\bigr\|_V^\alpha + \bigl\|\bar X_{k_\ell,s}(u) - \bar X(u)\bigr\|_V^\alpha\Bigr)\,du\Bigr] = 0. \]
In particular, as $\bar X$ is V-valued and the processes $\bar X^{k_\ell,s}$ and $\bar X_{k_\ell,s}$ are V-valued for all $\ell \in \mathbb{N}$ and $s \in F_2$, we have that
\[ \bar X^{k_\ell,s} \to \bar X \quad\text{and}\quad \bar X_{k_\ell,s} \to \bar X \]
in $L^\alpha([0,T]\times\Omega;V)$ as $\ell \to \infty$, for all $s \in F_2$.

If we combine Corollary 3.3.10 with the assumption that $X$ and $\bar X$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere on $[0,1]\times\Omega$, we obtain the following result.

Corollary 3.3.11. Suppose that the assumptions in Setting 3.3.2 are in force. Let $F_2 \subset [0,1]$ be the set (of full Lebesgue measure) and $(k_\ell)_{\ell\ge 1}$ the (sub)sequence we obtain from Corollary 3.3.10. Then, it holds that
\[ \lim_{\ell\to\infty} \mathbb{E}\Bigl[\int_0^1 \Bigl(\bigl\|\bar X^{k_\ell,s}(u) - X(u)\bigr\|_V^\alpha + \bigl\|\bar X_{k_\ell,s}(u) - X(u)\bigr\|_V^\alpha\Bigr)\,du\Bigr] = 0 \]
for all $s \in F_2$.

Up until now, we have all necessary ingredients to show Lemma 3.3.1. However, we have constructed the processes $\bar X^{k,s}$ and $\bar X_{k,s}$ so that they depend on $\bar X$, and not on $X$ itself.

Lemma 3.3.12. Under the assumptions of Setting 3.3.2, there exists a set $F_3$ of full Lebesgue measure such that for all $s \in F_3$, we have
\[ X(t) = \bar X(t) \]
$\mathbb{P}$-almost surely for each $t \in I'(s)$.

Proof. Compare page 94 in [23]; this is an extended discussion.
Pick arbitrary $k \in \mathbb{N}$ and let $i \in \{1, \ldots, 2^k\}$. Then, we can define the measure $\mu$ on $[0,1]$ by
\[ \mu := \mu_i^k : \mathcal{B}([0,1]) \to [0,\infty) : A \mapsto \int_0^1 1_A\bigl(t_i^k(s)\bigr)\,ds, \]
that is, $\mu$ is the image of the Lebesgue measure on $[0,1]$ under the map $t_i^k$. By construction, the measure $\mu$ is absolutely continuous with respect to $\lambda$ (on each dyadic block of length $2^{-k}$, the map $t_i^k$ acts as a translation, whence $\mu(A) \le 2^k\lambda(A)$) and, as a consequence, it holds that $\mu\otimes\mathbb{P} \ll \lambda\otimes\mathbb{P}$ by Lemma 1.3.6. Then, since $\bar X$ and $X$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere, we know that the set $\{\bar X \neq X\}$ is a $\lambda\otimes\mathbb{P}$-null set and therefore also a $\mu\otimes\mathbb{P}$-null set. It follows that the integral
\[ \int_{[0,1]\times\Omega} \|\bar X - X\|_V\,d(\mu\otimes\mathbb{P}) = \int_{\{\bar X \neq X\}} \|\bar X - X\|_V\,d(\mu\otimes\mathbb{P}) + \int_{\{\bar X = X\}} \|\bar X - X\|_V\,d(\mu\otimes\mathbb{P}) \]
is equal to zero, as the first integral on the right-hand side is an integral over a $\mu\otimes\mathbb{P}$-null set and the latter integral vanishes because its integrand vanishes on the mentioned set.

Using Tonelli's theorem and the image-measure substitution rule in the third equality below, we find
\[ 0 = \int_{[0,1]\times\Omega} \|\bar X - X\|_V\,d(\mu\otimes\mathbb{P}) = \mathbb{E}\Bigl[\int_0^1 \|\bar X(s) - X(s)\|_V\,d\mu(s)\Bigr] = \mathbb{E}\Bigl[\int_0^1 \bigl\|\bar X\bigl(t_i^k(s)\bigr) - X\bigl(t_i^k(s)\bigr)\bigr\|_V\,ds\Bigr] = \int_0^1 \mathbb{E}\bigl[\bigl\|\bar X\bigl(t_i^k(s)\bigr) - X\bigl(t_i^k(s)\bigr)\bigr\|_V\bigr]\,ds. \]
Hence, on a set $F_{3,i}^k \subset [0,1]$ of full Lebesgue measure, we have that
\[ X\bigl(t_i^k(s)\bigr) = \bar X\bigl(t_i^k(s)\bigr) \]
$\mathbb{P}$-almost surely for every $s \in F_{3,i}^k$. Now, if we define
\[ F_3 := \bigcap_{k\in\mathbb{N}} \bigcap_{i=1}^{2^k} F_{3,i}^k, \]
then this (Lebesgue-measurable) set has full measure and possesses the desired property.
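As we read it, the measure in this proof is the image of Lebesgue measure under $t_i^k$, i.e. $\mu(A) = \lambda(\{s : t_i^k(s) \in A\})$, and absolute continuity comes from the fact that $t_i^k$ translates each dyadic block of length $2^{-k}$ onto the cell $[(i-1)2^{-k}, i2^{-k})$, so that $\mu \le 2^k\lambda$. A grid-based sketch of this (plain Python; the grid approximation and the specific values of $k$, $i$ and the test intervals are our own choices):

```python
import math

def t_i(k, i, s):
    # the partition-point map s -> t_i^k(s) from (3.19), for i in {1, ..., 2^k}
    return (s - math.floor(2 ** k * s) / 2 ** k) + (i - 1) / 2 ** k

def mu(k, i, a, b, n=50_000):
    # grid approximation of mu([a, b)) = Leb({s in [0, 1) : t_i^k(s) in [a, b)})
    h = 1.0 / n
    return sum(h for j in range(n) if a <= t_i(k, i, j * h) < b)

# t_2^3 maps every s into the dyadic cell [1/8, 2/8), translating each block, so
# mu([a, b)) = 2^3 * Leb([a, b) intersected with [1/8, 2/8)) <= 2^3 * Leb([a, b))
assert abs(mu(3, 2, 0.1, 0.3) - 8 * (0.25 - 0.125)) < 1e-3   # equals 1.0
assert mu(3, 2, 0.5, 0.6) == 0.0                             # cell misses [0.5, 0.6)
```

In particular a $\lambda$-null set is also a $\mu$-null set, which is exactly what the proof uses.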

With all these tools at hand, we can quickly wrap up the proof of Lemma 3.3.1.

Proof of Lemma 3.3.1. Given that $N$ is a Lebesgue-null set, we obtain from Corollary 3.3.4 a set $F_0$ of full Lebesgue measure such that $I'(s) \subset N^c$ for all $s \in F_0$. Now, let $F_1$, $F_2$ and $F_3$ be the sets of full Lebesgue measure obtained from Lemma 3.3.5, Corollary 3.3.10 and Lemma 3.3.12, respectively. Hence, the set $F = \bigcap_{0\le i\le 3} F_i$ has full Lebesgue measure. We fix an arbitrary element $s \in F$, we take the subsequence $\{k_\ell : \ell \ge 1\}$ from Corollary 3.3.10 and claim that the sequence of partitions $\{I_{k_\ell}(s) : \ell \ge 1\}$ satisfies the given conditions.

Per construction, we have that $\bigl[\bigcup_{\ell\in\mathbb{N}} I_{k_\ell}(s)\bigr] \setminus \{0,T\} \subset N^c$, as desired. Moreover, for every $t \in I'(s)$, the random variable $X(t)$ coincides $\mathbb{P}$-almost surely with the V-valued random variable $\bar X(t)$ by Lemma 3.3.12. From Lemma 3.3.12 again and the countability of $I'(s)$, we obtain that $\bar X^{k_\ell,s}$ and $X^{k_\ell}$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere for every $\ell \in \mathbb{N}$, and the same goes for $\bar X_{k_\ell,s}$ and $X_{k_\ell}$. Then, the convergence result (3.18) follows from Corollary 3.3.11.

Therefore, if we consider the fixed $s$ we chose earlier and define $I_l := I_{k_l}(s)$ for all $l \in \mathbb{N}$, we are done.

3.3.2 A special case of Itô's formula (Theorem 3.3.17)

The main topic of this section, Theorem 3.3.17, is a special case of the well-known Itô formula and is essential for proving Theorem 3.2.3. It is an elaborated proof of Theorem 4.2.5 in [23]; the proof in [23] is used as a backbone for our arguments. Comparable with [23], we also have the Claims A through F below. However, we have chosen to break up the proofs of these claims into several pieces for readability purposes; these pieces culminate in the same claims as in [23].

The proof of this theorem is more involved than the proof of Lemma 3.3.1. Therefore, we will first discuss the setting in which we will work. Then, we introduce important sets that will be used throughout the proof and we derive some preliminary convergence results to be used later on. After that, we state the theorem and tackle the proof.

Setting 3.3.13 (Setting for Theorem 3.3.17). Let $\alpha \in (1,\infty)$, let $X_0 \in L^2(\Omega, \mathcal{F}_0, \mathbb{P}; H)$ and let the processes $Y \in L^{\alpha/(\alpha-1)}([0,T]\times\Omega;V^*)$ and $Z \in L^2([0,T]\times\Omega;L_2(U,H))$ be progressively measurable. Define the $V^*$-valued process $X$ by
\[ X(t) := X_0 + \int_0^t Y(s)\,ds + \int_0^t Z(s)\,dW(s). \]
This process is continuous in $V^*$ and $\mathbb{F}$-adapted, hence $\mathcal{B}([0,T])\otimes\mathcal{F}/\mathcal{B}(V^*)$-measurable, as we will discuss in Discussion 3.3.14 below.

For notational purposes, we define $M := \int_0^\cdot Z(s)\,dW(s)$. Then, it follows that $\langle M\rangle = \int_0^\cdot \|Z(s)\|_{L_2(U,H)}^2\,ds$ by Proposition 2.3.11.

In addition, suppose that $X$ coincides $\lambda\otimes\mathbb{P}$-almost everywhere with a V-valued process $\bar X$ that belongs to $L^\alpha([0,T]\times\Omega;V)$ and satisfies the integrability condition
\[ \mathbb{E}\bigl[\|\bar X(t)\|_H^2\bigr] < \infty \tag{3.28} \]
on a set $I_1 \subset [0,T]$ of full Lebesgue measure. (This is in particular the case if $\alpha \ge 2$, since the inclusion map $\iota$ defined in (3.1) is continuous, or when $\bar X$ belongs to $L^2([0,T]\times\Omega;H)$ as well.) Moreover, observe that, since $\bar X$ is an $\alpha$-integrable $\lambda\otimes\mathbb{P}$-version of $X$, we have
\[ 0 = \int_{[0,T]\times\Omega} \|\bar X - X\|_V\,d(\lambda\otimes\mathbb{P}) = \int_0^T \mathbb{E}\bigl[\|\bar X(t) - X(t)\|_V\bigr]\,dt \]
by Tonelli's theorem; compare the proof of Lemma 3.3.12. In particular, we know that there is a set $I_2 \subset [0,T]$ of full Lebesgue measure with the property that the random variables $X(t)$ and $\bar X(t)$ coincide $\mathbb{P}$-almost surely for every $t \in I_2$. Let $I := I_1 \cap I_2 \subset [0,T]$, a set of full Lebesgue measure. On $I$, the integrability condition
\[ \mathbb{E}\bigl[\|X(t)\|_H^2\bigr] < \infty \tag{3.29} \]
holds for all $t \in I \subset I_2$ by (3.28).

Then, since the process $X$ satisfies all conditions of Lemma 3.3.1, we obtain a sequence of partitions $\{I_l : l \ge 1\}$ of the interval $[0,T]$ such that the random variable $X(t)$ coincides $\mathbb{P}$-almost surely with the V-valued random variable $\bar X(t)$ for all $t \in I'$, and such that $I' \subset I$. In particular, this means that (3.29) holds for all $t \in I'$.

Using this sequence of partitions $\{I_l : l \ge 1\}$, we can define the processes $X^l$ and $X_l$ as in (3.17), and we define $I'_l := I_l \setminus \{0, T\}$.

Since $V$ is a dense, linear subspace of $H$ and both spaces are separable, there exists an orthonormal basis of $H$ consisting of elements of $V$ by Proposition 1.1.12. Let $\{e_j : j \in \mathbb{N}\}$ be such an orthonormal basis and define $H_n := \mathrm{Sp}\{e_1, \ldots, e_n\}$ for every $n \in \mathbb{N}$. Define the bounded linear operator $P_n$ on $H$ as the projection onto $H_n$ and let $\mathrm{id}_H$ denote the identity operator on $H$.
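The projections $P_n$ converge strongly to $\mathrm{id}_H$: for every $x \in H$, $\|x - P_n x\|_H^2 = \sum_{j>n} \langle x, e_j\rangle_H^2 \to 0$ as $n \to \infty$. A coordinate sketch of this tail-sum picture in $\ell^2$ (plain Python; the square-summable coordinate sequence $x_j = 1/j$ is an arbitrary example of ours):

```python
# Strong convergence P_n -> id_H, seen in coordinates: for x in H with
# <x, e_j> = 1/j, the squared error ||x - P_n x||_H^2 is the tail sum of
# 1/j^2, which tends to zero as n grows.
def tail_norm_sq(n, N=100_000):
    # truncated tail sum_{j=n+1}^{N} 1/j^2, approximating ||x - P_n x||^2
    return sum(1.0 / (j * j) for j in range(n + 1, N + 1))

tails = [tail_norm_sq(n) for n in (1, 10, 100)]
assert tails[0] > tails[1] > tails[2]
assert tails[2] < 0.011     # the tail behaves like 1/n
```

Only the strong (pointwise) convergence illustrated here is available in general; $P_n$ does not converge to $\mathrm{id}_H$ in operator norm when $H$ is infinite-dimensional.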

Discussion 3.3.14 (Joint measurability and continuity of X). The process $Y$ is progressively measurable and belongs to $L^{\alpha/(\alpha-1)}([0,T]\times\Omega;V^*)$ by assumption. From the latter assumption, we see that there exists a set $\Omega_Y$ of full probability such that $Y(\cdot,\omega)$ belongs to $L^{\alpha/(\alpha-1)}([0,T];V^*)$ for all $\omega \in \Omega_Y$ by Proposition 1.2.13. Then, we know that the map $[0,T] \to V^* : t \mapsto \int_0^t Y(s,\omega)\,ds$ is continuous for every $\omega \in \Omega_Y$ by invoking Bochner's inequality (1.17) in Proposition 1.2.4.

Moreover, since $Y$ is progressively measurable and integrable, we know that $Y|_{[0,t]\times\Omega}$ is $\mathcal{B}([0,t])\otimes\mathcal{F}_t$-measurable and integrable on $[0,t]\times\Omega$ for every $t \in [0,T]$. Then, Part (3) in Theorem 1.2.8 yields that the function $\Omega \to V^* : \omega \mapsto \int_0^t Y(s,\omega)\,ds$ is $\mathcal{F}_t$-measurable for every $t \in [0,T]$. The process $\int_0^\cdot Y(s)\,ds := \{\int_0^t Y(s)\,ds : t \in [0,T]\}$ is therefore adapted.

However, as the underlying filtration is normal, we know that the process $1_{\Omega_Y}\int_0^\cdot Y(s)\,ds := \{1_{\Omega_Y}\int_0^t Y(s)\,ds : t \in [0,T]\}$, which is $\mathbb{P}$-indistinguishable from $\int_0^\cdot Y(s)\,ds$, is adapted as well by Proposition 2.1.4. In addition, the process $1_{\Omega_Y}\int_0^\cdot Y(s)\,ds$ is continuous in $V^*$. This means that it is progressively measurable (Proposition 2.1.8) and consequently, jointly measurable (Proposition 2.1.5). We now redefine $\int_0^\cdot Y(s)\,ds := 1_{\Omega_Y}\int_0^\cdot Y(s)\,ds$, meaning that the process $\int_0^\cdot Y(s)\,ds$ is well-defined, continuous in $V^*$ and $\mathbb{F}$-adapted.

Then, since $M := \int_0^\cdot Z(s)\,dW(s)$ is continuous in $H$ (and by (3.3) therefore also continuous in $V^*$) and $\mathbb{F}$-adapted as well, the process $X$ is $\mathbb{F}$-adapted and continuous in $V^*$. In particular, $X$ is $\mathcal{B}([0,T])\otimes\mathcal{F}/\mathcal{B}(V^*)$-measurable by Proposition 2.1.5.

Setting 3.3.15 (Important sets for the proof of Theorem 3.3.17). The following sets will be used in the proofs of the intermediate claims of Theorem 3.3.17.

Ω_Y   The set of full probability such that Y(·, ω) ∈ L^{α/(α−1)}([0, T]; V*) for all ω ∈ Ω_Y (see also Discussion 3.3.14; existence follows from Proposition 1.2.13).

Ω_Z   The set of full probability such that Z(·, ω) ∈ L²([0, T]; L_2(U, H)) for all ω ∈ Ω_Z (Proposition 1.2.13).

I_ω^Z   As Z(·, ω) ∈ L²([0, T]; L_2(U, H)) for all ω ∈ Ω_Z, there exists a set I_ω^Z ⊂ [0, T] of full Lebesgue measure such that ‖Z(s, ω)‖_{L_2(U,H)} < ∞ for all s ∈ I_ω^Z.

Ω_X^α   The set of full probability such that X̄(·, ω) ∈ L^α([0, T]; V) for all ω ∈ Ω_X^α (Proposition 1.2.13).

Ω_t   We define Ω_t := {X(t) = X̄(t)} for all t ∈ I′; note that P(Ω_t) = 1 for every t ∈ I′ by Lemma 3.3.1.

Ω̃   We define Ω̃ := ⋂_{t∈I′} Ω_t. This is a set of full probability since the set I′ is countable. On the set Ω̃, it holds that X(t) ∈ V for all t ∈ I′. In particular, the processes X^l and X̄^l are V-valued for all l ∈ N on Ω̃.

Ω̂_t   We define Ω̂_t := {ω ∈ Ω : ‖X(t, ω)‖²_H < ∞} for every t ∈ I′. This set has full probability in view of (3.29).

Ω̂   We define Ω̂ := ⋂_{t∈I′} Ω̂_t, which has full probability. In addition, for arbitrary l ∈ N, we see that

    sup_{t∈[0,T]} ‖X^l(t)‖²_H < ∞   and   sup_{t∈[0,T]} ‖X̄^l(t)‖²_H < ∞

on Ω̂.

F_Z   As we assume that Z ∈ L²([0, T] × Ω; L_2(U, H)), we can define the (measurable) set F_Z := {(s, ω) ∈ [0, T] × Ω : ‖Z(s, ω)‖_{L_2(U,H)} < ∞} of full λ ⊗ P-measure.

We first derive a useful lemma that will be used twice below. After that, we state Theorem 3.3.17.


Lemma 3.3.16 (Preliminary convergence results). Consider the set F_Z (of full λ ⊗ P-measure) we defined in Setting 3.3.15. For every (s, ω) ∈ F_Z, we have for all n ∈ N the inequality

    ‖(id_H − P_n)Z(s, ω)‖²_{L_2(U,H)} = Σ_{j=1}^∞ ‖(id_H − P_n)Z(s, ω)u_j‖²_H
        ≤ 4 Σ_{j=1}^∞ ‖Z(s, ω)u_j‖²_H = 4 ‖Z(s, ω)‖²_{L_2(U,H)} < ∞    (3.30)

by Lemma 1.1.21. Also note that

    lim_{n→∞} ‖(id_H − P_n)Z(s, ω)‖²_{L_2(U,H)} = Σ_{j=1}^∞ lim_{n→∞} ‖(id_H − P_n)Z(s, ω)u_j‖²_H = 0    (3.31)

by the convergence of P_n to id_H in the strong operator topology. Interchanging the infinite series and the limit is allowed by (3.30).

In similar fashion, fix arbitrary ω ∈ Ω_Z. Then, we have ‖(id_H − P_n)Z(s, ω)‖²_{L_2(U,H)} ≤ 4 ‖Z(s, ω)‖²_{L_2(U,H)} for all s ∈ [0, T] and n ∈ N. Moreover, for every s ∈ I_ω^Z, we know that ‖Z(s, ω)‖²_{L_2(U,H)} < ∞, and so we have (s, ω) ∈ F_Z. The convergence result (3.31) above holds on F_Z, so it holds in particular for all s ∈ I_ω^Z. Recall that the set I_ω^Z ⊂ [0, T] has full Lebesgue measure.

Then, since Z(·, ω) ∈ L²([0, T]; L_2(U, H)), we find for every ω ∈ Ω_Z and all t ∈ [0, T],

    lim_{n→∞} ∫_0^t ‖(id_H − P_n)Z(s, ω)‖²_{L_2(U,H)} ds = lim_{n→∞} ∫_{I_ω^Z ∩ [0,t]} ‖(id_H − P_n)Z(s, ω)‖²_{L_2(U,H)} ds = 0    (3.32)

by Lebesgue's Dominated Convergence Theorem in view of (3.31). This concludes the lemma.
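The convergence (3.31) can be illustrated numerically: truncating a Hilbert–Schmidt operator with the coordinate projections P_n drives the Hilbert–Schmidt norm of the remainder to zero. This is an informal finite-dimensional sketch; the matrix standing in for Z(s, ω) and its decay profile are arbitrary choices, not objects from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
# Stand-in for Z(s, w): a matrix whose rows decay, so it is "Hilbert-Schmidt".
Z = rng.standard_normal((d, d)) / np.arange(1, d + 1)[:, None]

def remainder_hs_norm(n):
    # P_n projects onto span{e_1, ..., e_n}; (id_H - P_n)Z zeroes the first n rows.
    R = Z.copy()
    R[:n, :] = 0.0
    return np.linalg.norm(R)  # Frobenius norm = Hilbert-Schmidt norm

norms = [remainder_hs_norm(n) for n in range(d + 1)]
assert all(a >= b for a, b in zip(norms, norms[1:]))  # monotone decrease in n
assert norms[-1] == 0.0                               # projecting onto all of H removes Z
```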

Now, the main topic of this section.

Theorem 3.3.17 (Theorem 4.2.5 in [23]). Assume that the conditions in Setting 3.3.13 are in force. Then, X is a continuous H-valued, F-adapted process. Moreover, the process X satisfies

    E[ sup_{t≤T} ‖X(t)‖²_H ] < ∞.    (3.33)

The following Itô formula for the square of the H-norm holds pointwise on a set of full probability:

    ‖X(t)‖²_H = ‖X_0‖²_H + ∫_0^t ( 2 ⟨Y(s), X̄(s)⟩_{V*×V} + ‖Z(s)‖²_{L_2(U,H)} ) ds + 2 ∫_0^t ⟨X(s), Z(s) dW(s)⟩_H    (3.34)

for all t ∈ [0, T] simultaneously.

In the proofs below, we will often construct a set of full probability measure and denote it by Ω′ for practical reasons. If this constructed set Ω′ is of particular importance, it will be duly noted; otherwise, the set Ω′ is only of local importance and can safely be forgotten once the proof is finished.


Remark 3.3.18. Below, we will often encounter an expression of the form

    ∫_a^b ⟨Ξ(s), Z(s) dW(s)⟩_H    (3.35)

where Ξ has P-almost surely paths in H. As an example, one can think of ∫_a^b ⟨X̄^l(s), Z(s) dW(s)⟩_H, where the process X̄^l is defined as in (3.17) in Lemma 3.3.1. If we denote the set where the paths of Ξ are in fact H-valued by Ω_Ξ, which has full probability, we interpret the expression (3.35) as ∫_a^b ⟨1_{Ω_Ξ} Ξ(s), Z(s) dW(s)⟩_H. The latter expression has a sensible interpretation in view of Lemma 2.3.8 if the involved processes satisfy the following integrability condition:

    ∫_0^T ‖(Z(t))* (1_{Ω_Ξ} Ξ(t))‖²_U dt < ∞   P-almost surely.

Moreover, when we look at Corollary 2.3.9 and the set Ω_Z we encountered in Setting 3.3.15, the expression ∫_a^b ⟨1_{Ω_Ξ} Ξ(s), Z(s) dW(s)⟩_H is well-defined if on Ω_Ξ it holds that

    sup_{t∈[0,T]} ‖Ξ(t)‖²_H < ∞.

Luckily, this will often be the case.

Claim A. Assume that the assumptions in Setting 3.3.13 are in force. Then, for all s, t ∈ I′ with s < t, the following identity holds P-almost surely:

    ‖X(t)‖²_H − ‖X(s)‖²_H = 2 ∫_s^t ⟨Y(r), X(t)⟩_{V*×V} dr + 2 ⟨X(s), M(t) − M(s)⟩_H
        + ‖M(t) − M(s)‖²_H − ‖X(t) − X(s) − M(t) + M(s)‖²_H.    (3.36)

Proof. Consider two arbitrary partition points s, t ∈ I′ with s < t. Then, on Ω̃ ∩ Ω_Y, the maps X(t) and X(s) are V-valued, so we have by (1.6) on page 11

    ‖M(t) − M(s)‖²_H − ‖X(t) − X(s) − M(t) + M(s)‖²_H + 2 ⟨X(s), M(t) − M(s)⟩_H
        = 2 ⟨X(t), M(t) − M(s)⟩_H − ‖X(t) − X(s)‖²_H.

"Adding zero" to the inner product on the right-hand side and rewriting, we obtain, by again using (1.6) in the second equality,

    2 ⟨X(t), M(t) − M(s)⟩_H − ‖X(t) − X(s)‖²_H
        = 2 ⟨X(t), X(t) − X(s)⟩_H + 2 ⟨X(t), [M(t) − M(s)] − [X(t) − X(s)]⟩_H − ‖X(t) − X(s)‖²_H
        = 2 ⟨X(t), X(t) − X(s)⟩_H − 2 ⟨X(t), [X(t) − X(s)] − [M(t) − M(s)]⟩_H
            − ‖X(t)‖²_H − ‖X(s)‖²_H + 2 ⟨X(t), X(s)⟩_H
        = ‖X(t)‖²_H − ‖X(s)‖²_H − 2 ⟨X(t), [X(t) − X(s)] − [M(t) − M(s)]⟩_H.

Note that [X(t) − X(s)] − [M(t) − M(s)] = ∫_s^t Y(r) dr. Hence, for fixed, arbitrary ω ∈ Ω̃ ∩ Ω_Y, we have, by (the symmetry of the real inner product and) the dualization property (3.4) on page 45 and Lemma 1.2.6,

    ⟨X(t, ω), [X(t, ω) − X(s, ω)] − [M(t, ω) − M(s, ω)]⟩_H = ⟨X(t, ω), ∫_s^t Y(r, ω) dr⟩_H
        = ⟨∫_s^t Y(r, ω) dr, X(t, ω)⟩_H = ⟨∫_s^t Y(r, ω) dr, X(t, ω)⟩_{V*×V}
        = ∫_s^t ⟨Y(r, ω), X(t, ω)⟩_{V*×V} dr.    (3.37)


All together, we find that the identity

    ‖M(t) − M(s)‖²_H − ‖X(t) − X(s) − M(t) + M(s)‖²_H + 2 ⟨X(s), M(t) − M(s)⟩_H
        = ‖X(t)‖²_H − ‖X(s)‖²_H − 2 ∫_s^t ⟨Y(r), X(t)⟩_{V*×V} dr

holds pointwise on Ω̃ ∩ Ω_Y. Rearranging yields the desired result.
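The manipulations above are pure Hilbert-space algebra. As a sanity check, the rearranged identity (3.36) can be verified in a finite-dimensional inner-product space; the vectors below are arbitrary stand-ins for X(t), X(s), M(t), M(s), with ∫_s^t Y(r) dr replaced by the difference [X(t) − X(s)] − [M(t) − M(s)], as in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
x_t, x_s, m_t, m_s = rng.standard_normal((4, 8))  # arbitrary vectors in R^8
dx, dm = x_t - x_s, m_t - m_s                     # increments of X and M
y = dx - dm                                       # plays the role of the Y-integral

lhs = x_t @ x_t - x_s @ x_s                       # ||X(t)||^2 - ||X(s)||^2
rhs = 2 * (y @ x_t) + 2 * (x_s @ dm) + dm @ dm - (dx - dm) @ (dx - dm)
assert abs(lhs - rhs) < 1e-10                     # identity (3.36), pointwise
```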

Lemma 3.3.19. Let the assumptions of Setting 3.3.13 be in force. For every t ∈ I′, we have P-almost surely

    ‖X(t)‖²_H − ‖X_0‖²_H
        = 2 ∫_0^t ⟨Y(r), X̄^l(r)⟩_{V*×V} dr + 2 ∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H + 2 ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H
        + Σ_{j=0}^{n−1} ( ‖M(t_{j+1}^l) − M(t_j^l)‖²_H − ‖X(t_{j+1}^l) − X(t_j^l) − M(t_{j+1}^l) + M(t_j^l)‖²_H ).    (3.38)

Proof. Let l ∈ N be arbitrary. For all t = t_n^l ∈ I′_l, the following identity holds P-almost surely by Claim A:

    ‖X(t)‖²_H − ‖X_0‖²_H = Σ_{j=0}^{n−1} ( ‖X(t_{j+1}^l)‖²_H − ‖X(t_j^l)‖²_H )
        = Σ_{j=0}^{n−1} [ 2 ∫_{t_j^l}^{t_{j+1}^l} ⟨Y(r), X(t_{j+1}^l)⟩_{V*×V} dr + 2 ⟨X(t_j^l), M(t_{j+1}^l) − M(t_j^l)⟩_H
        + ‖M(t_{j+1}^l) − M(t_j^l)‖²_H − ‖X(t_{j+1}^l) − X(t_j^l) − M(t_{j+1}^l) + M(t_j^l)‖²_H ].    (3.39)

Note that

    Σ_{j=0}^{n−1} ∫_{t_j^l}^{t_{j+1}^l} ⟨Y(r), X(t_{j+1}^l)⟩_{V*×V} dr = Σ_{i=1}^{n} ∫_{t_{i−1}^l}^{t_i^l} ⟨Y(r), X(t_i^l)⟩_{V*×V} dr

when we apply the substitution i = j + 1. Recall the definition of X̄^l in (3.17) in Lemma 3.3.1. On each interval [t_{i−1}^l, t_i^l), we have X̄^l(r) = X(t_i^l) for all r ∈ [t_{i−1}^l, t_i^l). As this holds on any such interval, we find

    Σ_{i=1}^{n} ∫_{t_{i−1}^l}^{t_i^l} ⟨Y(r), X(t_i^l)⟩_{V*×V} dr = Σ_{i=1}^{n} ∫_{t_{i−1}^l}^{t_i^l} ⟨Y(r), X̄^l(r)⟩_{V*×V} dr = ∫_0^t ⟨Y(r), X̄^l(r)⟩_{V*×V} dr.

Moreover, using the definition of M and the process X^l as depicted in Lemma 3.3.1, we find by


Corollary 2.3.10 that the identity

    Σ_{j=0}^{n−1} ⟨X(t_j^l), M(t_{j+1}^l) − M(t_j^l)⟩_H
        = ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H + Σ_{j=1}^{n−1} ⟨X(t_j^l), M(t_{j+1}^l) − M(t_j^l)⟩_H
        = ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H + Σ_{j=1}^{n−1} ∫_{t_j^l}^{t_{j+1}^l} ⟨X(t_j^l), Z(s) dW(s)⟩_H
        = ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H + Σ_{j=1}^{n−1} ∫_{t_j^l}^{t_{j+1}^l} ⟨X^l(s), Z(s) dW(s)⟩_H    (3.40)

holds P-almost surely. We note that the sum on the right-hand side has a sensible interpretation in view of Remark 3.3.18, as on the set Ω̂, we have

    sup_{t∈[0,T]} ‖X^l(t)‖²_H < ∞,

for we take the pointwise supremum over finitely many elements of V ⊂ H. In particular, since the process X^l vanishes on the interval [0, t_1^l), we find

    ∫_0^{t_1^l} ⟨X^l(s), Z(s) dW(s)⟩_H = 0.

Hence, "adding zero" to the series on the right-hand side of (3.40) gives, P-almost surely,

    Σ_{j=0}^{n−1} ⟨X(t_j^l), M(t_{j+1}^l) − M(t_j^l)⟩_H = ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H + ∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H.

Rewriting (3.39) with these observations yields

    ‖X(t)‖²_H − ‖X_0‖²_H
        = 2 ∫_0^t ⟨Y(r), X̄^l(r)⟩_{V*×V} dr + 2 ∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H + 2 ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H
        + Σ_{j=0}^{n−1} ( ‖M(t_{j+1}^l) − M(t_j^l)‖²_H − ‖X(t_{j+1}^l) − X(t_j^l) − M(t_{j+1}^l) + M(t_j^l)‖²_H ),

P-almost surely, as desired.

Corollary 3.3.20. Assume that the assumptions of Setting 3.3.13 hold. Then, for every t ∈ I′, the following inequality holds P-almost surely:

    ‖X(t)‖²_H ≤ ‖X_0‖²_H + 2 ∫_0^t |⟨Y(r), X̄^l(r)⟩_{V*×V}| dr + 2 |∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H|
        + 2 |⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H| + Σ_{j=0}^{k_l−1} ‖M(t_{j+1}^l) − M(t_j^l)‖²_H.    (3.41)


We will use this result in the next lemma, which aids in proving property (3.33).

Lemma 3.3.21. Suppose that the assumptions of Setting 3.3.13 hold. Then, there exists a constant C > 0 such that

    sup_{l∈N} E[ sup_{t∈I′_l} ‖X(t)‖²_H ] ≤ C.

Proof. On the set Ω̃ of full probability, the map t ↦ ⟨Y(t), X̄^l(t)⟩_{V*×V} is well-defined for every l ∈ N. Then, by applying Hölder's inequality to the maps t ↦ ‖Y(t)‖_{V*} and t ↦ ‖X̄^l(t)‖_V, we have

    E[ ∫_0^T |⟨Y(r), X̄^l(r)⟩_{V*×V}| dr ] ≤ E[ ∫_0^T ‖Y(r)‖_{V*} ‖X̄^l(r)‖_V dr ]
        ≤ ( E[ ∫_0^T ‖Y(r)‖_{V*}^{α/(α−1)} dr ] )^{(α−1)/α} ( E[ ∫_0^T ‖X̄^l(r)‖_V^α dr ] )^{1/α}.    (3.42)

Per assumption, we have Y ∈ L^{α/(α−1)}([0, T] × Ω; V*), implying that the first factor on the right-hand side is finite. Moreover, by (3.18) in Lemma 3.3.1, we know that

    lim_{l→∞} E[ ∫_0^T ‖X̄(t) − X̄^l(t)‖_V^α dt ] = 0

and from the same lemma, we obtain that E[ ∫_0^T ‖X̄(t)‖_V^α dt ] < ∞. This means that there exists some C_1 > 0 such that

    ( E[ ∫_0^T ‖Y(r)‖_{V*}^{α/(α−1)} dr ] )^{(α−1)/α} · sup_{l∈N} ( E[ ∫_0^T ‖X̄^l(r)‖_V^α dr ] )^{1/α} ≤ C_1.

Hence, the following inequality holds:

    sup_{l∈N} E[ ∫_0^T |⟨Y(r), X̄^l(r)⟩_{V*×V}| dr ] ≤ C_1.    (3.43)

The Burkholder–Davis–Gundy inequality, Proposition 1.3.2, supplies us with a universal constant C′ such that, when we look back at (2.14) in Corollary 2.3.12, the following inequality holds for all l ∈ N:

    E[ sup_{t≤T} |∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H| ] ≤ C′ E[ ( sup_{t∈[0,T]} ‖X^l(t)‖²_H · ⟨M⟩_T )^{1/2} ].

Then, as √(ab) ≤ a/(4C′) + C′b for all non-negative numbers a and b (and C′ > 0), we find

    E[ sup_{t≤T} |∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H| ] ≤ (1/4) E[ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] + (C′)² E[⟨M⟩_T].    (3.44)

For every l ∈ N, we have

    E[ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] ≤ Σ_{t∈I′_l} E[ ‖X(t)‖²_H ] < ∞    (3.45)

as (3.29) holds for all partition points t ∈ I′ and I′_l contains finitely many points.

We can repeatedly apply the Itô isometry (2.7) to the process Φ = 1_{(t_j^l, t_{j+1}^l]} Z to find

    Σ_{j=0}^{k_l−1} E[ ‖M(t_{j+1}^l) − M(t_j^l)‖²_H ] = Σ_{j=0}^{k_l−1} E[ ‖∫_{t_j^l}^{t_{j+1}^l} Z(s) dW(s)‖²_H ]
        = E[ Σ_{j=0}^{k_l−1} ∫_{t_j^l}^{t_{j+1}^l} ‖Z(s)‖²_{L_2(U,H)} ds ] = E[⟨M⟩_T].    (3.46)

Moreover, using the Cauchy–Schwarz inequality, Hölder's inequality and a similar argument as in the previous display, we obtain

    E[ |⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H| ] ≤ E[ ‖X(0)‖_H ‖∫_0^{t_1^l} Z(s) dW(s)‖_H ]
        ≤ ( E[‖X_0‖²_H] )^{1/2} ( E[ ‖∫_0^{t_1^l} Z(s) dW(s)‖²_H ] )^{1/2} ≤ ( E[‖X_0‖²_H] )^{1/2} · ( E[⟨M⟩_T] )^{1/2}.    (3.47)

Considering (3.41) in Corollary 3.3.20, we see that

    E[ sup_{t∈I′_l} ‖X(t)‖²_H ]
        ≤ E[‖X_0‖²_H] + 2 E[ ∫_0^T |⟨Y(r), X̄^l(r)⟩_{V*×V}| dr ] + 2 E[ sup_{t≤T} |∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H| ]
        + 2 E[ |⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H| ] + E[ Σ_{j=0}^{k_l−1} ‖M(t_{j+1}^l) − M(t_j^l)‖²_H ].

Then, by observations (3.43), (3.44), (3.46) and (3.47), and since sup_{t∈[0,T]} ‖X^l(t)‖²_H ≤ sup_{t∈I′_l} ‖X(t)‖²_H (the process X^l only takes the values 0 and X(t_j^l) with t_j^l ∈ I′_l), we have

    E[ sup_{t∈I′_l} ‖X(t)‖²_H ] ≤ E[‖X_0‖²_H] + 2C_1 + (1/2) E[ sup_{t∈I′_l} ‖X(t)‖²_H ] + 2(C′)² E[⟨M⟩_T]
        + E[⟨M⟩_T] + 2 ( E[‖X_0‖²_H] )^{1/2} · ( E[⟨M⟩_T] )^{1/2}.

Hence, if we move the third term on the right-hand side to the left, which is allowed by (3.45), and multiply by two,

    E[ sup_{t∈I′_l} ‖X(t)‖²_H ] ≤ 2 · ( E[‖X_0‖²_H] + 2C_1 + 2(C′)² E[⟨M⟩_T] + E[⟨M⟩_T] + 2 ( E[‖X_0‖²_H] )^{1/2} · ( E[⟨M⟩_T] )^{1/2} ).

As we noted earlier, the constants C′ and C_1 do not depend on l ∈ N. Hence, the desired result follows.
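The absorption step above rests on the weighted AM–GM inequality √(ab) ≤ a/(4C′) + C′b, equivalently C′√(ab) ≤ a/4 + (C′)²b. A brute-force numerical check of this elementary inequality (the samples are arbitrary; nothing here is specific to the thesis):

```python
import random

random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(0, 10) for _ in range(3))
    c += 1e-9  # c plays the role of C' > 0
    # Weighted AM-GM: sqrt(xy) <= (x + y)/2 with x = a/2, y = 2*c*c*b
    assert c * (a * b) ** 0.5 <= a / 4 + c * c * b + 1e-12
```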

Since the sequence of partitions {I′_l : l ∈ N} is non-decreasing and ⋃_{l∈N} I′_l = I′, the following result holds by the Monotone Convergence Theorem.


Corollary 3.3.22. Suppose that the assumptions in Setting 3.3.13 are in force. Let C > 0 be the constant we obtain from Lemma 3.3.21. Then, it holds that

    E[ sup_{t∈I′} ‖X(t)‖²_H ] ≤ C   and   E[ sup_{t∈I′} ‖X(t)‖_H ] ≤ √C.

The latter is a result of Jensen's inequality for concave functions applied to the square root function.

From that observation, we obtain the following result.

Claim B. Under the assumptions of Setting 3.3.13, the integrability condition (3.33) on page 60, i.e. E[sup_{t≤T} ‖X(t)‖²_H] < ∞, holds. In particular, the set

    Ω_B := { sup_{t∈[0,T]} ‖X(t)‖_H < ∞ }

has full probability. To emphasize: the sample paths of X on this set lie completely in H and are uniformly bounded.

Proof. We define the set

    Ω′ := { sup_{t∈I′} ‖X(t)‖_H < ∞ },

and this will turn out to be a judicious choice. By Corollary 3.3.22, the set Ω′ has full probability. Let ω ∈ Ω′ be arbitrary, but fixed. When we look back at Construction 3.1.2, we see that ‖ · ‖_H := α_H is a lower semicontinuous map. Note that the sample point ω has a continuous, V*-valued sample path under X. Then, we can invoke Lemma 1.4.5 and see that the map [0, T] → [0, ∞] : t ↦ ‖X(t, ω)‖_H is a lower semicontinuous function. By applying Lemma 1.4.3, we find that

    sup_{t∈I′} ‖X(t, ω)‖_H = sup_{t≤T} ‖X(t, ω)‖_H,    (3.48)

as I′ ⊂ [0, T] is dense. This means that, as Ω′ has full probability,

    E[ sup_{t≤T} ‖X(t)‖²_H ] = E[ 1_{Ω′} sup_{t≤T} ‖X(t)‖²_H ] = E[ 1_{Ω′} sup_{t∈I′} ‖X(t)‖²_H ] = E[ sup_{t∈I′} ‖X(t)‖²_H ] < ∞

by Corollary 3.3.22. The set Ω_B indeed possesses all mentioned properties.
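The dense-supremum identity (3.48) really does need lower semicontinuity. A small numerical sketch with two arbitrary functions (neither is an object from the thesis): one lower semicontinuous, where the supremum over dyadic points recovers the full supremum, and a spike at an irrational point, where it does not.

```python
import math

# Lower semicontinuous: jump up, with the lower value attained at the jump point.
f = lambda t: 0.0 if t <= 0.5 else 1.0
# Not lower semicontinuous: a spike at the irrational point 1/sqrt(2).
g = lambda t: 1.0 if t == 1 / math.sqrt(2) else 0.0

dyadic = [k / 2**16 for k in range(2**16 + 1)]  # dense (in the limit) in [0, 1]

assert max(f(t) for t in dyadic) == 1.0  # sup over dense points = sup over [0, 1]
assert max(g(t) for t in dyadic) == 0.0  # true sup is 1: fails without semicontinuity
```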

It is important to consider the following observation.

Observation 3.3.23. We would like to point out that the previous claim does not guarantee thatthe H-valued sample paths on ΩB are continuous with respect to ‖ · ‖H. This will be shown inClaim F below.

We do however have the following consequence, if we reconsider Remark 3.3.18 in view of theset ΩB.

Remark 3.3.24. Henceforth, we will stop mentioning that expressions of the form (3.35) in Remark 3.3.18 are sensible when they involve X, X^l or X̄^l, as this is implicitly dealt with by the set Ω_B.

We continue by showing five results that together will be used to prove Claim C below.


Lemma 3.3.25. Under the assumptions of Setting 3.3.13, we have, for every ε > 0,

    lim_{n→∞} sup_{l∈N} P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H| > ε ) = 0.

Proof. Let l ∈ N and n ∈ N be arbitrary. Write K := ∫_0^· ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H. Since the operator id_H − P_n is symmetric, we have

    ⟨(id_H − P_n) X^l(s, ω), Z(s, ω)u⟩_H = ⟨X^l(s, ω), (id_H − P_n) Z(s, ω)u⟩_H

for all u ∈ U and all (s, ω) ∈ [0, T] × (Ω_B ∩ Ω_Z). With this observation, notation (2.8) in Lemma 2.3.8 and inequality (2.14) in Corollary 2.3.12, we find

    ⟨K⟩_T = ∫_0^T ‖Z_{(id_H − P_n)X^l}(s)‖²_{L_2(U,R)} ds ≤ [ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds.    (3.49)

The right-hand side is finite on the set Ω_B ∩ Ω_Z, so Corollary 2.3.9 implies that K is a well-defined real-valued continuous local martingale. Then, we obtain from Corollary 1.3.3, when we replace δ by 1/k for arbitrary k ∈ N, and inequality (3.49),

    P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H| > ε ) ≤ 3/(kε) + P( ⟨K⟩_T > 1/k² )
        ≤ 3/(kε) + P( [ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds > 1/k² ).    (3.50)

We will bound the last term on the right-hand side. Note that for each N ∈ N, we have

    P( [ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds > 1/k² )
        ≤ P( [ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds > 1/k², sup_{t∈[0,T]} ‖X^l(t)‖_H ≤ N )
        + P( [ sup_{t∈[0,T]} ‖X^l(t)‖²_H ] · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds > 1/k², sup_{t∈[0,T]} ‖X^l(t)‖_H > N )
        ≤ P( N² · ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds > 1/k² ) + P( sup_{t∈[0,T]} ‖X^l(t)‖_H > N ).

Applying Markov's inequality to the first probability and plugging the result into (3.50), we find

    P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H| > ε )
        ≤ 3/(kε) + N²k² E[ ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds ] + P( sup_{t∈[0,T]} ‖X^l(t)‖_H > N )
        ≤ 3/(kε) + N²k² E[ ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds ] + P( sup_{t∈[0,T]} ‖X(t)‖_H > N ).    (3.51)


Observe that the upper bound is independent of l ∈ N. As Z ∈ L²([0, T] × Ω; L_2(U, H)), we have by Lemma 3.3.16

    E[ ∫_0^T ‖(id_H − P_n) Z(s)‖²_{L_2(U,H)} ds ] → 0    (3.52)

as n → ∞ by Lebesgue's Dominated Convergence Theorem.

Hence, if we first take the supremum over l ∈ N and then let n → ∞ in (3.51), we find, by (3.52),

    lim_{n→∞} sup_{l∈N} P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H| > ε ) ≤ 3/(kε) + P( sup_{t∈[0,T]} ‖X(t)‖_H > N )

for arbitrary k and N. Hence, when N → ∞, the probability on the right-hand side vanishes by Claim B. Finally, since k ∈ N was chosen arbitrarily, the assertion follows.

If we replace X^l by X in the derivations in the proof of Lemma 3.3.25 above, the next result follows directly.

Lemma 3.3.26. Under the assumptions of Setting 3.3.13, we have, for every ε > 0,

    lim_{n→∞} P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X(s), Z(s) dW(s)⟩_H| > ε ) = 0.

We saw in Observation 3.3.23 that the sample paths of X on the set ΩB are not necessarilycontinuous in H. In contrast, however, the sample paths of the process Pn(X) are!

Lemma 3.3.27. Suppose that the assumptions of Setting 3.3.13 are in force. Consider the set Ω_B we obtain from Claim B. Then, for each ω ∈ Ω_B and every n ∈ N, the map P_n(X(·, ω)) : [0, T] → H is continuous.

Proof. Let ω ∈ Ω_B and n ∈ N be arbitrary. Observe that P_n : (H, ‖ · ‖_H) → (H_n, ‖ · ‖_H) is a linear and continuous map for every n ∈ N. Moreover, we can invoke Proposition 1.1.40 to see that the map [0, T] → H : t ↦ X(t, ω) is weakly continuous into H. Hence, the map [0, T] → H_n : t ↦ P_n(X(t, ω)) is weakly continuous into H_n, a finite-dimensional normed vector space. Then, as the notions of norm and weak convergence coincide on finite-dimensional normed vector spaces by Lemma 1.1.35, we have that P_n(X(·, ω)) : ([0, T], | · |) → (H_n, ‖ · ‖_H) is continuous. Since H_n is a closed linear subspace of H, the assertion holds.
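The upgrade from weak to norm convergence under a finite-rank projection can be seen on the classical example of basis vectors: in ℓ² the sequence (e_k) converges weakly to 0 while ‖e_k‖ = 1, yet its image under P_n converges in norm. A finite truncation of ℓ² (the dimension d below is an arbitrary stand-in) makes this concrete:

```python
import numpy as np

d, n = 1000, 5  # d truncates ell^2; P_n projects onto the first n coordinates

def e(k):
    # k-th standard basis vector: e_k -> 0 weakly, but ||e_k|| = 1 for all k
    v = np.zeros(d)
    v[k] = 1.0
    return v

norms = [np.linalg.norm(e(k)[:n]) for k in range(d)]  # ||P_n e_k||
assert all(x == 1.0 for x in norms[:n])   # P_n keeps e_k for k < n
assert all(x == 0.0 for x in norms[n:])   # ||P_n e_k|| -> 0 in norm
```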

This lemma has the following consequence.

Corollary 3.3.28. Let t ∈ (0, T) and ω ∈ Ω_B be arbitrary, but fixed. With the increasing sequence of partitions {I_l : l ≥ 1} and the associated processes X^l, there exists an L ∈ N such that for all l ≥ L, there is a unique n(l) ∈ {1, . . . , k_l − 1} such that t ∈ [t_{n(l)}^l, t_{n(l)+1}^l). In particular, this means that t_{n(l)}^l → t as l → ∞, as the mesh of the partitions tends to zero.

By definition, it holds that X^l(t, ω) = X(t_{n(l)}^l, ω) for all l ≥ L. Hence, by Lemma 3.3.27, we find that

    ‖P_n(X(t, ω) − X^l(t, ω))‖²_H = ‖P_n(X(t, ω)) − P_n(X^l(t, ω))‖²_H = ‖P_n(X(t, ω)) − P_n(X(t_{n(l)}^l, ω))‖²_H → 0    (3.53)

as l → ∞.
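The mechanism behind (3.53) (refine the partition, track the endpoint t_{n(l)}^l of the interval containing t, and use continuity) can be mimicked numerically. Below, an arbitrary continuous function stands in for the continuous map P_n(X(·, ω)); neither the path nor the dyadic partitions are objects from the thesis.

```python
import math

path = lambda t: math.sin(7 * t) + 0.5 * t  # stand-in continuous sample path
T, t = 1.0, 0.37

def left_endpoint_value(l):
    # Dyadic partition {j * T / 2^l}; n_l indexes the interval containing t.
    n_l = int(t * 2**l / T)
    return path(n_l * T / 2**l)

errors = [abs(path(t) - left_endpoint_value(l)) for l in range(1, 21)]
assert errors[-1] < 1e-5           # the piecewise-constant value converges to path(t)
assert errors[-1] < errors[0]      # finer partitions give smaller error here
```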


We will use this result in the next lemma.

Lemma 3.3.29. Under the assumptions of Setting 3.3.13, we have, for every ε > 0,

    sup_{n∈N} lim_{l→∞} P( sup_{t≤T} |∫_0^t ⟨P_n(X(s) − X^l(s)), Z(s) dW(s)⟩_H| > ε ) = 0.

Proof. Observe again that the stochastic integral appearing in the statement of the lemma has a sensible interpretation by Remark 3.3.18 and the definition of the set Ω_B. Replacing δ by 1/k for arbitrary k ∈ N in Corollary 1.3.3 and implicitly using Corollary 2.3.12, we find

    P( sup_{t≤T} |∫_0^t ⟨P_n(X(s) − X^l(s)), Z(s) dW(s)⟩_H| > ε ) ≤ 3/(kε) + P( ∫_0^T ‖Z_{P_n(X−X^l)}(s)‖²_{L_2(U,R)} ds > 1/k² ).

Invoking (2.9) in Lemma 2.3.8, we can bound the latter probability by

    P( ∫_0^T ‖P_n(X(s) − X^l(s))‖²_H · ‖Z(s)‖²_{L_2(U,H)} ds > 1/k² ).

Consider the set Ω_B ∩ Ω_Z again. For every ω ∈ Ω_B ∩ Ω_Z, we have the inequality

    ‖P_n(X(s, ω) − X^l(s, ω))‖²_H · ‖Z(s, ω)‖²_{L_2(U,H)} ≤ 4 [ sup_{t≤T} ‖X(t, ω)‖²_H ] · ‖Z(s, ω)‖²_{L_2(U,H)}

for all s ∈ [0, T]. Moreover, the right-hand side is integrable over the interval [0, T]. Then, the Dominated Convergence Theorem and Corollary 3.3.28 yield

    ∫_0^T ‖P_n(X(s) − X^l(s))‖²_H ‖Z(s)‖²_{L_2(U,H)} ds → 0

as l → ∞. This means that

    lim_{l→∞} P( sup_{t≤T} |∫_0^t ⟨P_n(X(s) − X^l(s)), Z(s) dW(s)⟩_H| > ε ) ≤ 3/(kε).

As n ∈ N was arbitrary and the upper bound does not depend on n ∈ N, we see that

    sup_{n∈N} lim_{l→∞} P( sup_{t≤T} |∫_0^t ⟨P_n(X(s) − X^l(s)), Z(s) dW(s)⟩_H| > ε ) ≤ 3/(kε).

The assertion follows from the arbitrariness of k ∈ N.

The previous results culminate in the proof of Claim C.

Claim C. Let the assumptions of Setting 3.3.13 be in force. We have

    sup_{t≤T} |∫_0^t ⟨X(s) − X^l(s), Z(s) dW(s)⟩_H| →^P 0    (3.54)

as l → ∞.


Proof. The result of Claim C follows from the "special" stochastic integral, which we defined in Lemma 2.3.8, being linear in the left slot. One can verify that, for arbitrary ε > 0 and every n ∈ N, we have

    lim_{l→∞} P( sup_{t≤T} |∫_0^t ⟨X(s) − X^l(s), Z(s) dW(s)⟩_H| > ε )
        ≤ P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X(s), Z(s) dW(s)⟩_H| > ε/3 )
        + sup_{l∈N} P( sup_{t≤T} |∫_0^t ⟨(id_H − P_n) X^l(s), Z(s) dW(s)⟩_H| > ε/3 )
        + sup_{n∈N} lim_{l→∞} P( sup_{t≤T} |∫_0^t ⟨P_n(X(s) − X^l(s)), Z(s) dW(s)⟩_H| > ε/3 ).

By Lemma 3.3.29, the third term is zero. The other terms vanish as n → ∞ by Lemma 3.3.25 and Lemma 3.3.26.
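The estimate rests on the pointwise decomposition obtained from the linearity of the integral in the left slot; explicitly,

```latex
\int_0^t \bigl\langle X(s)-X^l(s),\, Z(s)\,\mathrm{d}W(s)\bigr\rangle_H
  = \int_0^t \bigl\langle (\operatorname{id}_H-P_n)X(s),\, Z(s)\,\mathrm{d}W(s)\bigr\rangle_H
  - \int_0^t \bigl\langle (\operatorname{id}_H-P_n)X^l(s),\, Z(s)\,\mathrm{d}W(s)\bigr\rangle_H
  + \int_0^t \bigl\langle P_n\bigl(X(s)-X^l(s)\bigr),\, Z(s)\,\mathrm{d}W(s)\bigr\rangle_H,
```

so if the left-hand side exceeds ε for some t, at least one of the three terms on the right exceeds ε/3, and the union bound gives the displayed inequality.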

This claim has the following two consequences.

Corollary 3.3.30. There exists a set Ω_C of full probability and a subsequence {l_m : m ≥ 1} along which the convergence

    lim_{m→∞} [ sup_{t≤T} |∫_0^t ⟨X(s) − X^{l_m}(s), Z(s) dW(s)⟩_H| ] = 0

holds pointwise on Ω_C.

Corollary 3.3.31. For every t ∈ [0, T], we have

    ∫_0^t ⟨X^l(s), Z(s) dW(s)⟩_H →^P ∫_0^t ⟨X(s), Z(s) dW(s)⟩_H

as l → ∞.

This concludes the consequences of Claim C. We continue by proving auxiliary results that, combined, directly yield the proof of Claim D below.

Lemma 3.3.32. Under the assumptions of Setting 3.3.13, it holds that

    sup_{t∈[0,T]} |∫_0^t ⟨Y(s), X̄^l(s) − X̄(s)⟩_{V*×V} ds| →^P 0

as l → ∞.

Proof. Recall that X̄ is a V-valued λ ⊗ P-version of X, see Setting 3.3.13. Moreover, on the set Ω̃, a set of full probability, the map t ↦ ⟨Y(t), X̄^l(t)⟩_{V*×V} is well-defined, as we already remarked in the proof of Lemma 3.3.21.

We have, by Markov's inequality and Hölder's inequality,

    P( sup_{t∈[0,T]} |∫_0^t ⟨Y(s), X̄^l(s) − X̄(s)⟩_{V*×V} ds| > ε ) ≤ P( ∫_0^T ‖Y(s)‖_{V*} ‖X̄^l(s) − X̄(s)‖_V ds > ε )
        ≤ (1/ε) ( E[ ∫_0^T ‖Y(t)‖_{V*}^{α/(α−1)} dt ] )^{(α−1)/α} · ( E[ ∫_0^T ‖X̄^l(t) − X̄(t)‖_V^α dt ] )^{1/α}.

When we note that X̄ is a λ ⊗ P-version of X, we can apply convergence result (3.18) in Lemma 3.3.1 to observe that the latter factor goes to zero as l → ∞.


The linearity of the integral provides the following consequence.

Corollary 3.3.33. We have

    ∫_0^t ⟨Y(s), X̄^l(s)⟩_{V*×V} ds →^P ∫_0^t ⟨Y(s), X̄(s)⟩_{V*×V} ds

as l → ∞ for every t ∈ [0, T].

The following result is immediate from the continuity of the sample paths of the stochastic integral and the Riesz–Fréchet Theorem.

Lemma 3.3.34. Let the assumptions in Setting 3.3.13 be in force. We have the pointwise convergence

    ⟨X(0), ∫_0^{t_1^l} Z(s) dW(s)⟩_H → 0

as l → ∞ on Ω. Therefore, the convergence holds in probability as well.

Before we tackle Claim D below, we make the following observation.

Observation 3.3.35. Let t ∈ I′. Then, for all large enough l ∈ N, there exists a unique n(l) ∈ {1, . . . , k_l − 1} such that t = t_{n(l)}^l. With this observation, we can define

    ε_t := P-lim_{l→∞} Σ_{j=0}^{n(l)−1} ‖X(t_{j+1}^l) − X(t_j^l) − M(t_{j+1}^l) + M(t_j^l)‖²_H,

which we will use in showing that the Itô formula (3.34) holds.

The previous results on convergence in probability give rise to the next conclusion.

Corollary 3.3.36. Assume that the assumptions in Setting 3.3.13 hold. For arbitrary t ∈ I′, we have P-almost surely

    ε_t = 2 ∫_0^t ⟨Y(s), X̄(s)⟩_{V*×V} ds + 2 ∫_0^t ⟨X(s), Z(s) dW(s)⟩_H + ∫_0^t ‖Z(s)‖²_{L_2(U,H)} ds − ( ‖X(t)‖²_H − ‖X_0‖²_H ).

Proof. By Corollary 3.3.31, Corollary 3.3.33 and Lemma 3.3.34, we see that the first three terms in (3.38) in Lemma 3.3.19 converge in probability to

    2 ∫_0^t ⟨Y(s), X̄(s)⟩_{V*×V} ds + 2 ∫_0^t ⟨X(s), Z(s) dW(s)⟩_H,

and by Lemma 2.3.11, we find that

    Σ_{j=0}^{n(l)−1} ‖M(t_{j+1}^l) − M(t_j^l)‖²_H → ⟨M⟩_t = ∫_0^t ‖Z(s)‖²_{L_2(U,H)} ds   in L¹

as l → ∞; hence, it converges in probability as well. Obviously, we also have that, as l → ∞,

    ‖X(t)‖²_H − ‖X_0‖²_H →^P ‖X(t)‖²_H − ‖X_0‖²_H.

Rearranging (3.38) and taking into account these observations, we find that

    Σ_{j=0}^{n(l)−1} ‖X(t_{j+1}^l) − X(t_j^l) − M(t_{j+1}^l) + M(t_j^l)‖²_H
        →^P 2 ∫_0^t ⟨Y(s), X̄(s)⟩_{V*×V} ds + 2 ∫_0^t ⟨X(s), Z(s) dW(s)⟩_H + ∫_0^t ‖Z(s)‖²_{L_2(U,H)} ds − ( ‖X(t)‖²_H − ‖X_0‖²_H ),

as desired.
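The L¹-convergence of the squared-increment sums towards ⟨M⟩_t (Lemma 2.3.11) can be illustrated by Monte Carlo in the simplest scalar case Z ≡ 1, where M = W and ⟨M⟩_t = t. This is a stand-alone sketch, not code from the thesis, and the partition size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 2**16
# Increments of a scalar Brownian motion over a uniform partition of [0, t].
dW = rng.normal(0.0, np.sqrt(t / n), n)
qv = float(np.sum(dW**2))  # sum of squared martingale increments
assert abs(qv - t) < 0.05  # close to <M>_t = t for a fine partition
```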

Lemma 3.3.37. Assume that the assumptions in Setting 3.3.13 are in force. For arbitrary t ∈ I′, the random variable ε_t coincides P-almost surely with 0.

Proof. For convenience, let us write X_{j+1}^l for X(t_{j+1}^l) and X_j^l instead of X(t_j^l), with similar expressions for M. Moreover, we define Ω′ := Ω̃ ∩ Ω_Z ∩ Ω_Y ∩ Ω_B, which has full probability. The sets in Ω′ are defined in Setting 3.3.15 or Claim B. Let n ∈ N be arbitrary.

On the set Ω′ ⊂ Ω̃, we have for every j ≥ 0

    ‖(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l)‖²_H
        = ⟨(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l), (X_{j+1}^l − X_j^l) − [P_n(M_{j+1}^l − M_j^l)]⟩_H
        − ⟨(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l), [(id_H − P_n)(M_{j+1}^l − M_j^l)]⟩_H.

This means we can rewrite ε_t, yielding

    ε_t = P-lim_{l→∞} Σ_{j=0}^{n(l)−1} ‖X_{j+1}^l − X_j^l − (M_{j+1}^l − M_j^l)‖²_H
        = P-lim_{l→∞} Σ_{j=0}^{n(l)−1} [ ⟨(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l), (X_{j+1}^l − X_j^l) − [P_n(M_{j+1}^l − M_j^l)]⟩_H
        − ⟨(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l), [(id_H − P_n)(M_{j+1}^l − M_j^l)]⟩_H ].

For now, we will assume that, for every n ∈ N,

    P-lim_{l→∞} Σ_{j=0}^{n(l)−1} ⟨(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l), (X_{j+1}^l − X_j^l) − [P_n(M_{j+1}^l − M_j^l)]⟩_H = 0.    (3.55)

This assumption will be verified in Lemma 3.3.38 below. Then, using the Cauchy–Schwarz inequality and Hölder's inequality for sums, we have

    ε_t ≤ P-lim_{l→∞} Σ_{j=0}^{n(l)−1} ‖(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l)‖_H ‖(id_H − P_n)(M_{j+1}^l − M_j^l)‖_H
        ≤ P-lim_{l→∞} ( Σ_{j=0}^{n(l)−1} ‖(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l)‖²_H )^{1/2} ( Σ_{j=0}^{n(l)−1} ‖(id_H − P_n)(M_{j+1}^l − M_j^l)‖²_H )^{1/2}.

Since id_H − P_n is a bounded linear operator for every n ∈ N, we can apply Lemma 2.3.7 and obtain

    Σ_{j=0}^{n(l)−1} ‖(id_H − P_n)(M_{j+1}^l − M_j^l)‖²_H = Σ_{j=0}^{n(l)−1} ‖∫_{t_j^l}^{t_{j+1}^l} (id_H − P_n)(Z(s)) dW(s)‖²_H
        → ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds   in L¹    (3.56)

as l → ∞; the convergence result holds by Lemma 2.3.11. Hence, the convergence result (3.56) also holds in probability. We can then apply the continuous mapping theorem and find

    ( Σ_{j=0}^{n(l)−1} ‖(id_H − P_n)(M_{j+1}^l − M_j^l)‖²_H )^{1/2} →^P ( ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds )^{1/2},

whereas per definition, we have

    ( Σ_{j=0}^{n(l)−1} ‖(X_{j+1}^l − X_j^l) − (M_{j+1}^l − M_j^l)‖²_H )^{1/2} →^P ε_t^{1/2}

when l → ∞. Hence, we find that the following inequality holds:

    ε_t ≤ ε_t^{1/2} · ( ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds )^{1/2}.

On the set {ε_t = 0}, we trivially find that ε_t = 0, whereas on the set {ε_t > 0}, we can divide both sides by the square root of ε_t, obtaining

    ε_t^{1/2} ≤ ( ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds )^{1/2}

for every n ∈ N; note that this inequality holds on the set {ε_t = 0} as well. We will now derive that the set {ε_t > 0} is a P-null set using this inequality. First, we take squares on both sides of the inequality and obtain

    ε_t ≤ ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds.    (3.57)

This inequality holds for every n ∈ N. Moreover, the right-hand side of (3.57) converges to 0 pointwise on Ω′ ⊂ Ω_Z as n → ∞, in view of (3.32) in Lemma 3.3.16; in particular, it converges to 0 in probability. Then, for every δ > 0, we have

    P(ε_t > δ) ≤ P( ∫_0^t ‖(id_H − P_n)Z(s)‖²_{L_2(U,H)} ds > δ ) → 0

as n → ∞. Since δ > 0 was arbitrary, we conclude that ε_t coincides P-almost surely with 0.

We will now verify that assumption (3.55) in Lemma 3.3.37 indeed holds.

Lemma 3.3.38 (Verification of the assumption in Lemma 3.3.37). Assumption (3.55) is valid.

Proof. Take arbitrary $t \in I'$. We use the notation in Observation 3.3.35 and the first line in the proof of Lemma 3.3.37. We will show that both sums

\[
\sum_{j=0}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), X^l_{j+1} - X^l_j \right\rangle_H, \qquad \sum_{j=0}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), P_n \left( M^l_{j+1} - M^l_j \right) \right\rangle_H \tag{3.58}
\]


vanish in probability as $l \to \infty$, starting with the first one.

On the set $\bar{\Omega} \subset \Omega_Y$, we have

\[
\begin{aligned}
&\sum_{j=0}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), X^l_{j+1} - X^l_j \right\rangle_H \\
&\quad = \left\langle \left( X^l_1 - X^l_0 \right) - \left( M^l_1 - M^l_0 \right), X^l_1 \right\rangle_H - \left\langle \left( X^l_1 - X^l_0 \right) - \left( M^l_1 - M^l_0 \right), X^l_0 \right\rangle_H \\
&\qquad + \sum_{j=1}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), X^l_{j+1} - X^l_j \right\rangle_H \\
&\quad = \int_0^{t^l_1} \left\langle Y(s), X^l_1 \right\rangle_{V^* \times V} \mathrm{d}s - \left\langle \int_0^{t^l_1} Y(s) \, \mathrm{d}s, X(0) \right\rangle_H + \sum_{j=1}^{n(l)-1} \int_{t^l_j}^{t^l_{j+1}} \left\langle Y(s), X^l_{j+1} - X^l_j \right\rangle_{V^* \times V} \mathrm{d}s
\end{aligned} \tag{3.59}
\]

by (3.37) on page 61.

One can deduce that on $\Omega'$, the process $\bar{X}^l - \underline{X}^l$ (see (3.17) in Lemma 3.3.1) coincides on the interval $[0,t)$ with

\[
1_{[0,t^l_1)} X \left( t^l_1 \right) + \sum_{j=1}^{n(l)-1} 1_{[t^l_j, t^l_{j+1})} \left( X \left( t^l_{j+1} \right) - X \left( t^l_j \right) \right).
\]

Hence, on the set $\Omega'$, we have

\[
\int_0^{t^l_1} \left\langle Y(s), X^l_1 \right\rangle_{V^* \times V} \mathrm{d}s + \sum_{j=1}^{n(l)-1} \int_{t^l_j}^{t^l_{j+1}} \left\langle Y(s), X^l_{j+1} - X^l_j \right\rangle_{V^* \times V} \mathrm{d}s = \int_0^t \left\langle Y(s), \bar{X}^l(s) - \underline{X}^l(s) \right\rangle_{V^* \times V} \mathrm{d}s. \tag{3.60}
\]

Plugging (3.60) into (3.59), we find on $\Omega'$,

\[
\sum_{j=0}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), X^l_{j+1} - X^l_j \right\rangle_H = \int_0^t \left\langle Y(s), \bar{X}^l(s) - \underline{X}^l(s) \right\rangle_{V^* \times V} \mathrm{d}s - \left\langle \int_0^{t^l_1} Y(s) \, \mathrm{d}s, X(0) \right\rangle_H. \tag{3.61}
\]

When we apply Markov's inequality to the first term, followed by a Hölder-type argument like (3.42) on page 64, an "adding zero" argument and convergence result (3.18) in Lemma 3.3.1, we see that the first term converges to 0 in probability.

On the set $\Omega' \subset \Omega_B$, the process $X$ is $H$-valued. The random variable $X_0$ and the stochastic integral $M := \int_0^{\cdot} Z(s) \, \mathrm{d}W(s)$ are $H$-valued on $\Omega_B$ as well. Hence, on $\Omega' \subset \Omega_B$, the sample path $t \mapsto \int_0^t Y(s, \omega) \, \mathrm{d}s$ is $H$-valued for each $\omega \in \Omega'$. We know that the map $[0,T] \to V^* : t \mapsto \int_0^t Y(s, \omega) \, \mathrm{d}s$ is continuous in $V^*$ for every $\omega \in \Omega'$. By continuity, we therefore have that $\int_0^{t^l_1} Y(s) \, \mathrm{d}s \to 0_{V^*} = 0_H$ as $l \to \infty$. Hence, we find by invoking Proposition 1.1.40 that

\[
\left\langle \int_0^{t^l_1} Y(s) \, \mathrm{d}s, X(0) \right\rangle_H \to 0
\]

pointwise on $\Omega'$, hence, in probability. This establishes the desired result for the first series in (3.58).


If we define $\bar{M}^l$ and $\underline{M}^l$ like $\bar{X}^l$ and $\underline{X}^l$, but with $X$ replaced by $M$, we can use the calculation (3.61) above (and tacitly employ the linearity of the operator $P_n$) to find that

\[
\sum_{j=0}^{n(l)-1} \left\langle \left( X^l_{j+1} - X^l_j \right) - \left( M^l_{j+1} - M^l_j \right), P_n \left( M^l_{j+1} - M^l_j \right) \right\rangle_H = \int_0^t \left\langle Y(s), P_n \left( \bar{M}^l(s) - \underline{M}^l(s) \right) \right\rangle_{V^* \times V} \mathrm{d}s - \left\langle \int_0^{t^l_1} Y(s) \, \mathrm{d}s, P_n ( M(0) ) \right\rangle_H. \tag{3.62}
\]

Note that the second term on the right-hand side vanishes on $\Omega'$ for every large enough $l \in \mathbb{N}$, as $M(0) = 0_H$, and so, $P_n(M(0)) = 0_V = 0_H$.

We have $\bar{M}^l(s, \omega) \to M(s, \omega)$ for all $s \in [0,t]$ by continuity of the sample paths of $M$. By the continuity of $P_n$, it follows that

\[
\lim_{l \to \infty} \left\| P_n \left( \bar{M}^l(s, \omega) - M(s, \omega) \right) \right\|_V^{\alpha} = 0
\]

for every $s \in [0,t]$ and $\omega \in \Omega'$. Moreover, again by continuity, we know that the map $[0,t] \to [0,\infty) : s \mapsto \left\| P_n ( M(s, \omega) ) \right\|_V^{\alpha}$ is uniformly bounded for every $\omega \in \Omega'$.

Then, for arbitrary $\omega \in \Omega' \subset \Omega_Y$, we have by Hölder's inequality and Lebesgue's Dominated Convergence Theorem,

\[
\int_0^t \left| \left\langle Y(s, \omega), P_n \left( \bar{M}^l(s, \omega) - M(s, \omega) \right) \right\rangle_{V^* \times V} \right| \mathrm{d}s \leq \left( \int_0^t \| Y(s, \omega) \|_{V^*}^{\frac{\alpha}{\alpha-1}} \, \mathrm{d}s \right)^{\frac{\alpha-1}{\alpha}} \left( \int_0^t \left\| P_n \left( \bar{M}^l(s, \omega) - M(s, \omega) \right) \right\|_V^{\alpha} \mathrm{d}s \right)^{\frac{1}{\alpha}} \to 0
\]

as $l \to \infty$. Analogously, we have

\[
\int_0^t \left| \left\langle Y(s), P_n \left( \underline{M}^l(s) - M(s) \right) \right\rangle_{V^* \times V} \right| \mathrm{d}s \to 0
\]

pointwise on $\Omega'$. By an "adding zero" argument and the triangle inequality, we find that the first term on the right-hand side of (3.62) converges $\mathbb{P}$-almost surely, hence in probability, to 0.

Ultimately, this shows that assumption (3.55) holds, as desired.

The next result follows from the uniqueness of limits in probability up to $\mathbb{P}$-null sets.

Claim D. Let the assumptions in Setting 3.5.1 be in force. For every $t \in I' \cup \{0\}$, the Itô formula (3.34) holds $\mathbb{P}$-almost surely.

Proof. For $t = 0$, the identity is trivial. When $t \in I'$, the assertion holds by Corollary 3.3.36 and Lemma 3.3.37.

Corollary 3.3.39. The Itô formula (3.34) holds $\mathbb{P}$-almost surely for every $t \in I'$ and therefore pointwise on a set $\Omega_D(t)$ of full probability for each $t \in I'$. By the countability of the set $I'$, the set $\Omega_D := \bigcap_{t \in I'} \Omega_D(t)$ has full probability. Moreover, on this set, the Itô formula (3.34) holds pointwise for all $t \in I'$ simultaneously.

Before we tackle the next claim, we will take time to sketch the setting in which we will work.

Setting 3.3.40 (Setting for Claim E). Let $t \in (0,T] \setminus I'$. Then, there exists an $L \in \mathbb{N}$ such that for all $l \geq L$, there is a unique $n(l) \in \{0, \ldots, k_l - 1\}$ such that $t \in \left( t^l_{n(l)}, t^l_{n(l)+1} \right]$ and $t^l_{n(l)} > 0$. Let us define $t(l) := t^l_{n(l)}$ for all $l \geq L$. As the mesh of the sequence of partitions $\{ I_l : l \geq 1 \}$ tends


to zero, we know that $t(l) \to t$ as $l \to \infty$; moreover, the sequence $\{ t(l) : l \geq 1 \}$ is increasing, as every partition contains the previous one, and bounded by $t$.

Let $\Omega_B$ be as in Claim B. Let $\Omega_C$ and $\{ l_{\ell} : \ell \geq 1 \}$ be the set and the subsequence we obtain from Corollary 3.3.30. Also consider the set $\Omega_D$ defined above (Corollary 3.3.39) and the sets $\bar{\Omega}$, $\Omega_Z$, and $\Omega^{\alpha}_{X}$ defined in Setting 3.3.15. These sets are all independent of $t$.

When we look back at Lemma 3.3.32, we can find a further subsequence, denoted $\{ l_{\ell_i} : i \geq 1 \}$, and a set $\Omega'$ of full probability on which

\[
\sup_{t \in [0,T]} \left| \int_0^t \left\langle Y(s), \bar{X}^{l_{\ell_i}}(s) - \bar{X}(s) \right\rangle_{V^* \times V} \mathrm{d}s \right| \to 0
\]

pointwise as $i \to \infty$. Note that $\Omega'$ does not depend on $t$. In what follows, we will write $j$ instead of $l_{\ell_j}$ and $m$ instead of $l_{\ell_m}$ for brevity purposes.

Define the "master set"

\[
\Omega_E := \Omega_B \cap \Omega_C \cap \bar{\Omega} \cap \Omega_Z \cap \Omega_Y \cap \Omega' \cap \Omega^{\alpha}_{X}.
\]

This set of full probability will glue many arguments together. Observe that by (1.7) on page 11 and the dualization property (3.4) on page 45, we have

\[
\| u - v \|_H^2 = \| u \|_H^2 - \| v \|_H^2 - 2 \langle u - v, v \rangle_{V^* \times V} \tag{3.63}
\]

for all $u, v \in V$.
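For completeness, identity (3.63) follows from a direct expansion; the only ingredient is that, by the dualization property (3.4), the pairing may be replaced by the $H$-inner product on $H \times V$:

```latex
\|u\|_H^2-\|v\|_H^2-2\langle u-v,v\rangle_{V^*\times V}
  =\|u\|_H^2-\|v\|_H^2-2\langle u-v,v\rangle_H
  =\|u\|_H^2-2\langle u,v\rangle_H+\|v\|_H^2
  =\|u-v\|_H^2.
```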

Lemma 3.3.41. Consider the assumptions in Setting 3.3.13 and the assumptions and notation in Setting 3.3.40. Then, on $\Omega_E$, it holds that

\[
\lim_{m \to \infty} \| X(t(m)) \|_H = \| X(t) \|_H
\]

for every $t \in (0,T] \setminus I'$.

Proof. Fix arbitrary $t \in (0,T] \setminus I'$. We have by (3.63) on $\Omega_E \subset \bar{\Omega}$, for all $j, m \geq L$ with $j > m$,

\[
\| X(t(j)) - X(t(m)) \|_H^2 = \| X(t(j)) \|_H^2 - \| X(t(m)) \|_H^2 - 2 \langle X(t(j)) - X(t(m)), X(t(m)) \rangle_{V^* \times V}.
\]

By Claim D and the Itô formula (3.34), the right-hand side equals

\[
\begin{aligned}
&\int_{t(m)}^{t(j)} \left( 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right) \mathrm{d}s + 2 \int_{t(m)}^{t(j)} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H \\
&\quad - 2 \left\langle \int_{t(m)}^{t(j)} Y(s) \, \mathrm{d}s + \int_{t(m)}^{t(j)} Z(s) \, \mathrm{d}W(s), X(t(m)) \right\rangle_{V^* \times V}.
\end{aligned} \tag{3.64}
\]

We will look at the last term. By Corollary 2.3.10, we have by dualization property (3.4) on page 45 and the implicit use of the symmetry of the real inner product (compare (3.40) on page 63)

\[
\left\langle \int_{t(m)}^{t(j)} Z(s) \, \mathrm{d}W(s), X(t(m)) \right\rangle_{V^* \times V} = \int_{t(m)}^{t(j)} \langle X(t(m)), Z(s) \, \mathrm{d}W(s) \rangle_H. \tag{3.65}
\]

As before (compare (3.37) on page 61), we have on $\Omega_E \subset \Omega_Y \cap \bar{\Omega}$ by Lemma 1.2.6

\[
\left\langle \int_{t(m)}^{t(j)} Y(s) \, \mathrm{d}s, X(t(m)) \right\rangle_{V^* \times V} = \int_{t(m)}^{t(j)} \langle Y(s), X(t(m)) \rangle_{V^* \times V} \, \mathrm{d}s. \tag{3.66}
\]


Rewriting (3.64) with observations (3.65) and (3.66), we obtain

\[
\begin{aligned}
\| X(t(j)) - X(t(m)) \|_H^2 &= 2 \int_{t(m)}^{t(j)} \left\langle Y(s), \bar{X}(s) - X(t(m)) \right\rangle_{V^* \times V} \mathrm{d}s + 2 \int_{t(m)}^{t(j)} \langle X(s) - X(t(m)), Z(s) \, \mathrm{d}W(s) \rangle_H \\
&\quad + \int_{t(m)}^{t(j)} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s.
\end{aligned} \tag{3.67}
\]

By construction of the partitions and our choice of $t(j)$ and $t(m)$, we have

\[
\left( t^j_{n(j)}, t^j_{n(j)+1} \right) \subset \left( t^m_{n(m)}, t^m_{n(m)+1} \right) \implies \left( t^m_{n(m)}, t^j_{n(j)} \right) \subset \left( t^m_{n(m)}, t^m_{n(m)+1} \right).
\]

On the latter interval, the process $\underline{X}^m$ coincides with $X(t(m))$. Plugging this into (3.67) gives

\[
\begin{aligned}
\| X(t(j)) - X(t(m)) \|_H^2 &= 2 \int_{t(m)}^{t(j)} \left\langle Y(s), \bar{X}(s) - \underline{X}^m(s) \right\rangle_{V^* \times V} \mathrm{d}s + 2 \int_{t(m)}^{t(j)} \left\langle X(s) - \underline{X}^m(s), Z(s) \, \mathrm{d}W(s) \right\rangle_H \\
&\quad + \int_{t(m)}^{t(j)} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s.
\end{aligned} \tag{3.68}
\]

Note that

\[
2 \sup_{j > m} \left| \int_{t(m)}^{t(j)} \left\langle X(s) - \underline{X}^m(s), Z(s) \, \mathrm{d}W(s) \right\rangle_H \right| \leq 4 \sup_{t \in [0,T]} \left| \int_0^t \left\langle X(s) - \underline{X}^m(s), Z(s) \, \mathrm{d}W(s) \right\rangle_H \right| \to 0 \tag{3.69}
\]

pointwise on $\Omega_E \subset \Omega_C$ as $m \to \infty$. Moreover, on the set $\Omega_E \subset \Omega_Z$, we have

\[
2 \sup_{j > m} \left| \int_{t(m)}^{t(j)} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s \right| \leq 2 \int_{t(m)}^{t} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s \to 0 \tag{3.70}
\]

as $t(m) \to t$ for $m \to \infty$ by Lebesgue's Dominated Convergence Theorem. In addition,

\[
\sup_{j > m} \left| \int_{t(m)}^{t(j)} \left\langle Y(s), \bar{X}(s) - \underline{X}^m(s) \right\rangle_{V^* \times V} \mathrm{d}s \right| \leq \sup_{t \in [0,T]} \int_0^t \left| \left\langle Y(s), \bar{X}(s) - \underline{X}^m(s) \right\rangle_{V^* \times V} \right| \mathrm{d}s \to 0 \tag{3.71}
\]

pointwise on $\Omega_E \subset \Omega'$ when $m \to \infty$.

Applying observations (3.69), (3.70) and (3.71) to (3.68), we find that on $\Omega_E$, the following convergence holds pointwise:

\[
\lim_{m \to \infty} \sup_{j > m} \| X(t(j)) - X(t(m)) \|_H^2 = 0.
\]

This means that $\{ X(t(m), \omega) : m \geq L \} \subset H$ is a Cauchy (hence, convergent) sequence for every $\omega \in \Omega_E$. Moreover, as the process $X$ is continuous in $V^*$ and $t(m) \to t$ as $m \to \infty$, we have $\| X(t, \omega) - X(t(m), \omega) \|_{V^*} \to 0$ as $m \to \infty$ for every $\omega \in \Omega_E$. In view of Lemma 1.1.6 and the continuity of the map $J$, defined in (3.3) on page 44, we also have that $\| X(t) - X(t(m)) \|_H \to 0$ on $\Omega_E$. Using the reversed triangle inequality yields

\[
\left| \, \| X(t) \|_H - \| X(t(m)) \|_H \, \right| \to 0
\]

as $m \to \infty$ on $\Omega_E$. This proves the assertion.


Claim E. Consider the assumptions in Setting 3.3.13 and the assumptions and notation in Setting 3.3.40. For every $t \in (0,T] \setminus I'$, the Itô formula (3.34) holds pointwise on $\Omega_E$.

Proof. Take arbitrary $t \in (0,T] \setminus I'$. Let $L \in \mathbb{N}$ be the one constructed in Setting 3.3.40. Now, by Claim D, we know that for every $m \geq L$, it holds that

\[
\| X(t(m)) \|_H^2 = \| X_0 \|_H^2 + \int_0^{t(m)} \left( 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right) \mathrm{d}s + 2 \int_0^{t(m)} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H.
\]

By Lemma 3.3.41, we know that on $\Omega_E$, we have $\| X(t(m)) \|_H^2 \to \| X(t) \|_H^2$ as $m \to \infty$. To finish off Claim E, we want to conclude that the right-hand side converges on $\Omega_E$ to the corresponding expression with upper limit $t$. This, however, follows from Lemma 3.3.42 below.

Lemma 3.3.42. Consider the assumptions in Setting 3.3.13 and the assumptions and notation in Setting 3.3.40. Let $\{ t_n : n \geq 1 \} \subset [0,T]$ be an arbitrary convergent sequence with limit $t$. Then, on the set $\Omega_E$, the convergence result

\[
\begin{aligned}
&\| X_0 \|_H^2 + \int_0^{t_n} \left( 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right) \mathrm{d}s + 2 \int_0^{t_n} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H \\
&\quad \to \| X_0 \|_H^2 + \int_0^{t} \left( 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right) \mathrm{d}s + 2 \int_0^{t} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H
\end{aligned}
\]

holds pointwise as $n \to \infty$.

Proof. First, note that on $\Omega_E \subset \Omega^{\alpha}_{X} \cap \Omega_Y$, we have by Hölder's inequality

\[
\begin{aligned}
&\left| \int_0^t \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} \mathrm{d}s - \int_0^{t_n} \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} \mathrm{d}s \right| \leq \int_{t_n \wedge t}^{t} \| Y(s) \|_{V^*} \left\| \bar{X}(s) \right\|_V \mathrm{d}s + \int_{t}^{t_n \vee t} \| Y(s) \|_{V^*} \left\| \bar{X}(s) \right\|_V \mathrm{d}s \\
&\quad \leq \left( \int_0^T \| Y(s) \|_{V^*}^{\frac{\alpha}{\alpha-1}} \, \mathrm{d}s \right)^{\frac{\alpha-1}{\alpha}} \cdot \left( \int_{t_n \wedge t}^{t} \left\| \bar{X}(s) \right\|_V^{\alpha} \mathrm{d}s \right)^{\frac{1}{\alpha}} + \left( \int_0^T \| Y(s) \|_{V^*}^{\frac{\alpha}{\alpha-1}} \, \mathrm{d}s \right)^{\frac{\alpha-1}{\alpha}} \cdot \left( \int_{t}^{t_n \vee t} \left\| \bar{X}(s) \right\|_V^{\alpha} \mathrm{d}s \right)^{\frac{1}{\alpha}} \to 0
\end{aligned}
\]

as $n \to \infty$ by Lebesgue's Dominated Convergence Theorem. In similar fashion, it holds that $\int_0^{t_n} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s \to \int_0^{t} \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s$ as $n \to \infty$ on $\Omega_E \subset \Omega_Z$.

Furthermore, by the pathwise continuity of $\int_0^{\cdot} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H$ on $\Omega_E$, we know that $\int_0^{t_n} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H \to \int_0^{t} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H$ as $n \to \infty$.

The assertion follows.

Recall that the set $\Omega_E$ we constructed in Setting 3.3.40 is independent of $t \in [0,T]$. The same goes for the set $\Omega_D$ in Corollary 3.3.39. We obtain the following result.

Corollary 3.3.43. The Itô formula (3.34) holds pointwise on the set $\Omega^* := \Omega_D \cap \Omega_E$ of full probability for all $t \in [0,T]$ simultaneously.

In view of Corollary 3.3.43 and Lemma 3.3.42, the next result holds.

Corollary 3.3.44. In particular, we have $\lim_{n \to \infty} \| X(t_n) \|_H^2 = \| X(t) \|_H^2$ pointwise on $\Omega^* \subset \Omega_E$.


We will use this in proving the second-to-last tool that we will use in the proof of Theorem 3.3.17.

Claim F. The map $([0,T], | \cdot |) \to (H, \| \cdot \|_H) : t \mapsto X(t, \omega)$ is continuous for every $\omega \in \Omega^*$ (the latter set is defined in Corollary 3.3.43 above); in other words, the process $X$ has $\mathbb{P}$-almost surely continuous sample paths in $H$.

Proof. Take arbitrary $\omega \in \Omega^*$. Let $\{ t_n : n \geq 1 \} \subset [0,T]$ be an arbitrary convergent sequence with limit $t$. From Corollary 3.3.44, we obtain $\lim_{n \to \infty} \| X(t_n, \omega) \|_H^2 = \| X(t, \omega) \|_H^2$.

As $\Omega^* \subset \Omega_B$ and since $H \subset V^*$ is dense, we know that the map $t \mapsto X(t, \omega)$ is a weakly continuous map from $[0,T]$ into $H$ by Proposition 1.1.40 for every $\omega \in \Omega^*$.

We conclude that

\[
\| X(t_n, \omega) - X(t, \omega) \|_H^2 = \| X(t_n, \omega) \|_H^2 - 2 \langle X(t_n, \omega), X(t, \omega) \rangle_H + \| X(t, \omega) \|_H^2 \to \| X(t, \omega) \|_H^2 - 2 \langle X(t, \omega), X(t, \omega) \rangle_H + \| X(t, \omega) \|_H^2 = 0
\]

as $n \to \infty$.

Before we tackle the proof of Theorem 3.3.17, we "update" the process $X$ by invoking the results we derived thus far.

Observation 3.3.45. On the set $\Omega^*$ of full probability, we see that the process $X$ has continuous paths in $H$. Now, we set the sample paths for all $\omega \notin \Omega^*$ equal to zero, and we obtain $X' := 1_{\Omega^*} X$. Then, the process $X'$ is continuous in $H$ (hence, also in $V^*$) and it is adapted by Lemma 2.1.4, as it is indistinguishable from the $V^*$-continuous $\mathbb{F}$-adapted process $X$. In addition, since both $X$ and $X'$ are measurable and coincide $\lambda \otimes \mathbb{P}$-almost everywhere, we see that $\bar{X}$ also coincides $\lambda \otimes \mathbb{P}$-almost everywhere with $X'$.

We have supplied ourselves with the necessary tools to tackle Theorem 3.3.17 at once.

Proof of Theorem 3.3.17. From Observation 3.3.45, we obtain that $X'$ is a continuous $H$-valued $\mathbb{F}$-adapted process. Moreover, it satisfies Claim B as well, so the integrability condition (3.33) in Theorem 3.3.17 holds. In addition, since the Itô formula holds pointwise on $\Omega^*$ for all $t \in [0,T]$ simultaneously by Corollary 3.3.43, nothing would change if we were to replace $X$ by $X'$. This proves that Theorem 3.3.17 holds for the process $X'$. In view of Observation 3.3.45, it is therefore allowed to replace $X$ by $X'$ in what follows.

Theorem 3.3.17 implies the following results.

Lemma 3.3.46. The real-valued stochastic integral $\int_0^{\cdot} \langle X(s), Z(s) \, \mathrm{d}W(s) \rangle_H$ is a martingale starting at zero under the assumptions of Setting 3.5.1.

Proof. We want to apply Proposition 1.3.4. By Hölder's inequality, Theorem 3.3.17 and the assumption that $Z \in L^2([0,T] \times \Omega; L_2(U,H))$, we have

\[
\mathbb{E} \left[ \left( \left[ \sup_{s \in [0,T]} \| X(s) \|_H^2 \right] \cdot \int_0^T \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s \right)^{\frac{1}{2}} \right] \leq \left( \mathbb{E} \left[ \sup_{s \in [0,T]} \| X(s) \|_H^2 \right] \right)^{\frac{1}{2}} \left( \mathbb{E} \left[ \int_0^T \| Z(s) \|_{L_2(U,H)}^2 \, \mathrm{d}s \right] \right)^{\frac{1}{2}} < \infty.
\]

When we look back at Corollary 2.3.12, and (2.12) in particular, we see that the conditions in Proposition 1.3.4 are met. The assertion follows.


Corollary 3.3.47. Assume that the assumptions of Setting 3.5.1 are in force. Then, it holds that

\[
\mathbb{E} \left[ \| X(t) \|_H^2 \right] = \mathbb{E} \left[ \| X_0 \|_H^2 \right] + \int_0^t \mathbb{E} \left[ 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right] \mathrm{d}s \tag{3.72}
\]

for all $t \in [0,T]$.

Proof. Take expectations on both sides of the Itô formula (3.34) and apply Lemma 3.3.46. Then, use Fubini's theorem to interchange the integral and the expectation: the use of Fubini's theorem is permitted by applying Hölder's inequality to $Y \in L^{\frac{\alpha}{\alpha-1}}([0,T] \times \Omega; V^*)$ and $\bar{X} \in L^{\alpha}([0,T] \times \Omega; V)$ and using that $Z \in L^2([0,T] \times \Omega; L_2(U,H))$.

The last result we will discuss in this section is a consequence of (3.72) and Lemma 1.4.8.

Corollary 3.3.48. The map

\[
[0,T] \to \mathbb{R} : t \mapsto \mathbb{E} \left[ \| X(t) \|_H^2 \right] = \mathbb{E} \left[ \| X_0 \|_H^2 \right] + \int_0^t \mathbb{E} \left[ 2 \left\langle Y(s), \bar{X}(s) \right\rangle_{V^* \times V} + \| Z(s) \|_{L_2(U,H)}^2 \right] \mathrm{d}s
\]

is a continuous function of bounded variation.

3.4 Existence and uniqueness of strong solutions to SDEs in finite dimensions

In this section, we will give existence and uniqueness results for solutions to SDEs in finite dimensions. First, we will sketch the setting in which we work. Then, we define what a solution is and in which way uniqueness is understood. We conclude this section with an existence and uniqueness result. This section is based on Chapter 3 in [23] and Chapter 10 in [33].

Setting 3.4.1. Let $T > 0$ be a finite time horizon and let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space endowed with a normal filtration $\mathbb{F} := \{ \mathcal{F}_t : t \leq T \}$. Let $d \in \mathbb{N}$ and consider an $\mathbb{R}^d$-valued Brownian motion $W := \{ W(t) : t \leq T \}$. Denote the standard basis of $\mathbb{R}^d$ by $e_1, \ldots, e_d$.

Denote by $\mathcal{M}(d;\mathbb{R})$ the collection of all $d \times d$-matrices with real entries and endow this space with the following norm:

\[
\| A \|_{\mathcal{M}(d;\mathbb{R})} := \left( \sum_{i=1}^{d} \sum_{j=1}^{d} | A(i,j) |^2 \right)^{\frac{1}{2}}.
\]

Let the maps $b$ and $\sigma$, defined by

\[
b : [0,T] \times \mathbb{R}^d \times \Omega \to \mathbb{R}^d \tag{3.73}
\]
\[
\sigma : [0,T] \times \mathbb{R}^d \times \Omega \to \mathcal{M}(d;\mathbb{R}) \tag{3.74}
\]

be progressively measurable for each fixed $x \in \mathbb{R}^d$ and continuous in the second variable for each fixed $t \in [0,T]$ and $\omega \in \Omega$. Moreover, assume that the maps $b$ and $\sigma$ satisfy the following integrability condition:

\[
\int_0^T \sup \left\{ \| b(t,x) \|_{\mathbb{R}^d} + \| \sigma(t,x) \|_{\mathcal{M}(d;\mathbb{R})}^2 : x \in \mathbb{R}^d, \ \| x \|_{\mathbb{R}^d} \leq N \right\} \mathrm{d}t < \infty \tag{3.75}
\]

$\mathbb{P}$-almost surely for all $N \in \mathbb{N}$, and let $\xi : \Omega \to \mathbb{R}^d$ be an $\mathcal{F}_0$-measurable random variable. Under these conditions, the SDE

\[
\mathrm{d}X(t) = b(t, X(t)) \, \mathrm{d}t + \sigma(t, X(t)) \, \mathrm{d}W(t) \tag{3.76}
\]
\[
X(0) = \xi \tag{3.77}
\]

has a meaningful interpretation.


Definition 3.4.2 (Strong solution). Under the conditions of Setting 3.4.1, we call a process $X = \{ X(t) : t \leq T \}$ a strong solution to the SDE (3.76) (with initial condition $\xi$) if (3.77) holds $\mathbb{P}$-almost surely and $X$ satisfies the following conditions:

(i) the process $X$ has continuous paths $\mathbb{P}$-almost surely;

(ii) the process $X$ is $\mathbb{F}$-adapted;

(iii) it holds that

\[
\int_0^T \left( \| b(s, X(s)) \|_{\mathbb{R}^d} + \| \sigma(s, X(s)) \|_{\mathcal{M}(d;\mathbb{R})}^2 \right) \mathrm{d}s < +\infty
\]

$\mathbb{P}$-almost surely;

(iv) the process $X$ satisfies (3.76) $\mathbb{P}$-almost surely for each $t \in [0,T]$.

Theorem 3.4.3 (Theorem 3.1.1 in [23]). Let the conditions in Setting 3.4.1 be in force. Assume that, for every $N \in \mathbb{N}$, we have a non-negative, $\mathbb{F}$-adapted process $K_N := \{ K_N(t) : t \leq T \}$ which satisfies

\[
\gamma_N := \int_0^T K_N(t) \, \mathrm{d}t < \infty \tag{3.78}
\]

$\mathbb{P}$-almost surely. If for arbitrary $t \in [0,T]$ and $N \in \mathbb{N}$, the following inequalities hold on $\Omega$ for all $x, y \in \left\{ v \in \mathbb{R}^d : \| v \|_{\mathbb{R}^d} \leq N \right\}$,

\[
2 \langle x - y, b(t,x) - b(t,y) \rangle_{\mathbb{R}^d} + \| \sigma(t,x) - \sigma(t,y) \|_{\mathcal{M}(d;\mathbb{R})}^2 \leq K_N(t) \cdot \| x - y \|_{\mathbb{R}^d}^2 \tag{3.79}
\]
\[
2 \langle x, b(t,x) \rangle_{\mathbb{R}^d} + \| \sigma(t,x) \|_{\mathcal{M}(d;\mathbb{R})}^2 \leq K_1(t) \cdot \left( 1 + \| x \|_{\mathbb{R}^d}^2 \right) \tag{3.80}
\]

then the SDE (3.76) has a solution that is unique up to $\mathbb{P}$-indistinguishability for every $\mathcal{F}_0$-measurable square-integrable initial condition $\xi : \Omega \to \mathbb{R}^d$.

3.5 Proof of the main result

Before we can tackle our main result, Theorem 3.2.3, we will sketch the setting in which we will work. After sketching this setting, we make three useful observations (Observations 3.5.2–3.5.4) and argue that the finite-dimensional SDE (3.84) admits a strong solution in the sense of Definition 3.4.2 for every $n \in \mathbb{N}$; see Lemma 3.5.5. The last result has several consequences and gives rise to a surprisingly useful localizing sequence of stopping times (Lemma 3.5.9). This sequence of stopping times will be used in Lemma 3.5.11, which is based on Lemma 4.2.9 in [23].

Lemma 3.5.11 has a very important consequence, namely Corollary 3.5.14 in Section 3.5.2, which is an extended discussion of the start of the proof of Theorem 4.2.4 (pages 104 and 105) in [23] and will be used throughout the proof of Theorem 3.2.3.

Later, in Section 3.5.3, we will start by giving a step-by-step plan of how we will prove Theorem 3.2.3, breaking its proof up into bite-sized chunks as we did before with the proofs of Lemma 3.3.1 and Theorem 3.3.17. Then, we will derive the desired result. If one wishes to skip ahead: the proof of existence is given in Conclusion 3.5.29 and the proof of uniqueness is found in Lemma 3.5.30.


3.5.1 Preparations for the proof of Theorem 3.2.3

We need some preparations before we prove Theorem 3.2.3. This section is an extended discussion of pages 102 and 103 in [23].

Setting 3.5.1 (Preparations for the proof of Theorem 3.2.3). The Banach space $(V, \| \cdot \|_V)$ in the Gelfand triple $V \subset H \subset V^*$ is a linear subspace of the separable Hilbert space $H$ that is dense with respect to the $H$-norm. Therefore, there exists a countable orthonormal basis $\{ v^1_k : k \geq 1 \} \subset V$ of $H$ consisting of elements of $V$ by Proposition 1.1.12. Moreover, as $(V, \| \cdot \|_V)$ is separable, there also exists a countable subset $\{ v^2_k : k \in \mathbb{N} \}$ that is dense in $V$ with respect to the $V$-norm. Using the Gram–Schmidt orthonormalization procedure, these two sequences can be merged into one sequence $\{ v_k : k \in \mathbb{N} \} =: \mathcal{H}$ such that $\mathcal{H}$ is an orthonormal basis of $H$ consisting of elements of $V$ and such that $\mathrm{Sp}(\mathcal{H})$ is dense in $V$ with respect to the $V$-norm. We will define $\mathcal{H}_n := \{ v_k : k \leq n \}$ and $H_n := \mathrm{Sp}(\mathcal{H}_n)$. We note that $\mathrm{Sp}(\mathcal{H}) = \bigcup_{n \geq 1} H_n$.

Define the linear operator $P_n$ between the vector spaces $V^*$ and $H_n$ through

\[
P_n : V^* \to H_n : \ell \mapsto \sum_{k=1}^{n} \langle \ell, v_k \rangle_{V^* \times V} \, v_k.
\]

Note that for all $\ell_1, \ell_2 \in V^*$ and $v \in V$, we have

\[
\langle P_n \ell_1, v \rangle_{V^* \times V} = \langle \ell_1, P_n v \rangle_{V^* \times V} \tag{3.81}
\]
\[
\langle \ell_1, P_n \ell_2 \rangle_{V^* \times V} = \langle \ell_2, P_n \ell_1 \rangle_{V^* \times V} \tag{3.82}
\]

and for every $v \in H_n$, we have

\[
P_n v = v. \tag{3.83}
\]

In particular, we see that $P_n |_H$, i.e., the operator $P_n$ with the domain restricted to $H \subset V^*$, coincides with the orthogonal projection onto $H_n$, just like we defined $P_n$ in Setting 3.3.13.

Just like $H$, the Hilbert space $U$ is separable and therefore contains a countable orthonormal basis $\mathcal{U} := \{ u_k : k \geq 1 \}$. Analogously, we define $\mathcal{U}_n := \{ u_1, \ldots, u_n \}$ and $U_n := \mathrm{Sp}(\mathcal{U}_n)$. Let us denote the orthogonal projection onto $U_n$ by $\tilde{P}_n$.

The process $W$ is a standard cylindrical Wiener process. Let us denote the $Q$-Wiener process that is associated with $W$ (see Proposition 2.4.6) by $\bar{W}$. In view of Proposition 2.4.3, the process $W^{(n)}$ defined in (2.15) is a $U_n$-valued $\mathrm{id}_{U_n}$-Wiener process. Combined with the projection $P_n$, this gives rise to the following SDE on $H_n$:

\[
\mathrm{d}X^{(n)}(t) = P_n A \left( t, X^{(n)}(t) \right) \mathrm{d}t + P_n B \left( t, X^{(n)}(t) \right) \mathrm{d}W^{(n)}(t), \qquad X^{(n)}(0) = P_n X_0, \tag{3.84}
\]

where $X_0$ is the initial condition as given in Theorem 3.2.3. We comment on this SDE in Lemma 3.5.5 below.

Lastly, recall the constant $\alpha$ from Condition (H3) and define $p := \min\{\alpha, 2\} > 1$. We can continuously and linearly embed the Banach spaces $L^{\alpha}([0,T] \times \Omega; V)$ and $L^2([0,T] \times \Omega; H)$ into the Banach space $L^p([0,T] \times \Omega; H)$. Hence, the intersection $D := L^{\alpha}([0,T] \times \Omega; V) \cap L^2([0,T] \times \Omega; H)$ is a Banach space as well, when endowed with the norm

\[
\| \Xi \|_D := \max \left\{ \| \Xi \|_{L^{\alpha}([0,T] \times \Omega; V)}, \ \| \Xi \|_{L^2([0,T] \times \Omega; H)} \right\}
\]

for any $\Xi \in D$, by Theorem 1.1.7. This space $D$ will be used frequently in what follows.
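The merging of the two sequences and the projection $P_n$ restricted to $H$ can be mimicked in a finite-dimensional stand-in. The following Python sketch (illustrative only; the concrete vectors stand in for $\{v^1_k\}$ and $\{v^2_k\}$) orthonormalizes a spanning family à la Gram–Schmidt and implements the orthogonal projection onto the span of the first $n$ basis vectors:

```python
import numpy as np

def gram_schmidt_merge(vectors, tol=1e-10):
    """Orthonormalize a sequence of vectors, dropping (near-)dependent ones."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > tol:  # keep only vectors adding a new direction
            basis.append(w / norm)
    return np.array(basis)

def project(h, basis, n):
    """P_n h = sum_{k<=n} <h, v_k> v_k, the orthogonal projection onto H_n."""
    return sum(np.dot(h, basis[k]) * basis[k] for k in range(n))

# Merge two spanning families (stand-ins for the two dense sequences).
B = gram_schmidt_merge([np.array([1.0, 0.0, 0.0]),
                        np.array([1.0, 1.0, 0.0]),
                        np.array([1.0, 1.0, 1.0])])
h = np.array([2.0, -1.0, 3.0])
print(np.allclose(project(h, B, 3), h))  # full-rank projection recovers h
```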

Before we comment on SDE (3.84) in Lemma 3.5.5, we make three preliminary observations.

82

Page 83: Wellposedness of Stochastic Di erential Equations in In ... · to develop a very complete theory of stochastic di erential equations { in a style so luminous by the way that these

SECTION 3.5 CHAPTER 3. MAIN RESULT

Observation 3.5.2. Every Hilbert–Schmidt operator $A : U \to H_n$ is, viewed as an operator from $U_n$ to $H_n$ or to $H$, a Hilbert–Schmidt operator as well, since the domain of $A$ contains $U_n$ and both $U_n$ and $H_n$ inherit the norm from their "parent spaces" $U$ and $H$. Also, note that in this case, the following (in)equalities hold:

\[
\| A \|_{L_2(U_n, H_n)} = \| A \|_{L_2(U_n, H)} \leq \| A \|_{L_2(U,H)}.
\]

Observation 3.5.3. The Hn-inner product on Hn is just the H-inner product restricted to Hn.

Observation 3.5.4. The norms ‖ · ‖V and ‖ · ‖H on Hn are equivalent, since the space Hn isfinite-dimensional.

Lemma 3.5.5. The SDE (3.84) admits a strong solution $X^{(n)}$ in the sense of Definition 3.4.2 for every $n \in \mathbb{N}$.

Proof. Let $n \in \mathbb{N}$ be arbitrary, but fixed. We will derive that we are in the setting of Theorem 3.4.3. Consider the Gelfand triple $H_n \subset H_n \subset H_n$, where all spaces are equipped with the $H_n$-norm. Define the maps

\[
A' : [0,T] \times H_n \times \Omega \to H_n : (t, v, \omega) \mapsto P_n ( A(t, v, \omega) ),
\]
\[
B' : [0,T] \times H_n \times \Omega \to L_2(U_n, H_n) : (t, v, \omega) \mapsto P_n B(t, v, \omega).
\]

We can isometrically embed $L_2(U_n, H_n)$ into $\mathcal{M}(n;\mathbb{R})$: let $A \in L_2(U_n, H_n)$ be arbitrary. Then, we define the matrix $\bar{A}$ by setting $\bar{A}(i,j) := \langle A u_i, v_j \rangle_H$ for all $i, j \leq n$. Then, it follows from Parseval's identity that

\[
\| A \|_{L_2(U_n, H_n)} = \left( \sum_{i=1}^{n} \| A u_i \|_H^2 \right)^{\frac{1}{2}} = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \langle A u_i, v_j \rangle_H^2 \right)^{\frac{1}{2}} = \left\| \bar{A} \right\|_{\mathcal{M}(n;\mathbb{R})}.
\]

Looking back at Proposition 1.1.20, we see that $L_2(U_n, H_n)$ (is separable and) has the same dimension as $\mathcal{M}(n;\mathbb{R})$. Hence, the injective linear map $A \mapsto \bar{A}$ as constructed above is actually a bijection (with an obvious inverse), so we can identify $L_2(U_n, H_n)$ with $\mathcal{M}(n;\mathbb{R})$. On a similar note, we can identify the space $H_n$ with $\mathbb{R}^n$.
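The isometry $A \mapsto \bar{A}$ can be checked numerically in a finite-dimensional stand-in. The sketch below (outside the proof; random orthonormal bases obtained via QR decompositions, $n = 4$, are assumptions of the example) compares the Hilbert–Schmidt norm with the Frobenius norm of the matrix representation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Orthonormal bases of U_n and H_n (as matrix columns), via QR decompositions.
U = np.linalg.qr(rng.normal(size=(n, n)))[0]
V = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = rng.normal(size=(n, n))  # an arbitrary operator U_n -> H_n

# Hilbert-Schmidt norm: sum of ||A u_i||^2 over an orthonormal basis.
hs = np.sqrt(sum(np.linalg.norm(A @ U[:, i]) ** 2 for i in range(n)))
# Matrix representation A_bar(i, j) = <A u_i, v_j> and its Frobenius norm.
A_bar = np.array([[(A @ U[:, i]) @ V[:, j] for j in range(n)]
                  for i in range(n)])
print(np.isclose(hs, np.linalg.norm(A_bar)))  # True: the map is isometric
```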

As $A$ and $B$ satisfy the conditions in Condition 3.1.3, one can readily verify that $A'$ satisfies Condition (H1), by continuity of the operator $P_n$. Let $c$ be the constant we obtain from Condition (H2). We have, by properties (3.81) and (3.83) of $P_n$ above, Lemma 1.1.21, the inequality $\| P_n \|_{L(H)} \leq 1$, Observations 3.5.2 and 3.5.3 and Condition (H2), for all $u, v \in H_n$,

\[
\begin{aligned}
&2 \left\langle A'(t,u,\omega) - A'(t,v,\omega), u - v \right\rangle_{H_n} + \left\| B'(t,u,\omega) - B'(t,v,\omega) \right\|_{L_2(U_n,H_n)}^2 \\
&\quad = 2 \langle P_n ( A(t,u,\omega) - A(t,v,\omega) ), u - v \rangle_H + \| P_n ( B(t,u,\omega) - B(t,v,\omega) ) \|_{L_2(U_n,H_n)}^2 \\
&\quad = 2 \langle P_n ( A(t,u,\omega) - A(t,v,\omega) ), u - v \rangle_{V^* \times V} + \| P_n ( B(t,u,\omega) - B(t,v,\omega) ) \|_{L_2(U_n,H_n)}^2 \\
&\quad \leq 2 \langle A(t,u,\omega) - A(t,v,\omega), P_n (u - v) \rangle_{V^* \times V} + \| B(t,u,\omega) - B(t,v,\omega) \|_{L_2(U_n,H)}^2 \\
&\quad \leq 2 \langle A(t,u,\omega) - A(t,v,\omega), u - v \rangle_{V^* \times V} + \| B(t,u,\omega) - B(t,v,\omega) \|_{L_2(U,H)}^2 \\
&\quad \leq c \| u - v \|_H^2 = c \| u - v \|_{H_n}^2
\end{aligned}
\]

for all $t \in [0,T]$ and $\omega \in \Omega$. This means that the maps $A'$ and $B'$ satisfy Conditions (H1) and (H2). With a similar argument, we check that $A'$ and $B'$ satisfy Condition (H3) with the same constants $\alpha$ and $c_1$ (this is a result of Observation 3.5.3) and the same process $f$ as in Condition (H3) for $A$ and $B$, and an altered constant $c_2'$.


Then, since $A'$ and $B'$ satisfy Conditions (H1) and (H2), we find by Lemma 3.1.7 that $A'$ is continuous and satisfies Condition (H). By Corollary 3.1.8, the map $B'$ is continuous as well. Moreover, we have

\[
\| A'(t,v,\omega) \|_{H_n} \leq \| P_n \|_{L(V^*, H_n)} \| A(t,v,\omega) \|_{V^*}.
\]

Then, because $A$ satisfies Condition (H4), we see that $A'$ satisfies Condition (H4) as well, with corresponding constant $c_3'$ and corresponding process $g'$. For the constant $c_3'$, we again invoke Observation 3.5.4.

The map $A'$ is progressively measurable by the progressive measurability (P) of $A$ and the measurability of the (continuous!) operator $P_n$. For $B'$, one checks the progressive measurability with a similar argument as in Lemma 2.4.8. This means that the maps $A'$ and $B'$ satisfy all conditions in Condition 3.1.3.

Now, if we define $b := A'$ and $\sigma := B'$ in (3.73) and (3.74) and recall the identification of $H_n$ with $\mathbb{R}^n$ and of $L_2(U_n,H_n)$ with $\mathcal{M}(n;\mathbb{R})$, we see that all conditions that are imposed on $b$ and $\sigma$ are satisfied, by what we derived above and since Condition (H3) and (3.10) in Corollary 3.1.4 imply (3.75). Hence, the SDE (3.84) is well-defined.

To check that the conditions in Theorem 3.4.3 are met, define

\[
K_N(t) := |c| + |c_1| + |f(t)|
\]

for all $N \in \mathbb{N}$. Then, we have by the integrability of $f$ on $[0,T] \times \Omega$ that (3.78) is indeed satisfied $\mathbb{P}$-almost surely. In addition, by our choice of $K_N$, we see that (H2) implies (3.79) in Theorem 3.4.3 and that (H3) implies (3.80).

We conclude that the SDE (3.84) admits a strong solution in the sense of Definition 3.4.2.

Remark 3.5.6. We may even assume that the process $X^{(n)}$ is continuous by restricting $X^{(n)}$ to the set of continuous sample paths and setting the other paths equal to zero. One can quickly check that $X^{(n)}$ indeed still satisfies the conditions in Definition 3.4.2; regarding Condition (ii) in Definition 3.4.2, we would like to point out that the underlying filtration $\mathbb{F}$ is assumed to be normal.

In addition, since the paths are continuous in $H_n (\subset V \subset H)$ with respect to the $H$-norm, or, equivalently, the $V$-norm (Observation 3.5.4), the process $X^{(n)}$ is also continuous when viewed as a $V$- or an $H$-valued process.

We reach the following conclusion.

Conclusion 3.5.7. The strong solution $X^{(n)}$ is an $\mathbb{F}$-adapted process that is continuous in $V$ and $H$. This also means that $X^{(n)}$ is progressively measurable when viewed as a $V$- or $H$-valued process, by Proposition 2.1.8.

Corollary 3.5.8. The stochastic integral $\int_0^{\cdot} \left\langle X^{(n)}(s), P_n B \left( s, X^{(n)}(s) \right) \mathrm{d}W^{(n)}(s) \right\rangle_H$ is a continuous local martingale for every $n \in \mathbb{N}$.

Proof. Since $X^{(n)}$ is a strong solution, it has continuous paths with values in $H_n$. Moreover, we have by Property (iii) of a strong solution and Observation 3.5.3,

\[
\int_0^T \left\| P_n B \left( s, X^{(n)}(s) \right) \right\|_{L_2(U_n,H_n)}^2 \mathrm{d}s < +\infty
\]

$\mathbb{P}$-almost surely. Apply Corollary 2.3.9.

We can apply Remark 3.5.6 and Corollary 3.5.8 to construct a useful localizing sequence of stopping times.


Lemma 3.5.9. Fix arbitrary n ∈ N. Define for each l ∈ N,

σl := inft ∈ [0, T ] :

∥∥∥X(n)(t)∥∥∥V≥ l∧ T.

The sequence σl : l ∈ N is a localizing sequence for∫ ·

0

⟨X(n)(s),Pn B

(s,X(n)(s)

)dW (n)(s)

⟩H

.

In addition, the processX(n)(σl ∧ t) : t ∈ [0, T ]

is uniformly bounded for all l ∈ N and coin-

cides on (0, σl] with X(n).

Proof. First, we observe that σl is a stopping time for every l ∈ N by the continuity of X(n)

(Remark 3.5.6). The sequence $\{\sigma_l : l \in \mathbb{N}\}$ is increasing and converges to $T$ pointwise on the set of continuous sample paths of $X^{(n)}$, i.e., $\{\sigma_l : l \in \mathbb{N}\}$ is a localizing sequence. By Observation 3.5.2, we indeed see that the $H_n$-valued process $\{X^{(n)}(\sigma_l \wedge t) : t \in [0,T]\}$ is uniformly bounded for all $l \in \mathbb{N}$ and coincides with $X^{(n)}$ on the set $(0,\sigma_l]$. We continue by showing that $\{\sigma_l : l \in \mathbb{N}\}$ is a localizing sequence for
$$\int_0^{\cdot} \bigl\langle X^{(n)}(s),\, P_n B\bigl(s, X^{(n)}(s)\bigr)\,\mathrm{d}W^{(n)}(s) \bigr\rangle_H,$$
which is a local martingale by Corollary 3.5.8.

To abbreviate notation, define $K := \int_0^{\cdot} \langle X^{(n)}(s), P_n B(s, X^{(n)}(s))\,\mathrm{d}W^{(n)}(s)\rangle_H$. Let $l \in \mathbb{N}$ be arbitrary and consider the stopped process $K^{\sigma_l}$, which is again a local martingale; this follows from the optional sampling theorem and the existence of a localizing sequence $\{\tau_n : n \in \mathbb{N}\}$ for the local martingale $K$. As for every $u \in U_n$ we have
$$\mathbf{1}_{(0,\sigma_l]}(s)\bigl\langle X^{(n)}(s), P_n B\bigl(s,X^{(n)}(s)\bigr)u \bigr\rangle_H = \bigl\langle \mathbf{1}_{(0,\sigma_l]}(s)\, X^{(n)}(s),\, \mathbf{1}_{(0,\sigma_l]}(s)\, P_n B\bigl(s,X^{(n)}(s)\bigr)u \bigr\rangle_H,$$
we see, in view of Lemma 2.3.6 (which also holds for stochastically integrable processes) and (2.8) in Lemma 2.3.8, that
$$K^{\sigma_l} = \int_0^{\cdot} \mathbf{1}_{(0,\sigma_l]}(s)\, \widetilde{P_n B\bigl(\cdot, X^{(n)}(\cdot)\bigr) X^{(n)}}(s)\,\mathrm{d}W^{(n)}(s) = \int_0^{\cdot} \bigl\langle \mathbf{1}_{(0,\sigma_l]}(s)\, X^{(n)}(s),\, \mathbf{1}_{(0,\sigma_l]}(s)\, P_n B\bigl(s, X^{(n)}(s)\bigr)\,\mathrm{d}W^{(n)}(s) \bigr\rangle_H,$$
where the overarching tilde is as in Lemma 2.3.8. We want to apply Proposition 1.3.4 to conclude that the stopped process $K^{\sigma_l}$ is indeed a martingale.

By inequality (2.14) in Corollary 2.3.12, we have
$$\langle K^{\sigma_l} \rangle_T^{1/2} \leq \Bigl( \sup_{s \in (0,T]} \bigl\| \mathbf{1}_{(0,\sigma_l]}(s)\, X^{(n)}(s) \bigr\|_H^2 \Bigr)^{\frac{1}{2}} \cdot \Bigl( \int_0^T \bigl\| \mathbf{1}_{(0,\sigma_l]}(s)\, P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H_n)}^2 \,\mathrm{d}s \Bigr)^{\frac{1}{2}}.$$
Taking expectations on both sides and applying Hölder's inequality to the right-hand side, we obtain, using Observation 3.5.4 in the second inequality,
$$\begin{aligned} \mathbb{E}\bigl[\langle K^{\sigma_l} \rangle_T^{1/2}\bigr] &\leq \Bigl( \mathbb{E}\Bigl[ \sup_{s \in (0,T]} \mathbf{1}_{(0,\sigma_l]}(s)\, \bigl\| X^{(n)}(s) \bigr\|_H^2 \Bigr] \Bigr)^{\frac{1}{2}} \cdot \Bigl( \mathbb{E}\Bigl[ \int_0^T \mathbf{1}_{(0,\sigma_l]}(s)\, \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H_n)}^2 \,\mathrm{d}s \Bigr] \Bigr)^{\frac{1}{2}} \\ &\leq \Bigl( \mathbb{E}\Bigl[ \sup_{s \in (0,T]} \mathbf{1}_{(0,\sigma_l]}(s)\, \bigl\| X^{(n)}(s) \bigr\|_H^2 \Bigr] \Bigr)^{\frac{1}{2}} \cdot \Bigl( \mathbb{E}\Bigl[ \int_0^T \mathbf{1}_{(0,\sigma_l]}(s)\, \bigl\| B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr] \Bigr)^{\frac{1}{2}}. \end{aligned}$$
Recall Observation 3.5.2 and inequality (3.10) in Corollary 3.1.4. These results imply that the right-hand side is finite, since $X^{(n)}$ coincides on the set $(0,\sigma_l]$ with the uniformly bounded process $\{X^{(n)}(\sigma_l \wedge t) : t \in [0,T]\}$. In particular, this means that $\mathbb{E}\bigl[\langle K^{\sigma_l} \rangle_T^{1/2}\bigr] < \infty$ for all $l \in \mathbb{N}$, so the stopped process $K^{\sigma_l}$ is a martingale by Proposition 1.3.4. We conclude that $\{\sigma_l : l \in \mathbb{N}\}$ is a localizing sequence for $K$.


SECTION 3.5 CHAPTER 3. MAIN RESULT

Corollary 3.5.10. The process $X^{(n)}$ is uniformly bounded on the set $(0,\sigma_l]$ for all $l \in \mathbb{N}$.

We will use the assumptions of Setting 3.5.1 to derive an important result.

Lemma 3.5.11. Under the assumptions of Setting 3.5.1, there exists a positive real number $R$ such that
$$\sup\Bigl\{ \bigl\| X^{(n)} \bigr\|_{L^{\alpha}([0,T]\times\Omega;V)} + \bigl\| A\bigl(\cdot, X^{(n)}\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)} + \sup_{t \leq T} \mathbb{E}\Bigl[ \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] : n \in \mathbb{N} \Bigr\} \leq R.$$

Proof. By the finite-dimensional Itô formula, we have
$$\begin{aligned} \bigl\| X^{(n)}(t) \bigr\|_H^2 ={}& \bigl\| X^{(n)}(0) \bigr\|_H^2 + \int_0^t \Bigl( 2\bigl\langle P_n A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_H + \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H)}^2 \Bigr) \mathrm{d}s \\ &+ 2\int_0^t \bigl\langle X^{(n)}(s), P_n B\bigl(s,X^{(n)}(s)\bigr)\,\mathrm{d}W^{(n)}(s) \bigr\rangle_H. \end{aligned} \qquad (3.85)$$
From properties (3.81) and (3.83) of the operator $P_n$, we have
$$\bigl\langle P_n A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_H = \bigl\langle P_n A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_{V^*\times V} = \bigl\langle A\bigl(s,X^{(n)}(s)\bigr), P_n X^{(n)}(s) \bigr\rangle_{V^*\times V} = \bigl\langle A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_{V^*\times V} \qquad (3.86)$$
for all $n \in \mathbb{N}$. Let $\{\sigma_l : l \in \mathbb{N}\}$ be the localizing sequence of stopping times we obtain from Lemma 3.5.9. Then, we have for every $t \in [0,T]$ and $l \in \mathbb{N}$ that
$$\mathbb{E}\Bigl[ \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] = \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s) \Bigl( 2\bigl\langle A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_{V^*\times V} + \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H)}^2 \Bigr) \Bigr] \mathrm{d}s, \qquad (3.87)$$
since the expectation of the stochastic integral evaluated at $\sigma_l$ vanishes. We can swap the expectation and the integral by a Hölder-type argument, using Condition (H4) on $A$ in Condition 3.1.3 and the fact that the process $X^{(n)}$ is uniformly bounded on the set $(0,\sigma_l]$ (Corollary 3.5.10).

Recall Condition (H3) in Condition 3.1.3, from which we obtain the real number $c_1$. Then, we can apply the 'integration by parts' rule for continuous functions of bounded variation (Proposition 1.4.9) to the maps $t \mapsto e^{-c_1 t}$ and $t \mapsto \mathbb{E}\bigl[ \| X^{(n)}(t \wedge \sigma_l) \|_H^2 \bigr]$ to obtain
$$e^{-c_1 t}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] = \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \,\mathrm{d}\bigl(e^{-c_1 s}\bigr) + \int_0^t e^{-c_1 s} \,\mathrm{d}\Bigl( \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \Bigr).$$


From (3.87), we obtain the shorthand notation to apply to the third integral on the right-hand side, so we can rewrite the integrals and find
$$\begin{aligned} \mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] ={}& \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] - \int_0^t c_1 e^{-c_1 s}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \mathrm{d}s \\ &+ \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s) \Bigl( 2\bigl\langle A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_{V^*\times V} + \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H)}^2 \Bigr) \Bigr] \mathrm{d}s. \end{aligned}$$
We can transfer the second term on the right-hand side to the left, as the integral is bounded by the choice of the stopping time $\sigma_l$ and the uniform boundedness of the map $[0,T] \to \mathbb{R} : t \mapsto c_1 e^{-c_1 t}$. This yields
$$\begin{aligned} &\mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] + \int_0^t c_1 e^{-c_1 s}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \mathrm{d}s \\ &\qquad = \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s) \Bigl( 2\bigl\langle A\bigl(s,X^{(n)}(s)\bigr), X^{(n)}(s) \bigr\rangle_{V^*\times V} + \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H)}^2 \Bigr) \Bigr] \mathrm{d}s. \end{aligned}$$
We have by Observation 3.5.2 and Lemma 1.1.21,
$$\bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U_n,H)}^2 \leq \bigl\| P_n B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 \leq \bigl\| B\bigl(s,X^{(n)}(s)\bigr) \bigr\|_{L_2(U,H)}^2. \qquad (3.88)$$
Hence, we can apply Condition (H3) to obtain
$$\begin{aligned} &\mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] + \int_0^t c_1 e^{-c_1 s}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \mathrm{d}s \\ &\qquad \leq \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s) \Bigl( c_1 \bigl\| X^{(n)}(s) \bigr\|_H^2 - c_2 \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} + f(s) \Bigr) \Bigr] \mathrm{d}s. \end{aligned} \qquad (3.89)$$
By Corollary 3.5.10, we see that the process $\{\| X^{(n)}(t) \|_V^{\alpha} : t \in [0,T]\}$ is uniformly bounded on $(0,\sigma_l]$, so we may bring this term to the other side in (3.89). We find
$$\begin{aligned} &\mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] + \int_0^t c_1 e^{-c_1 s}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \mathrm{d}s + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ c_2\, \mathbf{1}_{(0,\sigma_l]}(s) \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \\ &\qquad \leq \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s) \Bigl( c_1 \bigl\| X^{(n)}(s) \bigr\|_H^2 + f(s) \Bigr) \Bigr] \mathrm{d}s \\ &\qquad \leq \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s)\, c_1 \bigl\| X^{(n)}(s) \bigr\|_H^2 \Bigr] \mathrm{d}s + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr]. \end{aligned}$$
In similar fashion, we obtain from Corollary 3.5.10 and Observation 3.5.4 that the second integral on the right-hand side is also finite, so we can move it to the left-hand side to get
$$\begin{aligned} &\mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t \wedge \sigma_l) \bigr\|_H^2 \Bigr] + \int_0^t c_1 e^{-c_1 s}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s \wedge \sigma_l) \bigr\|_H^2 \Bigr] \mathrm{d}s + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ c_2\, \mathbf{1}_{(0,\sigma_l]}(s) \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \\ &\qquad\quad - \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ \mathbf{1}_{(0,\sigma_l]}(s)\, c_1 \bigl\| X^{(n)}(s) \bigr\|_H^2 \Bigr] \mathrm{d}s \\ &\qquad \leq \mathbb{E}\Bigl[ \bigl\| X^{(n)}(0) \bigr\|_H^2 \Bigr] + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr] \leq \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr]. \end{aligned}$$


We can now apply Fatou's lemma (letting $l \to \infty$) on the left-hand side to find that
$$\mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] + \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ c_2 \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \leq \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr]. \qquad (3.90)$$
Hence, we find by (3.90) that
$$e^{-|c_1| T}\, \mathbb{E}\Bigl[ \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] \leq \mathbb{E}\Bigl[ e^{-c_1 t} \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] \leq \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr],$$
and multiplying both sides by $e^{|c_1| T}$ yields
$$\mathbb{E}\Bigl[ \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] \leq e^{|c_1| T}\, \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + e^{2 |c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr].$$
As the right-hand side does not depend on $t$, it forms an upper bound for $\sup_{t \leq T} \mathbb{E}\bigl[ \| X^{(n)}(t) \|_H^2 \bigr]$. Since $n \in \mathbb{N}$ was chosen arbitrarily, this holds for all $n \in \mathbb{N}$.

Using (3.90) again, we have
$$c_2 e^{-|c_1| T} \int_0^t \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \leq \int_0^t e^{-c_1 s}\, \mathbb{E}\Bigl[ c_2 \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \leq \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + e^{|c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr].$$
If we take $t$ equal to $T$ on the left-hand side and multiply both sides by $\frac{1}{c_2} e^{|c_1| T}$, we find
$$\bigl\| X^{(n)} \bigr\|_{L^{\alpha}([0,T]\times\Omega;V)} = \Bigl( \int_0^T \mathbb{E}\Bigl[ \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \Bigr] \mathrm{d}s \Bigr)^{\frac{1}{\alpha}} \leq \Bigl( \tfrac{1}{c_2} e^{|c_1| T}\, \mathbb{E}\bigl[ \| X_0 \|_H^2 \bigr] + \tfrac{1}{c_2} e^{2 |c_1| T}\, \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr] \Bigr)^{\frac{1}{\alpha}}.$$
Again, this holds for all $n \in \mathbb{N}$, as the upper bound does not depend on $n$.

Finally, using (3.9) in Corollary 3.1.4, we find
$$\bigl\| A\bigl(\cdot, X^{(n)}\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)}^{\frac{\alpha}{\alpha-1}} = \mathbb{E}\Bigl[ \int_0^T \bigl\| A\bigl(s, X^{(n)}(s)\bigr) \bigr\|_{V^*}^{\frac{\alpha}{\alpha-1}} \,\mathrm{d}s \Bigr] \leq \mathbb{E}\Bigl[ \int_0^T \Bigl( |g(s)| + c_3 \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha-1} \Bigr)^{\frac{\alpha}{\alpha-1}} \,\mathrm{d}s \Bigr].$$
As $|a+b|^p \leq 2^p (|a|^p + |b|^p)$, we arrive at
$$\bigl\| A\bigl(\cdot, X^{(n)}\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)}^{\frac{\alpha}{\alpha-1}} \leq 2^{\frac{\alpha}{\alpha-1}}\, \mathbb{E}\Bigl[ \int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}} \,\mathrm{d}s \Bigr] + 2^{\frac{\alpha}{\alpha-1}} c_3^{\frac{\alpha}{\alpha-1}}\, \mathbb{E}\Bigl[ \int_0^T \bigl\| X^{(n)}(s) \bigr\|_V^{\alpha} \,\mathrm{d}s \Bigr].$$
From the integrability condition on $g$ in (H4) and the upper bound derived above for $\| X^{(n)} \|_{L^{\alpha}([0,T]\times\Omega;V)}$, we can conclude that the desired $R > 0$ exists.


Recall the progressive $\sigma$-algebra $\mathcal{M}_T$ we saw in Definition 2.1.7. In view of Lemma 3.5.11 and Conclusion 3.5.7, we obtain the following result.

Corollary 3.5.12. It holds that $X^{(n)} \in L^{\alpha}([0,T]\times\Omega, \mathcal{M}_T, \lambda \otimes \mathbb{P}; V)$ for all $n \in \mathbb{N}$. In addition, since
$$\int_0^T \mathbb{E}\Bigl[ \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] \mathrm{d}t \leq T \cdot \sup_{t \leq T} \mathbb{E}\Bigl[ \bigl\| X^{(n)}(t) \bigr\|_H^2 \Bigr] \leq T R$$
for all $n \in \mathbb{N}$, it follows that $X^{(n)} \in L^2([0,T]\times\Omega, \mathcal{M}_T, \lambda \otimes \mathbb{P}; H)$ for all $n \in \mathbb{N}$. Moreover, we see that the sequence $\{X^{(n)} : n \geq 1\}$ is uniformly bounded in $L^{\alpha}([0,T]\times\Omega, \mathcal{M}_T, \lambda \otimes \mathbb{P}; V)$ and in $L^2([0,T]\times\Omega, \mathcal{M}_T, \lambda \otimes \mathbb{P}; H)$.

Lemma 3.5.11 will be used to derive essential weak convergence results in Corollary 3.5.14 below; these are used extensively in proving the main result, Theorem 3.2.3.

3.5.2 Weak convergence results

Before we derive the important weak convergence results in Corollary 3.5.14, we introduce some notation.

Notation 3.5.13. When we write $L^p([0,T]\times\Omega; E)$ for a Banach space $E$, we assume that the underlying $\sigma$-algebra is the usual product $\sigma$-algebra and that the corresponding measure is the usual product measure $\lambda \otimes \mathbb{P}$. To shorten notation, we write $L^p_{\mathcal{M}_T}([0,T]\times\Omega; E)$ instead of $L^p([0,T]\times\Omega, \mathcal{M}_T, \lambda \otimes \mathbb{P}; E)$.

Corollary 3.5.14. This is an extended discussion of Remarks (i), (ii) and (iii) on pages 104 and 105 in [23].

We will apply the results from Lemma 3.5.11 to obtain the weak convergence results below; all of these are labeled and will be referred back to later. We start, however, with an important preliminary observation.

Recall the constant $\alpha$ from Condition (H3) and define $p := \min\{\alpha, 2\} > 1$. We can continuously and linearly embed the Banach spaces $L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega; V)$ and $L^2_{\mathcal{M}_T}([0,T]\times\Omega; H)$ into the Banach space $L^p_{\mathcal{M}_T}([0,T]\times\Omega; H)$. Define $D_{\mathcal{M}_T} := L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega; V) \cap L^2_{\mathcal{M}_T}([0,T]\times\Omega; H)$. This is a Banach space as well when endowed with the norm
$$\| \Xi \|_{D_{\mathcal{M}_T}} := \max\Bigl\{ \| \Xi \|_{L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega;V)},\; \| \Xi \|_{L^2_{\mathcal{M}_T}([0,T]\times\Omega;H)} \Bigr\}$$
for any $\Xi \in D_{\mathcal{M}_T}$, by Theorem 1.1.7.

The Banach spaces $L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega; V)$ and $L^2_{\mathcal{M}_T}([0,T]\times\Omega; H)$ are reflexive by Proposition 1.2.10. This means that the product space $P_{\mathcal{M}_T} := L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega; V) \times L^2_{\mathcal{M}_T}([0,T]\times\Omega; H)$, endowed with the norm
$$\| (\varphi, \psi) \|_{P_{\mathcal{M}_T}} := \max\Bigl\{ \| \varphi \|_{L^{\alpha}_{\mathcal{M}_T}([0,T]\times\Omega;V)},\; \| \psi \|_{L^2_{\mathcal{M}_T}([0,T]\times\Omega;H)} \Bigr\},$$
is a reflexive Banach space as well, by Proposition 1.1.8 and Proposition 1.1.31. If we look at the "diagonal" $D'_{\mathcal{M}_T} := \{ (\varphi, \psi) \in P_{\mathcal{M}_T} : \varphi = \psi \}$ of the product space $P_{\mathcal{M}_T}$, it immediately follows that this linear subspace of the product space is closed. As a consequence, we see that $(D'_{\mathcal{M}_T}, \| \cdot \|_{P_{\mathcal{M}_T}})$ is a reflexive Banach space by Lemma 1.1.30.

Consider the bounded linear operator $\kappa_{\mathcal{M}_T} : D_{\mathcal{M}_T} \to P_{\mathcal{M}_T} : \Xi \mapsto (\Xi, \Xi)$. The image of $\kappa_{\mathcal{M}_T}$ is equal to the diagonal $D'_{\mathcal{M}_T}$ of the product space $P_{\mathcal{M}_T}$, and so, comparing the norms on the domain and the codomain, we find that $D_{\mathcal{M}_T}$ and $D'_{\mathcal{M}_T}$ are isometrically isomorphic. This means that $D_{\mathcal{M}_T}$ is reflexive by Lemma 1.1.29.


When we look at Corollary 3.5.12, we see that the sequence $\{X^{(n)} : n \geq 1\} \subset D_{\mathcal{M}_T}$ is uniformly bounded. Therefore, we can apply Theorem 1.1.45 to conclude that there exist a subsequence $\{n_k : k \geq 1\}$ and some $\bar{X} \in D_{\mathcal{M}_T}$ such that
$$X^{(n_k)} \xrightarrow{\,w\,} \bar{X}$$
in $D_{\mathcal{M}_T}$ as $k \to \infty$. From this, we obtain that $\bar{X}$ is progressively measurable and $V$-valued.

An identical construction gives rise to the space
$$D := L^{\alpha}([0,T]\times\Omega; V) \cap L^2([0,T]\times\Omega; H),$$
which again is a reflexive Banach space; we already encountered this space at the end of Setting 3.5.1. Please keep in mind that this space will be used extensively in what follows.

From Proposition 2.1.9, we obtain that the sequence $\{X^{(n)} : n \geq 1\}$ also belongs to the normed vector space $D$. We can then invoke Proposition 1.1.42 to see that
$$X^{(n_k)} \xrightarrow{\,w\,} \bar{X}$$
in $D$ as $k \to \infty$. This gives rise to the first weak convergence result, Remark A below.

A From Proposition 1.1.41, we obtain the next two weak convergence results.

A.1 Firstly, we have $X^{(n_k)} \xrightarrow{\,w\,} \bar{X}$ in $L^{\alpha}([0,T]\times\Omega; V)$ as $k \to \infty$. Using Theorem 1.2.11, this is equivalent to saying that
$$\mathbb{E}\Bigl[ \int_0^T \bigl\langle K(s), X^{(n_k)}(s) \bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr] \to \mathbb{E}\Bigl[ \int_0^T \bigl\langle K(s), \bar{X}(s) \bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr]$$
for every $K \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$.

A.2 Secondly, it also holds that $X^{(n_k)} \xrightarrow{\,w\,} \bar{X}$ in $L^2([0,T]\times\Omega; H)$ as $k \to \infty$. Using the characterization of weak convergence in Hilbert spaces (Proposition 1.1.37), this is equivalent to
$$\mathbb{E}\Bigl[ \int_0^T \bigl\langle K(s), X^{(n_k)}(s) \bigr\rangle_H \,\mathrm{d}s \Bigr] \to \mathbb{E}\Bigl[ \int_0^T \bigl\langle K(s), \bar{X}(s) \bigr\rangle_H \,\mathrm{d}s \Bigr]$$
for every $K \in L^2([0,T]\times\Omega; H)$.
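Weak convergence as used in Remarks A.1 and A.2 does not imply norm convergence; the standard counterexample $e_n \rightharpoonup 0$ in $\ell^2$ can be illustrated numerically (a toy illustration, independent of the thesis setting):

```python
# Toy illustration: in the sequence space ℓ², the standard basis vectors e_n
# converge weakly to 0 but not strongly: <e_n, y> = y_n -> 0 for every fixed
# y in ℓ², while ||e_n|| = 1 for all n.
N = 1000                                  # truncation dimension for the demo
y = [1.0 / (j + 1) for j in range(N)]     # a fixed ℓ² element, y_j = 1/(j+1)

def inner_with_basis(n):
    """<e_n, y> in ℓ² is simply the n-th coordinate of y."""
    return y[n]

indices = (1, 10, 100, 999)
pairings = [inner_with_basis(n) for n in indices]   # tends to 0
norms = [1.0 for _ in indices]                      # ||e_n|| = 1 for every n
```

The pairings shrink toward zero while the norms stay constant at one, which is exactly the gap between weak and strong convergence.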

We can combine Corollary 3.5.12 with Lemma 3.1.9 to see that $A\bigl(\cdot, X^{(n_k)}(\cdot)\bigr)$ is a progressively measurable, $V^*$-valued process for every $k \in \mathbb{N}$. Using Lemma 3.5.11 again, we see that the sequence $\bigl\{ A\bigl(\cdot, X^{(n_k)}\bigr) : k \geq 1 \bigr\}$ is bounded in $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$. Since $V$ is reflexive, its dual is reflexive as well by Lemma 1.1.28, and so $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$ is a reflexive Banach space by Proposition 1.2.10. Invoking Theorem 1.1.45, we can find a further subsequence, again denoted $\{n_k : k \geq 1\}$, and some $Y \in L^{\frac{\alpha}{\alpha-1}}_{\mathcal{M}_T}([0,T]\times\Omega; V^*)$ such that
$$A\bigl(\cdot, X^{(n_k)}(\cdot)\bigr) \xrightarrow{\,w\,} Y$$
in $L^{\frac{\alpha}{\alpha-1}}_{\mathcal{M}_T}([0,T]\times\Omega; V^*)$ as $k \to \infty$. Then, as $L^{\frac{\alpha}{\alpha-1}}_{\mathcal{M}_T}([0,T]\times\Omega; V^*) \subset L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$, we have the following weak convergence result.

B We have $A\bigl(\cdot, X^{(n_k)}(\cdot)\bigr) \xrightarrow{\,w\,} Y$ in $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$ as $k \to \infty$, or, equivalently,
$$\mathbb{E}\Bigl[ \int_0^T \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), K(s) \bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr] \to \mathbb{E}\Bigl[ \int_0^T \langle Y(s), K(s) \rangle_{V^*\times V} \,\mathrm{d}s \Bigr]$$
for all $K \in L^{\alpha}([0,T]\times\Omega; V)$, by Corollary 1.2.12.


Recall the bounded linear operators $P_n \in L(H)$ and $\tilde{P}_n \in L(U)$ from Setting 3.5.1 on page 82. For every $k \in \mathbb{N}$, we define the function
$$\iota_k : L_2(U,H) \to L_2(U,H) : S \mapsto P_{n_k} S \tilde{P}_{n_k},$$
which is well-defined in view of Observation 3.5.2. It is also continuous, by Lemma 1.1.21, for every $k \in \mathbb{N}$. As above, we can invoke Corollary 3.5.12 and Lemma 3.1.9 to see that $B\bigl(\cdot, X^{(n_k)}(\cdot)\bigr)$ is a progressively measurable, $L_2(U,H)$-valued process for every $k \in \mathbb{N}$. Then, we see that
$$\bigl\{ \iota_k\bigl( B\bigl(t, X^{(n_k)}(t)\bigr) \bigr) : t \in [0,T] \bigr\} = \bigl\{ P_{n_k} B\bigl(t, X^{(n_k)}(t)\bigr) \tilde{P}_{n_k} : t \in [0,T] \bigr\}$$
is a progressively measurable, $L_2(U,H)$-valued process.

We see from Lemma 1.1.21 that
$$\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr] \leq \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr].$$
Using Lemma 3.1.6, we see that the right-hand side of the previous inequality is bounded by
$$|c_1| \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_H^2 \,\mathrm{d}s \Bigr] + \mathbb{E}\Bigl[ \int_0^T |f(s)| \,\mathrm{d}s \Bigr] + \Bigl( \mathbb{E}\Bigl[ \int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}} \,\mathrm{d}s \Bigr] \Bigr)^{\frac{\alpha-1}{\alpha}} \cdot \sup_{k\in\mathbb{N}} \Bigl( \mathbb{E}\Bigl[ \int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^{\alpha} \,\mathrm{d}s \Bigr] \Bigr)^{\frac{1}{\alpha}} + |2 c_3 - c_2| \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^{\alpha} \,\mathrm{d}s \Bigr],$$
which is finite in view of Lemma 3.5.11. Combined with the progressive measurability of the process, this implies that
$$\bigl\{ P_{n_k} B\bigl(\cdot, X^{(n_k)}(\cdot)\bigr) \tilde{P}_{n_k} : k \in \mathbb{N} \bigr\} \subset L^2_{\mathcal{M}_T}([0,T]\times\Omega; L_2(U,H))$$
is bounded. Therefore, by passing to a further subsequence if necessary, we know by Theorem 1.1.45 that there exists a $Z \in L^2_{\mathcal{M}_T}([0,T]\times\Omega; L_2(U,H))$ such that
$$P_{n_k} B\bigl(\cdot, X^{(n_k)}(\cdot)\bigr) \tilde{P}_{n_k} \xrightarrow{\,w\,} Z$$
in $L^2_{\mathcal{M}_T}([0,T]\times\Omega; L_2(U,H))$. It is important to observe that $Z$ is progressively measurable.

As will be discussed in Comment 3.5.16 below, we have
$$\int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \,\mathrm{d}W(s) = \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s). \qquad (3.91)$$
Moreover, in view of Sections 2.3.1, 2.4 and 2.5, the operator $\mathrm{int} : \Phi \mapsto \int_0^{\cdot} \Phi(s) \,\mathrm{d}W(s)$ is an isometry from the space of all square-integrable, progressively measurable, $L_2(U,H)$-valued processes $\Phi$ to $\mathcal{M}^2_T(H)$. Then, we see that $\pi_t \circ \mathrm{int}$, with $\pi_t$ the evaluation at time $t$, is continuous from the same domain space to $L^2(\Omega, \mathcal{F}, \mathbb{P}; H)$. Hence, we have by Proposition 1.1.36,
$$\int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s) \xrightarrow{\,w\,} \int_0^t Z(s) \,\mathrm{d}W(s)$$
in $L^2(\Omega, \mathcal{F}, \mathbb{P}; H)$ for every $t \in [0,T]$. This gives rise to the final weak convergence result.


C For arbitrary $\zeta \in L^2(\Omega, \mathcal{F}, \mathbb{P}; H)$ and $t \in [0,T]$, we have
$$\mathbb{E}\Bigl[ \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \zeta \Bigr\rangle_H \Bigr] \to \mathbb{E}\Bigl[ \Bigl\langle \int_0^t Z(s) \,\mathrm{d}W(s), \zeta \Bigr\rangle_H \Bigr],$$
by Proposition 1.1.37.
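The passage of the weak limit through the stochastic integral rests on the fact that bounded linear operators are weakly continuous; the mechanism is a one-line adjoint computation, sketched here for a bounded linear operator $T$ between Hilbert spaces (an illustration of the general principle, not a quotation of Proposition 1.1.36):

```latex
% If x_k \rightharpoonup x and T is bounded and linear, then for every y,
\langle T x_k, y \rangle = \langle x_k, T^{*} y \rangle
  \longrightarrow \langle x, T^{*} y \rangle = \langle T x, y \rangle,
% so T x_k \rightharpoonup T x. Applied with T the (continuous) map
% \Phi \mapsto \bigl(\int_0^{\cdot} \Phi\,\mathrm{d}W\bigr)(t), this transfers
% the weak limit Z through the stochastic integral.
```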

The progressively measurable weak limits $Y$ and $Z$ we found in Corollary 3.5.14 and the initial condition $X_0$ in Theorem 3.2.3 allow us to define the process $X$ by
$$X(t) := X_0 + \int_0^t Y(s) \,\mathrm{d}s + \int_0^t Z(s) \,\mathrm{d}W(s) \qquad (3.92)$$
for all $t \in [0,T]$.

With the integrability conditions on the random variable $X_0$ and the processes $Y$ and $Z$, the following result can be verified; it is important to note that this result holds because $([0,T]\times\Omega, \lambda \otimes \mathbb{P})$ is a finite measure space.

Corollary 3.5.15. The process $X$ defined in (3.92) belongs to $L^p([0,T]\times\Omega; V^*)$, where $p := \min\{\alpha, 2\} > 1$, with $\alpha$ as in Condition (H3).

We end with the justification of (3.91) encountered above.

Comment 3.5.16. Here, we derive the intuition behind (3.91) in Corollary 3.5.14. Recall that we use the Hilbert-Schmidt operator $J : U \to U$ given in Example 2.4.5 in the construction of the $Q$-Wiener process $W$ that is associated with the standard cylindrical Wiener process (Proposition 2.4.6), as we remarked in Assumption 2.4.11.

With simple algebra, we deduce that the operators $J$ and $\tilde{P}_n$, the orthogonal projection onto $U_n$ defined in Setting 3.5.1, commute. Consequently, $\tilde{P}_n$ and $J^{-1}$ commute as well, by Lemma 1.1.5. In that case, one can derive (3.91) for every elementary $L_2(U,H)$-valued process, which justifies (3.91).
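The commutation claim in Comment 3.5.16 can be made concrete in coordinates: if $J$ is diagonal with respect to an orthonormal basis and the projection maps onto the span of the first $n$ basis vectors, both operators act diagonally and hence commute. A finite-dimensional numerical sketch (hypothetical diagonal entries $1/j$ mimicking a Hilbert-Schmidt operator; illustration only):

```python
# Finite-dimensional sketch: a diagonal operator commutes with the orthogonal
# projection onto the span of the first n basis vectors, since both are
# diagonal in the same basis.
N, n = 6, 3
J = [[1.0 / (i + 1) if i == j else 0.0 for j in range(N)] for i in range(N)]
P = [[1.0 if (i == j and i < n) else 0.0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    """Plain N x N matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

JP, PJ = matmul(J, P), matmul(P, J)
max_diff = max(abs(JP[i][j] - PJ[i][j]) for i in range(N) for j in range(N))
```

Since both matrices are diagonal in the same basis, the products agree entry by entry and `max_diff` is exactly zero.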

3.5.3 Proof of Theorem 3.2.3

We now present our step-by-step plan for tackling the proof of Theorem 3.2.3. First, in Lemma 3.5.17 below, we use the weak convergence results derived above to show that the process $X$, defined in (3.92), coincides $\lambda \otimes \mathbb{P}$-almost everywhere on $[0,T]\times\Omega$ with $\bar{X}$, defined in Corollary 3.5.14. Since $\bar{X}$ is progressively measurable and belongs to $L^{\alpha}([0,T]\times\Omega; V) \cap L^2([0,T]\times\Omega; H)$, we may then apply Theorem 3.3.17 to the process $X$ and use the results in the proof of Theorem 3.2.3.

After that, we prove auxiliary lemmas to derive two very important inequalities. The first is (3.117) in Lemma 3.5.25, which we use to show that $Z$ coincides with $B(\cdot, X(\cdot))$ almost everywhere on $[0,T]\times\Omega$, see Corollary 3.5.26. The second inequality is (3.119) in Lemma 3.5.27, which is based on (3.117) and from which we can conclude that $Y$ coincides with $A(\cdot, X(\cdot))$ almost everywhere (Corollary 3.5.28). Combined, these yield the proof of existence (Conclusion 3.5.29). This is followed by the proof of uniqueness (Lemma 3.5.30).

Our proof is based on the proof of Theorem 4.2.4 in [23], but will from some point on differ slightly from the proof given there. This will be duly noted when it happens. Moreover, when we use parts of the proof of Theorem 4.2.4 in [23] in the proofs below, this will be mentioned accordingly.

Lemma 3.5.17. Suppose that the assumptions in Setting 3.5.1 are in force. Then, the process $X$, defined in (3.92), coincides $\lambda \otimes \mathbb{P}$-almost everywhere on $[0,T]\times\Omega$ with $\bar{X}$, the weak limit of the sequence $\{X^{(n_k)} : k \geq 1\}$ we obtained in Corollary 3.5.14. In particular, Theorem 3.3.17 applies to the process $X$.


Proof. This is an extended derivation of the last display on page 105 in [23].

Let $p$ be as in Corollary 3.5.15 above. From that same result, we know that $X$ belongs to $L^p([0,T]\times\Omega; V^*)$. It follows from the continuity of the embedding $J$ of $H$ into $V^*$, see (3.3) on page 44, and the finiteness of the measure space $([0,T]\times\Omega, \mathcal{B}([0,T])\otimes\mathcal{F}, \lambda\otimes\mathbb{P})$ that $L^2([0,T]\times\Omega; H) \subset L^p([0,T]\times\Omega; V^*)$. Hence, $\bar{X}$ belongs to $L^p([0,T]\times\Omega; V^*)$ as well.

The dual of a space separates points by virtue of the Hahn-Banach theorem. Moreover, by Corollary 1.2.12, we know that the dual of $L^p([0,T]\times\Omega; V^*)$ is isometrically isomorphic to $L^{\frac{p}{p-1}}([0,T]\times\Omega; V)$. In the latter space, the space of $\bigcup_{n\geq 1} H_n$-valued simple functions is dense by Lemma 1.2.9, since $\bigcup_{n\geq 1} H_n$ is dense in $V$ with respect to the $V$-norm. Denote the space of $\bigcup_{n\geq 1} H_n$-valued simple functions by $S$.

To show that $X$ and $\bar{X}$ coincide, it therefore suffices to check that
$$\mathbb{E}\Bigl[ \int_0^T \langle X(t), \phi(t) \rangle_{V^*\times V} \,\mathrm{d}t \Bigr] = \mathbb{E}\Bigl[ \int_0^T \bigl\langle \bar{X}(t), \phi(t) \bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] \qquad (3.93)$$
holds for all $\phi \in S$.

Let $\varphi \in L^{\infty}([0,T]\times\Omega; \mathbb{R})$ and $v \in \bigcup_{n\geq 1} H_n \subset V$ be arbitrary and consider the map $(t,\omega) \mapsto \varphi(t,\omega) \cdot v$, which belongs to $L^{\frac{p}{p-1}}([0,T]\times\Omega; V)$. We will show that (3.93) holds for every such map. Then, linearity allows us to conclude that (3.93) holds for every $\bigcup_{n\geq 1} H_n$-valued simple function, proving the lemma.

Take arbitrary $\varphi \in L^{\infty}([0,T]\times\Omega; \mathbb{R})$ and $v \in \bigcup_{n\geq 1} H_n$. As $X^{(n_k)}$ converges weakly to $\bar{X}$ in the Hilbert space $L^2([0,T]\times\Omega; H)$, the same holds in $L^p([0,T]\times\Omega; V^*)$ by Corollary 1.1.42. Then, Corollary 1.2.12 yields that
$$\mathbb{E}\Bigl[ \int_0^T \bigl\langle \bar{X}(t), \varphi(t) v \bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] = \lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\langle X^{(n_k)}(t), \varphi(t) v \bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr]. \qquad (3.94)$$
The stochastic differential equation (3.84) admits a strong solution for every $n_k$ by Lemma 3.5.5. Then, if we plug in this solution, separate the integrals (note that this is not yet verified to be valid, but it follows from the partial results we derive below) and use the dualization property (3.4), we find that
$$\begin{aligned} \mathbb{E}\Bigl[ \int_0^T \bigl\langle X^{(n_k)}(t), \varphi(t) v \bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] ={}& \mathbb{E}\Bigl[ \int_0^T \bigl\langle X^{(n_k)}(0), \varphi(t) v \bigr\rangle_H \,\mathrm{d}t \Bigr] \\ &+ \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} A\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}s, \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] \\ &+ \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \,\mathrm{d}t \Bigr] \end{aligned} \qquad (3.95)$$
for every $k \in \mathbb{N}$. We continue this proof in four parts: the first three derive partial results and the last one glues them together.

——————————

(Part 1.) The first term on the right-hand side equals $\mathbb{E}\bigl[ \langle X^{(n_k)}(0), v \rangle_H \int_0^T \varphi(t) \,\mathrm{d}t \bigr]$. Moreover, the inequality $\| X^{(n_k)}(0) \|_H \leq \| X_0 \|_H$ holds, the map $\varphi$ belongs to $L^{\infty}([0,T]\times\Omega; \mathbb{R})$, and $X^{(n_k)}(0)$ converges pointwise to $X_0$, the latter random variable being (square-)integrable. Hence, applying Lebesgue's dominated convergence theorem, we obtain
$$\mathbb{E}\Bigl[ \bigl\langle X^{(n_k)}(0), v \bigr\rangle_H \int_0^T \varphi(t) \,\mathrm{d}t \Bigr] \to \mathbb{E}\Bigl[ \langle X_0, v \rangle_H \int_0^T \varphi(t) \,\mathrm{d}t \Bigr] \qquad (3.96)$$
as $k \to \infty$.


——————————

(Part 2.) Using Lemma 1.2.6 and property (3.81) of the map $P_{n_k}$ (see Setting 3.5.1), we can rewrite the second term on the right-hand side of (3.95) to obtain
$$\begin{aligned} \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} A\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}s, \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] &= \mathbb{E}\Bigl[ \int_0^T \int_0^t \bigl\langle P_{n_k} A\bigl(s, X^{(n_k)}(s)\bigr), \varphi(t) v \bigr\rangle_{V^*\times V} \,\mathrm{d}s \,\mathrm{d}t \Bigr] \\ &= \mathbb{E}\Bigl[ \int_0^T \int_0^t \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \varphi(t) \cdot P_{n_k}(v) \bigr\rangle_{V^*\times V} \,\mathrm{d}s \,\mathrm{d}t \Bigr]. \end{aligned} \qquad (3.97)$$
Using Hölder's inequality, we find
$$\begin{aligned} \mathbb{E}\Bigl[ \int_0^T \int_0^T \Bigl| \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \varphi(t) \cdot P_{n_k}(v) \bigr\rangle_{V^*\times V} \Bigr| \,\mathrm{d}s \,\mathrm{d}t \Bigr] &\leq \mathbb{E}\Bigl[ \int_0^T \int_0^T \bigl\| A\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{V^*} \bigl\| \varphi(t) \cdot P_{n_k}(v) \bigr\|_V \,\mathrm{d}s \,\mathrm{d}t \Bigr] \\ &\leq \mathbb{E}\Bigl[ \int_0^T \bigl\| A\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{V^*} \Bigl( \int_0^T |\varphi(t)| \,\mathrm{d}t \Bigr) \bigl\| P_{n_k}(v) \bigr\|_V \,\mathrm{d}s \Bigr] \\ &\leq \Bigl( \mathbb{E}\Bigl[ \int_0^T \bigl\| A\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{V^*}^{\frac{\alpha}{\alpha-1}} \,\mathrm{d}s \Bigr] \Bigr)^{\frac{\alpha-1}{\alpha}} \cdot \Bigl( \mathbb{E}\Bigl[ \int_0^T \Bigl( \int_0^T |\varphi(t)| \,\mathrm{d}t \Bigr)^{\alpha} \bigl\| P_{n_k}(v) \bigr\|_V^{\alpha} \,\mathrm{d}s \Bigr] \Bigr)^{\frac{1}{\alpha}}. \end{aligned}$$
The right-hand side is finite: applying Lemma 3.1.5 to the first factor and using that $X^{(n_k)} \in D$ shows that it is finite, while the second factor is finite because $\varphi \in L^{\infty}([0,T]\times\Omega; \mathbb{R})$. Then, we can swap the order of integration of the inner two integrals in (3.97), yielding
$$\begin{aligned} \mathbb{E}\Bigl[ \int_0^T \int_0^t \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \varphi(t) \cdot P_{n_k}(v) \bigr\rangle_{V^*\times V} \,\mathrm{d}s \,\mathrm{d}t \Bigr] &= \mathbb{E}\Bigl[ \int_0^T \int_s^T \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \varphi(t) \cdot P_{n_k}(v) \bigr\rangle_{V^*\times V} \,\mathrm{d}t \,\mathrm{d}s \Bigr] \\ &= \mathbb{E}\Bigl[ \int_0^T \Bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) \cdot P_{n_k}(v) \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr]. \end{aligned}$$
Since $v \in \bigcup_{n\geq 1} H_n$, we know by the definition of the map $P_{n_k}$ that $P_{n_k}(v) = v$ for all large enough $k$, see property (3.83) on page 82. Hence,
$$\lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \Bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) \cdot P_{n_k}(v) \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr] = \lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \Bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) \cdot v \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr].$$
The map $[0,T]\times\Omega \to V : (s,\omega) \mapsto \bigl( \int_s^T \varphi(t) \,\mathrm{d}t \bigr) v$ belongs to $L^{\infty}([0,T]\times\Omega; V)$, so in particular it belongs to $L^{\alpha}([0,T]\times\Omega; V)$. Then, Remark B yields
$$\lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \Bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) \cdot v \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr] = \mathbb{E}\Bigl[ \int_0^T \Bigl\langle Y(s), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr]$$


as $k \to \infty$. Since $Y \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega; V^*)$, we can use a Hölder-type argument to reverse the order of integration again in the last term, yielding
$$\mathbb{E}\Bigl[ \int_0^T \Bigl\langle Y(s), \Bigl( \int_s^T \varphi(t) \,\mathrm{d}t \Bigr) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}s \Bigr] = \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t Y(s) \,\mathrm{d}s, \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr].$$
Combining all these observations, we find
$$\mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} A\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}s, \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] \to \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t Y(s) \,\mathrm{d}s, \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] \qquad (3.98)$$
as $k \to \infty$.

——————————

(Part 3.) We now tackle the third term of (3.95). By (3.91), Proposition 2.1.15 (for $p = 2$) and the Itô isometry (2.7), we have for each $k \in \mathbb{N}$,
$$\begin{aligned} \mathbb{E}\Bigl[ \Bigl\| \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s) \Bigr\|_H^2 \Bigr] &= \mathbb{E}\Bigl[ \Bigl\| \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \,\mathrm{d}W(s) \Bigr\|_H^2 \Bigr] \\ &\leq 4\, \mathbb{E}\Bigl[ \Bigl\| \int_0^T P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \,\mathrm{d}W(s) \Bigr\|_H^2 \Bigr] = \Bigl\| \int_0^{\cdot} P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \,\mathrm{d}W(s) \Bigr\|_{\mathcal{M}^2_T(H)}^2 \\ &= \bigl\| P_{n_k} B\bigl(\cdot, X^{(n_k)}(\cdot)\bigr) \tilde{P}_{n_k} \bigr\|_T^2 = \mathbb{E}\Bigl[ \int_0^T \bigl\| P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr], \end{aligned}$$
and the right-hand side is finite, as shown in Corollary 3.5.14. This result, combined with Hölder's inequality and the uniform boundedness of $\varphi$, implies that we can swap the order of integration of the two outermost integrals in the third term of (3.95):
$$\mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \,\mathrm{d}t \Bigr] = \int_0^T \mathbb{E}\Bigl[ \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}t. \qquad (3.99)$$
Moreover, we have by a similar argument, Lemma 1.1.21 and Lemma 3.5.11, that
$$\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \Bigl\| \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s) \Bigr\|_H^2 \Bigr] \leq \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \tilde{P}_{n_k} \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr] \leq \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[ \int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 \,\mathrm{d}s \Bigr] < \infty. \qquad (3.100)$$
The map $F : [0,T]\times\Omega \to H : (t,\omega) \mapsto \varphi(t,\omega) \cdot v$ belongs to $L^{\infty}([0,T]\times\Omega; H)$ and therefore to $L^2([0,T]\times\Omega; H)$. If we apply Proposition 1.2.13, there exists a set $I \subset [0,T]$ of full Lebesgue measure such that the map $\omega \mapsto \varphi(t,\omega) \cdot v$ belongs to $L^2(\Omega, \mathcal{F}, \mathbb{P}; H)$ for all $t \in I$.


Then, we have by the dominated convergence theorem, which is applicable by (3.100) and Remark C,
$$\begin{aligned} \lim_{k\to\infty} \int_0^T \mathbb{E}\Bigl[ \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}t &= \lim_{k\to\infty} \int_I \mathbb{E}\Bigl[ \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}\lambda \\ &= \int_I \lim_{k\to\infty} \mathbb{E}\Bigl[ \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}\lambda \\ &= \int_I \mathbb{E}\Bigl[ \Bigl\langle \int_0^t Z(s) \,\mathrm{d}W(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}\lambda = \int_0^T \mathbb{E}\Bigl[ \Bigl\langle \int_0^t Z(s) \,\mathrm{d}W(s), \varphi(t) v \Bigr\rangle_H \Bigr] \,\mathrm{d}t. \end{aligned}$$
A similar Hölder-type argument and the boundedness of $\varphi$ allow us to swap the order of integration in the last term again. Together, we obtain that
$$\mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t P_{n_k} B\bigl(s, X^{(n_k)}(s)\bigr) \,\mathrm{d}W^{(n_k)}(s), \varphi(t) v \Bigr\rangle_H \,\mathrm{d}t \Bigr] \to \mathbb{E}\Bigl[ \int_0^T \Bigl\langle \int_0^t Z(s) \,\mathrm{d}W(s), \varphi(t) v \Bigr\rangle_H \,\mathrm{d}t \Bigr] \qquad (3.101)$$
as $k \to \infty$.

——————————

(Part 4.) Recall the definition of the process $X$ given in (3.92). When we take the limit $k \to \infty$ in (3.95), we find, using the convergence results (3.96), (3.98) and (3.101) and implicitly employing the dualization property and the linearity of the integrals, the inner product and the duality pairing,
$$\lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\langle X^{(n_k)}(t), \varphi(t) v \bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] = \mathbb{E}\Bigl[ \int_0^T \Bigl\langle X_0 + \int_0^t Y(s) \,\mathrm{d}s + \int_0^t Z(s) \,\mathrm{d}W(s), \varphi(t) v \Bigr\rangle_{V^*\times V} \,\mathrm{d}t \Bigr] = \mathbb{E}\Bigl[ \int_0^T \langle X(t), \varphi(t) v \rangle_{V^*\times V} \,\mathrm{d}t \Bigr].$$
Comparing this result with (3.94) and using that weak limits are unique, we see that the processes $X$ and $\bar{X}$ coincide $\lambda \otimes \mathbb{P}$-almost everywhere on $[0,T]\times\Omega$. This means that Theorem 3.3.17 applies to the process $X$, as desired.

Lemma 3.5.18. Suppose that the assumptions in Setting 3.5.1 are in force. Then, the inequality
$$\mathbb{E}\Bigl[ \int_0^T \psi(t) \| X(t) \|_H^2 \,\mathrm{d}t \Bigr] \leq \liminf_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| X^{(n_k)}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] \qquad (3.102)$$
holds for every non-negative $\psi \in L^{\infty}([0,T]; \mathbb{R})$.

Proof. Compare page 106 in [23]; in particular, compare (4.54).

Take an arbitrary non-negative $\psi \in L^{\infty}([0,T]; \mathbb{R})$. From Lemma 3.5.17 and Theorem 3.3.17, we find that $\mathbb{E}\bigl[ \sup_{t \leq T} \| X(t) \|_H^2 \bigr] < \infty$. Since $\psi$ is uniformly bounded and the process $X$ belongs to $L^2([0,T]\times\Omega; H)$, the product of these processes, $\psi \cdot X$, belongs to $L^2([0,T]\times\Omega; H)$ as well. Then,


since the sequence $\{X^{(n_k)} : k \geq 1\}$ converges weakly to $\bar{X}$ in this Hilbert space (Remark A.2), we find that
$$\mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| \bar{X}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] = \mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\langle \bar{X}(t), \bar{X}(t) \bigr\rangle_H \,\mathrm{d}t \Bigr] = \mathbb{E}\Bigl[ \int_0^T \bigl\langle \psi(t) \cdot \bar{X}(t), \bar{X}(t) \bigr\rangle_H \,\mathrm{d}t \Bigr] = \lim_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\langle \psi(t) \cdot \bar{X}(t), X^{(n_k)}(t) \bigr\rangle_H \,\mathrm{d}t \Bigr].$$
We can even take the limit inferior, as the limit exists. Moreover, as $\psi$ is non-negative, we may take its square root. Applying the Cauchy-Schwarz inequality yields
$$\begin{aligned} \mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| \bar{X}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] &= \liminf_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\langle \psi(t) \cdot \bar{X}(t), X^{(n_k)}(t) \bigr\rangle_H \,\mathrm{d}t \Bigr] = \liminf_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\langle \sqrt{\psi(t)} \cdot \bar{X}(t), \sqrt{\psi(t)} \cdot X^{(n_k)}(t) \bigr\rangle_H \,\mathrm{d}t \Bigr] \\ &\leq \liminf_{k\to\infty} \mathbb{E}\Bigl[ \int_0^T \bigl\| \sqrt{\psi(t)} \cdot \bar{X}(t) \bigr\|_H \cdot \bigl\| \sqrt{\psi(t)} \cdot X^{(n_k)}(t) \bigr\|_H \,\mathrm{d}t \Bigr]. \end{aligned}$$
Then, if we take the square roots of $\psi$ outside the norms and apply Hölder's inequality, we obtain that
$$\mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| \bar{X}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] \leq \Bigl( \mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| \bar{X}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] \Bigr)^{\frac{1}{2}} \cdot \liminf_{k\to\infty} \Bigl( \mathbb{E}\Bigl[ \int_0^T \psi(t) \bigl\| X^{(n_k)}(t) \bigr\|_H^2 \,\mathrm{d}t \Bigr] \Bigr)^{\frac{1}{2}}.$$
If the first factor on the right-hand side is zero, inequality (3.102) is trivial. If this is not the case, we divide both sides by the first factor on the right-hand side, square both sides and observe that $X$ and $\bar{X}$ coincide $\lambda \otimes \mathbb{P}$-almost everywhere by Lemma 3.5.17. Then, inequality (3.102) holds, as desired.

If the first term on the right-hand side is zero, inequality (3.102) is trivial. If this is not the case,we divide both sides by the first term on the right-hand side, square both sides and observethat X and X coincide λ⊗ P-almost everywhere using Lemma 3.5.17. Then, inequality (3.102)holds, as desired.

Lemma 3.5.19. Suppose that the assumptions in Setting 3.5.1 are in force. Then, the inequality
$$\begin{aligned} \mathbb{E}\Bigl[ e^{-ct} \bigl\| X^{(n_k)}(t) \bigr\|_H^2 \Bigr] - \mathbb{E}\Bigl[ \bigl\| X^{(n_k)}(0) \bigr\|_H^2 \Bigr] \leq \mathbb{E}\Bigl[ \int_0^t e^{-cs} \Bigl( & 2\bigl\langle A(s, \phi(s)), X^{(n_k)}(s) \bigr\rangle_{V^*\times V} + 2\bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr) - A(s, \phi(s)), \phi(s) \bigr\rangle_{V^*\times V} \\ & - \| B(s, \phi(s)) \|_{L_2(U,H)}^2 + 2\bigl\langle B\bigl(s, X^{(n_k)}(s)\bigr), B(s, \phi(s)) \bigr\rangle_{L_2(U,H)} \\ & - 2c\bigl\langle X^{(n_k)}(s), \phi(s) \bigr\rangle_H + c \| \phi(s) \|_H^2 \Bigr) \,\mathrm{d}s \Bigr] \end{aligned} \qquad (3.103)$$
holds for each $\phi \in D$ and every $t \in [0,T]$.

Proof. This proof is an extended derivation of displays (4.55) and (4.56) on pages 106 and 107 in [23].

When we apply (3.86) and (3.88) from the proof of Lemma 3.5.11 to (3.85) on page 86 and take expectations, it follows that
$$\mathbb{E}\Bigl[ e^{-ct} \bigl\| X^{(n_k)}(t) \bigr\|_H^2 \Bigr] - \mathbb{E}\Bigl[ \bigl\| X^{(n_k)}(0) \bigr\|_H^2 \Bigr] \leq \mathbb{E}\Bigl[ \int_0^t e^{-cs} \Bigl( 2\bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), X^{(n_k)}(s) \bigr\rangle_{V^*\times V} + \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 - c \bigl\| X^{(n_k)}(s) \bigr\|_H^2 \Bigr) \,\mathrm{d}s \Bigr]. \qquad (3.104)$$
We rewrite the integrand on the right-hand side by 'adding zero' three times, using the properties of the duality pairing between $V^*$ and $V$ and the linearity of the inner product. First, note that for every $\phi \in L^{\alpha}([0,T]\times\Omega; V) \cap L^2([0,T]\times\Omega; H)$ and all $s \in [0,T]$, we have
$$\bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr), X^{(n_k)}(s) \bigr\rangle_{V^*\times V} = \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr) - A(s, \phi(s)), X^{(n_k)}(s) - \phi(s) \bigr\rangle_{V^*\times V} + \bigl\langle A(s, \phi(s)), X^{(n_k)}(s) \bigr\rangle_{V^*\times V} + \bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr) - A(s, \phi(s)), \phi(s) \bigr\rangle_{V^*\times V}.$$
Moreover, it holds that
$$\bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2 = \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) - B(s, \phi(s)) \bigr\|_{L_2(U,H)}^2 - \| B(s, \phi(s)) \|_{L_2(U,H)}^2 + 2\bigl\langle B\bigl(s, X^{(n_k)}(s)\bigr), B(s, \phi(s)) \bigr\rangle_{L_2(U,H)}$$
for all $s \in [0,T]$, and
$$-c \bigl\| X^{(n_k)}(s) \bigr\|_H^2 = -c \bigl\| X^{(n_k)}(s) - \phi(s) \bigr\|_H^2 - 2c \bigl\langle X^{(n_k)}(s), \phi(s) \bigr\rangle_H + c \| \phi(s) \|_H^2.$$
By weak monotonicity, Condition (H2), we know that on $\Omega$,
$$2\bigl\langle A\bigl(s, X^{(n_k)}(s)\bigr) - A(s, \phi(s)), X^{(n_k)}(s) - \phi(s) \bigr\rangle_{V^*\times V} + \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) - B(s, \phi(s)) \bigr\|_{L_2(U,H)}^2 - c \bigl\| X^{(n_k)}(s) - \phi(s) \bigr\|_H^2 \leq 0$$
for all $s \in [0,T]$. Hence, plugging these observations into (3.104), we see that inequality (3.103) holds for every $t \in [0,T]$, as desired.
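The 'adding zero' identities used above are pure vector algebra; the following quick numerical spot check on random vectors in $\mathbb{R}^5$ (an illustration, not part of the proof) confirms two of them:

```python
import random

# Spot check of two algebraic identities from the proof, on random vectors:
#   ||b_x||^2 = ||b_x - b_p||^2 - ||b_p||^2 + 2<b_x, b_p>   (with b_x = x, b_p = p)
#   -c||x||^2 = -c||x - p||^2 - 2c<x, p> + c||p||^2
random.seed(1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [random.uniform(-1, 1) for _ in range(5)]
p = [random.uniform(-1, 1) for _ in range(5)]
c = 0.7
diff = [a - b for a, b in zip(x, p)]

lhs_b = dot(x, x)
rhs_b = dot(diff, diff) - dot(p, p) + 2 * dot(x, p)

lhs_c = -c * dot(x, x)
rhs_c = -c * dot(diff, diff) - 2 * c * dot(x, p) + c * dot(p, p)

err_b = abs(lhs_b - rhs_b)   # should vanish up to rounding
err_c = abs(lhs_c - rhs_c)   # should vanish up to rounding
```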

In the next two lemmas, we will derive integrability and weak convergence results.


Lemma 3.5.20. Suppose that the assumptions in Setting 3.5.1 hold. Then, the quantity
\[
\begin{aligned}
e^{|c|T}\biggl(
& 2\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \|A(s,\varphi(s))\|_{V^*}\, \bigl\| X^{(n_k)}(s) \bigr\|_V\, ds\Bigr]
+ 2\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| A\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{V^*}\, \|\varphi(s)\|_V\, ds\Bigr] \\
&+ 2\,\mathbb{E}\Bigl[\int_0^T \|A(s,\varphi(s))\|_{V^*}\, \|\varphi(s)\|_V\, ds\Bigr]
+ \mathbb{E}\Bigl[\int_0^T \|B(s,\varphi(s))\|_{L_2(U,H)}^2\, ds\Bigr] \\
&+ 2\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}\, \|B(s,\varphi(s))\|_{L_2(U,H)}\, ds\Bigr] \\
&+ 2|c|\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_H\, \|\varphi(s)\|_H\, ds\Bigr]
+ |c|\,\mathbb{E}\Bigl[\int_0^T \|\varphi(s)\|_H^2\, ds\Bigr]
\biggr)
\end{aligned}
\tag{3.105}
\]
is finite for every $\varphi \in D$. In particular, this means that all terms are finite and that the right-hand side of (3.103) is finite for every $\varphi \in D$.

Proof. In this proof, we will ignore the constants “2” and “$e^{|c|T}$”, as these do not alter the finiteness results. Moreover, we will often refer to inequalities (3.12) and (3.13) and Lemmas 3.1.5 and 3.1.6. We will tackle the terms one by one. Let $\varphi \in D$ be arbitrary.

Applying Hölder’s inequality to the first term of (3.105), we find
\[
\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \|A(s,\varphi(s))\|_{V^*}\, \bigl\| X^{(n_k)}(s) \bigr\|_V\, ds\Bigr]
\le \Bigl(\mathbb{E}\Bigl[\int_0^T \|A(s,\varphi(s))\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
\cdot \sup_{k\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{1}{\alpha}}.
\]
The first factor on the right-hand side is finite by Lemma 3.1.5, since $\varphi \in D \subset L^\alpha([0,T]\times\Omega;V)$; the other factor is finite by Lemma 3.5.11.

On the second term of (3.105), we apply Hölder’s inequality and inequality (3.12) to obtain
\[
\begin{aligned}
&\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| A\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{V^*}\, \|\varphi(s)\|_V\, ds\Bigr] \\
&\le \biggl(\Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
+ c_3 \sup_{k\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}\biggr)
\cdot \Bigl(\mathbb{E}\Bigl[\int_0^T \|\varphi(s)\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{1}{\alpha}}.
\end{aligned}
\]
The right-hand side is finite by Lemma 3.5.11, since $g \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;\mathbb{R})$ and $\varphi \in D$.

The third term of (3.105) now follows from a similar argument, since $\varphi \in D$. For the fourth term of (3.105), we invoke Lemma 3.1.6 and conclude that this term is finite as well, since again $\varphi \in D$.

The fifth term of (3.105) is also finite, but this does not follow directly. First, by Hölder’s inequality, we find
\[
\begin{aligned}
&\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}\, \|B(s,\varphi(s))\|_{L_2(U,H)}\, ds\Bigr] \\
&\le \Bigl(\mathbb{E}\Bigl[\int_0^T \|B(s,\varphi(s))\|_{L_2(U,H)}^2\, ds\Bigr]\Bigr)^{\frac12}
\cdot \sup_{k\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr]\Bigr)^{\frac12}
\end{aligned}
\]


and conclude that the first factor is finite by Lemma 3.1.6, as $\varphi \in D$. For the other factor, we apply (3.13) and use that $\sqrt{x} \le x + 1$ for all $x \ge 0$ and that $\sup_k (a_k + b_k + c_k) \le \sup_k a_k + \sup_k b_k + \sup_k c_k$. Then, we see that
\[
\begin{aligned}
1 + \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| B\bigl(s, X^{(n_k)}(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr]
&\le 1 + |c_1| \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_H^2\, ds\Bigr]
+ \mathbb{E}\Bigl[\int_0^T |f(s)|\, ds\Bigr] \\
&\quad + 2 \sup_{k\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{1}{\alpha}}
\cdot \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \\
&\quad + |2c_3 - c_2| \sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_V^\alpha\, ds\Bigr].
\end{aligned}
\]

Then, by Tonelli’s theorem and the inequality $\int_0^T \zeta(t)\, dt \le T \cdot \sup_{t\le T} \zeta(t)$ for every non-negative real-valued function $\zeta$, we find
\[
\sup_{k\in\mathbb{N}} \mathbb{E}\Bigl[\int_0^T \bigl\| X^{(n_k)}(s) \bigr\|_H^2\, ds\Bigr]
\le \sup_{k\in\mathbb{N}} \int_0^T \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(s) \bigr\|_H^2\Bigr]\, ds
\le T \cdot \sup_{k\in\mathbb{N}} \sup_{t\le T} \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(t) \bigr\|_H^2\Bigr],
\]
which is finite by Lemma 3.5.11 again. This last argument also implies that the sixth term of (3.105) is finite, whereas the seventh term of (3.105) is finite as well, since $\varphi \in D$. This proves that (3.105) is finite, as desired.

We can apply the weak convergence results from Corollary 3.5.14 to obtain the following limits.

Lemma 3.5.21. If the assumptions in Setting 3.5.1 are in force, then we have
\begin{align}
\mathbb{E}\Bigl[\int_0^t \bigl\langle 2e^{-cs}A(s,\varphi(s)),\, X^{(n_k)}(s)\bigr\rangle_{V^*\times V}\, ds\Bigr] &\to \mathbb{E}\Bigl[\int_0^t \bigl\langle 2e^{-cs}A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}\, ds\Bigr], \tag{3.106}\\
\mathbb{E}\Bigl[\int_0^t \bigl\langle A\bigl(s,X^{(n_k)}(s)\bigr),\, 2e^{-cs}\varphi(s)\bigr\rangle_{V^*\times V}\, ds\Bigr] &\to \mathbb{E}\Bigl[\int_0^t \bigl\langle Y(s),\, 2e^{-cs}\varphi(s)\bigr\rangle_{V^*\times V}\, ds\Bigr], \tag{3.107}\\
\mathbb{E}\Bigl[\int_0^t \bigl\langle B\bigl(s,X^{(n_k)}(s)\bigr),\, 2e^{-cs}B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}\, ds\Bigr] &\to \mathbb{E}\Bigl[\int_0^t \bigl\langle Z(s),\, 2e^{-cs}B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}\, ds\Bigr], \tag{3.108}\\
\mathbb{E}\Bigl[\int_0^t \bigl\langle X^{(n_k)}(s),\, -2ce^{-cs}\varphi(s)\bigr\rangle_H\, ds\Bigr] &\to \mathbb{E}\Bigl[\int_0^t \bigl\langle \bar X(s),\, -2ce^{-cs}\varphi(s)\bigr\rangle_H\, ds\Bigr] \tag{3.109}
\end{align}
for every $t \in [0,T]$ and every $\varphi \in D$, and
\[
\mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr] \to \mathbb{E}\bigl[\|X_0\|_H^2\bigr] \tag{3.111}
\]
as $k \to \infty$.


Proof. We will derive (3.106) first. Let $t \in [0,T]$ and $\varphi \in D$ be arbitrary. Consider the map $[0,T]\times\Omega \to V^*\colon (s,\omega) \mapsto \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs}A(s,\varphi(s,\omega),\omega)$. Then, the inequality
\[
\mathbb{E}\Bigl[\int_0^T \bigl\| \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs}A(s,\varphi(s,\omega),\omega) \bigr\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]
\le \bigl(2e^{|c|T}\bigr)^{\frac{\alpha}{\alpha-1}}\, \mathbb{E}\Bigl[\int_0^T \|A(s,\varphi(s,\omega),\omega)\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]
\]
holds. Since $\varphi \in D$, Lemma 3.1.5 implies that the right-hand side is finite. This means that $(s,\omega) \mapsto \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs}A(s,\varphi(s,\omega),\omega)$ belongs to $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$, so we obtain from Remark A.1 that

\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^t \bigl\langle 2e^{-cs}A(s,\varphi(s)),\, X^{(n_k)}(s)\bigr\rangle_{V^*\times V}\, ds\Bigr]
&= \mathbb{E}\Bigl[\int_0^T \bigl\langle \mathbf{1}_{[0,t]}(s)\, 2e^{-cs}A(s,\varphi(s)),\, X^{(n_k)}(s)\bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&\to \mathbb{E}\Bigl[\int_0^T \bigl\langle \mathbf{1}_{[0,t]}(s)\, 2e^{-cs}A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^t \bigl\langle 2e^{-cs}A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}\, ds\Bigr]
\end{aligned}
\]
as $k \to \infty$, proving (3.106).

The map $[0,T]\times\Omega \to V\colon (s,\omega) \mapsto \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs}\varphi(s,\omega)$ belongs to $L^\alpha([0,T]\times\Omega;V)$, since $\varphi \in D \subset L^\alpha([0,T]\times\Omega;V)$, so
\[
\mathbb{E}\Bigl[\int_0^t \bigl\langle A\bigl(s,X^{(n_k)}(s)\bigr),\, 2e^{-cs}\varphi(s)\bigr\rangle_{V^*\times V}\, ds\Bigr]
\to \mathbb{E}\Bigl[\int_0^t \bigl\langle Y(s),\, 2e^{-cs}\varphi(s)\bigr\rangle_{V^*\times V}\, ds\Bigr]
\]
when we look at Remark B and repeat the computations above, proving (3.107).

Moreover, note that
\[
\mathbb{E}\Bigl[\int_0^T \bigl\| \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs} B(s,\varphi(s)) \bigr\|_{L_2(U,H)}^2\, ds\Bigr]
\le 4e^{2|c|T}\, \mathbb{E}\Bigl[\int_0^T \|B(s,\varphi(s))\|_{L_2(U,H)}^2\, ds\Bigr].
\]
The right-hand side is finite by Lemma 3.1.6, as $\varphi \in D$. This means that the map $[0,T]\times\Omega \to L_2(U,H)\colon (s,\omega) \mapsto \mathbf{1}_{[0,t]}(s)\cdot 2e^{-cs} B(s,\varphi(s,\omega),\omega)$ belongs to $L^2([0,T]\times\Omega;L_2(U,H))$. Then, using Remark C, it follows that
\[
\mathbb{E}\Bigl[\int_0^t \bigl\langle B\bigl(s,X^{(n_k)}(s)\bigr),\, 2e^{-cs}B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}\, ds\Bigr]
\to \mathbb{E}\Bigl[\int_0^t \bigl\langle Z(s),\, 2e^{-cs}B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}\, ds\Bigr]
\]
as $k \to \infty$, proving (3.108).

The process $\varphi$ also belongs to $L^2([0,T]\times\Omega;H)$, so using Remark A.2 we find that
\[
\mathbb{E}\Bigl[\int_0^t \bigl\langle X^{(n_k)}(s),\, -2ce^{-cs}\varphi(s)\bigr\rangle_H\, ds\Bigr]
\to \mathbb{E}\Bigl[\int_0^t \bigl\langle \bar X(s),\, -2ce^{-cs}\varphi(s)\bigr\rangle_H\, ds\Bigr]
\]
when $k \to \infty$, proving the fourth convergence result (3.109).

The final convergence result (3.111) follows from the Dominated Convergence Theorem 1.2.7. We have $\| X^{(n_k)}(0) \|_H^2 \le \|X_0\|_H^2$ for all $k \in \mathbb{N}$. As $X_0 \in L^2(\Omega;H)$ and since $X^{(n_k)}(0)$ converges to $X_0$ in $H$ almost everywhere on $\Omega$, it follows that
\[
\mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr] \to \mathbb{E}\bigl[\|X_0\|_H^2\bigr]
\]
when $k \to \infty$.

Remark 3.5.22. From this point on, we take a slightly different route compared to the proof of Theorem 4.2.5 in [23]: looking at the second display on page 106 in [23], we do not take an arbitrary $\psi$, but take it equal to one in what follows. However, the arguments used here are still based on the observations made in [23].


Lemma 3.5.23. Suppose that the assumptions in Setting 3.5.1 hold. Then, we find that
\[
\begin{aligned}
&\int_0^T \Bigl(\mathbb{E}\bigl[e^{-ct}\|X(t)\|_H^2\bigr] - \mathbb{E}\bigl[\|X_0\|_H^2\bigr]\Bigr)\, dt \\
&\le \mathbb{E}\biggl[\int_0^T (T-s)\, e^{-cs}\Bigl(2\bigl\langle A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}
+ 2\bigl\langle Y(s) - A(s,\varphi(s)),\, \varphi(s)\bigr\rangle_{V^*\times V} \\
&\qquad - \|B(s,\varphi(s))\|_{L_2(U,H)}^2 + 2\bigl\langle Z(s),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}
- 2c\bigl\langle \bar X(s),\, \varphi(s)\bigr\rangle_H + c\,\|\varphi(s)\|_H^2\Bigr)\, ds\biggr]
\end{aligned}
\tag{3.112}
\]
for every $\varphi \in D$.

Proof. Compare the second display on page 106 in [23]. By integrating both sides of (3.103), we obtain
\[
\begin{aligned}
&\int_0^T \Bigl(\mathbb{E}\Bigl[e^{-ct}\bigl\| X^{(n_k)}(t) \bigr\|_H^2\Bigr] - \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr]\Bigr)\, dt \\
&\le \int_0^T \mathbb{E}\biggl[\int_0^t e^{-cs}\Bigl(2\bigl\langle A(s,\varphi(s)),\, X^{(n_k)}(s)\bigr\rangle_{V^*\times V}
+ 2\bigl\langle A\bigl(s,X^{(n_k)}(s)\bigr) - A(s,\varphi(s)),\, \varphi(s)\bigr\rangle_{V^*\times V} \\
&\qquad - \|B(s,\varphi(s))\|_{L_2(U,H)}^2 + 2\bigl\langle B\bigl(s,X^{(n_k)}(s)\bigr),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}
- 2c\bigl\langle X^{(n_k)}(s),\, \varphi(s)\bigr\rangle_H + c\,\|\varphi(s)\|_H^2\Bigr)\, ds\biggr]\, dt.
\end{aligned}
\tag{3.113}
\]

We have, by Lemma 3.5.20 and Tonelli’s theorem,
\[
\begin{aligned}
\int_0^T \Bigl(\mathbb{E}\Bigl[e^{-ct}\bigl\| X^{(n_k)}(t) \bigr\|_H^2\Bigr] - \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr]\Bigr)\, dt
&= \int_0^T \mathbb{E}\Bigl[e^{-ct}\bigl\| X^{(n_k)}(t) \bigr\|_H^2\Bigr]\, dt - \int_0^T \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr]\, dt \\
&= \mathbb{E}\Bigl[\int_0^T e^{-ct}\bigl\| X^{(n_k)}(t) \bigr\|_H^2\, dt\Bigr] - \int_0^T \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr]\, dt,
\end{aligned}
\tag{3.114}
\]

so we can apply Lemma 3.5.17 to the first term on the right-hand side, replacing $\psi$ there by the non-negative, uniformly bounded map $[0,T] \to [0,\infty)\colon t \mapsto e^{-ct}$. Then, taking the limit inferior on both sides of (3.113) and using (3.114) and the convergence result (3.111), we find
\[
\begin{aligned}
&\int_0^T \Bigl(\mathbb{E}\bigl[e^{-ct}\|X(t)\|_H^2\bigr] - \mathbb{E}\bigl[\|X_0\|_H^2\bigr]\Bigr)\, dt
= \mathbb{E}\Bigl[\int_0^T e^{-ct}\|X(t)\|_H^2\, dt\Bigr] - \int_0^T \mathbb{E}\bigl[\|X_0\|_H^2\bigr]\, dt \\
&\le \liminf_{k\to\infty} \Bigl(\mathbb{E}\Bigl[\int_0^T e^{-ct}\bigl\| X^{(n_k)}(t) \bigr\|_H^2\, dt\Bigr] - \int_0^T \mathbb{E}\Bigl[\bigl\| X^{(n_k)}(0) \bigr\|_H^2\Bigr]\, dt\Bigr) \\
&\le \liminf_{k\to\infty} \int_0^T \mathbb{E}\biggl[\int_0^t e^{-cs}\Bigl(2\bigl\langle A(s,\varphi(s)),\, X^{(n_k)}(s)\bigr\rangle_{V^*\times V}
+ 2\bigl\langle A\bigl(s,X^{(n_k)}(s)\bigr) - A(s,\varphi(s)),\, \varphi(s)\bigr\rangle_{V^*\times V} \\
&\qquad - \|B(s,\varphi(s))\|_{L_2(U,H)}^2 + 2\bigl\langle B\bigl(s,X^{(n_k)}(s)\bigr),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}
- 2c\bigl\langle X^{(n_k)}(s),\, \varphi(s)\bigr\rangle_H + c\,\|\varphi(s)\|_H^2\Bigr)\, ds\biggr]\, dt.
\end{aligned}
\]


On the right-hand side, we can interchange the outer integral and the limit by invoking Lebesgue’s Dominated Convergence Theorem, which is allowed in view of Lemma 3.5.20. Then, the convergence results (3.106), (3.107), (3.108) and (3.109) and implicit use of the linearity of integrals (which is again allowed by Lemma 3.5.20) yield that the right-hand side equals
\[
\begin{aligned}
&\int_0^T \mathbb{E}\biggl[\int_0^t e^{-cs}\Bigl(2\bigl\langle A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}
+ 2\bigl\langle Y(s) - A(s,\varphi(s)),\, \varphi(s)\bigr\rangle_{V^*\times V} \\
&\qquad - \|B(s,\varphi(s))\|_{L_2(U,H)}^2 + 2\bigl\langle Z(s),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}
- 2c\bigl\langle \bar X(s),\, \varphi(s)\bigr\rangle_H + c\,\|\varphi(s)\|_H^2\Bigr)\, ds\biggr]\, dt,
\end{aligned}
\tag{3.115}
\]
and an application of Fubini’s theorem (justified by Lemma 3.5.20) rewrites this double integral as the single integral with weight $(T-s)$ on the right-hand side of (3.112), as desired.

Lemma 3.5.24. Under the assumptions of Setting 3.5.1, the following identity holds:
\[
\mathbb{E}\bigl[e^{-ct}\|X(t)\|_H^2\bigr] - \mathbb{E}\bigl[\|X_0\|_H^2\bigr]
= \mathbb{E}\Bigl[\int_0^t e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\Bigr].
\tag{3.116}
\]

Proof. Apply the integration-by-parts formula (Proposition 1.4.9) to the monotone function $t \mapsto e^{-ct}$ (see Corollary 1.4.7) and the continuous function of bounded variation $t \mapsto \mathbb{E}\bigl[\|X(t)\|_H^2\bigr]$ (Corollary 3.3.48), and use (3.72). Also, compare (4.55) in [23].

We will combine Lemmas 3.5.23 and 3.5.24 into an important inequality.

Lemma 3.5.25 (A crucial inequality). If the assumptions in Setting 3.5.1 hold, we have
\[
\begin{aligned}
\mathbb{E}\biggl[\int_0^T (T-s)\, e^{-cs}\Bigl(&2\bigl\langle Y(s) - A(s,\varphi(s)),\, \bar X(s) - \varphi(s)\bigr\rangle_{V^*\times V} \\
&+ \|Z(s) - B(s,\varphi(s))\|_{L_2(U,H)}^2 - c\,\|\bar X(s) - \varphi(s)\|_H^2\Bigr)\, ds\biggr] \le 0
\end{aligned}
\tag{3.117}
\]
for every $\varphi \in D$.

Proof. This argument is an extended derivation of (4.66) in [23]. Plugging (3.116) into the left-hand side of (3.112), we find
\[
\int_0^T \Bigl(\mathbb{E}\bigl[e^{-ct}\|X(t)\|_H^2\bigr] - \mathbb{E}\bigl[\|X_0\|_H^2\bigr]\Bigr)\, dt
= \int_0^T \mathbb{E}\Bigl[\int_0^t e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\Bigr]\, dt.
\tag{3.118}
\]
First, we will show that we can apply Fubini’s theorem to the right-hand side. We have
\[
\begin{aligned}
&e^{|c|T} \int_0^T \mathbb{E}\Bigl[\int_0^T \Bigl(2\|Y(s)\|_{V^*}\,\|\bar X(s)\|_V + \|Z(s)\|_{L_2(U,H)}^2 + |c|\,\|X(s)\|_H^2\Bigr)\, ds\Bigr]\, dt \\
&= 2Te^{|c|T}\, \mathbb{E}\Bigl[\int_0^T \|Y(s)\|_{V^*}\,\|\bar X(s)\|_V\, ds\Bigr]
+ Te^{|c|T}\, \mathbb{E}\Bigl[\int_0^T \|Z(s)\|_{L_2(U,H)}^2\, ds\Bigr]
+ |c|\,Te^{|c|T}\, \mathbb{E}\Bigl[\int_0^T \|X(s)\|_H^2\, ds\Bigr].
\end{aligned}
\]


The second term on the right-hand side is finite, since $Z \in L^2([0,T]\times\Omega;L_2(U,H))$. The third term is finite, since $X$ coincides $\lambda\otimes\mathbb{P}$-almost everywhere with $\bar X$, which belongs to $L^2([0,T]\times\Omega;H)$ (see Remark A). To the first term, we can apply Hölder’s inequality to obtain
\[
\mathbb{E}\Bigl[\int_0^T \|Y(s)\|_{V^*}\,\|\bar X(s)\|_V\, ds\Bigr]
\le \Bigl(\mathbb{E}\Bigl[\int_0^T \|Y(s)\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \Bigl(\mathbb{E}\Bigl[\int_0^T \|\bar X(s)\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{1}{\alpha}} < \infty,
\]
since $Y \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$ and $\bar X \in L^\alpha([0,T]\times\Omega;V)$ by Remark A. In particular, if we

apply Fubini’s theorem to the right-hand side of (3.118), we obtain
\[
\begin{aligned}
&\int_0^T \mathbb{E}\Bigl[\int_0^t e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\Bigr]\, dt \\
&= \mathbb{E}\Bigl[\int_0^T \int_0^t e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\, dt\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\Bigr].
\end{aligned}
\]

Let $\varphi \in D$ be arbitrary. With these observations, (3.112) reads
\[
\begin{aligned}
&\mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\Bigl(2\bigl\langle Y(s),\, \bar X(s)\bigr\rangle_{V^*\times V} + \|Z(s)\|_{L_2(U,H)}^2 - c\,\|X(s)\|_H^2\Bigr)\, ds\Bigr] \\
&\le \mathbb{E}\biggl[\int_0^T (T-s)\, e^{-cs}\Bigl(2\bigl\langle A(s,\varphi(s)),\, \bar X(s)\bigr\rangle_{V^*\times V}
+ 2\bigl\langle Y(s) - A(s,\varphi(s)),\, \varphi(s)\bigr\rangle_{V^*\times V} \\
&\qquad - \|B(s,\varphi(s))\|_{L_2(U,H)}^2 + 2\bigl\langle Z(s),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)}
- 2c\bigl\langle \bar X(s),\, \varphi(s)\bigr\rangle_H + c\,\|\varphi(s)\|_H^2\Bigr)\, ds\biggr].
\end{aligned}
\]
In view of Lemma 3.5.20, the weak convergence results in Corollary 3.5.14 and Lemma 1.1.34, we can subtract the right-hand side, yielding
\[
\begin{aligned}
\mathbb{E}\biggl[\int_0^T (T-s)\, e^{-cs}\Bigl(&2\bigl\langle Y(s) - A(s,\varphi(s)),\, \bar X(s) - \varphi(s)\bigr\rangle_{V^*\times V} \\
&+ \|Z(s)\|_{L_2(U,H)}^2 + \|B(s,\varphi(s))\|_{L_2(U,H)}^2 - 2\bigl\langle Z(s),\, B(s,\varphi(s))\bigr\rangle_{L_2(U,H)} \\
&- \Bigl(c\,\|X(s)\|_H^2 + c\,\|\varphi(s)\|_H^2 - 2c\bigl\langle \bar X(s),\, \varphi(s)\bigr\rangle_H\Bigr)\Bigr)\, ds\biggr] \le 0.
\end{aligned}
\]
Using that (1.6) holds in every Hilbert space, we can rewrite the previous inequality to obtain
\[
\begin{aligned}
\mathbb{E}\biggl[\int_0^T (T-s)\, e^{-cs}\Bigl(&2\bigl\langle Y(s) - A(s,\varphi(s)),\, \bar X(s) - \varphi(s)\bigr\rangle_{V^*\times V} \\
&+ \|Z(s) - B(s,\varphi(s))\|_{L_2(U,H)}^2 - c\,\|\bar X(s) - \varphi(s)\|_H^2\Bigr)\, ds\biggr] \le 0,
\end{aligned}
\]
so (3.117) holds for every $\varphi \in D$, as desired.

Corollary 3.5.26. Suppose that the assumptions in Setting 3.5.1 hold. Then, the process $Z$ coincides $\lambda\otimes\mathbb{P}$-almost everywhere with the process $B\bigl(\cdot, \bar X(\cdot)\bigr)$.


Proof. This is an extended discussion of the conclusion at the bottom of page 107 in [23]. Suppose, for the sake of contradiction, that $Z$ and $B\bigl(\cdot, \bar X(\cdot)\bigr)$ do not coincide $\lambda\otimes\mathbb{P}$-almost everywhere. Then there exists a set $F$ of non-zero $\lambda\otimes\mathbb{P}$-measure on which $Z$ and $B\bigl(\cdot, \bar X(\cdot)\bigr)$ differ; hence $\bigl\| Z - B\bigl(\cdot, \bar X(\cdot)\bigr) \bigr\|_{L_2(U,H)}^2$ is strictly greater than zero on $F$.

From Remark A, we obtain that $\bar X \in D$. Replacing $\varphi$ by $\bar X$ in (3.117) and using that $X$ and $\bar X$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere by Lemma 3.5.17, we find
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\, \bigl\| Z(s) - B\bigl(s, \bar X(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr] \le 0.
\]

Then, as the integrand is non-negative, we have by Tonelli’s theorem
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\, \bigl\| Z(s) - B\bigl(s, \bar X(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr]
&= \int_{[0,T]\times\Omega} (T-s)\, e^{-cs}\, \bigl\| Z(s,\omega) - B\bigl(s, \bar X(s,\omega), \omega\bigr) \bigr\|_{L_2(U,H)}^2\, d(\lambda\otimes\mathbb{P})(s,\omega) \\
&= \int_F (T-s)\, e^{-cs}\, \bigl\| Z(s,\omega) - B\bigl(s, \bar X(s,\omega), \omega\bigr) \bigr\|_{L_2(U,H)}^2\, d(\lambda\otimes\mathbb{P})(s,\omega),
\end{aligned}
\]
since the integral over the complement of $F$ vanishes, as the processes $Z$ and $B\bigl(\cdot, \bar X(\cdot)\bigr)$ coincide on that set. The set $F \cap ((0,T)\times\Omega)$ is a subset of $F$, but both sets have the same measure, which means that the former set is not a $\lambda\otimes\mathbb{P}$-null set. Moreover, on the interval $(0,T)$, the maps $s \mapsto T-s$ and $s \mapsto e^{-cs}$ are strictly positive. This means that
\[
\begin{aligned}
&\int_F (T-s)\, e^{-cs}\, \bigl\| Z(s,\omega) - B\bigl(s, \bar X(s,\omega), \omega\bigr) \bigr\|_{L_2(U,H)}^2\, d(\lambda\otimes\mathbb{P})(s,\omega) \\
&\ge \int_{F \cap ((0,T)\times\Omega)} (T-s)\, e^{-cs}\, \bigl\| Z(s,\omega) - B\bigl(s, \bar X(s,\omega), \omega\bigr) \bigr\|_{L_2(U,H)}^2\, d(\lambda\otimes\mathbb{P})(s,\omega).
\end{aligned}
\]

The latter integral is non-negative and, importantly, non-zero by construction. Together, we therefore have
\[
\begin{aligned}
0 &< \int_{F \cap ((0,T)\times\Omega)} (T-s)\, e^{-cs}\, \bigl\| Z(s,\omega) - B\bigl(s, \bar X(s,\omega), \omega\bigr) \bigr\|_{L_2(U,H)}^2\, d(\lambda\otimes\mathbb{P})(s,\omega) \\
&\le \mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\, \bigl\| Z(s) - B\bigl(s, \bar X(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr] \le 0,
\end{aligned}
\]
a contradiction. This shows that the processes $Z$ and $B\bigl(\cdot, \bar X(\cdot)\bigr)$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere on $[0,T]\times\Omega$.

Below, we will derive an inequality, (3.119) in Lemma 3.5.27, based on inequality (3.117) in Lemma 3.5.25, that will allow us to show that $Y$ coincides with the process $A\bigl(\cdot, \bar X(\cdot)\bigr)$; see Corollary 3.5.28.

Lemma 3.5.27. If the assumptions in Setting 3.5.1 hold, we have
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Xi(s)\cdot v\bigr\rangle_{V^*\times V}\, ds\Bigr] \le 0
\tag{3.119}
\]
for all $\Xi \in L^\infty([0,T]\times\Omega;\mathbb{R})$ and $v \in V$.

Proof. This argument is based on the paragraph after (4.57) in [23]. Let $\Xi \in L^\infty([0,T]\times\Omega;\mathbb{R})$ and $v \in V$ be arbitrary. Then, the map $(s,\omega) \mapsto \Xi(s,\omega)\cdot v$ belongs to $D$, as can be readily verified, and so, for every $n \in \mathbb{N}$, the map $(s,\omega) \mapsto \bar X(s,\omega) - \tfrac{1}{n}\,\Xi(s,\omega)\cdot v$ belongs to $D$ as well. Plugging this into (3.117) for $\varphi$, replacing $Z$ by $B\bigl(\cdot, \bar X(\cdot)\bigr)$,


observing that $X$ and $\bar X$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere and multiplying both sides by an arbitrary $n \in \mathbb{N}$, we find
\[
\begin{aligned}
\mathbb{E}\biggl[n \int_0^T (T-s)\, e^{-cs}\Bigl(&2\bigl\langle Y(s) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \tfrac{1}{n}\Xi(s)v\bigr\rangle_{V^*\times V} \\
&+ \bigl\| B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{L_2(U,H)}^2
- c\,\bigl\| \tfrac{1}{n}\Xi(s)v \bigr\|_H^2\Bigr)\, ds\biggr] \le 0\cdot n = 0.
\end{aligned}
\tag{3.120}
\]
The left-hand side of the previous inequality can be split into three parts, namely
\[
\begin{aligned}
&\mathbb{E}\Bigl[n \int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \tfrac{1}{n}\Xi(s)v\bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&+ \mathbb{E}\Bigl[n \int_0^T (T-s)\, e^{-cs}\, \bigl\| B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr] \\
&- c\,\mathbb{E}\Bigl[n \int_0^T (T-s)\, e^{-cs}\, \bigl\| \tfrac{1}{n}\Xi(s)v \bigr\|_H^2\, ds\Bigr].
\end{aligned}
\tag{3.121}
\]

(3.121)

For notational covenience, we will tackle these terms one-by-one, starting with the last one.Taking the factor 1

n out of the squared norm, we find that the last expectation equals

− cnE[∫ T

0(T − s) · e−cs ‖Ξ(s) · v‖2H ds

].

Then, we have∣∣∣∣− cn E[∫ T

0(T − s) · e−cs ‖Ξ(s) · v‖2H ds

]∣∣∣∣ ≤ T 2|c|e|c|T

n· ‖Ξ‖L∞([0,T ]×Ω;R) · ‖v‖2H → 0

as $n \to \infty$.

To evaluate the first term, note that it equals
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \Xi(s)\cdot v\bigr\rangle_{V^*\times V}\, ds\Bigr]
\]
when we move $n$ into the right-hand slot of the dual pairing. We will show that there exists a function that dominates the integrand for every $n \in \mathbb{N}$, so that we can apply Lebesgue’s Dominated Convergence Theorem. We have the following (in)equalities on $\Omega$:
\[
\begin{aligned}
&\Bigl| (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \Xi(s)\cdot v\bigr\rangle_{V^*\times V} \Bigr| \\
&\le 2Te^{|c|T}\cdot \bigl\| Y(s) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{V^*}\cdot \|\Xi(s)\cdot v\|_V \\
&\le 2Te^{|c|T}\cdot \|\Xi\|_{L^\infty([0,T]\times\Omega;\mathbb{R})}\cdot \|v\|_V\cdot \Bigl(\|Y(s)\|_{V^*} + \bigl\| A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{V^*}\Bigr).
\end{aligned}
\]

Momentarily ignoring the constants, the first term is integrable on $[0,T]\times\Omega$, since $Y$ belongs to $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*) \subset L^1([0,T]\times\Omega;V^*)$. For the second term, we invoke (3.12) and Minkowski’s inequality to obtain

\[
\begin{aligned}
\sup_{n\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
&\le \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
+ c_3 \sup_{n\in\mathbb{N}} \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| \bar X(s) - \tfrac{1}{n}\Xi(s)v \bigr\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \\
&\le \Bigl(\mathbb{E}\Bigl[\int_0^T |g(s)|^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}}
+ c_3 \Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| \bar X(s) \bigr\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} \\
&\quad + c_3\, \|\Xi\|_{L^\infty([0,T]\times\Omega;\mathbb{R})} \cdot \Bigl(\mathbb{E}\Bigl[\int_0^T \|v\|_V^\alpha\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} < \infty.
\end{aligned}
\tag{3.122}
\]

In particular, we find by Lebesgue’s Dominated Convergence Theorem and Condition (H1) that the first term converges to
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Xi(s)\cdot v\bigr\rangle_{V^*\times V}\, ds\Bigr]
\]
as $n \to \infty$.

Finally, we will show that the second term vanishes as $n \to \infty$ by “adding zero” in order to invoke Condition (H2). Doing so, we find that the second expectation is equal to
\[
\begin{aligned}
\mathbb{E}\biggl[n \int_0^T (T-s)\, e^{-cs}\Bigl(&-2\bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \tfrac{1}{n}\Xi(s)v\bigr\rangle_{V^*\times V} \\
&+ 2\bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \tfrac{1}{n}\Xi(s)v\bigr\rangle_{V^*\times V} \\
&+ \bigl\| B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) \bigr\|_{L_2(U,H)}^2\Bigr)\, ds\biggr],
\end{aligned}
\]
so from Condition (H2), applied with the difference $\bar X(s) - \bigl(\bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr) = \tfrac{1}{n}\Xi(s)v$, we find that the following expression is an upper bound for the second (non-negative) expectation:
\[
\begin{aligned}
0 &\le \mathbb{E}\Bigl[n \int_0^T (T-s)\, e^{-cs}\Bigl(-2\bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \tfrac{1}{n}\Xi(s)v\bigr\rangle_{V^*\times V} + c\,\bigl\| \tfrac{1}{n}\Xi(s)v \bigr\|_H^2\Bigr)\, ds\Bigr] \\
&= -2\,\mathbb{E}\Bigl[\int_0^T (T-s)\, e^{-cs}\, \bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar X(s) - \tfrac{1}{n}\Xi(s)v\bigr),\, \Xi(s)\cdot v\bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&\quad + c\,\mathbb{E}\Bigl[n \int_0^T (T-s)\, e^{-cs}\, \bigl\| \tfrac{1}{n}\Xi(s)v \bigr\|_H^2\, ds\Bigr].
\end{aligned}
\]
Luckily, the last term on the right-hand side goes to zero as $n \to \infty$, as we already saw when discussing the third term in (3.121). For the other term, we can use a boundedness result similar to (3.122) to invoke Lebesgue’s Dominated Convergence Theorem and use Condition (H1) to conclude that this term vanishes as well.

We will use the inequality (3.119) in the next corollary.

Corollary 3.5.28. Under the assumptions of Setting 3.5.1, the process $Y$ coincides $\lambda\otimes\mathbb{P}$-almost everywhere with the process $A\bigl(\cdot, \bar X(\cdot)\bigr)$.

Proof. This is an extended discussion of the second-to-last line of the proof of Theorem 4.2.4 in [23].

For the sake of contradiction, assume that $Y$ and $A\bigl(\cdot, \bar X(\cdot)\bigr)$ belong to different equivalence classes. Then, the (equivalence class of the) process $Y - A\bigl(\cdot, \bar X(\cdot)\bigr)$ is not the (equivalence class of the) zero process. Using Proposition 1.2.12 and the fact that the dual of a normed vector space separates points, there exists a process $\Phi \in L^\alpha([0,T]\times\Omega;V)$ such that
\[
\mathbb{E}\Bigl[\int_0^T \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi(s)\bigr\rangle_{V^*\times V}\, ds\Bigr] \ne 0.
\]
In particular, it follows that
\[
0 < \mathbb{E}\Bigl[\int_0^T \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr].
\]

Define
\[
F := \Bigl\{ (s,\omega) \in [0,T]\times\Omega : \Bigl| \bigl\langle Y(s,\omega) - A\bigl(s, \bar X(s,\omega), \omega\bigr),\, \Phi(s,\omega)\bigr\rangle_{V^*\times V} \Bigr| > 0 \Bigr\},
\]
which is a set of non-zero $\lambda\otimes\mathbb{P}$-measure; if it were a null set, this would contradict the previous strict inequality.

Then, with arguments similar to those in the proof of Corollary 3.5.26, it follows that the (strict) inequalities
\[
\begin{aligned}
0 &< \int_{F \cap ((0,T)\times\Omega)} (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl\langle Y(s,\omega) - A\bigl(s, \bar X(s,\omega), \omega\bigr),\, \Phi(s,\omega)\bigr\rangle_{V^*\times V} \Bigr|\, d(\lambda\otimes\mathbb{P})(s,\omega) \\
&\le \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr]
\end{aligned}
\]
hold.

To derive the contradiction, we will first show that
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \xi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr] = 0
\tag{3.123}
\]
for every $V$-valued simple function $\xi$. Take an arbitrary $M \in \mathbb{N}$, let $A_1, \dots, A_M \in \mathcal{B}([0,T])\otimes\mathcal{F}$ be disjoint sets and let $v_1, \dots, v_M$ be distinct elements of $V$. Define, for $m \le M$, the processes $\Xi^{(m)}$ by
\[
\Xi^{(m)}(s,\omega) := \operatorname{sign}\Bigl\langle Y(s,\omega) - A\bigl(s, \bar X(s,\omega), \omega\bigr),\, \sum_{j\le M} \mathbf{1}_{A_j}(s,\omega)\cdot v_j \Bigr\rangle_{V^*\times V} \cdot \mathbf{1}_{A_m}(s,\omega),
\]
which is uniformly bounded (and, not to forget, measurable), and let $\Xi$ be given by
\[
\Xi(s,\omega) := \operatorname{sign}\Bigl\langle Y(s,\omega) - A\bigl(s, \bar X(s,\omega), \omega\bigr),\, \sum_{j\le M} \mathbf{1}_{A_j}(s,\omega)\cdot v_j \Bigr\rangle_{V^*\times V},
\]

which is again uniformly bounded and measurable. Then, we see that for every $m \le M$, inequality (3.119) gives
\[
\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Xi^{(m)}(s)\cdot v_m \bigr\rangle_{V^*\times V}\, ds\Bigr] \le 0.
\]


In particular, summing over $m$ and using the linearity of the integrals and of the duality pairing yields
\[
\begin{aligned}
0 &\ge \sum_{m\le M} \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Xi^{(m)}(s)\cdot v_m \bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\, \Bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Xi(s)\cdot \sum_{m\le M} \mathbf{1}_{A_m}(s)\cdot v_m \Bigr\rangle_{V^*\times V}\, ds\Bigr] \\
&= \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \Bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \sum_{m\le M} \mathbf{1}_{A_m}(s)\cdot v_m \Bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr] \ge 0,
\end{aligned}
\]
where in the last step we took $\Xi$ out of the right-hand slot of the dual pairing and used its definition. This shows that (3.123) holds for simple functions.

To proceed, we can approximate $\Phi \in L^\alpha([0,T]\times\Omega;V)$ by a sequence of simple functions $\bigl\{\Phi^{(n)} : n \ge 1\bigr\}$. For each of these simple functions, we know that (3.123) holds. We will now show that
\[
\begin{aligned}
&\mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi^{(n)}(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr] \\
&\to \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr],
\end{aligned}
\]

which will result in a contradiction, as the latter expression is strictly positive. Taking the absolute value of the difference, we have, albeit in slightly abbreviated notation,
\[
\begin{aligned}
&\biggl| \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl(\bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi^{(n)}\bigr\rangle_{V^*\times V} \bigr| - \bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi\bigr\rangle_{V^*\times V} \bigr|\Bigr)\, ds\Bigr] \biggr| \\
&\le \mathbb{E}\Bigl[\int_0^T (T-s)\cdot 2e^{-cs}\cdot \Bigl| \bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi^{(n)}\bigr\rangle_{V^*\times V} \bigr| - \bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi\bigr\rangle_{V^*\times V} \bigr| \Bigr|\, ds\Bigr] \\
&\le 2Te^{|c|T}\, \mathbb{E}\Bigl[\int_0^T \Bigl| \bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi^{(n)}\bigr\rangle_{V^*\times V} \bigr| - \bigl| \bigl\langle Y - A\bigl(\cdot, \bar X(\cdot)\bigr),\, \Phi\bigr\rangle_{V^*\times V} \bigr| \Bigr|\, ds\Bigr].
\end{aligned}
\]

Ignoring the positive constant in front of the expectation, applying the reverse triangle inequality to the integrand and using the linearity of the duality pairing, we find
\[
\begin{aligned}
&\mathbb{E}\Bigl[\int_0^T \Bigl| \bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi^{(n)}(s)\bigr\rangle_{V^*\times V} \bigr| - \bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi(s)\bigr\rangle_{V^*\times V} \bigr| \Bigr|\, ds\Bigr] \\
&\le \mathbb{E}\Bigl[\int_0^T \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi^{(n)}(s) - \Phi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr].
\end{aligned}
\]

Finally, observe that from Hölder’s inequality, we find
\[
\begin{aligned}
\mathbb{E}\Bigl[\int_0^T \Bigl| \bigl\langle Y(s) - A\bigl(s, \bar X(s)\bigr),\, \Phi^{(n)}(s) - \Phi(s)\bigr\rangle_{V^*\times V} \Bigr|\, ds\Bigr]
&\le \mathbb{E}\Bigl[\int_0^T \bigl\| Y(s) - A\bigl(s, \bar X(s)\bigr) \bigr\|_{V^*}\, \bigl\| \Phi^{(n)}(s) - \Phi(s) \bigr\|_V\, ds\Bigr] \\
&\le \bigl\| Y - A\bigl(\cdot, \bar X(\cdot)\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)}\, \bigl\| \Phi^{(n)} - \Phi \bigr\|_{L^\alpha([0,T]\times\Omega;V)} \\
&\le \Bigl(\|Y\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)} + \bigl\| A\bigl(\cdot, \bar X(\cdot)\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)}\Bigr)\cdot \bigl\| \Phi^{(n)} - \Phi \bigr\|_{L^\alpha([0,T]\times\Omega;V)}.
\end{aligned}
\]


First of all, note that $\|Y\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)} < \infty$, as $Y \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$. Secondly, it holds that $\bigl\| A\bigl(\cdot, \bar X(\cdot)\bigr) \bigr\|_{L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)} < \infty$ by Lemma 3.1.5, as $\bar X \in D$. Finally, the last factor tends to zero as $n \to \infty$. This completes the contradiction, so we can conclude that $Y$ and $A\bigl(\cdot, \bar X(\cdot)\bigr)$ belong to the same equivalence class in $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$ and therefore coincide $\lambda\otimes\mathbb{P}$-almost everywhere on $[0,T]\times\Omega$.

Together, these results allow us to prove the existence of the solution.

Conclusion 3.5.29 (Proof of existence). From Lemma 3.5.17, we obtain that the process $X$, defined in (3.92) on page 92, coincides $\lambda\otimes\mathbb{P}$-almost everywhere with the progressively measurable $V$-valued process $\bar X$ that belongs to $D = L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$ by Corollary 3.5.14. Hence, the process $X$ satisfies integrability condition (3.28) in Theorem 3.3.17.

Note that $Y \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$ and $Z \in L^2([0,T]\times\Omega;L_2(U,H))$ by Corollary 3.5.14; moreover, both are progressively measurable. This means that $X$ satisfies the conditions of Theorem 3.3.17. Hence, we see that (3.33) in Theorem 3.3.17 holds and consequently, (3.16) in Theorem 3.2.3 is satisfied. In addition, the process $X$ is a continuous $H$-valued, $\mathbb{F}$-adapted process.

By Corollary 3.5.28, the processes $Y$ and $A\bigl(\cdot, \bar X(\cdot)\bigr)$ coincide $\lambda\otimes\mathbb{P}$-almost everywhere. Let $t \in [0,T]$ be arbitrary. Then, by the linearity of Bochner integrals and inequality (1.17) in Proposition 1.2.4 (which we may invoke since $\alpha > 1$ and the underlying measure space is finite), we have
\[
\begin{aligned}
\mathbb{E}\Bigl[\Bigl\| \int_0^t Y(s)\, ds - \int_0^t A\bigl(s, \bar X(s)\bigr)\, ds \Bigr\|_{V^*}\Bigr]
&= \mathbb{E}\Bigl[\Bigl\| \int_0^t \Bigl(Y(s) - A\bigl(s, \bar X(s)\bigr)\Bigr)\, ds \Bigr\|_{V^*}\Bigr] \\
&\le \mathbb{E}\Bigl[\int_0^t \bigl\| Y(s) - A\bigl(s, \bar X(s)\bigr) \bigr\|_{V^*}\, ds\Bigr]
\le \mathbb{E}\Bigl[\int_0^T \bigl\| Y(s) - A\bigl(s, \bar X(s)\bigr) \bigr\|_{V^*}\, ds\Bigr] = 0.
\end{aligned}
\]

Hence, for every $t \in [0,T]$, the integrals $\int_0^t Y(s)\, ds$ and $\int_0^t A\bigl(s, \bar X(s)\bigr)\, ds$ are equal $\mathbb{P}$-almost surely.

Regarding $Z$, note that it is progressively measurable and belongs to $L^2([0,T]\times\Omega;L_2(U,H))$ by Corollary 3.5.14. The same corollary implies that the map $\bar X$ is progressively measurable and belongs to $D = L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$. Then, we can invoke Corollary 3.1.9 and Corollary 3.1.6 and see that $B\bigl(\cdot, \bar X(\cdot)\bigr)$ is progressively measurable and belongs to $L^2([0,T]\times\Omega;L_2(U,H))$ as well. Hence, the difference process $Z - B\bigl(\cdot, \bar X(\cdot)\bigr)$ is progressively measurable and belongs to $L^2([0,T]\times\Omega;L_2(U,H))$. Then, for every $t \in [0,T]$, we have by the linearity of stochastic integrals, the maximal inequality (Corollary 2.1.15), the Itô isometry (Proposition 2.3.3) and Corollary 3.5.26,

measurable and belongs to L2([0, T ] × Ω;L2(U,H)). Then, for every t ∈ [0, T ], we have by thelinearity of the stochastic integrals, the maximal inequality (Corollary 2.1.15), the Ito isometry(Proposition 2.3.3) and Corollary 3.5.26,

\[
\begin{aligned}
\mathbb{E}\biggl[\Bigl\| \int_0^t Z(s)\, dW(s) - \int_0^t B\bigl(s, \bar X(s)\bigr)\, dW(s) \Bigr\|_H^2\biggr]
&= \mathbb{E}\biggl[\Bigl\| \int_0^t \Bigl(Z(s) - B\bigl(s, \bar X(s)\bigr)\Bigr)\, dW(s) \Bigr\|_H^2\biggr] \\
&\le \mathbb{E}\biggl[\sup_{t\in[0,T]} \Bigl\| \int_0^t \Bigl(Z(s) - B\bigl(s, \bar X(s)\bigr)\Bigr)\, dW(s) \Bigr\|_H^2\biggr] \\
&\le 4\,\mathbb{E}\biggl[\Bigl\| \int_0^T \Bigl(Z(s) - B\bigl(s, \bar X(s)\bigr)\Bigr)\, dW(s) \Bigr\|_H^2\biggr] \\
&= 4\,\mathbb{E}\Bigl[\int_0^T \bigl\| Z(s) - B\bigl(s, \bar X(s)\bigr) \bigr\|_{L_2(U,H)}^2\, ds\Bigr] = 0,
\end{aligned}
\]
proving that $\int_0^t Z(s)\, dW(s)$ and $\int_0^t B\bigl(s, \bar X(s)\bigr)\, dW(s)$ are equal $\mathbb{P}$-almost surely.

Looking back at our definition of the process X in (3.92) on page 92, this establishes (3.15)for every t ∈ [0, T ]. We can conclude that X is indeed a solution in the sense of Definition 3.2.1to SDE (3.14) that satisfies (3.16) in Theorem 3.2.3, as desired.

The uniqueness follows from the next lemma. The proof is based a slightly modified versionof the proof of Proposition 4.2.10 in [23].


Lemma 3.5.30. The solution of SDE (3.14) in the sense of Definition 3.2.1 is unique up to $\mathbb{P}$-indistinguishability.

Proof. Compare Proposition 4.2.10 in [23] and Exercise 4.11 in [15]. Suppose that $X$ and $\tilde X$ are both solutions to the SDE (3.14) in the sense of Definition 3.2.1. Then, since $X(0)$ and $\tilde X(0)$ coincide $\mathbb{P}$-almost surely with the initial condition $X_0$, we find
\[
X(t) - \tilde X(t)
= \int_0^t \Bigl(A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar{\tilde X}(s)\bigr)\Bigr)\, ds
+ \int_0^t \Bigl(B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar{\tilde X}(s)\bigr)\Bigr)\, dW(s)
\]

for every $t \in [0,T]$.

Then, since $X$, respectively $\tilde X$, is a solution in the sense of Definition 3.2.1, it coincides $\lambda\otimes\mathbb{P}$-almost everywhere on $[0,T]\times\Omega$ with a progressively measurable $V$-valued process $\bar X$, respectively $\bar{\tilde X}$, that belongs to $D = L^\alpha([0,T]\times\Omega;V) \cap L^2([0,T]\times\Omega;H)$. Hence, the process $\bar X - \bar{\tilde X}$ is progressively measurable, belongs to $D$ and coincides $\lambda\otimes\mathbb{P}$-almost everywhere with $X - \tilde X$.

We obtain from Lemma 3.1.9 that the process $A\bigl(\cdot, \bar X(\cdot)\bigr)$ is progressively measurable. Moreover, Corollary 3.1.5 yields
\[
\Bigl(\mathbb{E}\Bigl[\int_0^T \bigl\| A\bigl(s, \bar X(s)\bigr) \bigr\|_{V^*}^{\frac{\alpha}{\alpha-1}}\, ds\Bigr]\Bigr)^{\frac{\alpha-1}{\alpha}} < \infty.
\]
This means that $A\bigl(\cdot, \bar X(\cdot)\bigr) \in L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$ by Proposition 1.17. Analogously, the same statements are true for $\bar{\tilde X}$, implying that $A\bigl(\cdot, \bar X(\cdot)\bigr) - A\bigl(\cdot, \bar{\tilde X}(\cdot)\bigr)$ is a progressively measurable process in $L^{\frac{\alpha}{\alpha-1}}([0,T]\times\Omega;V^*)$.

We can copy this argument and invoke Corollary 3.1.6 instead of Corollary 3.1.5 to conclude that the process $B\bigl(\cdot, \bar X(\cdot)\bigr) - B\bigl(\cdot, \bar{\tilde X}(\cdot)\bigr)$ is progressively measurable and belongs to $L^2([0,T]\times\Omega;L_2(U,H))$. With these observations, we see that the conditions in Setting 3.3.13 are met. Hence, we find by (3.72) in Corollary 3.3.47 that
\[
\begin{aligned}
\mathbb{E}\bigl[\|X(t) - \tilde X(t)\|_H^2\bigr]
= \int_0^t \mathbb{E}\Bigl[&2\bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar{\tilde X}(s)\bigr),\, \bar X(s) - \bar{\tilde X}(s)\bigr\rangle_{V^*\times V} \\
&+ \bigl\| B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar{\tilde X}(s)\bigr) \bigr\|_{L_2(U,H)}^2\Bigr]\, ds.
\end{aligned}
\tag{3.124}
\]

One can apply integration by parts (Proposition 1.4.9) to the maps $s \mapsto e^{-cs}$ and $s \mapsto \mathbb{E}\bigl[\|X(s) - \tilde X(s)\|_H^2\bigr]$ (see Corollary 3.3.48) to derive the following identity for every $t \in [0,T]$:
\[
\begin{aligned}
\mathbb{E}\bigl[e^{-ct}\|X(t) - \tilde X(t)\|_H^2\bigr]
= \int_0^t e^{-cs}\Bigl\{&\mathbb{E}\Bigl[2\bigl\langle A\bigl(s, \bar X(s)\bigr) - A\bigl(s, \bar{\tilde X}(s)\bigr),\, \bar X(s) - \bar{\tilde X}(s)\bigr\rangle_{V^*\times V} \\
&+ \bigl\| B\bigl(s, \bar X(s)\bigr) - B\bigl(s, \bar{\tilde X}(s)\bigr) \bigr\|_{L_2(U,H)}^2\Bigr]
- c\,\mathbb{E}\bigl[\|X(s) - \tilde X(s)\|_H^2\bigr]\Bigr\}\, ds.
\end{aligned}
\]
When we look back at Condition (H2) in Condition 3.1.3, we see that the expression between the curly brackets is non-positive. Then, as the map $[0,t] \to \mathbb{R}\colon s \mapsto e^{-cs}$ is non-negative, we see that the integral on the right-hand side is non-positive. Multiplying both sides by $e^{ct}$ yields
\[
\mathbb{E}\bigl[\|X(t) - \tilde X(t)\|_H^2\bigr] \le 0.
\]


We conclude that $X(t)$ coincides $\mathbb{P}$-almost surely with $\tilde X(t)$ for every $t \in [0,T]$. Since the processes $X$ and $\tilde X$ are continuous $H$-valued processes, the assertion follows by Proposition 2.1.3. This shows the uniqueness.

This completes the proof of Theorem 3.2.3.
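The monotonicity argument driving the uniqueness proof can be made tangible in a toy deterministic setting ($B \equiv 0$, scalar state space). The following snippet is only an ad hoc illustration, not part of the proof: for the weakly monotone drift $a(x) = -x|x|^{p-2} - x$, two explicit-Euler trajectories started at different points never move apart, which mirrors the Gronwall-type estimate above.

```python
# Toy illustration of monotonicity => contraction (all parameters are ad hoc).
p, dt, steps = 4, 1e-3, 5000

def a(x):
    # weakly monotone drift: (a(u) - a(v))(u - v) <= -|u - v|^2
    return -x * abs(x) ** (p - 2) - x

u, v = 2.0, -1.5  # two different "initial conditions"
gaps = [abs(u - v)]
for _ in range(steps):
    u += dt * a(u)
    v += dt * a(v)
    gaps.append(abs(u - v))

# the distance between the two trajectories is (numerically) non-increasing
assert all(g2 <= g1 + 1e-12 for g1, g2 in zip(gaps, gaps[1:]))
print(f"initial gap {gaps[0]:.3f}, final gap {gaps[-1]:.3e}")
```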

3.6 Examples

We end this chapter with a few examples, due to [23]. Below, we will refer to the conditions imposed on the coefficients of the SDE as discussed in Condition 3.1.3 (page 46).

We will only give a Gelfand triple and a map $A$. We do so because of the next three observations, which are based on Exercise 4.1.2 in [23]. In the first observation, an integrability condition on $B(\cdot, 0_V, \cdot)$ is added, making the proof evident.

Observation 3.6.1. Suppose that the map $A$ satisfies Conditions (H2) and (H3) with $B \equiv 0$. Assume that the map $B$ is Lipschitz continuous in $v \in V$ with respect to the $H$-norm, with a Lipschitz constant $\mathrm{Lip}_B$ that is independent of $t \in [0,T]$ and $\omega \in \Omega$, and that $B(\cdot, 0_V, \cdot)$ belongs to $L^1([0,T]\times\Omega;L_2(U,H))$. Then, the maps $A$ and $B$ together satisfy Conditions (H2) and (H3).
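As a concrete scalar illustration of the combination rule in this observation (the choices $A(x) = -x^3$ and $B(x) = \sin(x)$ are hypothetical and not taken from [23]): $A$ alone satisfies the monotonicity part of (H2) with constant $c = 0$, $B$ is 1-Lipschitz, and the pair then satisfies (H2) with constant $c + \mathrm{Lip}_B^2 = 1$. The snippet checks this pointwise inequality on random samples.

```python
import math
import random

random.seed(1)
A = lambda x: -x ** 3       # monotone: 2(A(u)-A(v))(u-v) <= 0
B = lambda x: math.sin(x)   # 1-Lipschitz
c, lip_B = 0.0, 1.0

for _ in range(1000):
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = 2 * (A(u) - A(v)) * (u - v) + (B(u) - B(v)) ** 2
    # combined weak monotonicity with constant c + Lip_B^2
    assert lhs <= (c + lip_B ** 2) * (u - v) ** 2 + 1e-9
print("combined (H2) bound holds on all samples")
```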

Observation 3.6.2. Suppose that the map $A'$ satisfies Conditions (H2) and (H3) with $B \equiv 0$, and suppose that the maps $A$ and $B$ in conjunction satisfy Conditions (H2) and (H3). Then, the maps $A + A'$ and $B$ satisfy Conditions (H2) and (H3) as well.

Observation 3.6.3. Suppose that the maps A and A′ both satisfy Conditions (H1) and (H4). Then so does their sum.

All examples we discuss here can be found in Section 4.1 in [23]. Moreover, in discussing the examples, we will not verify the conditions on the Gelfand triple and the map A, but instead refer the reader to the corresponding proofs in [23].

Note that if a map A is independent of t and ω and is linear in V, it falls into the setting we have discussed earlier, as we will see in the next example. In the same example, we will also use $H^{1,2}_0(O)$, the Sobolev space of order 1 in L²(O) with trace zero at the boundary; we would like to note that this notion of trace is not to be confused with the trace we saw with bounded linear operators. The definition is omitted for brevity and can be found in [23] on pages 77 and 78.

Example 3.6.4. Let O ⊂ ℝⁿ be open. Let $H^{1,2}_0(O)$ be the Sobolev space of order 1 in L²(O) with trace zero at the boundary. Then we have the Gelfand triple
\[
H^{1,2}_0(O) \subset L^2(O) \subset \big( H^{1,2}_0(O) \big)^*.
\]
Let A be the Laplace operator with Dirichlet boundary conditions in L²(O). All conditions on the Gelfand triple and the map A are satisfied. When we take B ≡ 0, the resulting deterministic differential equation is just the classical heat equation.

Proof. See pages 77–81 in [23].

Example 3.6.5. Let O ⊂ ℝⁿ be open and bounded. Let p ∈ [2, ∞). Consider the Gelfand triple
\[
L^p(O) \subset L^2(O) \subset L^{\frac{p}{p-1}}(O)
\]
and consider the map
\[
A : L^p(O) \to L^{\frac{p}{p-1}}(O) : \psi \mapsto -\psi\, |\psi|^{p-2}.
\]
This Gelfand triple and this map A satisfy the imposed conditions.

Proof. Example 4.1.5 in [23].
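As an informal illustration (our own, not from [23]): the weak monotonicity (H2) of this map A with B ≡ 0 rests on the fact that the scalar map x ↦ x|x|^{p−2} is non-decreasing, so that the pairing of A(ψ) − A(φ) with ψ − φ is non-positive. A minimal numerical sanity check of the pointwise inequality:

```python
import random

def f(x, p):
    """Scalar version of the map psi -> psi * |psi|**(p - 2)."""
    return x * abs(x) ** (p - 2)

def check_monotone(p, trials=10_000, seed=0):
    """Check that -(f(u) - f(v)) * (u - v) <= 0 for random pairs (u, v)."""
    rng = random.Random(seed)
    for _ in range(trials):
        u, v = rng.uniform(-10, 10), rng.uniform(-10, 10)
        if -(f(u, p) - f(v, p)) * (u - v) > 1e-12:
            return False
    return True

# the check passes for every exponent p >= 2
assert all(check_monotone(p) for p in (2, 3, 4, 6.5))
```

Of course, this only probes the scalar inequality behind (H2); the full verification is carried out in [23].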


Example 3.6.6. Let O ⊂ ℝⁿ be open and bounded. Let p ∈ [2, ∞). Let $H^{1,2}_0(O)$ be as in Example 3.6.4. We have the Gelfand triple
\[
L^p(O) \subset \big( H^{1,2}_0(O) \big)^* \subset \big( L^p(O) \big)^*.
\]
Define $\Theta : L^p(O) \to L^{\frac{p}{p-1}}(O) : \psi \mapsto \psi\, |\psi|^{p-2}$ (compare the map A in Example 3.6.5) and let ∆ be the Laplace operator. Define the porous medium operator
\[
A(\psi) := \Delta \Theta(\psi).
\]
The imposed conditions are met. The map A is the non-linear operator appearing in the classical porous medium equation. The stochastic porous medium equation
\[
\mathrm{d}X(t) = \Delta\big( X(t)\, |X(t)|^{p-2} \big)\, \mathrm{d}t + B\big(t, X(t)\big)\, \mathrm{d}W(t)
\]
fits into the setting of Theorem 3.2.3. The solution of this equation describes the time evolution of the density X(t) of a substance in a porous medium.

Proof. See Example 4.1.11, Lemma 4.1.12, Lemma 4.1.13 and Remark 4.1.14 in [23]. Moreover, compare Example 1.2.7 therein.
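As a rough illustration (ours, not from [23]), the deterministic porous medium equation, i.e. the case B ≡ 0, can be discretized with an explicit finite-difference scheme; the zero Dirichlet boundary data and the simple time stepping are simplifying assumptions. While the support of the density stays away from the boundary, the total mass is conserved and the density stays non-negative:

```python
def porous_medium_step(u, h, dt, p):
    """One explicit Euler step of du/dt = (d^2/dx^2)(u * |u|**(p-2))
    on a grid over (0, 1) with zero Dirichlet boundary values."""
    theta = [x * abs(x) ** (p - 2) for x in u]
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + dt * (theta[i - 1] - 2 * theta[i] + theta[i + 1]) / (h * h)
    return new

n, p = 100, 4
h = 1.0 / n
dt = h * h / 10.0          # small enough for stability of the explicit scheme
# initial density: a bump supported in [0.3, 0.7]
u = [0.25 * max(0.0, 1.0 - ((i * h - 0.5) / 0.2) ** 2) for i in range(n + 1)]
mass0 = sum(u) * h
for _ in range(2000):
    u = porous_medium_step(u, h, dt, p)
assert abs(sum(u) * h - mass0) < 1e-6   # mass conserved (support still interior)
assert min(u) >= -1e-12 and max(u) < 0.25  # density non-negative, peak decays
```

The slow spreading of the compactly supported bump is the characteristic finite propagation speed of the porous medium equation, in contrast with the heat equation of Example 3.6.4.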

Another noteworthy example is the stochastic Cahn-Hilliard equation, which is explained in full detail in [11] and [7] and describes the phase separation of a binary alloy after quenching.

For more examples, we refer the reader to Section 1.2 of [23].


Popular summary

This summary is largely based on [35].
You may be familiar with the heat equation: it describes how heat is distributed over a certain region as time progresses. If we take the open interval (0, 1) ⊂ ℝ, the heat equation reads
\[
\frac{\partial u}{\partial t}(t, x) = \Delta u(t, x), \qquad u(0, x) = f(x), \tag{3.125}
\]
where $\Delta = \frac{\partial^2}{\partial x^2}$ denotes the Laplace operator and f is an initial condition describing the temperature profile on (0, 1) at the starting time.

The solution u of the partial differential equation (3.125) is a function of position x and time t. Instead of viewing u as a real-valued function of two variables t and x, we can "cut u into slices": one for each time t. Let us assume that each slice is integrable over the open unit interval.

Each slice is a real-valued function depending only on x, and thus gives the temperature distribution at time t: for every t there is an integrable function reflecting the temperature distribution on the open unit interval. With these slices we can define the function $U : [0, \infty) \to L^1((0,1))$; this function U is determined by the requirement that U(t)(x) = u(t, x) for all t ≥ 0 and x ∈ (0, 1). We can now rewrite (3.125) as
\[
\mathrm{d}U(t) = \Delta U(t)\, \mathrm{d}t, \qquad U(0) = f, \tag{3.126}
\]
which is an ordinary differential equation with solutions in $L^1((0,1))$. We can picture a solution as a "path" through $L^1((0,1))$ that starts at f and describes, at each time, the temperature profile on (0, 1).

Now suppose, however, that the temperature on the interval (0, 1) is affected by a noise source, which, as it were, hits the region (0, 1) with random functions on (0, 1). We add this noise source to the ordinary differential equation (3.126) above, namely the "dW(t)" term, and obtain a stochastic differential equation
\[
\mathrm{d}U(t) = \Delta U(t)\, \mathrm{d}t + \mathrm{d}W(t), \qquad U(0) = f. \tag{3.127}
\]
A solution to this equation is desirable, but does one exist? In this thesis we show that equations such as (3.127) above do have a solution and that these solutions are unique.
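The story above can also be imitated on a computer. The sketch below (our own illustration; the explicit scheme and in particular the noise scaling √(dt/h) for discretized space-time white noise are simplifying modelling choices) approximates equation (3.127) on a grid over (0, 1) with zero boundary values:

```python
import math
import random

def simulate_heat_with_noise(n=50, t_end=0.1, noise=1.0, seed=1):
    """Crude explicit Euler-Maruyama sketch of dU(t) = Laplace U(t) dt + dW(t)
    on (0, 1) with zero Dirichlet boundary values and U(0) = sin(pi x)."""
    rng = random.Random(seed)
    h = 1.0 / n
    dt = 0.25 * h * h                    # explicit scheme needs dt <= h^2 / 2
    u = [math.sin(math.pi * i * h) for i in range(n + 1)]
    for _ in range(int(t_end / dt)):
        lap = [0.0] * (n + 1)
        for i in range(1, n):
            lap[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h)
        for i in range(1, n):
            # sqrt(dt / h): a common scaling for discretized space-time white noise
            u[i] += dt * lap[i] + noise * math.sqrt(dt / h) * rng.gauss(0.0, 1.0)
    return u

# without noise this is just the heat equation: the initial profile dissipates
u = simulate_heat_with_noise(noise=0.0)
assert max(abs(x) for x in u) < 0.5
```

With `noise=1.0` the simulated temperature profile keeps being "hit" by random functions, which is exactly the phenomenon the stochastic differential equation (3.127) models.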


Bibliography

[1] Create lemmas. Blog post on professor Terence Tao's blog What's new; link: https://terrytao.wordpress.com/advice-on-writing-papers/create-lemmas/. Accessed: June 26, 2017.

[2] C. D. Aliprantis and K. C. Border. Infinite Dimensional Analysis: A Hitchhiker's Guide, Third Edition. Springer, Berlin, 2006.

[3] C. Bennett and R. Sharpley. Interpolation of Operators, volume 129 of Pure and Applied Mathematics. Academic Press, Inc., Boston, MA, 1988.

[4] V. I. Bogachev. Measure Theory: Volume I. Springer-Verlag, Berlin, 2007.

[5] S. N. Cohen and R. J. Elliott. Stochastic Calculus and Applications: Second Edition. Probability and Its Applications. Birkhäuser, Basel, 2015.

[6] J. B. Conway. A Course in Functional Analysis: Second Edition, volume 96 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1990.

[7] G. Da Prato and A. Debussche. Stochastic Cahn-Hilliard equation. Nonlinear Anal., 26(2):241–263, 1996.

[8] G. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions: Second Edition, volume 152 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2014.

[9] R. C. Dalang and L. Quer-Sardanyons. Stochastic integrals for spde's: a comparison. Expo. Math., 29(1):67–109, 2011.

[10] J. L. Doob. Classical Potential Theory and Its Probabilistic Counterpart: Reprint of the 1984 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001.

[11] N. Elezović and A. Mikelić. On the stochastic Cahn-Hilliard equation. Nonlinear Anal., 16(12):1169–1200, 1991.

[12] G. Fabbri, F. Gozzi, and A. Święch. Stochastic Optimal Control in Infinite Dimension: Dynamic Programming and HJB Equations, volume 82 of Probability Theory and Stochastic Modelling. Springer International Publishing, 2017.

[13] G. B. Folland. Real Analysis. Pure and Applied Mathematics (New York). John Wiley & Sons, Inc., New York, 1984.

[14] L. Gasiński and N. S. Papageorgiou. Nonlinear Analysis, volume 9 of Series in Mathematical Analysis and Applications. Chapman & Hall/CRC, Boca Raton, FL, 2006.

[15] L. Gawarecki and V. Mandrekar. Stochastic Differential Equations in Infinite Dimensions with Applications to Stochastic Partial Differential Equations. Probability and its Applications (New York). Springer, Heidelberg, 2011.

[16] I. Gohberg, S. Goldberg, and N. Krupnik. Traces and Determinants of Linear Operators, volume 116 of Operator Theory: Advances and Applications. Birkhäuser Verlag, Basel, 2000.


[17] T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. Analysis in Banach Spaces. Vol. I. Martingales and Littlewood-Paley Theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Cham, 2016.

[18] R. C. James. A non-reflexive Banach space isometric with its second conjugate space. Proc. Nat. Acad. Sci. U. S. A., 37:174–177, 1951.

[19] R. Jarrow and P. Protter. A short history of stochastic integration and mathematical finance: the early years, 1880–1970. In A Festschrift for Herman Rubin, volume 45 of IMS Lecture Notes Monogr. Ser., pages 75–91. Inst. Math. Statist., Beachwood, OH, 2004.

[20] A. Jentzen. Numerical Analysis of Stochastic Partial Differential Equations. Lecture notes; version: May 26, 2014.

[21] I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus: Second Edition, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1991.

[22] A. S. Kechris. Classical Descriptive Set Theory, volume 156 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1995.

[23] W. Liu and M. Röckner. Stochastic Partial Differential Equations: An Introduction. Universitext. Springer, Cham, 2015.

[24] V. Mandrekar and B. Rüdiger. Stochastic Integration in Banach Spaces, volume 73 of Probability Theory and Stochastic Modelling. Springer, Cham, 2015.

[25] P.-A. Meyer. Stochastic processes from 1950 to the present. J. Electron. Hist. Probab. Stat., 5(1):42, 2009. Translated from French by Jeanine Sedjro.

[26] W. Page. Topological Uniform Structures. Dover Publications, Inc., New York, 1988. Revised reprint of the 1978 original.

[27] C. Prévôt and M. Röckner. A Concise Course on Stochastic Partial Differential Equations, volume 1905 of Lecture Notes in Mathematics. Springer, Berlin, 2007.

[28] D. Revuz and M. Yor. Continuous Martingales and Brownian Motion: Third Edition, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.

[29] H. L. Royden. Real Analysis. Macmillan Publishing Company, New York, 1988.

[30] W. Rudin. Functional Analysis: Second Edition. International Series in Pure and Applied Mathematics. McGraw-Hill, Inc., New York, 1991.

[31] B. P. Rynne and M. A. Youngson. Linear Functional Analysis: Second Edition. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 2008.

[32] F. Spieksma. An Introduction to Stochastic Processes in Continuous Time: the non-Jip-and-Janneke-language approach (adaptation of the text by Harry van Zanten). Lecture notes accompanying the MasterMath course "Stochastic Processes"; version: May 5, 2016.

[33] P. J. C. Spreij. Stochastic Integration. Lecture notes accompanying the course "Stochastic Integration" taught at the University of Amsterdam; version: August 17, 2016.

[34] T. C. Tao. An Epsilon of Room, I: Real Analysis: pages from year three of a mathematical blog. Obtained from http://bookstore.ams.org/gsm-117 on 27 Jan 2017.

116

Page 117: Wellposedness of Stochastic Di erential Equations in In ... · to develop a very complete theory of stochastic di erential equations { in a style so luminous by the way that these

[35] J. M. A. M. van Neerven. Infinite Dimensions [in Dutch]. Inaugural lecture, delivered on 5 September 2008 on the occasion of the acceptance of the position of Antoni van Leeuwenhoek professor of Stochastic Analysis at the Faculty of Electrical Engineering, Mathematics and Computer Science of Delft University of Technology.

[36] J. M. A. M. van Neerven. Stochastic Evolution Equations. Lecture notes accompanying the Internet Seminar 2007-2008; version: May 07, 2010.

[37] F. Viens, J. Feng, Y. Hu, and E. Nualart, editors. Malliavin Calculus and Stochastic Analysis: A Festschrift in Honor of David Nualart, volume 34 of Springer Proceedings in Mathematics & Statistics. Springer, New York, 2013.

[38] H. von Weizsäcker and G. Winkler. Stochastic Integrals: An Introduction. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 1990.
