Page 1: The variational approach to Hamilton–Jacobi equations driven by a Gaussian noise

J. Differential Equations 255 (2013) 3832–3847

Contents lists available at ScienceDirect

Journal of Differential Equations

www.elsevier.com/locate/jde

The variational approach to Hamilton–Jacobi equations driven by a Gaussian noise

Viorel Barbu

Al.I. Cuza University and Octav Mayer Institute of Mathematics, Romanian Academy, Iași, Romania


Article history: Received 18 June 2013; Revised 18 July 2013; Available online 29 August 2013

MSC: 60H15; 60H05; 70H20

Keywords: Hamilton–Jacobi equation; Brownian motion; Viscosity solution; Stochastic process

The global existence and uniqueness of viscosity solutions to the Cauchy problem for the Hamilton–Jacobi equations in $\mathbb{R}^N$ driven by additive and multiplicative Wiener processes are studied for convex Hamiltonians via variational techniques. The finite speed of propagation is also established in the multiplicative noise case for equations with Lipschitzian Hamiltonians.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

We consider here the stochastic Hamilton–Jacobi equation

$$\begin{cases} dX(t,x) + H\big(t,x,X_x(t,x)\big)\,dt = dW(t), & t \in [0,T],\\ X(0,x) = X_0(x), & x \in \mathbb{R}^N, \end{cases} \tag{1}$$

where $W = W(t,x)$ is the Wiener process

$$W(t,x) = \sum_{j=1}^{m} \mu_j(x)\,\beta_j(t), \quad t \ge 0,\ x \in \mathbb{R}^N, \tag{2}$$

E-mail address: [email protected].

0022-0396/$ – see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jde.2013.07.044


and $H : [0,T] \times \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ is a Hamiltonian function to be made precise later on. In (2), $\{\mu_j\}_{j=1}^{m}$ are smooth real-valued functions on $\mathbb{R}^N$ and $\{\beta_j\}_{j=1}^{m}$ is a system of linearly independent real Brownian motions in a probability space $\{\Omega,\mathcal{F},\mathbb{P}\}$ with the natural filtration $(\mathcal{F}_t)_{t\ge 0}$. The solution $X = X(t,x,\omega) : [0,T] \times \mathbb{R}^N \times \Omega \to \mathbb{R}$ to (1) is taken in a certain weak sense, which will be made precise in Section 2. Besides (1), we shall also study the stochastic equation (1) with the multiplicative white noise $X\,\dot W$, that is,

$$\begin{cases} dX(t,x) + H\big(t,x,X_x(t,x)\big)\,dt = X(t,x)\,dW(t),\\ X(0,x) = X_0(x), \quad x \in \mathbb{R}^N,\ t \in [0,T], \end{cases} \tag{3}$$

where $X\,dW$ is considered in the Itô sense. Eqs. (1), (3) arise as models for the dynamics of mechanical particles in a field perturbed by white noise (a few examples are given below in Section 2), as well as in the study of optimal control problems for systems driven by Gaussian processes. In fact, the solution $X$ to (1) may be viewed as the action function corresponding to the stochastic Hamiltonian system

$$\begin{cases} dx = H_p(t,x,p)\,dt,\\ dp = -H_x(t,x,p)\,dt + \sum_{j=1}^{m} \nabla\mu_j(x)\,d\beta_j. \end{cases} \tag{4}$$

The stochastic Hamilton–Jacobi equation

$$Y_t + H(t,x,Y_x,\omega) = 0, \quad (t,x) \in [0,T] \times \mathbb{R}^N,\ \omega \in \Omega, \tag{5}$$

was studied in the context of stochastic homogenization (see, e.g., [12,13]), but there are only a few results for Eq. (1) or (3) under general conditions on $H$. In this context, we mention the works [11,10], where existence for (1) is studied for smooth Hamiltonian functions $H$ by the method of stochastic characteristics defined by the Hamiltonian system (4). The method we use here is different and leads to existence and uniqueness of solutions under more general assumptions on $H$, which cover in particular the case of the stochastic eikonal equation.

By the substitution $X = Y + W$, Eq. (1) reduces to a random Hamilton–Jacobi equation of the form (5), but the existence of an adapted stochastic process $t \to Y(t,x,\omega)$ which is a viscosity solution to (5) is not a direct consequence of the existence theory of viscosity solutions to Hamilton–Jacobi equations; it requires a special analysis based on variational theory. As a matter of fact, the principal effort is to show that the viscosity solution to the corresponding random equation in $Y$ defines an $(\mathcal{F}_t)_{t\ge 0}$-adapted stochastic process.

As regards Eq. (3), it reduces via a rescaling procedure (see [2,3]) to a random Hamilton–Jacobi equation of the form $\varphi_t + H_0(t,x,\varphi,\varphi_x,\omega) = 0$. Both problems are treated in Sections 2–4 below. Eqs. (1) and (3) with stochastic Dirichlet boundary conditions on bounded domains can be treated similarly, but we do not give details. Note also that the Stratonovich equation

$$dX + H(t,x,X_x)\,dt = X \circ dW, \qquad X(0) = X_0, \tag{6}$$

can be treated by the same technique.

Notation. Everywhere in the following, $\mathbb{R}^N$ is the Euclidean space with norm denoted $|\cdot|_N$ and scalar product denoted $x \cdot y$. By $B_R$ we denote the ball $\{x \in \mathbb{R}^N;\ |x|_N \le R\}$.

Denote by $BUC(\mathbb{R}^N)$, $BUC([0,T] \times \mathbb{R}^N)$ and $BUC([0,T] \times \mathbb{R}^N \times B_R)$ the spaces of bounded and uniformly continuous functions on $\mathbb{R}^N$ (respectively, $[0,T] \times \mathbb{R}^N$, $[0,T] \times \mathbb{R}^N \times B_R$).


We denote by $L^1(0,T;\mathbb{R}^N)$ and $C([0,T];\mathbb{R}^N)$ the spaces of Lebesgue-summable functions $u : (0,T) \to \mathbb{R}^N$ and of continuous functions $u : [0,T] \to \mathbb{R}^N$, respectively. By $C^1([0,T] \times \mathbb{R}^N)$ we denote the space of continuously differentiable functions on $[0,T] \times \mathbb{R}^N$. Finally, $C^k_b(\mathbb{R}^N)$ is the space of bounded, $k$-times continuously differentiable functions on $\mathbb{R}^N$ with bounded derivatives up to order $k$.

We also use the notations:

$$\varphi_x(t,x) = \nabla_x \varphi(t,x),\ x \in \mathbb{R}^N, \qquad y'(s) = \frac{dy}{ds}(s),\ s \in \mathbb{R}, \qquad \varphi_t = \frac{\partial\varphi}{\partial t}.$$

Given the Hamiltonian function $H : [0,T] \times \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$, we denote by $L : [0,T] \times \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ the corresponding Lagrangian function

$$L(t,x,u) = \sup_p \big\{ p \cdot u - H(t,x,p);\ p \in \mathbb{R}^N \big\}. \tag{7}$$

We recall that $u \to L(t,x,u)$ is convex and lower semicontinuous and, if $p \to H(t,x,p)$ is convex and lower semicontinuous, then

$$H(t,x,p) = \sup_u \big\{ p \cdot u - L(t,x,u);\ u \in \mathbb{R}^N \big\}, \quad \forall p \in \mathbb{R}^N. \tag{8}$$

A continuous, monotonically increasing function $m : [0,\infty) \to [0,\infty)$ such that $m(0) = 0$ is called a modulus.
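As a concrete illustration of the duality (7)–(8) (a standard worked example, not part of the original text): for the quadratic Hamiltonian $H(t,x,p) = \frac{1}{2}|p|_N^2$,

```latex
L(t,x,u) = \sup_{p\in\mathbb{R}^N}\Big\{ p\cdot u - \tfrac12 |p|_N^2 \Big\}
         = \tfrac12 |u|_N^2 ,
```

the supremum being attained at $p = u$; inserting this $L$ into (8) recovers $H$, as guaranteed by convexity and lower semicontinuity.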

2. The main results

The following hypotheses will be considered in the sequel.

(i) $H \in BUC([0,T] \times \mathbb{R}^N \times B_R)$ for each $R > 0$.

(ii) For each $(t,x) \in [0,T] \times \mathbb{R}^N$, the function $p \to H(t,x,p)$ is convex.

(iii) There is a modulus $m$ such that

$$\big|H(t,x,p) - H(t,y,p)\big| \le m\big(|x-y|_N\big(1+|p|_N\big)\big),$$

for all $t \in [0,T]$ and all $x,y,p \in \mathbb{R}^N$.

(iv) $H(t,x,p) \le \nu|p|_N$, $\forall t \in [0,T]$, $x,p \in \mathbb{R}^N$, for some $\nu > 0$.

(v) $\mu_j \in C^2_b(\mathbb{R}^N)$, $j = 1,2,\ldots,m$.

By the substitution $Y = X - W$, we reduce Eq. (1) to the random Hamilton–Jacobi equation

$$\begin{cases} Y_t(t,x) + H\big(t,x,\ Y_x(t,x) + W_x(t,x)\big) = 0, & x \in \mathbb{R}^N,\\ Y(0,x) = X_0(x), & x \in \mathbb{R}^N, \end{cases} \tag{9}$$

which will be studied below in the framework of viscosity solutions. We recall (see [7,6]) that a continuous function $(t,x) \to Y(t,x)$ is said to be a viscosity solution to (9) (for a fixed $\omega \in \Omega$) if $Y(0,x) \equiv X_0(x)$ and, for each $\varphi \in C^1([0,T] \times \mathbb{R}^N)$, we have $\varphi_t(t_0,x_0) + H(t_0,x_0,\varphi_x(t_0,x_0)) \le 0$ (respectively, $\ge 0$) at each local maximum (respectively, minimum) point $(t_0,x_0)$ of $Y - \varphi$.

Definition 2.1. The stochastic process $X = X(t,x)$ is said to be a viscosity solution to (1) if $t \to X(t,x)$ is, for each $x \in \mathbb{R}^N$, $(\mathcal{F}_t)_{t\ge 0}$-adapted and $Y = X - W$ is a viscosity solution to (9), $\mathbb{P}$-a.s., i.e., for $\mathbb{P}$-almost all $\omega \in \Omega$.


Theorem 2.2. Let $X_0 \in BUC(\mathbb{R}^N)$ and $0 < T < \infty$. Then, under hypotheses (i), (ii), (iii), (v), Eq. (1) has a unique viscosity solution

$$X \in BUC\big([0,T] \times \mathbb{R}^N\big), \quad \mathbb{P}\text{-a.s.} \tag{10}$$

Moreover, the solution $X$ depends $\mathbb{P}$-a.s. continuously on $X_0$ from $BUC(\mathbb{R}^N)$ to $C([0,T];\mathbb{R}^N)$.

The viscosity solution to Eq. (3) is defined via the rescaling transformation

$$X = e^{W} Y. \tag{11}$$

If $X$ is a strong solution to (3) in Itô's integral sense, that is, $\mathbb{P}$-a.s.,

$$X(t,x) + \int_0^t H\big(s,x,X_x(s,x)\big)\,ds = X_0(x) + \int_0^t X(s,x)\,dW(s), \quad t \in [0,T],\ x \in \mathbb{R}^N, \tag{12}$$

then, by Itô's formula,

$$dX = e^{W} Y\,dW + e^{W}\,dY + \mu e^{W} Y\,dt,$$

and so $Y$ is the solution to the random Hamilton–Jacobi equation

$$\begin{cases} Y_t(t,x) + e^{-W(t,x)} H\big(t,x,\ e^{W(t,x)}\big(Y_x(t,x) + Y(t,x)W_x(t,x)\big)\big) + \mu(x)Y(t,x) = 0, & t \in [0,T],\ x \in \mathbb{R}^N,\\ Y(0,x) = X_0(x), & x \in \mathbb{R}^N, \end{cases} \tag{13}$$

where

$$\mu(x) = \frac{1}{2}\sum_{j=1}^{m} \mu_j^2(x), \quad x \in \mathbb{R}^N.$$
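The coefficient $\mu$ arises from the quadratic variation of $W$; the computation behind the displayed Itô formula is the following routine expansion (added here for the reader's convenience):

```latex
d\big(e^{W}Y\big) = e^{W}\,dY + e^{W}Y\,dW + \tfrac12\, e^{W}Y\,d\langle W\rangle ,
\qquad
d\langle W\rangle(t,x) = \sum_{j=1}^{m}\mu_j^2(x)\,dt = 2\mu(x)\,dt ,
```

which gives exactly $dX = e^{W}Y\,dW + e^{W}\,dY + \mu e^{W}Y\,dt$.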

Conversely, every strong $\mathbb{P}$-a.s. solution $Y$ to (13) (if any) which is adapted to the filtration $(\mathcal{F}_t)_{t\ge 0}$ yields a solution to (12). The equivalence of the stochastic equation and the corresponding deterministic equation via the rescaling transformation (11) can be proved by the arguments used in [2,3]. Since, under our assumptions, the random Hamilton–Jacobi equation does not, in general, have a strong solution, the equivalence between (12) and (13) is only conceptual; we are entitled, however, to say that $X$ is a viscosity solution to (12) if the process $t \to X(t,x)$ is $(\mathcal{F}_t)_{t\ge 0}$-adapted for each $x \in \mathbb{R}^N$ and $Y = e^{-W}X$ is $\mathbb{P}$-a.s. a viscosity solution to (13).

We have

Theorem 2.3. Let $X_0 \in BUC(\mathbb{R}^N)$ and $0 < T < \infty$. Then, under hypotheses (i), (ii), (iii), (iv), (v), Eq. (3) has a unique viscosity solution $X \in BUC([0,T] \times \mathbb{R}^N)$, $\mathbb{P}$-a.s. The solution depends continuously on the initial data $X_0$.

We note that, by substitution (11), the Stratonovich stochastic Eq. (6) reduces to

$$Y_t + e^{-W} H\big(t,x,\ e^{W}(Y_x + Y W_x)\big) = 0, \qquad Y(0) = X_0, \tag{14}$$

and the definition of the viscosity solution remains the same.


Let us now briefly recall a few examples to which Theorems 2.2 and 2.3 apply.

1. The eikonal equation

$$\begin{cases} dX + \rho|X_x|_N\,dt + a(t,x)\cdot X_x\,dt = X\,dW & \text{in } (0,T) \times \mathbb{R}^N,\\ X(0) = X_0 & \text{in } \mathbb{R}^N, \end{cases} \tag{15}$$

where $\rho > 0$ and $a \in BUC([0,T] \times \mathbb{R}^N)$.

For $a \equiv 0$ this is the classical geometrical optics equation perturbed by a linearly multiplicative Gaussian process. The same equation models flame propagation driven by the multiplicative Gaussian noise $X\,\dot W$ and the linear transport term $a \cdot X_x$.
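For this Hamiltonian with $a \equiv 0$, the Lagrangian (7) can be computed in closed form (a standard computation, added for illustration); it is this degenerate structure that produces the finite speed of propagation established in Section 5:

```latex
H(p) = \rho|p|_N \ \Longrightarrow\
L(u) = \sup_{p\in\mathbb{R}^N}\big\{p\cdot u - \rho|p|_N\big\}
     = \begin{cases} 0, & |u|_N \le \rho,\\ +\infty, & |u|_N > \rho, \end{cases}
```

so the admissible arcs in the variational formula (19) are exactly those with speed at most $\rho$.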

2. The inviscid stochastic Burgers equation

$$\begin{cases} dX + \frac{1}{2}\big(\big|X(t)\big|^2\big)_x\,dt = dW(t), & t \in (0,T),\ x \in \mathbb{R},\\ X(0) = X_0, \end{cases} \tag{16}$$

reduces via the transformation $X = \tilde X_x$ to the Hamilton–Jacobi equation

$$d\tilde X + \frac{1}{2}|\tilde X_x|^2\,dt = d\tilde W(t), \qquad \tilde X(0) = \int_{-\infty}^{x} X_0(\xi)\,d\xi, \tag{17}$$

where $\tilde W = \sum_{j=1}^{m} \beta_j \int_{-\infty}^{x} \mu_j(\xi)\,d\xi$.

In a similar way, one reduces the stochastic conservation law equation $dX + (F(X))_x\,dt = dW$ in $1\text{-}D$ to a Hamilton–Jacobi equation of the form (1).
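The link between (16) and (17) is the usual one (a one-line check, added for clarity): differentiating (17) in $x$ and using the transformation $X = \tilde X_x$,

```latex
dX + \tfrac12\big(|X|^2\big)_x\,dt
  = \partial_x\Big( d\tilde X + \tfrac12 |\tilde X_x|^2\,dt \Big)
  = \partial_x\, d\tilde W = dW ,
```

since $\tilde W_x = \sum_j \mu_j(x)\beta_j(t) = W$.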

3. The stochastic erosion equation

$$dh + H(h_x)\,dt = h\,dW, \quad t \ge 0,\ x \in \mathbb{R}^N, \tag{18}$$

can be taken as a model for the temporal change of the surface height $h = h(t,x)$ at time $t$ in the presence of a stochastic perturbation $h\,\dot W$ proportional to the height. In the special case of (18) with $H(p) = \frac{1}{2}|p|_N^2$ and additive noise $dW$, this is a limit case of the Kardar–Parisi–Zhang equation for stochastic interface growth.

3. Proof of Theorem 2.2

It should be said that, under assumptions (i)–(iii) and (v), for fixed $\omega \in \Omega$, the Hamilton–Jacobi equation (9) has a unique viscosity solution $Y$ (see [5,7,6,8]). The main effort here is to show that the process $t \to Y(t,x)$ is $(\mathcal{F}_t)_t$-adapted and, to this purpose, we shall invoke a variational approach to the existence theory for (9) which, though not new (see, e.g., [1,4]), has some specific features in the present case. Namely, consider for a fixed $\omega \in \Omega$ the optimal value function

$$Y(t,x) = \inf_u \bigg\{ \int_0^t \Big( L\big(s,y(s),u(s)\big) - u(s)\cdot W_y\big(s,y(s)\big) \Big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ y(t) = x,\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\} \tag{19}$$

for $(t,x) \in [0,T] \times \mathbb{R}^N$.


Lemma 3.1. The function $Y : [0,T] \times \mathbb{R}^N \to \mathbb{R}$ is continuous and is a viscosity solution to (9).

Proof. We note first that, for each $(t,x) \in [0,T] \times \mathbb{R}^N$, the infimum in (19) is attained. Indeed, we have by (7)

$$L(s,y,u) \ge \rho|u|_N - H\bigg(s,y,\ \rho\,\frac{u}{|u|_N}\bigg), \quad \forall u \in \mathbb{R}^N,\ \rho > 0, \tag{20}$$

and so, by hypothesis (i), it follows that

$$L(s,y,u) \ge \rho|u|_N - C(\rho), \quad \forall u \in \mathbb{R}^N,\ \forall \rho > 0. \tag{21}$$

Now, if $\{(y_n,u_n)\}$ is a minimizing sequence in (19), it follows by (21), via the Dunford–Pettis theorem, that $\{u_n\}$ is weakly compact in $L^1(0,t;\mathbb{R}^N)$ and so, by the Arzelà–Ascoli theorem, $\{y_n\}$ is strongly compact in $C([0,t];\mathbb{R}^N)$. Hence, on a subsequence, again denoted $\{n\}$, we have, for $n \to \infty$,

$$\begin{aligned} u_n &\longrightarrow u^* && \text{weakly in } L^1\big(0,t;\mathbb{R}^N\big),\\ y_n &\longrightarrow y^* && \text{in } C\big([0,t];\mathbb{R}^N\big),\\ u_n \cdot W_y(\cdot,y_n) &\longrightarrow u^* \cdot W_y\big(\cdot,y^*\big) && \text{weakly in } L^1\big(0,t;\mathbb{R}\big), \end{aligned} \tag{22}$$

where $y^*(s) = x - \int_s^t u^*(\tau)\,d\tau$, $s \in [0,t]$.

By (7), we have

$$\int_0^t L\big(s,y_n(s),u_n(s)\big)\,ds \ge \int_0^t u_n(s)\cdot p(s)\,ds - \int_0^t H\big(s,y_n(s),p(s)\big)\,ds, \quad \forall p \in L^\infty\big(0,t;\mathbb{R}^N\big),$$

and so it follows, by virtue of assumption (i), that

$$\liminf_{n\to\infty} \int_0^t L\big(s,y_n(s),u_n(s)\big)\,ds \ge \int_0^t \Big( u^*(s)\cdot p(s) - H\big(s,y^*(s),p(s)\big) \Big)\,ds, \quad \forall p \in L^\infty\big(0,t;\mathbb{R}^N\big). \tag{23}$$

Taking into account that, by (ii), the function $p \to H(s,y^*,p)$ is convex and continuous, we have (see, e.g., Theorem 1 in [14])

$$\sup_p \bigg\{ \int_0^t \Big( u^*(s)\cdot p(s) - H\big(s,y^*(s),p(s)\big) \Big)\,ds;\ p \in L^\infty\big(0,t;\mathbb{R}^N\big) \bigg\} = \int_0^t L\big(s,y^*(s),u^*(s)\big)\,ds.$$

Hence, it follows that

$$Y(t,x) = \int_0^t \Big( L\big(s,y^*(s),u^*(s)\big) - u^*(s)\cdot W_y\big(s,y^*(s)\big) \Big)\,ds + X_0\big(y^*(0)\big),$$

as claimed.


It is easily seen that $(t,x) \to Y(t,x)$ is continuous. Indeed, if $(t_n,x_n) \to (t_0,x_0)$ for $n \to \infty$, we have

$$Y(t_n,x_n) = \int_0^{t_n} \Big( L\big(s,y_n(s),u_n(s)\big) - W_y\big(s,y_n(s)\big)\cdot u_n(s) \Big)\,ds + X_0\big(y_n(0)\big) \le \int_0^{t_n} \Big( L\big(s,y(s),u(s)\big) - W_y\big(s,y(s)\big)\cdot u(s) \Big)\,ds + X_0\big(y(0)\big), \tag{24}$$

for all admissible pairs $(y,u)$ in the optimal control problem (19); letting $n \to \infty$ and taking $(y,u)$ nearly optimal at $(t_0,x_0)$, we obtain that

$$\limsup_{n\to\infty} Y(t_n,x_n) \le Y(t_0,x_0).$$

On the other hand, arguing as above, it follows by (24) that

$$Y(t_n,x_n) \ge \int_0^{t_n} \big( p\cdot u_n - H(s,y_n,p) \big)\,ds - \int_0^{t_n} W_y\big(s,y_n(s)\big)\cdot u_n(s)\,ds + X_0\big(y_n(0)\big)$$

for all $p \in L^\infty(0,\infty;\mathbb{R}^N)$, and so, by the lower semicontinuity of the integral functional arising in the right-hand side of (24), it is easily seen that

$$\liminf_{n\to\infty} Y(t_n,x_n) \ge Y(t_0,x_0).$$

Let us show now that $Y$ is a viscosity solution to (9). We set $\tilde Y(t,x) = Y(T-t,x)$ and note that

$$\tilde Y(t,x) = \inf_v \bigg\{ \int_t^T \Big( L\big(T-s,z(s),v(s)\big) - v(s)\cdot W_z\big(T-s,z(s)\big) \Big)\,ds + X_0\big(z(T)\big);\ z' = -v \text{ on } (t,T);\ z(t) = x \bigg\}. \tag{25}$$

By the dynamic programming principle, we have, for all $(t_0,x_0) \in [0,T] \times \mathbb{R}^N$,

$$\tilde Y(t_0,x_0) = \inf_v \bigg\{ \int_{t_0}^{s} \tilde L\Big(T-\tau,\ x_0 - \int_{t_0}^{\tau} v(\theta)\,d\theta,\ v(\tau)\Big)\,d\tau + \tilde Y\Big(s,\ x_0 - \int_{t_0}^{s} v(\theta)\,d\theta\Big);\ v \in L^1\big(t_0,s;\mathbb{R}^N\big) \bigg\}, \tag{26}$$

where $\tilde L(s,y,v) \equiv L(s,y,v) - v \cdot W_y(s,y)$.


Let $(t_0,x_0) \in [0,T] \times \mathbb{R}^N$ and $\varphi \in C^1([0,T] \times \mathbb{R}^N)$ be such that

$$\tilde Y(t_0,x_0) - \varphi(t_0,x_0) \ge \tilde Y(t,x) - \varphi(t,x)$$

for all $(t,x)$ in a neighborhood of $(t_0,x_0)$. By (26), this yields, for $s = t$, $x = x_0 - (t-t_0)v$ and $v \in \mathbb{R}^N$,

$$\varphi(t_0,x_0) - \varphi(t,x) \le \tilde Y(t_0,x_0) - \tilde Y(t,x) \le \int_{t_0}^{t} \tilde L\big(T-\tau,\ x_0 - (\tau-t_0)v,\ v\big)\,d\tau,$$

and, therefore,

$$-\varphi_t(t_0,x_0) + \varphi_x(t_0,x_0)\cdot v \le L(T-t_0,x_0,v) - v\cdot W_x(T-t_0,x_0), \quad \forall v \in \mathbb{R}^N.$$

This yields

$$-\varphi_t(t_0,x_0) + H\big(T-t_0,\ x_0,\ \varphi_x(t_0,x_0) + W_x(T-t_0,x_0)\big) \le 0.$$

Similarly, if $(t_0,x_0)$ is a local minimum of $\tilde Y - \varphi$, we get that

$$-\varphi_t(t_0,x_0) + H\big(T-t_0,\ x_0,\ \varphi_x(t_0,x_0) + W_x(T-t_0,x_0)\big) \ge 0.$$

Hence, $\tilde Y$ is a viscosity solution of the equation

$$-\tilde Y_t + H\big(T-t,\ x,\ \tilde Y_x + W_x(T-t,x)\big) = 0,$$

and this clearly implies that $Y$ is a viscosity solution to (9), as claimed. □

Lemma 3.2. We have, $\mathbb{P}$-a.s.,

$$Y \in BUC\big([0,T] \times \mathbb{R}^N\big).$$

Proof. The argument is well known, but we sketch it for convenience (see, e.g., [5, Theorem 8.1]). We consider the function

$$\psi(t,x,y) = Y(t,x) - Y(t,y) - \gamma\big(t,|x-y|_N\big),$$

where $\gamma$ is a smooth function to be chosen below so that $\psi \le 0$ on $[0,T] \times \mathbb{R}^N \times \mathbb{R}^N$. Assume that $\max\psi > 0$ and let

$$(\bar t,\bar x,\bar s,\bar y) = \arg\max \Big\{ Y(t,x) - Y(s,y) - \gamma\big(t,|x-y|_N\big) - \frac{(t-s)^2}{\alpha^2} - \delta\big(|x|_N^2 + |y|_N^2\big) \Big\},$$

where $\alpha,\delta > 0$. It follows by the definition of the viscosity solution that

$$\gamma_t\big(\bar t,|\bar x - \bar y|_N\big) + H(\bar t,\bar x,p) - H(\bar s,\bar y,p) \le 0,$$

where

$$p = \gamma_r\big(\bar t,|\bar x - \bar y|_N\big)\,\frac{\bar x - \bar y}{|\bar x - \bar y|_N}.$$

By (iii), this yields

$$\gamma_t\big(\bar t,|\bar x - \bar y|_N\big) - m\big(|\bar x - \bar y|_N\big(1 + |p|_N\big)\big) + O(\alpha) < 0. \tag{27}$$

Setting $p_\gamma(t) = \sup\{|\gamma_r(t,r)|;\ 0 \le r < 1\}$ and choosing $\gamma$ such that

$$\gamma_t(t,r) - m\big(r\big(1 + p_\gamma(t)\big)\big) \ge \delta_1 > 0, \qquad \gamma(0,r) = \sup\big\{\big|X_0(x) - X_0(y)\big|;\ |x-y|_N \le r\big\}, \qquad \gamma(t,0) = 0,$$

we arrive at a contradiction. Hence, we have

$$Y(t,x) - Y(t,y) \le \gamma\big(t,|x-y|_N\big), \quad \forall x,y \in \mathbb{R}^N,$$

and this implies that $Y \in BUC([0,T] \times \mathbb{R}^N)$, as desired. □

Lemma 3.3. $Y$ is the unique viscosity solution to Eq. (9).

Proof. Nothing remains to be done except to combine Lemma 3.2 with the uniqueness of $BUC([0,T] \times \mathbb{R}^N)$ viscosity solutions to Eq. (9) under hypotheses (i)–(iii) (see [5,7,6]). □

We shall now establish the following key lemma.

Lemma 3.4. For each $x \in \mathbb{R}^N$, the stochastic process $t \to Y(t,x)$ is adapted to the filtration $(\mathcal{F}_t)_{t\ge 0}$.

Proof. The $\mathcal{F}_t$-measurability of the function $\omega \to Y(t,x,\omega)$ does not follow directly from (19), though, as easily seen, the integrand is $\mathcal{F}_s$-measurable. To show it, we shall proceed as follows. Replacing the function $L$ by the approximating Lagrangian

$$L_\varepsilon(t,x,u) = \int_{\mathbb{R}^N \times \mathbb{R}^N} L(t,y,v)\,\rho_\varepsilon(x-y,u-v)\,dy\,dv + \varepsilon|u|_N^2, \quad \varepsilon > 0,$$

where $\rho_\varepsilon$ is a $C^1$-mollifier, we may assume without any loss of generality that $L$ is of class $C^1([0,T] \times \mathbb{R}^N \times \mathbb{R}^N)$ and strictly convex in $u$. Indeed, as easily seen, for $\varepsilon \to 0$, the sequence of functions

$$Y_\varepsilon(t,x) = \inf_u \bigg\{ \int_0^t \Big( L_\varepsilon\big(s,y(s),u(s)\big) - W_y\big(s,y(s)\big)\cdot u(s) \Big)\,ds + X_0\big(y(0)\big);\ y(s) = x - \int_s^t u(\tau)\,d\tau,\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\} \tag{28}$$

is $\mathbb{P}$-a.s. convergent to $Y$ on $[0,T] \times \mathbb{R}^N$. In a similar way, we may assume without any loss of generality that $X_0 \in C^1_b(\mathbb{R}^N)$. Now, with these assumptions, we define the sequence of functions


$$Y^n(t,x) = \inf_u \bigg\{ \int_0^t \Big( L\big(s,y(s),u(s)\big) - W_y\big(s,y(s)\big)\cdot u(s) \Big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ \big|u(s)\big|_N \le n,\ y(t) = x,\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\}, \quad n \in \mathbb{N}. \tag{29}$$

It is easily seen that, $\mathbb{P}$-a.s.,

$$\lim_{n\to\infty} Y^n(t,x) = Y(t,x), \quad \forall (t,x) \in [0,T] \times \mathbb{R}^N. \tag{30}$$

Hence, it suffices to show that, for each $n$ and all $(t,x) \in [0,T] \times \mathbb{R}^N$, $\omega \to Y^n(t,x)$ is $\mathcal{F}_t$-measurable. To this end, we note that, by the Pontryagin maximum principle (see, e.g., [1, p. 45]), any optimal pair $(y^*,u^*)$ in problem (29) satisfies the Euler–Lagrange system

$$\begin{cases} \big(y^*(s)\big)' = u^*(s),\\ \big(p^*(s)\big)' = L_y\big(s,y^*(s),u^*(s)\big) - W_{yy}\big(s,y^*(s)\big)\,u^*(s),\\ p^*(s) \in L_u\big(s,y^*(s),u^*(s)\big) - W_y\big(s,y^*(s)\big) + N_{U_n}(u^*)(s), \quad \text{a.e. } s \in (0,t),\\ y^*(t) = x, \qquad p^*(0) = (X_0)_y\big(y^*(0)\big), \end{cases} \tag{31}$$

where $L_u = \partial_u L$ is the subdifferential of the convex function $u \to L(t,x,u)$ and $N_{U_n} \subset L^1(0,t;\mathbb{R}^N)$ is the normal cone to the set

$$U_n = \big\{ u \in L^1\big(0,t;\mathbb{R}^N\big);\ \big|u(s)\big|_N \le n,\ \text{a.e. } s \in (0,t) \big\},$$

that is, $N_{U_n}(u^*) = \{\eta \in L^\infty(0,t;\mathbb{R}^N)\}$, where

$$\eta(s) \in \begin{cases} [0,\infty) & \text{if } u^*(s) = n,\\ (-\infty,0] & \text{if } u^*(s) = -n,\\ 0 & \text{if } u^*(s) \in (-n,n), \end{cases} \quad \text{a.e. } s \in (0,t).$$

Since, as mentioned earlier, the function $u \to L(\cdot,u)$ is strictly convex, we have, for some $\lambda > 0$,

$$\big(L_u(\cdot,u) - L_u(\cdot,v)\big)\cdot(u-v) \ge \lambda|u-v|_N^2, \quad \forall u,v \in \mathbb{R}^N,$$

and so $(L_u + N_{U_n})^{-1}$ is Lipschitzian for each $(t,x) \in [0,T] \times \mathbb{R}^N$. Then, by (31), we see that

$$u^*(s) = \big(L_u\big(s,y^*(s),\cdot\big) + N_{U_n}\big)^{-1}\big(p^*(s) + W_y\big(s,y^*(s)\big)\big), \quad s \in [0,t],$$

and, therefore, $u^*$ is continuous on $[0,t]$. Then, we may rewrite (29) as

$$Y^n(t,x) = \inf \bigg\{ \int_0^t \bigg( L\Big(s,\ x - \int_s^t u(\tau)\,d\tau,\ u(s)\Big) - u(s)\cdot W_y\Big(s,\ x - \int_s^t u(\tau)\,d\tau\Big) \bigg)\,ds + X_0\Big(x - \int_0^t u(\tau)\,d\tau\Big);\ u \in \mathcal{U}_n \bigg\}, \tag{32}$$


where

$$\mathcal{U}_n = \big\{ u \in C\big([0,t];\mathbb{R}^N\big);\ \big|u(s)\big|_N \le n,\ \forall s \in [0,t] \big\}.$$

Since $\mathcal{U}_n$ is a separable metric space and the functions $L$, $W_y$ belong to $BUC([0,T] \times \mathbb{R}^N \times B_n)$, we see that the infimum in (32) can be taken over a countable subset $V$ of $\mathcal{U}_n$. Taking into account that, for each $u \in V$, the function

$$\omega \to \int_0^t L\Big(s,\ x - \int_s^t u(\tau)\,d\tau,\ u(s)\Big)\,ds - \int_0^t u(s)\cdot W_y\Big(s,\ x - \int_s^t u(\tau)\,d\tau\Big)\,ds$$

is $\mathcal{F}_t$-measurable (because each $\beta_j$ is $(\mathcal{F}_t)_{t\ge 0}$-adapted), we conclude by (32) that $\omega \to Y^n(t,x)$ is $\mathcal{F}_t$-measurable and, therefore, by virtue of (30), so is $\omega \to Y(t,x)$ for all $x \in \mathbb{R}^N$ and $t \in [0,T]$, as desired. □

Proof of Theorem 2.2 (continued). By Lemmas 3.1–3.4, it follows that $X$ is the unique viscosity solution to (1) and that $X \in BUC([0,T] \times \mathbb{R}^N)$, $\mathbb{P}$-a.s. By (19), it follows also that, for $\mathbb{P}$-almost all $\omega \in \Omega$, $Y$, and consequently $X$, depends continuously on $X_0$ from $BUC(\mathbb{R}^N)$ to $C([0,T];\mathbb{R}^N)$. This completes the proof. □

Remark 3.5. If $X_0$ is Lipschitzian and assumption (iii) is strengthened to

(iii)′ $\big|H(t,x,p) - H(t,\bar x,p)\big| \le m\big(|p|_N + 1\big)|x - \bar x|_N$, $\forall t \in [0,T]$, $x,\bar x,p \in \mathbb{R}^N$,

then it follows by (19) that $x \to Y(t,x)$ is Lipschitz continuous for each $t \in [0,T]$ and

$$\big\|Y_x(t,\cdot)\big\|_{L^\infty(\mathbb{R}^N)} \le C, \quad \forall t \in [0,T].$$

(This also follows by Theorem 9.1 in [5].) Moreover, $Y \in \mathrm{Lip}([0,T] \times \mathbb{R}^N)$ and satisfies Eq. (9) a.e. on $(0,T) \times \mathbb{R}^N$. In other words, $X$ is a strong solution to (1) in Itô's sense.

Remark 3.6. By (19), we have

$$X(t,x) = Y(t,x) + W(t,x) = \inf_u \bigg\{ \int_0^t L\big(s,y(s),u(s)\big)\,ds + \int_0^t \sum_{j=1}^{m} \mu_j\big(y(s)\big)\,d\beta_j(s) + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ y(t) = x,\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\}. \tag{33}$$

In other words, $X$ can be viewed as the action function of the stochastic variational problem associated with the Hamiltonian system (4). Of course, under our general assumptions, this system does not have a strong solution in Itô's sense, so (4) cannot be used to construct, as in [9,10], a strong solution to Eq. (1) via the stochastic characteristics defined by system (4).

In the special case of the stochastic eikonal equation

$$dX + \rho|X_x|_N\,dt = dW, \qquad X(0) = X_0,$$

we obtain by (33) the Lax–Hopf type formula for the viscosity solution $X$:

$$X(t,x) = \inf \bigg\{ X_0\big(y(0)\big) + \sum_{j=1}^{m} \int_0^t \mu_j\big(y(s)\big)\,d\beta_j(s);\ y'(s) = u(s),\ |u|_N \le \rho,\ y(t) = x \bigg\}, \quad t \in [0,T],\ x \in \mathbb{R}^N.$$

We have a similar formula in the general case of Eq. (15) with additive noise $W$.
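The deterministic core of this formula (take $W \equiv 0$, so the stochastic integral drops out) can be checked numerically. The sketch below is illustrative only and is not from the paper; the grid, the initial cone $X_0$, and the helper name `lax_hopf` are our own choices.

```python
import numpy as np

# Deterministic special case (W = 0) of the Lax-Hopf type formula for the
# eikonal equation dX + rho*|X_x| dt = 0:
#     X(t, x) = inf { X_0(y(0)) ; |y'| <= rho, y(t) = x }
#             = min over |y - x| <= rho*t of X_0(y).
def lax_hopf(X0, xs, t, rho):
    X = np.empty_like(X0)
    for i, x in enumerate(xs):
        # admissible endpoints y(0) of arcs with speed at most rho
        mask = np.abs(xs - x) <= rho * t + 1e-12
        X[i] = X0[mask].min()
    return X

xs = np.linspace(-3.0, 3.0, 601)
X0 = np.maximum(0.0, 1.0 - np.abs(xs))   # initial cone supported in [-1, 1]
Xt = lax_hopf(X0, xs, t=1.0, rho=0.5)
# Finite speed of propagation: X(1, x) = 0 for |x| >= 1 + rho*1 = 1.5.
```

For $X_0(y) = (1-|y|)_+$ the formula gives $X(t,x) = (1 - \rho t - |x|)_+$, so the solution vanishes outside the ball of radius $1 + \rho t$, in agreement with Theorem 5.1 below.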

4. Proof of Theorem 2.3

It suffices to show that, for each $\omega \in \Omega$, the Hamilton–Jacobi equation (13) has a unique viscosity solution $Y$ such that $t \to Y(t,x)$ is $(\mathcal{F}_t)_{t\ge 0}$-adapted and $Y \in BUC([0,T] \times \mathbb{R}^N)$, $\mathbb{P}$-a.s. To this end, we fix $\omega \in \Omega$ and, for each $Z \in BUC([0,T] \times \mathbb{R}^N)$, we consider the viscosity solution $Y = F(Z)$ to the equation

$$\begin{cases} Y_t(t,x) + e^{-W(t,x)} H\big(t,x,\ e^{W(t,x)}\big(Y_x(t,x) + W_x(t,x)Z(t,x)\big)\big) + \mu(x)Z(t,x) = 0,\\ Y(0,x) = X_0(x), \quad x \in \mathbb{R}^N,\ t \in [0,T]. \end{cases} \tag{34}$$

By Lemma 3.1, we know that Eq. (34) has a unique viscosity solution $Y \in BUC([0,T] \times \mathbb{R}^N)$, which is given by (see (19))

$$Y(t,x) = \inf_u \bigg\{ \int_0^t \tilde L\big(s,y(s),u(s)\big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ y(t) = x,\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\}, \tag{35}$$

where $\tilde L$ is the Lagrangian function associated with the Hamiltonian

$$\tilde H(t,x,p) = e^{-W(t,x)} H\big(t,x,\ e^{W(t,x)}\big(p + W_x(t,x)Z(t,x)\big)\big) + \mu(x)Z(t,x),$$

that is,

$$\tilde L(t,x,u) = \sup_p \Big\{ u\cdot p - e^{-W(t,x)} H\big(t,x,\ e^{W(t,x)}\big(p + W_x(t,x)Z(t,x)\big)\big) \Big\} - \mu(x)Z(t,x) = e^{-W(t,x)} L(t,x,u) - e^{-W(t,x)} Z(t,x)\,W_x(t,x)\cdot u - \mu(x)Z(t,x),$$

where

$$L(t,x,u) = \sup_p \big\{ u\cdot p - H(t,x,p);\ p \in \mathbb{R}^N \big\}, \quad \forall u \in \mathbb{R}^N,\ t \in [0,T],\ x \in \mathbb{R}^N. \tag{36}$$

Hence, (35) can be rewritten as

$$Y(t,x) = \inf_u \bigg\{ \int_0^t \Big( e^{-W(s,y(s))} L\big(s,y(s),u(s)\big) - \big(\mu\big(y(s)\big) + u(s)\cdot W_y\big(s,y(s)\big)\,e^{-W(s,y(s))}\big) Z\big(s,y(s)\big) \Big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big),\ y(t) = x \bigg\}. \tag{37}$$

By (36) and (iv), it follows that

$$L(t,x,u) = +\infty \quad \text{for } |u|_N > \nu,\ t \in [0,T],\ x \in \mathbb{R}^N, \tag{38}$$

and, therefore, the infimum in (37) is constrained to the set $\{u;\ |u(s)|_N \le \nu,\ \text{a.e. } s \in (0,t)\}$.

Next, by (37) and hypothesis (v), we see that (recall that $Y = F(Z)$)

$$F(Z)(t,x) \le F(\bar Z)(t,x) + \sup_u \bigg\{ \int_0^t \big(\mu\big(y(s)\big) + u(s)\cdot W_y\big(s,y(s)\big)\,e^{-W(s,y(s))}\big)\big(\bar Z\big(s,y(s)\big) - Z\big(s,y(s)\big)\big)\,ds;\ y' = u,\ y(t) = x,\ \big|u(s)\big|_N \le \nu,\ \text{a.e. } s \in (0,t) \bigg\} \le F(\bar Z)(t,x) + C \sup_u \bigg\{ \int_0^t \big|Z\big(s,y(s)\big) - \bar Z\big(s,y(s)\big)\big|\,ds;\ y(s) = x - \int_s^t u(\tau)\,d\tau,\ \big|u(s)\big|_N \le \nu,\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big) \bigg\}. \tag{39}$$

We introduce on the space $\mathcal{X} = BUC([0,T] \times \mathbb{R}^N)$ the norm

$$\|Z\|_\alpha = \sup\big\{ e^{-\alpha t}\big|Z(t,x)\big|;\ (t,x) \in [0,T] \times \mathbb{R}^N \big\},$$

where $\alpha > 1$ is sufficiently large. Then, by (39), we see that

$$\big\|F(Z) - F(\bar Z)\big\|_\alpha \le \frac{1}{\alpha}\,\|Z - \bar Z\|_\alpha, \quad \forall Z,\bar Z \in BUC\big([0,T] \times \mathbb{R}^N\big),$$

and, therefore, by the Banach fixed point theorem, Eq. (37) (i.e., $Z = F(Z)$) has a unique viscosity solution $Y \in BUC([0,T] \times \mathbb{R}^N)$. Moreover, since, as seen in Lemma 3.4, $t \to F(Z)(t,x)$ is $(\mathcal{F}_t)_{t\ge 0}$-adapted for each $x$, and the solution $Y$ to $Z = F(Z)$ is obtained as the limit of the sequence $\{Z_n\}$, where $Z_n = F(Z_{n-1})$, we infer that the process $t \to Y(t,x)$ is $(\mathcal{F}_t)_{t\ge 0}$-adapted, too.

The uniqueness of the viscosity solution $Y \in BUC([0,T] \times \mathbb{R}^N)$ to (34) follows by assumption (iv) (see, e.g., [7,6]). This completes the proof.
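The contraction factor comes from the weighted (Bielecki-type) norm by a standard computation; with $C$ the constant in (39), one has (spelled out here for convenience):

```latex
e^{-\alpha t}\big|F(Z)(t,x) - F(\bar Z)(t,x)\big|
 \le C\, e^{-\alpha t} \int_0^t e^{\alpha s}\,ds\; \|Z - \bar Z\|_{\alpha}
 \le \frac{C}{\alpha}\, \|Z - \bar Z\|_{\alpha},
```

so $F$ is a contraction as soon as $\alpha > C$.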

Remark 4.1. As easily follows from the previous proofs, Theorems 2.2 and 2.3 remain true for the more general Wiener processes

$$W(t,x) = \sum_{j=1}^{m} \mu_j(t,x)\,\beta_j(t), \quad (t,x) \in [0,\infty) \times \mathbb{R}^N,$$

where $\mu_j \in BUC([0,T] \times \mathbb{R}^N)$. In this case, Eq. (1) reduces to a random equation by the change of variable $Y = X - \int_0^t \sum_{j=1}^{m} \mu_j(s,x)\,d\beta_j(s)$, while in (3), instead of (11), we may use the substitution

$$X(t,x) = \bigg( \exp\bigg( \int_0^t \sum_{j=1}^{m} \mu_j(s,x)\,d\beta_j(s) \bigg) \bigg)\, Y(t,x), \quad (t,x) \in [0,T] \times \mathbb{R}^N,$$

to obtain for $Y$ a random Hamilton–Jacobi equation of the form (34), for which hypotheses (i)–(iii) hold. The details are omitted.

Remark 4.2. Taking into account (14), we see that the previous proof applies word for word to prove Theorem 2.3 for viscosity solutions to the Stratonovich equation (6).

5. The finite speed of propagation

We establish here the finite speed of propagation of the viscosity solutions $X$ to Eq. (3). Namely, we have

Theorem 5.1. Assume that $H$ satisfies hypotheses (i), (ii), (iii), (iv), (v), that $H \ge 0$ and that $H(t,x,0) \equiv 0$. Let $X_0 \in BUC(\mathbb{R}^N)$ be such that $\mathrm{support}(X_0) \subset B_R$. Then, $\mathbb{P}$-a.s., $\mathrm{support}(X(t,\cdot)) \subset B_{R+\nu t}$ for all $t > 0$.

Proof. It suffices to prove that $\mathrm{support}(Y(t,\cdot)) \subset B_{R+\nu t}$, $\mathbb{P}$-a.s., for the solution $Y$ to the random Eq. (34) or, equivalently (see (37)),

$$Y(t,x) = \inf_u \bigg\{ \int_0^t \Big( e^{-W(s,y(s))} L\big(s,y(s),u(s)\big) - \big(\mu\big(y(s)\big) + u(s)\cdot W_y\big(s,y(s)\big)\,e^{-W(s,y(s))}\big) Y\big(s,y(s)\big) \Big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big),\ y(t) = x \bigg\}. \tag{40}$$

As seen earlier, in the proof of Theorem 2.3, by the Banach contraction principle we have, for each $T > 0$,

$$Y = \lim_{n\to\infty} Y_n \quad \text{in } C\big([0,T] \times \mathbb{R}^N\big),\ \mathbb{P}\text{-a.s.},$$

where $Y_n = F(Y_{n-1})$, that is,

$$Y_n(t,x) = \inf_u \bigg\{ \int_0^t \Big( e^{-W(s,y(s))} L\big(s,y(s),u(s)\big) - \big(\mu\big(y(s)\big) + u(s)\cdot W_y\big(s,y(s)\big)\,e^{-W(s,y(s))}\big) Y_{n-1}\big(s,y(s)\big) \Big)\,ds + X_0\big(y(0)\big);\ y'(s) = u(s),\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big),\ y(t) = x \bigg\}, \quad n \in \mathbb{N}. \tag{41}$$


On the other hand, by (38), it follows that each minimizing arc $y$ in (41) must satisfy the constraint

$$|x|_N \le \nu|t-s| + \big|y(s)\big|_N \le \nu t + \big|y(s)\big|_N, \quad \forall s \in (0,t).$$
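Indeed, since (38) forces $|u(s)|_N \le \nu$ along any minimizing arc and $y(t) = x$, one has (a short check, added for clarity):

```latex
|x|_N = \Big| y(s) + \int_s^t u(\tau)\,d\tau \Big|_N
      \le \big|y(s)\big|_N + \nu\,(t-s)
      \le \big|y(s)\big|_N + \nu t .
```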

Since $\mathrm{support}(X_0) \subset B_R$, this implies that, for $|x|_N \ge \nu t + R$, we have

$$Y_n(t,x) = \inf_u \bigg\{ \int_0^t \Big( e^{-W(s,y(s))} L\big(s,y(s),u(s)\big) - \big(\mu\big(y(s)\big) + u(s)\cdot W_y\big(s,y(s)\big)\,e^{-W(s,y(s))}\big) Y_{n-1}\big(s,y(s)\big) \Big)\,ds;\ y'(s) = u(s),\ \big|u(s)\big|_N \le \nu,\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big),\ y(t) = x \bigg\}, \quad n \ge 1, \tag{42}$$

and, in particular, for $n = 1$ with $Y_0 = X_0$, we obtain

$$Y_1(t,x) = \inf_u \bigg\{ \int_0^t e^{-W(s,y(s))} L\big(s,y(s),u(s)\big)\,ds;\ y'(s) = u(s),\ \big|u(s)\big|_N \le \nu,\ \text{a.e. } s \in (0,t),\ u \in L^1\big(0,t;\mathbb{R}^N\big),\ y(t) = x \bigg\}. \tag{43}$$

Since $H \ge 0$ and $H(t,x,0) \equiv 0$, we have $L \ge 0$ on $(0,T) \times \mathbb{R}^N \times \mathbb{R}^N$ and

$$L(t,x,0) = \inf_u \big\{ L(t,x,u);\ u \in \mathbb{R}^N \big\} = 0, \quad \forall (t,x) \in (0,T) \times \mathbb{R}^N.$$

Then (43) yields

$$Y_1(t,x) = 0, \quad \text{for } |x|_N \ge \nu t + R,\ t \ge 0,$$

and, iterating (42) by induction, we get that, for all $n \in \mathbb{N}$,

$$Y_n(t,x) = 0, \quad \text{for } |x|_N \ge \nu t + R,\ t \ge 0.$$

Hence,

$$Y(t,x) = 0, \quad \text{for } |x|_N \ge \nu t + R,\ t \ge 0,$$

which completes the proof. □

Remark 5.2. It is interesting that the finite speed of propagation property is independent of the coefficients $\mu_j$ of the Wiener process $W(t)$. Theorem 5.1 applies, in particular, to the eikonal equation (15) and so, if $X_0(x) = 0$ for $|x|_N \ge R$, then we have

$$X(t,x) = 0 \quad \text{for } |x|_N \ge \nu t + R, \tag{44}$$

where $\nu = \rho + \sup\{|a(t,x)|_N;\ (t,x) \in (0,\infty) \times \mathbb{R}^N\}$.

If we view (15) as a model of flame propagation in $\mathbb{R}^N$, $N = 1,2,3$, then $\mathcal{O}_t = \{x \in \mathbb{R}^N;\ X(t,x) = 0\}$ represents the burnt region at time $t$, and (44) shows that this region extends with finite speed at most $\nu$.


Acknowledgments

This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS–UEFISCDI, project PN-II-PCE-2011-3-0027.

References

[1] V. Barbu, Mathematical Methods in Optimization of Differential Systems, Kluwer Academic Publishers, Dordrecht, 1995.
[2] V. Barbu, M. Röckner, On a random scaled porous media equation, J. Differential Equations 251 (2011) 2494–2514.
[3] V. Barbu, M. Röckner, Stochastic variational inequalities and applications to the total variation flow perturbed by linear multiplicative noise, Arch. Ration. Mech. Anal. 209 (2013) 797–834.
[4] M. Bardi, I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations, Birkhäuser, Boston, 1997.
[5] G. Barles, First-Order Hamilton–Jacobi Equations and Applications, CIME Course, 2011.
[6] M.G. Crandall, L.C. Evans, P.-L. Lions, Some properties of viscosity solutions of Hamilton–Jacobi equations, Trans. Amer. Math. Soc. 282 (1984) 487–502.
[7] M.G. Crandall, P.-L. Lions, Viscosity solutions of Hamilton–Jacobi equations, Trans. Amer. Math. Soc. 277 (1983) 1–42.
[8] H. Ishii, Perron's method for Hamilton–Jacobi equations, Duke Math. J. 55 (1987) 369–419.
[9] V.N. Kolokoltsov, Stochastic Hamilton–Jacobi–Bellman equation and stochastic Hamiltonian systems, J. Dyn. Control Syst. 2 (1996) 299–319.
[10] V.N. Kolokoltsov, R.L. Schilling, A.E. Tyukov, Estimates for multiple stochastic integrals and stochastic Hamilton–Jacobi equations, Rev. Mat. Iberoam. 20 (2004) 333–380.
[11] P.-L. Lions, Generalized Solutions of Hamilton–Jacobi Equations, Res. Notes Math., vol. 69, Pitman, London, 1982.
[12] P.-L. Lions, P.E. Souganidis, Homogenization of "viscous" Hamilton–Jacobi equations in stationary ergodic media, Comm. Partial Differential Equations 30 (2005) 335–375.
[13] F. Rezakhanlou, J.E. Tarver, Homogenization for stochastic Hamilton–Jacobi equations, Arch. Ration. Mech. Anal. 151 (2000) 277–309.
[14] R.T. Rockafellar, Integrals which are convex functionals, Pacific J. Math. 24 (1968) 525–539.
