
BSDE Representation and Discretization

for Hamilton-Jacobi-Bellman PDE

Idris KHARROUBI

CEREMADE, CNRS, UMR 7534

Université Paris Dauphine

and CREST

March 31, 2014


Contents

1 Introduction 2

I BSDE representation for HJB equations 8

2 BSDE with nonpositive jumps 9
2.1 Formulation and assumptions 9
2.2 Existence and approximation by penalization 11
2.3 Dual representation 15

3 Nonlinear IPDE and Feynman-Kac formula 19
3.1 The Markovian framework 19
3.2 Viscosity property of the penalized BSDE 25
3.3 The non dependence of the function v in the variable a 29
3.4 Viscosity properties of the minimal solution to the constrained BSDE 34

II Discretization of fully nonlinear HJB equations via BSDEs with nonpositive jumps 39

4 Discretization of the nonpositive jump constraint 40
4.1 Discretely jump-constrained BSDE 40
4.2 Convergence of discretely jump-constrained BSDE 46
4.2.1 Convergence result 46
4.2.2 Rate of convergence 51

5 Approximation scheme for jump-constrained BSDE and stochastic control problem 59
5.1 The forward regime switching process 59
5.2 BSDE scheme 62
5.3 Error estimate for the discrete time scheme 63
5.4 Approximate optimal control 66


Chapter 1

Introduction

Let us consider the Hamilton-Jacobi-Bellman (HJB) equation in the form:

∂v/∂t + sup_{a∈A} [ b(x,a)·D_x v + ½ tr(σσᵀ(x,a) D²_x v) + f(x,a) ] = 0,  on [0,T) × R^d,    (1.0.1)

v(T, x) = g(x),  x ∈ R^d,

where A is a subset of Rq. It is well-known (see e.g. [38]) that such nonlinear PDE is the

dynamic programming equation associated to the stochastic control problem with value

function defined by:

v(t, x) := sup_α E[ ∫_t^T f(X^{t,x,α}_s, α_s) ds + g(X^{t,x,α}_T) ],    (1.0.2)

where Xt,x,α is the solution to the controlled diffusion:

dX^α_s = b(X^α_s, α_s) ds + σ(X^α_s, α_s) dW_s,

starting from x at t, and given a predictable control process α valued in A.

Our main goal is to provide a numerical scheme for the approximation of the nonlinear

HJB equation using Backward Stochastic Differential Equations (BSDEs). To this end one has first to build a probabilistic representation via BSDEs, namely the so-called nonlinear

Feynman-Kac formula, which involves a simulatable forward process. One can then hope

to use such a representation for deriving a probabilistic numerical scheme for the solution to the HJB equation, and hence for the stochastic control problem. Such issues have attracted a lot of

interest and generated an important literature over the recent years. Actually, there is a

crucial distinction between the case where the diffusion coefficient is controlled or not.

Consider first the case where σ(x) does not depend on a ∈ A, and assume that σσᵀ(x)

is of full rank. Denoting by θ(x, a) = σᵀ(x)(σσᵀ(x))−1b(x, a) a solution to σ(x)θ(x, a) =

b(x, a), we notice that the HJB equation reduces into a semi-linear PDE:

∂v/∂t + ½ tr(σσᵀ(x) D²_x v) + F(x, σᵀ D_x v) = 0,    (1.0.3)

where F (x, z) = supa∈A[f(x, a)+θ(x, a).z] is the θ-Fenchel-Legendre transform of f . In this

case, we know from the seminal works by Pardoux and Peng [33], [34], that the (viscosity)


solution v to the semi-linear PDE (1.0.3) is connected to the BSDE:

Y_t = g(X^0_T) + ∫_t^T F(X^0_s, Z_s) ds − ∫_t^T Z_s dW_s,   t ≤ T,    (1.0.4)

through the relation Y_t = v(t, X^0_t), with a forward diffusion process

dX^0_s = σ(X^0_s) dW_s.

This probabilistic representation leads to a probabilistic numerical scheme for the resolution

of (1.0.3) by discretization and simulation of the BSDE (1.0.4), see [9]. Alternatively,

when the function F (x, z) is of polynomial type on z, the semi-linear PDE (1.0.3) can be

numerically solved by a forward Monte-Carlo scheme relying on marked branching diffusion,

as recently pointed out in [22]. Moreover, as shown in [15], the solution to the BSDE (1.0.4)

admits a dual representation in terms of equivalent change of probability measures as:

Y_t = ess sup_α E^{P^α}[ ∫_t^T f(X^0_s, α_s) ds + g(X^0_T) | F_t ],    (1.0.5)

where for a control α, Pα is the equivalent probability measure to P under which

dX^0_s = b(X^0_s, α_s) ds + σ(X^0_s) dW^α_s,

with W^α a P^α-Brownian motion by Girsanov's theorem. In other words, the process X^0 has the same dynamics under P^α as the controlled process X^α under P, and the representation

(1.0.5) can be viewed as a weak formulation (see [14]) of the stochastic control problem

(1.0.2) in the case of uncontrolled diffusion coefficient.

The general case with controlled diffusion coefficient σ(x, a) associated to fully nonlinear

PDE is challenging and led to recent theoretical advances. Consider the motivating example

from the uncertain volatility model in finance, formulated here in dimension 1 for simplicity of notation:

dX^α_s = α_s dW_s,

where the control process α is valued in A = [a̲, ā] with 0 ≤ a̲ ≤ ā < ∞, and define the

value function of the stochastic control problem:

v(t, x) := sup_α E[ g(X^{t,x,α}_T) ],   (t, x) ∈ [0,T] × R.

The associated HJB equation takes the form:

∂v/∂t + G(D²_x v) = 0,  (t, x) ∈ [0,T) × R,   v(T, x) = g(x),  x ∈ R,    (1.0.6)

where G(M) = ½ sup_{a∈A}[a² M] = ½ (ā² M⁺ − a̲² M⁻). The unique (viscosity) solution to (1.0.6)

is represented in terms of the so-called G-Brownian motion B, and G-expectation EG,

concepts introduced in [36]:

v(t, x) = E^G[ g(x + B_{T−t}) ].
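To make the nonlinearity G in (1.0.6) concrete, here is a minimal numerical sketch (not part of the lecture notes, and unrelated to the probabilistic scheme developed later): it solves the one-dimensional equation (1.0.6) by a naive explicit finite-difference scheme, stepping backward in time from T. The payoff g, the volatility bounds and the grid parameters are illustrative assumptions.

```python
import numpy as np

# Sketch: solve dv/dt + G(D^2_x v) = 0, v(T,x) = g(x),
# with G(M) = 0.5*(a_max^2 * M^+ - a_min^2 * M^-), by explicit finite differences.
a_min, a_max = 0.1, 0.3            # hypothetical volatility bounds [a_min, a_max]
T, x_lo, x_hi = 1.0, -3.0, 3.0
nx, nt = 300, 20000                # time step must satisfy dt <= dx^2 / a_max^2 (explicit stability)
x = np.linspace(x_lo, x_hi, nx)
dx, dt = x[1] - x[0], T / nt
assert dt <= dx**2 / a_max**2, "explicit scheme unstable for this grid"

g = lambda xx: np.maximum(xx, 0.0) - np.maximum(xx - 1.0, 0.0)   # illustrative call-spread payoff
v = g(x)                                                          # terminal condition v(T, .) = g

for _ in range(nt):
    d2v = np.zeros_like(v)
    d2v[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2          # central second difference
    G = 0.5 * (a_max**2 * np.maximum(d2v, 0.0) - a_min**2 * np.maximum(-d2v, 0.0))
    v = v + dt * G                                                # backward step: v(t) = v(t+dt) + dt*G
    v[0], v[-1] = g(x[0]), g(x[-1])                               # crude boundary condition

print("v(0, 0.5) ≈", np.interp(0.5, x, v))
```

The call-spread payoff mixes convex and concave regions, so the maximizer in G switches between a̲ and ā with the sign of D²_x v; this is exactly what makes (1.0.6) fully nonlinear.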


Moreover, G-expectation is closely related to second order BSDE studied in [41], namely

the process Yt = v(t, Bt) satisfies a 2BSDE, which is formulated under a nondominated

family of singular probability measures given by the law of Xα under P. This gives a nice

theory and representation for nonlinear PDE, but it requires a non degeneracy assumption

on the diffusion coefficient, and does not cover the general HJB equation (i.e. with control on both drift and diffusion, arising for instance in portfolio optimization). On the other hand, it is

not clear how to simulate G-Brownian motion.

We provide here an alternative BSDE representation including general HJB equation,

formulated under a single probability measure (thus avoiding nondominated singular mea-

sures), and under which the forward process can be simulated. The idea, used in [25] for

quasi variational inequalities arising in impulse control problems, is the following. We intro-

duce a Poisson random measure µ(dt, da) on R+ ×A with finite intensity measure λ(da)dt

associated to the marked point process (τi, ζi)i, independent of W , and consider the pure

jump process (It)t equal to the mark ζi valued in A between two jump times τi and τi+1.

We next consider the forward regime switching diffusion process

dXs = b(Xs, Is)ds+ σ(Xs, Is)dWs,

and observe that the (uncontrolled) pair process (X, I) is Markov. Let us then consider the

BSDE with jumps w.r.t the Brownian-Poisson filtration F = FW,µ:

Y_t = g(X_T) + ∫_t^T f(X_s, I_s) ds − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),    (1.0.7)

where µ̃ is the compensated measure of µ. This linear BSDE is the Feynman-Kac formula

for the linear integro-partial differential equation (IPDE):

∂v/∂t + b(x,a)·D_x v + ½ tr(σσᵀ(x,a) D²_x v)    (1.0.8)
    + ∫_A ( v(t,x,a′) − v(t,x,a) ) λ(da′) + f(x,a) = 0,   (t, x, a) ∈ [0,T) × R^d × A,

v(T, x, a) = g(x),   (x, a) ∈ R^d × A,    (1.0.9)

through the relation: Yt = v(t,Xt, It). Now, in order to pass from the above linear IPDE

with the additional auxiliary variable a ∈ A to the nonlinear HJB PDE (1.0.1), we constrain the jump component of the BSDE (1.0.7) to be nonpositive, i.e.

Ut(a) ≤ 0, ∀(t, a). (1.0.10)

Then, since U_t(a) represents the jump of Y_t = v(t, X_t, I_t) induced by a jump of the random measure µ, i.e. of I, and assuming that v is continuous, the constraint (1.0.10) means that U_t(a) = v(t, X_t, a) − v(t, X_t, I_{t−}) ≤ 0 for all (t, a). This formally implies that v(t, x, a) should not depend on a ∈ A. Once we get the non-dependence of v on a, the equation (1.0.8) becomes a PDE on [0,T) × R^d with a parameter a ∈ A. By taking the supremum over a ∈ A in (1.0.8), we then obtain the nonlinear HJB equation (1.0.1).


Inspired by the above discussion, we now introduce the following general class of BSDE

with nonpositive jumps, which is a non Markovian extension of (1.0.7)-(1.0.10):

Y_t = ξ + ∫_t^T F(s, Y_s, Z_s, U_s) ds + K_T − K_t    (1.0.11)
    − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),   0 ≤ t ≤ T, a.s.

with

U_t(a) ≤ 0,   dP ⊗ dt ⊗ λ(da) a.e. on Ω × [0,T] × A.    (1.0.12)

The solution to this BSDE is a quadruple (Y,Z, U,K) where, besides the usual component

(Y,Z, U), the fourth component K is a predictable nondecreasing process, which makes

the A-constraint (1.0.12) feasible. We thus look at the minimal solution (Y, Z, U, K) in the sense that for any other solution (Ỹ, Z̃, Ũ, K̃) to (1.0.11)-(1.0.12), we must have Y ≤ Ỹ.

Through a penalization method, we construct the unique minimal solution as the limit

of a sequence (Y^n, Z^n, U^n, K^n)_n of Lipschitz BSDEs with jumps. In a Markovian framework

the general constrained BSDE takes the following form:

Y_t = g(X_T) + ∫_t^T f(X_s, I_s, Y_s, Z_s) ds + K_T − K_t    (1.0.13)
    − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),   0 ≤ t ≤ T,

with

U_t(a) ≤ 0,   dP ⊗ dt ⊗ λ(da) a.e. on Ω × [0,T] × A.    (1.0.14)

Its minimal solution is then proved to be the Feynman-Kac representation of the PDE

∂v/∂t + sup_{a∈A} [ b(·,a)·D_x v + ½ tr(σσᵀ(·,a) D²_x v) + f(x, a, v, σᵀ(x,a) D_x v) ] = 0

on [0,T) × R^d, through the relation: Y_t = v(t, X_t). This equation clearly extends the HJB equation (1.0.1) by incorporating the terms v and D_x v in the function f.

In the second part of the lectures, we use this representation to set an approximation of

solutions to HJB equations. Namely, we provide and analyze a discrete-time approximation

scheme for the minimal solution to (1.0.13)-(1.0.14), and thus an approximation scheme for

the HJB equation. In the non-constrained jump case, approximation schemes for BSDEs

have been studied in the papers [21], [8], which extended works in [9], [44] for BSDEs in

a Brownian framework. The issue is now to deal with the nonpositive jump constraint in

(1.0.13)-(1.0.14), and we propose a discrete time approximation scheme of the form:

Y^π_T = 𝒴^π_T = g(X^π_T),
Z^π_{t_k} = E[ Y^π_{t_{k+1}} (W_{t_{k+1}} − W_{t_k}) / (t_{k+1} − t_k) | F_{t_k} ],
𝒴^π_{t_k} = E[ Y^π_{t_{k+1}} | F_{t_k} ] + (t_{k+1} − t_k) f(X^π_{t_k}, I_{t_k}, 𝒴^π_{t_k}, Z^π_{t_k}),
Y^π_{t_k} = ess sup_{a∈A} E[ 𝒴^π_{t_k} | F_{t_k}, I_{t_k} = a ],   k = 0, …, n − 1,    (1.0.15)


where π = {t_0 = 0 < … < t_k < … < t_n = T} is a partition of the time interval [0, T],

with modulus |π|, and Xπ is the Euler scheme of X (notice that I is perfectly simulatable

once we know how to simulate the distribution λ(da)/∫A λ(da) of the jump marks). The

interpretation of this scheme is the following. The first three lines in (1.0.15) correspond to

the standard scheme (Yπ, Zπ) for a discretization of a BSDE with jumps (see [8]), where we

omit here the computation of the jump component. The last line in (1.0.15) for computing

the approximation Y π of the minimal solution Y corresponds precisely to the minimality

condition for the nonpositive jump constraint and should be understood as follows. By the

Markov property of the forward process (X, I), the solution (Y,Z,U) to the BSDE with

jumps (without constraint) is in the form Yt = ϑ(t,Xt, It) for some deterministic function

ϑ. Assuming that ϑ is a continuous function, the jump component of the BSDE, which is

induced by a jump of the forward component I, is equal to Ut(a) = ϑ(t,Xt, a)−ϑ(t,Xt, It−).

Therefore, the nonpositive jump constraint means that: ϑ(t, X_t, I_{t−}) ≥ ess sup_{a∈A} ϑ(t, X_t, a). The minimality condition is thus written as:

Y_t = v(t, X_t) = ess sup_{a∈A} ϑ(t, X_t, a) = ess sup_{a∈A} E[ Y_t | X_t, I_t = a ],

whose discrete time version is the last line in scheme (1.0.15). We mainly consider the

case where f(x, a, y) does not depend on z, and our aim is to analyze the discrete time

approximation error on Y , where we split the error between the positive and negative

parts:

Err^π_+(Y) := ( max_{k≤n−1} E[ (Y_{t_k} − Y^π_{t_k})_+² ] )^{1/2},   Err^π_−(Y) := ( max_{k≤n−1} E[ (Y_{t_k} − Y^π_{t_k})_−² ] )^{1/2}.

We do not study directly the error on Z, and instead focus on the approximation of

an optimal control for the HJB equation, which is more relevant in practice. It appears

that the maximization step in the scheme (1.0.15) provides a control in feedback form a(t_k, X^π_{t_k}), k ≤ n − 1, which approximates the optimal control with an estimated error bound. The analysis of the error on Y proceeds as follows. We first introduce the solution (Y^π, 𝒴^π, Z^π, U^π) of a discretely jump-constrained BSDE. This corresponds formally to BSDEs for which the nonpositive jump constraint operates only at a finite set of times, and

should be viewed as the analog of discretely reflected BSDEs defined in [1] and [7] in the

context of the approximation for reflected BSDEs. By combining BSDE methods and PDE

approach with comparison principles, and further with the shaking coefficients method of

Krylov [29] and Barles, Jacobsen [5], we prove the monotone convergence of this discretely

jump-constrained BSDE towards the minimal solution to the BSDE with nonpositive jump

constraint. We also obtain a convergence rate without any ellipticity condition on the

diffusion coefficient σ. We next focus on the approximation error between the discrete time

scheme in (1.0.15) and the discretely jump-constrained BSDE. The standard argument for

studying the rate of convergence of such an error consists in getting an estimate of the error at time t_k (between the discretely jump-constrained BSDE and the discrete time scheme) in terms of the same estimate at time t_{k+1}, and then concluding by induction together with classical estimates for the forward Euler scheme.

However, due to the supremum in the conditional expectation in the scheme (1.0.15) for passing from 𝒴^π to Y^π, such an argument does not work anymore. Indeed, taking the supre-

mum is a nonlinear operation, which violates the law of iterated conditional expectations.


Therefore, we cannot obtain directly the error at time tk as a function of that at time tk+1.

Instead, we consider the auxiliary error control at time t_k:

E^π_k(𝒴) := E[ ess sup_{a∈A} E_{t_1,a}[ … ess sup_{a∈A} E_{t_k,a}[ |𝒴^π_{t_k} − Y^π_{t_k}|² ] … ] ],

where E_{t_k,a}[·] denotes the conditional expectation E[· | F_{t_k}, I_{t_k} = a], and we are able to express E^π_k(𝒴) in terms of E^π_{k+1}(𝒴). We define similarly an error control E^π_k(X) for the forward Euler scheme, and prove that it converges to zero with a rate |π|. Proceeding by induction, we then obtain a rate of convergence |π| for E^π_k(𝒴), and consequently for E[|𝒴^π_{t_k} − Y^π_{t_k}|²]. This leads finally to a rate |π|^{1/2} for Err^π_−(Y), |π|^{1/10} for Err^π_+(Y), and so |π|^{1/10} for the global error Err^π(Y) = Err^π_+(Y) + Err^π_−(Y). In fact, as noticed in Remark 5.3.4, we believe that one can obtain a better rate, at least of the order |π|^{1/6}. Anyway, our result improves the convergence rate of the mixed Monte-Carlo finite difference scheme proposed in [17], where the authors obtained a rate |π|^{1/4} on one side and |π|^{1/10} on the other side under a nondegeneracy condition.

We conclude this introduction by pointing out that the above discrete time scheme is not directly implementable in practice, as it requires the estimation and computation of the conditional expectations together with the supremum. Actually, simulation-regression

methods on basis functions defined on Rd × A appear to be very efficient, and provide

approximate optimal controls in feedback forms via the maximization operation in the last

step of the scheme (1.0.15). We refer to [24] for analysis and illustrations with several

numerical tests arising in superreplication of options under uncertain volatility and correla-

tion. Notice that since it relies on the simulation of the forward process (X, I), our scheme does not suffer from the curse of dimensionality encountered in finite difference schemes or controlled Markov chain methods (see [30], [6]), and takes advantage of the high-dimensional

properties of Monte-Carlo methods.
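Although the notes defer the implementation to [24], the following minimal sketch indicates how the scheme (1.0.15) can be implemented by simulation-regression, under simplifying assumptions: a one-dimensional toy model whose coefficients b, σ, f, g and jump intensity are illustrative (not taken from the notes or from [24]), a generator f = f(x, a) independent of (y, z), a finite grid replacing the control set A, and a small polynomial regression basis in (x, a).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative model (assumptions): 1-d controlled drift/volatility ---
b = lambda x, a: a * x                    # hypothetical drift b(x,a)
sig = lambda x, a: 0.2 + 0.3 * a          # hypothetical volatility sigma(x,a)
f = lambda x, a: -0.5 * a**2              # running gain f(x,a), independent of (y,z)
g = lambda x: np.maximum(1.0 - x, 0.0)    # terminal gain
A = np.linspace(0.0, 1.0, 11)             # finite grid approximating the compact set A
lam = 2.0                                 # total intensity lambda(A); marks drawn uniformly on A

T, n, M = 1.0, 20, 20000                  # horizon, time steps, Monte-Carlo paths
dt = T / n

# --- forward simulation of (X^pi, I): Euler scheme for X, regime-switching pure jump I ---
X = np.empty((n + 1, M)); I = np.empty((n + 1, M))
X[0] = 1.0; I[0] = rng.choice(A, size=M)
for k in range(n):
    jump = rng.random(M) < 1.0 - np.exp(-lam * dt)        # at most one regime switch per step
    I[k + 1] = np.where(jump, rng.choice(A, size=M), I[k])
    dW = rng.normal(0.0, np.sqrt(dt), M)
    X[k + 1] = X[k] + b(X[k], I[k]) * dt + sig(X[k], I[k]) * dW

def basis(x, a):  # small polynomial basis on (x, a)
    return np.column_stack([np.ones_like(x), x, x**2, a, a * x, a**2])

# --- backward induction of (1.0.15), conditional expectations via least-squares regression ---
Y = g(X[n])
for k in range(n - 1, -1, -1):
    beta, *_ = np.linalg.lstsq(basis(X[k], I[k]), Y, rcond=None)   # e_k(x,a) ~ E[Y_{k+1} | X_k=x, I_k=a]
    cand = np.stack([basis(X[k], np.full(M, a)) @ beta + dt * f(X[k], a) for a in A])
    Y = cand.max(axis=0)                 # last line of (1.0.15): supremum over the control grid
    a_star = A[cand.argmax(axis=0)]      # approximate feedback control a(t_k, X^pi_{t_k})

print("estimated value at time 0:", Y.mean())
```

The final maximization step also returns the feedback control a(t_k, X^π_{t_k}) mentioned above; in [24] the same idea is carried out with problem-specific basis functions and a careful choice of the simulation measure.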


Part I

BSDE representation for HJB

equations


Chapter 2

BSDE with nonpositive jumps

We introduce in this chapter a new class of BSDEs driven by a Brownian motion and a

Poisson random measure, where the jump component of the solution is subject to a constraint.

2.1 Formulation and assumptions

Let (Ω,F ,P) be a complete probability space on which are defined a d-dimensional Brownian

motion W = (Wt)t≥0, and an independent integer valued Poisson random measure µ on

R+ × A, where A is a Borelian subset of Rq, endowed with its Borel σ-field B(A). We

assume that the random measure µ has the intensity measure λ(da)dt for some σ-finite

measure λ on (A,B(A)) satisfying

λ(A) := ∫_A λ(da) < ∞.

We set µ̃(dt, da) = µ(dt, da) − λ(da)dt, the compensated martingale measure associated to

µ, and denote by F = (Ft)t≥0 the completion of the natural filtration generated by W and

µ.

We fix a finite time duration T <∞ and we denote by P the σ-algebra of F-predictable

subsets of Ω× [0, T ]. Let us introduce some additional notations. We denote by

• S² the set of real-valued càdlàg adapted processes Y = (Y_t)_{0≤t≤T} such that ‖Y‖_{S²} := ( E[ sup_{0≤t≤T} |Y_t|² ] )^{1/2} < ∞.

• L^p(0,T), p ≥ 1, the set of real-valued adapted processes (φ_t)_{0≤t≤T} such that E[ ∫_0^T |φ_t|^p dt ] < ∞.

• L^p(W), p ≥ 1, the set of R^d-valued P-measurable processes Z = (Z_t)_{0≤t≤T} such that ‖Z‖_{L^p(W)} := ( E[ ∫_0^T |Z_t|^p dt ] )^{1/p} < ∞.

• L^p(µ), p ≥ 1, the set of P ⊗ B(A)-measurable maps U : Ω × [0,T] × A → R such that ‖U‖_{L^p(µ)} := ( E[ ∫_0^T ( ∫_A |U_t(a)|² λ(da) )^{p/2} dt ] )^{1/p} < ∞.

• L²(λ) the set of B(A)-measurable maps u : A → R such that |u|_{L²(λ)} := ( ∫_A |u(a)|² λ(da) )^{1/2} < ∞.

• K2 the closed subset of S2 consisting of nondecreasing processes K = (Kt)0≤t≤T with

K0 = 0.

We are then given two objects:

1. A terminal condition ξ, which is an FT -measurable random variable.

2. A generator function F : Ω × [0, T ] × R × Rd × L2(λ) → R, which is a P ⊗ B(R) ⊗B(Rd)⊗ B(L2(λ))-measurable map.

We shall impose the following assumption on these objects:

(H0)

(i) The random variable ξ and the generator function F satisfy the square integrability

condition:

E[ |ξ|² ] + E[ ∫_0^T |F(t,0,0,0)|² dt ] < ∞.

(ii) The generator function F satisfies the uniform Lipschitz condition: there exists a

constant CF such that

|F(t,y,z,u) − F(t,y′,z′,u′)| ≤ C_F ( |y − y′| + |z − z′| + |u − u′|_{L²(λ)} ),

for all t ∈ [0,T], y, y′ ∈ R, z, z′ ∈ R^d and u, u′ ∈ L²(λ).

(iii) The generator function F satisfies the monotonicity condition:

F(t,y,z,u) − F(t,y,z,u′) ≤ ∫_A γ(t,a,y,z,u,u′) ( u(a) − u′(a) ) λ(da),

for all t ∈ [0,T], z ∈ R^d, y ∈ R and u, u′ ∈ L²(λ), where γ : [0,T] × Ω × A × R × R^d × L²(λ) × L²(λ) → R is a P ⊗ B(A) ⊗ B(R) ⊗ B(R^d) ⊗ B(L²(λ)) ⊗ B(L²(λ))-measurable map satisfying: C₁ ≤ γ(t,a,y,z,u,u′) ≤ C₂, for all a ∈ A, with two constants −1 < C₁ ≤ 0 ≤ C₂.

Let us now introduce our class of Backward Stochastic Differential Equations (BSDE)

with partially nonpositive jumps written in the form:

Y_t = ξ + ∫_t^T F(s, Y_s, Z_s, U_s) ds + K_T − K_t    (2.1.1)
    − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),   0 ≤ t ≤ T, a.s.

with

U_t(a) ≤ 0,   dP ⊗ dt ⊗ λ(da) a.e. on Ω × [0,T] × A.    (2.1.2)


Definition 2.1.1 A minimal solution to the BSDE with terminal data/generator (ξ, F) and A-nonpositive jumps is a quadruple of processes (Y, Z, U, K) ∈ S² × L²(W) × L²(µ) × K² satisfying (2.1.1)-(2.1.2) such that for any other quadruple (Ỹ, Z̃, Ũ, K̃) ∈ S² × L²(W) × L²(µ) × K² satisfying (2.1.1)-(2.1.2), we have

Y_t ≤ Ỹ_t,   0 ≤ t ≤ T, a.s.

Remark 2.1.1 Notice that when it exists, there is a unique minimal solution. Indeed, by

definition, we clearly have uniqueness of the component Y . The uniqueness of Z follows by

identifying the Brownian parts and the finite variation parts, and then the uniqueness of

(U,K) is obtained by identifying the predictable parts and by recalling that the jumps of µ

are inaccessible. By misuse of language, we say sometimes that Y (instead of the quadruple

(Y,Z, U,K)) is the minimal solution to (2.1.1)-(2.1.2). 2

In order to ensure that the problem of getting a minimal solution is well-posed, we shall

need to assume:

(H1) There exists a quadruple (Ỹ, Z̃, Ũ, K̃) ∈ S² × L²(W) × L²(µ) × K² satisfying (2.1.1)-(2.1.2).

We shall see later in Lemma 3.1.6 how such condition is satisfied in a Markovian frame-

work.

2.2 Existence and approximation by penalization

In this section, we prove the existence of a minimal solution to (2.1.1)-(2.1.2), based on

approximation via penalization. For each n ∈ N, we introduce the penalized BSDE with

jumps

Y^n_t = ξ + ∫_t^T F(s, Y^n_s, Z^n_s, U^n_s) ds + K^n_T − K^n_t    (2.2.3)
    − ∫_t^T Z^n_s dW_s − ∫_t^T ∫_A U^n_s(a) µ̃(ds, da),   0 ≤ t ≤ T,

where Kn is the nondecreasing process in K2 defined by

K^n_t = n ∫_0^t ∫_A [U^n_s(a)]_+ λ(da) ds,   0 ≤ t ≤ T.

Here [u]+ = max(u, 0) denotes the positive part of u. Notice that this penalized BSDE can

be rewritten as

Y^n_t = ξ + ∫_t^T F_n(s, Y^n_s, Z^n_s, U^n_s) ds − ∫_t^T Z^n_s dW_s − ∫_t^T ∫_A U^n_s(a) µ̃(ds, da),   0 ≤ t ≤ T,

where the generator Fn is defined by

F_n(t, y, z, u) = F(t, y, z, u) + n ∫_A [u(a)]_+ λ(da),


for all (t, y, z, u) ∈ [0,T] × R × R^d × L²(λ). Under (H0)(ii)-(iii) and since λ(A) < ∞, we see that F_n is Lipschitz continuous w.r.t. (y, z, u) for all n ∈ N. Therefore, we obtain from Lemma 2.4 in [43] that under (H0), the BSDE (2.2.3) admits a unique solution (Y^n, Z^n, U^n) ∈ S² × L²(W) × L²(µ) for any n ∈ N.

Lemma 2.2.1 Let Assumption (H0) hold. The sequence (Y^n)_n is nondecreasing, i.e. Y^n_t ≤ Y^{n+1}_t for all t ∈ [0,T] and all n ∈ N.

Proof. Fix n ∈ N, and observe that

F_n(t, y, z, u) ≤ F_{n+1}(t, y, z, u),

for all (t, y, z, u) ∈ [0,T] × R × R^d × L²(λ). Under Assumption (H0), we can apply the comparison Theorem 2.5 in [40], which shows that Y^n_t ≤ Y^{n+1}_t, 0 ≤ t ≤ T, a.s. □

The next result shows that the sequence (Y n)n is upper-bounded by any solution to the

constrained BSDE.

Lemma 2.2.2 Let Assumption (H0) hold. For any quadruple (Ỹ, Z̃, Ũ, K̃) ∈ S² × L²(W) × L²(µ) × K² satisfying (2.1.1)-(2.1.2), we have

Y^n_t ≤ Ỹ_t,   0 ≤ t ≤ T,  n ∈ N.    (2.2.4)

Proof. Fix n ∈ N, and consider a quadruple (Ỹ, Z̃, Ũ, K̃) ∈ S² × L²(W) × L²(µ) × K² solution to (2.1.1)-(2.1.2). Then Ũ clearly satisfies ∫_0^t ∫_A [Ũ_s(a)]_+ λ(da) ds = 0 for all t ∈ [0,T], and so (Ỹ, Z̃, Ũ, K̃) is a supersolution to the penalized BSDE (2.2.3), i.e.:

Ỹ_t = ξ + ∫_t^T F_n(s, Ỹ_s, Z̃_s, Ũ_s) ds + K̃_T − K̃_t
    − ∫_t^T Z̃_s dW_s − ∫_t^T ∫_A Ũ_s(a) µ̃(ds, da),   0 ≤ t ≤ T.

By a slight adaptation of the comparison Theorem 2.5 in [40] under (H0), we obtain the required inequality: Y^n_t ≤ Ỹ_t, 0 ≤ t ≤ T. □

We now establish a priori uniform estimates on the sequence (Y n, Zn, Un,Kn)n.

Lemma 2.2.3 Under (H0) and (H1), there exists some constant C depending only on T

and the monotonicity condition of F in (H0)(iii) such that

‖Y^n‖²_{S²} + ‖Z^n‖²_{L²(W)} + ‖U^n‖²_{L²(µ)} + ‖K^n‖²_{S²}
    ≤ C ( E[|ξ|²] + E[ ∫_0^T |F(t,0,0,0)|² dt ] + E[ sup_{0≤t≤T} |Ỹ_t|² ] ),   ∀ n ∈ N.    (2.2.5)

Proof. In what follows we shall denote by C > 0 a generic positive constant depending

only on T , and the linear growth condition of F in (H0)(ii), which may vary from line to


line. By applying Itô's formula to |Y^n_t|², and observing that K^n is continuous and ΔY^n_t = ∫_A U^n_t(a) µ({t}, da), we have

E|ξ|² = E|Y^n_t|² − 2E ∫_t^T Y^n_s F(s, Y^n_s, Z^n_s, U^n_s) ds − 2E ∫_t^T Y^n_s dK^n_s + E ∫_t^T |Z^n_s|² ds
    + E ∫_t^T ∫_A ( |Y^n_{s−} + U^n_s(a)|² − |Y^n_{s−}|² − 2Y^n_{s−} U^n_s(a) ) µ(da, ds)
  = E|Y^n_t|² + E ∫_t^T |Z^n_s|² ds + E ∫_t^T ∫_A |U^n_s(a)|² λ(da) ds
    − 2E ∫_t^T Y^n_s F(s, Y^n_s, Z^n_s, U^n_s) ds − 2E ∫_t^T Y^n_s dK^n_s,   0 ≤ t ≤ T.

From (H0)(iii), the inequality Y^n_t ≤ Ỹ_t of Lemma 2.2.2 under (H1), and the inequality 2ab ≤ (1/α)a² + αb² for any constant α > 0, we have:

E|Y^n_t|² + E ∫_t^T |Z^n_s|² ds + E ∫_t^T ∫_A |U^n_s(a)|² λ(da) ds
    ≤ E|ξ|² + C E ∫_t^T |Y^n_s| ( |F(s,0,0,0)| + |Y^n_s| + |Z^n_s| + |U^n_s|_{L²(λ)} ) ds
    + (1/α) E[ sup_{s∈[0,T]} |Ỹ_s|² ] + α E|K^n_T − K^n_t|².

Using again the inequality ab ≤ a²/2 + b²/2, and (H0)(i), we get

E|Y^n_t|² + ½ E ∫_t^T |Z^n_s|² ds + ½ E ∫_t^T ∫_A |U^n_s(a)|² λ(da) ds    (2.2.6)
    ≤ C E ∫_t^T |Y^n_s|² ds + E|ξ|² + ½ E ∫_0^T |F(s,0,0,0)|² ds + (1/α) E[ sup_{t∈[0,T]} |Ỹ_t|² ] + α E|K^n_T − K^n_t|².

Now, from the relation (2.2.3), we have:

K^n_T − K^n_t = Y^n_t − ξ − ∫_t^T F(s, Y^n_s, Z^n_s, U^n_s) ds + ∫_t^T Z^n_s dW_s + ∫_t^T ∫_A U^n_s(a) µ̃(ds, da).

Thus, there exists some positive constant C1 depending only on the linear growth condition

of F in (H0)(ii) such that

E|K^n_T − K^n_t|² ≤ C₁ ( E|ξ|² + E ∫_0^T |F(s,0,0,0)|² ds + E|Y^n_t|²
    + E ∫_t^T ( |Y^n_s|² + |Z^n_s|² + |U^n_s|²_{L²(λ)} ) ds ),   0 ≤ t ≤ T.    (2.2.7)

Hence, by choosing α > 0 s.t. C₁ α ≤ 1/4, and plugging into (2.2.6), we get

(3/4) E|Y^n_t|² + (1/4) E ∫_t^T |Z^n_s|² ds + (1/4) E ∫_t^T ∫_A |U^n_s(a)|² λ(da) ds
    ≤ C E ∫_t^T |Y^n_s|² ds + (5/4) E|ξ|² + (1/4) E ∫_0^T |F(s,0,0,0)|² ds + (1/α) E[ sup_{s∈[0,T]} |Ỹ_s|² ],   0 ≤ t ≤ T.


Thus an application of Gronwall's lemma to t ↦ E|Y^n_t|² yields:

sup_{0≤t≤T} E|Y^n_t|² + E ∫_0^T |Z^n_t|² dt + E ∫_0^T ∫_A |U^n_t(a)|² λ(da) dt
    ≤ C ( E|ξ|² + E ∫_0^T |F(t,0,0,0)|² dt + E[ sup_{t∈[0,T]} |Ỹ_t|² ] ),    (2.2.8)

which gives the required uniform estimates (2.2.5) for (Z^n, U^n)_n, and also for (K^n)_n by (2.2.7).

Finally, by writing from (2.2.3) that

sup_{0≤t≤T} |Y^n_t| ≤ |ξ| + ∫_0^T |F(t, Y^n_t, Z^n_t, U^n_t)| dt + K^n_T
    + sup_{0≤t≤T} | ∫_0^t Z^n_s dW_s | + sup_{0≤t≤T} | ∫_0^t ∫_A U^n_s(a) µ̃(ds, da) |,

we obtain the required uniform estimate (2.2.5) for (Y^n)_n by the Burkholder-Davis-Gundy inequality, the linear growth condition in (H0)(ii), and the uniform estimates for (Z^n, U^n, K^n)_n. □

We can now state the main result of this paragraph.

Theorem 2.2.1 Under (H0) and (H1), there exists a unique minimal solution (Y, Z, U, K) ∈ S² × L²(W) × L²(µ) × K², with K predictable, to (2.1.1)-(2.1.2). Y is the increasing limit of (Y^n)_n, the convergence holding also in L²(0,T), K_t is the weak limit of (K^n_t)_n in L²(Ω, F_t, P) for all t ∈ [0,T], and for any p ∈ [1,2),

‖Z^n − Z‖_{L^p(W)} + ‖U^n − U‖_{L^p(µ)} −→ 0,

as n goes to infinity.

Proof. By the Lemmata 2.2.1 and 2.2.2, (Y n)n converges increasingly to some adapted

process Y , satisfying: ‖Y ‖S2 < ∞ by the uniform estimate for (Y n)n in Lemma 2.2.3 and

Fatou's lemma. Moreover, by the dominated convergence theorem, the convergence of (Y^n)_n to Y also holds in L²(0,T). Next, by the uniform estimates for (Z^n, U^n, K^n)_n in Lemma

2.2.3, we can apply the monotonic convergence Theorem 3.1 in [16], which extends to the

jump case the monotonic convergence theorem of Peng [35] for BSDE. This provides the

existence of (Z,U) ∈ L2(W)×L2(µ), and K predictable, nondecreasing with E[K2T ] < ∞,

such that the sequence (Zn, Un,Kn)n converges in the sense of Theorem 2.2.1 to (Z,U,K)

satisfying:

Y_t = ξ + ∫_t^T F(s, Y_s, Z_s, U_s) ds + K_T − K_t
    − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),   0 ≤ t ≤ T.

Thus, the process Y is the difference of a càdlàg process and the nondecreasing process K, and by Lemma 2.2 in [35], this implies that Y and K are also càdlàg, hence respectively


in S² and K². Moreover, from the strong convergence in L¹(µ) of (U^n)_n to U and since λ(A) < ∞, we have

E ∫_0^T ∫_A [U^n_s(a)]_+ λ(da) ds −→ E ∫_0^T ∫_A [U_s(a)]_+ λ(da) ds,

as n goes to infinity. Since K^n_T = n ∫_0^T ∫_A [U^n_s(a)]_+ λ(da) ds is bounded in L²(Ω, F_T, P), this implies

E ∫_0^T ∫_A [U_s(a)]_+ λ(da) ds = 0,

which means that the A-nonpositive constraint (2.1.2) is satisfied. Hence, (Y, Z,K,U) is a

solution to the constrained BSDE (2.1.1)-(2.1.2), and by Lemma 2.2.2, Y = limY n is the

minimal solution. Finally, the uniqueness of the solution (Y,Z, U,K) is given by Remark

2.1.1. 2

2.3 Dual representation

In this section, we consider the case where the generator function F (t, ω) does not depend

on y, z, u. Our main goal is to provide a dual representation of the minimal solution to the

BSDE with nonpositive jumps in terms of a family of equivalent probability measures.

Let V be the set of essentially bounded P⊗B(A)-measurable processes valued in (0,∞),

and consider for any ν ∈ V the Doléans-Dade exponential local martingale

L^ν_t := E( ∫_0^· ∫_A (ν_s(a) − 1) µ̃(ds, da) )_t
     = exp( ∫_0^t ∫_A ln ν_s(a) µ(ds, da) − ∫_0^t ∫_A (ν_s(a) − 1) λ(da) ds ),   0 ≤ t ≤ T.    (2.3.9)

When Lν is a true martingale, i.e. E[LνT ] = 1, it defines a probability measure Pν equivalent

to P on (Ω,FT ) with Radon-Nikodym density:

dP^ν / dP |_{F_t} = L^ν_t,   0 ≤ t ≤ T,    (2.3.10)

and we denote by Eν the expectation operator under Pν . Notice that W remains a Brownian

motion under Pν , and the effect of the probability measure Pν , by Girsanov’s Theorem, is

to change the compensator λ(da)dt of µ under P to νt(a)λ(da)dt under Pν . We denote by

µ̃^ν(dt, da) = µ(dt, da) − ν_t(a)λ(da)dt the compensated martingale measure of µ under P^ν.

We then introduce the subset Vn of V as the elements ν ∈ V essentially bounded by

n+ 1, for n ∈ N.
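As a quick sanity check of the explicit form (2.3.9) (a minimal sketch, not part of the notes), take a constant multiplier ν_t(a) ≡ c: then L^ν_T depends on µ only through the total number of jumps N_T on [0,T], and a Monte-Carlo average of L^ν_T should be close to 1, consistently with the martingale property stated in Lemma 2.3.4 below.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions): total intensity lambda(A), horizon, constant nu, sample size.
lam_bar, T, c, M = 3.0, 1.0, 2.5, 1_000_000

# For nu_t(a) = c, the stochastic exponential (2.3.9) reduces to
#   L^nu_T = c**N_T * exp(-(c - 1) * lambda(A) * T),  with N_T the number of jumps of mu on [0, T].
N_T = rng.poisson(lam_bar * T, size=M)
L_T = c ** N_T * np.exp(-(c - 1.0) * lam_bar * T)

print("Monte-Carlo estimate of E[L^nu_T]:", L_T.mean())   # should be close to 1
```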

Lemma 2.3.4 For any ν ∈ V, Lν is a uniformly integrable martingale, and LνT is square

integrable.


Proof. Several sufficient criteria for L^ν to be a uniformly integrable martingale are known. We refer for example to the recent paper [39], which shows that if

S^ν_T := exp( ∫_0^T ∫_A |ν_t(a) − 1|² λ(da) dt )

is integrable, then L^ν is uniformly integrable. By definition of V, we see that for ν ∈ V, S^ν_T is essentially bounded since ν is essentially bounded and λ(A) < ∞. Moreover, from the explicit form (2.3.9) of L^ν, we have |L^ν_T|² = L^{ν²}_T S^ν_T, and so E|L^ν_T|² ≤ ‖S^ν_T‖_∞. □

We can then associate to each ν ∈ V the probability measure Pν through (2.3.10). We

first provide a dual representation of the penalized BSDEs in terms of such Pν . To this

end, we need the following Lemma.

Lemma 2.3.5 Let φ ∈ L²(W) and ψ ∈ L²(µ). Then for every ν ∈ V, the processes ∫_0^· φ_t dW_t and ∫_0^· ∫_A ψ_t(a) µ̃^ν(dt, da) are P^ν-martingales.

Proof. Fix φ ∈ L²(W) and ν ∈ V, and denote by M^φ the process ∫_0^· φ_t dW_t. Since W remains a P^ν-Brownian motion, we know that M^φ is a P^ν-local martingale. From the Burkholder-Davis-Gundy and Cauchy-Schwarz inequalities, we have

E^ν[ sup_{t∈[0,T]} |M^φ_t| ] ≤ C E^ν[ √⟨M^φ⟩_T ] = C E[ L^ν_T √( ∫_0^T |φ_t|² dt ) ]
    ≤ C √( E[ |L^ν_T|² ] ) √( E[ ∫_0^T |φ_t|² dt ] ) < ∞,

since L^ν_T is square integrable by Lemma 2.3.4, and φ ∈ L²(W). This implies that M^φ is P^ν-uniformly integrable, and hence a true P^ν-martingale. The proof for ∫_0^· ∫_A ψ_t(a) µ̃^ν(dt, da) follows exactly the same lines and is therefore omitted. □

Proposition 2.3.1 For all n ∈ N, the solution to the penalized BSDE (2.2.3) is explicitly represented as

Y^n_t = ess sup_{ν∈V_n} E^ν[ ξ + ∫_t^T F(s) ds | F_t ],   0 ≤ t ≤ T.    (2.3.11)

Proof. Fix n ∈ N. For any ν ∈ V_n, and by introducing the compensated martingale measure µ̃^ν(dt, da) = µ̃(dt, da) − (ν_t(a) − 1)λ(da)dt under P^ν, we see that the solution (Y^n, Z^n, U^n) to the BSDE (2.2.3) satisfies:

Y^n_t = ξ + ∫_t^T [ F(s) + ∫_A ( n[U^n_s(a)]_+ − (ν_s(a) − 1) U^n_s(a) ) λ(da) ] ds    (2.3.12)
    − ∫_t^T Z^n_s dW_s − ∫_t^T ∫_A U^n_s(a) µ̃^ν(ds, da).

By taking expectation in (2.3.12) under P^ν (∼ P), we then get from Lemma 2.3.5:

Y^n_t = E^ν[ ξ + ∫_t^T ( F(s) + ∫_A ( n[U^n_s(a)]_+ − (ν_s(a) − 1) U^n_s(a) ) λ(da) ) ds | F_t ].    (2.3.13)


Now, observe that for any ν ∈ V_n, hence valued in [1, n+1], we have

n[U^n_t(a)]_+ − (ν_t(a) − 1) U^n_t(a) ≥ 0,   dP ⊗ dt ⊗ λ(da) a.e.,

which yields by (2.3.13):

Y^n_t ≥ ess sup_{ν∈V_n} E^ν[ ξ + ∫_t^T F(s) ds | F_t ].    (2.3.14)

On the other hand, let us consider the process ν* ∈ V_n defined by

ν*_t(a) = 1_{U^n_t(a)≤0} + (n+1) 1_{U^n_t(a)>0},   0 ≤ t ≤ T,  a ∈ A.

By construction, we clearly have

n[U^n_t(a)]_+ − (ν*_t(a) − 1) U^n_t(a) = 0,   for all 0 ≤ t ≤ T,  a ∈ A,

and thus for this choice of ν = ν* in (2.3.13):

Y^n_t = E^{ν*}[ ξ + ∫_t^T F(s) ds | F_t ].

Together with (2.3.14), this proves the required representation of Y^n. □

Remark 2.3.2 The arguments in the proof of Proposition 2.3.1 show that the relation (2.3.11) holds for a general generator function F depending on (y, z, u), i.e.

Y^n_t = ess sup_{ν∈V_n} E^ν[ ξ + ∫_t^T F(s, Y^n_s, Z^n_s, U^n_s) ds | F_t ],

which is in this case an implicit relation for Y^n. Moreover, the essential supremum in this dual representation is attained for some ν*, which takes the extreme values 1 or n+1 depending on the sign of U^n, i.e. it is of bang-bang form. □

Let us then focus on the limiting behavior of the above dual representation for Y n when

n goes to infinity.

Theorem 2.3.2 Under (H1), the minimal solution to (2.1.1)-(2.1.2) is explicitly represented as

Y_t = ess sup_{ν∈V} E^ν[ ξ + ∫_t^T F(s) ds | F_t ],   0 ≤ t ≤ T.    (2.3.15)

Proof. Let (Y, Z, U, K) be the minimal solution to (2.1.1)-(2.1.2), and let us denote by Ỹ the process defined by the r.h.s. of (2.3.15). Since V_n ⊂ V, it is clear from the representation (2.3.11) that Y^n_t ≤ Ỹ_t for all n. Recalling from Theorem 2.2.1 that Y is the pointwise limit of Y^n, we deduce that Y_t = lim_{n→∞} Y^n_t ≤ Ỹ_t, 0 ≤ t ≤ T.


Conversely, for any ν ∈ V, let us consider the compensated martingale measure µ̃^ν(dt, da) = µ̃(dt, da) − (ν_t(a) − 1)λ(da)dt under P^ν, and observe that (Y, Z, U, K) satisfies:

Y_t = ξ + ∫_t^T [ F(s) − ∫_A (ν_s(a) − 1) U_s(a) λ(da) ] ds + K_T − K_t    (2.3.16)
    − ∫_t^T Z_s dW_s − ∫_t^T ∫_A U_s(a) µ̃^ν(ds, da).

Thus, by taking expectation in (2.3.16) under P^ν from Lemma 2.3.5, and recalling that K is nondecreasing, we have:

Y_t ≥ E^ν[ ξ + ∫_t^T ( F(s) − ∫_A (ν_s(a) − 1) U_s(a) λ(da) ) ds | F_t ]
    ≥ E^ν[ ξ + ∫_t^T F(s) ds | F_t ],

since ν is valued in [1,∞), and U satisfies the nonpositive constraint (2.1.2). Since ν is arbitrary in V, this proves the inequality Y_t ≥ Ỹ_t, and finally the required relation Y = Ỹ. □


Chapter 3

Nonlinear IPDE and Feynman-Kac

formula

In this chapter, we shall show how the minimal solution to our BSDE class with partially nonpositive jumps actually provides a new probabilistic representation (or Feynman-Kac formula) for fully nonlinear integro-partial differential equations (IPDE) of Hamilton-Jacobi-Bellman (HJB) type, when dealing with a suitable Markovian framework.

3.1 The Markovian framework

We first assume that

(HA) A is compact, its interior Å is connected, and A = Adh(Å), the closure of its interior. Moreover, ∫_A λ(da) < ∞.

We also assume that

(Hλ)

(i) The measure λ supports the whole set Å: for any a ∈ Å and any open neighborhood O of a in R^q, we have λ(O ∩ Å) > 0.

(ii) The boundary of A, ∂A = A \ Å, is negligible w.r.t. λ, i.e. λ(∂A) = 0.

Given some measurable functions b : Rd × Rq → Rd, σ : Rd × Rq → Rd×d and β :

Rd × Rq × L → Rd, we introduce the forward Markov regime-switching jump-diffusion

process (X, I) governed by:

dX_s = b(X_s, I_s) ds + σ(X_s, I_s) dW_s,    (3.1.1)
dI_s = ∫_A (a − I_{s−}) µ(ds, da).    (3.1.2)


In other words, I is the pure jump process valued in A associated to the Poisson random measure µ, which changes the coefficients of the diffusion process X. We make the usual

assumptions on the forward jump-diffusion coefficients:

(HFC) There exists a constant C such that

|b(x,a) − b(x′,a′)| + |σ(x,a) − σ(x′,a′)| ≤ C ( |x − x′| + |a − a′| ),

for all x, x′ ∈ Rd and a, a′ ∈ Rq.

Remark 3.1.3 We do not make any ellipticity assumption on σ. In particular, some lines

and columns of σ may be equal to zero, and so there is no loss of generality by considering

that the dimension of X and W are equal. We can also make the coefficients b, σ and β

depend on time with the following standard procedure: we introduce the time variable as

a state component Θt = t, and consider the forward Markov system:

dX_s = b(X_s, Θ_s, I_s) ds + σ(X_s, Θ_s, I_s) dW_s,
dΘ_s = ds,
dI_s = ∫_A (a − I_{s−}) µ(ds, da),

which is of the form given above, but with an enlarged state (X,Θ, I) (with degenerate

noise), and with the resulting assumptions on b(x, θ, a) and σ(x, θ, a). 2

Under these conditions, the existence and uniqueness of a solution (X^{t,x,a}_s, I^{t,a}_s)_{t≤s≤T} to (3.1.1)-(3.1.2) starting from (x, a) ∈ R^d × R^q at time s = t ∈ [0,T] is well-known, and we have the standard estimate: for all p ≥ 2, there exists some positive constant C_p s.t.

E[ sup_{t≤s≤T} ( |X^{t,x,a}_s|^p + |I^{t,a}_s|^p ) ] ≤ C_p ( 1 + |x|^p + |a|^p ),    (3.1.3)

for all (t, x, a) ∈ [0,T] × R^d × R^q.

In this Markovian framework, the terminal data and generator of our class of BSDE are

given by two continuous functions g: Rd × Rq → R and f : Rd × Rq × R × Rd → R. We

make the following assumptions on the BSDE coefficients:

(HBC) There exists some constant C s.t.

|f(x,a,y,z) − f(x′,a′,y′,z′)| ≤ C ( |x − x′| + |a − a′| + |y − y′| + |z − z′| ),

for all x, x′ ∈ R^d, y, y′ ∈ R, z, z′ ∈ R^d and a, a′ ∈ R^q.

In this framework, the BSDE problem (2.1.1)-(2.1.2) takes the following form: find the

minimal solution (Y,Z, U,K) ∈ S2 × L2(W)× L2(µ)×K2 to

Y_t = g(X_T, I_T) + ∫_t^T f(X_s, I_s, Y_s, Z_s) ds + K_T − K_t
    − ∫_t^T Z_s · dW_s − ∫_t^T ∫_A U_s(a) µ̃(ds, da),    (3.1.4)


with

Ut(a) ≤ 0 , dP⊗ dt⊗ λ(da) a.e. (3.1.5)

The main goal of this chapter is to relate the BSDE (3.1.4) with nonpositive jumps

(3.1.5) to the following nonlinear IPDE of HJB type:

− ∂w/∂t − sup_{a∈A} [ L^a w + f(·, a, w, σᵀ(·,a) D_x w) ] = 0,   on [0,T) × R^d,    (3.1.6)
w(T, x) = sup_{a∈A} g(x, a),   x ∈ R^d,    (3.1.7)

where

L^a w(t, x) = b(x,a)·D_x w(t, x) + ½ tr(σσᵀ(x,a) D²_x w(t, x)),

for (t, x, a) ∈ [0,T] × R^d × R^q.

Notice that under (HBC) and (3.1.3) (which follows from (HFC)), the generator

F (t, ω, y, z, u) = f(Xt(ω), It(ω), y, z) and the terminal condition ξ = g(XT , IT ) satisfy

clearly Assumption (H0). Let us now show that Assumption (H1) is satisfied. More

precisely, we have the following result.

Lemma 3.1.6 Let Assumptions (HFC) and (HBC) hold. Then, for any initial condition (t, x, a) ∈ [0,T] × R^d × R^q, there exists a solution (Ỹ^{t,x,a}_s, Z̃^{t,x,a}_s, Ũ^{t,x,a}_s, K̃^{t,x,a}_s), t ≤ s ≤ T, to the BSDE (3.1.4)-(3.1.5) when (X, I) = (X^{t,x,a}_s, I^{t,a}_s), t ≤ s ≤ T, with Ỹ^{t,x,a}_s = ṽ(s, X^{t,x,a}_s) for some deterministic function ṽ on [0,T] × R^d satisfying a polynomial growth condition:

sup_{(t,x)∈[0,T]×R^d} |ṽ(t, x)| / (1 + |x|²) < ∞.    (3.1.8)

Proof. Under (HBC) and since A is compact, we observe that

C_{f,g} := sup_{x∈R^d, a∈A, y∈R, z∈R^d} ( |g(x,a)| + |f(x,a,y,z)| ) / ( 1 + |x| + |y| + |z| ) < ∞.    (3.1.9)

Let us then consider the smooth function ṽ(t, x) = C e^{ρ(T−t)} (1 + |x|²) for some positive constants C and ρ to be determined later. We claim that for C and ρ large enough, the function ṽ is a classical supersolution to (3.1.6)-(3.1.7). Indeed, observe first that from the growth condition on g in (3.1.9), there exists C > 0 s.t. ḡ(x) := sup_{a∈A} g(x, a) ≤ C(1 + |x|²) for all x ∈ R^d. For such C, we then have ṽ(T, ·) ≥ ḡ. On the other hand, we see after a straightforward calculation that there exists a positive constant C̄, depending only on C, C_{f,g}, and the linear growth condition in x on b and σ from (HFC) (recall that A is compact), such that

− ∂ṽ/∂t − sup_{a∈A} [ L^a ṽ + f(·, a, ṽ, σᵀ(·,a) D_x ṽ) ] ≥ (ρ − C̄) ṽ ≥ 0,


by choosing ρ ≥ C̄. Let us now define the quadruple (Ỹ, Z̃, Ũ, K̃) by:

Ỹ_t = ṽ(t, X_t) for t < T,   Ỹ_T = g(X_T, I_T),
Z̃_t = σᵀ(X_{t−}, I_{t−}) D_x ṽ(t, X_{t−}),   t ≤ T,
Ũ_t = 0,   t ≤ T,
K̃_t = ∫_0^t [ − ∂ṽ/∂t(s, X_s) − L^{I_s} ṽ(s, X_s) − f(X_s, I_s, Ỹ_s, Z̃_s) ] ds,   t < T,
K̃_T = K̃_{T−} + ṽ(T, X_T) − g(X_T, I_T).

From the supersolution property of ṽ to (3.1.6)-(3.1.7), the process K̃ is nondecreasing. Moreover, from the polynomial growth condition on ṽ, the linear growth conditions on b, σ, the growth condition (3.1.9) on f, g, and the estimate (3.1.3), we see that (Ỹ, Z̃, Ũ, K̃) lies in S² × L²(W) × L²(µ) × K². Finally, by applying Itô's formula to ṽ(t, X_t), we conclude that (Ỹ, Z̃, Ũ, K̃) is a solution to (3.1.4), and the constraint (3.1.5) is trivially satisfied. □

Under (HFC) and (HBC), we then get from Theorem 2.2.1 the existence of a unique minimal solution (Y^{t,x,a}_s, Z^{t,x,a}_s, U^{t,x,a}_s, K^{t,x,a}_s), t ≤ s ≤ T, to (3.1.4)-(3.1.5) when (X, I) = (X^{t,x,a}_s, I^{t,a}_s), t ≤ s ≤ T. Moreover, as we shall see in the next paragraph, this minimal solution is written in this Markovian context as Y^{t,x,a}_s = v(s, X^{t,x,a}_s, I^{t,a}_s), where v is the deterministic function defined on [0,T] × R^d × R^q by:

v(t, x, a) := Y^{t,x,a}_t,   (t, x, a) ∈ [0,T] × R^d × R^q.    (3.1.10)

We aim at proving that the function v defined by (3.1.10) actually does not depend on its argument a, and is a solution, in a sense to be made precise, to the parabolic IPDE (3.1.6)-(3.1.7).

Notice that we do not have a priori any smoothness or even continuity properties on v.

To this end, we first recall the definition of (discontinuous) viscosity solutions to (3.1.6)-

(3.1.7). For a locally bounded function w on [0, T )×Rd, we define its lower semicontinuous

(lsc for short) envelope w_*, and upper semicontinuous (usc for short) envelope w^*, by

w_*(t, x) = lim inf_{(t′,x′)→(t,x), t′<T} w(t′, x′)   and   w^*(t, x) = lim sup_{(t′,x′)→(t,x), t′<T} w(t′, x′),

for all (t, x) ∈ [0,T] × R^d.

Definition 3.1.2 (Viscosity solutions to (3.1.6)-(3.1.7))

(i) A function w, lsc (resp. usc) on [0,T] × R^d, is called a viscosity supersolution (resp. subsolution) to (3.1.6)-(3.1.7) if

w(T, x) ≥ (resp. ≤) sup_{a∈A} g(x, a),

for any x ∈ R^d, and

( − ∂ϕ/∂t − sup_{a∈A} [ L^a ϕ + f(·, a, w, σᵀ(·,a) D_x ϕ) ] )(t, x) ≥ (resp. ≤) 0,

for any (t, x) ∈ [0,T) × R^d and any ϕ ∈ C^{1,2}([0,T] × R^d) such that

(w − ϕ)(t, x) = min_{[0,T]×R^d} (w − ϕ)   (resp. max_{[0,T]×R^d} (w − ϕ)).

(ii) A locally bounded function w on [0,T) × R^d is called a viscosity solution to (3.1.6)-(3.1.7) if w_* is a viscosity supersolution and w^* is a viscosity subsolution to (3.1.6)-(3.1.7).

We can now state the main result of this chapter.

Theorem 3.1.3 Assume that conditions (HA), (Hλ), (HFC) and (HBC) hold. The function v in (3.1.10) does not depend on the variable a on [0,T) × R^d × Å, i.e.

v(t, x, a) = v(t, x, a′),   ∀ a, a′ ∈ Å,

for all (t, x) ∈ [0,T) × R^d. Let us then define, by misuse of notation, the function v on [0,T) × R^d by:

v(t, x) = v(t, x, a),   (t, x) ∈ [0,T) × R^d,    (3.1.11)

for any a ∈ Å. Then v is a viscosity solution to (3.1.6)-(3.1.7).

Remark 3.1.4 1. Once we have a uniqueness result for the fully nonlinear IPDE (3.1.6)-

(3.1.7), Theorem 3.1.3 provides a Feynman-Kac representation of this unique solution by

means of the minimal solution to the BSDE (3.1.4)-(3.1.5). This suggests consequently an

original probabilistic numerical approximation of the nonlinear IPDE (3.1.6)-(3.1.7) by dis-

cretization and simulation of the minimal solution to the BSDE (3.1.4)-(3.1.5). This issue,

especially the treatment of the nonpositive jump constraint, has been recently investigated

in [23] and [24], where the authors analyze the convergence rate of the approximation

scheme, and illustrate their results with several numerical tests arising for instance in the

super-replication of options in uncertain volatilities and correlations models. We mention

here that a nice feature of our scheme is the fact that the forward process (X, I) can be

easily simulated: indeed, notice that the jump times of I follow a Poisson distribution of

parameter λ :=∫A λ(da), and so the pure jump process I is perfectly simulatable once

we know how to simulate the distribution λ(da)/λ of the jump marks. Then, we can use

a standard Euler scheme for simulating the component X. Our scheme does not suffer

the curse of dimensionality encountered in finite difference methods or controlled Markov

chains, and takes advantage of the high dimensional properties of Monte-Carlo methods.
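As a complement to the description above, here is a minimal sketch of the simulation of the forward pair (X, I); the coefficients b, σ, the total intensity and the mark distribution are illustrative assumptions. The jump times of I are drawn as a Poisson process of intensity λ(A), the marks from λ(da)/λ(A), and X is advanced by a standard Euler scheme with the regime frozen at the left grid point (intra-step switches are picked up at the next grid point).

```python
import numpy as np

rng = np.random.default_rng(1)

lam_bar = 2.0                                  # lambda(A): total mass of the intensity measure
sample_mark = lambda: rng.uniform(0.0, 1.0)    # a draw from lambda(da)/lambda(A); here A = [0,1], lambda uniform
b = lambda x, a: a - x                         # hypothetical drift b(x,a)
sig = lambda x, a: 0.5 * (1.0 + a)             # hypothetical volatility sigma(x,a)

def simulate_X_I(x0, a0, T, n):
    """Euler scheme for X on a grid of n steps, with the regime I switching at Poisson times."""
    dt = T / n
    n_jumps = rng.poisson(lam_bar * T)                 # number of jumps of I on [0, T]
    taus = np.sort(rng.uniform(0.0, T, n_jumps))       # jump times of a Poisson process of intensity lambda(A)
    marks = [sample_mark() for _ in range(n_jumps)]
    X, I, j = [x0], [a0], 0
    for k in range(n):
        t_next = (k + 1) * dt
        while j < n_jumps and taus[j] <= t_next:       # I is piecewise constant, equal to the last mark
            j += 1
        a = marks[j - 1] if j > 0 else a0
        dW = rng.normal(0.0, np.sqrt(dt))
        X.append(X[-1] + b(X[-1], I[-1]) * dt + sig(X[-1], I[-1]) * dW)
        I.append(a)
    return np.array(X), np.array(I)

X_path, I_path = simulate_X_I(x0=1.0, a0=0.5, T=1.0, n=100)
```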

2. We do not address here comparison principles (and so uniqueness results) for the general

parabolic nonlinear IPDE (3.1.6)-(3.1.7). In the case where the generator function f(x, a)

does not depend on (y, z, u) (see Remark 3.1.5 below), comparison principle is proved in

[37], and the result can be extended by same arguments when f(x, a, y, z) depends also on

y, z under the Lipschitz condition of (HBC). 2

Remark 3.1.5 Stochastic control problem


1. Let us now consider the particular and important case where the generator f(x, a) does

not depend on (y, z). We then observe that the nonlinear IPDE (3.1.6) is the Hamilton-

Jacobi-Bellman (HJB) equation associated to the following stochastic control problem: let

us introduce the controlled jump-diffusion process:

dX^α_s = b(X^α_s, α_s) ds + σ(X^α_s, α_s) dW_s,    (3.1.12)

where W is a Brownian motion independent of a random measure ϑ on a filtered probability space (Ω, F, F⁰, P), the control α lies in A_{F⁰}, the set of F⁰-predictable processes valued in A, and define the value function for the control problem:

w(t, x) := sup_{α∈A_{F⁰}} E[ ∫_t^T f(X^{t,x,α}_s, α_s) ds + g(X^{t,x,α}_T, α_T) ],   (t, x) ∈ [0,T] × R^d,

where Xt,x,αs , t ≤ s ≤ T denotes the solution to (3.1.12) starting from x at s = t, given

a control α ∈ AF0 . It is well-known (see e.g. [37] or [32]) that the value function w is

characterized as the unique viscosity solution to the dynamic programming HJB equation

(3.1.6)-(3.1.7), and therefore by Theorem 3.1.3, w = v. In other words, we have provided a

representation of a fully nonlinear stochastic control problem, including in particular control in the diffusion term, possibly degenerate, in terms of the minimal solution to the BSDE (3.1.4)-(3.1.5).

2. Combining the BSDE representation of Theorem 3.1.3 together with the dual repre-

sentation in Theorem 2.3.2, we obtain an original representation for the value function of

stochastic control problem:

sup_{α∈A_{F⁰}} E[ ∫_0^T f(X^α_t, α_t) dt + g(X^α_T, α_T) ] = sup_{ν∈V} E^ν[ ∫_0^T f(X_t, I_t) dt + g(X_T, I_T) ].

The r.h.s. in the above relation may be viewed as a weak formulation of the stochastic

control problem. Indeed, it is well-known that when there is only control on the drift, the

value function may be represented in terms of control on change of equivalent probability

measures via Girsanov’s theorem for Brownian motion. Such representation is called weak

formulation for stochastic control problem, see [14]. In the general case, when there is

control on the diffusion coefficient, such “Brownian” Girsanov’s transformation can not be

applied, and the idea here is to introduce an exogenous process I valued in the control set

A, independent of W and ϑ governing the controlled state process Xα, and then to control

the change of equivalent probability measures through a Girsanov’s transformation on this

auxiliary process.

3. Non Markovian extension. An interesting issue is to extend our BSDE representation

of stochastic control problem to a non Markovian context, that is when the coefficients

b, σ and β of the controlled process are path-dependent. In this case, we know from the

recent works by Ekren, Touzi, and Zhang [13] that the value function to the path-dependent

stochastic control is a viscosity solution to a path-dependent fully nonlinear HJB equation.

One possible approach for getting a BSDE representation to path-dependent stochastic

control, would be to prove that our minimal solution to the BSDE with nonpositive jumps

is a viscosity solution to the path-dependent fully nonlinear HJB equation, and then to


conclude with a uniqueness result for path-dependent nonlinear PDE. However, to the best

of our knowledge, there is not yet such comparison result for viscosity supersolution and

subsolution of path-dependent nonlinear PDEs. Instead, we recently proved in [20] by

purely probabilistic arguments that the minimal solution to the BSDE with nonpositive

jumps is equal to the value function of a path-dependent stochastic control problem, and

our approach circumvents the delicate issue of dynamic programming principle and viscosity

solution in the non Markovian context. Our result is also obtained without assuming that

σ is non degenerate, in contrast with [13] (see their Assumption 4.7). 2

The rest of this chapter is devoted to the proof of Theorem 3.1.3.

3.2 Viscosity property of the penalized BSDE

Let us consider the Markov penalized BSDE associated to (3.1.4)-(3.1.5):

Y^n_t = g(X_T, I_T) + ∫_t^T f(X_s, I_s, Y^n_s, Z^n_s) ds + n ∫_t^T ∫_A [U^n_s(a)]_+ λ(da) ds
    − ∫_t^T Z^n_s dW_s − ∫_t^T ∫_A U^n_s(a) µ̃(ds, da),    (3.2.13)

and denote by (Y^{n,t,x,a}_s, Z^{n,t,x,a}_s, U^{n,t,x,a}_s), t ≤ s ≤ T, the unique solution to (3.2.13) when (X, I) = (X^{t,x,a}_s, I^{t,a}_s), t ≤ s ≤ T, for any initial condition (t, x, a) ∈ [0,T] × R^d × R^q. From the Markov property of the jump-diffusion process (X, I), we recall from [3] that Y^{n,t,x,a}_s = v_n(s, X^{t,x,a}_s, I^{t,a}_s), t ≤ s ≤ T, where v_n is the deterministic function defined on [0,T] × R^d × R^q by:

v_n(t, x, a) := Y^{n,t,x,a}_t,   (t, x, a) ∈ [0,T] × R^d × R^q.    (3.2.14)

From the convergence result (Theorem 2.2.1) for the penalized solutions, we deduce that the minimal solution of the constrained BSDE is actually of the form Y^{t,x,a}_s = v(s, X^{t,x,a}_s, I^{t,a}_s), t ≤ s ≤ T, with a deterministic function v as defined in (3.1.10).

Moreover, from the uniform estimate (2.2.5) and Lemma 3.1.6, there exists some positive constant C s.t. for all n,

|v_n(t, x, a)|² ≤ C ( E|g(X^{t,x,a}_T, I^{t,a}_T)|² + E[ ∫_t^T |f(X^{t,x,a}_s, I^{t,a}_s, 0, 0)|² ds ] + E[ sup_{t≤s≤T} |ṽ(s, X^{t,x,a}_s)|² ] ),

for all (t, x, a) ∈ [0,T] × R^d × R^q. From (HBC) we get that g and f satisfy a linear growth condition. Using (3.1.8) for ṽ, and the estimate (3.1.3) for (X, I), we obtain that v_n, and thus also v by passing to the limit, satisfies a polynomial growth condition: there exists some positive constant C_v such that for all n:

|v_n(t, x, a)| + |v(t, x, a)| ≤ C_v ( 1 + |x|² + |a|² ),   ∀ (t, x, a) ∈ [0,T] × R^d × R^q.    (3.2.15)


We now consider the parabolic semi-linear penalized IPDE, for any n:

− ∂v_n/∂t (t, x, a) − L^a v_n(t, x, a) − f(x, a, v_n, σᵀ(x,a) D_x v_n)    (3.2.16)
    − ∫_A [ v_n(t, x, a′) − v_n(t, x, a) ] λ(da′)
    − n ∫_A [ v_n(t, x, a′) − v_n(t, x, a) ]_+ λ(da′) = 0,   on [0,T) × R^d × R^q,

v_n(T, ·, ·) = g,   on R^d × R^q.    (3.2.17)

From Theorem 3.4 in Barles et al. [3], we have the well-known property that the

penalized BSDE with jumps (2.2.3) provides a viscosity solution to the penalized IPDE

(3.2.16)-(3.2.17). More precisely, we have the following result.

Proposition 3.2.2 Let Assumptions (HFC) and (HBC) hold. The function v_n in (3.2.14) is a continuous viscosity solution to (3.2.16)-(3.2.17), i.e. it is continuous on [0,T] × R^d × R^q, a viscosity supersolution (resp. subsolution) to (3.2.17):

v_n(T, x, a) ≥ (resp. ≤) g(x, a),

for any (x, a) ∈ R^d × R^q, and a viscosity supersolution (resp. subsolution) to (3.2.16):

− ∂ϕ/∂t (t, x, a) − L^a ϕ(t, x, a)    (3.2.18)
    − f(x, a, v_n(t, x, a), σᵀ(x,a) D_x ϕ(t, x, a))
    − ∫_A [ ϕ(t, x, a′) − ϕ(t, x, a) ] λ(da′) − n ∫_A [ ϕ(t, x, a′) − ϕ(t, x, a) ]_+ λ(da′) ≥ (resp. ≤) 0,

for any (t, x, a) ∈ [0,T) × R^d × R^q and any ϕ ∈ C^{1,2}([0,T] × (R^d × R^q)) such that

(v_n − ϕ)(t, x, a) = min_{[0,T]×R^d×R^q} (v_n − ϕ)   (resp. max_{[0,T]×R^d×R^q} (v_n − ϕ)).    (3.2.19)

In contrast to local PDEs with no integro-differential terms, we cannot restrict in general

the global minimum (resp. maximum) condition on the test functions for the definition of

viscosity supersolution (resp. subsolution) to local minimum (resp. maximum) condition.

In our IPDE case, the nonlocal terms appearing in (3.2.16) involve the values w.r.t. the

variable a only on the set A. Therefore, we are able to restrict the global extremum

condition on the test functions to extremum on [0, T ] × Rd × A. More precisely, we have

the following equivalent definition of viscosity solutions, which will be used later.

Lemma 3.2.7 Assume that (Hλ), (HFC), and (HBC) hold. In the definition of vn being

a viscosity supersolution (resp. subsolution) to (3.2.16) at a point (t, x, a) ∈ [0, T )×Rd× A,

we can replace condition (3.2.19) by:

0 = (vn − ϕ)(t, x, a) = min[0,T ]×Rd×A

(vn − ϕ) (resp. max[0,T ]×Rd×A

(vn − ϕ)) ,

and suppose that the test function ϕ is in C1,2,0([0, T ]× Rd × Rq).


Proof. We treat only the supersolution case, as the subsolution case is proved by the same arguments, and we proceed in two steps.

Step 1. Fix $(t,x,a)\in[0,T)\times\mathbb{R}^d\times\mathbb{R}^q$, and let us show that the viscosity supersolution inequality (3.2.18) also holds for any test function $\varphi$ in $C^{1,2,0}([0,T]\times\mathbb{R}^d\times\mathbb{R}^q)$ such that

$$(v_n-\varphi)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi). \qquad (3.2.20)$$

We may assume w.l.o.g. that the minimum for such a test function $\varphi$ is zero, and let us define for $r>0$ the function $\varphi^r$ by

$$\varphi^r(t',x',a') = \varphi(t',x')\Big(1-\Phi\Big(\tfrac{|x'|^2+|a'|^2}{r^2}\Big)\Big) - C_v\,\Phi\Big(\tfrac{|x'|^2+|a'|^2}{r^2}\Big)\big(1+|x'|^p+|a'|^p\big),$$

where $C_v>0$ and $p\ge 2$ are the constant and degree appearing in the polynomial growth condition (3.2.15) for $v_n$, and $\Phi:\mathbb{R}_+\to[0,1]$ is a function in $C^\infty(\mathbb{R}_+)$ such that $\Phi_{|[0,1]}\equiv 0$ and $\Phi_{|[2,+\infty)}\equiv 1$. Notice that $\varphi^r\in C^{1,2,0}([0,T]\times\mathbb{R}^d\times\mathbb{R}^q)$,

$$(\varphi^r, D_x\varphi^r, D_x^2\varphi^r) \longrightarrow (\varphi, D_x\varphi, D_x^2\varphi) \quad \text{as } r\to\infty \qquad (3.2.21)$$

locally uniformly on $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$, and that there exists a constant $C_r>0$ such that

$$|\varphi^r(t',x',a')| \le C_r\big(1+|x'|^p+|a'|^p\big) \qquad (3.2.22)$$

for all $(t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$. Since $\Phi$ is valued in $[0,1]$, we deduce from the polynomial growth condition (3.2.15) satisfied by $v_n$ and (3.2.20) that $\varphi^r\le v_n$ on $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$ for all $r>0$. Moreover, we have $\varphi^r(t,x,a)=\varphi(t,x,a)\,(=v_n(t,x,a))$ for $r$ large enough. Therefore we get

$$(v_n-\varphi^r)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi^r), \qquad (3.2.23)$$

for $r$ large enough, and we may assume w.l.o.g. that this minimum is strict. Let $(\varphi^r_k)_k$ be a sequence of functions in $C^{1,2}([0,T]\times(\mathbb{R}^d\times\mathbb{R}^q))$ satisfying (3.2.22) and such that

$$(\varphi^r_k, D_x\varphi^r_k, D_x^2\varphi^r_k) \longrightarrow (\varphi^r, D_x\varphi^r, D_x^2\varphi^r) \quad \text{as } k\to\infty, \qquad (3.2.24)$$

locally uniformly on $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$. From the growth conditions (3.2.15) and (3.2.22) on the continuous functions $v_n$ and $\varphi^r_k$, we can assume w.l.o.g. (up to a usual negative perturbation of the function $\varphi^r_k$ for large $(x',a')$) that there exists a bounded sequence $(t_k,x_k,a_k)_k$ in $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$ such that

$$(v_n-\varphi^r_k)(t_k,x_k,a_k) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi^r_k). \qquad (3.2.25)$$

The sequence $(t_k,x_k,a_k)_k$ converges up to a subsequence, and thus, by (3.2.23), (3.2.24) and (3.2.25), we have

$$(t_k,x_k,a_k) \longrightarrow (t,x,a), \quad \text{as } k\to\infty. \qquad (3.2.26)$$


Now, from the viscosity supersolution property of $v_n$ at $(t_k,x_k,a_k)$ with the test function $\varphi^r_k$, we have

$$-\frac{\partial\varphi^r_k}{\partial t}(t_k,x_k,a_k) - \mathcal{L}^{a_k}\varphi^r_k(t_k,x_k,a_k) - f\big(x_k,a_k,v_n(t_k,x_k,a_k),\sigma^{\intercal}(x_k,a_k)D_x\varphi^r_k(t_k,x_k,a_k)\big) - \int_A\big[\varphi^r_k(t_k,x_k,a')-\varphi^r_k(t_k,x_k,a_k)\big]\lambda(da') - n\int_A\big[\varphi^r_k(t_k,x_k,a')-\varphi^r_k(t_k,x_k,a_k)\big]^+\lambda(da') \ge 0.$$

Sending $k$ and then $r$ to infinity, and using (3.2.21), (3.2.24) and (3.2.26), we obtain the viscosity supersolution inequality at $(t,x,a)$ with the test function $\varphi$.

Step 2. Fix $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A$, and let $\varphi$ be a test function in $C^{1,2}([0,T]\times(\mathbb{R}^d\times\mathbb{R}^q))$ such that

$$0 = (v_n-\varphi)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times A}(v_n-\varphi). \qquad (3.2.27)$$

By the same arguments as in (3.2.22), we can assume w.l.o.g. that $\varphi$ satisfies the polynomial growth condition

$$|\varphi(t',x',a')| \le C\big(1+|x'|^p+|a'|^p\big), \qquad (t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q,$$

for some positive constant $C$. Together with (3.2.15), and since $A$ is compact, we have

$$(v_n-\varphi)(t',x',a') \ge -C\big(1+|x'|^p+|d_A(a')|^p\big), \qquad (3.2.28)$$

for all $(t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$, where $d_A(a')$ is the distance from $a'$ to $A$. Fix $\varepsilon>0$ and define the function $\varphi_\varepsilon\in C^{1,2,0}([0,T]\times\mathbb{R}^d\times\mathbb{R}^q)$ by

$$\varphi_\varepsilon(t',x',a') = \varphi(t',x',a') - \Phi\Big(\tfrac{d_{A_\varepsilon}(a')}{\varepsilon}\Big)\,C\big(1+|x'|^p+|d_A(a')|^p\big)$$

for all $(t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$, where

$$A_\varepsilon = \big\{a'\in A:\ d_{\partial A}(a')\ge\varepsilon\big\}, \qquad (3.2.29)$$

and $\Phi:\mathbb{R}_+\to[0,1]$ is a function in $C^\infty(\mathbb{R}_+)$ such that $\Phi_{|[0,\frac12]}\equiv 0$ and $\Phi_{|[1,+\infty)}\equiv 1$. Notice that

$$(\varphi_\varepsilon, D_x\varphi_\varepsilon, D_x^2\varphi_\varepsilon) \longrightarrow (\varphi, D_x\varphi, D_x^2\varphi) \quad \text{as } \varepsilon\to 0, \qquad (3.2.30)$$

locally uniformly on $[0,T]\times\mathbb{R}^d\times A$. We notice from (3.2.28) and the definition of $\varphi_\varepsilon$ that $\varphi_\varepsilon\le v_n$ on $[0,T]\times\mathbb{R}^d\times A_\varepsilon^c$. Moreover, since $\varphi_\varepsilon\le\varphi$ on $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$, $\varphi_\varepsilon=\varphi$ on $[0,T]\times\mathbb{R}^d\times A_\varepsilon$ and $a\in A$, we get by (3.2.27), for $\varepsilon$ small enough,

$$0 = (v_n-\varphi_\varepsilon)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi_\varepsilon).$$


From Step 1, we then have

$$-\frac{\partial\varphi_\varepsilon}{\partial t}(t,x,a) - \mathcal{L}^a\varphi_\varepsilon(t,x,a) - f\big(x,a,v_n(t,x,a),\sigma^{\intercal}(x,a)D_x\varphi_\varepsilon(t,x,a)\big) - \int_A\big[\varphi_\varepsilon(t,x,a')-\varphi_\varepsilon(t,x,a)\big]\lambda(da') - n\int_A\big[\varphi_\varepsilon(t,x,a')-\varphi_\varepsilon(t,x,a)\big]^+\lambda(da') \ge 0.$$

By sending $\varepsilon$ to zero with (3.2.30), and using $a\in A$ together with (H$\lambda$)(ii), we get the required viscosity supersolution inequality at $(t,x,a)$ for the test function $\varphi$. $\Box$

3.3 Non-dependence of the function v on the variable a

In this subsection, we aim to prove that the function $v(t,x,a)$ does not depend on $a$.

From the relation defining the Markov BSDE (3.1.4), and since for the minimal solution $(Y^{t,x,a},Z^{t,x,a},U^{t,x,a},K^{t,x,a})$ to (3.1.4)-(3.1.5) the process $K^{t,x,a}$ is predictable, we observe that the $A$-jump component $U^{t,x,a}$ is expressed in terms of $Y^{t,x,a} = v(\cdot,X^{t,x,a},I^{t,x,a})$ as:

$$U^{t,x,a}_s(a') = v\big(s,X^{t,x,a}_{s^-},a'\big) - v\big(s,X^{t,x,a}_{s^-},I^{t,x,a}_{s^-}\big), \qquad t\le s\le T,\ a'\in A,$$

for all $(t,x,a)\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$. From the $A$-nonpositive constraint (3.1.5), this yields

$$\mathbb{E}\Big[\int_t^{t+h}\!\!\int_A\big[v(s,X^{t,x,a}_s,a')-v(s,X^{t,x,a}_s,I^{t,x,a}_s)\big]^+\lambda(da')\,ds\Big] = 0,$$

for any $h>0$. If we knew a priori that the function $v$ were continuous on $[0,T)\times\mathbb{R}^d\times A$, we could obtain, by dividing the above equality by $h$ and sending $h$ to zero (using the dominated convergence theorem and the mean-value theorem):

$$\int_A\big[v(t,x,a')-v(t,x,a)\big]^+\lambda(da') = 0.$$

Under condition (H$\lambda$)(i), this would prove that $v(t,x,a)\ge v(t,x,a')$ for any $a,a'\in A$, and thus the function $v$ would not depend on $a$ in $A$.

Unfortunately, we are not able to prove directly the continuity of $v$ from its very definition (3.1.10), and instead we shall rely on a viscosity solutions approach to derive the non-dependence of $v(t,x,a)$ on $a\in A$. To this end, let us introduce the following first-order PDE:

$$-\big|D_a v(t,x,a)\big| = 0, \qquad (t,x,a)\in[0,T)\times\mathbb{R}^d\times A. \qquad (3.3.31)$$

Lemma 3.3.8 Let assumptions (H$\lambda$), (HFC) and (HBC) hold. The function $v$ is a viscosity supersolution to (3.3.31): for any $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A$ and any function $\varphi\in C^{1,2}([0,T]\times(\mathbb{R}^d\times\mathbb{R}^q))$ such that $(v-\varphi)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v-\varphi)$, we have

$$-\big|D_a\varphi(t,x,a)\big| \ge 0, \quad \text{i.e. } D_a\varphi(t,x,a) = 0.$$


Proof. We know that $v$ is the pointwise limit of the nondecreasing sequence of functions $(v_n)$. By continuity of the $v_n$, the function $v$ is lsc and we have (see e.g. [2] p. 91):

$$v = v_* = \liminf_{n\to\infty}{}_* v_n, \qquad (3.3.32)$$

where

$$\liminf_{n\to\infty}{}_* v_n(t,x,a) := \liminf_{\substack{n\to\infty\\ (t',x',a')\to(t,x,a),\ t'<T}} v_n(t',x',a'), \qquad (t,x,a)\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q.$$

Let $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A$, and $\varphi\in C^{1,2}([0,T]\times(\mathbb{R}^d\times\mathbb{R}^q))$ such that $(v-\varphi)(t,x,a) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v-\varphi)$. We may assume w.l.o.g. that this minimum is strict:

$$(v-\varphi)(t,x,a) = \text{strict}\min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v-\varphi). \qquad (3.3.33)$$

Up to a suitable negative perturbation of $\varphi$ for large $(x,a)$, we can assume w.l.o.g. that there exists a bounded sequence $(t_n,x_n,a_n)_n$ in $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$ such that

$$(v_n-\varphi)(t_n,x_n,a_n) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi). \qquad (3.3.34)$$

From (3.3.32), (3.3.33) and (3.3.34), we then have, up to a subsequence:

$$(t_n,x_n,a_n,v_n(t_n,x_n,a_n)) \longrightarrow (t,x,a,v(t,x,a)) \quad \text{as } n\to\infty. \qquad (3.3.35)$$

Now, from the viscosity supersolution property of $v_n$ at $(t_n,x_n,a_n)$ with the test function $\varphi$, we have by (3.3.34):

$$-\frac{\partial\varphi}{\partial t}(t_n,x_n,a_n) - \mathcal{L}^{a_n}\varphi(t_n,x_n,a_n) - f\big(x_n,a_n,v_n(t_n,x_n,a_n),\sigma^{\intercal}(x_n,a_n)D_x\varphi(t_n,x_n,a_n)\big) - \int_A\big[\varphi(t_n,x_n,a')-\varphi(t_n,x_n,a_n)\big]\lambda(da') - n\int_A\big[\varphi(t_n,x_n,a')-\varphi(t_n,x_n,a_n)\big]^+\lambda(da') \ge 0,$$

which implies

$$\int_A\big[\varphi(t_n,x_n,a')-\varphi(t_n,x_n,a_n)\big]^+\lambda(da') \le \frac{1}{n}\Big[-\frac{\partial\varphi}{\partial t}(t_n,x_n,a_n) - \mathcal{L}^{a_n}\varphi(t_n,x_n,a_n) - f\big(x_n,a_n,v_n(t_n,x_n,a_n),\sigma^{\intercal}(x_n,a_n)D_x\varphi(t_n,x_n,a_n)\big) - \int_A\big[\varphi(t_n,x_n,a')-\varphi(t_n,x_n,a_n)\big]\lambda(da')\Big].$$

Sending $n$ to infinity, we get from (3.3.35), the continuity of the coefficients $b$, $\sigma$, $\beta$ and $f$, and the dominated convergence theorem:

$$\int_A\big[\varphi(t,x,a')-\varphi(t,x,a)\big]^+\lambda(da') = 0.$$


Under (H$\lambda$), this means that $\varphi(t,x,a) = \max_{a'\in A}\varphi(t,x,a')$. Since $a\in A$, we deduce that $D_a\varphi(t,x,a) = 0$. $\Box$

We notice that the PDE (3.3.31) involves only differential terms in the variable $a$. Therefore, we can freeze the variables $(t,x)\in[0,T)\times\mathbb{R}^d$ in the PDE (3.3.31), i.e. we can take test functions not depending on the variables $(t,x)$ in the definition of viscosity solution, as shown in the following lemma.

Lemma 3.3.9 Let assumptions (H$\lambda$), (HFC) and (HBC) hold. For any $(t,x)\in[0,T)\times\mathbb{R}^d$, the function $v(t,x,\cdot)$ is a viscosity supersolution to

$$-\big|D_a v(t,x,a)\big| = 0, \qquad a\in A,$$

i.e. for any $a\in A$ and any function $\varphi\in C^2(\mathbb{R}^q)$ such that $(v(t,x,\cdot)-\varphi)(a) = \min_{\mathbb{R}^q}(v(t,x,\cdot)-\varphi)$, we have $-\big|D_a\varphi(a)\big|\ge 0$ (and so $D_a\varphi(a) = 0$).

Proof. Fix $(t,x)\in[0,T)\times\mathbb{R}^d$, $a\in A$ and $\varphi\in C^2(\mathbb{R}^q)$ such that

$$(v(t,x,\cdot)-\varphi)(a) = \min_{\mathbb{R}^q}(v(t,x,\cdot)-\varphi). \qquad (3.3.36)$$

As usual, we may assume w.l.o.g. that this minimum is strict and that $\varphi$ satisfies the growth condition $\sup_{a'\in\mathbb{R}^q}\frac{|\varphi(a')|}{1+|a'|^2}<\infty$. Let us then define for $n\ge 1$ the function $\varphi_n\in C^{1,2}([0,T]\times(\mathbb{R}^d\times\mathbb{R}^q))$ by

$$\varphi_n(t',x',a') = \varphi(a') - n\big(|t'-t|^2+|x'-x|^4\big) - |a'-a|^4$$

for all $(t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$. From the growth condition (3.2.15) on the lsc function $v$, and the growth condition on the continuous function $\varphi$, one can find for any $n\ge 1$ an element $(t_n,x_n,a_n)$ of $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$ such that

$$(v-\varphi_n)(t_n,x_n,a_n) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v-\varphi_n).$$

In particular, we have

$$v(t,x,a)-\varphi(a) = (v-\varphi_n)(t,x,a) \ \ge\ (v-\varphi_n)(t_n,x_n,a_n) \qquad (3.3.37)$$
$$= v(t_n,x_n,a_n)-\varphi(a_n) + n\big(|t_n-t|^2+|x_n-x|^4\big) + |a_n-a|^4$$
$$\ge v(t_n,x_n,a_n)-v(t,x,a_n) + v(t,x,a)-\varphi(a) + n\big(|t_n-t|^2+|x_n-x|^4\big) + |a_n-a|^4$$

by (3.3.36), which implies, from the growth condition (3.2.15) on $v$:

$$n\big(|t_n-t|^2+|x_n-x|^4\big) + |a_n-a|^4 \le C\big(1+|x_n-x|^2+|a_n-a|^2\big).$$

Therefore, the sequences $\big(n(|t_n-t|^2+|x_n-x|^4)\big)_n$ and $(|a_n-a|^4)_n$ are bounded, and (up to a subsequence) we have $(t_n,x_n,a_n)\longrightarrow(t,x,a_\infty)$ as $n$ goes to infinity, for some $a_\infty\in\mathbb{R}^q$. Actually, since $v(t,x,a)-\varphi(a)\ge v(t_n,x_n,a_n)-\varphi(a_n)$ by (3.3.37), we obtain, by sending $n$ to infinity and since the minimum in (3.3.36) is strict, that $a_\infty = a$, and so:

$$(t_n,x_n,a_n) \longrightarrow (t,x,a) \quad \text{as } n\to\infty.$$


On the other hand, from Lemma 3.3.8 applied at $(t_n,x_n,a_n)$ with the test function $\varphi_n$, we have

$$0 = D_a\varphi_n(t_n,x_n,a_n) = D_a\varphi(a_n) - 4(a_n-a)|a_n-a|^2,$$

for all $n\ge 1$. Sending $n$ to infinity, we get the required result: $D_a\varphi(a) = 0$. $\Box$

We are now able to state the main result of this subsection.

Proposition 3.3.3 Let assumptions (HA), (H$\lambda$), (HFC) and (HBC) hold. The function $v$ does not depend on the variable $a$ on $[0,T)\times\mathbb{R}^d\times A$:

$$v(t,x,a) = v(t,x,a'), \qquad a,a'\in A,$$

for any $(t,x)\in[0,T)\times\mathbb{R}^d$.

Proof. We proceed in four steps.

Step 1. Approximation by inf-convolution. We introduce the family of functions $(u_n)_n$ defined by

$$u_n(t,x,a) = \inf_{a'\in A}\big[v(t,x,a') + n|a-a'|^4\big], \qquad (t,x,a)\in[0,T]\times\mathbb{R}^d\times A.$$

It is clear that the sequence $(u_n)_n$ is nondecreasing and upper-bounded by $v$. Moreover, since $v$ is lsc, we have the pointwise convergence of $u_n$ to $v$ on $[0,T]\times\mathbb{R}^d\times A$. Indeed, fix some $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$. Since $v$ is lsc, there exists a sequence $(a_n)_n$ valued in $A$ such that

$$u_n(t,x,a) = v(t,x,a_n) + n|a-a_n|^4,$$

for all $n\ge 1$. Since $A$ is compact, the sequence $(a_n)$ converges, up to a subsequence, to some $a_\infty\in A$. Moreover, since $u_n$ is upper-bounded by $v$ and $v$ is lsc, we see that $a_\infty = a$ and

$$u_n(t,x,a) \longrightarrow v(t,x,a) \quad \text{as } n\to\infty. \qquad (3.3.38)$$

Step 2. A test function for $u_n$ seen as a test function for $v$. For $r,\delta>0$, let us define the integer $N(r,\delta)$ by

$$N(r,\delta) = \min\Big\{n\in\mathbb{N}:\ n \ge \frac{2C_v\big(1+2^{-1}+r^p+2\max_{a\in A}|a|^2\big)}{(\delta/2)^4} + C_v\Big\},$$

where $C_v$ is the constant in the growth condition (3.2.15), and define the set $A_\delta$ by

$$A_\delta = \Big\{a\in A:\ d(a,\partial A) := \min_{a'\in\partial A}|a-a'| > \delta\Big\}.$$

Fix $(t,x)\in[0,T)\times\mathbb{R}^d$. We now prove that for any $\delta>0$, $n\ge N(|x|,\delta)$, $a\in A_\delta$ and $\varphi\in C^2(\mathbb{R}^q)$ such that

$$0 = (u_n(t,x,\cdot)-\varphi)(a) = \min_{\mathbb{R}^q}(u_n(t,x,\cdot)-\varphi), \qquad (3.3.39)$$

there exist $a_n\in A$ and $\psi\in C^2(\mathbb{R}^q)$ such that

$$0 = (v(t,x,\cdot)-\psi)(a_n) = \min_{\mathbb{R}^q}(v(t,x,\cdot)-\psi), \qquad (3.3.40)$$

and

$$D_a\psi(a_n) = D_a\varphi(a). \qquad (3.3.41)$$

To this end we proceed in two substeps.

Substep 2.1. We prove that for any $\delta>0$, $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A_\delta$, and any $n\ge N(|x|,\delta)$:

$$\operatorname*{arg\,min}_{a'\in A}\big\{v(t,x,a')+n|a'-a|^4\big\} \subset A.$$

Fix $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A_\delta$ and let $a_n\in A$ be such that

$$v(t,x,a_n)+n|a_n-a|^4 = \min_{a'\in A}\big[v(t,x,a')+n|a'-a|^4\big].$$

Then we have

$$v(t,x,a_n)+n|a_n-a|^4 \le v(t,x,a),$$

and by (3.2.15) this gives

$$-C_v\Big(1+|x|^p+2\max_{a\in A}|a|^2+2|a_n-a|^2\Big) + n|a_n-a|^4 \le C_v\big(1+|x|^2+|a|^2\big).$$

Then, using the inequality $2\alpha\beta\le\alpha^2+\beta^2$ for the product $2\alpha\beta = 2^{p-1}|a_n-a|^p$, we get:

$$(n-C_v)|a_n-a|^4 \le 2C_v\Big(1+2^{-1}+|x|^2+2\max_{a\in A}|a|^2\Big).$$

For $n\ge N(|x|,\delta)$, we get from the definition of $N(r,\delta)$:

$$|a_n-a| \le \frac{\delta}{2},$$

which shows that $a_n\in A$ since $a\in A_\delta$.

Substep 2.2. Fix $\delta>0$, $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A_\delta$, and $\varphi\in C^2(\mathbb{R}^q)$ satisfying (3.3.39). Let us then choose $a_n\in\operatorname*{arg\,min}\{v(t,x,a')+n|a'-a|^4:\ a'\in A\}$, and define $\psi\in C^2(\mathbb{R}^q)$ by:

$$\psi(a') = \varphi(a+a'-a_n) - n|a_n-a|^4, \qquad a'\in\mathbb{R}^q.$$

It is clear that $\psi$ satisfies (3.3.41). Moreover, we have by (3.3.39) and the inf-convolution definition of $u_n$:

$$\psi(a') \le u_n(t,x,a+a'-a_n) - n|a_n-a|^4 \le v(t,x,a'), \qquad a'\in\mathbb{R}^q.$$

Moreover, since $a_n\in A$ attains the infimum in the inf-convolution definition of $u_n(t,x,a)$, we have

$$\psi(a_n) = \varphi(a)-n|a_n-a|^4 = u_n(t,x,a)-n|a_n-a|^4 = v(t,x,a_n),$$

which shows (3.3.40).

Step 3. The function $u_n$ does not depend locally on the variable $a$. From Step 2 and Lemma 3.3.9, we obtain that for any fixed $(t,x)\in[0,T)\times\mathbb{R}^d$, the function $u_n(t,x,\cdot)$ inherits from $v(t,x,\cdot)$ the viscosity supersolution property to

$$-\big|D_a u_n(t,x,a)\big| = 0, \qquad a\in A_\delta, \qquad (3.3.42)$$

for any $\delta>0$, $n\ge N(|x|,\delta)$. Let us then show that $u_n(t,x,\cdot)$ is locally constant, in the sense that for all $a\in A_\delta$:

$$u_n(t,x,a) = u_n(t,x,a'), \qquad \forall a'\in B(a,\eta), \qquad (3.3.43)$$

for all $\eta>0$ such that $B(a,\eta)\subset A_\delta$. We first notice from the inf-convolution definition that $u_n(t,x,\cdot)$ is semiconcave on $A_\delta$. From Theorem 2.1.7 in [10], we deduce that $u_n(t,x,\cdot)$ is locally Lipschitz continuous on $A_\delta$. By Rademacher's theorem, this implies that $u_n(t,x,\cdot)$ is differentiable almost everywhere on $A_\delta$. Therefore, by Corollary 2.1 (ii) in [2] and the viscosity supersolution property (3.3.42), the relation (3.3.42) actually holds in the classical sense for almost all $a'\in A_\delta$. In other words, $u_n(t,x,\cdot)$ is a locally Lipschitz continuous function with derivative equal to zero almost everywhere on $A_\delta$; it is therefore locally constant (an easy exercise in analysis left to the reader).

Step 4. From the convergence (3.3.38) of $u_n$ to $v$ and the relation (3.3.43), we get by sending $n$ to infinity that, for any $\delta>0$ and any $(t,x,a)\in[0,T)\times\mathbb{R}^d\times A_\delta$,

$$v(t,x,a) = v(t,x,a')$$

for all $\eta>0$ such that $B(a,\eta)\subset A_\delta$ and all $a'\in B(a,\eta)$. Then, by sending $\delta$ to zero, we obtain that $v$ does not depend on the variable $a$ locally on $[0,T)\times\mathbb{R}^d\times A$. Since $A$ is assumed to be connected, we obtain that $v$ does not depend on the variable $a$ on $[0,T)\times\mathbb{R}^d\times A$. $\Box$

3.4 Viscosity properties of the minimal solution to the constrained BSDE

From Proposition 3.3.3, we can define, by a slight abuse of notation, the function $v$ on $[0,T)\times\mathbb{R}^d$ by

$$v(t,x) = v(t,x,a), \qquad (t,x)\in[0,T)\times\mathbb{R}^d, \qquad (3.4.44)$$

for any $a\in A$. Moreover, by the growth condition (3.2.15), we have

$$\sup_{(t,x)\in[0,T]\times\mathbb{R}^d}\frac{|v(t,x)|}{1+|x|^2} < \infty. \qquad (3.4.45)$$

The aim of this section is to prove that the function $v$ is a viscosity solution to (3.1.6)-(3.1.7).


Proof of the viscosity supersolution property to (3.1.6). We first notice from (3.3.32) and (3.4.44) that $v$ is lsc and

$$v(t,x) = v_*(t,x) = \liminf_{n\to\infty}{}_* v_n(t,x,a) \qquad (3.4.46)$$

for all $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$. Let $(t,x)$ be a point in $[0,T)\times\mathbb{R}^d$, and $\varphi\in C^{1,2}([0,T]\times\mathbb{R}^d)$ such that

$$(v-\varphi)(t,x) = \min_{[0,T]\times\mathbb{R}^d}(v-\varphi).$$

We may assume w.l.o.g. that $\varphi$ satisfies $\sup_{(t,x)\in[0,T]\times\mathbb{R}^d}\frac{|\varphi(t,x)|}{1+|x|^p}<\infty$. Fix some $a\in A$, and define for $\varepsilon>0$ the test function

$$\varphi^\varepsilon(t',x',a') = \varphi(t',x') - \varepsilon\big(|t'-t|^2+|x'-x|^4+|a'-a|^4\big),$$

for all $(t',x',a')\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$. Since $\varphi^\varepsilon(t,x,a) = \varphi(t,x)$, and $\varphi^\varepsilon\le\varphi$ with equality iff $(t',x',a')=(t,x,a)$, we then have

$$(v-\varphi^\varepsilon)(t,x,a) = \text{strict}\min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v-\varphi^\varepsilon). \qquad (3.4.47)$$

From the growth conditions on the continuous functions $v_n$ and $\varphi$, there exists a bounded sequence $(t_n,x_n,a_n)_n$ (we omit the dependence on $\varepsilon$) in $[0,T]\times\mathbb{R}^d\times\mathbb{R}^q$ such that

$$(v_n-\varphi^\varepsilon)(t_n,x_n,a_n) = \min_{[0,T]\times\mathbb{R}^d\times\mathbb{R}^q}(v_n-\varphi^\varepsilon). \qquad (3.4.48)$$

From (3.4.46) and (3.4.47), we obtain by standard arguments that, up to a subsequence,

$$(t_n,x_n,a_n,v_n(t_n,x_n,a_n)) \longrightarrow (t,x,a,v(t,x)), \quad \text{as } n \text{ goes to infinity.}$$

Now, from the viscosity supersolution property of $v_n$ at $(t_n,x_n,a_n)$ with the test function $\varphi^\varepsilon$, we have

$$-\frac{\partial\varphi^\varepsilon}{\partial t}(t_n,x_n,a_n) - \mathcal{L}^{a_n}\varphi^\varepsilon(t_n,x_n,a_n) - f\big(x_n,a_n,v_n(t_n,x_n,a_n),\sigma^{\intercal}(x_n,a_n)D_x\varphi^\varepsilon(t_n,x_n,a_n)\big) - \int_A\big[\varphi^\varepsilon(t_n,x_n,a')-\varphi^\varepsilon(t_n,x_n,a_n)\big]\lambda(da') - n\int_A\big[\varphi^\varepsilon(t_n,x_n,a')-\varphi^\varepsilon(t_n,x_n,a_n)\big]^+\lambda(da') \ge 0.$$

Sending $n$ to infinity in the above inequality, we get from the definition of $\varphi^\varepsilon$ and the dominated convergence theorem:

$$-\frac{\partial\varphi^\varepsilon}{\partial t}(t,x,a) - \mathcal{L}^a\varphi^\varepsilon(t,x,a) - f\big(x,a,v(t,x),\sigma^{\intercal}(x,a)D_x\varphi^\varepsilon(t,x,a)\big) + \varepsilon\int_A|a'-a|^4\lambda(da') \ge 0. \qquad (3.4.49)$$

Sending $\varepsilon$ to zero, and since $\varphi^\varepsilon(t,x,a) = \varphi(t,x)$, we get

$$-\frac{\partial\varphi}{\partial t}(t,x) - \mathcal{L}^a\varphi(t,x) - f\big(x,a,v(t,x),\sigma^{\intercal}(x,a)D_x\varphi(t,x)\big) \ge 0.$$

Since $a$ is arbitrarily chosen in $A$, we get from (HA) and the continuity of the coefficients $b$, $\sigma$, $\gamma$ and $f$ in the variable $a$:

$$-\frac{\partial\varphi}{\partial t}(t,x) - \sup_{a\in A}\Big[\mathcal{L}^a\varphi(t,x) + f\big(x,a,v(t,x),\sigma^{\intercal}(x,a)D_x\varphi(t,x)\big)\Big] \ge 0,$$

which is the viscosity supersolution property. $\Box$

Proof of the viscosity subsolution property to (3.1.6). Since $v$ is the pointwise limit of the nondecreasing sequence of continuous functions $(v_n)$, and recalling (3.4.44), we have by [2] p. 91:

$$v^*(t,x) = \limsup_{n\to\infty}{}^* v_n(t,x,a) \qquad (3.4.50)$$

for all $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$, where

$$\limsup_{n\to\infty}{}^* v_n(t,x,a) := \limsup_{\substack{n\to\infty\\ (t',x',a')\to(t,x,a)\\ t'<T,\ a'\in A}} v_n(t',x',a').$$

Fix $(t,x)\in[0,T)\times\mathbb{R}^d$ and $\varphi\in C^{1,2}([0,T]\times\mathbb{R}^d)$ such that

$$(v^*-\varphi)(t,x) = \max_{[0,T]\times\mathbb{R}^d}(v^*-\varphi). \qquad (3.4.51)$$

We may assume w.l.o.g. that this maximum is strict and that $\varphi$ satisfies

$$\sup_{(t,x)\in[0,T]\times\mathbb{R}^d}\frac{|\varphi(t,x)|}{1+|x|^p} < \infty. \qquad (3.4.52)$$

Fix $a\in A$ and consider a sequence $(t_n,x_n,a_n)_n$ in $[0,T)\times\mathbb{R}^d\times A$ such that

$$(t_n,x_n,a_n,v_n(t_n,x_n,a_n)) \longrightarrow (t,x,a,v^*(t,x)) \quad \text{as } n\to\infty. \qquad (3.4.53)$$

Let us define for $n\ge 1$ the function $\varphi_n\in C^{1,2,0}([0,T]\times\mathbb{R}^d\times\mathbb{R}^q)$ by

$$\varphi_n(t',x',a') = \varphi(t',x') + n\Big(\frac{d_{A_{\eta_n}}(a')}{\eta_n}\wedge 1 + |t'-t_n|^2 + |x'-x_n|^4\Big),$$

where $A_{\eta_n}$ is defined by (3.2.29) for $\varepsilon = \eta_n$, and $(\eta_n)_n$ is a positive sequence converging to $0$ such that (such a sequence exists by (H$\lambda$)(ii)):

$$n^2\,\lambda\big(A\setminus A_{\eta_n}\big) \longrightarrow 0 \quad \text{as } n\to\infty. \qquad (3.4.54)$$

From the growth conditions (3.4.45) and (3.4.52) on $v$ and $\varphi$, we can find a sequence $(\bar t_n,\bar x_n,\bar a_n)$ in $[0,T]\times\mathbb{R}^d\times A$ such that

$$(v_n-\varphi_n)(\bar t_n,\bar x_n,\bar a_n) = \max_{[0,T]\times\mathbb{R}^d\times A}(v_n-\varphi_n), \qquad n\ge 1. \qquad (3.4.55)$$

Using (3.4.50) and (3.4.51), we obtain by standard arguments that, up to a subsequence,

$$n\Big(\frac{1}{\eta_n}d_{A_{\eta_n}}(\bar a_n) + |\bar t_n-t_n|^2 + |\bar x_n-x_n|^4\Big) \longrightarrow 0 \quad \text{as } n\to\infty, \qquad (3.4.56)$$

and

$$v_n(\bar t_n,\bar x_n,\bar a_n) \longrightarrow v^*(t,x) \quad \text{as } n\to\infty.$$

We deduce from (3.4.56) and (3.4.53) that, up to a subsequence,

$$(\bar t_n,\bar x_n,\bar a_n) \longrightarrow (t,x,\bar a), \quad \text{as } n\to\infty, \qquad (3.4.57)$$

for some $\bar a\in A$. Moreover, for $n$ large enough, we have $\bar a_n\in A$. Indeed, suppose that, up to a subsequence, $\bar a_n\in\partial A$ for $n\ge 1$. Then we have $\frac{1}{\eta_n}d_{A_{\eta_n}}(\bar a_n)\ge 1$, which contradicts (3.4.56). Now, from the viscosity subsolution property of $v_n$ at $(\bar t_n,\bar x_n,\bar a_n)$ with the test function $\varphi_n$ satisfying (3.4.55), Lemma 3.2.7, and since $\bar a_n\in A$, we have:

$$-\frac{\partial\varphi_n}{\partial t}(\bar t_n,\bar x_n,\bar a_n) - \mathcal{L}^{\bar a_n}\varphi_n(\bar t_n,\bar x_n,\bar a_n) - f\big(\bar x_n,\bar a_n,v_n(\bar t_n,\bar x_n,\bar a_n),\sigma^{\intercal}(\bar x_n,\bar a_n)D_x\varphi(\bar t_n,\bar x_n)\big) - (n+1)\,n\int_A\Big(\frac{d_{A_{\eta_n}}(a')}{\eta_n}\wedge 1\Big)\lambda(da') \le 0, \qquad (3.4.58)$$

for all $n\ge 1$. From (3.4.54) we get

$$(n+1)\,n\int_A\Big(\frac{d_{A_{\eta_n}}(a')}{\eta_n}\wedge 1\Big)\lambda(da') \longrightarrow 0 \quad \text{as } n\to\infty. \qquad (3.4.59)$$

Sending $n$ to infinity in (3.4.58), and using (3.4.50), (3.4.57) and (3.4.59), we get

$$-\frac{\partial\varphi}{\partial t}(t,x) - \mathcal{L}^{\bar a}\varphi(t,x) - f\big(x,\bar a,v^*(t,x),\sigma^{\intercal}(x,\bar a)D_x\varphi(t,x)\big) \le 0.$$

Since $\bar a\in A$, this gives

$$-\frac{\partial\varphi}{\partial t}(t,x) - \sup_{a\in A}\Big[\mathcal{L}^a\varphi(t,x) + f\big(x,a,v^*(t,x),\sigma^{\intercal}(x,a)D_x\varphi(t,x)\big)\Big] \le 0,$$

which is the viscosity subsolution property. $\Box$

Proof of the viscosity supersolution property to (3.1.7). Let $(x,a)\in\mathbb{R}^d\times A$. From (3.4.46), we can find a sequence $(t_n,x_n,a_n)_n$ valued in $[0,T)\times\mathbb{R}^d\times\mathbb{R}^q$ such that

$$(t_n,x_n,a_n,v_n(t_n,x_n,a_n)) \longrightarrow (T,x,a,v_*(T,x)) \quad \text{as } n\to\infty.$$

The sequence of continuous functions $(v_n)_n$ being nondecreasing and $v_n(T,\cdot) = g$, we have

$$v_*(T,x) \ge \lim_{n\to\infty} v_1(t_n,x_n,a_n) = g(x,a).$$

Since $a$ is arbitrarily chosen in $A$, we deduce that $v_*(T,x)\ge\sup_{a\in A}g(x,a)$, by (HA) and the continuity of $g$ in $a$. $\Box$


Proof of the viscosity subsolution property to (3.1.7). Let $x\in\mathbb{R}^d$. By (3.4.50) we can find a sequence $(t_n,x_n,a_n)_n$ in $[0,T)\times\mathbb{R}^d\times A$ such that

$$(t_n,x_n,v_n(t_n,x_n,a_n)) \longrightarrow (T,x,v^*(T,x)), \quad \text{as } n\to\infty. \qquad (3.4.60)$$

Define the function $h:[0,T]\times\mathbb{R}^d\to\mathbb{R}$ by

$$h(t,x) = \sqrt{T-t} + \sup_{a\in A}g(x,a)$$

for all $(t,x)\in[0,T)\times\mathbb{R}^d$. From (HFC) and (HBC), we see that $h$ is a continuous viscosity supersolution to (3.2.16)-(3.2.17) on $[T-\eta,T]\times B(x,\eta)$, for $\eta$ small enough. We can then apply Theorem 3.5 in [3], which gives

$$v_n \le h \quad \text{on } [T-\eta,T]\times B(x,\eta)\times A$$

for all $n\ge 0$. By applying the above inequality at $(t_n,x_n,a_n)$ and sending $n$ to infinity, together with (3.4.60), we get the required result. $\Box$


Part II

Discretization of fully nonlinear HJB equations via BSDEs with nonpositive jumps


Chapter 4

Discretization of the nonpositive jump constraint

In this chapter, we present an approximation of the constraint imposed on the jump component of the solution to the BSDE. We first express this approximated constraint as a constraint on the component $Y$ operating only at the times of a fixed grid. We then show that the solution satisfying the approximated constraint converges as soon as the time mesh of the grid goes to zero.

4.1 Discretely jump-constrained BSDE

We introduce in this section the discretely jump-constrained BSDE. The nonpositive jump constraint operates only at the times of the grid $\pi = \{t_0 = 0 < t_1 < \ldots < t_n = T\}$ of $[0,T]$, and we look for a quadruple $(Y^\pi,\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)\in\mathbf{S}^2\times\mathbf{S}^2\times\mathbf{L}^2(\mathbf{W})\times\mathbf{L}^2(\mu)$ satisfying

$$Y^\pi_T = \mathcal{Y}^\pi_T = g(X_T) \qquad (4.1.1)$$

and

$$\mathcal{Y}^\pi_t = Y^\pi_{t_{k+1}} + \int_t^{t_{k+1}} f\big(X_s,I_s,\mathcal{Y}^\pi_s,\mathcal{Z}^\pi_s\big)ds - \int_t^{t_{k+1}}\mathcal{Z}^\pi_s\,dW_s - \int_t^{t_{k+1}}\!\!\int_A\mathcal{U}^\pi_s(a)\,\mu(ds,da), \qquad (4.1.2)$$

$$Y^\pi_t = \mathcal{Y}^\pi_t\,\mathbf{1}_{(t_k,t_{k+1})}(t) + \operatorname*{ess\,sup}_{a\in A}\mathbb{E}\big[\mathcal{Y}^\pi_t\,\big|\,X_t,\,I_t = a\big]\,\mathbf{1}_{\{t_k\}}(t), \qquad (4.1.3)$$

for all $t\in[t_k,t_{k+1})$ and all $0\le k\le n-1$.

square integrable since it involves a supremum over A, and the well-posedness of the BSDE

(4.1.1)-(4.1.2)-(4.1.3) is not a direct and standard issue. We shall use a PDE approach for

proving the existence and uniqueness of a solution. Let us consider the system of integro-

partial differential equations (IPDEs) for the functions vπ and ϑπ defined recursively on

[0, T ]× Rd ×A by:

40

Page 42: BSDE Representation and Discretization for Hamilton-Jacobi ...idris/Enit-2014.pdf · solution vto the semi-linear PDE (1.0.3) is connected to the BSDE: Y t = g(X0 T) + Z T t F(X0

• A terminal condition for vπ and ϑπ:

vπ(T, x, a) = ϑπ(T, x, a) = g(x) , (x, a) ∈ Rd ×A , (4.1.4)

• A sequence of IPDEs for ϑπ−Laϑπ − f

(x, a, ϑπ, σᵀ(x, a)Dxϑ

π)

−∫A

(ϑπ(t, x, a′)− ϑπ(t, x, a)

)λ(da′) = 0, (t, x, a) ∈ [tk, tk+1)× Rd ×A,

ϑπ(t−k+1, x, a) = supa′∈A ϑπ(tk+1, x, a

′) (x, a) ∈ Rd ×A(4.1.5)

for k = 0 . . . , n− 1,

• the relation between vπ and ϑπ:

vπ(t, x, a) = ϑπ(t, x, a)1(tk,tk+1)(t) + supa′∈A

ϑπ(t, x, a′)1tk(t) , (4.1.6)

for all t ∈ [tk, tk+1) and k = 0 . . . , n− 1. The rest of this section is devoted to the proof of

existence and uniqueness of a solution to (4.1.4)-(4.1.5)-(4.1.6), together with some uniform

Lipschitz properties, and its connection to the discretely jump-constrained BSDE (4.1.1)-

(4.1.2)-(4.1.3).

For any $L$-Lipschitz continuous function $\varphi$ on $\mathbb{R}^d\times A$, and $k\le n-1$, we denote

$$\mathcal{T}^\pi_k[\varphi](t,x,a) := w(t,x,a), \qquad (t,x,a)\in[t_k,t_{k+1})\times\mathbb{R}^d\times A, \qquad (4.1.7)$$

where $w$ is the unique continuous viscosity solution on $[t_k,t_{k+1}]\times\mathbb{R}^d\times A$, with linear growth condition in $x$, to the integro-partial differential equation (IPDE):

$$\begin{cases} -\mathcal{L}^a w - f\big(x,a,w,\sigma^{\intercal}D_x w\big) - \displaystyle\int_A\big(w(t,x,a')-w(t,x,a)\big)\lambda(da') = 0, & (t,x,a)\in[t_k,t_{k+1})\times\mathbb{R}^d\times A,\\[1mm] w(t_{k+1}^-,x,a) = \varphi(x,a), & (x,a)\in\mathbb{R}^d\times A, \end{cases} \qquad (4.1.8)$$

and we extend $\mathcal{T}^\pi_k[\varphi]$ by continuity: $\mathcal{T}^\pi_k[\varphi](t_{k+1},x,a) = \varphi(x,a)$. The existence and uniqueness of such a solution $w$ to the semilinear IPDE (4.1.8), and its nonlinear Feynman-Kac representation in terms of a BSDE with jumps, follow e.g. from Theorems 3.4 and 3.5 in [3].

Lemma 4.1.1 There exists a constant $C$ such that for any $L$-Lipschitz continuous function $\varphi$ on $\mathbb{R}^d\times A$, and $k\le n-1$, we have

$$\big|\mathcal{T}^\pi_k[\varphi](t,x,a)-\mathcal{T}^\pi_k[\varphi](t,x',a')\big| \le \max(L,1)\sqrt{1+|\pi|}\;e^{C|\pi|}\big(|x-x'|+|a-a'|\big),$$

for all $t\in[t_k,t_{k+1})$ and $(x,a),(x',a')\in\mathbb{R}^d\times A$.

Proof. Fix $t\in[t_k,t_{k+1})$, $k\le n-1$, $(x,a),(x',a')\in\mathbb{R}^d\times A$, and $\varphi$ an $L$-Lipschitz continuous function on $\mathbb{R}^d\times A$. Let $(Y^\varphi,Z^\varphi,U^\varphi)$ and $(Y^{\varphi,\prime},Z^{\varphi,\prime},U^{\varphi,\prime})$ be the solutions on $[t,t_{k+1}]$ to the BSDEs

$$Y^\varphi_s = \varphi\big(X^{t,x,a}_{t_{k+1}},I^{t,a}_{t_{k+1}}\big) + \int_s^{t_{k+1}} f\big(X^{t,x,a}_r,I^{t,a}_r,Y^\varphi_r,Z^\varphi_r\big)dr - \int_s^{t_{k+1}}Z^\varphi_r\,dW_r - \int_s^{t_{k+1}}\!\!\int_A U^\varphi_r(a)\,\mu(dr,da), \quad t\le s\le t_{k+1},$$

$$Y^{\varphi,\prime}_s = \varphi\big(X^{t,x',a'}_{t_{k+1}},I^{t,a'}_{t_{k+1}}\big) + \int_s^{t_{k+1}} f\big(X^{t,x',a'}_r,I^{t,a'}_r,Y^{\varphi,\prime}_r,Z^{\varphi,\prime}_r\big)dr - \int_s^{t_{k+1}}Z^{\varphi,\prime}_r\,dW_r - \int_s^{t_{k+1}}\!\!\int_A U^{\varphi,\prime}_r(a)\,\mu(dr,da), \quad t\le s\le t_{k+1}.$$

From Theorems 3.4 and 3.5 in [3], we have the identification:

$$Y^\varphi_t = \mathcal{T}^\pi_k[\varphi](t,x,a) \quad \text{and} \quad Y^{\varphi,\prime}_t = \mathcal{T}^\pi_k[\varphi](t,x',a'). \qquad (4.1.9)$$

We now estimate the difference between the processes $Y^\varphi$ and $Y^{\varphi,\prime}$, and set $\delta Y^\varphi = Y^\varphi-Y^{\varphi,\prime}$, $\delta Z^\varphi = Z^\varphi-Z^{\varphi,\prime}$, $\delta X = X^{t,x,a}-X^{t,x',a'}$, $\delta I = I^{t,a}-I^{t,a'}$. By Ito's formula, the Lipschitz condition on $f$ and $\varphi$, and Young's inequality, we have

$$\mathbb{E}\big[|\delta Y^\varphi_s|^2\big] + \mathbb{E}\Big[\int_s^{t_{k+1}}|\delta Z^\varphi_r|^2 dr\Big] \le L^2\,\mathbb{E}\big[|\delta X_{t_{k+1}}|^2+|\delta I_{t_{k+1}}|^2\big] + C\int_s^{t_{k+1}}\mathbb{E}\big[|\delta Y^\varphi_r|^2\big]dr + \frac12\mathbb{E}\Big[\int_s^{t_{k+1}}\big(|\delta X_r|^2+|\delta I_r|^2+|\delta Z^\varphi_r|^2\big)dr\Big],$$

for any $s\in[t,t_{k+1}]$. Now, from classical estimates on jump-diffusion processes we have

$$\sup_{r\in[t,t_{k+1}]}\mathbb{E}\big[|\delta X_r|^2+|\delta I_r|^2\big] \le e^{C|\pi|}\big(|x-x'|^2+|a-a'|^2\big),$$

and thus:

$$\mathbb{E}\big[|\delta Y^\varphi_s|^2\big] \le (L^2+|\pi|)\,e^{C|\pi|}\big(|x-x'|^2+|a-a'|^2\big) + C\int_s^{t_{k+1}}\mathbb{E}\big[|\delta Y^\varphi_r|^2\big]dr,$$

for all $s\in[t,t_{k+1}]$. By Gronwall's lemma, this yields

$$\sup_{s\in[t,t_{k+1}]}\mathbb{E}\big[|\delta Y^\varphi_s|^2\big] \le (L^2+|\pi|)\,e^{2C|\pi|}\big(|x-x'|^2+|a-a'|^2\big),$$

which proves the required result from the identification (4.1.9):

$$\big|\mathcal{T}^\pi_k[\varphi](t,x,a)-\mathcal{T}^\pi_k[\varphi](t,x',a')\big| \le \sqrt{L^2+|\pi|}\;e^{C|\pi|}\big(|x-x'|+|a-a'|\big) \le \max(L,1)\sqrt{1+|\pi|}\;e^{C|\pi|}\big(|x-x'|+|a-a'|\big). \qquad \Box$$

Proposition 4.1.1 There exists a unique viscosity solution $\vartheta^\pi$ with linear growth condition to the IPDE (4.1.4)-(4.1.5), and this solution satisfies:

$$\big|\vartheta^\pi(t,x,a)-\vartheta^\pi(t,x',a')\big| \le \max(L^2,1)\sqrt{\big(e^{2C|\pi|}(1+|\pi|)\big)^{n-k}}\,\big(|x-x'|+|a-a'|\big), \qquad (4.1.10)$$

for all $k=0,\ldots,n-1$, $t\in[t_k,t_{k+1})$, $(x,a),(x',a')\in\mathbb{R}^d\times A$.


Proof. We prove by a backward induction on $k$ that the IPDE (4.1.4)-(4.1.5) admits a unique solution on $[t_k,T]\times\mathbb{R}^d\times A$, which satisfies (4.1.10).

• For $k=n-1$, we directly get the existence and uniqueness of $\vartheta^\pi$ on $[t_{n-1},T]\times\mathbb{R}^d\times A$ from Theorems 3.4 and 3.5 in [3], and we have $\vartheta^\pi = \mathcal{T}^\pi_{n-1}[g]$ on $[t_{n-1},T)\times\mathbb{R}^d\times A$. Moreover, we also get by Lemma 4.1.1:

$$\big|\vartheta^\pi(t,x,a)-\vartheta^\pi(t,x',a')\big| \le \max(L^2,1)\sqrt{e^{2C|\pi|}(1+|\pi|)}\,\big(|x-x'|+|a-a'|\big)$$

for all $t\in[t_{n-1},t_n)$, $(x,a),(x',a')\in\mathbb{R}^d\times A$.

• Suppose that the result holds true at step $k+1$, i.e. there exists a unique function $\vartheta^\pi$ on $[t_{k+1},T]\times\mathbb{R}^d\times A$ with linear growth satisfying (4.1.4)-(4.1.5) and (4.1.10). It remains to prove that $\vartheta^\pi$ is uniquely determined by (4.1.5) on $[t_k,t_{k+1})\times\mathbb{R}^d\times A$ and that it satisfies (4.1.10) on $[t_k,t_{k+1})\times\mathbb{R}^d\times A$. Since $\vartheta^\pi$ satisfies (4.1.10) at time $t_{k+1}$, we deduce that the function

$$\psi_{k+1}(x) := \sup_{a\in A}\vartheta^\pi(t_{k+1},x,a), \qquad x\in\mathbb{R}^d,$$

is also Lipschitz continuous, and satisfies by the induction hypothesis:

$$|\psi_{k+1}(x)-\psi_{k+1}(x')| \le \max(L^2,1)\sqrt{\big(e^{2C|\pi|}(1+|\pi|)\big)^{n-k-1}}\,|x-x'|, \qquad (4.1.11)$$

for all $x,x'\in\mathbb{R}^d$. Under (HFC) and (HBC), we can apply Theorems 3.4 and 3.5 in [3], and we get that $\vartheta^\pi$ is the unique viscosity solution with linear growth to (4.1.5) on $[t_k,t_{k+1})\times\mathbb{R}^d\times A$, with $\vartheta^\pi = \mathcal{T}^\pi_k[\psi_{k+1}]$. Thus it exists and is unique on $[t_k,T]\times\mathbb{R}^d\times A$. From Lemma 4.1.1 and (4.1.11), we then get

$$\big|\vartheta^\pi(t,x,a)-\vartheta^\pi(t,x',a')\big| = \big|\mathcal{T}^\pi_k[\psi_{k+1}](t,x,a)-\mathcal{T}^\pi_k[\psi_{k+1}](t,x',a')\big| \le \max(L^2,1)\sqrt{\big(e^{2C|\pi|}(1+|\pi|)\big)^{n-k-1}}\sqrt{(1+|\pi|)e^{2C|\pi|}}\,\big(|x-x'|+|a-a'|\big) \le \max(L^2,1)\sqrt{\big(e^{2C|\pi|}(1+|\pi|)\big)^{n-k}}\,\big(|x-x'|+|a-a'|\big)$$

for any $t\in[t_k,t_{k+1})$ and $(x,a),(x',a')\in\mathbb{R}^d\times A$, which proves the required induction inequality at step $k$. $\Box$

Remark 4.1.1 The function $\vartheta^\pi(t,x,\cdot)$ is continuous on $A$ for each $(t,x)$, and so the function $v^\pi$ is well-defined by (4.1.6). Moreover, the function $\vartheta^\pi$ may be written recursively as:

$$\begin{cases} \vartheta^\pi(T,\cdot,\cdot) = g & \text{on } \mathbb{R}^d\times A,\\ \vartheta^\pi = \mathcal{T}^\pi_k\big[v^\pi(t_{k+1},\cdot)\big] & \text{on } [t_k,t_{k+1})\times\mathbb{R}^d\times A, \end{cases} \qquad (4.1.12)$$

for $k=0,\ldots,n-1$. In particular, $\vartheta^\pi$ is continuous on $(t_k,t_{k+1})\times\mathbb{R}^d\times A$, $k\le n-1$. $\Box$
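The recursion (4.1.12) suggests a simple numerical procedure, in the spirit of the scheme studied in Chapter 5: between grid times, replace the operator $\mathcal{T}^\pi_k$ by a crude one-step Monte Carlo/Euler approximation, and at grid times take the supremum over $a$. The sketch below only illustrates this backward structure for a one-dimensional $X$, a finite set $A$, and a driver $f = f(x,a)$; it is not the scheme analyzed in this document, and all names (`b`, `sigma`, `f`, `g`, grids, sample sizes) are illustrative assumptions.

```python
import numpy as np

A_set = np.array([-1.0, 0.0, 1.0])                     # finite control set A (assumption)
b = lambda x, a: a - 0.5 * x                           # toy coefficients (assumptions)
sigma = lambda x, a: 0.3
f = lambda x, a: -0.5 * a**2 + x                       # driver f(x, a)
g = lambda x: np.abs(x)                                # terminal condition g(x)

T, n, M = 1.0, 10, 2000                                # horizon, grid size, MC samples per point
x_grid = np.linspace(-3.0, 3.0, 61)                    # space grid used for interpolation
t_grid = np.linspace(0.0, T, n + 1)
rng = np.random.default_rng(1)

def one_step_T(v_next, x, a, dt):
    """Crude one-step approximation of T^pi_k[v_next](t_k, x, a):
    E[f(x, a) dt + v_next(X_{t_{k+1}})] with a single Euler step for X; the regime I is
    frozen at a over the step (jumps of I inside the interval are neglected here, in the
    spirit of the piecewise-constant-policy comparison of Section 4.2.2)."""
    x_next = x + b(x, a) * dt + sigma(x, a) * np.sqrt(dt) * rng.standard_normal(M)
    return np.mean(f(x, a) * dt + np.interp(x_next, x_grid, v_next))

# Backward recursion: v_pi(t_k, .) = sup_a (approximate) T^pi_k[v_pi(t_{k+1}, .)](t_k, ., a)
v_next = g(x_grid)                                     # v_pi(T, .) = g
for k in range(n - 1, -1, -1):
    dt = t_grid[k + 1] - t_grid[k]
    v_next = np.array([max(one_step_T(v_next, x, a, dt) for a in A_set) for x in x_grid])

print("approximate v_pi(0, x=0):", float(np.interp(0.0, x_grid, v_next)))
```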

As a consequence of the above proposition, we obtain the uniform Lipschitz property of $\vartheta^\pi$ and $v^\pi$, with a Lipschitz constant independent of $\pi$.


Corollary 4.1.1 There exists a constant $C$ (independent of $|\pi|$) such that

$$\big|\vartheta^\pi(t,x,a)-\vartheta^\pi(t,x',a')\big| + \big|v^\pi(t,x,a)-v^\pi(t,x',a')\big| \le C\big(|x-x'|+|a-a'|\big),$$

for all $t\in[0,T]$, $x,x'\in\mathbb{R}^d$, $a,a'\in A$.

Proof. Recalling that $n|\pi|$ is bounded, we see that the sequence appearing in (4.1.10), namely $\big(\big(e^{2C|\pi|}(1+|\pi|)\big)^{n-k}\big)_{0\le k\le n-1}$, is bounded uniformly in $|\pi|$ (or $n$), which shows the required Lipschitz property of $\vartheta^\pi$. Since $A$ is assumed to be compact, this shows in particular that the function $v^\pi$ defined by the relation (4.1.6) is well-defined and finite. Moreover, by noting that

$$\Big|\sup_{a\in A}\vartheta^\pi(t,x,a) - \sup_{a\in A}\vartheta^\pi(t,x',a)\Big| \le \sup_{a\in A}\big|\vartheta^\pi(t,x,a)-\vartheta^\pi(t,x',a)\big|$$

for all $(t,x)\in[0,T]\times\mathbb{R}^d$, we also obtain the required Lipschitz property for $v^\pi$. $\Box$

We now turn to the existence of a solution to the discretely jump-constrained BSDE.

Proposition 4.1.2 The BSDE (4.1.1)-(4.1.2)-(4.1.3) admits a unique solution $(Y^\pi,\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)$ in $\mathbf{S}^2\times\mathbf{S}^2\times\mathbf{L}^2(\mathbf{W})\times\mathbf{L}^2(\mu)$. Moreover we have

$$\mathcal{Y}^\pi_t = \vartheta^\pi(t,X_t,I_t), \quad \text{and} \quad Y^\pi_t = v^\pi(t,X_t,I_t) \qquad (4.1.13)$$

for all $t\in[0,T]$.

Proof. We prove by backward induction on $k$ that $(Y^\pi,\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)$ is well defined and satisfies (4.1.13) on $[t_k,T]$.

• Suppose that $k=n-1$. From Corollary 2.3 in [3], we know that $(\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)$ exists and is unique on $[t_{n-1},T]$. Moreover, from Theorems 3.4 and 3.5 in [3], we get $\mathcal{Y}^\pi_t = \mathcal{T}^\pi_k[g](t,X_t,I_t) = \vartheta^\pi(t,X_t,I_t)$ on $[t_{n-1},T]$. By (4.1.3), we then have, for all $t\in[t_{n-1},T)$:

$$Y^\pi_t = \mathbf{1}_{(t_{n-1},T)}(t)\,\vartheta^\pi(t,X_t,I_t) + \mathbf{1}_{\{t_{n-1}\}}(t)\,\operatorname*{ess\,sup}_{a\in A}\vartheta^\pi(t,X_t,a) = \mathbf{1}_{(t_{n-1},T)}(t)\,\vartheta^\pi(t,X_t,I_t) + \mathbf{1}_{\{t_{n-1}\}}(t)\,\sup_{a\in A}\vartheta^\pi(t,X_t,a) = v^\pi(t,X_t,I_t),$$

since the essential supremum and the supremum coincide by continuity of $a\mapsto\vartheta^\pi(t,X_t,a)$ on the compact set $A$.

• Suppose that the result holds true for some $k\le n-1$. Then we see that $(\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)$ is defined on $[t_{k-1},t_k)$ as the solution to a BSDE driven by $W$ and $\mu$ with terminal condition $v^\pi(t_k,X_{t_k})$. Since $v^\pi$ satisfies a linear growth condition, we know again by Corollary 2.3 in [3] that $(\mathcal{Y}^\pi,\mathcal{Z}^\pi,\mathcal{U}^\pi)$, and thus also $Y^\pi$, exists and is unique on $[t_{k-1},t_k)$. Moreover, using again Theorems 3.4 and 3.5 in [3], we get (4.1.13) on $[t_{k-1},t_k)$. $\Box$

We end this section with a conditional regularity result for the discretely jump-constrained BSDE.


Proposition 4.1.3 There exists some constant $C$ such that

$$\sup_{t\in[t_k,t_{k+1})}\mathbb{E}_{t_k}\big[|\mathcal{Y}^\pi_t-\mathcal{Y}^\pi_{t_k}|^2\big] + \sup_{t\in(t_k,t_{k+1}]}\mathbb{E}_{t_k}\big[|Y^\pi_t-Y^\pi_{t_{k+1}}|^2\big] \le C\big(1+|X_{t_k}|^2\big)|\pi|,$$

for all $k=0,\ldots,n-1$.

Proof. Fix $k\le n-1$. By Ito's formula, we have for all $t\in[t_k,t_{k+1})$:

$$\mathbb{E}_{t_k}\big[|\mathcal{Y}^\pi_t-\mathcal{Y}^\pi_{t_k}|^2\big] = 2\,\mathbb{E}_{t_k}\Big[\int_{t_k}^t f(X_s,I_s,\mathcal{Y}^\pi_s,\mathcal{Z}^\pi_s)\big(\mathcal{Y}^\pi_{t_k}-\mathcal{Y}^\pi_s\big)ds\Big] + \mathbb{E}_{t_k}\Big[\int_{t_k}^t|\mathcal{Z}^\pi_s|^2 ds\Big] + \mathbb{E}_{t_k}\Big[\int_{t_k}^t\!\!\int_A|\mathcal{U}^\pi_s(a)|^2\lambda(da)ds\Big]$$
$$\le \mathbb{E}_{t_k}\Big[\int_{t_k}^t|\mathcal{Y}^\pi_s-\mathcal{Y}^\pi_{t_k}|^2 ds\Big] + C|\pi|\Big(1+\mathbb{E}_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]}|X_s|^2\Big]\Big) + C|\pi|\,\mathbb{E}_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]}\Big(|\mathcal{Y}^\pi_s|^2+|\mathcal{Z}^\pi_s|^2+\int_A|\mathcal{U}^\pi_s(a)|^2\lambda(da)\Big)\Big],$$

by the linear growth condition on $f$ (recall also that $A$ is compact) and Young's inequality. Now, by standard estimates for $X$ under the linear growth condition on $b$ and $\sigma$, we have:

$$\mathbb{E}_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]}|X_s|^2\Big] \le C\big(1+|X_{t_k}|^2\big). \qquad (4.1.14)$$

We also know from Proposition 4.2 in [8], under (H1) and (H2), that there exists a constant $C$, depending only on the Lipschitz constants of $b$, $\sigma$, $f$ and $v^\pi(t_{k+1},\cdot)$ (which does not depend on $\pi$ by Corollary 4.1.1), such that

$$\mathbb{E}_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]}\Big(|\mathcal{Y}^\pi_s|^2+|\mathcal{Z}^\pi_s|^2+\int_A|\mathcal{U}^\pi_s(a)|^2\lambda(da)\Big)\Big] \le C\big(1+|X_{t_k}|^2\big). \qquad (4.1.15)$$

We deduce that

$$\mathbb{E}_{t_k}\big[|\mathcal{Y}^\pi_t-\mathcal{Y}^\pi_{t_k}|^2\big] \le \mathbb{E}_{t_k}\Big[\int_{t_k}^t|\mathcal{Y}^\pi_s-\mathcal{Y}^\pi_{t_k}|^2 ds\Big] + C|\pi|\big(1+|X_{t_k}|^2\big),$$

and we conclude for the regularity of $\mathcal{Y}^\pi$ by Gronwall's lemma. Finally, from the definition (4.1.2)-(4.1.3) of $Y^\pi$ and $\mathcal{Y}^\pi$, the Ito isometry for stochastic integrals, and the linear growth condition on $f$, we have for all $t\in(t_k,t_{k+1})$:

$$\mathbb{E}_{t_k}\big[|Y^\pi_t-Y^\pi_{t_{k+1}}|^2\big] = \mathbb{E}_{t_k}\big[|\mathcal{Y}^\pi_t-Y^\pi_{t_{k+1}}|^2\big] \le 3\,\mathbb{E}_{t_k}\Big[\int_{t_k}^{t_{k+1}}\Big(|f(X_s,I_s,\mathcal{Y}^\pi_s,\mathcal{Z}^\pi_s)|^2+|\mathcal{Z}^\pi_s|^2+\int_A|\mathcal{U}^\pi_s(a)|^2\lambda(da)\Big)ds\Big]$$
$$\le C|\pi|\,\mathbb{E}_{t_k}\Big[1+\sup_{s\in[t_k,t_{k+1}]}\Big(|X_s|^2+|\mathcal{Y}^\pi_s|^2+|\mathcal{Z}^\pi_s|^2+\int_A|\mathcal{U}^\pi_s(a)|^2\lambda(da)\Big)\Big] \le C|\pi|\big(1+|X_{t_k}|^2\big),$$

where we used again (4.1.14) and (4.1.15). This ends the proof. $\Box$


4.2 Convergence of the discretely jump-constrained BSDE

This section is devoted to the convergence of the discretely jump-constrained BSDE towards the minimal solution to the BSDE with nonpositive jumps.

Under (HFC) and (HBC), we have seen in Chapter 3 the existence and uniqueness of a minimal solution $(Y,Z,U,K)\in\mathbf{S}^2\times\mathbf{L}^2(\mathbf{W})\times\mathbf{L}^2(\mu)\times\mathbf{L}^2(\mu)$ to

$$\begin{cases} Y_t = g(X_T) + \displaystyle\int_t^T f(X_s,I_s,Y_s,Z_s)\,ds + K_T-K_t - \int_t^T Z_s\,dW_s - \int_t^T\!\!\int_A U_s(a)\,\mu(ds,da), & 0\le t\le T,\\[1mm] U_t(a) \le 0, & d\mathbb{P}\otimes dt\otimes\lambda(da) \text{ a.e.} \end{cases} \qquad (4.2.16)$$

Moreover, the minimal solution $Y$ is of the form

$$Y_t = v(t,X_t), \qquad 0\le t\le T, \qquad (4.2.17)$$

where $v:[0,T]\times\mathbb{R}^d\to\mathbb{R}$ is a viscosity solution with linear growth to the fully nonlinear HJB-type equation:

$$\begin{cases} -\displaystyle\sup_{a\in A}\Big[\mathcal{L}^a v + f\big(x,a,v,\sigma^{\intercal}(x,a)D_x v\big)\Big] = 0, & \text{on } [0,T)\times\mathbb{R}^d,\\[1mm] v(T,x) = g, & \text{on } \mathbb{R}^d, \end{cases} \qquad (4.2.18)$$

where

$$\mathcal{L}^a v = \frac{\partial v}{\partial t} + b(x,a).D_x v + \frac12\mathrm{tr}\big(\sigma\sigma^{\intercal}(x,a)D_x^2 v\big).$$

We shall make the standing assumption that a comparison principle holds for (4.2.18).

(HC) Let $w$ (resp. $\bar w$) be a lower-semicontinuous (resp. upper-semicontinuous) viscosity supersolution (resp. subsolution) with linear growth condition to (4.2.18). Then $w\ge\bar w$.

When $f$ does not depend on $y,z$, i.e. when (4.2.18) is the usual HJB equation of a stochastic control problem, Assumption (HC) holds true, see [18] or [38]. In the general case, we refer to [12] for sufficient conditions ensuring comparison principles. Under (HC), the function $v$ in (4.2.17) is the unique viscosity solution to (4.2.18), and it is in particular continuous. Actually, we have the standard Hölder and Lipschitz property (see the Appendix in [29] or [5]):

$$|v(t,x)-v(t',x')| \le C\big(|t-t'|^{\frac12}+|x-x'|\big), \qquad t,t'\in[0,T],\ x,x'\in\mathbb{R}^d. \qquad (4.2.19)$$

This implies that the process $Y$ is continuous, and thus the jump component $U = 0$. In the sequel, we shall focus on the approximation of the remaining components $Y$ and $Z$ of the minimal solution to (4.2.16).

4.2.1 Convergence result

Lemma 4.2.2 We have the following assertions:

1) The family $(\vartheta^\pi)_\pi$ is nondecreasing and upper-bounded by $v$: for any grids $\pi$ and $\pi'$ such that $\pi\subset\pi'$, we have

$$\vartheta^\pi(t,x,a) \le \vartheta^{\pi'}(t,x,a) \le v(t,x), \qquad (t,x,a)\in[0,T]\times\mathbb{R}^d\times A.$$

2) The family $(\vartheta^\pi)_\pi$ satisfies a uniform linear growth condition: there exists a constant $C$ such that

$$|\vartheta^\pi(t,x,a)| \le C(1+|x|),$$

for any $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$ and any grid $\pi$.

Proof. 1) Let us first prove that $\vartheta^\pi\le v$. Since $v$ is a (continuous) viscosity solution to the HJB equation (4.2.18) and does not depend on $a$, we see that $v$ is a viscosity supersolution to the IPDE in (4.1.5) satisfied by $\vartheta^\pi$ on each interval $[t_k,t_{k+1})$. Now, since $v(T,x) = \vartheta^\pi(T,x,a)$, we deduce by the comparison principle for this IPDE (see e.g. Theorem 3.4 in [3]) on $[t_{n-1},T)\times\mathbb{R}^d\times A$ that $v(t,x)\ge\vartheta^\pi(t,x,a)$ for all $t\in[t_{n-1},T]$, $(x,a)\in\mathbb{R}^d\times A$. In particular, $v(t_{n-1}^-,x) = v(t_{n-1},x) \ge \sup_{a\in A}\vartheta^\pi(t_{n-1},x,a) = \vartheta^\pi(t_{n-1}^-,x,a)$. Again, by the comparison principle for the IPDE (4.1.5) on $[t_{n-2},t_{n-1})\times\mathbb{R}^d\times A$, it follows that $v(t,x)\ge\vartheta^\pi(t,x,a)$ for all $t\in[t_{n-2},t_{n-1}]$, $(x,a)\in\mathbb{R}^d\times A$. By backward induction on time, we conclude that $v\ge\vartheta^\pi$ on $[0,T]\times\mathbb{R}^d\times A$.

Let us next consider two partitions $\pi = (t_k)_{0\le k\le n}$ and $\pi' = (t'_k)_{0\le k\le n'}$ of $[0,T]$ with $\pi\subset\pi'$, and denote $m = \max\{k\le n':\ t'_k\notin\pi\}$. Thus, all the points of the grids $\pi$ and $\pi'$ coincide after time $t'_m$, and since $\vartheta^\pi$ and $\vartheta^{\pi'}$ are viscosity solutions to the same IPDE (4.1.5) starting from the same terminal data $g$, we deduce by uniqueness that $\vartheta^\pi = \vartheta^{\pi'}$ on $[t'_m,T]\times\mathbb{R}^d\times A$. Then we have $\vartheta^{\pi'}(t'^-_m,x,a) = \sup_{a\in A}\vartheta^{\pi'}(t'_m,x,a) = \sup_{a\in A}\vartheta^\pi(t'_m,x,a) \ge \vartheta^\pi(t'^-_m,x,a)$, since $\vartheta^\pi$ is continuous outside the points of the grid $\pi$ (recall Remark 4.1.1). Now, since $\vartheta^\pi$ and $\vartheta^{\pi'}$ are viscosity solutions to the same IPDE (4.1.5) on $[t'_{m-1},t'_m)$, we deduce by the comparison principle that $\vartheta^{\pi'}\ge\vartheta^\pi$ on $[t'_{m-1},t'_m]\times\mathbb{R}^d\times A$. Proceeding by backward induction, we conclude that $\vartheta^{\pi'}\ge\vartheta^\pi$ on $[0,T]\times\mathbb{R}^d\times A$.

2) Denote by $\pi_0 = \{t_0=0,\,t_1=T\}$ the trivial grid of $[0,T]$. Since $\vartheta^{\pi_0}\le\vartheta^\pi\le v$, and $\vartheta^{\pi_0}$ and $v$ satisfy a linear growth condition, we get (recall that $A$ is compact):

$$|\vartheta^\pi(t,x,a)| \le |\vartheta^{\pi_0}(t,x,a)| + |v(t,x)| \le C(1+|x|),$$

for any $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$ and any grid $\pi$. $\Box$

In the sequel, we denote by $\vartheta$ the increasing limit of the sequence $(\vartheta^\pi)_\pi$ as the grid becomes finer, i.e. as its modulus $|\pi|$ goes to zero. The next result shows that $\vartheta$ does not depend on the variable $a$ in $A$.

Proposition 4.2.4 The function $\vartheta$ is l.s.c. and does not depend on the variable $a\in A$:

$$\vartheta(t,x,a) = \vartheta(t,x,a'), \qquad (t,x)\in[0,T]\times\mathbb{R}^d,\ a,a'\in A.$$

To prove this result we use the following lemma. Observe from the definition (4.1.6) of $v^\pi$ that the function $v^\pi$ does not depend on $a$ at the grid times of $\pi$, and we shall write, by misuse of notation, $v^\pi(t_k,x)$ for $k\le n$, $x\in\mathbb{R}^d$.

Lemma 4.2.3 There exists a constant $C$ (not depending on $\pi$) such that

$$\big|\vartheta^\pi(t,x,a)-v^\pi(t_{k+1},x)\big| \le C(1+|x|)\,|\pi|^{\frac12}$$

for all $k=0,\ldots,n-1$, $t\in[t_k,t_{k+1})$, $(x,a)\in\mathbb{R}^d\times A$.


Proof. Fix $k=0,\ldots,n-1$, $t\in[t_k,t_{k+1})$ and $(x,a)\in\mathbb{R}^d\times A$. Let $(Y,Z,U)$ be the solution to the BSDE

$$Y_s = v^\pi\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big) + \int_s^{t_{k+1}} f\big(X^{t,x,a}_r,I^{t,a}_r,Y_r,Z_r\big)dr - \int_s^{t_{k+1}}Z_r\,dW_r - \int_s^{t_{k+1}}\!\!\int_A U_r(a')\,\mu(dr,da'), \qquad s\in[t,t_{k+1}].$$

From Proposition 4.1.2, the Markov property, and the uniqueness of a solution to the BSDE (4.1.1)-(4.1.2)-(4.1.3), we have $Y_s = \vartheta^\pi(s,X^{t,x,a}_s,I^{t,a}_s)$ for $s\in[t,t_{k+1}]$, and so:

$$\big|\vartheta^\pi(t,x,a)-v^\pi(t_{k+1},x)\big| = \big|Y_t-v^\pi(t_{k+1},x)\big| \le \mathbb{E}\big|v^\pi\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)-v^\pi(t_{k+1},x)\big| + \mathbb{E}\Big[\int_t^{t_{k+1}}\big|f\big(X^{t,x,a}_s,I^{t,a}_s,Y_s,Z_s\big)\big|ds\Big]. \qquad (4.2.20)$$

From Corollary 4.1.1, we have

$$\mathbb{E}\big|v^\pi\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)-v^\pi(t_{k+1},x)\big| \le C\sqrt{\mathbb{E}\big[|X^{t,x,a}_{t_{k+1}}-x|^2\big]} \le C\sqrt{|\pi|}. \qquad (4.2.21)$$

Moreover, by the linear growth condition on $f$ in (H2) and on $\vartheta^\pi$ in Lemma 4.2.2, we have

$$\mathbb{E}\Big[\int_t^{t_{k+1}}\big|f(X_s,I_s,Y_s,Z_s)\big|ds\Big] \le C\,\mathbb{E}\Big[\int_t^{t_{k+1}}\big(1+|X^{t,x,a}_s|+|Z_s|\big)ds\Big].$$

By classical estimates, we have

$$\sup_{s\in[t,T]}\mathbb{E}\big[|X^{t,x,a}_s|^2\big] \le C\big(1+|x|^2\big).$$

Moreover, under (H1) and (H2), we know from Proposition 4.2 in [8] that there exists a constant $C$, depending only on the Lipschitz constants of $b$, $\sigma$, $f$ and $v^\pi(t_{k+1},\cdot)$, such that

$$\mathbb{E}\Big[\sup_{s\in[t_k,t_{k+1}]}|Z_s|^2\Big] \le C\big(1+|x|^2\big).$$

This proves that

$$\mathbb{E}\Big[\int_t^{t_{k+1}}\big|f(X_s,I_s,Y_s,Z_s)\big|ds\Big] \le C(1+|x|)\,|\pi|.$$

Combining this last estimate with (4.2.20) and (4.2.21), we get the result. $\Box$

Proof of Proposition 4.2.4. The function $\vartheta$ is l.s.c. as the supremum of the l.s.c. functions $\vartheta^\pi$. Fix $(t,x)\in[0,T)\times\mathbb{R}^d$ and $a,a'\in A$. Let $(\pi_p)_p$ be a sequence of subdivisions of $[0,T]$ such that $|\pi_p|\downarrow 0$ as $p\uparrow\infty$. We define the sequence $(t_p)_p$ of $[0,T]$ by

$$t_p = \min\big\{s\in\pi_p:\ s>t\big\}, \qquad p\ge 0.$$

Since $|\pi_p|\to 0$ as $p\to\infty$, we get $t_p\to t$ as $p\to+\infty$. We then have from the previous lemma:

$$\big|\vartheta^{\pi_p}(t,x,a)-\vartheta^{\pi_p}(t,x,a')\big| \le \big|\vartheta^{\pi_p}(t,x,a)-v^{\pi_p}(t_p,x)\big| + \big|v^{\pi_p}(t_p,x)-\vartheta^{\pi_p}(t,x,a')\big| \le 2C|\pi_p|^{\frac12}.$$

Sending $p$ to $\infty$, we obtain that $\vartheta(t,x,a) = \vartheta(t,x,a')$. $\Box$

Corollary 4.2.2 We have the identification $\vartheta = v$, and the sequence $(v^\pi)_\pi$ also converges to $v$.

Proof. We proceed in two steps.

Step 1. The function $\vartheta$ is a supersolution to (4.2.18). Since $\vartheta^{\pi_k}(T,\cdot) = g$ for all $k\ge 1$, we first notice that $\vartheta(T,\cdot) = g$. Next, since $\vartheta$ does not depend on the variable $a$, we have

$$\vartheta^\pi(t,x,a) \uparrow \vartheta(t,x) \quad \text{as } |\pi|\downarrow 0$$

for any $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$. Moreover, since the function $\vartheta$ is l.s.c., we have

$$\vartheta = \vartheta_* = \liminf_{|\pi|\to 0}{}_*\,\vartheta^\pi, \qquad (4.2.22)$$

where

$$\liminf_{|\pi|\to 0}{}_*\,\vartheta^\pi(t,x,a) := \liminf_{\substack{|\pi|\to 0\\ (t',x',a')\to(t,x,a),\ t'<T}}\vartheta^\pi(t',x',a'), \qquad (t,x,a)\in[0,T]\times\mathbb{R}^d\times\mathbb{R}^q.$$

Fix now some $(t,x)\in[0,T]\times\mathbb{R}^d$, $a\in A$ and $(p,q,M)\in J^{2,-}\vartheta(t,x)$, the limiting parabolic subjet of $\vartheta$ at $(t,x)$ (see the definition in [12]). From standard stability results, there exists a sequence $(\pi_k,t_k,x_k,a_k,p_k,q_k,M_k)_k$ such that

$$(p_k,q_k,M_k) \in J^{2,-}\vartheta^{\pi_k}(t_k,x_k,a_k)$$

for all $k\ge 1$, and

$$(t_k,x_k,a_k,\vartheta^{\pi_k}(t_k,x_k,a_k)) \longrightarrow (t,x,a,\vartheta(t,x,a)) \quad \text{as } k\to\infty,\ |\pi_k|\to 0.$$

From the viscosity supersolution property of $\vartheta^{\pi_k}$ to (4.1.5), written in terms of subjets, we have

$$-p_k - b(x_k,a_k).q_k - \frac12\mathrm{tr}\big(\sigma\sigma^{\intercal}(x_k,a_k)M_k\big) - f\big(x_k,a_k,\vartheta^{\pi_k}(t_k,x_k,a_k),\sigma^{\intercal}(x_k,a_k)q_k\big) - \int_A\big(\vartheta^{\pi_k}(t_k,x_k,a')-\vartheta^{\pi_k}(t_k,x_k,a_k)\big)\lambda(da') \ge 0$$

for all $k\ge 1$. Sending $k$ to infinity and using (4.2.22), we get

$$-p - b(x,a).q - \frac12\mathrm{tr}\big(\sigma\sigma^{\intercal}(x,a)M\big) - f\big(x,a,\vartheta(t,x),\sigma^{\intercal}(x,a)q\big) \ge 0.$$

Since $a$ is arbitrary in $A$, this shows

$$-p - \sup_{a\in A}\Big[b(x,a).q + \frac12\mathrm{tr}\big(\sigma\sigma^{\intercal}(x,a)M\big) + f\big(x,a,\vartheta(t,x),\sigma^{\intercal}(x,a)q\big)\Big] \ge 0,$$

i.e. the viscosity supersolution property of $\vartheta$ to (4.2.18).

Step 2. Comparison. Since the PDE (4.2.18) satisfies a comparison principle, we have from the previous step $\vartheta\ge v$, and we conclude with Lemma 4.2.2 that $\vartheta = v$. Finally, by the definition (4.1.6) of $v^\pi$ and from Lemma 4.2.2, we clearly have $\vartheta^\pi\le v^\pi\le v$, which also proves that $(v^\pi)_\pi$ converges to $v$. $\Box$

In terms of the discretely jump-constrained BSDE, the convergence result is formulated as follows:

Proposition 4.2.5 We have $\mathcal{Y}^\pi_t \le Y^\pi_t \le Y_t$, $0\le t\le T$, and

$$\mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-\mathcal{Y}^\pi_t|^2\Big] + \mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-Y^\pi_t|^2\Big] + \mathbb{E}\Big[\int_0^T|Z_t-\mathcal{Z}^\pi_t|^2 dt\Big] \longrightarrow 0,$$

as $|\pi|$ goes to zero.

Proof. Recalling from (4.2.17) and (4.1.13) the representations

$$Y_t = v(t,X_t), \qquad Y^\pi_t = v^\pi(t,X_t,I_t), \qquad \mathcal{Y}^\pi_t = \vartheta^\pi(t,X_t,I_t), \qquad (4.2.23)$$

and the first assertion of Lemma 4.2.2, we clearly have $\mathcal{Y}^\pi_t\le Y^\pi_t\le Y_t$, $0\le t\le T$.

The convergence in $\mathbf{S}^2$ of $\mathcal{Y}^\pi$ to $Y$ and of $Y^\pi$ to $Y$ follows from the representation (4.2.23), the pointwise convergence of $\vartheta^\pi$ and $v^\pi$ to $v$ in Corollary 4.2.2, and the dominated convergence theorem, recalling that $0\le(v-v^\pi)(t,x,a)\le(v-\vartheta^\pi)(t,x,a)\le v(t,x)\le C(1+|x|)$. Let us now turn to the component $Z$. By the definition (4.1.1)-(4.1.2)-(4.1.3) of the discretely jump-constrained BSDE, we notice that $\mathcal{Y}^\pi$ can be written on $[0,T]$ as:

$$\mathcal{Y}^\pi_t = g(X_T) + \int_t^T f(X_s,I_s,\mathcal{Y}^\pi_s,\mathcal{Z}^\pi_s)\,ds - \int_t^T\mathcal{Z}^\pi_s\,dW_s - \int_t^T\!\!\int_A\mathcal{U}^\pi_s(a)\,\mu(ds,da) + K^\pi_T - K^\pi_t,$$

where $K^\pi$ is the nondecreasing process defined by $K^\pi_t = \sum_{t_k\le t}\big(Y^\pi_{t_k}-\mathcal{Y}^\pi_{t_k}\big)$, for $t\in[0,T]$. Denote $\delta Y = Y-\mathcal{Y}^\pi$, $\delta Z = Z-\mathcal{Z}^\pi$, $\delta U = U-\mathcal{U}^\pi$ and $\delta K = K-K^\pi$. From Ito's formula, Young's inequality and (H2), there exists a constant $C$ such that

$$\mathbb{E}\big[|\delta Y_t|^2\big] + \frac12\mathbb{E}\Big[\int_t^T|\delta Z_s|^2 ds\Big] + \frac12\mathbb{E}\Big[\int_t^T\!\!\int_A|\delta U_s(a)|^2\lambda(da)ds\Big] \le C\int_t^T\mathbb{E}\big[|\delta Y_s|^2\big]ds + \frac1\varepsilon\mathbb{E}\Big[\sup_{s\in[0,T]}|\delta Y_s|^2\Big] + \varepsilon\,\mathbb{E}\big[|\delta K_T-\delta K_t|^2\big] \qquad (4.2.24)$$

for all $t\in[0,T]$, with $\varepsilon$ a constant to be chosen later. From the definition of $\delta K$ we have

$$\delta K_T-\delta K_t = \delta Y_t - \int_t^T\big(f(X_s,I_s,Y_s,Z_s)-f(X_s,I_s,\mathcal{Y}^\pi_s,\mathcal{Z}^\pi_s)\big)ds + \int_t^T\delta Z_s\,dW_s + \int_t^T\!\!\int_A\delta U_s(a)\,\mu(ds,da).$$

Therefore, by (H2), we get the existence of a constant $C'$ such that

$$\mathbb{E}\big[|\delta K_T-\delta K_t|^2\big] \le C'\Big(\mathbb{E}\Big[\sup_{s\in[0,T]}|\delta Y_s|^2\Big] + \mathbb{E}\Big[\int_t^T|\delta Z_s|^2 ds\Big] + \mathbb{E}\Big[\int_t^T\!\!\int_A|\delta U_s(a)|^2\lambda(da)ds\Big]\Big).$$

Taking $\varepsilon = \frac{1}{4C'}$ and plugging this last inequality into (4.2.24), we get the existence of a constant $C''$ such that

$$\mathbb{E}\Big[\int_t^T|\delta Z_s|^2 ds\Big] + \mathbb{E}\Big[\int_t^T\!\!\int_A|\delta U_s(a)|^2\lambda(da)ds\Big] \le C''\,\mathbb{E}\Big[\sup_{s\in[0,T]}|\delta Y_s|^2\Big], \qquad (4.2.25)$$

which shows the $\mathbf{L}^2(\mathbf{W})$ convergence of $\mathcal{Z}^\pi$ to $Z$ from the $\mathbf{S}^2$ convergence of $\mathcal{Y}^\pi$ to $Y$. $\Box$

4.2.2 Rate of convergence

We next provide an error estimate for the convergence of the discretely jump-constrained BSDE. We shall combine BSDE methods with PDE arguments adapted from the shaking-coefficients approach of Krylov [29] and the switching systems approximation of Barles and Jacobsen [5]. We make the following further assumptions:

(HFC') The functions $b$ and $\sigma$ are uniformly bounded:

$$\sup_{x\in\mathbb{R}^d,\,a\in A}|b(x,a)| + |\sigma(x,a)| < \infty.$$

(HBC') The function $f$ does not depend on $z$: $f(x,a,y,z) = f(x,a,y)$ for all $(x,a,y,z)\in\mathbb{R}^d\times A\times\mathbb{R}\times\mathbb{R}^d$, and

(i) the functions $f(\cdot,\cdot,0)$ and $g$ are uniformly bounded:

$$\sup_{x\in\mathbb{R}^d,\,a\in A}|f(x,a,0)| + |g(x)| < \infty,$$

(ii) for all $(x,a)\in\mathbb{R}^d\times A$, $y\mapsto f(x,a,y)$ is convex.

Under these assumptions, we obtain the rate of convergence of $v^\pi$ and $\vartheta^\pi$ towards $v$.

Theorem 4.2.1 Under (HFC') and (HBC'), there exists a constant $C$ such that

$$0 \le v(t,x)-v^\pi(t,x,a) \le v(t,x)-\vartheta^\pi(t,x,a) \le C|\pi|^{\frac1{10}}$$

for all $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$ and all grids $\pi$ with $|\pi|\le 1$. Moreover, when $f(x,a)$ does not depend on $y$, the rate of convergence is improved to $|\pi|^{\frac16}$.

Before proving this result, we give as a corollary the rate of convergence for the discretely jump-constrained BSDE.

Corollary 4.2.3 Under (HFC') and (HBC'), there exists a constant $C$ such that

$$\mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-\mathcal{Y}^\pi_t|^2\Big] + \mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-Y^\pi_t|^2\Big] + \mathbb{E}\Big[\int_0^T|Z_t-\mathcal{Z}^\pi_t|^2 dt\Big] \le C|\pi|^{\frac15}$$

for all grids $\pi$ with $|\pi|\le 1$, and the above rate is improved to $|\pi|^{\frac13}$ when $f(x,a)$ does not depend on $y$.


Proof. From the representation (4.2.23) and Theorem 4.2.1, we immediately have

$$\mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-\mathcal{Y}^\pi_t|^2\Big] + \mathbb{E}\Big[\sup_{t\in[0,T]}|Y_t-Y^\pi_t|^2\Big] \le C|\pi|^{\frac15}, \qquad (4.2.26)$$

(resp. $|\pi|^{\frac13}$ when $f(x,a)$ does not depend on $y$). Finally, the convergence rate for $Z$ follows from the inequality (4.2.25). $\Box$

Remark 4.2.2 The above convergence rate $|\pi|^{\frac1{10}}$ is the optimal rate that one can prove in our generalized stochastic control context with fully nonlinear HJB equation by the PDE approach and the shaking-coefficients technique, see [29], [5], [17] or [42]. However, this rate may not be the sharpest one. In the case of continuously reflected BSDEs, i.e. BSDEs with an upper or lower constraint on $Y$, it is known that $Y$ can be approximated by discretely reflected BSDEs, i.e. BSDEs where the reflection on $Y$ operates only at a finite set of times on the grid $\pi$, with a rate $|\pi|^{\frac12}$ (see [1]). The standard argument for proving this rate is based on the representation of the continuously (resp. discretely) reflected BSDE as an optimal stopping problem where stopping is allowed over the whole time interval (resp. only at the grid times). In our jump-constrained case, we know from Chapter 3 that, when $f(x,a)$ does not depend on $y$ and $z$, the minimal solution to the BSDE with nonpositive jumps has the stochastic control representation

$$v(t,x) = \sup_\alpha\,\mathbb{E}\Big[\int_t^T f(X^\alpha_s,\alpha_s)\,ds + g(X^\alpha_T)\,\Big|\,X^\alpha_t = x\Big], \qquad (4.2.27)$$

with controlled diffusion in $\mathbb{R}^d$:

$$dX^\alpha_t = b(X^\alpha_t,\alpha_t)\,dt + \sigma(X^\alpha_t,\alpha_t)\,dW_t,$$

and where $\alpha$ is an adapted control process valued in $A$. We shall prove an analogous representation for discretely jump-constrained BSDEs, and this helps to improve the rate of convergence from $|\pi|^{\frac1{10}}$ to $|\pi|^{\frac16}$. $\Box$

special case where f(x, a) does not depend on y, and then address the case f(x, a, y).

Proof of Theorem 4.2.1 in the case f(x, a).

In the case where f(x, a) does not depend on y, z, by (linear) Feynman-Kac formula for ϑπ

solution to (4.1.5), and by definition of vπ in (4.1.6), we have:

vπ(tk, x) = supa∈A

E[ ∫ tk+1

tk

f(Xtk,x,at , Itk,at )dt+ vπ(tk+1, X

tk,x,atk+1

)], k ≤ n− 1, x ∈ Rd.

By induction, this dynamic programming relation leads to the following stochastic control

problem with discrete time policies:

vπ(tk, x) = supα∈AπF

E[ ∫ T

tk

f(Xtk,x,αt , Iαt )dt+ g(Xtk,x,α

T )],

52

Page 54: BSDE Representation and Discretization for Hamilton-Jacobi ...idris/Enit-2014.pdf · solution vto the semi-linear PDE (1.0.3) is connected to the BSDE: Y t = g(X0 T) + Z T t F(X0

where AπF is the set of discrete time processes α = (αtj )j≤n−1, with αtj Ftj -measurable,

valued in A, and

Xtk,x,αt = x+

∫ t

tk

b(Xt,x,αs , Iαs )ds+

∫ t

tk

σ(Xtk,x,αs , Iαs )dWs, tk ≤ t ≤ T,

Iαt = αtj +

∫(tj ,t]

∫A

(a− Iαs−)µ(ds, da), tj ≤ t < tj+1, j ≤ n− 1.

In other words, vπ(tk, x) corresponds to the value function for a stochastic control problem

where the controller can act only at the dates tj of the grid π, and then let the regime of

the coefficients of the diffusion evolve according to the Poisson random measure µ. Let us

introduce the following stochastic control problem with piece-wise constant control policies:

vπ(tk, x) = supα∈AπF

E[ ∫ T

tk

f(Xtk,x,αt , Iαt )dt+ g(Xtk,x,α

T )],

where for α = (αtj )j≤n−1 ∈ AπF:

Xtk,x,αt = x+

∫ t

tk

b(Xt,x,αs , Iαs )ds+

∫ t

tk

σ(Xtk,x,αs , Iαs )dWs, tk ≤ t ≤ T,

Iαt = αtj , tj ≤ t < tj+1, j ≤ n− 1.

It is shown in [28] that vπ approximates the value function v for the controlled diffusion

problem (4.2.27), solution to the HJB equation (4.2.18), with a rate |π|16 :

0 ≤ v(tk, x)− vπ(tk, x) ≤ C|π|16 , (4.2.28)

for all tk ∈ π, x ∈ Rd. Now, recalling that A is compact and λ(A) < ∞, it is clear that

there exists some positive constant C such that for all α ∈ AπF, j ≤ n− 1:

E[

supt∈[tj ,tj+1)

|Iαt − Iαt |2]≤ C|π|,

and then by standard arguments under Lipschitz condition on b, σ:

E[

supt∈[tj ,tj+1]

|Xtk,x,αt − Xtk,x,α

t |2]≤ C|π|, k ≤ j ≤ n− 1, x ∈ Rd.

By the Lipschitz conditions on f and g, it follows that

|vπ(tk, x)− vπ(tk, x)| ≤ C|π|12 ,

and thus with (4.2.28):

0 ≤ supx∈Rd

(v − vπ)(tk, x) ≤ C|π|16 .

Finally, by combining with the estimate in Lemma 4.2.3, which gives actually under (HBC’)(i):

|ϑπ(t, x, a)− vπ(tk+1, x)| ≤ C|π|12 , t ∈ [tk, tk+1), (x, a) ∈ Rd ×A,

53

Page 55: BSDE Representation and Discretization for Hamilton-Jacobi ...idris/Enit-2014.pdf · solution vto the semi-linear PDE (1.0.3) is connected to the BSDE: Y t = g(X0 T) + Z T t F(X0

together with the 1/2-Holder property of v in time (see (4.2.19)), we obtain:

sup(t,x,a)∈[0,T ]×Rd×A

(v − ϑπ)(t, x, a) ≤ C(|π|16 + |π|

12 ) ≤ C|π|

16 ,

for |π| ≤ 1. This ends the proof. 2

Let us now turn to the case where $f(x,a,y)$ may also depend on $y$. We cannot rely anymore on the convergence rate result of [28]. Instead, recalling that $A$ is compact and that $\sigma$, $b$ and $f$ are Lipschitz in $(x,a)$, we shall apply the switching system method of Barles and Jacobsen [5], which is a variation of the shaken-coefficients method and smoothing technique used by Krylov [29], in order to obtain approximate smooth subsolutions to (4.2.18). By Lemmas 3.3 and 3.4 in [5], one can find a family of smooth functions $(w_\varepsilon)_{0<\varepsilon\le 1}$ on $[0,T]\times\mathbb{R}^d$ such that:

$$\sup_{[0,T]\times\mathbb{R}^d}|w_\varepsilon| \le C, \qquad (4.2.29)$$
$$\sup_{[0,T]\times\mathbb{R}^d}|w_\varepsilon-w| \le C\varepsilon^{\frac13}, \qquad (4.2.30)$$
$$\sup_{[0,T]\times\mathbb{R}^d}\big|\partial_t^{\beta_0}D^\beta w_\varepsilon\big| \le C\varepsilon^{1-2\beta_0-\sum_{i=1}^d\beta_i}, \qquad \beta_0\in\mathbb{N},\ \beta = (\beta_1,\ldots,\beta_d)\in\mathbb{N}^d, \qquad (4.2.31)$$

for some positive constant $C$ independent of $\varepsilon$, and, by the convexity of $f$ in (HBC')(ii), for any $\varepsilon\in(0,1]$ and $(t,x)\in[0,T]\times\mathbb{R}^d$, there exists $a_{t,x,\varepsilon}\in A$ such that:

$$-\mathcal{L}^{a_{t,x,\varepsilon}}w_\varepsilon(t,x) - f\big(x,a_{t,x,\varepsilon},w_\varepsilon(t,x)\big) \ge 0. \qquad (4.2.32)$$
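For the reader's convenience, let us sketch the standard computation behind the derivative bounds (4.2.31). This is only an outline of the argument of [29] and [5, Lemmas 3.3-3.4], under the simplifying assumption that $w_\varepsilon$ is obtained by mollifying, at the parabolic scale $(\varepsilon^2,\varepsilon)$, a function $w$ that is $1/2$-Hölder in $t$ and Lipschitz in $x$ (as in (4.2.19)). Let $\rho$ be a smooth probability density supported in the unit ball of $\mathbb{R}\times\mathbb{R}^d$ and set

$$w_\varepsilon(t,x) = \int w\big(t-\varepsilon^2 s,\,x-\varepsilon y\big)\,\rho(s,y)\,ds\,dy.$$

For $(\beta_0,\beta)\neq(0,0)$, differentiating under the integral and changing variables gives

$$\partial_t^{\beta_0}D^\beta_x w_\varepsilon(t,x) = \varepsilon^{-2\beta_0-|\beta|}\int w\big(t-\varepsilon^2 s,\,x-\varepsilon y\big)\,\partial_s^{\beta_0}D^\beta_y\rho(s,y)\,ds\,dy,$$

and since $\int\partial_s^{\beta_0}D^\beta_y\rho = 0$, we may subtract $w(t,x)$ inside the integral; the regularity of $w$ then yields, on the support of $\rho$, $|w(t-\varepsilon^2 s,x-\varepsilon y)-w(t,x)| \le C\big(\varepsilon|s|^{1/2}+\varepsilon|y|\big) \le C\varepsilon$, hence

$$\big|\partial_t^{\beta_0}D^\beta_x w_\varepsilon(t,x)\big| \le C\,\varepsilon^{1-2\beta_0-|\beta|},$$

which is (4.2.31) with $|\beta| = \sum_i\beta_i$. The bound (4.2.30), with the exponent $1/3$, comes from combining such estimates with the shaking of the coefficients and the switching system error, which we do not reproduce here.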

Recalling the definition of the operator $\mathcal{T}^\pi_k$ in (4.1.7), we define, for any function $\varphi$ on $[0,T]\times\mathbb{R}^d\times A$, Lipschitz in $(x,a)$:

$$\mathcal{T}^\pi[\varphi](t,x,a) := \mathcal{T}^\pi_k\big[\varphi(t_{k+1},\cdot,\cdot)\big](t,x,a), \qquad t\in[t_k,t_{k+1}),\ (x,a)\in\mathbb{R}^d\times A,$$

for $k=0,\ldots,n-1$, and

$$\mathcal{S}^\pi[\varphi](t,x,a) := \frac{1}{|\pi|}\Big[\varphi(t,x) - \mathcal{T}^\pi[\varphi](t,x,a) + (t_{k+1}-t)\big(\mathcal{L}^a\varphi(t,x) + f\big(x,a,\varphi(t,x)\big)\big)\Big],$$

for $(t,x,a)\in[t_k,t_{k+1})\times\mathbb{R}^d\times A$, $k\le n-1$.

We have the following key error bound on $\mathcal{S}^\pi$.

Lemma 4.2.4 Let (HFC') and (HBC')(i) hold. There exists a constant $C$ such that

$$\big|\mathcal{S}^\pi[\varphi_\varepsilon](t,x,a)\big| \le C\big(|\pi|^{\frac12}(1+\varepsilon^{-1}) + |\pi|\varepsilon^{-3}\big), \qquad (t,x,a)\in[0,T]\times\mathbb{R}^d\times A,$$

for any family $(\varphi_\varepsilon)_\varepsilon$ of smooth functions on $[0,T]\times\mathbb{R}^d$ satisfying (4.2.29) and (4.2.31).


Proof. Fix $(t,x,a)\in[0,T]\times\mathbb{R}^d\times A$. If $t=T$, we have $|\mathcal{S}^\pi[\varphi_\varepsilon](t,x,a)| = 0$. Suppose that $t<T$ and fix $k\le n-1$ such that $t\in[t_k,t_{k+1})$. Given a smooth function $\varphi_\varepsilon$ satisfying (4.2.29) and (4.2.31), we split:

$$\big|\mathcal{S}^\pi[\varphi_\varepsilon](t,x,a)\big| \le A_\varepsilon(t,x,a) + B_\varepsilon(t,x,a),$$

where

$$A_\varepsilon(t,x,a) := \frac{1}{|\pi|}\Big|\mathcal{T}^\pi[\varphi_\varepsilon](t,x,a) - \mathbb{E}\big[\varphi_\varepsilon\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)\big] - (t_{k+1}-t)\,f\big(x,a,\varphi_\varepsilon(t,x)\big)\Big|,$$

and

$$B_\varepsilon(t,x,a) := \frac{1}{|\pi|}\Big|\mathbb{E}\big[\varphi_\varepsilon\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)\big] - \varphi_\varepsilon(t,x) - (t_{k+1}-t)\,\mathcal{L}^a\varphi_\varepsilon(t,x)\Big|,$$

and we study each term $A_\varepsilon$ and $B_\varepsilon$ separately.

1. Estimate on $A_\varepsilon(t,x,a)$. Define $(Y^{\varphi_\varepsilon},Z^{\varphi_\varepsilon},U^{\varphi_\varepsilon})$ as the solution to the BSDE on $[t,t_{k+1}]$:

$$Y^{\varphi_\varepsilon}_s = \varphi_\varepsilon\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big) + \int_s^{t_{k+1}} f\big(X^{t,x,a}_r,I^{t,a}_r,Y^{\varphi_\varepsilon}_r\big)dr - \int_s^{t_{k+1}}Z^{\varphi_\varepsilon}_r\,dW_r - \int_s^{t_{k+1}}\!\!\int_A U^{\varphi_\varepsilon}_r(a)\,\mu(dr,da), \qquad s\in[t,t_{k+1}]. \qquad (4.2.33)$$

From Theorems 3.4 and 3.5 in [3], we have $Y^{\varphi_\varepsilon}_t = \mathcal{T}^\pi[\varphi_\varepsilon](t,x,a)$, and by taking expectations in (4.2.33) we thus get:

$$Y^{\varphi_\varepsilon}_t = \mathcal{T}^\pi[\varphi_\varepsilon](t,x,a) = \mathbb{E}\big[\varphi_\varepsilon\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)\big] + \mathbb{E}\Big[\int_t^{t_{k+1}} f\big(X^{t,x,a}_s,I^{t,a}_s,Y^{\varphi_\varepsilon}_s\big)ds\Big],$$

and so:

$$A_\varepsilon(t,x,a) \le \frac{1}{|\pi|}\mathbb{E}\Big[\int_t^{t_{k+1}}\big|f\big(X^{t,x,a}_s,I^{t,a}_s,Y^{\varphi_\varepsilon}_s\big)-f\big(x,a,\varphi_\varepsilon(t,x)\big)\big|ds\Big] \le C\Big(\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|X^{t,x,a}_s-x|+|I^{t,a}_s-a|\Big] + \mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}\big|Y^{\varphi_\varepsilon}_s-\varphi_\varepsilon(t,x)\big|\Big]\Big),$$

by the Lipschitz continuity of $f$. From standard estimates for SDEs, we have (recall that the coefficients $b$ and $\sigma$ are bounded under (HFC') and that $A$ is compact):

$$\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|X^{t,x,a}_s-x|+|I^{t,a}_s-a|\Big] \le C|\pi|^{\frac12}. \qquad (4.2.34)$$

Moreover, by (4.2.33), the boundedness condition in (HBC')(i) together with the Lipschitz condition on $f$, and the Burkholder-Davis-Gundy inequality, we have:

$$\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}\big|Y^{\varphi_\varepsilon}_s-\varphi_\varepsilon(t,x)\big|\Big] \le \mathbb{E}\big[\big|\varphi_\varepsilon\big(t_{k+1},X^{t,x,a}_{t_{k+1}}\big)-\varphi_\varepsilon(t,x)\big|\big] + C|\pi|\,\mathbb{E}\Big[1+\sup_{s\in[t,t_{k+1}]}|Y^{\varphi_\varepsilon}_s|\Big] + C|\pi|\Big(\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|Z^{\varphi_\varepsilon}_s|^2\Big] + \mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}\int_A|U^{\varphi_\varepsilon}_s(a)|^2\lambda(da)\Big]\Big).$$

From standard estimates for the BSDE (4.2.33), we have:

$$\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|Y^{\varphi_\varepsilon}_s|^2\Big] \le C,$$

for some positive constant $C$ depending only on the Lipschitz constant of $f$, the upper bound of $|f(x,a,0,0)|$ in (HBC')(i), and the upper bound of $|\varphi_\varepsilon|$ in (4.2.29). Moreover, from the estimate in Proposition 4.2 in [8] on the components $Z^{\varphi_\varepsilon}$ and $U^{\varphi_\varepsilon}$ of the BSDE with jumps (4.2.33), there exists some constant $C$ depending only on the Lipschitz constants of $b$, $\sigma$, $f$, and on the Lipschitz constant of $\varphi_\varepsilon(t_{k+1},\cdot)$ (which does not depend on $\varepsilon$ by (4.2.31)), such that:

$$\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|Z^{\varphi_\varepsilon}_s|^2\Big] + \mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}\int_A|U^{\varphi_\varepsilon}_s(a)|^2\lambda(da)\Big] \le C.$$

From (4.2.31), we then have:

$$\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}\big|Y^{\varphi_\varepsilon}_s-\varphi_\varepsilon(t,x)\big|\Big] \le C\Big(|t_{k+1}-t|\,\varepsilon^{-1} + \mathbb{E}\big[|X^{t,x,a}_{t_{k+1}}-x|\big] + |\pi|\Big) \le C|\pi|^{\frac12}\big(1+\varepsilon^{-1}\big),$$

by (4.2.34). This leads to the error bound for $A_\varepsilon(t,x,a)$:

$$A_\varepsilon(t,x,a) \le C|\pi|^{\frac12}\big(1+\varepsilon^{-1}\big).$$

2. Estimate on $B_\varepsilon(t,x,a)$. From Ito's formula we have

$$B_\varepsilon(t,x,a) = \frac{1}{|\pi|}\Big|\mathbb{E}\Big[\int_t^{t_{k+1}}\Big(\mathcal{L}^{I^{t,a}_s}\varphi_\varepsilon\big(s,X^{t,x,a}_s\big) - \mathcal{L}^a\varphi_\varepsilon(t,x)\Big)ds\Big]\Big| \le B^1_\varepsilon(t,x,a) + B^2_\varepsilon(t,x,a),$$

where

$$B^1_\varepsilon(t,x,a) = \frac{1}{|\pi|}\mathbb{E}\Big[\int_t^{t_{k+1}}\Big|\big(b\big(X^{t,x,a}_s,I^{t,a}_s\big)-b(x,a)\big).D_x\varphi_\varepsilon\big(s,X^{t,x,a}_s\big) + \frac12\mathrm{tr}\Big[\big(\sigma\sigma^{\intercal}\big(X^{t,x,a}_s,I^{t,a}_s\big)-\sigma\sigma^{\intercal}(x,a)\big)D^2_x\varphi_\varepsilon\big(s,X^{t,x,a}_s\big)\Big]\Big|ds\Big]$$

and

$$B^2_\varepsilon(t,x,a) = \frac{1}{|\pi|}\mathbb{E}\Big[\int_t^{t_{k+1}}\big|\mathcal{L}^a_{t,x}\varphi_\varepsilon\big(s,X^{t,x,a}_s\big) - \mathcal{L}^a_{t,x}\varphi_\varepsilon(t,x)\big|ds\Big],$$

with $\mathcal{L}^a_{t,x}$ defined by

$$\mathcal{L}^a_{t,x}\varphi_\varepsilon(t',x') = \frac{\partial\varphi_\varepsilon}{\partial t}(t',x') + b(x,a).D_x\varphi_\varepsilon(t',x') + \frac12\mathrm{tr}\big(\sigma\sigma^{\intercal}(x,a)D^2_x\varphi_\varepsilon(t',x')\big).$$

Under (HFC), (HFC'), and by (4.2.31), we have

$$B^1_\varepsilon(t,x,a) \le C(1+\varepsilon^{-1})\,\mathbb{E}\Big[\sup_{s\in[t,t_{k+1}]}|X^{t,x,a}_s-x|+|I^{t,a}_s-a|\Big] \le C(1+\varepsilon^{-1})\,|\pi|^{\frac12},$$

56

Page 58: BSDE Representation and Discretization for Hamilton-Jacobi ...idris/Enit-2014.pdf · solution vto the semi-linear PDE (1.0.3) is connected to the BSDE: Y t = g(X0 T) + Z T t F(X0

where we used again (4.2.34). On the other hand, since ϕε is smooth, we have from Ito’s

formula

B2ε (t, x, a) =

1

|π|E[ ∫ tk+1

t

∣∣∣ ∫ s

tLI

t,ar Lat,xφ(r,Xt,x,a

r )dr∣∣∣ds] .

Under (HFC’) and by (4.2.31), we then see that

B2ε (t, x, a) ≤ C|π|ε−3,

and so:

Bε(t, x, a) ≤ C(|π|

12 (1 + ε−1) + |π|ε−3

).

Together with the estimate for Aε(t, x, a), this proves the error bound for |Sπ[ϕε](t, x, a)|.2

We next state a maximum principle type result for the operator $\mathcal{T}^\pi$.

Lemma 4.2.5 Let $\varphi$ and $\psi$ be two functions on $[0,T]\times\mathbb{R}^d\times A$, Lipschitz in $(x,a)$. Then, there exists some positive constant $C$ independent of $\pi$ such that

$$\sup_{(t,x,a)\in[t_k,t_{k+1}]\times\mathbb{R}^d\times A} \big(\mathcal{T}^\pi[\varphi] - \mathcal{T}^\pi[\psi]\big)(t,x,a) \;\leq\; e^{C|\pi|} \sup_{(x,a)\in\mathbb{R}^d\times A} (\varphi-\psi)(t_{k+1},x,a),$$

for all $k = 0,\ldots,n-1$.

Proof. Fix $k \leq n-1$, and set

$$M \;:=\; \sup_{(x,a)\in\mathbb{R}^d\times A} (\varphi-\psi)(t_{k+1},x,a).$$

We can assume w.l.o.g. that $M < \infty$, since otherwise the required inequality is trivial. Let us denote by $\Delta v$ the function

$$\Delta v(t,x,a) \;=\; \mathcal{T}^\pi[\varphi](t,x,a) - \mathcal{T}^\pi[\psi](t,x,a),$$

for all $(t,x,a) \in [t_k,t_{k+1}]\times\mathbb{R}^d\times A$. By definition of $\mathcal{T}^\pi$, and from the Lipschitz condition on $f$, we see that $\Delta v$ is a viscosity subsolution to

$$-\mathcal{L}^a\Delta v(t,x,a) - C\big(|\Delta v(t,x,a)| + |D\Delta v(t,x,a)|\big) - \int_A \big(\Delta v(t,x,a') - \Delta v(t,x,a)\big)\,\lambda(da') \;=\; 0, \quad (t,x,a) \in [t_k,t_{k+1})\times\mathbb{R}^d\times A,$$
$$\Delta v(t_{k+1},x,a) \;\leq\; M, \quad (x,a) \in \mathbb{R}^d\times A. \qquad\qquad (4.2.35)$$

Then, we easily check that the function $\Phi$ defined by

$$\Phi(t,x,a) \;=\; M e^{C(t_{k+1}-t)}, \qquad (t,x,a) \in [t_k,t_{k+1}]\times\mathbb{R}^d\times A,$$

is a solution to

$$-\mathcal{L}^a\Phi(t,x,a) - C\big(|\Phi(t,x,a)| + |D\Phi(t,x,a)|\big) - \int_A \big(\Phi(t,x,a') - \Phi(t,x,a)\big)\,\lambda(da') \;=\; 0, \quad (t,x,a) \in [t_k,t_{k+1})\times\mathbb{R}^d\times A,$$
$$\Phi(t_{k+1},x,a) \;=\; M, \quad (x,a) \in \mathbb{R}^d\times A. \qquad\qquad (4.2.36)$$

From the comparison theorem in [3] for viscosity solutions of semi-linear IPDEs, we get that $\Delta v \leq \Phi$ on $[t_k,t_{k+1}]\times\mathbb{R}^d\times A$, which proves the required inequality. $\Box$

Proof of Theorem 4.2.1. By (4.1.6) and (4.1.12), we observe that $v^\pi$ is a fixed point of $\mathcal{T}^\pi$, i.e.

$$\mathcal{T}^\pi[v^\pi] \;=\; v^\pi.$$

On the other hand, by (4.2.32) and the estimate of Lemma 4.2.4 applied to $w_\varepsilon$, we have:

$$w_\varepsilon(t,x) - \mathcal{T}^\pi[w_\varepsilon](t,x,a_{t,x,\varepsilon}) \;\leq\; |\pi|\,\mathcal{S}^\pi[w_\varepsilon](t,x,a_{t,x,\varepsilon}) \;\leq\; C|\pi|\,S(\pi,\varepsilon),$$

where we set $S(\pi,\varepsilon) := |\pi|^{\frac{1}{2}}(1+\varepsilon^{-1}) + |\pi|\varepsilon^{-3}$. Fix $k \leq n-1$. By Lemma 4.2.5, we then have for all $t \in [t_k,t_{k+1}]$, $x \in \mathbb{R}^d$:

$$w_\varepsilon(t,x) - v^\pi(t,x,a_{t,x,\varepsilon}) \;=\; w_\varepsilon(t,x) - \mathcal{T}^\pi[w_\varepsilon](t,x,a_{t,x,\varepsilon}) + \big(\mathcal{T}^\pi[w_\varepsilon] - \mathcal{T}^\pi[v^\pi]\big)(t,x,a_{t,x,\varepsilon})$$
$$\leq\; C|\pi|\,S(\pi,\varepsilon) + e^{C|\pi|} \sup_{(x,a)\in\mathbb{R}^d\times A}(w_\varepsilon - v^\pi)(t_{k+1},x,a). \qquad\qquad (4.2.37)$$

Recalling that, by its very definition, $v^\pi$ does not depend on $a \in A$ at the grid times of $\pi$, and denoting $M_k := \sup_{x\in\mathbb{R}^d}(w_\varepsilon - v^\pi)(t_k,x)$, we have by (4.2.37) the relation:

$$M_k \;\leq\; C|\pi|\,S(\pi,\varepsilon) + e^{C|\pi|} M_{k+1}.$$

By induction, this yields:

$$\sup_{x\in\mathbb{R}^d}(w_\varepsilon - v^\pi)(t_k,x) \;\leq\; C\,\frac{e^{Cn|\pi|}-1}{e^{C|\pi|}-1}\,|\pi|\,S(\pi,\varepsilon) + e^{Cn|\pi|}\sup_{x\in\mathbb{R}^d}(w_\varepsilon - v^\pi)(T,x)$$
$$\leq\; C\,S(\pi,\varepsilon) + C\sup_{x\in\mathbb{R}^d}(w_\varepsilon - v)(T,x),$$

since $n|\pi|$ is bounded and $v(T,x) = v^\pi(T,x)$ $(= g(x))$. From (4.2.30), we then get:

$$\sup_{x\in\mathbb{R}^d}(v - v^\pi)(t_k,x) \;\leq\; C\big(\varepsilon^{\frac{1}{3}} + |\pi|^{\frac{1}{2}}(1+\varepsilon^{-1}) + |\pi|\varepsilon^{-3}\big).$$

By minimizing the r.h.s. of this estimate with respect to $\varepsilon$, this leads to the error bound when taking $\varepsilon = |\pi|^{\frac{3}{10}} \leq 1$:

$$\sup_{x\in\mathbb{R}^d}(v - v^\pi)(t_k,x) \;\leq\; C|\pi|^{\frac{1}{10}}.$$

Finally, by combining with the estimate in Lemma 4.2.3, which actually gives under (HBC')(i):

$$\big|\vartheta^\pi(t,x,a) - v^\pi(t_{k+1},x)\big| \;\leq\; C|\pi|^{\frac{1}{2}}, \qquad t \in [t_k,t_{k+1}), \; (x,a) \in \mathbb{R}^d\times A,$$

together with the $1/2$-Hölder property of $v$ in time (see (4.2.19)), we obtain:

$$\sup_{(t,x,a)\in[0,T]\times\mathbb{R}^d\times A}(v - \vartheta^\pi)(t,x,a) \;\leq\; C\big(|\pi|^{\frac{1}{10}} + |\pi|^{\frac{1}{2}}\big) \;\leq\; C|\pi|^{\frac{1}{10}}.$$

This ends the proof. $\Box$
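Incidentally, the choice $\varepsilon = |\pi|^{3/10}$ used in the proof above comes from a standard balancing of exponents; as a quick check (our computation):

\[
\varepsilon = |\pi|^\theta \;\Longrightarrow\; \varepsilon^{\frac{1}{3}} + |\pi|^{\frac{1}{2}}\varepsilon^{-1} + |\pi|\varepsilon^{-3} \;=\; |\pi|^{\frac{\theta}{3}} + |\pi|^{\frac{1}{2}-\theta} + |\pi|^{1-3\theta},
\]

and equating the two extreme exponents, $\tfrac{\theta}{3} = 1-3\theta$, gives $\theta = \tfrac{3}{10}$, for which the three exponents are $\tfrac{1}{10}$, $\tfrac{1}{5}$ and $\tfrac{1}{10}$; the worst one, $\tfrac{1}{10}$, is the rate obtained above.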


Chapter 5

Approximation scheme for jump-constrained BSDE and stochastic control problem

In this chapter, we study the discrete time approximation of the solution to the discretely jump-constrained BSDE. We first deal with the approximation of the forward process. We then provide a discrete-time scheme for the discretely jump-constrained BSDE. We finally show that the scheme converges to the discretely constrained solution as the mesh of the grid goes to zero.

5.1 The forward regime switching process

In this section, we consider the discrete time approximation of the forward process $(X,I)$ on $[0,T]$. Recall that it is defined by

$$X_t \;=\; X_0 + \int_0^t b(X_s,I_s)\,ds + \int_0^t \sigma(X_s,I_s)\,dW_s,$$
$$I_t \;=\; I_0 + \int_{(0,t]}\int_A (a - I_{s^-})\,\mu(ds,da), \qquad\qquad (5.1.1)$$

for all $t \in [0,T]$.

In the sequel, we shall denote by $C_1$ a generic positive constant, which depends only on the Lipschitz constants of $b$ and $\sigma$, on $T$, $(X_0,I_0)$ and $\lambda(A) < \infty$, and may vary from line to line. Under (HFC), we have existence and uniqueness of a solution to (5.1.1), and in the sequel we shall denote by $(X^{t,x,a}, I^{t,a})$ the solution to (5.1.1) starting from $(x,a)$ at time $t$.

Remark 5.1.3 We do not make any ellipticity assumption on $\sigma$. In particular, some lines and columns of $\sigma$ may be equal to zero, so there is no loss of generality in assuming that $X$ and $W$ have the same dimension $d$. $\Box$

Denoting by $(T_n,\iota_n)_n$ the jump times and marks associated to $\mu$, we observe that $I$ can be written explicitly as:

$$I_t \;=\; I_0\, 1_{[0,T_1)}(t) + \sum_{n\geq 1} \iota_n\, 1_{[T_n,T_{n+1})}(t), \qquad 0 \leq t \leq T,$$

where the jump times $(T_n)_n$ are those of a Poisson process with intensity $\bar\lambda := \int_A \lambda(da) < \infty$, and the i.i.d. marks $(\iota_n)_n$ follow the probability distribution $\bar\lambda(da) := \lambda(da)/\bar\lambda$. Assuming that one can simulate the probability distribution $\bar\lambda$, we then see that the pure jump process $I$ is perfectly simulated. Given a partition $\pi = \{t_0 = 0 < \ldots < t_k < \ldots < t_n = T\}$ of $[0,T]$, we shall use the natural Euler scheme $X^\pi$ for $X$, defined by:

$$X^\pi_0 \;=\; X_0,$$
$$X^\pi_{t_{k+1}} \;=\; X^\pi_{t_k} + b\big(X^\pi_{t_k}, I_{t_k}\big)(t_{k+1}-t_k) + \sigma\big(X^\pi_{t_k}, I_{t_k}\big)\big(W_{t_{k+1}} - W_{t_k}\big),$$

for $k = 0,\ldots,n-1$. We denote as usual by $|\pi| = \max_{k\leq n-1}(t_{k+1}-t_k)$ the modulus of $\pi$, and assume that $n|\pi|$ is bounded by a constant independent of $n$, which holds for instance when the grid is regular, i.e. $t_{k+1}-t_k = |\pi|$ for all $k \leq n-1$. We also define the continuous-time version of $X^\pi$ by setting:

$$X^\pi_t \;=\; X^\pi_{t_k} + b\big(X^\pi_{t_k}, I_{t_k}\big)(t-t_k) + \sigma\big(X^\pi_{t_k}, I_{t_k}\big)\big(W_t - W_{t_k}\big), \qquad t \in [t_k,t_{k+1}], \; k < n.$$
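For concreteness, here is a minimal simulation sketch of the pair $(I, X^\pi)$ on a regular grid, assuming one-dimensional coefficients `b`, `sigma`, a sampler `sample_mark` for the normalized mark distribution $\bar\lambda(da)$, and the total intensity `lam_bar` $= \bar\lambda$; the names and the one-dimensional setting are our own illustrative choices, not prescriptions of the text.

```python
import numpy as np

def simulate_I_and_euler(b, sigma, sample_mark, lam_bar, X0, I0, T, n, rng):
    """Exact simulation of the pure jump process I (Poisson jump times,
    i.i.d. marks) and Euler scheme X^pi for X on a regular grid of [0, T]."""
    dt = T / n
    grid = np.linspace(0.0, T, n + 1)

    # Exact simulation of I: jump times of a Poisson process of intensity lam_bar.
    jump_times = []
    t = rng.exponential(1.0 / lam_bar)
    while t <= T:
        jump_times.append(t)
        t += rng.exponential(1.0 / lam_bar)
    marks = [sample_mark(rng) for _ in jump_times]

    def I_at(t):
        # Value of I at time t: last mark before t (or I0 if no jump occurred yet).
        idx = np.searchsorted(jump_times, t, side="right") - 1
        return I0 if idx < 0 else marks[idx]

    # Euler scheme for X, with coefficients frozen at (X^pi_{t_k}, I_{t_k}) on each step.
    X = np.empty(n + 1)
    X[0] = X0
    for k in range(n):
        Ik = I_at(grid[k])
        dW = rng.normal(0.0, np.sqrt(dt))
        X[k + 1] = X[k] + b(X[k], Ik) * dt + sigma(X[k], Ik) * dW
    return grid, X, I_at

# Example usage (illustrative coefficients only):
# rng = np.random.default_rng(0)
# grid, X, I_at = simulate_I_and_euler(
#     b=lambda x, a: a * (1.0 - x), sigma=lambda x, a: 0.2 + 0.1 * a,
#     sample_mark=lambda rng: rng.uniform(0.0, 1.0), lam_bar=2.0,
#     X0=1.0, I0=0.5, T=1.0, n=100, rng=rng)
```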

By standard arguments, see e.g. [27], one can obtain under (HFC) the $L^2$-error estimate for the above Euler scheme:

$$E\Big[\sup_{t\in[t_k,t_{k+1}]} \big|X_t - X^\pi_{t_k}\big|^2\Big] \;\leq\; C_1|\pi|, \qquad k < n.$$

For our purpose, we shall need a stronger result, and we introduce the following error control for the Euler scheme:

$$\mathcal{E}^\pi_k(X) \;:=\; E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_k,a}\Big[\sup_{t\in[t_k,t_{k+1}]} \big|X_t - X^\pi_{t_k}\big|^2\Big]\ldots\Big]\Big], \qquad\qquad (5.1.2)$$

where $E_{t_k,a}[.]$ denotes the conditional expectation $E[.|\mathcal{F}_{t_k}, I_{t_k} = a]$. We also denote by $E_{t_k}[.]$ the conditional expectation $E[.|\mathcal{F}_{t_k}]$. Since $I_{t_k}$ is $\mathcal{F}_{t_k}$-measurable, and by the law of iterated conditional expectations, we notice that

$$E\Big[\sup_{t\in[t_k,t_{k+1}]} \big|X_t - X^\pi_{t_k}\big|^2\Big] \;\leq\; \mathcal{E}^\pi_k(X), \qquad k < n.$$

Lemma 5.1.6 Under (HFC), we have

$$\max_{k<n}\, \mathcal{E}^\pi_k(X) \;\leq\; C_1|\pi|.$$

Proof. From the definition of the Euler scheme, and under the linear growth condition in (HFC), we easily see that

$$E_{t_k}\big[|X^\pi_{t_{k+1}}|^2\big] \;\leq\; C_1\big(1 + |X^\pi_{t_k}|^2\big), \qquad k < n. \qquad\qquad (5.1.3)$$

From the definition of the continuous-time Euler scheme, and by the Burkholder-Davis-Gundy inequality, it is also clear that

$$E_{t_k}\Big[\sup_{t\in[t_k,t_{k+1}]} \big|X^\pi_t - X^\pi_{t_k}\big|^2\Big] \;\leq\; C_1\big(1 + |X^\pi_{t_k}|^2\big)|\pi|, \qquad k < n. \qquad\qquad (5.1.4)$$

We also have the standard estimate for the pure jump process $I$ (recall that $A$ is assumed to be compact and $\lambda(A) < \infty$):

$$E_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]} \big|I_s - I_{t_k}\big|^2\Big] \;\leq\; C_1|\pi|. \qquad\qquad (5.1.5)$$

Let us denote $\Delta X_t = X_t - X^\pi_t$, and apply Itô's formula to $|\Delta X_t|^2$, so that for all $t \in [t_k,t_{k+1}]$:

$$|\Delta X_t|^2 \;=\; |\Delta X_{t_k}|^2 + \int_{t_k}^t \Big[ 2\big(b(X_s,I_s) - b(X^\pi_{t_k},I_{t_k})\big).\Delta X_s + \big|\sigma(X_s,I_s) - \sigma(X^\pi_{t_k},I_{t_k})\big|^2 \Big]\,ds + 2\int_{t_k}^t (\Delta X_s)'\big(\sigma(X_s,I_s) - \sigma(X^\pi_{t_k},I_{t_k})\big)\,dW_s$$
$$\leq\; |\Delta X_{t_k}|^2 + C_1\int_{t_k}^t \Big[ |\Delta X_s|^2 + |X^\pi_s - X^\pi_{t_k}|^2 + |I_s - I_{t_k}|^2 \Big]\,ds + 2\int_{t_k}^t (\Delta X_s)'\big(\sigma(X_s,I_s) - \sigma(X^\pi_{t_k},I_{t_k})\big)\,dW_s,$$

from the Lipschitz condition on $b$, $\sigma$ in (HFC). By taking conditional expectation in the above inequality, we then get:

$$E_{t_k}\big[|\Delta X_t|^2\big] \;\leq\; |\Delta X_{t_k}|^2 + C_1\int_{t_k}^t E_{t_k}\Big[ |\Delta X_s|^2 + |X^\pi_s - X^\pi_{t_k}|^2 + |I_s - I_{t_k}|^2 \Big]\,ds$$
$$\leq\; |\Delta X_{t_k}|^2 + C_1\big(1 + |X^\pi_{t_k}|^2\big)|\pi|^2 + C_1\int_{t_k}^t E_{t_k}\big[|\Delta X_s|^2\big]\,ds, \qquad t \in [t_k,t_{k+1}],$$

by (5.1.4)-(5.1.5). From Gronwall's lemma, we thus deduce that

$$E_{t_k}\big[|\Delta X_{t_{k+1}}|^2\big] \;\leq\; e^{C_1|\pi|}|\Delta X_{t_k}|^2 + C_1\big(1 + |X^\pi_{t_k}|^2\big)|\pi|^2, \qquad k < n. \qquad\qquad (5.1.6)$$

Since the right hand side of (5.1.6) does not depend on $I_{t_k}$, this shows that

$$\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_k,a}\big[|\Delta X_{t_{k+1}}|^2\big] \;\leq\; e^{C_1|\pi|}|\Delta X_{t_k}|^2 + C_1\big(1 + |X^\pi_{t_k}|^2\big)|\pi|^2.$$

By taking conditional expectation w.r.t. $\mathcal{F}_{t_{k-1}}$ in the above inequality, using again estimate (5.1.6) together with (5.1.3) at step $k-1$, and iterating this backward procedure until the initial time $t_0 = 0$, we obtain:

$$E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_k,a}\big[|\Delta X_{t_{k+1}}|^2\big]\ldots\Big]\Big] \;\leq\; e^{C_1 n|\pi|}|\Delta X_0|^2 + C_1\big(1 + |X_0|^2\big)|\pi|^2\,\frac{e^{C_1 n|\pi|}-1}{e^{C_1|\pi|}-1} \;\leq\; C_1|\pi|, \qquad\qquad (5.1.7)$$

since $\Delta X_0 = 0$ and $n|\pi|$ is bounded.

Moreover, the process $X$ satisfies standard conditional estimates similar to those for the Euler scheme:

$$E_{t_k}\big[|X_{t_{k+1}}|^2\big] \;\leq\; C_1\big(1 + |X_{t_k}|^2\big), \qquad E_{t_k}\Big[\sup_{t\in[t_k,t_{k+1}]} |X_t - X_{t_k}|^2\Big] \;\leq\; C_1\big(1 + |X_{t_k}|^2\big)|\pi|, \qquad k < n,$$

from which we deduce by backward induction on the conditional expectations:

$$E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_k,a}\Big[\sup_{t\in[t_k,t_{k+1}]} |X_t - X_{t_k}|^2\Big]\ldots\Big]\Big] \;\leq\; C_1|\pi|. \qquad\qquad (5.1.8)$$

Finally, by writing that $\sup_{t\in[t_k,t_{k+1}]} |X_t - X^\pi_{t_k}|^2 \leq 2\sup_{t\in[t_k,t_{k+1}]} |X_t - X_{t_k}|^2 + 2|\Delta X_{t_k}|^2$, and taking successive conditional expectations w.r.t. $\mathcal{F}_{t_\ell}$ and essential suprema over $I_{t_\ell} = a$, for $\ell$ going recursively from $k$ to $0$, we get:

$$\mathcal{E}^\pi_k(X) \;\leq\; 2\,E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_k,a}\Big[\sup_{t\in[t_k,t_{k+1}]} |X_t - X_{t_k}|^2\Big]\ldots\Big]\Big] + 2\,E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A} E_{t_{k-1},a}\big[|\Delta X_{t_k}|^2\big]\ldots\Big]\Big] \;\leq\; C_1|\pi|,$$

by (5.1.7)-(5.1.8), which ends the proof. $\Box$

5.2 BSDE scheme

We consider the discrete time approximation of the discretely jump-constrained BSDE in the case where $f(x,a,y)$ does not depend on $z$, and define the scheme $(\bar Y^\pi, \bar{\mathcal{Y}}^\pi)$ by backward induction on the grid $\pi = \{t_0 = 0 < \ldots < t_k < \ldots < t_n = T\}$:

$$\bar Y^\pi_T \;=\; \bar{\mathcal{Y}}^\pi_T \;=\; g(X^\pi_T),$$
$$\bar{\mathcal{Y}}^\pi_{t_k} \;=\; E_{t_k}\big[\bar Y^\pi_{t_{k+1}}\big] + f\big(X^\pi_{t_k}, I_{t_k}, \bar{\mathcal{Y}}^\pi_{t_k}\big)\,\Delta t_k,$$
$$\bar Y^\pi_{t_k} \;=\; \mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_k,a}\big[\bar{\mathcal{Y}}^\pi_{t_k}\big], \qquad k = 0,\ldots,n-1, \qquad\qquad (5.2.9)$$

where $\Delta t_k = t_{k+1}-t_k$, $E_{t_k}[.]$ stands for $E[.|\mathcal{F}_{t_k}]$, and $E_{t_k,a}[.]$ for $E[.|\mathcal{F}_{t_k}, I_{t_k} = a]$.
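A word on the second line of (5.2.9): it defines $\bar{\mathcal{Y}}^\pi_{t_k}$ only implicitly. Here is a minimal sketch of why this is harmless, assuming $f$ is Lipschitz in $y$ with a constant that we denote $L_f$ (a label of ours for the Lipschitz constant in the standing assumptions): the map

\[
y \;\longmapsto\; E_{t_k}\big[\bar Y^\pi_{t_{k+1}}\big] + f\big(X^\pi_{t_k}, I_{t_k}, y\big)\,\Delta t_k
\]

is a contraction on $\mathbb{R}$ as soon as $L_f\,\Delta t_k < 1$, i.e. for $|\pi|$ small enough, so that $\bar{\mathcal{Y}}^\pi_{t_k}$ is uniquely defined and can be computed by a few fixed point iterations; the explicit variant, where $f$ is evaluated at $E_{t_k}[\bar Y^\pi_{t_{k+1}}]$ instead of $\bar{\mathcal{Y}}^\pi_{t_k}$, differs from the implicit one by a term of order $|\Delta t_k|^2$ at each step.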

By an induction argument, we easily see that $\bar{\mathcal{Y}}^\pi_{t_k}$ is a deterministic function of $(X^\pi_{t_k}, I_{t_k})$, while $\bar Y^\pi_{t_k}$ is a deterministic function of $X^\pi_{t_k}$, for $k = 0,\ldots,n$, and by the Markov property of the process $(X^\pi, I)$, the conditional expectations in (5.2.9) can be replaced by the corresponding regressions:

$$E_{t_k}\big[\bar Y^\pi_{t_{k+1}}\big] \;=\; E\big[\bar Y^\pi_{t_{k+1}} \,\big|\, X^\pi_{t_k}, I_{t_k}\big] \qquad \mbox{and} \qquad E_{t_k,a}\big[\bar{\mathcal{Y}}^\pi_{t_k}\big] \;=\; E\big[\bar{\mathcal{Y}}^\pi_{t_k} \,\big|\, X^\pi_{t_k}, I_{t_k} = a\big].$$

We then have:

$$\bar{\mathcal{Y}}^\pi_{t_k} \;=\; \vartheta^\pi_k\big(X^\pi_{t_k}, I_{t_k}\big), \qquad \bar Y^\pi_{t_k} \;=\; v^\pi_k\big(X^\pi_{t_k}\big),$$

for some sequences of functions $(\vartheta^\pi_k)_k$ and $(v^\pi_k)_k$ defined respectively on $\mathbb{R}^d\times A$ and $\mathbb{R}^d$ by backward induction:

$$v^\pi_n(x) \;=\; \vartheta^\pi_n(x,a) \;=\; g(x),$$
$$\vartheta^\pi_k(x,a) \;=\; E\big[v^\pi_{k+1}\big(X^\pi_{t_{k+1}}\big) \,\big|\, \big(X^\pi_{t_k}, I_{t_k}\big) = (x,a)\big] + f\big(x,a,\vartheta^\pi_k(x,a)\big)\,\Delta t_k,$$
$$v^\pi_k(x) \;=\; \sup_{a\in A}\, \vartheta^\pi_k(x,a), \qquad k = 0,\ldots,n-1. \qquad\qquad (5.2.10)$$

There are different well-known methods (Longstaff-Schwartz least squares regression, quantization, Malliavin integration by parts, see e.g. [1], [21], [9]) for computing the above conditional expectations, and hence the functions $\vartheta^\pi_k$ and $v^\pi_k$. It appears that, in our context, the simulation-regression method with basis functions defined on $\mathbb{R}^d\times A$ is quite suitable, since it allows us to derive at each time step $k \leq n-1$ a functional form $a_k(x)$ which attains the supremum over $A$ in $\vartheta^\pi_k(x,a)$. We shall see later in this section that the feedback control $(a_k(x))_k$ provides an approximation of the optimal control for the HJB equation associated to a stochastic control problem when $f(x,a)$ does not depend on $y$. We refer to the companion paper [24] for the details about the computation of the functions $\vartheta^\pi_k$, $v^\pi_k$, $a_k$ by simulation-regression methods, and the associated error analysis. A schematic implementation of the resulting backward algorithm is sketched below.
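The following sketch implements (5.2.10) by least squares regression on Monte Carlo samples of $(X^\pi_{t_k}, I_{t_k})$. It is only illustrative: the regression basis `features`, the discretization `A_grid` of the compact control space, and the fixed point treatment of the implicit term are our own simplifying choices, not prescriptions of the text.

```python
import numpy as np

def backward_scheme(Xs, Is, A_grid, f, g, dt, features, n_fixed_point=5):
    """Regression-based backward induction for (5.2.10).
    Xs, Is: arrays of shape (n+1, M) with simulated paths of (X^pi, I) on the grid.
    A_grid: 1d array discretizing the compact control space A.
    features(x, a): array of basis functions, shape (len(x), q).
    Returns the estimated value v^pi_0(X_0) and the feedback maps a_k(.)."""
    n = Xs.shape[0] - 1
    V_next = g(Xs[n])                       # v^pi_n(X^pi_T) = g(X^pi_T)
    feedback = [None] * n
    for k in range(n - 1, -1, -1):
        # Regress v^pi_{k+1}(X^pi_{t_{k+1}}) on basis functions of (X^pi_{t_k}, I_{t_k}).
        Phi = features(Xs[k], Is[k])        # shape (M, q)
        coef, *_ = np.linalg.lstsq(Phi, V_next, rcond=None)

        def cond_exp(x, a, coef=coef):      # regression proxy of E[. | X = x, I = a]
            x = np.atleast_1d(x)
            return features(x, np.full_like(x, a)) @ coef

        def theta_k(x, a, cond_exp=cond_exp):
            # Implicit step of (5.2.10), solved by a few fixed point iterations.
            y = cond_exp(x, a)
            for _ in range(n_fixed_point):
                y = cond_exp(x, a) + f(x, a, y) * dt
            return y

        def a_k(x, theta_k=theta_k):        # maximizer over the discretized control space
            vals = np.stack([theta_k(x, a) for a in A_grid])
            return A_grid[np.argmax(vals, axis=0)]

        feedback[k] = a_k
        # v^pi_k evaluated on the simulated cloud, used at the next (earlier) step.
        V_next = np.max(np.stack([theta_k(Xs[k], a) for a in A_grid]), axis=0)
    return float(np.mean(V_next)), feedback
```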

5.3 Error estimate for the discrete time scheme

The main result of this section is an error bound between the components $(Y^\pi, \mathcal{Y}^\pi)$ of the discretely jump-constrained BSDE and the solution $(\bar Y^\pi, \bar{\mathcal{Y}}^\pi)$ of the above discrete time scheme.

Theorem 5.3.2 There exists some constant $C$ such that:

$$E\big[|Y^\pi_{t_k} - \bar Y^\pi_{t_k}|^2\big] + \sup_{t\in(t_k,t_{k+1}]} E\big[|Y^\pi_t - \bar Y^\pi_{t_{k+1}}|^2\big] + \sup_{t\in[t_k,t_{k+1})} E\big[|\mathcal{Y}^\pi_t - \bar{\mathcal{Y}}^\pi_{t_k}|^2\big] \;\leq\; C|\pi|,$$

for all $k = 0,\ldots,n-1$.

The above convergence rate $|\pi|^{\frac{1}{2}}$ in the $L^2$-norm for the discretization of the discretely jump-constrained BSDE is the same as for standard BSDEs, see [9], [44]. By combining with the convergence results of Chapter 4, we finally obtain an estimate on the error due to the discrete time approximation of the minimal solution $Y$ to the BSDE with nonpositive jumps. We split the error between the positive and negative parts:

$$\mathrm{Err}^\pi_+(Y) \;:=\; \max_{k\leq n-1}\Big( E\big[(Y_{t_k} - \bar Y^\pi_{t_k})_+^2\big] + \sup_{t\in(t_k,t_{k+1}]} E\big[(Y_t - \bar Y^\pi_{t_{k+1}})_+^2\big] + \sup_{t\in[t_k,t_{k+1})} E\big[(Y_t - \bar{\mathcal{Y}}^\pi_{t_k})_+^2\big]\Big)^{\frac{1}{2}},$$

$$\mathrm{Err}^\pi_-(Y) \;:=\; \max_{k\leq n-1}\Big( E\big[(Y_{t_k} - \bar Y^\pi_{t_k})_-^2\big] + \sup_{t\in(t_k,t_{k+1}]} E\big[(Y_t - \bar Y^\pi_{t_{k+1}})_-^2\big] + \sup_{t\in[t_k,t_{k+1})} E\big[(Y_t - \bar{\mathcal{Y}}^\pi_{t_k})_-^2\big]\Big)^{\frac{1}{2}}.$$

Corollary 5.3.4 We have:

$$\mathrm{Err}^\pi_-(Y) \;\leq\; C|\pi|^{\frac{1}{2}}.$$


Moreover, under (HFC') and (HBC'),

$$\mathrm{Err}^\pi_+(Y) \;\leq\; C|\pi|^{\frac{1}{10}},$$

and when $f(x,a)$ does not depend on $y$:

$$\mathrm{Err}^\pi_+(Y) \;\leq\; C|\pi|^{\frac{1}{6}}.$$

Proof. Recall from Proposition 4.2.5 that $\mathcal{Y}^\pi_t \leq Y^\pi_t \leq Y_t$, $0 \leq t \leq T$. Then, we have $(Y_{t_k} - \bar Y^\pi_{t_k})_- \leq |Y^\pi_{t_k} - \bar Y^\pi_{t_k}|$, $(Y_t - \bar Y^\pi_{t_{k+1}})_- \leq |Y^\pi_t - \bar Y^\pi_{t_{k+1}}|$, and $(Y_t - \bar{\mathcal{Y}}^\pi_{t_k})_- \leq |\mathcal{Y}^\pi_t - \bar{\mathcal{Y}}^\pi_{t_k}|$, for all $k \leq n-1$ and $t \in [0,T]$. The error bound on $\mathrm{Err}^\pi_-(Y)$ then follows from the estimates in Theorem 5.3.2. The error bound on $\mathrm{Err}^\pi_+(Y)$ follows from Corollary 4.2.3 and Theorem 5.3.2. $\Box$

Remark 5.3.4 In the particular case where $f$ depends only on $(x,a)$, our discrete time approximation scheme is a probabilistic scheme for the fully nonlinear HJB equation associated to the stochastic control problem (4.2.27). As in [29], [5] or [17], we have non-symmetric bounds on the rate of convergence. For instance, in [17], the authors obtained a convergence rate $|\pi|^{\frac{1}{4}}$ on one side and $|\pi|^{\frac{1}{10}}$ on the other side, while we improve the rates to $|\pi|^{\frac{1}{2}}$ on one side and $|\pi|^{\frac{1}{6}}$ on the other side. This induces a global error $\mathrm{Err}^\pi(Y) = \mathrm{Err}^\pi_+(Y) + \mathrm{Err}^\pi_-(Y)$ of order $|\pi|^{\frac{1}{6}}$, which is derived without any non-degeneracy condition on the controlled diffusion coefficient. $\Box$

Proof of Theorem 5.3.2. Let us introduce the continuous time version of (5.2.9). By the martingale representation theorem, there exist $\bar Z^\pi \in L^2(W)$ and $\bar U^\pi \in L^2(\tilde\mu)$ such that

$$\bar Y^\pi_{t_{k+1}} \;=\; E_{t_k}\big[\bar Y^\pi_{t_{k+1}}\big] + \int_{t_k}^{t_{k+1}} \bar Z^\pi_s\,dW_s + \int_{t_k}^{t_{k+1}}\!\!\int_A \bar U^\pi_s(a)\,\tilde\mu(ds,da), \qquad k < n,$$

and we can then define the continuous-time processes $\bar{\mathcal{Y}}^\pi$ and $\bar Y^\pi$ by:

$$\bar{\mathcal{Y}}^\pi_t \;=\; \bar Y^\pi_{t_{k+1}} + (t_{k+1}-t)\,f\big(X^\pi_{t_k}, I_{t_k}, \bar{\mathcal{Y}}^\pi_{t_k}\big) - \int_t^{t_{k+1}} \bar Z^\pi_s\,dW_s - \int_t^{t_{k+1}}\!\!\int_A \bar U^\pi_s(a)\,\tilde\mu(ds,da), \quad t \in [t_k,t_{k+1}), \qquad (5.3.11)$$

$$\bar Y^\pi_t \;=\; \bar Y^\pi_{t_{k+1}} + (t_{k+1}-t)\,f\big(X^\pi_{t_k}, I_{t_k}, \bar{\mathcal{Y}}^\pi_{t_k}\big) - \int_t^{t_{k+1}} \bar Z^\pi_s\,dW_s - \int_t^{t_{k+1}}\!\!\int_A \bar U^\pi_s(a)\,\tilde\mu(ds,da), \quad t \in (t_k,t_{k+1}], \qquad (5.3.12)$$

for $k = 0,\ldots,n-1$. Denote $\delta Y^\pi_t = Y^\pi_t - \bar Y^\pi_t$, $\delta\mathcal{Y}^\pi_t = \mathcal{Y}^\pi_t - \bar{\mathcal{Y}}^\pi_t$, $\delta Z^\pi_t = Z^\pi_t - \bar Z^\pi_t$, $\delta U^\pi_t = U^\pi_t - \bar U^\pi_t$, and $\delta f_t = f(X_t,I_t,\mathcal{Y}^\pi_t) - f(X^\pi_{t_k},I_{t_k},\bar{\mathcal{Y}}^\pi_{t_k})$ for $t \in [t_k,t_{k+1})$. Recalling (4.1.2) and (5.3.11), we have by Itô's formula:

$$\Delta_t \;:=\; E_{t_k}\Big[|\delta\mathcal{Y}^\pi_t|^2 + \int_t^{t_{k+1}} |\delta Z^\pi_s|^2\,ds + \int_t^{t_{k+1}}\!\!\int_A |\delta U^\pi_s(a)|^2\,\lambda(da)\,ds\Big]$$
$$=\; E_{t_k}\big[|\delta Y^\pi_{t_{k+1}}|^2\big] + E_{t_k}\Big[\int_t^{t_{k+1}} 2\,\delta\mathcal{Y}^\pi_s\,\delta f_s\,ds\Big],$$

for all $t \in [t_k,t_{k+1})$. By the Lipschitz continuity of $f$ in (H2) and Young's inequality, we then have:

$$\Delta_t \;\leq\; E_{t_k}\big[|\delta Y^\pi_{t_{k+1}}|^2\big] + E_{t_k}\Big[\int_t^{t_{k+1}} \eta\,|\delta\mathcal{Y}^\pi_s|^2\,ds\Big] + \frac{C}{\eta}\,|\pi|\,|\delta\mathcal{Y}^\pi_{t_k}|^2 + \frac{C}{\eta}\,E_{t_k}\Big[\int_t^{t_{k+1}} \big(|X_s - X^\pi_{t_k}|^2 + |I_s - I_{t_k}|^2 + |\mathcal{Y}^\pi_s - \mathcal{Y}^\pi_{t_k}|^2\big)\,ds\Big].$$

From Gronwall's lemma, and by taking $\eta$ large enough, this yields for all $k \leq n-1$:

$$E_{t_k}\big[|\delta\mathcal{Y}^\pi_{t_k}|^2\big] \;\leq\; e^{C|\pi|}\,E_{t_k}\big[|\delta Y^\pi_{t_{k+1}}|^2\big] + C B_k, \qquad\qquad (5.3.13)$$

where

$$B_k \;=\; E_{t_k}\Big[\int_{t_k}^{t_{k+1}} \big(|X_s - X^\pi_{t_k}|^2 + |I_s - I_{t_k}|^2 + |\mathcal{Y}^\pi_s - \mathcal{Y}^\pi_{t_k}|^2\big)\,ds\Big] \;\leq\; C|\pi|\Big( E_{t_k}\Big[\sup_{s\in[t_k,t_{k+1}]} |X_s - X^\pi_{t_k}|^2\Big] + |\pi|\big(1 + |X_{t_k}|\big)\Big), \qquad (5.3.14)$$

by (5.1.5) and Proposition 4.1.3. Now, by definition of $Y^\pi_{t_{k+1}}$ and $\bar Y^\pi_{t_{k+1}}$, we have

$$|\delta Y^\pi_{t_{k+1}}|^2 \;\leq\; \mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_{k+1},a}\big[|\delta\mathcal{Y}^\pi_{t_{k+1}}|^2\big]. \qquad\qquad (5.3.15)$$

By plugging (5.3.14), (5.3.15) into (5.3.13), taking conditional expectation with respect to $I_{t_k} = a$, and taking the essential supremum over $a$, we obtain:

$$\mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_k,a}\big[|\delta\mathcal{Y}^\pi_{t_k}|^2\big] \;\leq\; e^{C|\pi|}\,\mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_k,a}\Big[\mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_{k+1},a}\big[|\delta\mathcal{Y}^\pi_{t_{k+1}}|^2\big]\Big] + C|\pi|\Big(\mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_k,a}\Big[\sup_{s\in[t_k,t_{k+1}]} |X_s - X^\pi_{t_k}|^2\Big] + |\pi|\big(1 + |X_{t_k}|\big)\Big).$$

By taking conditional expectation with respect to $\mathcal{F}_{t_{k-1}}$ and $I_{t_{k-1}} = a$, taking the essential supremum over $a$ in the above inequality, and iterating this backward procedure until time $t_0 = 0$, we obtain:

$$\mathcal{E}^\pi_k(\mathcal{Y}) \;\leq\; e^{C|\pi|}\,\mathcal{E}^\pi_{k+1}(\mathcal{Y}) + C|\pi|\big(\mathcal{E}^\pi_k(X) + |\pi|(1 + E[|X_{t_k}|])\big) \;\leq\; e^{C|\pi|}\,\mathcal{E}^\pi_{k+1}(\mathcal{Y}) + C|\pi|^2, \qquad k \leq n-1, \qquad (5.3.16)$$

where we recall the auxiliary error control $\mathcal{E}^\pi_k(X)$ on $X$ in (5.1.2) and its estimate in Lemma 5.1.6, and set:

$$\mathcal{E}^\pi_k(\mathcal{Y}) \;:=\; E\Big[\mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_1,a}\Big[\ldots \mathop{\mathrm{ess\,sup}}_{a\in A}\, E_{t_k,a}\big[|\delta\mathcal{Y}^\pi_{t_k}|^2\big]\ldots\Big]\Big].$$

By a direct induction on (5.3.16), and recalling that $n|\pi|$ is bounded, we get

$$\mathcal{E}^\pi_k(\mathcal{Y}) \;\leq\; C\big(\mathcal{E}^\pi_n(\mathcal{Y}) + |\pi|\big) \;\leq\; C\big(\mathcal{E}^\pi_n(X) + |\pi|\big) \;\leq\; C|\pi|,$$

since $g$ is Lipschitz, and using again the estimate in Lemma 5.1.6. Observing that $E[|\delta Y^\pi_{t_k}|^2]$ and $E[|\delta\mathcal{Y}^\pi_{t_k}|^2]$ are both bounded by $\mathcal{E}^\pi_k(\mathcal{Y})$, we get the estimate:

$$\max_{k\leq n}\; E\big[|Y^\pi_{t_k} - \bar Y^\pi_{t_k}|^2\big] + E\big[|\mathcal{Y}^\pi_{t_k} - \bar{\mathcal{Y}}^\pi_{t_k}|^2\big] \;\leq\; C|\pi|.$$

Moreover, by Proposition 4.1.3, we have

$$\sup_{t\in[t_k,t_{k+1})} E\big[|\mathcal{Y}^\pi_t - \mathcal{Y}^\pi_{t_k}|^2\big] + \sup_{t\in(t_k,t_{k+1}]} E\big[|Y^\pi_t - Y^\pi_{t_{k+1}}|^2\big] \;\leq\; C\big(1 + E[|X_{t_k}|]\big)|\pi| \;\leq\; C\big(1 + |X_0|\big)|\pi|.$$

This finally implies:

$$\sup_{t\in(t_k,t_{k+1}]} E\big[|Y^\pi_t - \bar Y^\pi_{t_{k+1}}|^2\big] \;\leq\; 2\sup_{t\in(t_k,t_{k+1}]} E\big[|Y^\pi_t - Y^\pi_{t_{k+1}}|^2\big] + 2\,E\big[|Y^\pi_{t_{k+1}} - \bar Y^\pi_{t_{k+1}}|^2\big] \;\leq\; C|\pi|,$$

as well as

$$\sup_{t\in[t_k,t_{k+1})} E\big[|\mathcal{Y}^\pi_t - \bar{\mathcal{Y}}^\pi_{t_k}|^2\big] \;\leq\; 2\sup_{t\in[t_k,t_{k+1})} E\big[|\mathcal{Y}^\pi_t - \mathcal{Y}^\pi_{t_k}|^2\big] + 2\,E\big[|\mathcal{Y}^\pi_{t_k} - \bar{\mathcal{Y}}^\pi_{t_k}|^2\big] \;\leq\; C|\pi|. \qquad \Box$$

5.4 Approximate optimal control

We now consider the special case where $f(x,a)$ does not depend on $y$, so that the discrete time scheme (1.0.15) is an approximation of the value function of the stochastic control problem:

$$V_0 \;:=\; \sup_{\alpha\in\mathcal{A}} J(\alpha) \;=\; Y_0, \qquad\qquad (5.4.17)$$
$$J(\alpha) \;=\; E\Big[\int_0^T f(X^\alpha_t,\alpha_t)\,dt + g(X^\alpha_T)\Big],$$

where $\mathcal{A}$ is the set of $\mathbb{G}$-adapted control processes $\alpha$ valued in $A$, and $X^\alpha$ is the controlled diffusion in $\mathbb{R}^d$:

$$X^\alpha_t \;=\; X_0 + \int_0^t b(X^\alpha_s,\alpha_s)\,ds + \int_0^t \sigma(X^\alpha_s,\alpha_s)\,dW_s, \qquad 0 \leq t \leq T.$$

(Here $\mathbb{G} = (\mathcal{G}_t)_{0\leq t\leq T}$ denotes some filtration under which $W$ is a standard Brownian motion.) Let us now define the discrete time version of (5.4.17). We introduce the set $\mathcal{A}^\pi$ of discrete time processes $\alpha = (\alpha_{t_k})_k$, with $\alpha_{t_k}$ $\mathcal{G}_{t_k}$-measurable and valued in $A$. For each $\alpha \in \mathcal{A}^\pi$, we consider the controlled discrete time process $(X^{\pi,\alpha}_{t_k})_k$ of Euler type defined by:

$$X^{\pi,\alpha}_{t_k} \;=\; X_0 + \sum_{j=0}^{k-1} b\big(X^{\pi,\alpha}_{t_j},\alpha_{t_j}\big)\,\Delta t_j + \sum_{j=0}^{k-1} \sigma\big(X^{\pi,\alpha}_{t_j},\alpha_{t_j}\big)\,\Delta W_{t_j}, \qquad k \leq n,$$

where $\Delta W_{t_j} = W_{t_{j+1}} - W_{t_j}$, and the gain functional:

$$J^\pi(\alpha) \;=\; E\Big[\sum_{k=0}^{n-1} f\big(X^{\pi,\alpha}_{t_k},\alpha_{t_k}\big)\,\Delta t_k + g\big(X^{\pi,\alpha}_{t_n}\big)\Big].$$

Given any $\alpha \in \mathcal{A}^\pi$, we define its continuous time piecewise-constant interpolation $\alpha \in \mathcal{A}$ by setting $\alpha_t = \alpha_{t_k}$ for $t \in [t_k,t_{k+1})$ (by misuse of notation, we keep the same notation $\alpha$ for the discrete time process and its continuous time interpolation). By standard arguments similar to those for the Euler scheme of an SDE, there exists some positive constant $C$ such that for all $\alpha \in \mathcal{A}^\pi$, $k \leq n-1$:

$$E\Big[\sup_{t\in[t_k,t_{k+1}]} \big|X^\alpha_t - X^{\pi,\alpha}_{t_k}\big|^2\Big] \;\leq\; C|\pi|,$$

from which we easily deduce, by the Lipschitz property of $f$ and $g$:

$$\big|J(\alpha) - J^\pi(\alpha)\big| \;\leq\; C|\pi|^{\frac{1}{2}}, \qquad \forall\,\alpha \in \mathcal{A}^\pi. \qquad\qquad (5.4.18)$$

Let us now consider, at each time step $k \leq n-1$, the function $a_k(x)$ which attains the supremum over $a \in A$ of $\vartheta^\pi_k(x,a)$ in the scheme (5.2.10), so that:

$$v^\pi_k(x) \;=\; \vartheta^\pi_k\big(x, a_k(x)\big), \qquad k = 0,\ldots,n-1.$$

Let us define the process $(\hat X^\pi_{t_k})_k$ by $\hat X^\pi_0 = X_0$ and

$$\hat X^\pi_{t_{k+1}} \;=\; \hat X^\pi_{t_k} + b\big(\hat X^\pi_{t_k}, a_k(\hat X^\pi_{t_k})\big)\,\Delta t_k + \sigma\big(\hat X^\pi_{t_k}, a_k(\hat X^\pi_{t_k})\big)\,\Delta W_{t_k}, \qquad k \leq n-1,$$

and notice that $\hat X^\pi = X^{\pi,\hat\alpha}$, where $\hat\alpha \in \mathcal{A}^\pi$ is the feedback control defined by:

$$\hat\alpha_{t_k} \;=\; a_k\big(\hat X^\pi_{t_k}\big) \;=\; a_k\big(X^{\pi,\hat\alpha}_{t_k}\big), \qquad k = 0,\ldots,n.$$

Next, we observe that the conditional law of $X^\pi_{t_{k+1}}$ given $(X^\pi_{t_k}, I_{t_k}) = (x, a_k(x))$ is the same as the conditional law of $X^{\pi,\hat\alpha}_{t_{k+1}}$ given $X^{\pi,\hat\alpha}_{t_k} = x$, for $k \leq n-1$, and thus the induction step in the scheme (5.2.9) or (5.2.10) reads as:

$$v^\pi_k\big(X^{\pi,\hat\alpha}_{t_k}\big) \;=\; E\big[v^\pi_{k+1}\big(X^{\pi,\hat\alpha}_{t_{k+1}}\big)\,\big|\,X^{\pi,\hat\alpha}_{t_k}\big] + f\big(X^{\pi,\hat\alpha}_{t_k}, \hat\alpha_{t_k}\big)\,\Delta t_k, \qquad k \leq n-1.$$

By induction, and the law of iterated conditional expectations, we then get:

$$\bar Y^\pi_0 \;=\; v^\pi_0(X_0) \;=\; J^\pi(\hat\alpha). \qquad\qquad (5.4.19)$$

Consider the continuous time piecewise-constant interpolation $\hat\alpha \in \mathcal{A}$ defined by $\hat\alpha_t = \hat\alpha_{t_k}$ for $t \in [t_k,t_{k+1})$. By (5.4.17), (5.4.18), (5.4.19), and Corollary 5.3.4, we finally obtain:

$$0 \;\leq\; V_0 - J(\hat\alpha) \;=\; Y_0 - \bar Y^\pi_0 + J^\pi(\hat\alpha) - J(\hat\alpha) \;\leq\; C|\pi|^{\frac{1}{6}} + C|\pi|^{\frac{1}{2}} \;\leq\; C|\pi|^{\frac{1}{6}},$$

for $|\pi| \leq 1$. In other words, for any small $\varepsilon > 0$, we obtain an $\varepsilon$-approximate optimal control $\hat\alpha$ for the stochastic control problem (5.4.17) by taking $|\pi|$ of order $\varepsilon^6$. A schematic Monte Carlo evaluation of this feedback policy is sketched below.
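As an illustration, here is a minimal Monte Carlo sketch of how the feedback maps $(a_k)_k$ produced by the backward scheme can be rolled out to simulate $\hat X^\pi = X^{\pi,\hat\alpha}$ and estimate $J^\pi(\hat\alpha)$; in the present case $f = f(x,a)$ does not depend on $y$, and the function names (`feedback`, `b`, `sigma`, `f`, `g`) as well as the one-dimensional setting are our own illustrative choices.

```python
import numpy as np

def evaluate_feedback_policy(feedback, b, sigma, f, g, X0, T, n, n_paths, rng):
    """Roll out the feedback control alpha_hat_{t_k} = a_k(X^pi_{t_k}) on a regular
    grid and estimate the discrete gain J^pi(alpha_hat) by Monte Carlo."""
    dt = T / n
    X = np.full(n_paths, float(X0))        # hat X^pi_{t_0} = X_0 on every path
    gain = np.zeros(n_paths)
    for k in range(n):
        a = feedback[k](X)                  # feedback control a_k(hat X^pi_{t_k})
        gain += f(X, a) * dt                # running reward f(x, a) * Delta t_k
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + b(X, a) * dt + sigma(X, a) * dW   # controlled Euler step
    gain += g(X)                            # terminal reward g(hat X^pi_{t_n})
    return gain.mean(), gain.std(ddof=1) / np.sqrt(n_paths)

# Example usage (with the feedback maps returned by the backward scheme above):
# rng = np.random.default_rng(1)
# Jhat, stderr = evaluate_feedback_policy(feedback, b, sigma, f, g,
#                                         X0=1.0, T=1.0, n=100,
#                                         n_paths=10_000, rng=rng)
```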


Bibliography

[1] Bally V. and G. Pagès (2003): "Error analysis of the quantization algorithm for obstacle problems", Stochastic Processes and their Applications, 106, 1-40.

[2] Barles G. (1994): Solutions de viscosité des équations d'Hamilton-Jacobi, Mathématiques et Applications, Springer Verlag.

[3] Barles G., Buckdahn R. and E. Pardoux (1997): "Backward stochastic differential equations and integral-partial differential equations", Stochastics and Stochastics Reports, 60, 57-83.

[4] Barles G. and C. Imbert (2008): "Second-order elliptic integro-differential equations: viscosity solutions theory revisited", Ann. Inst. H. Poincaré, Anal. Non Linéaire, 25, 567-585.

[5] Barles G. and E.R. Jakobsen (2007): "Error bounds for monotone approximation schemes for parabolic Hamilton-Jacobi-Bellman equations", Math. Computation, 76, 1861-1893.

[6] Bonnans F. and H. Zidani (2003): "Consistency of generalized finite difference schemes for the stochastic HJB equation", SIAM J. Numer. Anal., 41, 1008-1021.

[7] Bouchard B. and J.F. Chassagneux (2008): "Discrete time approximation for continuously and discretely reflected BSDEs", Stochastic Processes and their Applications, 118, 2269-2293.

[8] Bouchard B. and R. Elie (2008): "Discrete time approximation of decoupled FBSDE with jumps", Stochastic Processes and their Applications, 118 (1), 53-75.

[9] Bouchard B. and N. Touzi (2004): "Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations", Stochastic Processes and their Applications, 111, 174-206.

[10] Cannarsa P. and C. Sinestrari (2004): Semiconcave Functions, Hamilton-Jacobi Equations, and Optimal Control, Progress in Nonlinear Differential Equations and Their Applications, 58, Birkhäuser.

[11] Choukroun S., Cosso A. and H. Pham (2013): "Reflected BSDEs with nonpositive jumps, and controller-and-stopper games", preprint arXiv:1308.5511.

[12] Crandall M., Ishii H. and P.L. Lions (1992): "User's guide to viscosity solutions of second order partial differential equations", Bull. Amer. Math. Soc., 27, 1-67.

[13] Ekren I., Touzi N. and J. Zhang (2013): "Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part I", preprint.

[14] El Karoui N. (1981): Les aspects probabilistes du contrôle stochastique, Lect. Notes in Mathematics 876, École d'Été de Saint-Flour 1979.

[15] El Karoui N., Peng S. and M.C. Quenez (1997): "Backward stochastic differential equations in finance", Mathematical Finance, 7, 1-71.

[16] Essaky E.H. (2008): "Reflected backward stochastic differential equation with jumps and RCLL obstacle", Bull. Sci. Math., 132, 690-710.

[17] Fahim A., Touzi N. and X. Warin (2011): "A probabilistic numerical scheme for fully nonlinear PDEs", The Annals of Applied Probability, 21 (4), 1322-1364.

[18] Fleming W.H. and H.M. Soner (2006): Controlled Markov Processes and Viscosity Solutions, 2nd edition, Springer-Verlag.

[19] Friedman A. (1975): Stochastic Differential Equations and Applications, Vol. 1, Probability and Mathematical Statistics, Vol. 28, Academic Press, New York-London.

[20] Fuhrman M. and H. Pham (2013): "Dual and backward SDE representation for optimal control of non-Markovian SDEs", preprint arXiv:1310.6943.

[21] Gobet E., Lemor J.P. and X. Warin (2006): "Rate of convergence of empirical regression method for solving generalized BSDE", Bernoulli, 12 (5), 889-916.

[22] Henry-Labordère P. (2012): "Counterparty risk valuation: a marked branching diffusion approach", preprint arXiv:1203.2369.

[23] Kharroubi I., Langrené N. and H. Pham (2013a): "Discrete time approximation of fully nonlinear HJB equations via BSDEs with nonpositive jumps", preprint arXiv:1311.4505.

[24] Kharroubi I., Langrené N. and H. Pham (2013b): "Numerical algorithm for fully nonlinear HJB equations: an approach by control randomization", preprint arXiv:1311.4503.

[25] Kharroubi I., Ma J., Pham H. and J. Zhang (2010): "Backward SDEs with constrained jumps and quasi-variational inequalities", Annals of Probability, 38, 794-840.

[26] Kharroubi I. and H. Pham (2012): "Feynman-Kac representation for Hamilton-Jacobi-Bellman IPDE", preprint.

[27] Kloeden P. and E. Platen (1992): Numerical Solution of Stochastic Differential Equations, Springer.

[28] Krylov N.V. (1999): "Approximating value functions for controlled degenerate diffusion processes by using piece-wise constant policies", Electronic Journal of Probability, 4, 1-19.

[29] Krylov N.V. (2000): "On the rate of convergence of finite difference approximations for Bellman's equations with variable coefficients", Probability Theory and Related Fields, 117, 1-16.

[30] Kushner H. and P. Dupuis (1992): Numerical Methods for Stochastic Control Problems in Continuous Time, Springer Verlag.

[31] Ma J. and J. Zhang (2005): "Representations and regularities for solutions to BSDEs with reflections", Stochastic Processes and their Applications, 115, 539-569.

[32] Øksendal B. and A. Sulem (2007): Applied Stochastic Control of Jump Diffusions, 2nd edition, Universitext, Springer.

[33] Pardoux E. and S. Peng (1990): "Adapted solution of a backward stochastic differential equation", Systems and Control Letters, 14, 55-61.

[34] Pardoux E. and S. Peng (1992): "Backward stochastic differential equations and quasilinear parabolic partial differential equations", in Stochastic Partial Differential Equations and their Applications, B. Rozovskii and R. Sowers (eds), Lect. Notes in Cont. Inf. Sci., 176, 200-217.

[35] Peng S. (2000): "Monotonic limit theorem for BSDEs and non-linear Doob-Meyer decomposition", Probab. Theory and Rel. Fields, 16 (3), 225-234.

[36] Peng S. (2006): "G-expectation, G-Brownian motion and related stochastic calculus of Itô type", Proceedings of the 2005 Abel Symposium, Springer.

[37] Pham H. (1998): "Optimal stopping of controlled jump diffusion processes: a viscosity solutions approach", Journal of Mathematical Systems, Estimation and Control, 8, 27 pp (electronic).

[38] Pham H. (2009): Continuous-time Stochastic Control and Optimization with Financial Applications, Springer, Series SMAP, Vol. 61.

[39] Protter P. and K. Shimbo (2008): "No arbitrage and general semimartingale", in Festschrift for Thomas Kurtz.

[40] Royer M. (2006): "Backward stochastic differential equations with jumps and related nonlinear expectations", Stochastic Processes and their Applications, 116, 1358-1376.

[41] Soner M., Touzi N. and J. Zhang (2012): "The wellposedness of second order backward SDEs", Probability Theory and Related Fields, 153, 149-190.

[42] Tan X. (2013): "A splitting method for fully nonlinear degenerate parabolic PDEs", Electron. J. Probab., 18 (15), 1-24.

[43] Tang S. and X. Li (1994): "Necessary conditions for optimal control of stochastic systems with jumps", SIAM J. Control and Optimization, 32, 1447-1475.

[44] Zhang J. (2004): "A numerical scheme for BSDEs", Annals of Applied Probability, 14 (1), 459-488.