Differ Equ Dyn Syst (April 2012) 20(2):111–125. DOI 10.1007/s12591-012-0108-8
ORIGINAL RESEARCH
On Maximum Principle of Near-optimality for Diffusions with Jumps, with Application to Consumption-Investment Problem
Mokhtar Hafayed · Petr Veverka · Syed Abbas
Published online: 4 April 2012. © Foundation for Scientific Research and Technological Innovation 2012
Abstract In the present article, we prove a maximum principle for near-optimal stochastic controls for systems driven by nonlinear stochastic differential equations (SDEs, in short) with jump processes. The set of controls under consideration is assumed to be convex. The proof of our result is based on Ekeland’s variational principle.
Keywords First-order necessary conditions · Near-optimal stochastic control · Controlled diffusion with jumps · Consumption-investment problem · Ekeland’s variational principle · Convex perturbation
Introduction
In the present article, we consider the stochastic control problem for systems governed by a nonlinear controlled diffusion with jumps of the form

    dx_t = f(t, x_t, u_t) dt + σ(t, x_t, u_t) dW_t + ∫_Θ g(t, x_{t−}, u_t, θ) N(dθ, dt),
    x_0 = ξ,                                                              (1.1)
M. Hafayed
Laboratory of Applied Mathematics, Mohamed Khider University, P.O. Box 145, Biskra 07000, Algeria
e-mail: [email protected]

P. Veverka
Department of Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Trojanova 13, Prague 120 00, EU-Czech Republic
e-mail: [email protected]

S. Abbas (B)
School of Basic Sciences, Indian Institute of Technology Mandi, Mandi 175 001, Himachal Pradesh, India
e-mail: [email protected]
where (W_t)_{t∈[0,T]} is a standard d-dimensional Brownian motion defined on the filtered probability space (Ω, F, (F_t)_{t∈[0,T]}, P). The filtration {F_t}_{t∈[0,T]} is the canonical filtration of (W_t)_{t∈[0,T]} augmented by the P-null sets. The initial condition ξ is an F_0-measurable random variable, and N(dθ, dt) is a Poisson martingale measure with local characteristic m(dθ) dt. The cost functional to be minimized, associated with the state equation (1.1), is defined by
    J(u) = E[ h(x_T) + ∫_0^T ℓ(t, x_t, u_t) dt ],                         (1.2)
and the value function is defined as
    V = inf { J(u) : u ∈ U },

where
    f : [0, T] × R^d × U → R^d,
    σ : [0, T] × R^d × U → R^{d×n},
    ℓ : [0, T] × R^d × U → R,
    g : [0, T] × R^d × U × Θ → R^{d×m},
    h : R^d → R.
This kind of stochastic control problem has been investigated extensively, both by Bellman’s dynamic programming method [2] and by Pontryagin’s maximum principle [16]. Many more near-optimal controls are available than optimal ones; indeed, optimal controls may not even exist in many situations, while near-optimal controls always exist. Near-optimal deterministic control problems have been investigated in [8,9,13,21,22]. In an interesting paper, Zhou [23] established second-order necessary as well as sufficient conditions for near-optimal stochastic controls of controlled diffusions, where the coefficients were assumed to be twice continuously differentiable. In [6] the authors extended the second-order maximum principle of near-optimality introduced in [23] to the jump case. The near-optimal control problem for systems governed by Volterra integral equations has been studied in [15]. For a justification of establishing a theory of near-optimal controls, see Zhou ([21,23], Introductions).
Without jump terms, the weak maximum principle for optimality of controlled diffusions on a convex control domain was established by Bensoussan [1]. Elliott [7] extended Bensoussan’s result by using the method of Blagovescenskii and Freidlin on the differentiability of solutions of stochastic differential equations that depend on a parameter (see for instance [5,20] for some references on optimal control). Stochastic optimal control problems for diffusions with jumps have been investigated by many authors, see for instance [4,10,14,17–19]. The general case, where the control domain is not convex and the diffusion coefficient depends explicitly on the control variable, was derived by Tang and Li [19] by using a second-order expansion. A good account of stochastic optimal control for diffusions with jump processes can be found in Øksendal and Sulem [14] and the references therein.
Our goal in this article is to establish a Pontryagin maximum principle of near-optimality for systems governed by nonlinear SDEs with jump processes. The control variable enters the drift, the diffusion, and the jump coefficients. The control domain is assumed to be convex. The proof of our result is based on Ekeland’s variational principle [8] and some
delicate estimates of the state and adjoint processes. An application to finance, a consumption-investment problem, is provided.
The rest of the article is organized as follows. The assumptions and the statement of the problem for SDEs with jump processes are given in the second section. In the third section, we establish the main result of this article. An application to finance is given in the last section.
Assumptions and Statement of the Problem
Let (Ω, F, (F_t)_{t∈[0,T]}, P) be a fixed filtered probability space equipped with a P-complete, right-continuous filtration, on which a d-dimensional Brownian motion W = (W_t)_{t∈[0,T]} is defined. Let η be a homogeneous {F_t}_{t∈[0,T]}-Poisson point process on a fixed nonempty subset Θ of R^k. We denote by m(dθ) the characteristic measure of η and by N(dθ, dt) the counting measure induced by η. We assume that m(Θ) < ∞, and we then define the compensated measure

    Ñ(dθ, dt) = N(dθ, dt) − m(dθ) dt.

We note that Ñ is a Poisson martingale measure with local characteristic m(dθ) dt. We assume that (F_t)_{t∈[0,T]} is the P-augmentation of the natural filtration {F_t^{(W,N)}}_{t∈[0,T]} defined as follows:

    F_t^{(W,N)} = σ(W_s : 0 ≤ s ≤ t) ∨ σ( ∫_0^s ∫_B N(dθ, dr) : 0 ≤ s ≤ t, B ∈ B(Θ) ) ∨ G,
where G denotes the totality of P-null sets. The control variable u = (u_t) takes values in a compact convex subset U of R^m. The set of admissible controls is the set

    U = { u ∈ L^q_F([0, T]; R^m) : u_t ∈ U }, for any q ≥ 1,

where L^q_F([0, T]; R^m) denotes the space of {F_t}_{t∈[0,T]}-adapted processes u such that E ∫_0^T |u_t|^q dt < ∞. We denote by W^j_t the columns of W_t for j = 1, 2, …, d.
Throughout this article, we also assume that:

    f, σ, g, ℓ are continuously differentiable with respect to x and u;
    f and σ are bounded by C(1 + |x| + |u|), and g is bounded by C(1 + |x| + |u| + |θ|);
    h is continuously differentiable with respect to x;
    the derivatives of f, σ, ℓ and g with respect to x and u are Lipschitz in x and u
    and bounded by C(1 + |x| + |u|); and h_x is bounded by C(1 + |x|),     (2.1)

where C is a positive constant. Under the above assumptions, the SDE (1.1) has a unique strong solution x_t, which is given by

    x_t = ξ + ∫_0^t f(s, x_s, u_s) ds + ∫_0^t σ(s, x_s, u_s) dW_s + ∫_0^t ∫_Θ g(s, x_{s−}, u_s, θ) N(dθ, ds).
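For intuition, dynamics of the form (1.1) can be simulated with an Euler–Maruyama scheme in which the jump integral is approximated by Poisson arrivals with random marks. The concrete coefficients below (linear drift, constant diffusion, multiplicative jump size, constant control) are illustrative choices, not coefficients from this paper:

```python
import numpy as np

def simulate_jump_sde(x0, T=1.0, n=1000, lam=2.0, seed=0):
    """Euler-Maruyama scheme for a scalar jump-diffusion of the form (1.1),
        dx = f dt + sigma dW + int_Theta g N(dtheta, dt),
    with illustrative coefficients (not from the paper):
    f(t,x,u) = -x + u, sigma = 0.3, g(t,x,u,theta) = 0.1*theta,
    constant control u = 1, marks theta ~ N(0,1), and m(Theta) = lam.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    u = 1.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)              # jumps arriving in (t, t+dt]
        jump_sum = 0.1 * rng.normal(size=n_jumps).sum()
        # The compensator of the jump integral, int g(theta) m(dtheta) dt,
        # vanishes here because the marks have mean zero.
        x[k + 1] = x[k] + (-x[k] + u) * dt + 0.3 * dW + jump_sum
    return x

path = simulate_jump_sde(x0=0.5)
print(path[-1])
```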
For any u ∈ U and the corresponding state trajectory x, we define the adjoint processes (p, q, r(θ)) as the solution of the following backward stochastic differential equation (BSDE):

    −dp_t = [ f_x^*(t, x_t, u_t) p_t + σ_x^*(t, x_t, u_t) q_t + ∫_Θ g_x(t, x_{t−}, u_t, θ) r_t(θ) m(dθ)
              + ℓ_x(t, x_t, u_t) ] dt − q_t dW_t + ∫_Θ r_t(θ) N(dθ, dt),
    p_T = h_x(x_T).                                                       (2.2)
This equation can be rewritten in terms of the derivative H_x as follows:

    −dp_t = H_x(t, x_t, u_t, p_t, q_t, r_t(θ)) dt − q_t dW_t + ∫_Θ r_t(θ) N(dθ, dt),
    p_T = h_x(x_T).                                                       (2.3)

The Hamiltonian associated with the stochastic control problem (1.1)–(1.2) is defined by

    H(t, x, u, p, q, r(θ)) := p f(t, x, u) + q σ(t, x, u) + ℓ(t, x, u) + ∫_Θ g(t, x, u, θ) r(θ) m(dθ).   (2.4)
It is a well-known fact that under assumptions (2.1) the backward adjoint equation (2.3) admits one and only one {F_t}_{t∈[0,T]}-adapted solution (p, q, r) with values in R^d × R^{d×n} × R^{d×m}. Moreover, since f_x, σ_x, g_x, ℓ_x are bounded, we deduce by standard arguments that there exists a constant C > 0 independent of (x, u) such that

    E[ sup_{t∈[0,T]} |p_t|^2 + ∫_0^T |q_t|^2 dt + ∫_0^T ∫_Θ |r_t(θ)|^2 m(dθ) dt ] < C.
Maximum Principle of Near-optimality for Diffusion with Jumps
Our goal in this section is to derive necessary conditions of near-optimality for SDEs with jump processes, where the control domain is assumed to be convex. Let us recall the definition of a near-optimal control as given in Zhou ([23], Definitions 2.1 and 2.2), and Ekeland’s variational principle, which will be used in the sequel.
Definition 1 (Near-optimal control with order ε^λ) For a given ε > 0, an admissible control u^ε is near-optimal if

    | J(u^ε) − V | ≤ O(ε),                                                (3.1)

where O(·) is a function of ε satisfying lim_{ε→0} O(ε) = 0. The estimate O(ε) is called an error bound. If O(ε) = cε^λ for some λ > 0 independent of the constant c, then u^ε is called near-optimal with order ε^λ. If O(ε) = ε, the admissible control u^ε is called ε-optimal.
Lemma 1 (Ekeland’s variational principle [8]) Let (E, d) be a complete metric space and f : E → R be lower semi-continuous and bounded from below. If for some ε > 0 there exists u^ε ∈ E satisfying f(u^ε) ≤ inf_{u∈E} f(u) + ε, then for any δ > 0 there exists u^δ ∈ E such that

(1) f(u^δ) ≤ f(u^ε),
(2) d(u^δ, u^ε) ≤ δ,
(3) f(u^δ) ≤ f(u) + (ε/δ) d(u^δ, u), for all u ∈ E.
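On a finite metric space the three conclusions of Lemma 1 can be checked by brute force. The sketch below, using an arbitrary lower-bounded function on a grid (all concrete choices are illustrative, not from the paper), merely illustrates the statement:

```python
import numpy as np

def ekeland_point(xs, f, eps, delta):
    """Brute-force search for a point satisfying the three conclusions of
    Ekeland's principle on the finite metric space (xs, |.|), starting from
    an eps-minimizer u_eps of f."""
    fv = np.array([f(x) for x in xs])
    u_eps = xs[int(np.argmin(fv))]       # an exact minimizer is an eps-minimizer
    for u_d in xs:
        if f(u_d) <= f(u_eps) and abs(u_d - u_eps) <= delta:
            if all(f(u_d) <= f(u) + (eps / delta) * abs(u_d - u) for u in xs):
                return u_d               # all three conclusions hold at u_d
    return None

xs = np.linspace(-2.0, 2.0, 401)
f = lambda x: (x**2 - 1.0)**2 + 0.1 * x  # arbitrary lower-bounded function
u_d = ekeland_point(xs, f, eps=0.05, delta=0.5)
print(u_d)
```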
To apply Ekeland’s variational principle to our problem, we must define a distance d on the space of admissible controls such that (U, d) becomes a complete metric space. For any u, v ∈ U we define

    d(u, v) = [ E ∫_0^T |u_t − v_t|^2 dt ]^{1/2}.                         (3.2)
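The metric (3.2) is simply the L^2(Ω × [0, T]) norm of the difference of the two controls; for piecewise-constant controls on a time grid it reduces to a finite sum, which makes it easy to compute by Monte Carlo. A small sketch (the controls below are arbitrary illustrative arrays):

```python
import numpy as np

def control_distance(u, v, T):
    """d(u, v) = ( E int_0^T |u_t - v_t|^2 dt )^{1/2} for controls given as
    arrays of shape (n_paths, n_steps) of piecewise-constant values."""
    dt = T / u.shape[1]
    return float(np.sqrt(np.mean(np.sum((u - v) ** 2 * dt, axis=1))))

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, size=(500, 100))
v = u + 0.1                               # deterministic shift by 0.1
print(control_distance(u, v, T=1.0))      # equals 0.1 for a constant shift
```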
Let (p̄^ε, q̄^ε, r̄^ε(θ)) be the solution of the adjoint equation (2.3) corresponding to ū^ε.
Lemma 2 For any λ ∈ [0, 1/2) and any ε > 0 there exist ū^ε ∈ U and a triplet of adapted processes (p̄^ε, q̄^ε, r̄^ε(θ)) such that, for all u ∈ U,

    E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − ū^ε_t) dt ≥ −Cε^λ,    (3.3)

where C = C(λ) is a positive constant.
Proof Applying Ekeland’s variational principle with δ = ε^{1/2}, there exists an admissible control ū^ε such that

(1) d(u^ε, ū^ε) ≤ ε^{1/2},                                                (3.4)
(2) J^ε(ū^ε) ≤ J^ε(u) for any u ∈ U, where

    J^ε(u) = J(u) + ε^{1/2} d(ū^ε, u).                                    (3.5)

Notice that ū^ε, which is near-optimal for the initial cost J, is optimal for the new cost J^ε defined by (3.5). Let us denote by ū^{ε,h} the perturbed control

    ū^{ε,h} = ū^ε + h(u − ū^ε).

Using the facts that

(i) J^ε(ū^ε) ≤ J^ε(ū^{ε,h}),
(ii) d(ū^ε, ū^{ε,h}) ≤ Ch,

we get

    J(ū^{ε,h}) − J(ū^ε) ≥ −ε^{1/2} d(ū^ε, ū^{ε,h}) ≥ −Cε^{1/2} h.        (3.6)

Dividing (3.6) by h and sending h to zero, we obtain

    (d/dh) J(ū^{ε,h}) |_{h=0} ≥ −Cε^{1/2} ≥ −Cε^λ.                       (3.7)

Arguing as in Cadenillas ([4], Lemma 4.1) for the left-hand side of inequality (3.7), the desired result follows. ∎
Now we are able to state and prove the Pontryagin maximum principle of near-optimality for our control problem, which is the main result of this paper.
Let (p^ε, q^ε, r^ε(θ)) be the solution of the adjoint equation (2.3) corresponding to u^ε.

Theorem 1 (Maximum principle for near-optimal controls of diffusions with jumps) Let assumptions (2.1) hold. For any λ ∈ [0, 1/2), there exists a positive constant C = C(λ) such that for any ε > 0 and any near-optimal control u^ε it holds that

    E ∫_0^T H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) (u_t − u^ε_t) dt ≥ −Cε^λ, for all u ∈ U.   (3.8)
To prove the above theorem, we need the following auxiliary results on the variation of the state and adjoint processes with respect to the control variable. First, let us recall the following proposition, which will be used to prove Lemma 3.
Proposition 1 Let G be the predictable σ-field on Ω × [0, T], and let f be a G ⊗ B(Θ)-measurable function such that

    E( ∫_0^T ∫_Θ | f(s, θ)|^2 m(dθ) ds ) < ∞.

Then for all β ≥ 2 there exists a positive constant C = C(β, T, m(Θ)) such that

    E( sup_{t∈[0,T]} | ∫_0^t ∫_Θ f(s, θ) N(ds, dθ) |^β ) ≤ C E( ∫_0^T ∫_Θ | f(s, θ)|^β m(dθ) ds ).
Proof See Bouchard and Elie ([3], Appendix). ∎
Lemma 3 Let x^u_t and x^v_t be the solutions of the state equation (1.1) associated with u and v, respectively. For any 1 < α < 2 and β ∈ (0, 2] satisfying αβ < 1, there exists a positive constant C = C(α, β, m(Θ)) such that

    E( sup_{0≤t≤T} | x^u_t − x^v_t |^β ) ≤ C d^{αβ/2}(u, v).
Proof Using Hölder’s inequality, it is sufficient to prove the above inequality for β = 2. Using the Burkholder–Davis–Gundy inequality, we obtain

    E[ sup_{t∈[0,T]} | x^u_t − x^v_t |^2 ]
      ≤ C E{ ∫_0^T | f(t, x^u_t, u_t) − f(t, x^v_t, v_t) |^2 dt
             + ∫_0^T | σ(t, x^u_t, u_t) − σ(t, x^v_t, v_t) |^2 dt
             + | ∫_0^T ∫_Θ ( g(t, x^u_{t−}, u_t, θ) − g(t, x^v_{t−}, v_t, θ) ) N(dt, dθ) |^2 },
and, due to Proposition 1, we have

    E[ sup_{t∈[0,T]} | x^u_t − x^v_t |^2 ]
      ≤ C E ∫_0^T { | f(t, x^u_t, u_t) − f(t, x^v_t, v_t) |^2
                    + | σ(t, x^u_t, u_t) − σ(t, x^v_t, v_t) |^2
                    + ∫_Θ | g(t, x^u_{t−}, u_t, θ) − g(t, x^v_{t−}, v_t, θ) |^2 m(dθ) } dt.

By adding and subtracting f(s, x^v_s, u_s), σ(s, x^v_s, u_s) and g(s, x^u_{s−}, v_s, θ), and applying the Lipschitz continuity of the coefficients f, σ and g in x and u, the result follows. This completes the proof of Lemma 3. ∎
Lemma 4 Let (p^u, q^u, r^u(θ)) and (p^v, q^v, r^v(θ)) be the adjoint processes corresponding to u and v, respectively. For any 1 < β < 2 and 0 < α < 1 satisfying α(1 + β) < 2, there is a positive constant C = C(α, β, m(Θ)) such that

    E ∫_0^T ( | p^u_t − p^v_t |^β + | q^u_t − q^v_t |^β + ∫_Θ | r^u_t(θ) − r^v_t(θ) |^β m(dθ) ) dt ≤ C d^{αβ/2}(u, v).
Proof First, denote p̃_t = p^u_t − p^v_t, q̃_t = q^u_t − q^v_t and r̃_t(θ) = r^u_t(θ) − r^v_t(θ); then (p̃_t, q̃_t, r̃_t) satisfies the following backward stochastic differential equation:

    −dp̃_t = [ f_x^*(t, x^u_t, u_t) p̃_t + σ_x^*(t, x^u_t, u_t) q̃_t + ∫_Θ g_x(t, x^u_{t−}, u_t, θ) r̃_t(θ) m(dθ)
               + Λ_t ] dt − q̃_t dW_t + ∫_Θ r̃_t(θ) N(dθ, dt),
    p̃_T = h_x(x^u_T) − h_x(x^v_T),
where the process Λ_t is given by

    Λ_t = ( f_x(t, x^u_t, u_t) − f_x(t, x^v_t, v_t) ) p^v_t + ( σ_x(t, x^u_t, u_t) − σ_x(t, x^v_t, v_t) ) q^v_t
          + ( ℓ_x(t, x^u_t, u_t) − ℓ_x(t, x^v_t, v_t) )
          + ∫_Θ ( g_x(t, x^u_{t−}, u_t, θ) − g_x(t, x^v_{t−}, u_t, θ) ) r^v_t(θ) m(dθ).
Let η be the solution of the following linear SDE:

    dη_t = [ f_x(t, x^u_t, u_t) η_t + | p̃_t |^{β−1} sgn(p̃_t) ] dt
           + [ σ_x(t, x^u_t, u_t) η_t + | q̃_t |^{β−1} sgn(q̃_t) ] dW_t
           + ∫_Θ [ g_x(t, x^u_{t−}, u_t, θ) η_t + | r̃_t(θ) |^{β−1} sgn(r̃_t(θ)) ] N(dθ, dt),
    η_0 = 0,                                                              (3.9)
where sgn(y) ≡ (sgn(y_1), sgn(y_2), …, sgn(y_n))^* for any vector y = (y_1, y_2, …, y_n)^*. It is worth mentioning that, since f_x, σ_x and g_x are bounded and

    E ∫_0^T { | | p̃_t |^{β−1} sgn(p̃_t) |^2 + | | q̃_t |^{β−1} sgn(q̃_t) |^2 + ∫_Θ | | r̃_t(θ) |^{β−1} sgn(r̃_t(θ)) |^2 m(dθ) } dt < ∞,

the SDE (3.9) has a unique strong solution.
Let γ ≥ 2 be such that 1/γ + 1/β = 1, β ∈ (1, 2); then we get

    E( sup_{t≤T} | η_t |^γ ) ≤ C E ∫_0^T { | p̃_t |^{βγ−γ} + | q̃_t |^{βγ−γ} + ∫_Θ | r̃_t(θ) |^{βγ−γ} m(dθ) } dt
                             = C E ∫_0^T { | p̃_t |^β + | q̃_t |^β + ∫_Θ | r̃_t(θ) |^β m(dθ) } dt < C.
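The equality between the two integrals above rests on the conjugate-exponent identity: since 1/γ + 1/β = 1,

```latex
\gamma = \frac{\beta}{\beta - 1}
\qquad\Longrightarrow\qquad
\beta\gamma - \gamma = \gamma(\beta - 1) = \frac{\beta}{\beta - 1}\,(\beta - 1) = \beta .
```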
Applying Itô’s formula to p̃_t η_t on [0, T] and taking expectations, we obtain

    E ∫_0^T [ | p̃_t |^β + | q̃_t |^β + ∫_Θ | r̃_t(θ) |^β m(dθ) ] dt
      ≤ C E ∫_0^T | Λ_t |^β dt + C E | h_x(x^u_T) − h_x(x^v_T) |^β.       (3.10)
We proceed to estimate the right-hand side of (3.10). First, noting that αβ/2 < 1 − β/2 < 1 and using Lemma 3, we obtain

    E | h_x(x^u_T) − h_x(x^v_T) |^β ≤ C E | x^u_T − x^v_T |^β ≤ C d^{αβ/2}(u, v).   (3.11)
Now, by repeatedly using the Cauchy–Schwarz inequality, we can estimate

    E ∫_0^T | f_x(t, x^u_t, u_t) − f_x(t, x^v_t, v_t) |^β | p^v_t |^β dt ≤ C d^{αβ/2}(u, v).   (3.12)
Similarly, we can obtain

    E ∫_0^T | σ_x(t, x^u_t, u_t) − σ_x(t, x^v_t, v_t) |^β | q^v_t |^β dt ≤ C d^{αβ/2}(u, v),   (3.13)

and

    E ∫_0^T | ℓ_x(t, x^u_t, u_t) − ℓ_x(t, x^v_t, v_t) |^β dt ≤ C d^{αβ/2}(u, v);   (3.14)
hence, by applying Proposition 1, we can similarly prove that

    E ∫_0^T ∫_Θ | g_x(t, x^u_{t−}, u_t, θ) − g_x(t, x^v_{t−}, u_t, θ) |^β | r^v_t(θ) |^β m(dθ) dt ≤ C d^{αβ/2}(u, v).   (3.15)
It follows from (3.12)–(3.15) that

    E ∫_0^T | Λ_t |^β dt ≤ C d^{αβ/2}(u, v).                              (3.16)

Finally, combining (3.10), (3.11) and (3.16), the proof of Lemma 4 is complete. ∎
Proof of Theorem 1 First, for each ε > 0, by using Lemma 2 there exist ū^ε and {F_t}_{t∈[0,T]}-adapted processes (p̄^ε, q̄^ε, r̄^ε(θ)) such that, for all u ∈ U,

    E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − ū^ε_t) dt ≥ −Cε^λ.
Now, to prove (3.8), it remains to estimate the difference

    E ∫_0^T H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) (u_t − u^ε_t) dt − E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − ū^ε_t) dt.
First, by adding and subtracting E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − u^ε_t) dt, we have

    E ∫_0^T H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) (u_t − u^ε_t) dt
      − E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − ū^ε_t) dt
    = E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (ū^ε_t − u^ε_t) dt
      + E ∫_0^T ( H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) − H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) ) (u_t − u^ε_t) dt
    = I^ε_1 + I^ε_2.
By Schwarz’s inequality and the boundedness of H_u, we have

    I^ε_1 ≤ E ∫_0^T | H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) | | ū^ε_t − u^ε_t | dt
          ≤ ( E ∫_0^T | H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) |^2 dt )^{1/2} ( E ∫_0^T | ū^ε_t − u^ε_t |^2 dt )^{1/2}
          ≤ C d(ū^ε, u^ε) ≤ C ε^{1/2}.
Let us turn to the second term. It holds that

    I^ε_2 = E ∫_0^T ( H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) − H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) ) (u_t − u^ε_t) dt
        = E ∫_0^T [ p^ε_t f_u(t, x^ε_t, u^ε_t) − p̄^ε_t f_u(t, x̄^ε_t, ū^ε_t) ] (u_t − u^ε_t) dt
        + E ∫_0^T [ q^ε_t σ_u(t, x^ε_t, u^ε_t) − q̄^ε_t σ_u(t, x̄^ε_t, ū^ε_t) ] (u_t − u^ε_t) dt
        + E ∫_0^T [ ℓ_u(t, x^ε_t, u^ε_t) − ℓ_u(t, x̄^ε_t, ū^ε_t) ] (u_t − u^ε_t) dt
        + E ∫_0^T ∫_Θ ( g_u(t, x^ε_t, u^ε_t, θ) r^ε_t(θ) − g_u(t, x̄^ε_t, ū^ε_t, θ) r̄^ε_t(θ) ) (u_t − u^ε_t) m(dθ) dt
    = J^ε_1 + J^ε_2 + J^ε_3 + J^ε_4.
We estimate the first term J^ε_1 on the right-hand side by adding and subtracting p̄^ε_t f_u(t, x^ε_t, u^ε_t) and then f_u(t, x̄^ε_t, u^ε_t); it holds that

    J^ε_1 ≤ E ∫_0^T | p^ε_t − p̄^ε_t | | f_u(t, x^ε_t, u^ε_t) (u_t − u^ε_t) | dt
          + E ∫_0^T | f_u(t, x^ε_t, u^ε_t) − f_u(t, x̄^ε_t, u^ε_t) | | p̄^ε_t (u_t − u^ε_t) | dt
          + E ∫_0^T | f_u(t, x̄^ε_t, u^ε_t) − f_u(t, x̄^ε_t, ū^ε_t) | | p̄^ε_t (u_t − u^ε_t) | dt
        = J_1^{ε,1} + J_1^{ε,2} + J_1^{ε,3}.
Using the Hölder inequality, the boundedness of f_u and Lemma 4, we obtain, for 1/γ + 1/β = 1,

    J_1^{ε,1} ≤ ( E ∫_0^T | f_u(t, x^ε_t, u^ε_t) (u_t − u^ε_t) |^γ dt )^{1/γ} ( E ∫_0^T | p^ε_t − p̄^ε_t |^β dt )^{1/β}
             ≤ C ( E ∫_0^T | p^ε_t − p̄^ε_t |^β dt )^{1/β}
             ≤ C ( d^{αβ/2}(u^ε, ū^ε) )^{1/α} ≤ C ε^λ.
To estimate the second term J_1^{ε,2} we use assumption (2.1); then we have

    J_1^{ε,2} ≤ C E ∫_0^T | x^ε_t − x̄^ε_t |^β | p̄^ε_t (u_t − u^ε_t) | dt,

and, using the Hölder inequality with 1/γ + 1/α = 1, γ ∈ (2, ∞),

    J_1^{ε,2} ≤ C ( E ∫_0^T | x^ε_t − x̄^ε_t |^{αβ} | p̄^ε_t |^α dt )^{1/α} ( E ∫_0^T | u_t − u^ε_t |^γ dt )^{1/γ}
             ≤ C ( E ∫_0^T | x^ε_t − x̄^ε_t |^{αβ} | p̄^ε_t |^α dt )^{1/α}.

Applying the Hölder inequality once more, with (2 − α)/2 + α/2 = 1, it holds that

    J_1^{ε,2} ≤ C [ ( E ∫_0^T | x^ε_t − x̄^ε_t |^{2αβ/(2−α)} dt )^{(2−α)/2} ( E ∫_0^T | p̄^ε_t |^2 dt )^{α/2} ]^{1/α}
             ≤ C ( d^{2αβ/(2−α)}(u^ε, ū^ε) )^{((2−α)/2)·(1/α)} ≤ C ε^λ.
Next, by applying assumption (2.1) and the Hölder inequality, we can estimate J_1^{ε,3} as follows:

    J_1^{ε,3} ≤ C E ∫_0^T | ū^ε_t − u^ε_t |^β | p̄^ε_t | | u_t − u^ε_t | dt
             ≤ C ( E ∫_0^T | ū^ε_t − u^ε_t |^{αβ} | p̄^ε_t |^α dt )^{1/α} ( E ∫_0^T | u_t − u^ε_t |^γ dt )^{1/γ}
             ≤ C [ ( E ∫_0^T | ū^ε_t − u^ε_t |^{2αβ/(2−α)} dt )^{(2−α)/2} ( E ∫_0^T | p̄^ε_t |^2 dt )^{α/2} ]^{1/α}
             ≤ C ε^β.
Using similar arguments to those developed above for J^ε_2, J^ε_3 and J^ε_4, we can prove that I^ε_2 ≤ Cε^λ, and conclude

    E ∫_0^T H_u(t, x^ε_t, u^ε_t, p^ε_t, q^ε_t, r^ε_t(θ)) (u_t − u^ε_t) dt
      − E ∫_0^T H_u(t, x̄^ε_t, ū^ε_t, p̄^ε_t, q̄^ε_t, r̄^ε_t(θ)) (u_t − ū^ε_t) dt ≤ Cε^λ.   (3.17)
Finally, combining (3.3) and (3.17), the proof of Theorem 1 is complete. ∎
Application to Finance: Consumption-Investment Problem
In this section, we apply the stochastic maximum principle of near-optimality to a consumption-investment problem. We consider a financial market (see [4,11,12]) in which m + 1 securities are traded continuously. One of them is a bond, with price S_0(t) at time t governed by

    dS_0(t) = S_0(t) μ(t) dt,

where μ(t) is the force of interest at time t. There are also m stocks with prices-per-share S_i(t) at time t governed by

    dS_i(t) = S_i(t−) [ b_i(t) dt + ∑_{j=1}^d Σ_{ij}(t) dW_j(t) + ∑_{j=1}^l Φ_{ij}(t) dÑ_j(t) ],   (4.1)

for i = 1, 2, …, m.
These equations are driven by a d-dimensional Brownian motion W = (W_1, …, W_d)^* and an l-dimensional compensated multivariate Poisson process Ñ = (Ñ_1, …, Ñ_l)^* with intensity λ(t) = (λ_1(t), …, λ_l(t))^*. For notational simplicity, we assume that λ_i(t) ≡ 1. Thus, if N_i denotes a standard Poisson process with expectation t, then the process Ñ_i defined by Ñ_i(t) = N_i(t) − t is a martingale for each i ∈ {1, 2, …, l}; hence M = (W, Ñ) is an m-dimensional mixture of a d-dimensional Brownian motion and an l-dimensional compensated Poisson process.
Let us denote

    b(t) = (b_i(t))_{m×1}, Σ(t) = (Σ_{ij}(t))_{m×d}, Φ(t) = (Φ_{ij}(t))_{m×l}, σ(t) = (Σ(t), Φ(t))_{m×m}.
Assumptions We assume that:

(i) μ, b and σ are predictable with respect to {F_t}_{t∈[0,T]} and bounded uniformly in (t, ω);
(ii) P{ ∀ t ∈ [0, T] : Φ(t) > −1 } = 1;
(iii) there exists a positive constant c > 0 such that P{ ∀ t ∈ [0, T], ∀ ξ ∈ R^m : ξ^* σ σ^* ξ ≥ c |ξ|^2 } = 1 (non-degenerate diffusion).

These assumptions guarantee that the prices of the stocks are always positive and that the relative risk process given by θ(t) = σ(t)^{−1} [ b(t) − μ(t) 1 ] is bounded, where 1 is a column vector of ones.
For an investor, a portfolio π is a process whose components π_i represent the amount of money invested in the corresponding stock, i = 1, 2, …, m, and the consumption rate χ is the rate at which the investor withdraws funds for consumption.
Portfolio and consumption rate processes: We denote by Π the set of all processes π : [0, T] × Ω → R^m that are predictable with respect to {F_t}_{t∈[0,T]} and satisfy P{ ∫_0^T |π(t)|^2 dt < ∞ } = 1. The elements of Π are called portfolio processes. We denote by C the set of all processes χ : [0, T] × Ω → [0, ∞) that are predictable with respect to {F_t}_{t∈[0,T]} and satisfy P{ ∫_0^T χ(t) dt < ∞ } = 1. The elements of C are called consumption rate processes.
The wealth process: The wealth process x_t = x^{(ξ,π,χ)}_t corresponding to the initial capital ξ > 0, portfolio π, and consumption rate χ then satisfies the following SDE:

    dx_t = [ μ(t) x_t − χ(t) ] dt + π^*(t) [ b(t) − μ(t) 1 ] dt + π^*(t) Σ(t) dW_t + π^*(t) Φ(t) dÑ(t),
    x_0 = ξ.                                                              (4.2)
A pair (π, χ) ∈ Π × C is called an admissible pair for the initial capital ξ > 0 if the corresponding wealth process x satisfies P{ ∀ t ∈ [0, T] : x_t ≥ 0 } = 1. We denote by A ⊆ Π × C the class of such pairs.

Utility functions: A utility function is a function U ∈ C^1((0, ∞), R) that is strictly increasing, strictly concave, and has a derivative U′ : (0, ∞) → (0, ∞) that satisfies lim_{χ→∞} U′(χ) = 0.
Let x^ε be the solution of SDE (4.2) corresponding to (π^ε, χ^ε). The objective of the investor is to find a pair (π^ε, χ^ε) ∈ A that near-maximizes

    J(π^ε, χ^ε) = E[ ∫_0^T U_1(t, χ^ε_t) dt + U_2(x^ε_T) ],               (4.3)
where U_1(t, ·) and U_2 are two fixed utility functions. Now, let us take u^ε ≡ (π^ε, χ^ε), U = R^m × [0, ∞), U ≡ A, Θ ≡ {θ_1, θ_2, …, θ_l}, where θ_i = { x ∈ R^l : x_i = 1 and x_j = 0 for all j ≠ i }, f(t, x_t, u_t) = μ(t) x_t − χ(t) + π^* [ b(t) − μ(t) 1 ], σ(t, x, u) = π^* Σ(t), g(t, x, u, θ_j) = ∑_{i=1}^m π_i(t) Φ_{ij}(t), ℓ(t, x, u) ≡ U_1(t, χ), and h(x) = U_2(x). It is worth mentioning that all the assumptions made in the second section hold for the linear model described in this section. We now apply the stochastic maximum principle of near-optimality to the consumption-investment problem. Let r^ε_t(θ) = ( r^ε_t(θ_1), …, r^ε_t(θ_l) )_{1×l}; the Hamiltonian function associated with this problem is

    H(t, x^ε, (π^ε, χ^ε), p^ε, q^ε, r^ε) := p^ε_t ( μ(t) x^ε − χ^ε(t) + π^{ε*} [ b(t) − μ(t) 1 ] )
                                            + q^ε_t π^{ε*} Σ(t) + U_1(t, χ^ε) + r^ε_t π^{ε*} Φ(t),   (4.4)
and the adjoint equation takes the form

    −dp^ε_t = μ(t) p^ε_t dt − q^ε_t dW_t − r^ε_t dÑ_t,
    p^ε_T = U′_2(x^ε_T).                                                  (4.5)
As an example, let κ and τ be predictable processes, κ : [0, ∞) × Ω → R^d and τ : [0, ∞) × Ω → R^l, uniformly bounded in (t, ω) ∈ [0, T] × Ω. We define the stochastic processes Q and ξ^ε_{(κ,τ)} by

    Q(t) = exp( −∫_0^t μ(s) ds ) and ξ^ε_{(κ,τ)}(t) = Q(t) Z^ε_{(κ,τ)}(t),

where Z^ε_{(κ,τ)}(t) is an exponential martingale, the solution of the linear stochastic differential equation

    dZ^ε_{(κ,τ)}(t) = −∑_{i=1}^d Z^ε_{(κ,τ)}(t−) κ_i(t) dW_i(t) − ∑_{i=1}^l Z^ε_{(κ,τ)}(t−) τ_i(t) dÑ_i(t),
    Z^ε_{(κ,τ)}(0) = 1.                                                   (4.6)
Since κ and τ are uniformly bounded in (t, ω), the equation (4.6) admits one and only one solution, given by

    Z^ε_{(κ,τ)}(t) = 1 − ∑_{i=1}^d ∫_0^t Z^ε_{(κ,τ)}(s−) κ_i(s) dW_i(s) − ∑_{i=1}^l ∫_0^t Z^ε_{(κ,τ)}(s−) τ_i(s) dÑ_i(s).   (4.7)
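Since Z^ε_{(κ,τ)}(0) = 1 and the integrals in (4.7) are martingales, E[Z^ε_{(κ,τ)}(t)] = 1 for all t. A Monte Carlo sketch with constant κ and τ and d = l = 1 (the numerical values are illustrative assumptions) confirms this:

```python
import numpy as np

def simulate_Z(kappa=0.5, tau=0.3, T=1.0, n=500, n_paths=20000, seed=3):
    """Euler scheme for dZ = -Z(t-) kappa dW - Z(t-) tau dN~, Z(0) = 1,
    where N~ is a unit-intensity compensated Poisson process."""
    rng = np.random.default_rng(seed)
    dt = T / n
    Z = np.ones(n_paths)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        dNc = rng.poisson(dt, size=n_paths) - dt     # compensated increment
        Z = Z * (1.0 - kappa * dW - tau * dNc)
    return Z

Z_T = simulate_Z()
print(Z_T.mean())   # sample mean stays close to 1
```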
By using integration by parts for ξ^ε_{(κ,τ)}(t), we have

    ξ^ε_{(κ,τ)}(t) = Q(0) Z^ε_{(κ,τ)}(0) + ∫_0^t Z^ε_{(κ,τ)}(s−) dQ(s) + ∫_0^t Q(s−) dZ^ε_{(κ,τ)}(s)
                   = 1 − ∫_0^t Z^ε_{(κ,τ)}(s−) Q(s) μ(s) ds − ∑_{i=1}^d ∫_0^t Q(s−) Z^ε_{(κ,τ)}(s−) κ_i(s) dW_i(s)
                     − ∑_{i=1}^l ∫_0^t Q(s−) Z^ε_{(κ,τ)}(s−) τ_i(s) dÑ_i(s);
hence a simple calculation shows that the triplet (p^ε_t, q^ε_t, r^ε_t(θ)) defined by

    p^ε_t = ξ^ε_{(κ,τ)}(t), q^ε_t = −ξ^ε_{(κ,τ)}(t−) κ^*(t) and r^ε_t = −ξ^ε_{(κ,τ)}(t−) τ^*(t)

satisfies the adjoint equation (4.5), and the terminal condition means that U′_2(x^ε_T) = ξ^ε_{(κ,τ)}(T).
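The "simple calculation" is a product-rule computation: since dQ(t) = −μ(t) Q(t) dt has finite variation, the product rule produces no covariation term, and

```latex
dp^{\varepsilon}_t = d\bigl(Q(t)\,Z^{\varepsilon}_{(\kappa,\tau)}(t)\bigr)
 = Z^{\varepsilon}_{(\kappa,\tau)}(t^-)\,dQ(t) + Q(t^-)\,dZ^{\varepsilon}_{(\kappa,\tau)}(t)
 = -\mu(t)\,p^{\varepsilon}_t\,dt + q^{\varepsilon}_t\,dW_t + r^{\varepsilon}_t\,d\widetilde N_t ,
```

using q^ε_t = −ξ^ε_{(κ,τ)}(t−) κ^*(t) and r^ε_t = −ξ^ε_{(κ,τ)}(t−) τ^*(t).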
Remark When ε → 0, Theorem 1 reduces to the stochastic maximum principle of optimality developed in [4].
Acknowledgments The authors would like to thank the anonymous reviewers and the handling editor for their expert comments and suggestions, which helped us to improve the manuscript considerably. The first author was partially supported by the Algerian PNR project 8/u07/857. The second author was supported by the Czech CTU grant SGS12/197/OHK4/3T/14 and MSMT grant INGO II INFRA LG12020.
References
1. Bensoussan, A.: Lectures on stochastic control. In: Lecture Notes in Mathematics, vol. 972, pp. 1–62. Springer, Berlin (1983)
2. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
3. Bouchard, B., Elie, R.: Discrete time approximation of decoupled forward–backward SDE with jumps. Stoch. Process. Appl. 118(1), 53–75 (2008)
4. Cadenillas, A.: A stochastic maximum principle for systems with jumps, with applications to finance. Syst. Control Lett. 47, 433–444 (2002)
5. Cadenillas, A., Karatzas, I.: The stochastic maximum principle for linear convex optimal control with random coefficients. SIAM J. Control Optim. 33, 590–624 (1995)
6. Chighoub, F., Mezerdi, B.: Near optimality conditions in stochastic control of jump diffusion processes. Syst. Control Lett. 60, 907–916 (2011)
7. Elliott, R.J.: The optimal control of diffusions. Appl. Math. Optim. 22, 229–240 (1990)
8. Ekeland, I.: On the variational principle. J. Math. Anal. Appl. 47, 443–474 (1974)
9. Gabasov, R., Kirillova, M., Mordukhovich, B.S.: The ε-maximum principle for suboptimal controls. Soviet Math. Dokl. 27, 95–99 (1983)
10. Framstad, N.C., Øksendal, B., Sulem, A.: Sufficient stochastic maximum principle for the optimal control of jump diffusions and applications to finance. J. Optim. Theory Appl. 121, 77–98 (2004)
11. Jeanblanc-Picqué, M., Pontier, M.: Optimal portfolio for a small investor in a market model with discontinuous prices. Appl. Math. Optim. 22, 287–310 (1990)
12. Karatzas, I., Lehoczky, J.P., Shreve, S.E.: Optimal portfolio and consumption decisions for a “small investor” on a finite horizon. SIAM J. Control Optim. 25, 1557–1586 (1987)
13. Mordukhovich, B.S.: Approximation Methods in Problems of Optimization and Control. Nauka, Moscow (1988)
14. Øksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions, 2nd edn. Springer, Berlin (2007)
15. Pan, L.P., Teo, K.L.: Near-optimal controls of a class of Volterra integral systems. J. Optim. Theory Appl. 101(2), 355–373 (1999)
16. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V.: The Mathematical Theory of Optimal Processes. Interscience, New York (1962)
17. Rishel, R.: A minimum principle for controlled jump processes. In: Lecture Notes in Economics and Mathematical Systems, vol. 107, pp. 493–508. Springer, New York (1975)
18. Shi, J., Wu, Z.: Maximum principle for fully coupled stochastic control system with random jumps. In: Proceedings of the 26th Chinese Control Conference, pp. 375–380. Zhangjiajie (2007)
19. Tang, S.L., Li, X.J.: Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 32, 1447–1475 (1994)
20. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
21. Zhou, X.Y.: Deterministic near-optimal controls. Part I: Necessary and sufficient conditions for near optimality. J. Optim. Theory Appl. 85, 473–488 (1995)
22. Zhou, X.Y.: Deterministic near-optimal controls. Part II: Dynamic programming and viscosity solution approach. Math. Oper. Res. (1996)
23. Zhou, X.Y.: Stochastic near-optimal controls: necessary and sufficient conditions for near-optimality. SIAM J. Control Optim. 36(3), 929–947 (1998)