OPTIMALITY NECESSARY CONDITIONS IN SINGULAR STOCHASTIC CONTROL PROBLEMS WITH NON SMOOTH DATA*

K. BAHLALI†, F. CHIGHOUB‡, B. DJEHICHE§, AND B. MEZERDI¶

Abstract. The present paper studies the stochastic maximum principle in singular optimal control, where the state is governed by a stochastic differential equation with non-smooth coefficients, allowing both a classical control and a singular control. The proof of the main result is based on the approximation of the initial problem by a sequence of control problems with smooth coefficients. We then apply Ekeland's variational principle to this approximating sequence of control problems, in order to establish necessary conditions satisfied by a sequence of near-optimal controls. Finally, we prove the convergence of the scheme, using Krylov's inequality in the non-degenerate case and the Bouleau-Hirsch flow property in the degenerate one. The adjoint process obtained is given by means of distributional derivatives of the coefficients.

Key words. stochastic differential equation, stochastic control, maximum principle, singular control, distributional derivative, adjoint process, variational principle.

AMS subject classification. 49J30, 49A55, 60G44, 93E20.

1. Introduction. We consider stochastic control problems of nonlinear systems, where the control variable has two components, the first being absolutely continuous and the second singular. More precisely, we study the stochastic maximum principle in optimal control for problems in which the state evolves according to the $d$-dimensional stochastic differential equation

\[
\begin{cases}
dx_t = b(t,x_t,u_t)\,dt + \sigma(t,x_t)\,dB_t + G_t\,d\xi_t, & t\in[0,T],\\
x_0 = \zeta,
\end{cases}
\tag{1.1}
\]

and the expected cost has the form

\[
J(u,\xi) = E\left[\int_0^T f(t,x_t,u_t)\,dt + \int_0^T k_t\,d\xi_t + g(x_T)\right].
\tag{1.2}
\]

Singular control problems have numerous applications. They appear in mathematical finance, e.g. in the problem of optimal consumption-investment with transaction costs (see Davis and Norman [14]; Shreve and Soner [25]). A huge literature has been produced on the subject, including Beneš, Shepp and Witsenhausen [6], Chow, Menaldi and Robin [12], Karatzas and Shreve [19], Davis and Norman [14], and Haussmann and Suo [17], [18]; see [17] for a complete list of references on the subject. The approaches used in these papers are mainly based on dynamic programming. It was shown in particular that the value function is a solution of a variational inequality, and the optimal state is a reflected diffusion at the free boundary. Note that in [17] the authors apply the compactification method to show the existence of an optimal relaxed singular control.

* Partially supported by PHC Tassili 07 MDU 705.
† UFR Sciences, UTV, B.P. 132, 83957 La Garde Cedex, France (E-mail: [email protected]).
‡ Laboratory of Applied Mathematics, University Med Khider, P.O. Box 145, Biskra (07000), Algeria (E-mail: [email protected]).
§ Dept. of Mathematics, Royal Institute of Technology, S-100 44 Stockholm, Sweden (E-mail: [email protected]).
¶ Laboratory of Applied Mathematics, University Med Khider, P.O. Box 145, Biskra (07000), Algeria (E-mail: [email protected]).


The other major approach to the study of singular control problems is the search for necessary conditions satisfied by an optimal control. The first version of the stochastic maximum principle that covers singular control problems was obtained by Cadenillas and Haussmann [10], who consider linear dynamics, a convex cost criterion and convex state constraints. The method used in [10] is based on well-known principles of convex analysis, related to the minimization of convex, Gâteaux-differentiable functionals defined on a convex closed set.

A first-order weak maximum principle was derived by Bahlali and Chala [1], in which convex perturbations are used for both the absolutely continuous and the singular components. A second-order stochastic maximum principle for nonlinear SDEs with a controlled diffusion matrix was obtained by Bahlali and Mezerdi [4], extending Peng's maximum principle [23] to singular control problems. This result is based on two perturbations of the optimal control: the first is a spike variation of the absolutely continuous component of the control, and the second is a convex perturbation of the singular component. A similar approach was used by Bahlali et al. [2] to study the relaxed stochastic maximum principle in the case of an uncontrolled diffusion coefficient.

On the other hand, the stochastic maximum principle for classical control problems (without the singular part) has been studied under weakened differentiability assumptions on the data. The first result was derived by Mezerdi [22], in the case of an SDE with a non-smooth drift, by using Clarke generalized gradients and stable convergence of probability measures. In [3], [5], the authors extend the classical stochastic maximum principle to the case where the coefficients of the diffusion process are only Lipschitz continuous. The adjoint process obtained is given by means of generalized derivatives of the coefficients.

Our aim in this paper is to extend the stochastic maximum principle in singular optimal control to the case where the coefficients $b$, $\sigma$, $f$ and $g$ are Lipschitz continuous in the state variable. The main result is proved via an approximation scheme of the initial control problem by a sequence of control problems whose data are smooth functions. Ekeland's variational principle is then applied to derive necessary conditions for near optimality satisfied by a sequence of near-optimal controls. The convergence of the approximation scheme is obtained by using Krylov's estimate in the non-degenerate case and the Bouleau-Hirsch flow property in the degenerate case.

2. Assumptions and preliminaries. Let $(\Omega,\mathcal F,\mathcal F_t,P)$ be a filtered probability space satisfying the usual conditions, on which a $d$-dimensional Brownian motion $(B_t)$ is defined with the filtration $(\mathcal F_t)$. Let $T$ be a strictly positive real number, $A_1$ a non-empty subset of $\mathbb R^n$ and $A_2 = ([0,\infty))^m$. $\mathcal U_1$ is the class of measurable, adapted processes $u:[0,T]\times\Omega\to A_1$, and $\mathcal U_2$ is the class of measurable, adapted processes $\xi:[0,T]\times\Omega\to A_2$.

Definition 2.1. An admissible control is a pair $(u,\xi)$ of measurable $A_1\times A_2$-valued, $\mathcal F_t$-adapted processes, such that $\xi$ is of bounded variation, nondecreasing, left-continuous with right limits and $\xi_0 = 0$.

We denote by $\mathcal U = \mathcal U_1\times\mathcal U_2$ the set of all admissible controls. For $(u,\xi)\in\mathcal U$, suppose that the state $x_t = x_t^{(u,\xi)}\in\mathbb R^d$ is described by the equation

\[
\begin{cases}
dx_t = b(t,x_t,u_t)\,dt + \sigma(t,x_t)\,dB_t + G_t\,d\xi_t, & t\in[0,T],\\
x_0 = \zeta.
\end{cases}
\tag{2.1}
\]


Since $d\xi_t$ may be singular with respect to the Lebesgue measure $dt$, we call $\xi$ the singular part of the control and the process $u$ its absolutely continuous part. Suppose we are given a cost functional $J(u,\xi)$ of the form

\[
J(u,\xi) = E\left[\int_0^T f(t,x_t,u_t)\,dt + \int_0^T k_t\,d\xi_t + g(x_T)\right],
\tag{2.2}
\]

where $b:[0,T]\times\mathbb R^d\times A_1\to\mathbb R^d$, $\sigma:[0,T]\times\mathbb R^d\to\mathbb R^{d\times d}$, $f:[0,T]\times\mathbb R^d\times A_1\to\mathbb R$, $g:\mathbb R^d\to\mathbb R$, $G:[0,T]\to\mathbb R^{d\times m}$, and $k:[0,T]\to([0,\infty))^m$.

Assume that $b$, $\sigma$, $f$ and $g$ are Borel measurable, bounded functions and that there exists $M>0$ such that for all $(t,x,y,a)$ in $\mathbb R_+\times\mathbb R^d\times\mathbb R^d\times A_1$:

\[
|b(t,x,a) - b(t,y,a)| + |\sigma(t,x) - \sigma(t,y)| \le M\,|x-y|,
\tag{2.3}
\]

\[
|f(t,x,a) - f(t,y,a)| + |g(x) - g(y)| \le M\,|x-y|,
\tag{2.4}
\]

\[
b(t,x,a) \text{ and } f(t,x,a) \text{ are continuous in } a, \text{ uniformly in } (t,x),
\tag{2.5}
\]

and

\[
G,\ k \text{ are continuous and bounded.}
\tag{2.6}
\]

The problem is to find $(u,\xi)\in\mathcal U$ such that

\[
J(u,\xi) = \min_{(v,\eta)\in\mathcal U} J(v,\eta).
\]

Any $(u,\xi)$ satisfying the above property is called an optimal control of problem (2.1), (2.2). The corresponding state process $x$ is called the optimal state process.
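
To make the objects above concrete, the following sketch simulates a one-dimensional instance of the state equation (2.1) by an Euler scheme and estimates the cost (2.2) by Monte Carlo. The coefficients b, σ, f, g, G, k, the piecewise-constant control u and the single-jump singular control ξ are illustrative placeholders chosen only for this demo; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, n_paths = 1.0, 200, 5000
dt = T / N
t_grid = np.linspace(0.0, T, N + 1)

# Illustrative bounded, Lipschitz-in-x data (placeholder choices).
b = lambda t, x, u: -x + u                      # drift b(t, x, u)
sigma = lambda t, x: 0.5 + 0.1 * np.cos(x)      # diffusion sigma(t, x)
f = lambda t, x, u: x**2 + 0.1 * u**2           # running cost f(t, x, u)
g = lambda x: np.abs(x)                         # terminal cost g(x)
G = lambda t: 1.0                               # G_t
k = lambda t: 0.3                               # k_t

# Admissible control: u piecewise constant, xi nondecreasing with one jump at t = 0.5.
u_path = np.where(t_grid < 0.5, 0.2, -0.2)
xi_path = np.where(t_grid <= 0.5, 0.0, 0.4)     # xi_0 = 0, left-continuous
dxi = np.diff(xi_path)

x = np.zeros((n_paths, N + 1))
running = np.zeros(n_paths)
singular = 0.0
for i in range(N):
    t = t_grid[i]
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    running += f(t, x[:, i], u_path[i]) * dt
    singular += k(t) * dxi[i]                   # deterministic here
    x[:, i + 1] = (x[:, i]
                   + b(t, x[:, i], u_path[i]) * dt
                   + sigma(t, x[:, i]) * dB
                   + G(t) * dxi[i])

J_estimate = np.mean(running + g(x[:, -1])) + singular
print("Monte Carlo estimate of J(u, xi):", J_estimate)
```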

Under the above hypotheses, the SDE (2.1) has a unique strong solution $x_t$ such that for any $p>0$,

\[
E\Big[\sup_{0\le t\le T}|x_t|^p\Big] < +\infty.
\]

Moreover, the cost functional is well defined from $\mathcal U$ into $\mathbb R$.

Since $b$, $\sigma^j$ (the $j$-th column of the matrix $\sigma$), $f$ and $g$ are Lipschitz continuous in the state variable, they are differentiable almost everywhere in the sense of the Lebesgue measure (Rademacher's theorem, see [13]). Let us denote by $b_x$, $\sigma_x$, $f_x$ and $g_x$ any Borel measurable functions such that

\[
\begin{aligned}
\partial_x b(t,x,a) &= b_x(t,x,a) \quad dx\text{-a.e.},\\
\partial_x f(t,x,a) &= f_x(t,x,a) \quad dx\text{-a.e.},\\
\partial_x \sigma(t,x) &= \sigma_x(t,x) \quad dx\text{-a.e.},\\
\partial_x g(x) &= g_x(x) \quad dx\text{-a.e.}
\end{aligned}
\]

It is clear that these almost-everywhere derivatives are bounded by the Lipschitz constant $M$. Finally, assume that $b_x(t,x,a)$ and $f_x(t,x,a)$ are continuous in $a$, uniformly in $(t,x)$.


Let us recall Krylov's inequality and Ekeland's variational principle, which will be used in the sequel.

Theorem 2.2 (Krylov [20]). Let $(\Omega,\mathcal F,\mathcal F_t,P)$ be a filtered probability space, $(B_t)_{t\ge 0}$ a $d$-dimensional Brownian motion, and $b:\Omega\times\mathbb R_+\to\mathbb R^d$, $\sigma:\Omega\times\mathbb R_+\to\mathbb R^{d\times d}$ bounded adapted processes such that there exists $c>0$ with $\lambda^*\,\sigma\sigma^*(t)\,\lambda \ge c\,|\lambda|^2$ for every $\lambda\in\mathbb R^d$ and every $t\in[0,T]$. Let

\[
x_t = x + \int_0^t b(s,\omega)\,ds + \int_0^t \sigma(s,\omega)\,dB_s
\]

be an Itô process. Then for every Borel function $f:\mathbb R_+\times\mathbb R^d\to\mathbb R$ with support in $[0,T]\times B(0,M)$, the following inequality holds:

\[
E\left[\int_0^T |f(t,x_t)|\,dt\right] \le K\left[\int_0^T\!\!\int_{B(0,M)} |f(t,x)|^{d+1}\,dt\,dx\right]^{\frac{1}{d+1}},
\]

where $K$ is a constant and $B(0,M)$ is the ball of center $0$ and radius $M$.

Lemma 2.3 (Ekeland's variational principle [15]). Let $(S,d)$ be a metric space and $\phi:S\to\mathbb R\cup\{+\infty\}$ be lower semicontinuous and bounded from below. For $\varepsilon\ge 0$, suppose $u^\varepsilon\in S$ satisfies $\phi(u^\varepsilon)\le\inf_{u\in S}\phi(u)+\varepsilon$. Then for any $\lambda>0$ there exists $u^\lambda\in S$ such that

\[
\phi(u^\lambda)\le\phi(u^\varepsilon),\qquad
d(u^\lambda,u^\varepsilon)\le\lambda,\qquad
\phi(u^\lambda)\le\phi(u)+\frac{\varepsilon}{\lambda}\,d(u,u^\lambda)\quad\text{for all }u\in S.
\]
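
As a small sanity check on Lemma 2.3 (a toy illustration, not part of the paper's argument), the sketch below runs the standard constructive argument on a finite metric space: starting from an ε-minimizer, it repeatedly moves to a point that improves φ by more than (ε/λ) times the distance travelled; the stopping point then satisfies the three conclusions of the lemma. The space, metric and function are arbitrary illustrative choices.

```python
import numpy as np

# Finite metric space S (a grid in [0, 2]) with the usual distance, and a
# function phi bounded from below; everything here is an illustrative choice.
S = np.linspace(0.0, 2.0, 201)
phi = lambda x: (x - 1.3) ** 2 + 0.05 * np.sin(25 * x)
phi_vals = phi(S)

eps, lam = 0.05, 0.2

# Pick an eps-minimizer u_eps (a point whose value is within eps of the infimum).
candidates = np.where(phi_vals <= phi_vals.min() + eps)[0]
u_star = S[int(np.argmin(phi_vals))]
u_eps = S[candidates[int(np.argmax(np.abs(S[candidates] - u_star)))]]

# Greedy construction: move while some point improves phi by (eps/lam) * distance.
u = u_eps
while True:
    gains = phi(u) - phi_vals - (eps / lam) * np.abs(S - u)
    j = int(np.argmax(gains))
    if gains[j] <= 0.0:
        break
    u = S[j]

# The three conclusions of Ekeland's variational principle.
assert phi(u) <= phi(u_eps)
assert abs(u - u_eps) <= lam + 1e-12
assert np.all(phi(u) <= phi_vals + (eps / lam) * np.abs(S - u) + 1e-12)
print("u_eps =", u_eps, " u_lambda =", u, " phi(u_lambda) =", phi(u))
```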

To apply Ekeland's variational principle to the control problem, we have to endow the set of controls with an appropriate metric. For any $(u,\xi),(v,\eta)\in\mathcal U$, we set

\[
d_1(u,v) = P\otimes dt\,\big\{(\omega,t)\in\Omega\times[0,T]:\ v(\omega,t)\ne u(\omega,t)\big\},
\tag{2.7}
\]

\[
d_2(\xi,\eta) = \Big(E\Big[\sup_{0\le t\le T}|\xi_t-\eta_t|^2\Big]\Big)^{\frac12},
\tag{2.8}
\]

\[
d\big((u,\xi),(v,\eta)\big) = d_1(u,v) + d_2(\xi,\eta),
\tag{2.9}
\]

where $P\otimes dt$ is the product measure of $P$ with the Lebesgue measure $dt$.

Lemma 2.4. (1) $(\mathcal U,d)$ is a complete metric space.
(2) The cost functional $J$ is continuous from $\mathcal U$ into $\mathbb R$.

Proof. (1) It is clear that $(\mathcal U_2,d_2)$ is a complete metric space. Moreover, it was shown in [19] that $(\mathcal U_1,d_1)$ is a complete metric space. Hence $(\mathcal U,d)$ is a complete metric space.

Item (2) is proved as in [22], [26].
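
On a time grid, the distances (2.7)-(2.9) can be approximated directly from sampled control paths. The sketch below does this for two arbitrary, purely illustrative control pairs; it is only meant to make explicit what $d_1$ and $d_2$ measure.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, n_paths = 1.0, 200, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)

# Two absolutely continuous controls u, v and two singular controls xi, eta
# (nondecreasing, starting at 0); all are arbitrary illustrative paths.
u = np.where(t < 0.5, 0.2, -0.2) * np.ones((n_paths, 1))
v = np.where(t < 0.6, 0.2, -0.2) * np.ones((n_paths, 1))   # differs on [0.5, 0.6)
xi = np.cumsum(rng.uniform(0.0, 0.010, (n_paths, N + 1)), axis=1)
eta = np.cumsum(rng.uniform(0.0, 0.012, (n_paths, N + 1)), axis=1)

# d1(u, v) = (P x dt)-measure of {(omega, t): u != v}, approximated on the grid.
d1 = np.mean(np.sum((u != v) * dt, axis=1))

# d2(xi, eta) = ( E[ sup_t |xi_t - eta_t|^2 ] )^{1/2}.
d2 = np.sqrt(np.mean(np.max((xi - eta) ** 2, axis=1)))

print("d1(u, v) ~", d1, "  d2(xi, eta) ~", d2)
```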

3. The non-degenerate case. In this section, we assume the following condition:

\[
\exists\,c>0,\ \forall\lambda\in\mathbb R^d,\ \forall(t,x)\in[0,T]\times\mathbb R^d:\quad \lambda^*\,\sigma(t,x)\sigma^*(t,x)\,\lambda \ge c\,|\lambda|^2.
\tag{3.1}
\]

3.1. The main result. The main result of this section is stated in the following theorem.

Theorem 3.1 (Stochastic maximum principle). Let $(u,\xi)$ be an optimal control for the controlled system (2.1), (2.2) and let $x$ be the corresponding optimal trajectory. Then there exists a measurable $\mathcal F_t$-adapted process $p_t$ satisfying

\[
p_t := -E\left[\int_t^T \Phi^*(s,t)\,f_x(s,x_s,u_s)\,ds + \Phi^*(T,t)\,g_x(x_T)\ \Big|\ \mathcal F_t\right],
\tag{3.2}
\]

such that for all $a\in A_1$ and $\eta\in\mathcal U_2$,

\[
0 \le H(t,x_t,a,p_t) - H(t,x_t,u_t,p_t) \quad dt\text{-a.e., } P\text{-a.s.},
\tag{3.3}
\]

and

\[
0 \le E\int_0^T (k_t + G_t^*\,p_t)\,d(\eta-\xi)_t,
\tag{3.4}
\]

where the Hamiltonian $H$ associated to the control problem is

\[
H(t,x,u,p) = p\cdot b(t,x,u) - f(t,x,u),
\tag{3.5}
\]

and $\Phi(s,t)$, $s\ge t$, is the fundamental solution of the linear equation

\[
\begin{cases}
d\Phi(s,t) = b_x(s,x_s,u_s)\,\Phi(s,t)\,ds + \sum_{1\le j\le d}\sigma^j_x(s,x_s)\,\Phi(s,t)\,dB^j_s, & s\ge t,\\
\Phi(t,t) = I_d.
\end{cases}
\tag{3.6}
\]

Here $*$ denotes the transpose.
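
The adjoint process (3.2) involves only the linearization (3.6) along the optimal trajectory. Purely as an illustration (in dimension one, with smooth placeholder coefficients so that b_x, σ_x, f_x, g_x are classical derivatives, with the singular part dropped, i.e. G ≡ 0, with a fixed and not necessarily optimal control, and with t = 0 so that the conditional expectation reduces to an ordinary expectation), Φ(s,0) can be advanced by an Euler scheme and p_0 estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, n_paths = 1.0, 200, 20000
dt = T / N

# Smooth placeholder data (d = 1); their x-derivatives are classical.
b, b_x = lambda t, x, u: -x + u, lambda t, x, u: -1.0
sig, sig_x = lambda t, x: 0.5 + 0.1 * np.sin(x), lambda t, x: 0.1 * np.cos(x)
f_x = lambda t, x, u: 2.0 * x
g_x = lambda x: np.tanh(x)
u_of_t = lambda t: 0.2 if t < 0.5 else -0.2   # a fixed (not necessarily optimal) control

x = np.zeros(n_paths)          # state x_s started at 0
Phi = np.ones(n_paths)         # fundamental solution Phi(s, 0), Phi(0, 0) = 1
integral = np.zeros(n_paths)   # accumulates Phi(s,0) f_x(s, x_s, u_s) ds
for i in range(N):
    s = i * dt
    u = u_of_t(s)
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    integral += Phi * f_x(s, x, u) * dt
    # Linearized equation (3.6) and state equation share the same Brownian increment.
    Phi += b_x(s, x, u) * Phi * dt + sig_x(s, x) * Phi * dB
    x += b(s, x, u) * dt + sig(s, x) * dB

p0 = -np.mean(integral + Phi * g_x(x))   # Monte Carlo version of (3.2) at t = 0
print("p_0 estimate:", p0)
```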

3.2. Proof of the main result. Let $\varphi$ be a non-negative smooth function defined on $\mathbb R^d$, with support in the unit ball, such that $\int_{\mathbb R^d}\varphi(y)\,dy = 1$. Define the following smooth functions by convolution:

\[
\begin{aligned}
b^n(t,x,a) &= n^d\int_{\mathbb R^d} b(t,x-y,a)\,\varphi(ny)\,dy, &
f^n(t,x,a) &= n^d\int_{\mathbb R^d} f(t,x-y,a)\,\varphi(ny)\,dy,\\
\sigma^{j,n}(t,x) &= n^d\int_{\mathbb R^d} \sigma^j(t,x-y)\,\varphi(ny)\,dy, &
g^n(x) &= n^d\int_{\mathbb R^d} g(x-y)\,\varphi(ny)\,dy.
\end{aligned}
\]
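
A quick one-dimensional numerical illustration of this smoothing (with an illustrative bump kernel and the Lipschitz test function b(x) = |x|, neither taken from the paper): the convolutions are C¹, their sup-distance to b decays like C/n, and their derivatives converge to sign(x) away from the kink, in line with Lemma 3.2 below.

```python
import numpy as np

# Smooth bump kernel phi supported in the unit ball, normalized to integrate to 1.
def phi(y):
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

ygrid = np.linspace(-1.0, 1.0, 4001)
phi_mass = np.trapz(phi(ygrid), ygrid)

def mollify(func, n, x):
    """b^n(x) = n * int func(x - y) phi(n y) dy, computed by quadrature on |y| <= 1/n."""
    y = np.linspace(-1.0 / n, 1.0 / n, 2001)
    w = n * phi(n * y) / phi_mass          # normalized scaled kernel
    return np.array([np.trapz(func(xi - y) * w, y) for xi in x])

b = np.abs                                  # Lipschitz, not differentiable at 0
x = np.linspace(-2.0, 2.0, 401)
for n in (5, 20, 80):
    bn = mollify(b, n, x)
    err = np.max(np.abs(bn - b(x)))
    dbn = np.gradient(bn, x)                # numerical derivative of the smooth b^n
    away = np.abs(x) > 0.2
    der_err = np.max(np.abs(dbn[away] - np.sign(x[away])))
    print(f"n={n:3d}  sup|b^n-b|={err:.4f}  sup_(|x|>0.2)|b^n_x-sign|={der_err:.4f}")
```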

Lemma 3.2. (1) The functions $b^n(t,x,a)$, $\sigma^{j,n}(t,x)$, $f^n(t,x,a)$ and $g^n(x)$ are Borel measurable, bounded functions, Lipschitz continuous in $x$ with the constant $M$.

(2) There exists a positive constant $C$, independent of $t$, $x$ and $n$, such that for every $t$ in $[0,T]$:

\[
|b^n(t,x,a) - b(t,x,a)| + |\sigma^{j,n}(t,x) - \sigma^j(t,x)| \le \frac{C}{n},
\]

\[
|f^n(t,x,a) - f(t,x,a)| + |g^n(x) - g(x)| \le \frac{C}{n}.
\]

(3) The functions $b^n(t,x,a)$, $f^n(t,x,a)$, $\sigma^{j,n}(t,x)$ and $g^n(x)$ are $C^1$-functions in $x$, and for all $t$ in $[0,T]$ we have

\[
\begin{aligned}
\lim_{n\to\infty} b^n_x(t,x,a) &= b_x(t,x,a) \quad dx\text{-a.e.},\\
\lim_{n\to\infty} f^n_x(t,x,a) &= f_x(t,x,a) \quad dx\text{-a.e.},\\
\lim_{n\to\infty} \sigma^{j,n}_x(t,x) &= \sigma^j_x(t,x) \quad dx\text{-a.e.},\\
\lim_{n\to\infty} g^n_x(x) &= g_x(x) \quad dx\text{-a.e.}
\end{aligned}
\]

(4) For every $p\ge 1$ and $R>0$,

\[
\lim_{n\to\infty}\iint_{B(0,R)\times[0,T]}\sup_{a\in A_1}|b^n_x(t,x,a) - b_x(t,x,a)|^p\,dx\,dt = 0,
\]

\[
\lim_{n\to\infty}\iint_{B(0,R)\times[0,T]}\sup_{a\in A_1}|f^n_x(t,x,a) - f_x(t,x,a)|^p\,dx\,dt = 0.
\]

Proof. Statements (1), (2) and (3) are classical facts (see [16] for the proof). Statement (4) is proved as in [20].

For $n\in\mathbb N^*$, let us consider the sequence of perturbed control problems obtained by replacing $b$, $\sigma$, $f$ and $g$ by $b^n$, $\sigma^n$, $f^n$ and $g^n$. Let $y$ denote the solution of the controlled stochastic differential equation

\[
\begin{cases}
dy_t = b^n(t,y_t,u_t)\,dt + \sigma^n(t,y_t)\,dB_t + G_t\,d\xi_t,\\
y_0 = \zeta.
\end{cases}
\tag{3.7}
\]

The corresponding cost is given by

\[
J^n(u,\xi) = E\left[\int_0^T f^n(t,y_t,u_t)\,dt + \int_0^T k_t\,d\xi_t + g^n(y_T)\right].
\tag{3.8}
\]

Lemma 3.3. Let $(u,\xi)\in\mathcal U$ and let $x_t$ and $y_t$ be the solutions of (2.1) and (3.7), respectively, corresponding to the control $(u,\xi)$. Then we have:

(1) $E\big[\sup_{0\le t\le T}|x_t - y_t|^2\big] \le M_1\,(\rho_n)^2$, where $\rho_n = \dfrac{C}{n}$;

(2) $|J^n(u,\xi) - J(u,\xi)| \le M_2\,\rho_n$.

Proof. Since $x_t - y_t$ and $J^n(u,\xi) - J(u,\xi)$ do not depend on the singular part, the lemma follows from standard arguments of stochastic calculus and Lemma 3.2.


Let us suppose that $(u,\xi)\in\mathcal U$ is an optimal control for the initial control problem (2.1), (2.2). Note that $(u,\xi)$ is not necessarily optimal for the perturbed control problem (3.7), (3.8). However, by Lemma 3.3 there exists a sequence $(\varepsilon_n) = (2M_2\,\rho_n)$ of positive real numbers converging to $0$ such that

\[
J^n(u,\xi) \le \inf_{(v,\eta)\in\mathcal U} J^n(v,\eta) + \varepsilon_n.
\]

The control $(u,\xi)$ is therefore $\varepsilon_n$-optimal for the perturbed control problem. As in Lemma 2.4, $J^n(\cdot,\cdot)$ is continuous on $\mathcal U = \mathcal U_1\times\mathcal U_2$ endowed with the metric $d = d_1 + d_2$ defined by (2.9). By Ekeland's variational principle (Lemma 2.3) applied to $(u,\xi)$ with $\lambda_n = \varepsilon_n^{2/3}$, there exists an admissible control $(u^n,\xi^n)$ such that

\[
d\big((u,\xi),(u^n,\xi^n)\big) \le \varepsilon_n^{2/3},
\]

and

\[
J^n_\lambda(u^n,\xi^n) \le J^n_\lambda(v,\eta), \quad \text{for all } (v,\eta)\in\mathcal U,
\]

where

\[
J^n_\lambda(v,\eta) = J^n(v,\eta) + \varepsilon_n^{1/3}\,d\big((v,\eta),(u^n,\xi^n)\big).
\]

This means that $(u^n,\xi^n)$ is an optimal control for the perturbed system with the new cost functional $J^n_\lambda$. The controlled process $x^n$ is defined as the unique solution of the stochastic differential equation

\[
\begin{cases}
dx^n_t = b^n(t,x^n_t,u^n_t)\,dt + \sigma^n(t,x^n_t)\,dB_t + G_t\,d\xi^n_t,\\
x^n_0 = \zeta.
\end{cases}
\tag{3.9}
\]

We consider $\Phi^n(s,t)$, $s\ge t$, the fundamental solution of the linear stochastic differential equation

\[
\begin{cases}
d\Phi^n(s,t) = b^n_x(s,x^n_s,u^n_s)\,\Phi^n(s,t)\,ds + \sum_{1\le j\le d}\sigma^{j,n}_x(s,x^n_s)\,\Phi^n(s,t)\,dB^j_s,\\
\Phi^n(t,t) = I_d.
\end{cases}
\tag{3.10}
\]

Note that $b^n_x$, $\sigma^{j,n}_x$ ($j = 1,\dots,d$) are, respectively, the matrices of first-order partial derivatives of $b^n$, $\sigma^{j,n}$ ($j = 1,\dots,d$) with respect to $x$.

Proposition 3.4. For each integer $n$, there exist an admissible control $(u^n,\xi^n)$ and an $(\mathcal F_t)$-adapted process $p^n_t$ given by

\[
p^n_t = -E\left[\int_t^T \Phi^{n,*}(s,t)\,f^n_x(s,x^n_s,u^n_s)\,ds + \Phi^{n,*}(T,t)\,g^n_x(x^n_T)\ \Big|\ \mathcal F_t\right],
\tag{3.11}
\]

and a Lebesgue null set $N$ such that for $t\in N^c$,

\[
E\big[H^n(t,x^n_t,a,p^n_t) - H^n(t,x^n_t,u^n_t,p^n_t)\big] \ge -\varepsilon_n^{1/3}\,M_1,
\tag{3.12}
\]

and

\[
E\int_0^T (k_t + G_t^*\,p^n_t)\,d(\eta-\xi^n)_t \ge -\varepsilon_n^{1/3}\,M_2,
\tag{3.13}
\]

for all $a\in A_1$ and $\eta\in\mathcal U_2$. The Hamiltonian $H^n$ is defined by

\[
H^n(t,x,u,p) = p\cdot b^n(t,x,u) - f^n(t,x,u).
\tag{3.14}
\]

Here $*$ denotes the transpose.

Proof. According to the optimality of $(u^n,\xi^n)$ for the perturbed system with cost functional $J^n_\lambda$, we can use the spike variation method to derive a maximum principle for $(u^n,\xi^n)$. Let $t_0\in[0,T]$, $a\in A_1$ and $\eta\in\mathcal U_2$. For any $\varepsilon>0$, define the two perturbations $(u^{n,\varepsilon}_t,\xi^n_t)$ and $(u^n_t,\xi^{n,\varepsilon}_t)$ by

\[
(u^{n,\varepsilon}_t,\xi^n_t) =
\begin{cases}
(a,\xi^n_t), & t\in[t_0,t_0+\varepsilon],\\
(u^n_t,\xi^n_t), & t\in[0,T]\setminus[t_0,t_0+\varepsilon],
\end{cases}
\]

and

\[
(u^n_t,\xi^{n,\varepsilon}_t) = \big(u^n_t,\ \xi^n_t + \varepsilon\,(\eta_t - \xi^n_t)\big).
\]

Since $(u^n_t,\xi^n_t)$ is optimal for the cost $J^n_\lambda$, we have

\[
0 \le J^n_\lambda(u^{n,\varepsilon},\xi^n) - J^n_\lambda(u^n,\xi^n)
\quad\text{and}\quad
0 \le J^n_\lambda(u^n,\xi^{n,\varepsilon}) - J^n_\lambda(u^n,\xi^n).
\]

This implies that

\[
0 \le J^n(u^{n,\varepsilon},\xi^n) - J^n(u^n,\xi^n) + \varepsilon_n^{1/3}\,d_1(u^n,u^{n,\varepsilon}),
\]

and

\[
0 \le J^n(u^n,\xi^{n,\varepsilon}) - J^n(u^n,\xi^n) + \varepsilon_n^{1/3}\,d_2(\xi^n,\xi^{n,\varepsilon}).
\]

Using the definitions of $d_1$ and $d_2$, it follows that

\[
0 \le J^n(u^{n,\varepsilon},\xi^n) - J^n(u^n,\xi^n) + \varepsilon_n^{1/3}\,M_1\,\varepsilon,
\tag{3.15}
\]

and

\[
0 \le J^n(u^n,\xi^{n,\varepsilon}) - J^n(u^n,\xi^n) + \varepsilon_n^{1/3}\,M_2\,\varepsilon,
\tag{3.16}
\]

where $M_i$ ($i = 1,2$) are positive constants. Starting from inequalities (3.15) and (3.16), we use the same method as in subsection 3.3 of [2] to obtain (3.12) and (3.13), respectively.
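
On a discretized control pair, the two perturbations used in the proof above take the following concrete form (the paths, the value a, the time t_0 and ε are illustrative placeholders):

```python
import numpy as np

T, N = 1.0, 200
t = np.linspace(0.0, T, N + 1)

u = np.where(t < 0.5, 0.2, -0.2)                 # u^n: absolutely continuous part
xi = np.maximum(0.0, t - 0.3)                    # xi^n: nondecreasing, xi_0 = 0
eta = 0.5 * t                                    # another singular control in U_2

a, t0, eps = 1.0, 0.4, 0.05

# Spike variation of the absolutely continuous part on [t0, t0 + eps].
u_spike = u.copy()
u_spike[(t >= t0) & (t <= t0 + eps)] = a

# Convex perturbation of the singular part: xi + eps * (eta - xi).
xi_convex = xi + eps * (eta - xi)                # still nondecreasing, starts at 0

print("u changed on", int(np.sum(u_spike != u)), "grid points")
print("xi_convex is nondecreasing:", bool(np.all(np.diff(xi_convex) >= -1e-12)))
```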

We use a transformation that makes it possible to apply Krylov's estimate for diffusion processes. Define the shifted coefficients $\bar b:[0,T]\times\mathbb R^d\times A_1\to\mathbb R^d$, $\bar b^n:[0,T]\times\mathbb R^d\times A_1\to\mathbb R^d$, $\bar\sigma:[0,T]\times\mathbb R^d\to\mathbb R^{d\times d}$ and $\bar\sigma^n:[0,T]\times\mathbb R^d\to\mathbb R^{d\times d}$ by

\[
\begin{aligned}
\bar b(t,x,a) &= b\Big(t,\ x+\int_0^t G_s\,d\xi_s,\ a\Big), &
\bar b^n(t,x,a) &= b^n\Big(t,\ x+\int_0^t G_s\,d\xi_s,\ a\Big),\\
\bar\sigma(t,x) &= \sigma\Big(t,\ x+\int_0^t G_s\,d\xi_s\Big), &
\bar\sigma^n(t,x) &= \sigma^n\Big(t,\ x+\int_0^t G_s\,d\xi_s\Big).
\end{aligned}
\]

Let $z$ be the unique solution of

\[
\begin{cases}
dz_t = \bar b(t,z_t,u_t)\,dt + \bar\sigma(t,z_t)\,dB_t,\\
z_0 = \zeta.
\end{cases}
\tag{3.17}
\]

This implies that $x_t = z_t + \int_0^t G_s\,d\xi_s$ solves the SDE (2.1).

Similarly, let $z^n$ be the unique solution of

\[
\begin{cases}
dz^n_t = \bar b^n(t,z^n_t,u_t)\,dt + \bar\sigma^n(t,z^n_t)\,dB_t,\\
z^n_0 = \zeta.
\end{cases}
\tag{3.18}
\]

Then $x^n_t = z^n_t + \int_0^t G_s\,d\xi_s$ solves the SDE (3.9).

Note that $\bar b$, $\bar b^n$, $\bar\sigma^j$ and $\bar\sigma^{j,n}$ ($j = 1,\dots,d$) are measurable, bounded functions, Lipschitz continuous in $x$ with constant $M$. We conclude that the generalized derivatives (in the sense of distributions) $\bar b_x$, $\bar b^n_x$, $\bar\sigma^j_x$ and $\bar\sigma^{j,n}_x$ ($j = 1,\dots,d$) are well defined.
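
Numerically, this change of variables is just a pathwise shift: simulating z with the shifted coefficients and the same Brownian increments, then adding back ∫_0^t G_s dξ_s, reproduces the Euler trajectory of x exactly. A small one-dimensional sketch with placeholder coefficients and controls:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1.0, 400
dt = T / N
t = np.linspace(0.0, T, N + 1)

b = lambda s, x, u: -x + u
sig = lambda s, x: 0.5 + 0.1 * np.cos(x)
G = 1.0
u = np.where(t < 0.5, 0.2, -0.2)
xi = np.where(t <= 0.3, 0.0, 0.4) + 0.2 * np.maximum(0.0, t - 0.7)   # nondecreasing
dxi = np.diff(xi)
shift = np.concatenate(([0.0], np.cumsum(G * dxi)))                  # int_0^t G dxi

dB = rng.normal(0.0, np.sqrt(dt), N)

# Direct Euler scheme for x (equation (2.1)).
x = np.zeros(N + 1)
for i in range(N):
    x[i + 1] = x[i] + b(t[i], x[i], u[i]) * dt + sig(t[i], x[i]) * dB[i] + G * dxi[i]

# Euler scheme for z with the shifted coefficients (equation (3.17)).
z = np.zeros(N + 1)
for i in range(N):
    bbar = b(t[i], z[i] + shift[i], u[i])       # bbar(t, z) = b(t, z + int G dxi)
    sigbar = sig(t[i], z[i] + shift[i])
    z[i + 1] = z[i] + bbar * dt + sigbar * dB[i]

print("max |x - (z + shift)| =", np.max(np.abs(x - (z + shift))))
```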

Lemma 3.5. The following estimates hold:

\[
\lim_{n\to+\infty} E\Big[\sup_{0\le t\le T}|x^n_t - x_t|^2\Big] = 0,
\tag{3.19}
\]

\[
\lim_{n\to+\infty} E\Big[\sup_{t\le s\le T}|\Phi^n(s,t) - \Phi(s,t)|^2\Big] = 0,
\tag{3.20}
\]

\[
\lim_{n\to+\infty} E\Big[\sup_{0\le t\le T}|p^n_t - p_t|^2\Big] = 0,
\tag{3.21}
\]

\[
\lim_{n\to+\infty} E\big[|H^n(t,x^n_t,u^n_t,p^n_t) - H(t,x_t,u_t,p_t)|\big] = 0,
\tag{3.22}
\]

where $\Phi$, $p_t$ and $H$ are determined, respectively, by the solution of (3.6), the adjoint process (3.2) and the associated Hamiltonian (3.5), corresponding to the optimal state process $x_t$; $\Phi^n$, $p^n_t$ and $H^n$ are determined, respectively, by the solution of (3.10), the adjoint process (3.11) and the associated Hamiltonian (3.14), corresponding to the approximating sequence $x^n_t$ given by (3.9).

Proof. In what follows, $C$ represents a generic constant, which can differ from line to line.

By squaring, taking expectations and using the Burkholder-Davis-Gundy inequality, we get

\[
E\Big[\sup_{0\le t\le T}|x^n_t - x_t|^2\Big] \le C\Big(A^n_1 + A^n_2 + A^n_3 + M\,\big(d_2(\xi^n,\xi)\big)^2\Big),
\]

where $M$ is a positive constant and

\[
\begin{aligned}
A^n_1 &= E\Big[\int_0^t |b^n(s,x^n_s,u^n_s) - b^n(s,x^n_s,u_s)|^2\,\chi_{\{u^n\ne u\}}(s)\,ds\Big],\\
A^n_2 &= E\Big[\int_0^t |b^n(s,x^n_s,u_s) - b^n(s,x_s,u_s)|^2 + |\sigma^n(s,x^n_s) - \sigma^n(s,x_s)|^2\,ds\Big],\\
A^n_3 &= E\Big[\int_0^t |b^n(s,x_s,u_s) - b(s,x_s,u_s)|^2 + |\sigma^n(s,x_s) - \sigma(s,x_s)|^2\,ds\Big].
\end{aligned}
\]

By the boundedness of the coefficient $b^n$ and the fact that $d_1(u^n,u)\to 0$ as $n\to+\infty$, we have $\lim_{n\to\infty} A^n_1 = 0$. Since $b^n$ and $\sigma^n$ are Lipschitz in the state variable,

\[
A^n_2 \le C\,E\Big[\int_0^t \sup_{0\le r\le s}|x^n_r - x_r|^2\,ds\Big].
\]

Finally, we conclude from Lemma 3.2 that $\lim_{n\to+\infty} A^n_3 = 0$. Then, by Gronwall's lemma, we obtain (3.19).

Again, using standard arguments based on the Burkholder-Davis-Gundy and Schwarz inequalities and Gronwall's lemma, we easily check that

\[
E\Big[\sup_{t\le s\le T}|\Phi^n(s,t) - \Phi(s,t)|^2\Big] \le
C\,E\Big[\sup_{t\le s\le T}|\Phi^n(s,t)|^4\Big]^{\frac12}
\Bigg\{E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b_x(t,x_t,u_t)|^4\,dt\Big]^{\frac12}
+ \sum_{1\le j\le d} E\Big[\int_0^T |\sigma^{j,n}_x(t,x^n_t) - \sigma^j_x(t,x_t)|^4\,dt\Big]^{\frac12}\Bigg\}.
\]

Since the coefficients of the linear stochastic differential equation (3.10) are bounded, it is easy to see that $E\big[\sup_{t\le s\le T}|\Phi^n(s,t)|^4\big] < +\infty$. To obtain the desired result it is sufficient to prove that

\[
\lim_{n\to+\infty} E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b_x(t,x_t,u_t)|^4\,dt\Big] = 0,
\]

\[
\lim_{n\to+\infty} E\Big[\int_0^T |\sigma^{j,n}_x(t,x^n_t) - \sigma^j_x(t,x_t)|^4\,dt\Big] = 0, \quad j = 1,\dots,d.
\]

We have

\[
E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b_x(t,x_t,u_t)|^4\,dt\Big] \le C\,(I^n_1 + I^n_2),
\]

where

\[
\begin{aligned}
I^n_1 &= E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b^n_x(t,x^n_t,u_t)|^4\,\chi_{\{u^n\ne u\}}(t)\,dt\Big],\\
I^n_2 &= E\Big[\int_0^T |b^n_x(t,x^n_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big].
\end{aligned}
\]


First, in view of the boundedness of the derivative $b^n_x$ by the Lipschitz constant and the fact that $d_1(u^n,u)\to 0$ as $n\to+\infty$, we obtain $\lim_{n\to+\infty} I^n_1 = 0$. Next, let $k\ge 1$ be a fixed integer; we then get

\[
\lim_{n\to+\infty} I^n_2 \le \lim_n C\,\{J^n_1 + J^n_2 + J^n_3\},
\]

where

\[
\begin{aligned}
J^n_1 &= E\Big[\int_0^T |b^n_x(t,x^n_t,u_t) - b^k_x(t,x^n_t,u_t)|^4\,dt\Big],\\
J^n_2 &= E\Big[\int_0^T |b^k_x(t,x^n_t,u_t) - b^k_x(t,x_t,u_t)|^4\,dt\Big],\\
J^n_3 &= E\Big[\int_0^T |b^k_x(t,x_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big].
\end{aligned}
\]

Now, let $z$ (resp. $z^n$) denote the unique solution of the SDE (3.17) (resp. (3.18)) corresponding to $(u,\xi)$ (resp. $(u^n,\xi^n)$); then it holds that

\[
J^n_1 = E\Big[\int_0^T |\bar b^n_x(t,z^n_t,u_t) - \bar b^k_x(t,z^n_t,u_t)|^4\,dt\Big],
\]

and

\[
J^n_3 = E\Big[\int_0^T |\bar b^k_x(t,z_t,u_t) - \bar b_x(t,z_t,u_t)|^4\,dt\Big].
\]

Arguing as in [20], page 87, let $w(t,x)$ be a continuous function such that $w(t,x) = 0$ if $t^2 + x^2 \ge 1$ and $w(0,0) = 1$. Then for $M > 0$ we have

\[
\lim_n J^n_1 \le C\,E\Big[\int_0^T \Big(1 - w\Big(\frac{t}{M},\frac{z_t}{M}\Big)\Big)\,dt\Big]
+ C\,\lim_n E\Big[\int_0^T w\Big(\frac{t}{M},\frac{z_t}{M}\Big)\,|\bar b^n_x(t,z^n_t,u_t) - \bar b^k_x(t,z^n_t,u_t)|^4\,dt\Big].
\]

Therefore, without loss of generality we may suppose that for all $n\in\mathbb N^*$ the functions $\bar b_x$, $\bar\sigma_x$, $\bar b^n_x$ and $\bar\sigma^n_x$ have compact support in $[0,T]\times B(0,M)$. Since the diffusion matrix $\bar\sigma^n$ satisfies the non-degeneracy condition with the same constant as $\bar\sigma$, by applying Krylov's inequality we obtain

\[
\lim_n J^n_1 \le C\,E\Big[\int_0^T \Big(1 - w\Big(\frac{t}{M},\frac{z_t}{M}\Big)\Big)\,dt\Big]
+ C\,\lim_n \Big\|\sup_{a\in A_1}|\bar b^n_x(t,x,a) - \bar b^k_x(t,x,a)|^4\Big\|_{d+1,M}.
\]

Since $b^n_x$ converges to $b_x$ $dx$-a.e., it is simple to see that $\bar b^n_x$ converges to $\bar b_x$ $dx$-a.e. and

\[
\lim_n \Big\|\sup_{a\in A_1}|\bar b^n_x(t,x,a) - \bar b^k_x(t,x,a)|^4\Big\|_{d+1,M} = 0.
\]


Next, let $M$ go to $+\infty$; then from the properties of the function $w(t,x)$ we have $\lim_n J^n_1 = 0$. Estimating $J^n_3$ similarly, it holds that $\lim_n J^n_3 = 0$. For $J^n_2$ we use the continuity of $b^k_x$ in $x$: from (3.19) and the dominated convergence theorem we deduce that $\lim_n J^n_2 = 0$. Hence $\lim_{n\to+\infty} I^n_2 = 0$. Using the same technique, we prove that

\[
\lim_{n\to+\infty} E\Big[\int_0^T |\sigma^{j,n}_x(t,x^n_t) - \sigma^j_x(t,x_t)|^4\,dt\Big] = 0, \quad j = 1,\dots,d.
\]

Now, let us prove that $\lim_{n\to+\infty} E\big[\sup_{0\le t\le T}|p^n_t - p_t|^2\big] = 0$. Clearly,

\[
E\big[|p^n_t - p_t|^2\big] \le C\,(\Lambda^n_1 + \Lambda^n_2),
\tag{3.23}
\]

where

\[
\Lambda^n_1 = E\Big[\int_t^T |\Phi^{n,*}(s,t)\,f^n_x(s,x^n_s,u^n_s) - \Phi^*(s,t)\,f_x(s,x_s,u_s)|^2\,ds\Big],
\]

and

\[
\Lambda^n_2 = E\big[|\Phi^{n,*}(T,t)\,g^n_x(x^n_T) - \Phi^*(T,t)\,g_x(x_T)|^2\big].
\]

Since $f_x$ is bounded by the Lipschitz constant $M$, applying the Schwarz inequality we get

\[
\Lambda^n_1 \le C\,E\Big[\sup_{t\le s\le T}|\Phi^{n,*}(s,t)|^4\Big]^{\frac12}
E\Big[\int_0^T |f^n_x(s,x^n_s,u^n_s) - f_x(s,x_s,u_s)|^4\,ds\Big]^{\frac12}
+ C\,M\,E\Big[\sup_{t\le s\le T}|\Phi^{n,*}(s,t) - \Phi^*(s,t)|^2\Big].
\]

Hence, by the continuity and boundedness of the derivatives $f^n_x$, $f_x$, relations (3.19) and (3.20), the fact that $d_1(u^n,u)\to 0$ as $n\to\infty$, together with Krylov's inequality and the dominated convergence theorem applied to the term involving $f^n_x(s,x^n_s,u^n_s) - f_x(s,x_s,u_s)$, we get, sending $n$ to infinity, $\lim_{n\to+\infty}\Lambda^n_1 = 0$.

On the other hand, since $g_x$ is bounded by the Lipschitz constant, applying the Schwarz inequality we get

\[
\Lambda^n_2 \le C\,\big\{E\big[|\Phi^{n,*}(T,t)|^4\big]\big\}^{\frac12}\,\big\{E\big[|g^n_x(x^n_T) - g_x(x_T)|^4\big]\big\}^{\frac12}
+ C\,M\,E\big[|\Phi^{n,*}(T,t) - \Phi^*(T,t)|^2\big].
\]

Since $g^n_x$ and $g_x$ are bounded by the Lipschitz constant and $g^n_x$ converges to $g_x$, we conclude by (3.19) and the dominated convergence theorem that

\[
\lim_{n\to+\infty} E\big[|g^n_x(x^n_T) - g_x(x_T)|^4\big] = 0.
\]

From (3.23), using the Burkholder-Davis-Gundy inequality, we obtain (3.21).


The Schwarz inequality gives

\[
\begin{aligned}
E\big[|H^n(t,x^n_t,u^n_t,p^n_t) - H(t,x_t,u_t,p_t)|\big] \le{}&
\big\{E|p^n_t - p_t|^2\big\}^{\frac12}\,\big\{E|b^n(t,x^n_t,u^n_t)|^2\big\}^{\frac12}\\
&+ \big\{E|b^n(t,x^n_t,u^n_t) - b(t,x_t,u_t)|^2\big\}^{\frac12}\,\big\{E|p_t|^2\big\}^{\frac12}
+ E|f^n(t,x^n_t,u^n_t) - f(t,x_t,u_t)|.
\end{aligned}
\]

Lemma 3.2 and (3.21) imply that the first expression on the right-hand side converges to $0$ as $n\to+\infty$.

Next,

\[
E|b^n(t,x^n_t,u^n_t) - b(t,x_t,u_t)|^2 \le C\,(\gamma^n_1 + \gamma^n_2 + \gamma^n_3),
\]

where

\[
\begin{aligned}
\gamma^n_1 &= E\big[|b^n(t,x^n_t,u^n_t) - b^n(t,x^n_t,u_t)|^2\,\chi_{\{u^n\ne u\}}(t)\big],\\
\gamma^n_2 &= E\big[|b^n(t,x^n_t,u_t) - b^n(t,x_t,u_t)|^2\big],\\
\gamma^n_3 &= E\big[|b^n(t,x_t,u_t) - b(t,x_t,u_t)|^2\big].
\end{aligned}
\]

The boundedness of $b^n$ and the fact that $d_1(u^n,u)\to 0$ as $n\to\infty$ guarantee the convergence of $\gamma^n_1$ to $0$ as $n\to+\infty$. By virtue of (3.19) and the dominated convergence theorem we get $\lim_{n\to+\infty}\gamma^n_2 = 0$. In view of Lemma 3.2, we have $\lim_{n\to+\infty}\gamma^n_3 = 0$. The term $E|f^n(t,x^n_t,u^n_t) - f(t,x_t,u_t)|$ can be treated by the same technique.

Proof of Theorem 3.1. Letting $n$ go to $+\infty$, from Proposition 3.4 and Lemma 3.5 we get

\[
E\big[H(t,x_t,v,p_t) - H(t,x_t,u_t,p_t)\big] \ge 0, \quad dt\text{-a.e., } P\text{-a.s.},
\]

\[
E\int_0^T (k_t + G_t^*\,p_t)\,d(\eta-\xi)_t \ge 0,
\]

for every $A_1$-valued $\mathcal F_t$-measurable random variable $v$ and every $\eta\in\mathcal U_2$.

Let $a\in A_1$; then for every $A_t\in\mathcal F_t$,

\[
E\big[(H(t,x_t,a,p_t) - H(t,x_t,u_t,p_t))\,\chi_{A_t}\big] \ge 0, \quad dt\text{-a.e., } P\text{-a.s.},
\]

which implies that

\[
E\big[H(t,x_t,a,p_t) - H(t,x_t,u_t,p_t)\ \big|\ \mathcal F_t\big] \ge 0.
\]

Since $H(t,x_t,a,p_t) - H(t,x_t,u_t,p_t)$ is $\mathcal F_t$-measurable, the first variational inequality (3.3) without expectations follows immediately.

4. The degenerate case. In this section we drop the uniform ellipticity condition on the diffusion matrix. It is clear that the method used in the last section is no longer valid. To overcome this difficulty, the idea is to use a result of Bouleau and Hirsch [9] on the differentiability, in the sense of distributions, of the solution of an SDE with Lipschitz coefficients with respect to the initial data. This derivative is defined as the solution of a linear stochastic differential equation defined on an extension of the initial probability space.


Let $h$ be a continuous positive function on $\mathbb R^d$ such that $\int h(x)\,dx = 1$ and $\int |x|^2 h(x)\,dx < \infty$. We set

\[
\mathcal D = \Big\{ f\in L^2(h\,dx)\ \text{such that}\ \frac{\partial f}{\partial x_j}\in L^2(h\,dx)\Big\},
\]

where $\dfrac{\partial f}{\partial x_j}$ denotes the derivative in the distribution sense. Equipped with the norm

\[
\|f\|_{\mathcal D} = \Bigg[\int_{\mathbb R^d} f^2\,h\,dx + \sum_{1\le j\le d}\int_{\mathbb R^d}\Big(\frac{\partial f}{\partial x_j}\Big)^2 h\,dx\Bigg]^{\frac12},
\]

$\mathcal D$ is a Hilbert space, which is a classical Dirichlet space (see [9]). Moreover, $\mathcal D$ is a subset of the Sobolev space $H^1_{loc}(\mathbb R^d)$.

Let $\widetilde\Omega = \mathbb R^d\times\Omega$, let $\widetilde{\mathcal F}$ be the Borel $\sigma$-field over $\widetilde\Omega$ and $\widetilde P = h\,dx\otimes P$. Let $\widetilde B_t(x,w) = B_t(w)$ and let $\widetilde{\mathcal F}_t$ be the natural filtration of $\widetilde B_t$, augmented with the $\widetilde P$-negligible sets of $\widetilde{\mathcal F}$. It is clear that $\widetilde B_t$ is a Brownian motion on $(\widetilde\Omega,\widetilde{\mathcal F},(\widetilde{\mathcal F}_t)_{t\ge 0},\widetilde P)$. We introduce the process $\tilde x_t$ defined on the enlarged space $(\widetilde\Omega,\widetilde{\mathcal F},(\widetilde{\mathcal F}_t)_{t\ge 0},\widetilde P,\widetilde B_t)$, which is the solution of the stochastic differential equation

\[
\begin{cases}
d\tilde x_t = b(t,\tilde x_t,\tilde u_t)\,dt + \sigma(t,\tilde x_t)\,d\widetilde B_t + G_t\,d\tilde\xi_t, & t\in[0,T],\\
\tilde x_0 = \zeta,
\end{cases}
\tag{4.1}
\]

associated to the control $(\tilde u_t,\tilde\xi_t)(x,\omega) = (u_t,\xi_t)(\omega)$.

Since the coefficients are Lipschitz continuous and bounded, equation (4.1) has a unique $\widetilde{\mathcal F}_t$-adapted solution. Equations (2.1) and (4.1) are almost the same, except that uniqueness of the solution of (4.1) is slightly weaker; one can easily prove that uniqueness implies that for each $t\ge 0$, $\tilde x_t = x_t$, $\widetilde P$-a.s.

4.1. The main result. The main result of this section is stated in the following theorem.

Theorem 4.1 (Stochastic maximum principle). Let $(u,\xi)$ be an optimal control for the controlled system (2.1), (2.2) and let $x$ be the corresponding optimal trajectory. Then there exists a measurable $\mathcal F_t$-adapted process $p_t$ satisfying

\[
p_t := -\widetilde E\left[\int_t^T \Phi^*(s,t)\,f_x(s,x_s,u_s)\,ds + \Phi^*(T,t)\,g_x(x_T)\ \Big|\ \widetilde{\mathcal F}_t\right],
\tag{4.2}
\]

such that for all $a\in A_1$ and $\eta\in\mathcal U_2$,

\[
0 \le H(t,x_t,a,p_t) - H(t,x_t,u_t,p_t) \quad dt\text{-a.e., } \widetilde P\text{-a.s.},
\tag{4.3}
\]

and

\[
0 \le \widetilde E\int_0^T (k_t + G_t^*\,p_t)\,d(\eta-\xi)_t,
\tag{4.4}
\]

where the Hamiltonian $H$ is defined by

\[
H(t,x,u,p) = p\cdot b(t,x,u) - f(t,x,u),
\tag{4.5}
\]

and $\Phi(s,t)$, $s\ge t$, is the fundamental solution of the linear equation

\[
\begin{cases}
d\Phi(s,t) = b_x(s,x_s,u_s)\,\Phi(s,t)\,ds + \sum_{1\le j\le d}\sigma^j_x(s,x_s)\,\Phi(s,t)\,d\widetilde B^j_s, & s\ge t,\\
\Phi(t,t) = I_d.
\end{cases}
\tag{4.6}
\]

Here $*$ denotes the transpose.

4.2. Proof of the main result. Let $\tilde z_t = \tilde x_t - \int_0^t G_s\,d\xi_s$ be the unique solution of the SDE

\[
\begin{cases}
d\tilde z_t = \bar b(t,\tilde z_t,u_t)\,dt + \bar\sigma(t,\tilde z_t)\,d\widetilde B_t,\\
\tilde z_0 = \zeta,
\end{cases}
\tag{4.7}
\]

on the enlarged space $(\widetilde\Omega,\widetilde{\mathcal F},(\widetilde{\mathcal F}_t)_{t\ge 0},\widetilde P,\widetilde B_t)$, where $\bar b$ and $\bar\sigma$ are defined in subsection 3.2.

Theorem 4.2 (The Bouleau-Hirsch flow property). For $\widetilde P$-almost every $w$:

(1) For all $t\ge 0$, $\tilde z_t$ is in $\mathcal D^d$.

(2) There exists an $\widetilde{\mathcal F}_t$-adapted $GL_d(\mathbb R)$-valued continuous process $(\widetilde\Phi_t)_{t\ge 0}$ such that for every $t\ge 0$

\[
\frac{\partial}{\partial x}\,\tilde z_t(x,w) = \widetilde\Phi_t(x,w) \quad dx\text{-a.e.},
\]

where $\dfrac{\partial}{\partial x}$ denotes the derivative in the distribution sense.

(3) The distributional derivative $\widetilde\Phi$ is the unique fundamental solution of the linear stochastic differential equation

\[
\begin{cases}
d\widetilde\Phi(s,t) = \bar b_x(s,\tilde z_s,\tilde u_s)\,\widetilde\Phi(s,t)\,ds + \sum_{1\le j\le d}\bar\sigma^j_x(s,\tilde z_s)\,\widetilde\Phi(s,t)\,d\widetilde B^j_s, & s\ge t,\\
\widetilde\Phi(t,t) = I_d,
\end{cases}
\tag{4.8}
\]

where $\bar b_x$ and $\bar\sigma^j_x$ are versions of the almost-everywhere derivatives of $\bar b$ and $\bar\sigma^j$.

(4) The image measure of $\widetilde P$ under the map $\tilde z_t$ is absolutely continuous with respect to the Lebesgue measure.

Now, consider the process $y_t$, $t\ge 0$, the $\mathbb R^d$-valued solution, defined on the enlarged probability space $(\widetilde\Omega,\widetilde{\mathcal F},(\widetilde{\mathcal F}_t)_{t\ge 0},\widetilde P,\widetilde B_t)$, of

\[
\begin{cases}
dy_t = b^n(t,y_t,u_t)\,dt + \sigma^n(t,y_t)\,d\widetilde B_t + G_t\,d\xi_t,\\
y_0 = \zeta,
\end{cases}
\tag{4.9}
\]

and define the cost functional

\[
J^n(u,\xi) = \widetilde E\left[\int_0^T f^n(t,y_t,u_t)\,dt + \int_0^T k_t\,d\xi_t + g^n(y_T)\right],
\tag{4.10}
\]


where $b^n$, $\sigma^n$, $f^n$ and $g^n$ are the regularized functions of $b$, $\sigma$, $f$ and $g$.

The following result gives the estimates which relate the original control problem with the perturbed ones.

Lemma 4.3. Let $(x_t)$ and $(y_t)$ be the solutions of (2.1) and (4.9), respectively, corresponding to an admissible control $(u,\xi)$. Then

(1) $\widetilde E\big[\sup_{0\le t\le T}|x_t - y_t|^2\big] \le M_1\,(\rho_n)^2$,

(2) $|J^n(u,\xi) - J(u,\xi)| \le M_2\,\rho_n$,

where $\rho_n = \dfrac{C}{n}$, and $M_1$ and $M_2$ are positive constants.

Let $(u,\xi)$ be an optimal control for the initial problem (2.1), (2.2). Note that $(u,\xi)$ is not necessarily optimal for the perturbed control problem (4.9), (4.10). However, according to Lemma 4.3 there exists a sequence $(\varepsilon_n) = (2M_2\,\rho_n)$ of positive real numbers converging to $0$ such that

\[
J^n(u,\xi) \le \inf_{(v,\eta)\in\mathcal U} J^n(v,\eta) + \varepsilon_n.
\]

The functional $J^n$ defined by (4.10) is continuous on $\mathcal U = \mathcal U_1\times\mathcal U_2$ with respect to the topology induced by the metric $d'\big((u,\xi),(v,\eta)\big) = d'_1(u,v) + d'_2(\xi,\eta)$, where

\[
d'_1(u,v) = \widetilde P\otimes dt\,\big\{(w,t)\in\widetilde\Omega\times[0,T]:\ v(w,t)\ne u(w,t)\big\},
\qquad
d'_2(\xi,\eta) = \Big(\widetilde E\Big[\sup_{0\le t\le T}|\xi_t-\eta_t|^2\Big]\Big)^{\frac12}.
\]

Then, applying Ekeland's variational principle to $J^n$ for $(u,\xi)$ with $\lambda_n = \varepsilon_n^{2/3}$, there exists an admissible control $(u^n,\xi^n)$ such that

\[
d'\big((u,\xi),(u^n,\xi^n)\big) \le \varepsilon_n^{2/3},
\qquad
J^n_\lambda(u^n,\xi^n) \le J^n_\lambda(v,\eta) \quad \text{for any } (v,\eta)\in\mathcal U,
\]

and $(u^n,\xi^n)$ is an optimal control for the perturbed system (4.9) with the new cost functional

\[
J^n_\lambda(v,\eta) = J^n(v,\eta) + \varepsilon_n^{1/3}\,d'\big((v,\eta),(u^n,\xi^n)\big).
\]

Denote by $x^n$ the unique solution of (4.9) corresponding to $(u^n,\xi^n)$:

\[
\begin{cases}
dx^n_t = b^n(t,x^n_t,u^n_t)\,dt + \sigma^n(t,x^n_t)\,d\widetilde B_t + G_t\,d\xi^n_t,\\
x^n_0 = \zeta.
\end{cases}
\tag{4.11}
\]

The controlled process $z^n_t$, given by $dz^n_t = dx^n_t - G_t\,d\xi^n_t$, is then defined as the solution to the stochastic differential equation

\[
\begin{cases}
dz^n_t = \bar b^n(t,z^n_t,u^n_t)\,dt + \bar\sigma^n(t,z^n_t)\,d\widetilde B_t,\\
z^n_0 = \zeta,
\end{cases}
\tag{4.12}
\]


where $\bar b^n$ and $\bar\sigma^n$ are defined in subsection 3.2. Let $\Phi^n(s,t)$, $s\ge t$, be the fundamental solution of the linear equation

\[
\begin{cases}
d\Phi^n(s,t) = b^n_x(s,x^n_s,u^n_s)\,\Phi^n(s,t)\,ds + \sum_{1\le j\le d}\sigma^{j,n}_x(s,x^n_s)\,\Phi^n(s,t)\,d\widetilde B^j_s,\\
\Phi^n(t,t) = I_d.
\end{cases}
\tag{4.13}
\]

Proposition 4.4. For each integer $n$, there exist an admissible control $(u^n,\xi^n)$ and an $(\widetilde{\mathcal F}_t)$-adapted process $p^n_t$ given by

\[
p^n_t = -\widetilde E\left[\int_t^T \Phi^{n,*}(s,t)\,f^n_x(s,x^n_s,u^n_s)\,ds + \Phi^{n,*}(T,t)\,g^n_x(x^n_T)\ \Big|\ \widetilde{\mathcal F}_t\right],
\tag{4.14}
\]

and a Lebesgue null set $N$ such that for $t\in N^c$,

\[
\widetilde E\big[H^n(t,x^n_t,a,p^n_t) - H^n(t,x^n_t,u^n_t,p^n_t)\big] \ge -\varepsilon_n^{1/3}\,M_1,
\tag{4.15}
\]

and

\[
\widetilde E\int_0^T (k_t + G_t^*\,p^n_t)\,d(\eta-\xi^n)_t \ge -\varepsilon_n^{1/3}\,M_2,
\tag{4.16}
\]

for all $a\in A_1$ and $\eta\in\mathcal U_2$, where the Hamiltonian $H^n$ is defined by

\[
H^n(t,x,u,p) = p\cdot b^n(t,x,u) - f^n(t,x,u).
\tag{4.17}
\]

Here $*$ denotes the transpose. The proof goes as in section 3.2.

The proof of the main result is based on the following lemma.

Lemma 4.5. The following estimates hold:

\[
\text{(i)}\quad \lim_{n\to+\infty}\widetilde E\Big[\sup_{0\le t\le T}|x^n_t - x_t|^2\Big] = 0,
\tag{4.18}
\]

\[
\text{(ii)}\quad \lim_{n\to+\infty}\widetilde E\Big[\sup_{t\le s\le T}|\Phi^n(s,t) - \Phi(s,t)|^2\Big] = 0,
\tag{4.19}
\]

\[
\text{(iii)}\quad \lim_{n\to+\infty}\widetilde E\Big[\sup_{0\le t\le T}|p^n_t - p_t|^2\Big] = 0,
\tag{4.20}
\]

\[
\text{(iv)}\quad \lim_{n\to+\infty}\widetilde E\big[|H^n(t,x^n_t,u^n_t,p^n_t) - H(t,x_t,u_t,p_t)|\big] = 0,
\tag{4.21}
\]

where $\Phi$, $p_t$ and $H$ are determined by (4.6), (4.2) and (4.5), corresponding to the optimal solution $x_t$; $\Phi^n$, $p^n_t$ and $H^n$ are determined by (4.13), (4.14) and (4.17), corresponding to the approximating sequence $x^n_t$ given by (4.11).

Proof. (i) is proved as (3.19). Let us prove (ii). Using the Burkholder-Davis-Gundy and Schwarz inequalities and Gronwall's lemma, we obtain

\[
\widetilde E\Big[\sup_{t\le s\le T}|\Phi^n(s,t) - \Phi(s,t)|^2\Big] \le
C\,\widetilde E\Big[\sup_{t\le s\le T}|\Phi^n(s,t)|^4\Big]^{\frac12}
\Bigg\{\widetilde E\Big[\int_0^T |b^n_x(t,x^n_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big]^{\frac12}
+ \sum_{1\le j\le d}\widetilde E\Big[\int_0^T |\sigma^{j,n}_x(t,x^n_t) - \sigma^j_x(t,x_t)|^4\,dt\Big]^{\frac12}\Bigg\}.
\]

Since the coefficients of the linear stochastic differential equation (4.13) are bounded, it is easy to see that $\widetilde E\big[\sup_{t\le s\le T}|\Phi^n(s,t)|^4\big] < +\infty$. To derive (4.19), it is sufficient to prove the following two assertions:

\[
\widetilde E\Big[\int_0^T |b^n_x(t,x^n_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big] \to 0 \quad \text{as } n\to+\infty,
\]

and

\[
\widetilde E\Big[\int_0^T |\sigma^{j,n}_x(t,x^n_t) - \sigma^j_x(t,x_t)|^4\,dt\Big] \to 0 \quad \text{as } n\to+\infty, \quad j = 1,2,\dots,d.
\]

Let us prove the first limit. We have

\[
\widetilde E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b_x(t,x_t,u_t)|^4\,dt\Big] \le C\,(I^n_1 + I^n_2 + I^n_3),
\]

where

\[
\begin{aligned}
I^n_1 &= \widetilde E\Big[\int_0^T |b^n_x(t,x^n_t,u^n_t) - b^n_x(t,x^n_t,u_t)|^4\,\chi_{\{u^n\ne u\}}(t)\,dt\Big],\\
I^n_2 &= \widetilde E\Big[\int_0^T |b^n_x(t,x^n_t,u_t) - b_x(t,x^n_t,u_t)|^4\,dt\Big],\\
I^n_3 &= \widetilde E\Big[\int_0^T |b_x(t,x^n_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big].
\end{aligned}
\]

According to the boundedness of the derivative $b^n_x$ by the Lipschitz constant and the fact that $d'_1(u^n,u)\to 0$ as $n\to+\infty$, we obtain $\lim_{n\to+\infty} I^n_1 = 0$.

Moreover, we have

\[
I^n_2 \le \widetilde E\Big[\int_0^T \sup_{a\in A_1}|\bar b^n_x(t,z^n_t,a) - \bar b_x(t,z^n_t,a)|^4\,dt\Big]
= \int_0^T\!\!\int_{\mathbb R^d}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy\,dt,
\]

where $z^n_t$ denotes the unique solution of the SDE (3.18) corresponding to $(u^n,\xi^n)$, and $q^n_t(y)$ its density with respect to the Lebesgue measure. Let us show that

\[
\lim_{n\to+\infty}\int_{\mathbb R^d}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy = 0.
\]

For each $p>0$, $\widetilde E\big[\sup_{0\le t\le T}|z^n_t|^p\big] < +\infty$. Thus $\lim_{R\to\infty}\widetilde P\big[\sup_{0\le t\le T}|z^n_t| > R\big] = 0$, and it is enough to show that for every $R>0$,

\[
\lim_{n\to+\infty}\int_{B(0,R)}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy = 0.
\]

According to Lemma 3.2, it is easy to see that

\[
\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4
= \sup_{a\in A_1}\Big|b^n_x\Big(t,\ y+\int_0^T G_t\,d\xi^n_t,\ a\Big) - b_x\Big(t,\ y+\int_0^T G_t\,d\xi^n_t,\ a\Big)\Big|^4
\longrightarrow 0 \quad dy\text{-a.e.},
\]

at least for a subsequence. Then, by Egorov's theorem, for every $\delta>0$ there exists a measurable set $F$ with $\mu(F)<\delta$ such that $\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|$ converges uniformly to $0$ on the set $F^c$. Note that, since the Lebesgue measure is regular, $F$ may be chosen closed. This implies that

\[
\lim_{n\to+\infty}\int_{F^c}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy
\le \lim_{n\to+\infty}\Big(\sup_{y\in F^c}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\Big) = 0.
\]

Now, by using the boundedness of the derivatives $\bar b^n_x$, $\bar b_x$ we have

\[
\int_F\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy
= \widetilde E\Big[\sup_{a\in A_1}|\bar b^n_x(t,z^n_t,a) - \bar b_x(t,z^n_t,a)|^4\,\chi_{\{z^n_t\in F\}}\Big]
\le 2M^4\,\widetilde P(z^n_t\in F).
\]

According to (4.18), it is easy to see that $z^n_t = x^n_t - \int_0^t G_s\,d\xi^n_s$ converges to $z_t = x_t - \int_0^t G_s\,d\xi_s$ in probability, hence in distribution. Applying the Portmanteau-Alexandrov theorem, we obtain

\[
\lim_n\int_F\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy
\le 2M^4\,\limsup_n\,\widetilde P(z^n_t\in F)
\le 2M^4\,\widetilde P(z_t\in F)
= 2M^4\int_F q_t(y)\,dy,
\]

where $q_t(y)$ denotes the density of $z_t$ with respect to the Lebesgue measure; since $\mu(F)<\delta$ and $q_t$ is integrable, the last integral can be made arbitrarily small by choosing $\delta$ small enough. Now, since

\[
\int_{B(0,R)}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy
= \int_F\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy
+ \int_{F^c}\sup_{a\in A_1}|\bar b^n_x(t,y,a) - \bar b_x(t,y,a)|^4\,q^n_t(y)\,dy,
\]

we get $\lim_{n\to+\infty} I^n_2 = 0$.

Let $k\ge 0$ be a fixed integer; then it holds that $I^n_3 \le C\,(J^k_1 + J^k_2 + J^k_3)$, where

\[
\begin{aligned}
J^k_1 &= \widetilde E\Big[\int_0^T |b_x(t,x^n_t,u_t) - b^k_x(t,x^n_t,u_t)|^4\,dt\Big],\\
J^k_2 &= \widetilde E\Big[\int_0^T |b^k_x(t,x^n_t,u_t) - b^k_x(t,x_t,u_t)|^4\,dt\Big],\\
J^k_3 &= \widetilde E\Big[\int_0^T |b^k_x(t,x_t,u_t) - b_x(t,x_t,u_t)|^4\,dt\Big].
\end{aligned}
\]

Applying the same arguments used for the first limit (the Egorov and Portmanteau-Alexandrov theorems), we obtain $\lim_{n\to+\infty} J^k_1 = 0$. We use the continuity of $b^k_x$ in $x$ and the convergence in probability of $x^n_t$ to $x_t$ to deduce that $b^k_x(t,x^n_t,u_t)$ converges to $b^k_x(t,x_t,u_t)$ in probability as $n\to+\infty$, and conclude by the dominated convergence theorem that $\lim_{n\to+\infty} J^k_2 = 0$. Finally,

\[
J^k_3 \le \widetilde E\Big[\int_0^T \sup_{a\in A_1}|\bar b^k_x(t,z_t,a) - \bar b_x(t,z_t,a)|^4\,dt\Big]
= \int_0^T\!\!\int_{\mathbb R^d}\sup_{a\in A_1}|\bar b^k_x(t,y,a) - \bar b_x(t,y,a)|^4\,q_t(y)\,dy\,dt.
\]

Since $\bar b^k_x$ and $\bar b_x$ are bounded, using the convergence of $\bar b^k_x$ to $\bar b_x$ and the dominated convergence theorem we get $\lim_{k\to+\infty} J^k_3 = 0$.

(iii) and (iv) are proved by using the same techniques as in (ii) and Lemma 3.5.

Proof of Theorem 4.1. Combine Proposition 4.4 and Lemma 4.5.

REFERENCES

[1] Bahlali, S., Chala, A.: The stochastic maximum principle in optimal control of singular diffusions with nonlinear coefficients. Random Oper. Stochastic Equations 13, 1-10 (2005)
[2] Bahlali, S., Djehiche, B., Mezerdi, B.: The relaxed maximum principle in singular control of diffusions. SIAM J. Control Optim. 46, 427-444 (2007)
[3] Bahlali, K., Djehiche, B., Mezerdi, B.: On the stochastic maximum principle in optimal control of degenerate diffusions with Lipschitz coefficients. Appl. Math. Optim. 56, 364-378 (2007)
[4] Bahlali, S., Mezerdi, B.: A general stochastic maximum principle for singular control problems. Electron. J. Probab. 10, 988-1004 (2005)
[5] Bahlali, K., Mezerdi, B., Ouknine, Y.: The maximum principle in optimal control of a diffusion with nonsmooth coefficients. Stoch. Stoch. Rep. 57, 303-316 (1996)
[6] Beneš, V.E., Shepp, L.A., Witsenhausen, H.S.: Some solvable stochastic control problems. Stochastics 4, 39-83 (1980)
[7] Bensoussan, A.: Lectures on stochastic control. In: Lect. Notes in Math., vol. 972, pp. 1-62. Springer, Berlin (1983)
[8] Bismut, J.M.: An introductory approach to duality in optimal stochastic control. SIAM Rev. 20(1), 62-78 (1978)
[9] Bouleau, N., Hirsch, F.: Sur la propriété du flot d'une équation différentielle stochastique. C. R. Acad. Sci. Paris 306, 421-424 (1988)
[10] Cadenillas, A., Haussmann, U.G.: The stochastic maximum principle for a singular control problem. Stoch. Stoch. Rep. 49, 211-237 (1994)
[11] Cadenillas, A., Karatzas, I.: The stochastic maximum principle for linear convex optimal control with random coefficients. SIAM J. Control Optim. 33(2), 590-624 (1995)
[12] Chow, P.-L., Menaldi, J.-L., Robin, M.: Additive control of stochastic linear systems with finite horizon. SIAM J. Control Optim. 23, 858-899 (1985)
[13] Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
[14] Davis, M.H.A., Norman, A.: Portfolio selection with transaction costs. Math. Oper. Research 15, 676-713 (1990)
[15] Ekeland, I.: Nonconvex minimization problems. Bull. Am. Math. Soc. (N.S.) 1, 443-474 (1979)
[16] Frankowska, H.: The first order necessary conditions for nonsmooth variational and control problems. SIAM J. Control Optim. 22(1), 1-12 (1984)
[17] Haussmann, U.G., Suo, W.: Singular optimal stochastic controls I: Existence. SIAM J. Control Optim. 33, 916-936 (1995)
[18] Haussmann, U.G., Suo, W.: Singular optimal stochastic controls II: Dynamic programming. SIAM J. Control Optim. 33, 937-959 (1995)
[19] Karatzas, I., Shreve, S.: Connections between optimal stopping and stochastic control I: Monotone follower problem. SIAM J. Control Optim. 22, 856-877 (1984)
[20] Krylov, N.V.: Controlled Diffusion Processes. Springer, Berlin (1980)
[21] Kushner, N.J.: Necessary conditions for continuous parameter stochastic optimization problems. SIAM J. Control Optim. 10, 550-565 (1972)
[22] Mezerdi, B.: Necessary conditions for optimality for a diffusion with a non smooth drift. Stochastics 24, 305-326 (1988)
[23] Peng, S.: A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28, 966-979 (1990)
[24] Rockafellar, R.T.: Conjugate convex functions in optimal control problems and the calculus of variations. J. Math. Anal. Appl. 32, 174-222 (1970)
[25] Shreve, S.E., Soner, H.M.: Optimal investment and consumption with transaction costs. Ann. Appl. Probab. 4(3), 609-692 (1994)
[26] Zhou, X.Y.: Stochastic near optimal controls: necessary and sufficient conditions for near optimality. SIAM J. Control Optim. 36(3), 929-947 (1998)
