+ All Categories
Home > Documents > Stochastic hamiltonian dynamical systems

Stochastic hamiltonian dynamical systems

Date post: 19-Dec-2016
Category:
Upload: juan-pablo
View: 215 times
Download: 2 times
Share this document with a friend
58
Vol. 61 (2008) REPORTS ON MATHEMATICAL PHYSICS No. 1 STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS JOAN-ANDREU LA.ZARO-CAMf Departarnento de Ffsica Te6rica, Universidad de Zaragoza, Pedro Cerbuna 12, E-50009 Zaragoza, Spain (e-mail: [email protected]) and JUAN-PABLO ORTEGA Centre National de la Recherche Scientifique, D6partement de Math6matiques de Besanqon, Universit6 de Franche-Comt6, UFR des Sciences et Techniques, 16, route de Gray, F-25030 Besan~on cedex, France (e-mail: Juan-Pablo.Ortega @ univ-fcomte.fr) (Received November 3, 2007) We use the global stochastic analysis tools introduced by R A. Meyer and L. Schwartz to write down a stochastic generalization of the Hamilton equations on a Poisson manifold that, for exact symplectic manifolds, are characterized by a natural critical action principle similar to the one encountered in classical mechanics. Several features and examples in relation with the solution semimartingales of these equations are presented. Keywords: stochastic Hamilton equations, stochastic variational principle, stochastic mechanics. 1. Introduction The generalization of classical mechanics to the context of stochastic dynamics has been an active research subject ever since K. It6 introduced the theory of stochastic differential equations in the 1950s (see e.g. [25, 4, 37, 38, 39, 34, 35, 2, 10, 5, 6], and references therein). The motivations behind some pieces of work related to this field lay in the hope that a suitable stochastic generalization of classical mechanics should provide an explanation of the intrinsically random effects exhibited by quantum mechanics within the context of the theory of diffusions. In other instances the goal is establishing a framework adapted to the handling of mechanical systems subjected to random perturbations or whose parameters are not precisely determined and are hence modeled as realizations of a random variable. Most of the pieces of work in the first category use a class of processes that have a stochastic derivative introduced in [25] and that has been subsequently refined over the years. This derivative can be used to formulate a real valued action and various associated variational principles whose extremals are the processes of interest. [651
Transcript
Page 1: Stochastic hamiltonian dynamical systems

Vol. 61 (2008) REPORTS ON MATHEMATICAL PHYSICS No. 1

S T O C H A S T I C H A M I L T O N I A N D Y N A M I C A L S Y S T E M S

JOAN-ANDREU LA.ZARO-CAMf

Departarnento de Ffsica Te6rica, Universidad de Zaragoza, Pedro Cerbuna 12, E-50009 Zaragoza, Spain

(e-mail: [email protected])

and

JUAN-PABLO ORTEGA

Centre National de la Recherche Scientifique, D6partement de Math6matiques de Besanqon, Universit6 de Franche-Comt6, UFR des Sciences et Techniques,

16, route de Gray, F-25030 Besan~on cedex, France (e-mail: Juan-Pablo.Ortega @ univ-fcomte.fr)

(Received November 3, 2007)

We use the global stochastic analysis tools introduced by R A. Meyer and L. Schwartz to write down a stochastic generalization of the Hamilton equations on a Poisson manifold that, for exact symplectic manifolds, are characterized by a natural critical action principle similar to the one encountered in classical mechanics. Several features and examples in relation with the solution semimartingales of these equations are presented.

Keywords: stochastic Hamilton equations, stochastic variational principle, stochastic mechanics.

1. Introduction

The generalization of classical mechanics to the context of stochastic dynamics has been an active research subject ever since K. It6 introduced the theory of stochastic differential equations in the 1950s (see e.g. [25, 4, 37, 38, 39, 34, 35, 2, 10, 5, 6], and references therein). The motivations behind some pieces of work related to this field lay in the hope that a suitable stochastic generalization of classical mechanics should provide an explanation of the intrinsically random effects exhibited by quantum mechanics within the context of the theory of diffusions. In other instances the goal is establishing a f ramework adapted to the handling of mechanical systems subjected to random perturbations or whose parameters are not precisely determined and are hence modeled as realizations of a random variable.

Most of the pieces of work in the first category use a class of processes that have a stochastic derivative introduced in [25] and that has been subsequently refined over the years. This derivative can be used to formulate a real valued action and various associated variational principles whose extremals are the processes of interest.

[651

Page 2: Stochastic hamiltonian dynamical systems

66 J.-A. L~ZARO-CAMI and J.-P. ORTEGA

The approach followed in this paper is closer to the one introduced in [4] in which the action has its image in the space of real valued processes and the variations are taken in the space of processes with values in the phase space of the system that we are modeling. This paper can be actually seen as a generalization of some of the results in [4] in the following directions:

(i) We make extensive use of the global stochastic analysis tools introduced by P. A. Meyer [23, 24] and L. Schwartz [33] to handle non-Euclidean phase spaces. This feature not only widens the spectrum of systems that can be handled but it is also of paramount importance at the time of reducing them with respect to the symmetries that they may eventually have (see [29]); indeed, the orbit spaces obtained after reduction are generically non-Euclidean, even if the original phase space is.

(ii) The stochastic dynamical components of the system are modeled by continuous semimartingales and are not limited to Brownian motion.

(iii) We handle stochastic Hamiltonian systems on Poisson manifolds and not only on symplectic manifolds.

(iv) The variational principle that we propose in Theorem 4.2 is not just satisfied by the stochastic Hamiltonian equations (as in [4]) but fully characterizes them.

There are various reasons that have led us to consider these generalized Hamil- tonian systems. First, even though the laws that govern the dynamics of classical mechanical systems are, in principle, completely known, the finite precision of experimental measurements yields impossible the estimation of the parameters of a particular given one with total accuracy. Second, the modeling of complex physical systems involves most of the time simplifying assumptions or idealizations of parts of the system, some of which could be included in the description as a stochastic component; this modeling philosophy has been extremely successful in the social sciences [7]. Third, even if the model and the parameters of the system are known with complete accuracy, the solutions of the associated differential equations may be of great complexity and exhibit high sensitivity to the initial conditions hence making the probabilistic treatment and description of the solutions appropriate. Finally, we will see (Section 3.3) how stochastic Hamiltonian modeling of microscopic systems can be used to model dissipation and macroscopic damping.

The paper is structured as follows: in Section 2 we introduce the stochastic Hamilton equations with phase space as a given Poisson manifold and we study some of the fundamental properties of the solution semimartingales like, for instance, the preservation of symplectic leaves or the characterization of the conserved quantities. This section contains a discussion on two notions on nonlinear stability, almost sure Lyapunov stability and stability in probability, that reduce in the deterministic setup to the standard definition of Lyapunov stability. We formulate criteria that generalize to the Hamiltonian stochastic context the standard energy methods to conclude the stability of a Hamiltonian equilibrium using existing conservation laws. More specifically, there are two different natural notions of conserved quantity in the stochastic context that, via a stochastic Dirichlet criterion (Theorem 2.2) allow

Page 3: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 67

one to conclude the different kinds of stability that we have mentioned above. Section 3 contains several examples: in the first one we show how the systems studied by Bismut in [4] fall in the category introduced in Section 2. We also see that a damped oscillator can be described as the average motion of the solution semimartingale of a natural stochastic Hamiltonian system, and that Brownian motion in a manifold is the projection onto the base space of a very simple Hamiltonian stochastic semimartingale defined on the cotangent bundle of the manifold or of its orthonormal frame bundle, depending on the availability or not of a parallelization for the manifold in question. Section 4 is dedicated to showing that the stochastic Hamilton equations are characterized by a critical action principle that generalizes the one found in the treatment of deterministic systems. In order to make this part more readable, the proofs of most of the technical results needed to prove the theorems in this section have been included separately at the end of the paper.

One of the goals of this paper is conveying to the geometric mechanics community the plentitude of global tools available to handle mechanical problems that contain a stochastic component and that do not seem to have been exploited to the full extent of their potential. In order to facilitate the task of understanding the paper to non-probabilists we have included an appendix that provides a self-contained presentation of some major facts in stochastic calculus on manifolds needed for a first comprehension of our results. Those pages are a very short and superficial presentation of a deep and technical field of mathematics so the reader interested in a more complete account is encouraged to check with the references quoted in the appendix and especially with the excellent monograph [12].

Conventions: All the manifolds in this paper are finite dimensional, second-countable, locally compact, and Hausdorff (and hence paracompact).

2. The stochastic Hamilton equations

In this section we present a natural generalization of the standard Hamilton equations in the stochastic context. Even though the arguments gathered in the following paragraphs as motivation for these equations are of formal nature, we will see later on that, as it was already the case for the standard Hamilton equations, they are characterized by a natural variational principle.

We recall that a symplectic manifold is a pair (M, co), where M is a manifold and w ~ f22(M) is a closed nondegenerate two-form on M, that is, dw = 0 and, for every, m ~ M, the map v ~ TraM ~ co(m)(v, .) ~ T*mM is a linear isomorphism between the tangent space TraM to M at m and the cotangent space T*M. Using the nondegeneracy of the symplectic form co, one can associate to each function h ~ C ~ ( M ) a vector field Xh ~ ~ (M) , defined by the equality

iXhCO = dh. (2.1)

We will say that Xh is the Hamiltonian vector field associated to the Hamiltonian function h. The expression (2.1) is referred to as the Hamilton equations.

Page 4: Stochastic hamiltonian dynamical systems

68 J.-A. L/~ZARO-CAMI and J.-E ORTEGA

A Poisson manifold is a pair (M, {-, .}), where M is a manifold and {., .} is a bilinear operation on C ~ ( M ) such that (C~(M), {., .}) is a Lie algebra and {.,-} is a derivation (that is, the Leibniz identity holds) in each argument. The functions in the center C(M) of the Lie algebra (C~(M), {., .}) are called Casimir functions. From the natural isomorphism between derivations on C ~ ( M ) and vector fields on M it follows that each h ~ C ~ ( M ) induces a vector field on M via the expression Xh = {', h}, called the Hamiltonian vector field associated to the Hamiltonian function h. Hamilton's equations ~ = Xh (Z) can be equivalently written in the Poisson bracket form as f = {f ,h} , for any f ~ C~(M) . The derivation property of the Poisson bracket implies that for any two functions f , g ~ C~(M), the value of the bracket {f, g}(z) at an arbitrary point z 6 M (and therefore Xf ( z ) as well), depends on f only through d f ( z ) which allows us to define a contravariant antisymmetric two-tensor B ~ A2(M) by B(z)(ot z, /~z) = {f, g}(z), where d f ( z ) = Otz ~ TzM and rig(z) = ~z ~ TzM. This tensor is called the Poisson tensor of M. The vector bundle map B ~ : T*M --+ T M naturally associated to B is defined by B(z)(ot z, ~z) = (otz, B~(~z)).

We start by rewriting the solutions of the standard Hamilton equations in a form that we will be able to mimic in the stochastic differential equations context. All the necessary prerequisites on stochastic calculus on manifolds can be found in a short review in the appendix at the end of the paper.

PROPOSITION 2.1. Let (M, w) be a symplectic manifold and h ~ C~(M). The smooth curve y : [0, T] -+ M is an integral curve of the Hamiltonian vector field Xh if and only if for any ot ~ f2 (M) and for any t ~ [0, T]

oe = - dh(o)~(oe)) o t'(s)ds, (2.2) I[0,tl

where o) ~ : T*M ~ T M is the vector bundle isomorphism induced by w. More generally, i f M is a Poisson manifold with bracket {., .} then the same result holds with (2.2) replaced by

a = - dh(B~(o0) o y(s)ds, (2.3) ][0,t]

Proof: Since in the symplectic case w ~ = B ~, it suffices to prove (2.3). As (2.3) holds for any t 6 [0, T], we can take derivatives with respect to t on both sides and we obtain the equivalent form

(or (y (t)), ~ (t)) = - (dh (y (t)), B ~ (]/(t)) (a (y (t)))). (2.4)

Let f ~ C~(M) be such that d f ( y ( t ) ) = t~(y(t)). Then (2.4) can be rewritten as

( d f ( y ( t ) ) , ~(t)) = -<dh (y ( t ) ) , B~(y(t))(d f (y(t)))) = {f, h}(y(t)),

which is equivalent to ~ ( t ) = Xh(y(t)) , as required. []

Page 5: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 69

We will now introduce the stochastic Hamilton equations by mimicking in the context of Stratonovich integration the integral expressions (2.2) and (2.3). In the next definition we will use the following notation: let f : M ~ W be a differentiable function that takes values on the vector space W. We define the differential d f : T M ~ W as the map given by d f = P2 o T f , where T f : T M ~ T W = W x W is the tangent map of f and P2 : W x W ~ W is the projection onto the second factor. If W = IR this definition coincides with the usual differential. If {el . . . . . en} is a basis of W and f = ~in___l f i e i then

n d f : ~ i = l d f i ® ei.

DEFINITION 2.1. Let (M,{.,.}) be a Poisson manifold, X : IR+ x f2 ~ V a semimartingale that takes values on the vector space V with X0 = 0, and h • M ~ V* a smooth function. Let {E 1 . . . . . E r } be a basis of V* and h = }--~=1 hiEi" The Hamilton equations with stochastic component X, and Hamiltonian function h are the Stratonovich stochastic differential equation

8I "h = H(X, r ) s x , (2.5)

defined by the Stratonovich operator H(v, z) : TvV ~ TzM given by r

H(v, z)(u) := ~ ( E j, u)Xhj(Z). (2.6) j= l

The dual Stratonovich operator H*(v, z) : T*M ~ To*V of H(v, z) is given by

H*(v, z ) (Otz )=-dh(z ) . B~(z)(Otz). Hence, the results quoted in Appendix 6.4 show that for any ~0 measurable random variable 1-'0, there exists a unique semimartingale 1 -'h such that F h = F0 and a maximal stopping time ~h that solve (2.5), that is, for any oe ~ fl (M),

f ( a , 8 I " h ) = - - J (dh(Bg(ot))(Fh), SX}. (2.7) P

We will refer to I ' h a s the Hamiltonian semimartingale associated to h with the initial condition I'0.

REMARK 2.1. The stochastic component X encodes the random behavior exhibited by the stochastic Hamiltonian system that we are modeling and the Hamiltonian function h specifies how it embeds in its phase space. Unlike the situation encountered in the deterministic setup we allow the Hamiltonian function to be vector valued in order to accommodate higher-dimensional stochastic dynamics.

REMARK 2.2. The generalization of Hamilton's equations proposed in Defini- tion 2.1 by using a Stratonovich operator is inspired by one of the transfer principles presented in [13] to provide stochastic versions of ordinary differential equations. This procedure can be also used to carry out a similar generalization of the equations induced by a Leibniz bracket (see [31]).

REMARK 2.3. Stratonovieh versus It8 integration: At the time of proposing the equations in Definition 2.1 a choice has been made, namely, we have chosen

Page 6: Stochastic hamiltonian dynamical systems

70 J.-A. L/~ZARO-CAMI and J.-E ORTEGA

Stratonovich integration instead of It6 or other kinds of stochastic integration. The option that we took is motivated by the fact that by using Stratonovich integration, most of the geometric features underlying classical deterministic Hamiltonian me- chanics are preserved in the stochastic context (see the next section). Additionally, from the mathematical point of view, this choice is the most economical one in the sense that the classical geometric ingredients of Hamiltonian mechanics plus a noise semimartingale suffice to construct the equations; had we used It6 integration we would have had to provide a Schwartz operator (see Section 6.4) and the construc- tion of such an object via a transfer principle like in [13] involves the choice of a connection.

The use of It6 integration in the modeling of physical phenomena is sometimes preferred because the definition of this integral is not anticipative, that is, it does not assume any knowledge about the behavior of the system in future times. Even though we have used Stratonovich integration to write down our equations, we also share this feature because the equations in Definition 2.1 can be naturally translated to the It5 framework (see Proposition 2.3). This is a particular case of a more general fact since given any Stratonovich stochastic differential equation there always exists an equivalent It6 stochastic differential equation, in the sense that both equations have the same solutions. Note that the converse is in general not true.

2.1. Elementary properties of the stochastic Hamilton equations

PROPOSITION 2.2. Let (M, {., .}) be a Poisson manifold, X : ~ + × ~2---> V a semimartingale that takes values on the vector space V with Xo : 0 and h : M ~ V* a smooth function. Let Fo be a ~o measurable random variable and F h the Hamiltonian semimartingale associated to h with initial condition Fo. Let ~h be the corresponding maximal stopping time. Then, for any stopping time r < ~h, the Hamiltonian semimartingale F h satisfies

f ( r ~ ) - f (Fo h) = ~ f ~ { f , h j } ( F h ) a x j, (2.8) j : l JO

where { h j } j e { 1 ..... r} a n d { x J } j e { 1 ..... r} are the components of h and X with respect to two given dual bases {el . . . . . er) and {E I . . . . . E r) of V and V*, respectively. Expression (2.8) can be rewritten in differential notation as

r

(~f(r h) = ~_,{f, h j} (rh)sX j. j-----1

Proof: It suffices to take a = d f in (2.7). Indeed, by (6.5)

fo ~(af , s r h) = f ( r ~ ) - f(roh).

Page 7: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 71

At the same time

/o -- (dh(B~(df))(Fh), SX) = - ((dhj ® eJ(BtI(df)))(I'h), •X) j = l 0

= ({f, hjI(rh)#, ~X). J= l 0

By the second statement in (6.5) this equals Y~}=I f0~{f, hjI(rh) 8 (f(d, ~X)). Given

that f (d , ~x)= x J - x~, the equality follows. []

REMARK 2.4. Notice that if in Definition 2.1 we take V* = R, h ~ C~(M), and X : 1R+ x f2 ~ IR the deterministic process given by (t,o)) ~ > t, then the stochastic Hamilton equations (2.7) reduce to

f (ot, SF h) = f <=,Xh> (I'h) dt. (2.9)

A straightforward application of (2.8) shows that Fth(w) is necessarily a differentiable curve, for any co 6 f2, and hence the Riemann-Stieltjes integral in the left-hand side of (2.9) reduces, when evaluated at a given co E f2, to a Riemann integral identical to the one in the left-hand side of (2.3), hence proving that (2.9) reduces to the standard Hamilton equations.

Indeed, let Fth0(co)~ M be an arbitrary point in the curve Fth(co), let U be

a coordinate patch around Fth0(co) with coordinates {x I . . . . . xn}, and let x ( t ) = ( x l ( t ) . . . . . xn(t)) be the expression of Fth(co) in these coordinates. Then by (2.8), for h 6 IR sufficiently small, and i 6 {1 . . . . . n},

to+h x i ( to -t- h) -- x i ( to ) = {x i, h}(x(t))dt.

d t o

Hence, by the Fundamental Theorem of Calculus, xi(t) is differentiable at to, with the derivative

xi(to) lim l (xi(to + e) xi(to)) = lim l ( f t°+~ ) . . . . {x i, h}(x(t))dt = {x i, h}(x(to)), e-~O ~ e--*O E X Jto

as required.

The following proposition provides an equivalent expression of the stochastic Hamilton equations in the It6 form (see Section 6.4).

PROPOSITION 2.3. The stochastic Hamilton equations in Definition 2.1 admit an equivalent description using It6 integration by using the Schwartz operator ~ ( v , m) : rvV --~ zmM naturally associated to the Hamiltonian Stratonovich operator H and that can be described as follows. Let L c zvM be a second-order vector

Page 8: Stochastic hamiltonian dynamical systems

72 J.-A. L~ZARO-CAMI and J.-P. ORTEGA

and f ~ C °~ (M) arbitrary, then r

Moreover, expression (2.8) in the It6 representation is given by

f - f

1 r fo e f {f, hj}(r'h)dXJ 1 {{f, h j } ,h i } (Fh)d[XJ , Xi]. j = l dO , - =

We will refer to ~ as the Hamiltonian Schwartz operator associated to h.

(2.10)

Proof: According to the remarks made in Appendix 6.4, the Schwartz operator 7-/ naturally associated to H is constructed as follows. For any second-order vector L~ 6 vvM associated to the acceleration of a curve v (t) in V such that v (0) = v we define 7-( (v, m) (L~) := L~(0) 6 vmM, where m( t ) is a curve in M such that m (0) = m and rh (t) = H (v ( t ) , m (t)) ~ (t), for t in a neighborhood of 0. Consequently,

d2 t=0 d t=o (v, m) (L~) [ f ] = ~-~ f (m (t)) = ~ (df(m(t)), rh(t))

d

d- t t=o

d

d-t t=0

d t=o dt

F

= y'~JE j, j = l

r = ~--~ (E J,

j=l

(d f (m(t)), H (v(t), m(t))iJ(t))

r

E (E j, i~(t)) (d f (m(t)), Xhj (m(t))) j = l

r

~ ( e j, ~?(t)){f, hj}(m(t)) j = l

iJ(O)){f, hj}(m) + (¢J, ~)(0))(d{f, hj}(m), th(O))

r

i~(O)){f, hj}(m) + (E j, ~)(0)) Z ( E i, 1)(O)){{f, h j}, hi}(m) i=1

r

In order to establish (2.10) we need to calculate ~* (v, m) (d2f(m)) for a second- order form d2f(m) E r*M at m e M, f ~ C ~ (M). Since 7-[* (v, m)(d2f(m)) is fully characterized by its action on elements of the form L~ ~ rvV for some curve

Page 9: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 73

v (t) in V such that v ( 0 ) = v, we have

(7~* (v, m) (d z f (m)), Lv) = (d2f (m), 7-[ (v, m) (Li~)) = 7-L (v, m) (Li~) [ f ]

= (~--]. {f, hj}(m)~ j + {{f, hj}, hi}(m)E i . ~J, L~). i,j=l

Consequently, 7-[* (v, m) (dz f (m)) = Y~,j=l {f, hj}(m)~J + {{f, hi}, hi}(m)E i • ~J. Hence, if I'h is the Hamiltonian semimartingale associated to h with initial

condition I'0, r < ~h is any stopping time, and f ~ C~(M) , we have by (6.5) and (6.6)

f0 f0 f ( rh ) -- / ( rob ) = (d2f, er ) =

= ({f'hJ}(rh)EJ'dX)

+ (l{f, hj},hiI(rh)E 'EJ, dX) j,i=l

-~-E { { f ' h j } ' h i l ( I ~ h ) d [ X i ' x J ] " [] j,i=l

PROPOSITION 2.4 (Preservation of the symplectic leaves by Hamiltonian semi- martingales). In the setup of Definition 2.1, let £. be a symplectic leaf of (M, co) and I ~h a Hamiltonian semimartingale with initial condition Fo(OO)= Zo, where Zo is a random variable such that Zo(w) ~ 12 for all w E f2. Then, for any stopping time r < ~h we have that I~ ~ 17..

Proof: Expression (2.6) shows that for any z 6 /2, the Stratonovich operator H(v , z ) takes values in the characteristic distribution associated to the Poisson structure (M, {., .}), that is, in the tangent space T/2 of/2. Consequently, H induces another Stratonovich operator H£(v, z) : T~V --+ TzE, v ~ V, z ~ E, obtained from H by restriction of its range. It is clear that if i • E ~ M is the inclusion then

H~(v, z) o Tzi = H*(v, z). (2.11)

Let I'~ be the semimartingale in E that is a solution of the Stratonovich stochastic

differential equation 8I'~ = H£(X, I'hc)sX (2.12)

with initial condition F0. We now show that F := i o F~ is a solution of

8F = H(X, F)SX.

Page 10: Stochastic hamiltonian dynamical systems

74 J.-A. L,~ZARO-CAM[ and J.-R ORTEGA

The uniqueness of the solution of a stochastic differential equation will guarantee in that situation that F h necessarily coincides with F, hence proving the statement. Indeed, for any a 6 72(M),

Since F~ satisfies (2.12) and T*i-or ~ 72(/2), by (2.11) this equals

f<H (X, rS)(T*i f<H*(X, iorS,(.),Sx>= that is, ~I" = H(X, F)~X, as required. []

PROPOSITION 2.5 (The stochastic Hamilton equations in Darboux-Weinstein coordinates). Let (M, {., .}) be a Poisson manifold and P h be a solution of the Hamilton equations (2.5) with an initial condition xo ~ M. There exists an open neighborhood U of Xo in M and a stopping time ru such that Fib(co) ~ U, for any co E 72 and any t < ru (co). Moreover, U admits local Darboux-Weinstein coordinates ( q l , . . . , qn, P l , . . . , Pn, zl, . . . , zt) in which (2.8) takes the form

qi(Fhr)--qi(Fh)= rj~__l fO r Ohj

P i ( p h ) - - Pi( F h ) : - ~ [ r Ohj

j= lJO ~q /&Xj '

Z i ( p h ) - - Z i ( r h ) = {Zi, hj}r3X j, j = l

where {-, "}r is the transverse Poisson structure of (M, {., .}) at xo.

Proof: Let U be an open neighborhood of x0 in M for which Darboux coordinates can be chosen. Define ru = inft_>0{Ft h 6 U c} (ru is the exit time of U). It is a standard fact in the theory of stochastic processes that rv is a stopping time. The proposition follows by writing (2.8) for the Darboux-Weinstein coordinate functions ( q l . . . . . qn, Pl . . . . . Pn, Zl . . . . . Zl). []

Let ( : M x 72 ---> [0, oo] be the map such that, for any z 6 M, ( ( z ) is the maximal stopping time associated to the solution of the stochastic Hamilton equations (2.5) with initial condition F0 = z a.s. Let F be the flow of (2.5), that is, for any z 6 M, F (z) : [0, ( (z)] --+ M is the solution semimartingale of (2.5) with initial condition z. The map z ~ M i > Ft (z, w) E M is a local diffeomorphism of M, for each t > 0 and almost all co 6 72 in which this map is defined (see [17]). In the following result, we show that, in the symplectic context, Hamiltonian flows preserve the symplectic form and hence the associated volume form 0 = co A. n. A W.

Page 11: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 75

This has already been shown for Hamiltonian diffusions (see Example 3.1) by Bismut [4].

THEOREM 2.1 (Stochastic Liouville's Theorem). Let ( M , w ) be a symplectic manifold, X : ~+ × f2 --+ V* a semimartingale, and h : M ~ V* a Hamiltonian function. Let F be the associated Hamiltonian flow. Then, for any z ~ M and any (t, 7) ~ [0, ~ (z)],

F,* (z, 7) o) = o~.

Proof: By [19, Theorem 3.3] (see also [36]), given an arbitrary form ot 6 f2 k (M) and z 6 M, the process F (z)* or satisfies the following stochastic differential equation:

F(z)* ~=~(z)+ ~-~ f F(z)*(£Xhj~)~X j. j = l

In particular, if c~ = 09 then £XhjW = 0 for any j 6 {1 . . . . . r}, and hence the result

follows. []

2.2. Conserved quantities and stability

Conservation laws in Hamiltonian mechanics are extremely important since they make easier the integration of the systems that have them and, in some instances, provide qualitative information about the dynamics. A particular case of this is their use in concluding the nonlinear stability of certain equilibrium solutions using Dirichlet type criteria that we will generalize to the stochastic setup using the following definitions.

DEFINITION 2.2. A function f E C ~ (M) is said to be a strongly (respectively, weakly) conserved quantity of the stochastic Hamiltonian system associated to h : M --~ V* if for any solution F h of the stochastic Hamilton equations (2.5) we have that f (F h) = f (r~) (respectively, E [ f (r~)] = E [ f (r~)], for any stopping time r).

Notice that strongly conserved quantities are obviously weakly conserved and that the two definitions coincide for deterministic systems with the standard definition of conserved quantity. The following result provides in the stochastic setup an analogue of the classical characterization of the conserved quantities in terms of Poisson involution properties.

PROPOSITION 2.6. Let (M, {., .}) be a Poisson manifold, X : ~+ x f2 --~ V a semimartingale which takes values on the vector space V such that Xo = O, and h • M --+ V* and f c C ~ ( M ) are smooth functions. I f { f , hj} = 0 for every component hj of h then f is a strongly conserved quantity o f the stochastic Hamilton equations (2.5).

r Conversely, suppose that the semimartingale X = Z j = I X J 6 j is such that

[X i, X j] = 0 if i 7~ j . I f f is a strongly conserved quantity then { f , hi} = O, for

Page 12: Stochastic hamiltonian dynamical systems

76 J.-A. Lfi, ZARO-CAM[ and J.-P. ORTEGA

any j ~ {1 . . . . . r} such that [X J, X J] is a strictly increasing process at O. The last condition means that there exist A E ~ and ~ > 0 with P(A) > 0 such that for any t < ~ and o) E A we have [X j, xJ]t(w) > [X j, XJ]o(w), for all j E {1 . . . . . r}.

Proof: Let 1 -'h be the Hamiltonian semimartingale associated to h with initial condition F0 h. As we saw in (2.10),

f (rh) =f (rh) + {f, hj} (rh)dx j

{{f, hjI,h }(rh)d[Xi, XJ]. (2.13)

If {f, hi} = 0 for every component hj of h then all the integrals in the previous expression vanish and therefore f (F h) = f (F0 h) which implies that f is a strongly conserved quantity of the Hamiltonian stochastic equations associated to h.

Conversely, suppose now that f is a strongly conserved quantity. This implies that for any initial condition F h, the semimartingale f (F h) is actually time independent and hence of finite variation. Equivalently, the (unique) decomposition of f (F h) into two processes, one of finite variation plus a local martingale, only has the first term. In order to isolate the local martingale term of f (F h) recall first that the quadratic variations [X i, xJ] have finite variation and that the integral with respect to a finite variation process has finite variation (see [20, Proposition 4.3]). Consequently, the last summand in (2.13) has finite variation. As to the second summand, let M j and A j, j = 1 . . . . . r, be local martingales and finite variation processes, respectively, such that X J = A j + M j. Then

f {f, hj} ( F h ) d x J = f {f , hj} ( I ' h )dMJ+ f {f, hj} ( I 'h)da j.

Given that for each j , f { f , hj}(rh)da j is a finite variation process and f { f , hj}(rh)dM j is a local martingale (see [32, Theorem 29, page 128]) we conclude that Z := Y~=I f {f, hj} (rh)dMJ is the local martingale term of f (F h) and hence equal to zero.

We notice now that any continuous local martingale Z : R + x f2--+ N is also a local L 2 (f~)-martingale. Indeed, consider the sequence of stopping times

r n = {inft > 0 1 [Zt l=n}, n EN. Then E[(Zrn)~] < E [ n 2] = n 2, for all t E]~+.

Hence, zrnE L2 (~'~) for any n. In addition, E [(Z~n)~] = E[[Z'gn,z'~n]t ] (see [32,

Corollary 3, page 73]). On the other hand, by Proposition 5.1,

rf = = lto,~,l{f, h j } ( F h ) d M j. j=l

Page 13: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS

Thus, by [32, Theorem 29, page 75] and the hypothesis [X i, X J] = 0 if i # j ,

E [ ( Z r n ) i ] = E [ [ Z r n , zrn]t ]

= ~E[[ii~o.:~li, hjl(r~)dMJ, ilEo.:,l:,h,i(r~)dM']] j , i = l .I t -I

=~E[(flEo.:i({f, hx}lf, h,I)(r~)d[M:,M']),] j,i=l

= ~E[(f lm,:~({f , hsilf, hii)(r'h)d[XS, xi])] j,i=l t

=~E[(f lro.:,{f,h:i~(r~)d[X:,X:]),]. [X J, X s ] an increasing process of finite variation Since is

f 1[o,:1 {f, hj} 2 (r 'h)d[X j co E g2

77

t h e n

, S j ] is a Riemann-Stieltjes integral and hence for any

d [xJ, xJ]) (o,)

= f ' .n+ . {:. + }= ( : +>" <tX . X"] + ) As [X J, X J] (co) is an increasing function of t 6 •+, then for any j ~ {1 . . . . . r}

E[f lm,:]{f,h~}2 (I'h)d[XJ, XJ]] >_0. (2.14)

since E [ ( Z : ) ~ ] = 0 , we necessarily have that the inequality in Additionally, (2.14) L . , J

is actually an equality. Hence,

L t l [0 , : l {f, (F h) [X j, X j] = (2.15) h j} 2 d O.

Suppose now that [X J, x J ] is strictly increasing at 0 for a particular j . Hence, there exist A E ~" with P(A) > 0, and 3 > 0 such that [X j, xJ]t ((_D) > [X j, XJ]o (60) for any t < 6. Take now a fixed w 6 A. Since ~.n _~ o~ a.s., we can take n large enough to ensure that r" (co) > t, where t 6 [0,3). Thus, we may suppose that 1[0,:1 (t, w) = 1. As [X j, X j] (co) is a strictly increasing process

at zero, - ~ j o { : , h j V ( r h ( o , ) ) d [ X J , x J ] ( ~ o ) > o unless {f, h j }2(Fh(o) ) )= 0 in

a neighborhood [0, 8,o) of 0 contained in [0,/~). In principle 3~o > 0 might depend

Page 14: Stochastic hamiltonian dynamical systems

78 J.-A. L,/~ZARO-CAMI and J.-E ORTEGA

on w • A, so the values of t • [0,8) for which {f, hj} 2 (I "h (co)) = 0 for any o9 • a

are those satisfying 0 < t < inf~oeaSco. In any case (2.15) allows us to conclude

that {f, hj}Z(Fh(og))= 0 for any o9 • a . Finally, consider any I "h solution to

the stochastic Hamilton equations with constant initial condition I "h = m • M an arbitrary point. Then, for any o9 • A,

O = {f, hj} 2 (Fo h (o9) )= {f, hj} 2 (m).

Since m • M is arbitrary we can conclude that {f, hj} = O. []

We now use the conserved quantities of a system in order to formulate sufficient Dirichlet type stability criteria. Even though the statements that follow are enounced for processes that are not necessarily Hamiltonian, it is for these systems that the criteria are potentially most useful. We start by spelling out the kind of nonlinear stability that we are after.

DEFINITION 2.3. Let M be a manifold and let

6F = e(X, F)~X (2.16)

be a Stratonovich stochastic differential equation whose solutions F : i R x ga-+ M take values on M. Given x • M and s • IR, denote by lp*,x the unique solution of (2.16) such that F~,X(og) = x, for all co • ga. Suppose that the point z0 • M is an equilibrium of (2.16), that is, the constant process Pt(og) := z0, for all t • R and co • ga, is a solution of (2.16). Then we say that the equilibrium z0 is:

(i) Almost surely (Lyapunov) stable when for any open neighborhood U of zo there exists another neighborhood V C U of z0 such that for any z • V we have P °'z C U, a.s.

(ii) Stable in probability. For any s > 0 and e > 0

lim P {supd(F~'X, zo) > e} =O, x--+zo t>s

where d : M x M--+ IR is any distance function that generates the manifold topology of M.

THEOREM 2.2 (Stochastic Dirichlet's Criterion). Suppose that we are in the setup of the previous definition and assume that there exists a function f • C°°(M) such that d f ( z o ) = 0 and that the quadratic form d2f(z0) is (positive or negative) definite. If f is a strongly (respectively, weakly) conserved quantity for the solutions of (2.16) then the equilibrium zo is almost surely stable (respectively, stable in probability).

Proof: Since the stability of the equilibrium z0 is a local statement, we can work in a chart of M around z0 with coordinates (Xl . . . . . Xn) in which z0 is modeled by the origin. Moreover, using the Morse lemma and the hypotheses on the function f ,

Page 15: Stochastic hamiltonian dynamical systems

S T O C H A S T I C H A M I L T O N I A N D Y N A M I C A L S Y S T E M S 79

and assuming without loss of generality that f ( zo )= 0, we choose the coordinates (Xl . . . . . Xn) SO that f (x l . . . . . Xn) = x~+...+X2n . Hence, in the definition of stability in probability, we can use the distance function d(x, zo)= f (x) .

Suppose now that f is a strongly conserved quantity and let U be an open neighborhood of z0. Let r > 0 be such that V := f - l ( [ 0 , r]) C U. Let z 6 V with f ( z ) = r'. As f is a strongly conserved quantity f ( F °'z) = r ' < r and hence F °'z C U, as required.

In order to study the case in which f is a weakly conserved quantity, let 6 > 0 and let U~ be the ball of radius E around z0. Then, for any x 6 U~ and s E ~+, let ru, be the first exit time of F s'x with respect to U,. Notice first that if co ~ f2 belongs

S ,X to the set {co 6 f2 [suP0_<s< t d ( F t ,z0) > 6} = {co 6 f2 [suP0_<s< t f ( F t 'x) > 62}, then ru, (co) < t and hence the stopped process (FS,X)~U, satisfies that

(1-'~, x = : ((r,s,x)p (co))= : :,

for those values of co. This ensures that

2 6 zo)>.) < y

Taking expectations in both sides of this inequality we obtain

. (sup.( :x zo) > <

\O<s<t -- 62

Since by hypothesis f is a weakly conserved quantity, we can rewrite the right-hand side of this inequality as

62 (52 62 E2 '

and we can therefore conclude that

\O<_s<t ( sup d (Ft 'x, z0) > 6 ) <_ f(x)62 . (2.17) P

Taking the limit x ---> z0 in this expression and recalling that f(zo) = 0, the result follows. []

A careful inspection of the proof that we just have carried out reveals that in order for (2.17) to hold, it would have sufficed to have E[ f (Fr)] < E l f (F0)], for any stopping time r and any solution F, instead of the equality guaranteed by the weak conservation condition. This motivates the next definition.

DEFINITION 2.4. Suppose that we are in the setup of Definition 2.3. Let U be an open neighborhood of the equilibrium z0 and let V : U ---> R be a continuous function. We say that V is a Lyapunov function for the equilibrium z0 if V(zo) = O,

Page 16: Stochastic hamiltonian dynamical systems

80 J.-A. L.~ZARO-CAMI and J.-P. ORTEGA

V(z) > 0 for any z ~ U \ {z0}, and

E[V (F~)] < E[V (F0)], (2.18)

for any stopping time r and any solution r of (2.16).

This definition generalizes to the stochastic context the standard notion of Lyapunov function that one encounters in dynamical systems theory. If (2.16) is the stochastic differential equation associated to an It6 diffusion and the Lyapunov function is twice differentiable, the inequality (2.18) can be ensured by requiring that A[V](z) < O, for any z E U \ {z0}, where A is the infinitesimal generator of the diffusion, and by using Dynkin's formula.

THEOREM 2.3 (Stochastic Lyapunov's Theorem). Let zo ~ M be an equilibrium solution of the stochastic differential equation (2.16) and let V : U ~ ]R be a continuous Lyapunov function for zo. Then zo is stable in probability.

Proof: Let U~ be the ball of radius E around z0 and let VE := infx~u\u, V(x). Using the same notation as in the previous theorem we denote, for any x ~ U, and s ~ JR+, ru, as the first exit time of F s'x with respect to U,. Using the same approach as above we notice that if 09 ~ f2 belongs to the set {o9 ~ f2 I sup0<s<t d (Ft 'x, z0) > E}, then ru, (09) < t, and hence the stopped process (FS'X) rU, satisfies

v ( ( r s x ) ? , (o9)) = v I r s,x ) \ ~u,(o~)(og) >- V,,

S~X for those values of o9, since rru ~ (~o)(°9) belongs to the boundary of UE. This ensures

that Vel{oj~f21supo<_s<td(r~,X,zo)>,} ~ V

Taking expectations in both sides of this inequality we obtain

P ( s u p d (F: '/,zO) > E) < \ O<s <t - - VE

We now use that V being a Lyapunov function satisfies (2.18) and hence

_ _ <

v, v , - v , v ,

We can therefore conclude that

( ,FS.X , ) V(x) P sup d~ t ,zo) > e <

\O<s <t -- VE

Taking the limit x ~ z0 in this expression and recalling that V(zo) = 0, the result follows. []

REMARK 2.5. This theorem has been proved by Gihman [14] and Hasminskii [15] for It6 diffusions.

Page 17: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 81

3. Examples

3.1. Stochastic perturbation of a Hamiltonian mechanical system and Bismut's Hamiltonian diffusions

Let (M, {.,.}) be a Poisson manifold and hj ~ Coo(M), j = 0 . . . . . r, be smooth functions. Let h : M > ]i~ r+ l be the Hamiltonian function m l ) (h0 (m) . . . . . hr (m)), and consider the semimartingale X : ~+ x f2 --+ ~ r + l given by (t,w) , , (t, Bt 1 (o9) . . . . . B 7 (w)), where B J, j = 1 . . . . . r, are r-independent Brownian motions. L6vy's characterization of Brownian motion shows (see for instance [32, Theorem 40, page 87]) that [B J, Bi]t = t• ji . In this setup, (2.8) reads

f0 • • f ( r h ) - - f (Fh) = {f, h o } ( r h ) d t + {f , h j} (r 'h)SB j (3.1)

j = l

for any f ~ C °o (M). According to (2.10), the equivalent It6 version of this equation is

+ ,

j=l J0

+ {{f, hj} ,hjI(Fh) dt.

Equation (3.1) may be interpreted as a stochastic perturbation of the classical Hamilton equations associated to h0, that is,

d ( f o y) (t) = {f, h0} (y (t)),

dt

by the r Brownian motions B j . These equations have been studied by Bismut in [4] in the particular case in which the Poisson manifold (M, {-, .}) is just the symplectic Euclidean space ]1~ 2n with the canonical symplectic form. He refers to these particular processes as Hamiltonian diffusions.

If we apply Proposition 2.6 to the stochastic Hamiltonian system (3.1), we obtain a generalization to Poisson manifolds of a result originally formulated by Bismut (see [4, Th6or~mes 4.1 and 4.2, page 231]) for Hamiltonian diffusions. See also [30].

PROPOSITION 3.1. Consider the stochastic Hamiltonian system introduced in (3.1). Then f E C °° (M) is a conserved quantity if and only if

{f , h0} = { f , h i} . . . . . { f , hr} = 0. (3 .2)

Proof: If (3.2) holds then f is clearly a conserved quantity by Proposition 2.6. Conversely, notice that as [B', B J] = t~ ij, i, j ~ {1 . . . . . r}, and X ° ( t , w ) = t is a finite variation process then IX i, X J] = 0 for any i, j 6 {0, 1 . . . . . r} such that i ~: j . Consequently, by Proposition 2.6, if f is a conserved quantity then

{f , h i} . . . . . {f, hr} = 0. (3.3)

Page 18: Stochastic hamiltonian dynamical systems

82 J.-A. LAZARO-CAMI and J.-P. ORTEGA

Moreover, (3.1) reduces to

L r h0} (F h) dt O, {f,

for any Hamiltonian semimartingale F h and any stopping time r < (h. Suppose that {f, h0} (m0) > 0 for some m0 • M. By continuity there exists a compact neighborhood U of m0 such that {f, h0}lcr > 0. Take F h the Hamiltonian semimartingale with initial condition F0 h = m0, and let ~ be the first exit time of U for 1 -'h. Then, defining r := ~ A (,

f; f; {f, ho} (F h) dt >_ min {{f, ho} (m) I m • U} dt > O,

which contradicts (3.3). Therefore, {f, h0} = 0 also, as required. []

REMARK 3.1. Notice that, unlike what happens for standard deterministic Hamil- tonian systems, the energy h0 of a Hamiltonian diffusion does not need to be conserved if the other components of the Hamiltonian are not in involution with h0. This is a general fact about stochastic Hamiltonian systems that makes them useful in the modeling of dissipative phenomena. We see more of this in the next example.

3.2. Integrable stochastic Hamiltonian dynamical systems

Let (M, w) be a 2n-dimensional manifold, X : ]~+ x f2 --~ V a semimartingale, and h : M ~ V* be such that h = E~=lhi~. i, w i t h { E 1 . . . . . E r } a basis of V*. Let H be the associated Stratonovich operator in (2.6).

Suppose that there exists a family of functions {fr+l . . . . . fn} C C °O (M) such that the n-functions {fl := hi . . . . . fr : : hr, fr+l . . . . . fn} C Coo (M) are in Poisson involution, that is, {3~, f j } = 0, for any i , j • {1 . . . . . n}. Moreover, assume that F := ( f l . . . . . fn) satisfies the hypotheses of the Liouville-Arnold Theorem [3]: F has compact and connected fibers and its components are independent. In this setup, we will say that the stochastic Hamiltonian dynamical system associated to H is integrable.

As it was already the case for standard (Liouville-Arnold) integrable sys- tems, there is a symplectomorphism that takes (M, w) to ( ,~n x ]I~ n , Ein__l dO i A dli) and for which F - F(I1 . . . . . In). In particular, in the action-angle coordinates (I1 . . . . . In, 01 . . . . . on), hj -- hj (11 . . . . . In) with j • {1 . . . . . r}. In other words, the components of the Hamiltonian function depend only on the actions I := (I1 . . . . . In). Therefore, for any random variable F0 and any i • {1 . . . . . n},

Ii (F) - I / (Fo) = {I~, hj (I)} ( F ) 8 X j = 0, (3.4a)

0 i (P) - - 0 i (P0) : {0 i, hj (I)} (F) 6X j = ~ (F) 8X j. (3.4b)

Page 19: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 83

Consequently, the tori determined by fixing I = constant are left invariant by the stochastic flow associated to (3.4). In particular, as the paths of the solutions are contained in compact sets, the stochastic flow is defined for any time and the flow is complete. Moreover, the restriction of this stochastic differential equation to the toms given by say, I0, yields the solution

r

0 i (F) -- 0 i (I"o) ~--- Z o)j (I0) X j, (3.5) j = l

Ohj where wj (I0) := ~ (Io) and where we have assumed that X0 = 0. Expression (3.5)

clearly resembles the integration that can be carried out for deterministic integrable systems.

Additionally, the Haar measure d01m . . . A dO n on each invariant toms is left invariant by the stochastic flow (see Theorem 2.1 and [22]). Therefore, if we can ensure that there exists a unique invariant measure /z (for instance, if (3.5) defines a nondegenerate diffusion on the toms qr n, the invariant measure is unique up to a multiplicative constant by the compactness of ~?n (see [17, Proposition 4.5])) then /z coincides necessarily with the Haar measure.

3.3. The Langevin equation and viscous damping

Hamiltonian stochastic differential equations can be used to model dissipation phenomena. The simplest example in this context is the damping force experienced by a particle in motion in a viscous fluid. This dissipative phenomenon is usually modeled using a force in Newton's second law that depends linearly on the velocity of the particle (see for instance [21, §25]). The standard microscopic description of this motion is carried out using the Langevin stochastic differential equation (also called the Orstein-Uhlenbeck equation) that says that the velocity ~(t) of the particle with mass m is a stochastic process that solves the stochastic differential equation

m dil(t) = -)~il(t)dt + bdBt, (3.6)

where ~ > 0 is the damping coefficient, b is a constant, and Bt is a Brownian motion. A common physical interpretation for this equation (see [8]) is that the Brownian motion models random instantaneous bursts of momentum that are added to the particle by collision with lighter particles, while the mean effect of the collisions is the slowing down of the particle. This fact is mathematically described by saying that the expected value qe := E[q] of the process q determined by (3.6) satisfies the ordinary differential equation /le =- -~qe . Even though this description is accurate it is not fully satisfactory given that it does not provide any information about the mechanism that links the presence of the Brownian perturbation to the emergence of damping in the equation. In order for the physical explanation to be complete, a relation between the coefficients b and )~ should be provided in such a way that the damping vanishes when the Brownian collisions disappear, that is,

= 0 when b = 0 .

Page 20: Stochastic hamiltonian dynamical systems

84 J.-A. L./~ZARO-CAMI and J.-E ORTEGA

We now show that the motion of a particle of mass m in one dimension subjected to viscous damping with coefficient ~. and to a harmonic potential with Hooke constant k is a Hamiltonian stochastic differential equation. More explicitly, we will give a stochastic Hamiltonian system such that the expected value qe of its solution semimartingales satisfies the ordinary differential equation of the damped harmonic oscillator, that is,

m ~e(t) = - -~e( t ) -- kqe(t).

This description provides a mathematical mechanism by which the stochastic per- turbations in the system generate an average damping.

Consider ~2 with its canonical symplectic form and let X • R+ x f2 ~ ~ be the real semimartingale given by Xt (o9) = (t + vBt (o9)) with v ~ R and Bt a Brownian motion. Let now h • R 2 --~ ~{ be the energy of a harmonic oscillator, that is,

1 2 1 2 h (q, p) := ~m p + ~pq .

By (2.10), the solution semimartingales F h of the Hamiltonian stochastic equations associated to h and X satisfy

' f q ( P h ) - - q ( p h ) = ~m (2p(ph t ) - -v2pq(Fh) )d t +--m P(pht)dBt' (3.7)

p (r h) - p ( r 0 =

Given that e[fp(rh)dB,]= e[iq(rh)ds,] = 0 , if we denote

q e ( t ) : = E [ q ( r h t ) ] p e ( t ) : = E [ p ( F h t ) ] ,

Fubini's theorem guarantees that

1 v2p I)2p q e ( t ) = - - p e ( t ) - ~ m q e ( t ) and /Se(t)-- - - p e ( t ) - - P q e ( t ) . (3.9)

m 2m From the first of these equations we obtain that

v2p Pe (t) = m~e + --~--qe

whose time derivative is 1)2/9 ,_

Pe (t) = m O, q- T q e.

These two equations substituted in the second equation of (4.8) yield

( I)4/0 ) mqe (t) = --V2pCle (t) -- p \--4---ram -t- 1 qe ( t ) , (3.10)

that is, the expected value of the position of the Hamiltonian semimartingale 1 ~h associated to h and X satisfies the differential equation of a damped harmonic

Page 21: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 85

oscillator (5.8) with constants

(+ ) )~ = v2p and k = p \-4--m-m + 1 .

Notice that the dependence of the damping and elastic constants on the coefficients of the system is physically reasonable. For instance, we see that the more intense the stochastic perturbation is, that is, the higher v is, the stronger the damping becomes 0~ = vZP increases). In particular, if there is no stochastic perturbation, that is, if v = 0, then the damping vanishes, k = p and (4.10) becomes the differential equation of a free harmonic oscillator of mass m and elastic constant p.

The stability of the resting solution. It is easy to see that the constant process r't(w) = (0,0), for all t 6 IR and w 6 f2 is an equilibrium solution of (3.7) and (3.8). One can show using the stochastic Dirichlet criterion (Theorem 2.2) that this equilibrium is almost surely Lyapunov stable since the Hamiltonian function h is a strongly conserved quantity (by (2.8)) that exhibits a critical point at the origin with definite Hessian.

The Langevin equation. In the previous paragraphs we have succeeded in providing a microscopic Hamiltonian description of the harmonic oscillator subjected to Brown- ian perturbations whose macroscopic counterpart via expectations yields the equations of the damped harmonic oscillator. In view of this, is such a stochastic Hamiltonian description available for the pure Langevin equation (3.6)? The answer is no. More specifically, it can be easily shown (proceed by contradiction) that (3.6) cannot be written as a stochastic Hamiltonian differential equation on ]1~2 with its canonical sym- plectic form with a noise semimartingale of the form Xt(w) = (fo(t, Bt), f l ( t , B t ) )

and a Hamiltonian function h(q, p) = (ho(q, p), hi(q, p)), fo, f l , ho, hi ~ C ~ ( R ) . Nevertheless, if we put aside for a moment the stochastic Hamiltonian category and we use It6 integration, the Langevin equation can still be written in phase space, that is,

dqt --= vtdt, dvt -= -)~vtdt + bdBt, (3.11)

as a stochastic perturbation of a deterministic system, namely, a free particle whose evolution is given by the differential equations

dqt = vtdt and dvt = 0. (3.12)

Let {u l, u 2 } be global coordinates on ~ 2 associated to the canonical basis {el, e2} and

consider the global basis {d2u i, d2lgi.d2uJ}i,j=l,2 o f g*~l~ 2. Define a dual Schwartz .g* ~ 2 * 2 operator S* (x, (q, v) ) : (q,v) ) rxlR characterized by the relations

d2q l ~ vdzu 1, d2v l ~ bdzu 2 - )~v (dzu 2.dzu2) ,

where (q, v) ~ ]1~ 2 is an arbitrary point in phase space and x 6 IR 2. If X : R+ x ~ ~ IR 2 is such that X (t, co) = (t, bBt (co)), for any (t, w) 6 N+ x f2, it is immediate to see that the It6 equations associated to $* and X are (3.11). Moreover, if we set b = 0, that is, we switch off the Brownian perturbation then we recover (3.12), as required.

Page 22: Stochastic hamiltonian dynamical systems

86 J.-A. LAZARO-CAM[ and J.-P. ORTEGA

3.4. Brownian motions on manifolds

The mathematical formulation of Brownian motions (or Wiener processes) on manifolds has been the subject of much research and it is a central topic in the study of stochastic processes on manifolds (see [17, Chapter 5], [12, Chapter V], and references therein for a good general review of this subject).

In the following paragraphs we show that Brownian motions can be defined in a particularly simple way using the stochastic Hamilton equations introduced in Definition 2.1. More specifically, we will show that Brownian motions on manifolds can be obtained as the projections onto the base space of a very simple Hamiltonian stochastic semimartingales defined on the cotangent bundle of the manifold or of its orthonormal frame bundle, depending on the availability or not of a parallelization for the manifold in question.

We will first present the case in which the manifold is parallelizable or, equivalently, when the coframe bundle on the manifold admits a global section, as the construction is particularly simple in this situation. The parallelizability hypothesis is verified by many important examples. For instance, any Lie group is parallelizable; the spheres S 1, S 3, and S 7 are parallelizable too. At the end of the section we describe the general case.

The notion of manifold-valued Brownian motion that we will use is the following. An M-valued process F is called a Brownian motion on (M, g), with g a Riemannian metric on M, whenever F is continuous and adapted and for every f ~ C a ( M )

f ( r ) - f ( r 0 ) - ~ AMf(r)dt

is a local martingale. We recall that the Laplacian AM ( f ) is defined as AM ( f ) = T r ( H e s s f ) , for any f ~ C ~c (M), where H e s s f := V ( V f ) , with V : ~ (M) x ~ (M) ~ ~(M) , the Levi-Civita connection of g. Hess f is a symmetric (0, 2)- tensor such that for any X, Y ~ ~(M) ,

Hess Z(X, Y) = X [g(grad f, Y)] - g(grad f , VxY) . (3.13)

B r o w n i a n m o t i o n s on paral le l izable mani fo lds . Suppose that the n-dimensional manifold (M, g) is parallelizable and let {Y1 . . . . . In} be a family of vector fields such that for each m 6 M, {Yl(m) . . . . . Yn(m)} forms a basis of TraM (a parallelization). Applying the Gram-Schmidt orthonormalization procedure, if necessary, we may suppose that this paralMization is orthonormal, that is, g (Yi, Y j ) = 3ij, for any i , j = l . . . . . n.

Using this structure we are going to construct a stochastic Hamiltonian system on the cotangent bundle T*M of M, endowed with its canonical symplectic structure, and we will show that the projection of the solution semimartingales of this system onto M are M-valued Brownian motions in the sense specified above. Let X : N+ x f2 --+ R n+l be the semimartingale given by X(t, co) := (t, B 1(co) . . . . . Bt(co)), where B J, j = 1 . . . . . n, are n-independent Brownian motions and let h = (h0, hi . . . . . hn):T*M--+ N ~+l be the function whose components are

Page 23: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 87

given by

ho : T*M > IR h j " T*M > N and (3.14)

1 n Ol m I > - S Z j = l ( o l m , ( V y j Y j ) ( m ) } Ol m I >(Otm, Y j ( m ) ) .

We will now study the projection onto M of the Hamiltonian semimartingales F h that have X as stochastic component and h as Hamiltonian function and will prove that they are M-valued Brownian motions. In order to do so we will be particularly interested in the projectable functions f of T ' M , that is, the functions f ~ COO(T'M) that can be written as f = f o z r with f ~ C°°(M) and Jr • T*M -+ M the canonical projection.

We start by proving that for any projectable function f = f o zr ~ COO(T'M)

( -- I ~ V r j Y j ) and {f, h j }=g(gr ad - f , Yj), (3.15) {f, h 0 } = g g r a d f , - ~ j = l

and where {-,-} is the Poisson bracket associated to the canonical symplectic form on T*M. Indeed, let U be a Darboux patch for T*M with associated

i There exists functions c o o r d i n a t e s (qm . . . . . qn, Pl . . . . . Pn) such that {qi, pj} = 6j. f~ ~ Coo(re(U)), with k, j 6 {1 . . . . . n} such that the vector fields may be locally written as

n °

k=l Oqk "

Moreover,

and

n

hj (q, p) = ~ f ; (q) Pk k=l

{ n } {f, h j } = - forc,~-]~f fpk = f f { - f o ~ , p ~ } = f f -~i { qi 'pk}

k=l k=l k, i=l

= ~_, f~ai a f - (grad-f, 11;)o k,i=l j k - ~ = Yj [ f ] o zr = g Jr,

as required. The first equality in (3.15) is proved analogously. Notice that the formula that we have just proved shows that if f is projectable then so is {f, h j}, with j ~ {1 . . . . . n}. Hence, using (3.15) again and (3.13) we obtain that

{ {f, hj} ,h j} = Yj [g (grad-f , Yj)] o zr

= Hess--f (Yj, Yj) o zr + g (grad-f , vy;Yj ) o zr, (3.16)

for j 6 {1 . . . . . n}. Now, using (3.15) and (3.16) in (2.10) we have shown that for any projectable function f = f o rr, the Hamiltonian semimartingale I "h satisfies that

Page 24: Stochastic hamiltonian dynamical systems

88 J.-A. L/i, ZARO-CAM[ and J.-R ORTEGA

7 o ( r 9 - 7 o (r09

=~.~=lf g(gradf , FJ ) (Zr°Fh)dBJ+~j=l

or equivalently

-

f o ( r - 7 o (r09 - A (7) o r dt

Hess-f (rj, rj) (Jr o rh)dt, (3.17)

=~-]~f g(grad-f, Yj)(~orh)aej. (3.18) j = l

Since ~ ; "= l fg (g r a d f , Yj)(Vh)dB i is a local martingale (see [32, Theorem 20, page 631), :rt'(F h) is a Brownian motion.

Brownian motions on Lie groups. Let now G be a (finite-dimensional) Lie group with Lie algebra g and assume that G admits a bi-invariant metric g, for example when G is Abelian or compact. This metric induces a pairing in g invariant with respect to the adjoint representation of G on g. Let {~1 . . . . . ~n} be an orthonormal basis of g with respect to this invariant pairing and let {Vl . . . . . vn} be the corresponding dual basis of g*. The infinitesimal generator vector fields {~lG . . . . . ~nG} defined by ~ia(h) = TeLh • ~, with Lh " G ~ G the left translation map, h 6 G, i 6 {1 . . . . n}, are obviously an orthonormal parallelization of G, that is g(~iG, ~jG) : = ~ij. Since g is bi-invariant then VxY = ½IX, Y], for any X, Y ~ X(G) G (see [28, Proposition 9, page 304]), and hence V~io~ia = 0. Therefore, in this particular case the first component h0 of the Hamiltonian function introduced in (3.14) is zero and we can hence take ha = (hi . . . . . hn) and x a -= (B¢ . . . . . B;') when we consider the Hamilton equations that define the Brownian motion with respect to g.

As a special case of the previous construction that serves as a particularly simple illustration, we are going to explicitly build the Brownian motion on a circle. Let S 1 = {e iO ] 0 E N} be the unit circle. The stochastic Hamiltonian differential equation for the semimartingale F h associated to X • JR+ x f2 ~ N, given by Xt (w) : = Bt (o)), and the Hamiltonian function h • TS 1 --~ S 1 x JR --+ JR given by h(e i°, L) := 3., is simply obtained by writing (3.17) down for the functions f l ( e i°) :-----cos0 and f 2 ( e i°) : = sin0 which provide us with the equations for the projections X h and yh of I "h onto the OX and OY axes, respectively. A straightforward computation yields

d X h = - y h d B - x h d t and d Y h = X h d B - - Y dt, (3.19) 2

which, incidentally, coincides with the equations proposed in expression (5.1.13) of [26]. A solution of (3.19) is (Xt h, Yt h) = (cos Bt, sin Bt), that is, Ft h = e iBt.

Brownian motions on arbi t rary manifolds. Let (M, g) be a not necessarily parallelizable Riemannian manifold. In this case we will reproduce the same strategy as in the previous paragraphs but replacing the cotangent bundle of the manifold by the cotangent bundle of its orthonormal frame bundle.

Page 25: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 89

Let Ox (M) be the set of orthonormal flames for the tangent space TxM. The orthonormal flame bundle O ( M ) = [..JxeM Ox (M) has a natural smooth manifold structure of dimension n (n + 1) /2 . We denote by Jr : O (M) -+ M the canonical projection. We recall that a curve g : ( - e , s) C IR -+ O (M) is called horizontal if Yt is the parallel transport of Y0 along the projection Jr (Yt). The set of tangent vectors of horizontal curves that contain a point u 6 0 (M) defines the horizontal subspace HuO (M) C TuO (M), with dimension n. The projection yr : O (M) -+ M induces an isomorphism T, yr : H u O ( M ) ~ T~r(u)M. On the orthonormal flame bundle, we have n horizontal vector fields Yi, i = 1 . . . . . n, defined as follows. For each u 6 0 (M), let Yi (u) be the unique horizontal vector in HuO (M) such that Turf (Yi) = ui, where ui is the ith unit vector of the orthonormal frame u. Now, given a smooth function F ~ Co* (O (M)), the operator

AO(M) (F) = £ Yi [Yi [FI] i=1

is called Bochner's horizontal Laplacian on O (M). At the same time, we recall that the Laplacian AM ( f ) , for any f 6 C °* (M), is defined as AM ( f ) = Tr (Hess f ) . These two Laplacians are related by the relation

AO(M) (zr*f) = AM ( f ) , (3.20)

for any f 6 Co* (M) (see [16]). The Eells-Elworthy-Malliavin construction of Brownian motion can be summarized

as follows. Consider the following stochastic differential equation on O (M) (see [17]),

n

art = E ri CUt) ¢~B[ (3.21) i=1

where B J, j = 1 . . . . . n, are n-independent Brownian motions. Using the conventions introduced in the appendix 6.4, the expression (3.21) is the Stratonovich stochastic differential equation associated to the Stratonovich operator:

e (v, u) : ToN n > TuO (M)

V : Ein=l viei I > y].inl viyi (u) ,

where {el . . . . . en} is a fixed basis for N n. A solution of the stochastic differential equation (3.21) is called a horizontal Brownian motion on (.9 (M) since, by the It6 formula,

f (U) - f (Uo) = ~ [F] (Us) ~B~

= Yi [F] (Us)dB~ + -~ AO(M)(F) (Us)ds,

Page 26: Stochastic hamiltonian dynamical systems

90 J.-A. L/uZARO-CAMI and J.-R ORTEGA

for any F • C ~ ((.9 (M)). In particular, if F = zr* ( f ) for some f • C ~ ( M ) , by (3.20)

if f (X) - f (X0) = Y/[yr* ( f ) ] (Us) dB~ + -~ A M f (Xs) ds, i=l

where Xt = yr (Ut), which implies precisely that Xt is a Brownian motion on M. In order to generate (3.21) as a Hamilton equation, we introduce the functions hi :

T*O (M) ~ 11~, i = 1 . . . . . n, given by hi (or) = (or, Yi) . Recall that T*O (M) being a cotangent bundle has a canonical symplectic structure. Mimicking the computations carried out in the parallelizable case it can be seen that the Hamiltonian vector field Xhi coincides with Y/ when acting on functions of the form F o Zrr*o(M), where F • C °° (O (M)) and zrr*o(M) is the canonical projection 7"rr.o(M) " T*O (M) ---+ O (M). By (2.8), the Hamiltonian semimartingale F h associated to h = (hi . . . . . hn) and to the stochastic Hamiltonian equations on T*O(M) with stochastic component X = (B] . . . . . Bt) is such that

F o i= l , /

: £ f Yi [F](772T*O(M)(rsh)) (~B~ i=1

for any F • C °° ((9 (M)). This expression obviously implies that U h = 7 ~ T * O ( M ) ( r h) is a solution of (3.21) and consequently X h = Jr (U h) is a Brownian motion on M.

3.5. The inverted pendulum with stochastically vibrating suspension point

The equation of motion for small angles of a damped inverted unit mass pendulum of length I with a vertically vibrating suspension point is

where ¢ is the angle that measures the separation of the pendulum from the vertical upright position, y = y (t) is the height of the suspension point (externally controlled), )~ is the friction coefficient, and g is the gravity constant. By construction, the point (¢p, ~) = (0, 0) corresponds to the upright equilibrium position. It can be shown that if the function y(t) is of the form y(t) = az (wt), with z periodic, the amplitude a is sufficiently small, and the frequency w is sufficiently high, then this equilibrium becomes nonlinearly stable.

We now consider the case in which the external forcing of the suspension point is given by a continuous stochastic process ~ : N+ x f2 --~ N such that ~2 is continuous and stationary. Under this assumptions, Eq. (3.22) becomes the stochastic differential equation

\ l /

Page 27: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 91

where e := VC~/l. Observe that this equation is not Hamiltonian unless the friction term -M~ vanishes 0~ = 0), in which case one obtains a Hamiltonian stochastic system with Hamiltonian function h(4~, ~ ) = (1(/2~2_/4~2), ¼(e2co2ckl)2, _½(ecockl)2) and noise semimartingale Xt = (t, [~, ~], ~) (the symplectic form is obviously 12a¢ A a6).

The stability of the upright position of the stochastically forced pendulum has been studied in [27, 18], and references therein. In [27] it is assumed that the noise has the fairly strong mixing property. We recall that a continuous, adapted, stationary process P • N+ x f2 --+ ]R has the fairly strong mixing property if E [F 2] < oo,

there exists a real function c such that f o c (s)ds < oo, and for any t > s

IIE[G - E[I't]lYrs]llz2 < c( t - - s ) I l rs - E[I's]IIL2,

where II'IIL2 stands for the L 2 norm. For example, if x is the unique stationary solution with zero mean of the It6 equations

dxt = ytdt, dyt = - (xt + Yt) dt + dBt, 1 • ~_ __ y 2 - ~ has the fairly where Bt is a standard Brownian motion, then x 2 - 2

strong mixing property. Using this hypothesis, it can be shown [27, Theorem 1] that if z : R+ × f2 ---> ll~ is a continuously differentiable and stationary process such that, for any t ~ R+, E[zt] = O, E[exp(e l z t l ) ] < ~ if e = vr~/ l is sufficiently small, and the process ~2 has the fairly strong mixing property, then the solution (q~, q~) = (0, 0) of (3.23) is exponentially stable in probability, if e is sufficiently small and

le4

Moreover, Ovseyevich shows in [27, Section 4] that if we put X = 0 in (3.23) and we consider hence the inverted pendulum as a Hamiltonian system, then the equilibrium point (q~, q~)= (0, 0) is unstable.

4. Critical action principles for the stochastic Hamilton equations

Our goal in this section is showing that the stochastic Hamilton equations can be characterized by a variational principle that generalizes the one used in the classical deterministic situation. In the following pages we shall consider an exact symplectic manifold (M, w), that is, there exists a one-form 0 ~ ~2 (M) such that co = - d 0 . The archetypical example of an exact symplectic manifold is the cotangent bundle T*Q of any manifold Q, with 0 the Liouville one-form.

In the following pages we will proceed in two stages. In the first subsection we will construct a critical action principle based on using variations of the solution semimartingale using the flow of a vector field on the manifold. Even though this approach is extremely natural and mathematically very tractable it yields a variational principle (Theorem 4.1) that does not fully characterize the stochastic Hamilton equations. In order to obtain such a characterization one needs to use more general variations associated to the flows of vector fields defined on the

Page 28: Stochastic hamiltonian dynamical systems

92 J.-A. L.A.ZARO-CAM[ and J.-P. ORTEGA

solution semimartingale, that is, they depend on f2. This complicates considerably the formulation and will be treated separately in the second subsection.

DEFINITION 4.1. Let ( M , w = - d O ) be an exact symplectic manifold, X " ~+ × [2 ~ V a semimartingale taking values on the vector space V, and h • M ~ V* a Hamiltonian function. We denote by S (M) and S (R) the sets of M and real- valued semimartingales, respectively. We define the stochastic action associated to h as the map S ' S ( M ) ~ S(]~) given by

s(r)= f f where in the previous expression, h (F) • N+ x [2 ~ V x V* is given by h (F) (t, o9) := (Xt (w), h (F/(09))).

4.1. Variations involving vector fields on the phase space

DEFINITION 4.2. Let M be a manifold, F : S ( M ) --~ S(N) a map, and F ~ S (M). A local one-parameter group of diffeomorphisms (p : D C N x M --~ M is said to be complete with respect to F if there exists E > 0 such that (ps(F) is a well-defined process for any s ~ (-E, 6). We say that F is differentiable at F in the direction of a local one-parameter group of diffeomorphisms ~0 complete with respect to F, if for any sequence {Sn}n~N C R such that Sn ) 0, the family

n ---> o o

1 Xn = - - ( F (USn ( r ) ) - ( r ) )

Sn

converges uniformly on compacts in probability (ucp) to a process that we will denote by a[s=0 F (~0s (F)) and that is referred to as the directional derivative of F at F in the direction of ~Os.

REMARK 4.1. Note that global one-parameter groups of diffeomorphisms (for instance, flows of complete vector fields) are complete with respect to any semi- martingale.

Let F : N + × [2 ~ M be a M-valued continuous and adapted stochastic process and A C M a set. We will denote by rA = inf {t > 0 I Ft (co) ~ A} the first exit time of F with respect to A. We recall that ra is a stopping time if A is a Borel set. Additionally, let F be a semimartingale and K a compact set such that F0 C K. Then, any local one-parameter group of diffeomorphisms ~0 is complete with respect to the stopped process F ~K. Note that this conclusion could also hold for certain non-compact sets.

The proof of the following proposition can be found in Section 5.1.

PROPOSITION 4.1. Let M be a manifold, ot ~ f 2 ( M ) a one-form, and F : S (M) ~ ,.~ (]~) the map defined by F (F) := f (=, Then F is differentiable in all directions. Moreover, if F : 1~+ x £-2 ~ M is a continuous semimartingale, ~o is

Page 29: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 93

an arbitrary local one-parameter group of diffeomorphisms complete with respect to F, and Y ~ 3~(M) is the vector field associated to go, then

d s dss s=O

= ] (£rOt, (~F). (4.1)

The symbol £yct denotes the Lie derivative of ot in the direction given by Y.

COROLLARY 4.1. In the setup of Definition 4.1 let ot = co ~ (Y) ~ f2(M), with o9~ the inverse of the vector bundle isomorphism o9~ : T*M --+ T M induced by o9. Let F : JR+ x f2 ~ M be a continuous adapted semimartingale, go an arbitrary local one-parameter group of diffeomorphisms complete with respect to F, and Y E ~ ( M ) the associated vector field. Then, the action S is differentiable at F in the direction of go and the directional derivative is given by

=-f (4.2)

Proof: It is clear from Proposition 4.1 that

s

in ucp. The proof of that result can be easily adapted to show that in ucp

Thus, using (6.5) and ot = o9~ (Y) 6 f2 (M),

".=o f f s (go, ( r ) ) = (L~o, ~r ) - ((Lyh-') ( r ) , ~x)

_f (iydO + d (iyO), ~r> - _[ (ah (r) (r), ~x)

+ (i t0) ( r ) - ( i t0) ( r0) . []

COROLLARY 4.2 (Noether's Theorem). In the setup of Definition 4.1, let go: ]~ x M --+ M be a one parameter group of diffeomorphisms and Y E Y.(M) the associated vector field. I f the action S : S (M) ~ S (]~) is invariant by go, that is, S (gos (r)) = s (r), for any s c ]~, then the function irO is a conserved quantity of the stochastic Hamiltonian system associated to h : M --+ V*.

Page 30: Stochastic hamiltonian dynamical systems

94 J.-A. L./~ZARO-CAMI and J.-E ORTEGA

Proof: Let I "h be the Hamiltonian semimartingale associated to h with initial condition Fo. Since 9s leaves invariant the action we have that

d s=O S (~, ( r h ) ) = 0

and hence by (4.2) we have that

= - f (or, ~ r h) - f (dh (w # (oe))(Fh), 8 X ) + it0 (F h) - i t 0 (Fo). 0

As F h is the Hamiltonian semimartingale associated to h we have that

and hence ivO ( r h) = i t0 (Fo), as required. []

REMARK 4.2. The hypotheses of the previous corollary can be modified by requiring, instead of the invariance of the action by ¢Ps, the existence of a function F ~ C°° (M) such that

d = o s ( Os = F(r) - F(r0). (rh))

In that situation, the conserved quantity is iyO + F.

Before we state the Critical Action Principle for the stochastic Hamilton equations we need one more definition.

DEFINITION 4.3. Let M be a manifold and A a set. We will say that a local one-parameter group of diffeomorphisms ~ : l ) × M --~ M fixes A if 9s (Y) = Y for any y E A and any s E ]~ such that ( s , y ) E ~D. The corresponding vector field

d Y ~ X(M) given by Y ( m ) = ~ls=O~Os(m) satisfies YIA = 0 .

THEOREM 4.1 (First Critical Action Principle). Let (M, co = - d O ) be an exact symplectic manifold, X : JR+ x f2 --+ V a semimartingale taking values on the vector space V such that Xo = O, and h : M --+ V* a Hamiltonian function. Let mo ~ M be a point in M and F : ]R+ x f2 --+ M a continuous semimartingale such that Fo = mo. Let K be a compact set that contains the point mo. I f the semimartingale P satisfies the stochastic Hamilton equations (2.7) (with an initial condition F0 = mo) up to time rK then for any local one-parameter group o f diffeomorphisms 9 that fixes the set {mo} tO OK we have

l(~x<oo } S (~0, (FrK)) = 0 a.s. (4.3) TK

Proof: We start by emphasizing that when we write that I" satisfies the stochastic Hamiltonian equations (2.7) up to time rK we mean that

(f f + (dh ( J (r) , 8x = o.

Page 31: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 95

For the sake of simplicity in our notation we define the linear operator H a m : f2(M) --+ S(~) given by

Suppose now that the semimartingale F satisfies the stochastic Hamilton equations up to time rK. Let ~o be a local one-parameter group of diffeomorphisms that fixes {m0} U OK, and let Y 6 if(M) be the associated vector field. Then, taking ot = w b (Y), we have by Corollary 4.1,

since Y ( m o ) = 0 and hence i t0 ( F o ) = 0. Additionally, l{~K<oo}Fr K e 8K and YI~K = 0. Hence,

l{rK<OO}[j~ s=oS(~os(I"rK))lr K

Now, Proposition 5.1 and the hypothesis on guarantee that the previous expression equals

d (F~K))] l{rK <oo} [ ~S s=O S (~Os VK

,8X)+irO ( r ~ ) , (4.4)

since F is continuous,

(~F zK ) -4- f (dh (w # (~)) (F rK) ,~X)] . .a/;K

F satisfying Hamilton's equation

+ f (dh(J (a))(r'¢K),aX)]~K]~ K

= --l{~K<OO } [Ham(a)~K] = 0 a.s.,

as required. []

REMARK 4.3. The relation between the Critical Action Principle stated in The- orem 4.1 and the classical one for Hamiltonian mechanics is not straightforward since the categories in which both are formulated are very much different; more specifically, the differentiability hypothesis imposed on the solutions of the deter- ministic principle is not a reasonable assumption in the stochastic context and this has serious consequences. For example, unlike the situation encountered in classical mechanics, Theorem 4.1 does not admit a converse within the set of hypotheses in which it is formulated.

Page 32: Stochastic hamiltonian dynamical systems

96 J.-A. L,~ZARO-CAMI and J.-P. ORTEGA

In order to elaborate a little bit more on this question let (M, co = - d O ) be an exact symplectic manifold, take the Hamiltonian function h ~ C°°(M), and consider the stochastic Hamilton equations with trivial stochastic component X : JR+ x fa --+ 1R given by Xt(oo) = t. As we saw in Remark 2.4 the paths of the semimartingales that solve these stochastic Hamilton equations are the smooth curves that integrate the standard Hamilton equations. In this situation the action reads

S(r)= f (o,,r)- f h(rs)ds. If the path Ft(w) is differentiable then the integral ( f ( 0 , ~ F ) ) ( a ) ) reduces to the Riemann integral frt(o,~O and S(F)(w) coincides with the classical action. In particular, if F is a solution of the stochastic Hamilton equations then the paths lPt(o) ) are necessarily differentiable (see Remark 2.4), they satisfy the standard Hamilton equations, and hence make the action critical. The following elementary example shows that the converse is not necessarily true, that is, one may have semimartingales that satisfy (4.3) and that do not solve the Hamilton equations up to time rK.

We will consider a deterministic example. Let m0, ml ~ M be two points. Suppose that there exists an integral curve g : [to, tl] --+ M of the Hamiltonian vector field Xh defined on some time interval [to, tl] such that Y ( t o ) = mo and g (tl) = m l. Define the continuous and piecewise smooth curve a : [0, h] -+ M as follows: f

Jmo if t ~ [0, to]. Or (t) / y (t) if t ~ [to, t l] .

Let ~0 be a local one-parameter group of diffeomorphisms that fixes {mo, ml}. Then by (4.2)

[ d s=o S

I I' = - - O l - ~ - (Ol, Xh) (cr(t))dt + (O(cr(t)), Y(~r(t))) - (O(mo), Y(mo)}, I[0,t]

where Y (m) = ~ls=o~Os (m), for any m ~ M and ot = w b (Y). Using that cr satisfies the Hamilton equations on [to, tl] and oe ( m o ) = 0, it is easy to see that

tl

that is, ~ makes the action critical. However, it does not satisfy the Hamilton equations on the interval [0, h ] , because they do not hold on (0, to). This shows that the converse of the statement in Theorem 4.1 is not necessarily true. In the following subsection we will obtain such a converse by generalizing the set of variations allowed in the variational principle.

Page 33: Stochastic hamiltonian dynamical systems

S T O C H A S T I C H A M I L T O N I A N D Y N A M I C A L S Y S T E M S 97

4.2. Variations involving vector fields on the solution semimartingale

We start by spelling out the variations that we will use in order to obtain a converse to Theorem 4.1.

DEFINITION 4.4. Let M be a manifold and I ~ an M-valued semimartingale. Let so > 0; we say that the map E • (-So, So) × N+ x f2 -~ M is a pathwise variation of F whenever E ° = Ft for any t ~ N+ a.s.

We say that the pathwise variation E of F converges uniformly to I TM whenever the following properties are satisfied:

(i) For any f ~ C ~ (M), f (E s) -+ f (F) in ucp as s--~ 0. (ii) There exists a process Y • N+ x f2 --~ T M over F such that, for any f ~ C ~ (M),

the Stratonovich integral f Y [ f ] 8X exists for any continuous real semimartingale X (this is for instance guaranteed if Y is a semimartingale) and, additionally, the increments ( f (E s) - f ( F ) ) / s converge in ucp to Y [ f ] as s --~ 0. We will call such a Y the infinitesimal generator of E.

We will say that E (respectively, Y) is bounded when its image lies in a compact set of M (respectively, TM).

The next proposition shows that, roughly speaking, there exist bounded pathwise variations that converge uniformly to a given semimartingale with prescribed bounded infinitesimal generator.

PROPOSITION 4.2. Let I ~ be a continuous M-valued semimartingale F, K c M a compact set, and rK the first exit time of F from K. Let Y • •+ x f2 --~ T M be a bounded process over I ~K (that is, the image of Y lies in a compact subset of TM) such that f Y [ f ] 8X exists for any continuous real semimartingale X and for any f ~ C ~ (M). Then, there exists a bounded pathwise variation ~ that converges uniformly to F rK whose infinitesimal generator is Y.

Proof: Let {(Vk, ~k)}k~N be a countable open covering of M by coordinate patches such that any Vk is contained in a compact set. This covering is always available by the second countability of the manifold and Lindel6f's lemma. Let {Uk}k~N be an open subcovering such that, if U~ c Vi for some k, i ~ N, then Uk ~ Vi. Let {tin}meN be a sequence of stopping times (available by Lemma 3.5 in [12]) such that, a.s., r0 = 0, rm _< ~:m+l, supra ~:m : ~ , and that, on each of the sets [rm, "i'm+l] ["){rm+l > "t'm} the semimartingale F takes values in the open set U~(m), for some k (m) c N. Since K is compact, it can be covered by a finite number of these open sets, i.e. K c Uj~jUkj, where IJI < oo.

Let xkj. ------ (x~j . . . . . . xk~), n = dim (M), be a set of coordinate functions on Ukj(m)

, X 1 n 1 , n the corresponding adapted coordinates for and (Xkj vkj) -- ( kj . . . . . Xkj, Vkj . . . . Vkj)

T M on rrr~t(U~j(,n)). Since Y is bounded and covers F rK, and on [rm, rm+l] A {Zm+l > rm} the semimartingale F takes values in the open set Ukj(m), there exists

Page 34: Stochastic hamiltonian dynamical systems

98 J.-A. L~ZARO-CAM[ and J.-P. ORTEGA

a shy > 0 such that, on [rm, "Cm+l] A {Z'm+ 1 > "Cm} , the points

. X n (xlj (F) + svl., (Y) . . . . kj (F) + so t., (r))

lie in the image of some coordinate patch Vkj containing Ukj(m) for all s ~ (--Ski, ski). Let so = minj~j{skj }. Now, since the sets of the form

Im : = ['Cm, rm+l ) N {Z'm+ 1 > "Cm} C ]~+ X ~'2, m E N

form a disjoint partition of R+ x f2 we define E as the map that for any m ~ N satisfies

Slim "(-So, So) × [rm, r m + l ) n {~'m+l > "On}

(s, t, co),

Observe that by construction the image of E

> Vkj,

> qgk 1 (Xkj (r t (~)) -[- SVkj (Yt (co))).

is covered by a finite number of coordinated patches and therefore, by hypothesis, contained in a compact set. E is hence bounded. More specifically

{E t (09) I (s, t, w) ~ (-so, so) x ~ × f2} _c U V~j. (4.5) j~J

It is immediate to see that E is a pathwise variation which converges uniformly to F rr. Indeed, if f ~ C °~ (M) has compact support within one of the elements in the family {Ukj}jej, it can be easily checked that

f (~s) > f (F) and f (Es) - f (F) > Y [ f ] . (4.6) ucp S ucp

s-+0 s--~0

If, more generally, f 6 C °° (M) has no compact support contained in one of the {Ukj}jej, observe that, by (4.5), we only need to consider the restriction of f to

[,]j~s Vkj. Take now a partition of the unity {~bk}~N subordinated to the covering {Uk}k~N. Since {supp(~bk)}keN is a locally finite family and Uj~ j Vkj is contained in a compact set because, by hypothesis, so is each Vkj for any j 6 J, then among

all the {q~k}keN only a finite number of them have their supports in {Ukj ]j~j, say

{~)ki }iEl with III < OO. Thus,

Ill

flu~;vk~ = ~ ckki f, i=1

and since each dt)ki f is a function similar to those considered in (4.6) it is straightforward to see that those implications also hold for f . []

The following result generalizes Proposition 4.1 to pathwise variations of a semi- martingale. The proof can be found in Section 5.2

Page 35: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 99

PROPOSITION 4.3. Let F be an M-valued continuous semimartingale F, K c M a compact set, and rK the first exit time of I" from K. Let E be a bounded pathwise variation that converges uniformly to F ~x and Y • JR+ x f2 ---> T M the infinitesimal generator o f E that we will also assume to be bounded. Then, for any a E f2 (M),

f l im - (oe, 8Es) - (or, ~F ~x ) ucp S s-+0

= f (iydot, SF r/¢ ) + (or (FrK), Y) - (oe ( F r g ) , Y)t=0-

The next theorem shows that the generalization of the Critical Action Principle in Theorem 4.1 to pathwise variations fully characterizes the stochastic Hamilton equations.

THEOREM 4.2 (Second Critical Action Principle). Let (M, o ) = - d O ) be an exact symplectic manifold, X ' I R + x f 2 - + V a semimartingale that takes values in the vector space V, and h • M -+ V* a Hamiltonian function. Let mo be a point in M and F • N+ x f2 --+ M a continuous adapted semimartingale defined on [0, fir) such that F0 = too. Let K c_ M be a compact set that contains mo and rg the first exit time o f F from K. Suppose that rK < oc a.s. Then

(i) For any bounded pathwise variation E with bounded infinitesimal generator Y which converges uniformly to I "~K uniformly, the action has a directional derivative that equals

d ~=o S ( E s ) : = ,--,os l i m l [S (E s) - S (Frg)]

+ (0 (FrK), Y) - (0 (FrX), Y)t=o, (4.7)

where the symbol Y[h] (F rx) is consistent with the notation introduced in Definition 4.1

(ii) The semimartingale F satisfies the stochastic Hamiltonian equations (2.7) with initial condition Fo = mo up to time VK if and only i f for any bounded pathwise variation ]E • ( - so , so) × ]~+ × ~2 --+ M with bounded infinitesimal generator which converges uniformly to I "~K and such that Z~ = mo and E s -= I'~ x a.s.

rK

for any s ~ ( - so , So),

s = 0 v K

Proof: We first show that the limit (4.7) exists. Let Z be an arbitrary bounded pathwise variation converging to F uniformly and Y • R+ x g2 --> T M its infinitesimal

Page 36: Stochastic hamiltonian dynamical systems

100 J.-A. L,i~ZARO-CAMi and J.-R ORTEGA

generator, that we also assume to be bounded. We have

s

1 I f - (4.8) S

By Proposition 4.3, the first summand in the right-hand side of (4.8) converges ucp to

f (irdO, 3I "':K) + (0 (FCK), Y) -- (0 (FCK), Y)t=0,

as s - -+ 0. An argument similar to the one leading to Proposition 4.3 shows that

the second summand converges to Hence,

lim I [S (E ~) - S (F~K)] s-->O s

If we denote by ~/ := -iydO = iyw the one-form over F CK built using the vector field Y over F ¢K, the previous relation may be rewritten as

d (E , ) ] _f(u, ar K)f(dh(r¢K)(w#(U)),SX} [ ~ ,=0 s = + (0 (FCK), Y) -- (0 (F~K), Y)t=0- (4.9)

We are now going to prove the assertion in part (ii). Recall that the hypothesis that F satisfies the stochastic Hamilton equations up to time rK means that

(f f /) (/3, 8F) + ( (dh . w # (/3)) (F ) , 6X = 0, (4.10)

for any /3 ~ ~2 (M). We now show that this expression is also true if we replace 13 with any process 77 : R+ × ~2 --+ T*M over F such that the two Stratonovich integrals involved in (4.10) are well defined (for instance if 13 is a semimartingale). Indeed, invoking [12, 7.7] and Whitney's Embedding Theorem, there exists an integer p ~ N such that the manifold M can be seen as an embedded submanifold of ]Ep. In this embedded picture, there exists a family of functions { f l . . . . . fP} Q C ~ (~P) such that the one-form 7/ may be written as

P

= Z ZJdfJ ' j = l

where the Zj : N+ x f2 --> •, j ~ {1 . . . . . p}, are real processes. Moreover, using

Page 37: Stochastic hamiltonian dynamical systems

S T O C H A S T I C H A M I L T O N I A N D Y N A M I C A L S Y S T E M S 101

the properties of the Stratonovich integral (see [12, Proposition 7.4]),

P

(f( f = zj~ (dI~, ~r)+ ((dh. J (dI0) (r), 8x) ,

where the last equality follows from Proposition 5.1. Therefore, since dfJ is a deter- ministic one-form we can conclude that ( f ((dfJ, 6 F ) + f ((Oh. w # (d f J ) ) (F ) , 3X))) TM

= 0, which justifies why (4.10) also holds if we replace /~ ~ f~ (M) by an arbitrary integrable one-form 0 over F.

Suppose now that 1-' satisfies the stochastic Hamilton equations up to rK and let E • (-So, so) x •+ x f2 --+ M be a pathwise variation like in the statement of the theorem. We want to show that

[ s:0 Due to (4.9), we have that

= 0 a.s.

/ T K

+ (o (F~K), Y)~K - (o ( r~ , ( ) , Y),=o •

Since Z ~ = m o and Zs = F ~ K a.s. for any s c ( - S o , So), then Yo=Y~K = 0 a.s. VK

and both (0 (F~K), Y)~K and (0 (F~K), Y)t=o vanish. Moreover,

f (0, 3F~K)

((f f /) = (~, sr ~ ) + (dh (r ~ ) ( J (n)), s x

((f f = (~, 8r) + (dh (r) ( J (~)), 8x) rK

rK

(4.11)

which is zero because of (4.10). In the last equality we have used Proposition 5.1. Conversely, suppose that

0 a s 7: K

Page 38: Stochastic hamiltonian dynamical systems

102 J.-A. L . /~ZARO-CAMI and J.-P. O R T E G A

for arbitrary bounded pathwise variations tending to 1 -'~K uniformly, like in the statement. We want to show that (4.10) holds. Since our pathwise variations satisfy that Y0 = Y~K = 0 a.s., we obtain

[~s=oS(~')] =-(/(,,,<~r:'<>+f(<,,,(r':,<)(<o"(,,)),<-<)) =o, (4.12) r K l" K

where 77 is an arbitrary bounded one-form over F. Suppose now that 17 is a semi- martingale. Then l[0,t]~7 : R+ × if2 --+ T * M is again bounded and expressions

S(I[0,t]/7, SFrK) and i(dh(WX)(co#(l[o,tlO)),aX) are well defined by Proposition 5.3 because both F ~K and X are continuos semimartingales. We already saw in (4.11) that (4.12) is equivalent to

(f(.,,~>+f(.~(.)(<o"(.)),<) =o. ,.r K

Replacing ~ by l[0,tl/7 in (4.12) and using again Proposition 5.3, we write

o = (/(,~o,,,,,, <~,-')-+- f (<,,, (i-') (<o" (1~o,,,,,)), <~.))~.<

((f f )') = (~, 8r> + (ah (r) ( J (~)), 8x) ZK

= (/(.,,=> + / (.~ (.) (~"(.,), < ) I t A ~ K

=((f<o, sr>+f(ah(r)(<o"(o)),sx))'<)t • Since t is arbitrary this implies that the process ( f (7, 8F) + f (dh (F) (w # (#7)), 8X)) rK is identically zero, as required. []

5. Proofs and auxiliary results

5.1. Proof of Proposition 4.1

Before proving the proposition, we recall a technical lemma dealing with the convergence of sequences in a metric space.

LEMMA 5.1. Let (E, d) be a metric space. Let {Xn}n~N be a sequence o f functions Xn : (0, 8) -+ E where (0, 8) C ]~ is an open interval o f the real line. Suppose that Xn converges uniformly on (0, 8) to a function x. Additionally, suppose that f o r

* * Then any n, the limits l i m x n ( S ) = X n ~ E exist and so do l i m x n. s--+O n--+oo

lim x (s) = lim x n. s--+0 n--+o~

Page 39: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 103

Proof: Let e > 0 be an arbitrary real number. We have

( q ( ,) ( , q d x ( s ) , h m x n < d ( x ( s ) , x k ( s ) ) + d Xk(S) ,X k + d x k, h m x n . n---> ~ .," \ n----> ~ /

From the definition of limit and since x~ (s) converges uniformly to x on (0, a) , * ~ and d ( x ( s ) Xk(S)) < ~, we can choose ko such that d (x~,limn-+~Xn*) < ~

simultaneously for any k > k0. Additionally, since lims-+oXk ( s ) = x k we choose so small enough such that d (xk (s) , x~) < ~, for any s < So. Thus,

d (x (s) , n-+~lim x * ) < e

* [ ] for any s < so. Since e > 0 is arbitrary, we conclude that lim x (s) = lim x~. s-+0 n--+cx~

Proof of Proposition 4.1. First of all, the second equality in (4.1) is a straightforward consequence of [12, page 93]. Now, let {Uk}ker~ be a countable open covering of M by coordinate patches. By [12, Lemma 3.5] there exists a sequence {rm}meN of stopping times such that r0 = 0, rm < rm+l, sup m Zm = (x), a.s., and that, on each of the sets

['~m, "~m+l] ~ {'~m < Tin+l}

:= {(t, w) c R+ × ~ I ¢m+1 (~o) > rm (¢o) and t e [tm (oo), ¢m+l (W)]} (5.1)

the semimartingale F takes its values in one of the elements of the family {Uk]k~N. Second, the statement of the proposition is formulated in terms of Stratonovich

integrals. However, the proof will be carried out in the context of It6 integration since we will use several times the notion of uniform convergence on compacts in probability (ucp) which behaves well only with respect to this integral. Regarding this point we recall that by the very definition of the Stratonovich integral of a I-form oe along a semimartingale F we have that

f f (d2( 2=),clr) and f f (d2(£r=),dr). (5.2)

The proof of the proposition follows directly from Lemma 5.1 by applying it to the sequence of functions given by

Xn(S) := [d2(~o*o~)-d2(oe)],dF .

This sequence lies in the space D of c~gl~d processes endowed with the topology of the ucp convergence. We recall that this space is metric [32, page 57] and hence we are in the conditions of Lemma 5.1. In the following points we verify that the rest of the hypotheses of this result are satisfied.

(i) The sequence of functions { X n ( S ) ] n E N converges uniformly to

Page 40: Stochastic hamiltonian dynamical systems

104 J.-A. LfiuZARO-CAMI and J.-P. ORTEGA

The pointwise convergence is a consequence of part (i) in Proposition 5.2. Moreover, in the proof of that result we saw that if d : D x D ~ •+ is a distance function associated to the ucp convergence, then for any t e ~ + and any s ~ (0,~), d(xn(s), x(s)) < P({rn < t}). Since the tight-hand side of this inequality does not depend on s and P({rn < t}) ~ 0 as n ~ oo, the uniform convergence follows,

(ii)

(f , ) n , lim xn(s) = (d2 ( £ r a ) , d F =: x n. ucp s--~0

By the construction of the coveting {Uk}kcN and of the stopping times {rm}mcr~, there exists a k(m) ~ N such that the semimartingale F takes its values in U~(m) when evaluated in the stochastic interval (rn, rn+l] C [r~, r~+l] A {Zn < r~+l}. Now, since d2 is a linear operator and

1 ((~o*ot) - -or ) (m) s-.0 £rot(m), S

for any m E M, we have that

1 (d2 (~ ;~) - d2~) (m) s-,o d2 ( L , ~ ) ( m ) . S

Moreover, a straightforward application of Taylor's Theorem shows that

1 (d2 (~OsOt) d2ot)Iuk(,n, ~--,0 - * -- ) d2 (£rot) IUk(m) S

uniformly, using a Euclidean norm in r*Uk(m) (we recall that Uk(m) is a coordinate patch). This fact immediately implies that

l('Cn,'rn+l]~ (d2 (~0s Ol ) d2od)(I TM) . _ s ~ O l(rn,r~+l]d2 (£yOt) (F)

in ucp. As by construction the It6 integral behaves well when we apply it to a ucp convergent sequence of processes we have that

[ lim _ ( - t -n , ~ n q _ l ] - - = l(rn,rn+l] ucp j S s---->0

Consequently,

lim [dz (~O:Ot) - dz (~) ] , dF ucp

s-+0

n l[(f<l * . c . s - ,

( f ( ~ >)rm] - [e2 (~:~1 - e 2 ( ~ ) ] , d r

Page 41: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 105

n, ( ~ ) = lira ~--~. f l(~m,~m+l ] (dz(~O*oO-dzol),dF

ucp ~ j s--+0 m=0 - -

~--~ln-I ( f t rn = ~.,__ojl(~,,,~m+l](d2(£rot),dF)= (dz(£yoO,dF}

where in the second equality we have used Proposition 5.1, and the third one follows from (5.3).

(iii) / -

* ] (d2 (•yO/) dF) . lim x n = n---~ oo J

It is a straightforward consequence of Proposition 5.2. Eq. (4.1) follows from Lemma 5.1 applied to the sequences {Xn}n~N and {X*}n~r~,

and using the statements in (i), (ii), and (iii). []

5.2. Proof of Proposition 4.3

We will start the proof by introducing three preparatory results.

LEMMA 5.2. Let {Xn}n~N and {Yn}n~N be two sequences of real-valued processes converging in ucp to a couple of processes X and Y, respectively. Suppose that, for any t E R+, the random variables SUPn~/~suP0<s<, [(Xn)sl and suP0<s<t IYsl are bounded (their images lie in a compact set of ]~). Then, the sequence Xn Yn converges in ucp to X Y as n--+ oo.

Proof: We need to prove that for any e > 0 and any t 6 R+,

P ( [ s u p [ ( X n Y n ) s - - ( X Y ) s [ < e } ) \ [0<s<t n--+ oo ) 1.

First of all, note that

sup [(XnYn)s - (XY)s[ < sup [Xn[ [Yn - Y[ + sup IF] [Xn - X[ . O<s<t O<s<t O<s<t

Hence, we have

sup ~ n ~ ~ ~ / ~ /sup Xn' ~n ~ + sup ~ ' ~ ~ ~ J 0<s<t [0<s<t 0<s<t

D sup I X . t l Y ~ - Y I < - - I , O < s < t - -

Denote

An := I sup lXnI IYn - YI < 2 ] ~n:----Isup'~''Xn

Page 42: Stochastic hamiltonian dynamical systems

106 J.-A. L~ZARO-CAMI and J.-E ORTEGA

and let c be a constant such that SUpneNSUPO<s<t J(Xn)s[ < c and suP0<s<t IYsl < c, available by the boundedness hypothesis. Then,

(/ 1 > P (An) > P sup IYn -- YI < > 1, - - k,[ 0<s<t - - ~C n--+ cx~

1 > P ( B n ) > P sup I X . - X J < ) 1. k, t 0 < s < t n--+~

Thus, P (An) ~ 1 and P (Bn) --+ 1 as n ~ c~. But as

P (an N Bn) = P (an) + P (nn) -- P (an U Bn),

we conclude that P (An n Bn) > 1.

n--+ ~

Since An n Bn ~ {suP0<s< t I(XnYn)~ -- (XY)s l <- e}, we obtain

LEMMA 5.3. Let {Xn}neN be a sequence of real processes converging in ucp to a process X. Let r be a stopping time such that v < ~ a.s. Then, the sequence of random variables {(Xn)~}neN converges in probability to (X)~.

Proof: First of all we show that since r < cx~ a.s., then P ({r > t}) converges to zero as t ~ o0. By contradiction, suppose that this is not the case. Then, denoting A, := {r > n}, we have that A.+I ~ An, so P (An) forms a nonincreasing sequence of real numbers in the interval [0, 1]. Since this sequence is bounded below, it must have a limit. This limit corresponds to the probability of the event {r = e~}. If it is strictly positive then there is a contradiction with the fact that r < oc a.s. So P ({r > t}) tends to zero as t --+ ~ .

We now prove the statement of the lemma. Take some E > 0 and an auxiliary t E I~+. The set {J(Xn)v -- XrJ > e} can be decomposed as the disjoint union of the following two events,

( { J ( S n ) r - - XvJ > E} n {T < t}) U ( { J ( X n ) r - - x r j > e} n {V > t}).

The first one is contained in the set {sup0<s<t J(Xn)s - XsJ > e} whose probability, by hypothesis, converges to zero as n ~ cx~. Regarding the second one,

P ({[(Xn). c - - X r J > e} n {/7 > t } ) _<< P ({v > t}).

But P ({r > t}) can be made arbitrarily small by taking the auxiliary t big enough. In conclusion, for any e > 0,

P ( { l ( X n ) r - X r l > e} ) ) 0 n----~ cx~

in probability. []

Page 43: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 107

LEMMA 5.4. Let {Xn}new be a sequence of real processes converging in ucp to a real process X and r a stopping time. Then, the stopped sequence {X~]ne N converges in ucp to X ~ as well.

Proof: We just need to observe that, for any t e R+,

sup [(Xn~)s- X~[ = sup [(Xn)~A~- X~/,~[ _< sup [ ( X , ) s - Xs[ 0<s<t 0<s<t 0<s<t

and, consequently, for any e > 0,

O<s<t /O<s<t

Hence, since by hypothesis P ({suP0<~<t I(Xn)~ - Xs I < e}) converges to 1 as n --* oo, then so does P ({suP0<s<t [ (Xnr ) s - Xsr[ < e}). []

We now proceed with the proof of the proposition. We will start by using Whitney's Embedding Theorem and the remarks in [12, §7.7] to visualize M as an embedded submanifold of R p, for some p e N, and to write down our Stratonovich integrals as real Stratonovich integrals. Indeed, there exists a family of functions {h 1 . . . . . h p ] C C °° (R p) such that, in the embedded picture, the one-form ot can be written as ot = Y~.jP--1 ZjdhJ , where Zj e C ~ (N p) for j e {1 . . . . . p}. Therefore, using the properties of the Stratonovich integral (see [12, Proposition 7.4]),

l[f z,(r.'),(h'(r.'))- f z,(r"),(h'(r",)]. (5.4) Adding and subtracting the term y~;=lfZj (Es)r~h j (F rg) in the right-hand side of (5.4), we have

1=1 S

"[fs ] + ~ (zj (z s) - zj (r~,,))ahJ (r ~K) . (5.5)

We are going to study the terms (1) and (2) separately. We start by considering

~n = { 0 = T0 n ~r; ~...<_r;. <~], a sequence of random partitions that tends to the identity (in the sense of [32, page 64]).

Page 44: Stochastic hamiltonian dynamical systems

108 J.-A. LAZARO-CAMI and J.-P. ORTEGA

The expression (1): We want to study the ucp convergence of

as s ~ 0. Define

1C~-S~I 1 r n : _ - _ , + 1 _ Xn (S) "~ ( Z j (~S)Tinq_l "Ji- Z j (~S)Tin) ( hj h j

s \ i = 0

nll )) i=0

kn-1 1

i=0

hJ (IY)r~+l - hJ (F ~K) ,+1 hJ (:~s)rp _ hJ (F~K)rP

S S

which corresponds to the discretization of the Stratonovich integrals

using the random partitions of an. Indeed, by [32, Corollary 1, page 291],

l[f f ] , - Zj(XS) 3h j ( X ' ) - Zj(ES) Sh j (F rK) . Xn (S) ucp S

n ---~ oo

On the other hand, as T/n < oe a.s. for any i 6 {1 . . . . . kn}, part (i) in Definition 4.4 and Lemma 5.3 imply that

ucp

s---~0

The convergence above is in probability but, for convenience, we prefer to regard these random variables as trivial processes. Furthermore, part (ii) in Definition 4.4 and Lemma 5.4

T n n . n

), Y , S S ucp

s---~0

• T n hJ (Es) r/n -hJ ( F r K ) T/n {hi ( E ~ ) - h j ( F r K ) ' ~ i . T. n

= > r [ h J ] ' S t S ) ucp

s ----~0

by Definition (4.4) item 3 and Lemma (5.4). Now, since by hypothesis E and Y

Page 45: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 109

are bounded then so are 1

(Zj (•S)Ttn+l "31- Zj (]~S)Tin) and Y[h j] = iydh j

(dh j is only evaluated on the compact K since Y is a vector field over F ~K) and hence by Lemma 5.2

kn-1 1 T n Xn (S) ucp '= 2 Zj (FrK)Tff+ 1 ~- Zj (FrK)T/n Y Y [h J] Tin *

s---y o In addition, by [32, Corollary 1, page 291],

xZ--~uc, f ZJ(WKI3(Y[hJ])" n-+ oo

Hence, by Lemma 5.1 we conclude that

l Zj (E s) 8h j ( E ' ) - f Zj (Es)6h j (F~/C)] u-~cp f Zj (F~K)' (Y [hJ]). (5.6)

s---+0 The expression (2): We want to study now the ucp convergence of

s (zj(r~')- Zj(l~))~h j ( r ~K)

as s ~ 0. As in the previous paragraphs, we define

Yn (S) := - ~ (Zj (ES)T/~_l-~- Zj (ES)Ti n) (h j (FrK)Tt~ 1 - -h j (l"rK) T/n) s \ i=o

k n - l l ( z j (FrK)T/% 1 -]- Zj (FrK)Tin) (h j (FrK) T/~-I - -h j (I"rK)T/n)) i=0

kn-1 l ( Zj (]~S)Tin+l -- Zj (I"rK)Tin+l Zj (~]s)Tn __ Zj (FrK)Tin ) - - ~ -

- E 5 s s i=0

as a discretization of the Stratonovich integral

using On- Then, by construction,

I f > - (Zj(E s) - zj(rrx))gh j (rrK). (s) ~p s Yn

n.-.+ oo

Page 46: Stochastic hamiltonian dynamical systems

110 J.-A. LJCZARO-CAMI and J.-P. ORTEGA

On the other hand, invoking Definition 4.4 and Lemma 5.3 we have that

Zj (•S)T.n+I -- Z j (FrK)Tin+I

ucp Ti~- 1 S \ - S / Ti%l s---~0

_ Y[Zj] I ) S -- S /Tin ucp Tin.

s---~0

We now use again the boundedness of E and Y to guarantee the boundedness of Y[Zj]Tin+l = (irdZj)Tin+, and Y[Zj]Tin = (iydZj)Tin (notice that dZj is only

evaluated on the compact set K because Y is a vector field over F ~r c K). Therefore, by Lemma 5.2,

kn-ll( )( n ) * X n (S) ucp ) E "~ Y [ZJ]Tin+l "4- Y [Zj]Ttn h j ( F r K ) Ti+l - - h j (FrK)TI ~ : = X n.

i :0 ' ~ s--+0

Additionally, the sequence {x~,}n~N obviously converge in ucp to f Y [Zj] ~ (hi (WK)) as n ~ cx~. Hence, by Lemma 5.1, we conclude that

s--+0

To sum up, if we substitute (5.6) and (5.7) in (5.5) we obtain

If / ] / 1 (a,~Z~)_ (a,~WK) --~ Zj(W~)5(y[M])+ y[zj]~(hJ(r~K)). S ucp

s----~0

Using the integration by parts,

f zj (Y = Zj [h j] - (Z j [hJ])t= 0 (r~) 6 N]) (r ~K) Y (r~) Y

- f r[hJ]~(zj(r~))

= <a (rrK), Y) - (or (r~K), Y)t=O -- [ Y [ hj] (zj (r~)) J

and, consequently,

s--~O

r [zj] ~ (hi (r~) ) - f r [hi] ~ (zj (r~))

+ (a (F[K), Y) -- (ct (FrK), Y),=0 •

Page 47: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 111

In order to conclude the proof, we claim that

Indeed,

P p P

dot :d(~-]~ Zjdh j ) : ~_dZ j mdh j, and irdot= ~ (Y[Z j ]dh j - Y[hJ]dZj) j : l j = l j : l

which proofs (5.8), as required. []

5.3. Auxiliary results about integrals and stopping times

In the following paragraphs we collect three results that are used in the paper in relation with the interplay between stopping times and integration limits.

PROPOSITION 5.1. Let X be a continuous semimartingale defined on [0, ~x) and F a continuous semimartingale. Let r, ~ be two stopping times such that r < ~ < ~x. Then,

( X - r ) r = ( l l o , ~ I X ) - F = ( X - P r) and (X . P) ¢ - (X . F) ~ = ( I ( r , ~ 1 X ) . F .

An equivalent result holds when dealing with the Stratonovich integral, namely

Proof: By [32, Theorem 12, page 60] we have ll0,~lX. I" = (X • F) ~ = (X • I'~). Therefore,

( X . F) ~ - ( X . F) ~ = llo,~lX. F - l l0 ,~lX-F = [(lt0,~l - lt0,~l) X ] . F = (I(~,~IX) • F.

As to the Stratonovich integral, since X and F are semimartingales, we can write [32, Theorem 23, page 68]

(f )~ 1 l[x, rq=fx,r~" x~r = ( x . r ) ~+~[X,r] ~ = ( x . r ~)+

Finally, observe that for any process, (Xr)~ = X r. On the other hand, taking into account that l [ 0 , r ] X = l[0,~]X r and [F, X] = [X, F], we have

l X S F = lt0,~lX • F + ~ [X, F] ~ = lt0,~jX ~ • F + [X, F] ~

1 =(x~. r ) ~+ [x~,r] = x ~ . r + ~ [ x ~ , r ]

:(fx ,O • []

Page 48: Stochastic hamiltonian dynamical systems

112 J.-A. L~ZARO-CAM[ and J.-P. ORTEGA

PROPOSITION 5.2. Let X : N + x f2 ~ N be a real-valued process. Let {Tn}ne N be a sequence of stopping times such that a.s. T0 = 0, Tn < Tn+l, for all n • N, and SUPneN Tn --= OO. Then,

X = lim X r". ucp

n----> oo

In particular, if I" : IR+ x g2 -+ M is a continuous M-valued semimartingale and 0 • f22 (M) then,

f (f )" (r/, dF) = lim (17, dF) = lim l(~n:n+~] (r/, d F ) . ucp ucp

k--+ oo k---> oo =

Proof: Let e > 0 and t • N+. Then for any s • [0, t] one has

{IX ~" - Xl . > e} _ {T n < s} C {T n < t } .

Hence for any t • N+,

P ({IX r~ - XIs > e}) < P ({T n < t}).

The result follows because P ({Tn < t}) ---> 0 as n ---> oo since Tn ---> oo a.s., and hence in probability.

Let now F be an M-valued continuous semimartingale and 77 • f22 (M). Notice first that ( f (r/ ,dF))r° = 0 because T0 = 0. Consequently, by Proposition 5.1 we can write

(f d'))rk k-a (f ),+I (n, = ~ (n, d r ) - n=0

and the result follows.

k-1

= ~ f l ( rn,rn+l ] (r 1, d r ) n=0

[]

PROPOSITION 5.3. Let X and Y be two real semimartingales. Suppose that X is continuous and Xo = O. Then, for any t e ]R+, the Stratonovich integral f (lt0,t]Y) 8X is well defined and equal to ( f YSX) t.

Proof: If f (I[o,tlY) SX was well defined, it should be equal to f (l[o,,]Y)dX + ![l[o,t]Y,X] " 2 Since f (lto,,~Y)dX is well defined, the only thing that we need to

check is that [lt0,tlY, X] exists. On the other hand, recall that ([32, Theorem 12 page 60 and Theorem 23 page 68])

(f )t f 1 f 1[El, X] YSX = (l[0,t]r) d X + -~ [r, x] t = (l[0,tlY) d X + -~ .

Hence, what we are actually going to proceed by showing that [l[0,tjY, X] is equal to [yt, X].

Let crn = {0 = To ~ --< TI n - - < ' ' ' < - - Tnkn < OO} be a sequence of random partitions tending to the identity (in the sense of [32, page 64]). Given two real processes

Page 49: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 113

X and Y, their quadratic variation, if it exists, can be defined as the limit in ucp when n ~ cx~ of the following sums

Let now

kn - 1 i~O ( Tn ) ( Tn ) . [Y, X] = lim y i+i -- yr/" X i+1 - Xr/n

ucp .= n---~ oo

kn - 1

i=0 kn - 1

an := E ( ( I[0't]Y)T/%I --(l[O't]Yt)Tin) ( XT~%I -- XTin) " i=0

It is clear that the sequence {H~}n~r~ converges uniformly on compacts in probability to [ y t X] . We are going to prove that there exists such a convergence for the sequence of processes {Gn}nsN by showing that the elements (Gn)s coincide with (/-/n)s, for any s ~ 1R+, up to a set whose probability tends to zero as n ~ oc. We will consider two cases:

1. The case s < t. Given a specific i e {0 . . . . . k n - 1 } , and recalling that by n ( ( T'n )

construction T/n ~ T/+ 1 a.s., it is clear that y t ) t + l (yt)T~ n = YTin+lAS_ yrinAs s

is different from 0 only for those o2 ~ f2 in { Tf < s } in which case it takes the value

rrn+F,~- rrn . (5.9)

(( On the other hand, l[0,t]Y) i+1 __ ( l [ o , t l Y t ) , is again different from 0 only in s

the set {T/n < s} and there it is equal to (5.9). Therefore, (Gn)s = (Hn)s whenever s < t .

(( T.n T n ) 2. The case s > t. In this case, y t ) ,+1 _ (y t ) i = Yt^ri~+l - Ytr, r n which is

s different from 0 only in the set { Ti n < t}, where it takes the value

However, in this case

( ( l [0 , t ] r ) Tt%l - ( l [0 , t ]y t )T/" )s = llTp+l<_t}rtATin+l- llT/n_<tlYtATn,

which is equal to (5.10) in the set {T/~_ 1 < t} T/n _< T/~_I), but differs from (5.10) in

(5.10)

(which contains { T/n < t } since

n n n A i ( t ) : = {T/ < t < T/+ 1 }

Page 50: Stochastic hamiltonian dynamical systems

114 J.-A. L,A~ZARO-CAMI and J.-P. ORTEGA

where it takes the value - Y r/,. For any other o9 e f2 not in these sets,

((l[o,tlY) ~ q - (l[o,tlYt)Tin)s (r.O) = O.

Therefore, whenever s > t, (Gn)s and (Hn)s are different only for the w e A n (t). Observe that, since t is fixed, only one of the sets {A n (t)}ie{O,...,kn_l} is nonempty and, on it,

(Hn)s - (Gn)s = Yt (Xt - XTin) .

To sum up, the analysis that we just carried out shows that for any u e R+

s u p [(nn)s - ( a n ) , l -- 1an( t ) [Yt l (Xt - XTin ) 0<s<u

for some i e {0 . . . . . k , - 1}. If X is continuous, this expression tells us that supo<,<u[(H,)s-(GnL] ~ 0 a.s. as n --+ oo which, in turn, implies that

sup0<s<u [ ( n n ) s - (Gn)s[ converges to 0 in probability as well. That is, for any e > O ,

P([\lO<s<uSUp I (Hn)~-(Gn)s l>e}) - - -~O, as n--+ c~,

which is the same as saying that H n - Gn converges to 0 in ucp. Thus, since Gn = nn - (nn - Gn) and the limit in ucp as n --+ oo exists for both sequences {Hn}~N and { H n - Gn}neN, SO does the limit of {Gn}neN which, by definition, is the quadratic variation [lt0,/lY, X]. Moreover, as (H~ - Gn) ~ 0 in ucp as n --+ c~,

[yt, X] = lim Hn = lim G n : [l[0,t]V , X] , ucp ucp

n--+oo n--+oo

which concludes the proof. []

6. Appendices

6.1. Preliminaries on semimartingales and integration

In the following paragraphs we state a few standard definitions and results on manifold-valued semimartingales and integration. Semimartingales are the natural setup for stochastic differential equations and, in particular, for the equations that we handle in this paper. For proofs and additional details the reader is encouraged to check, for instance, [9, 11, 12, 17, 20, 32], and the references therein.

Semimartingales. The first element in our setup for stochastic processes is a prob- ability space (f2, ~ , P) together with a filtration {~'t I t _> 0} of ~ such that ~0 contains all the negligible events (complete filtration) and the map t l > bvt is fight-continuous, that is, ~ t = N~>0 ~t+~.

Page 51: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS | 15

A real-valued martingale P : R+ = [0, cx~) × f2 --+ ~ is a stochastic process such that for every pair t, s 6 ~+ such that s < t, we have:

(i) F is Yt-adapted, that is, ]?t is Yt-measurable. (ii) I's = E[Ft l Ys].

(iii) I't is integrable: E[IFtl] <-q-c~.

For any p 6 [1, c~), F is called an LP-martingale whenever F is a martingale and Ft E LP(g2), for each t. I f supteR+E[lI'tl p] < cx~, we say that F is LP-bounded. The process I" is locally bounded if for any time t > 0, sup{IPs(co)l Is < t} < e~, almost surely. Every continuous process is locally bounded. Recall that a process is said to be continuous when its paths are continuous. Most processes considered in this paper will be of this kind. Given two continuous processes X and Y we will write X = Y when they are a modification of each other or when they are indistinguishable since these two concepts coincide for continuous processes.

A random variable r : f2 --+ [0, +c~] is called a stopping time with respect to the filtration {Yt I t > 0} if for every t > 0 the set {w I r (w) < t} belongs to Yt. Given a stopping time r, we define

Y~ = { A 6 5 t" I A A { r < t } E.)Ut for any t ~ + } .

Given an adapted process I', it can be shown that I-' r is Y~-measurable. Furthermore, the stopped process 1 ' r is defined as

F t := FtAr := Ftl{t_<r} + Frl{t>r}.

A continuous local martingale is a continuous adapted process F such that for any n 6 N, F~"ll~,>0/ is a martingale, where rn is the stopping time rn := inf{t > 0 I Ir'tl = n}.

We say that the stochastic process F : IR+ x f2 --* IR has finite variation whenever it is adapted and has bounded variation on compact subintervals of N+. This means that for each fixed o) 6 f2, the path t~ > Pt (w) has bounded variation on compact subintervals of N+, that is, the supremum sup {Y~f-1 [Fti ( w ) - I'ti_l (o))l } over all the partitions 0 = to < tl < . . . < tp = t of the interval [0, t] is finite.

A continuous semimartingale is the sum of a continuous local martingale and a process with finite variation. It can be proved that a given semimartingale has a unique decomposition of the form P = I'0 + V + A, with P0 the initial value of F, V a finite variation process, and A a local continuous semimartingale. Both V and A are null at zero.

The It6 integral with respect to a cont inuous semimartingale . Let F : N+ × f2 --, R be a continuous local martingale. It can he shown that there exists a unique increasing process with finite variation [P, F]t such that I "2 - - [ I ' , F]t is a local continuous martingale. We will refer to [I', I']t as the quadratic variation of F. Given F = I'0 + V + A, F' = F~ + V' + A', two continuous local martingales, we

Page 52: Stochastic hamiltonian dynamical systems

116 J.-A. L,fZARO-CAMI and J.-E ORTEGA

define their joint quadratic variation or quadratic covariation as

1 [V, r ' ] t = ~ ([A + A', A + A']t - [A, A]t - [A', A ' ] t ) .

Let {Xn}nEl~ be a sequence of processes. We will say that {Xn}nE N converges uniformly on compacts in probability (abbreviated ucp) to a process X if for any s > 0 and any t e R + ,

/ ,0 \ tO<s< t

as n--->oo. Following [32], we denote by L the space of processes X : ~ + x £-2 ---> ~ whose

paths are left-continuous and have right limits. These are usually called citgltM processes, which are initials in French for left-continuous with right limits. We say that a process X ~ L is elementary whenever it can be expressed as

p-1

X : X01{0 } + E Xil(ri'ri+ 1]' i=1

where 0 < rl < . . . < rp-1 < rp are stopping times, and X0 and Xi are ,To and ~ri-measurable random variables, respectively such that IX01 < oo and I Xil < oo a.s. for all i ~ {1 . . . . . p - 1}. l(ri,ri+l] is the characteristic function of the set

('ci, ~i+1] = {(t, o9) E R+ x f2 I t E (ri (w), Z'i+ 1 (09)]}

and 1{0} of { ( t , w ) ~ R+ x [2 I t----0}. It can be shown (see [32, Theorem 10, page 57]) that the set of elementary processes is dense in 11, in the ucp topology.

Let F be a semimartingale such that F0 = 0 and X is elementary. We define Itf's stochastic integral of X with respect to F as given by

p - I P

X . F := I XdF := E X i ( I ~ri+l - y, ri). (6.1) d i=1

In the sequel we will exchangeably use the symbols X . F and f XdF to denote the It6 stochastic integral. It is a deep result that, if F is a semimartingale, the It6 stochastic integral is a continuous map from L into the space of processes whose paths are right-continuous and have left limits (cdtdl?tg), usually denoted by D, equipped also with the ucp topology. Therefore we can extend the It6 integral to the whole L. In particular, we can integrate any continuous adapted processes with respect to any semimartingale.

Given any stopping time r we define

fo r XdF (X F)r. o

It can be shown that (l[0,r]X)- F = ( X - F ) r = X . F r. If there exists a stopping time (r such that the semimartingale F is defined only on the stochastic intervals

Page 53: Stochastic hamiltonian dynamical systems

STOCHASTIC HAMILTONIAN DYNAMICAL SYSTEMS 117

[0, ( r ) , then we may define the It6 integral of X with respect to F on any interval [0, r ] such that r < ( r by means of X . F ~.

The Stratonovich integral and stochastic calculus. Given two continuous semimartingales $\Gamma$ and $X$ we define the Stratonovich integral of $X$ along $\Gamma$ as
\[
\int_0^t X \, \delta\Gamma = \int_0^t X \, d\Gamma + \tfrac{1}{2}[X, \Gamma]_t .
\]
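For instance (a standard computation added here for orientation), take $X = \Gamma = W$ a standard Brownian motion; since $[W, W]_t = t$ and $\int_0^t W \, dW = \tfrac{1}{2}(W_t^2 - t)$, one gets
\[
\int_0^t W \, \delta W = \int_0^t W \, dW + \tfrac{1}{2}[W, W]_t = \tfrac{1}{2}\left(W_t^2 - t\right) + \tfrac{1}{2} t = \tfrac{1}{2} W_t^2 ,
\]
so the Stratonovich integral obeys the classical chain rule, which is reflected in the change of variables formula below.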

Let $X^1, \ldots, X^p$ be $p$ continuous semimartingales and $f \in C^2(\mathbb{R}^p)$. The celebrated Itô formula states that
\[
f(X_t^1, \ldots, X_t^p) = f(X_0^1, \ldots, X_0^p) + \sum_{i=1}^{p} \int_0^t \frac{\partial f}{\partial x^i}(X_s^1, \ldots, X_s^p) \, dX_s^i + \frac{1}{2} \sum_{i,j=1}^{p} \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s^1, \ldots, X_s^p) \, d[X^i, X^j]_s .
\]
The analogue of this equality for the Stratonovich integral is
\[
f(X_t^1, \ldots, X_t^p) = f(X_0^1, \ldots, X_0^p) + \sum_{i=1}^{p} \int_0^t \frac{\partial f}{\partial x^i}(X_s^1, \ldots, X_s^p) \, \delta X_s^i .
\]
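As a sanity check that is not part of the original exposition, the Itô formula can be verified numerically along a single Brownian path: with $p = 1$, $X = W$ and $f(x) = e^x$, the identity reads $e^{W_t} = 1 + \int_0^t e^{W_s} \, dW_s + \tfrac{1}{2}\int_0^t e^{W_s} \, ds$. A rough Python sketch, with illustrative discretization parameters:

```python
# Pathwise numerical check of Ito's formula for f(x) = exp(x) and X = W.
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 200_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

lhs = np.exp(W[-1]) - 1.0
ito_term = np.sum(np.exp(W[:-1]) * dW)           # int exp(W) dW, left endpoints
correction = 0.5 * np.sum(np.exp(W[:-1])) * dt   # (1/2) int exp(W) ds
print(lhs, ito_term + correction)                # approximately equal
```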

An important particular case of these relations are the integration by parts formulas
\[
\int_0^t X \, d\Gamma = (X\Gamma)_t - (X\Gamma)_0 - \int_0^t \Gamma \, dX - [X, \Gamma]_t ,
\]
\[
\int_0^t X \, \delta\Gamma = (X\Gamma)_t - (X\Gamma)_0 - \int_0^t \Gamma \, \delta X .
\]

Stochastic differential equations. Let $\Gamma = (\Gamma^1, \ldots, \Gamma^p)$ be $p$ semimartingales with $\Gamma_0 = 0$ and let $f_j : \mathbb{R}^q \times \mathbb{R}^p \to \mathbb{R}^q$, $j = 1, \ldots, p$, be smooth functions. A solution of the Itô stochastic differential equation
\[
dX = \sum_{j=1}^{p} f_j(X, \Gamma) \, d\Gamma^j \tag{6.2}
\]
with initial condition the random vector $X_0 = (X_0^1, \ldots, X_0^q)$ is a stochastic process $X_t = (X_t^1, \ldots, X_t^q)$ such that $X_t^i - X_0^i = \sum_{j=1}^{p} \int_0^t f_j^i(X, \Gamma) \, d\Gamma^j$. It can be shown [32, page 310] that for any $x \in \mathbb{R}^q$ there exist a stopping time $\zeta : \mathbb{R}^q \times \Omega \to \mathbb{R}_+$ and a continuous solution $X(t, \omega, x)$ of (6.2) with initial condition $x$, defined on the time interval $[0, \zeta(x, \omega))$. Additionally, $\limsup_{t \to \zeta(x, \omega)} \|X_t(\omega)\| = \infty$ a.s. on $\{\zeta < \infty\}$, and $X$ is smooth in $x$ on the open set $\{x \mid \zeta(x, \omega) > t\}$. Finally, the solution $X$ is a semimartingale.
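Although the paper does not discuss numerics, a solution of an equation of type (6.2) can be approximated by replacing the integrals with their left-endpoint sums (the Euler, or Euler-Maruyama, scheme). The sketch below, entirely our own choice of example, does this for the scalar equation $dX = X \, d\Gamma$ driven by a Brownian motion $\Gamma = W$, whose exact solution is the stochastic exponential $X_t = X_0 \exp(W_t - t/2)$; all numerical parameters are illustrative.

```python
# Euler (Euler-Maruyama) scheme for dX = X dGamma with Gamma = W a Brownian
# motion; the exact solution is X_t = X_0 * exp(W_t - t/2).
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 100_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)

X, X0 = 1.0, 1.0
for incr in dW:
    X += X * incr                     # left-endpoint evaluation of the coefficient

W_t = dW.sum()
print(X, X0 * np.exp(W_t - 0.5 * t))  # Euler value vs. exact solution
```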



6.2. Second-order vectors and forms

In the paragraphs that follow we review the basic tools of second-order geometry needed to define the stochastic integral of a form along a manifold-valued semimartingale. The reader interested in the proofs of the statements quoted in this section is encouraged to consult [12] and the references therein.

Let $M$ be a finite-dimensional, second-countable, locally compact Hausdorff (and hence paracompact) manifold. Given $m \in M$, a tangent vector at $m$ of order two with no constant term is a differential operator $L : C^\infty(M) \to \mathbb{R}$ that satisfies
\[
L[f^3](m) = 3 f(m) L[f^2](m) - 3 f^2(m) L[f](m).
\]
The vector space of tangent vectors of order two at $m$ is denoted by $\tau_m M$. The manifold $\tau M := \bigcup_{m \in M} \tau_m M$ is referred to as the second-order tangent bundle of $M$. Notice that the (first-order) tangent bundle $TM$ of $M$ is contained in $\tau M$. A vector field of order two is a smooth section of the bundle $\tau M \to M$. We denote the set of vector fields of order two by $\mathfrak{X}^2(M)$. If $Y, Z \in \mathfrak{X}(M)$ then the product $ZY \in \mathfrak{X}^2(M)$. Conversely, every second-order vector field $L \in \mathfrak{X}^2(M)$ can be written as a finite sum of fields of the form $ZY$ and $W$, with $Z, Y, W \in \mathfrak{X}(M)$.
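The following symbolic check is ours, not the paper's. It takes $M = \mathbb{R}$ with coordinate $x$ and two hypothetical vector fields $Y = y(x)\, d/dx$ and $Z = z(x)\, d/dx$, and verifies with sympy that the composition $L = ZY$ obeys the defining identity of a second-order tangent vector with no constant term.

```python
# Symbolic check that L = Z Y satisfies L[f^3] = 3 f L[f^2] - 3 f^2 L[f] on R.
import sympy as sp

x = sp.symbols('x')
f, y, z = (sp.Function(n)(x) for n in ('f', 'y', 'z'))

def Y(g):          # first-order vector field Y = y(x) d/dx
    return y * sp.diff(g, x)

def Z(g):          # first-order vector field Z = z(x) d/dx
    return z * sp.diff(g, x)

def L(g):          # second-order differential operator L = Z Y
    return Z(Y(g))

identity = L(f**3) - 3*f*L(f**2) + 3*f**2*L(f)
print(sp.simplify(identity))   # -> 0
```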

The forms of order two $\Omega^2(M)$ are the smooth sections of the cotangent bundle of order two $\tau^* M := \bigcup_{m \in M} \tau_m^* M$. For any $f, g, h \in C^\infty(M)$ and $L \in \mathfrak{X}^2(M)$ we define $d_2 f \in \Omega^2(M)$ by $d_2 f(L) := L[f]$, and $d_2 f \cdot d_2 g \in \Omega^2(M)$ as
\[
d_2 f \cdot d_2 g \, [L] := \tfrac{1}{2}\left(L[fg] - f L[g] - g L[f]\right).
\]
It is easy to show that for any $Y, Z, W \in \mathfrak{X}(M)$,
\[
d_2 f \cdot d_2 g \, [ZY] = \tfrac{1}{2}\left(Z[f] Y[g] + Z[g] Y[f]\right) \quad \text{and} \quad d_2 f \cdot d_2 g \, [W] = 0.
\]
More generally, let $\alpha_m, \beta_m \in T_m^* M$ and choose two functions $f, g \in C^\infty(M)$ such that $df(m) = \alpha_m$ and $dg(m) = \beta_m$. It is easy to check that $(df \cdot dg)(m)$ does not depend on the particular choice of $f$ and $g$ above and hence we can write $\alpha_m \cdot \beta_m$ to denote $(df \cdot dg)(m)$. If $\alpha, \beta \in \Omega(M)$ then we can define $\alpha \cdot \beta \in \Omega^2(M)$ by $(\alpha \cdot \beta)(m) := \alpha(m) \cdot \beta(m)$. This product is commutative and $C^\infty(M)$-bilinear. It can be shown that every second-order form can be locally written as a finite sum of forms of the type $df \cdot dg$ and $d_2 h$.
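Continuing the same illustrative one-dimensional setting (again our addition, not the paper's), the defining formula for $d_2 f \cdot d_2 g$ applied to $L = ZY$ can be checked symbolically against the stated expression $\tfrac{1}{2}(Z[f] Y[g] + Z[g] Y[f])$:

```python
# Symbolic check on M = R: (L[f g] - f L[g] - g L[f]) / 2 with L = Z Y
# equals (Z[f] Y[g] + Z[g] Y[f]) / 2.
import sympy as sp

x = sp.symbols('x')
f, g, y, z = (sp.Function(n)(x) for n in ('f', 'g', 'y', 'z'))

Y = lambda h: y * sp.diff(h, x)            # Y = y(x) d/dx
Z = lambda h: z * sp.diff(h, x)            # Z = z(x) d/dx
L = lambda h: Z(Y(h))                      # second-order field L = Z Y

lhs = sp.Rational(1, 2) * (L(f*g) - f*L(g) - g*L(f))
rhs = sp.Rational(1, 2) * (Z(f)*Y(g) + Z(g)*Y(f))
print(sp.simplify(lhs - rhs))              # -> 0
```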

The $d_2$ operator can also be defined on one-forms by using a result (Theorem 7.1 in [12]) which states that there exists a unique linear operator $d_2 : \Omega(M) \to \Omega^2(M)$ characterized by
\[
d_2(df) = d_2 f \quad \text{and} \quad d_2(f\alpha) = df \cdot \alpha + f \, d_2\alpha .
\]

6.3. Stochastic integrals of forms along a semimartingale

Let $M$ be a manifold. A continuous $M$-valued stochastic process $X : \mathbb{R}_+ \times \Omega \to M$ is called a continuous $M$-valued semimartingale if for each smooth function $f \in C^\infty(M)$, the real-valued process $f \circ X$ is a continuous semimartingale.



We say that $X$ is locally bounded if the sets $\{X_s(\omega) \mid 0 \le s \le t\}$ are relatively compact in $M$ for each $t \in \mathbb{R}_+$, a.s.

Let $X$ be an $M$-valued semimartingale and let $\theta : \mathbb{R}_+ \times \Omega \to \tau^* M$ be a càglàd locally bounded process over $X$, that is, $\pi \circ \theta = X$, where $\pi : \tau^* M \to M$ is the canonical projection. It can be shown (see [12, Theorem 6.24]) that there exists a unique linear map $\theta \mapsto \int \langle \theta, dX \rangle$ that associates to each such $\theta$ a continuous real-valued semimartingale and that is fully characterized by the following properties: for any $f \in C^\infty(M)$ and any locally bounded càglàd real-valued process $K$,
\[
\int \langle d_2 f \circ X, dX \rangle = f(X) - f(X_0) \quad \text{and} \quad \int \langle K\theta, dX \rangle = \int K \, d\!\left(\int \langle \theta, dX \rangle\right). \tag{6.3}
\]
The stochastic process $\int \langle \theta, dX \rangle$ will be called the Itô integral of $\theta$ along $X$. If $\alpha \in \Omega^2(M)$, we will write in the sequel the Itô integral of $\alpha$ along $X$, that is, $\int \langle \alpha \circ X, dX \rangle$, simply as $\int \langle \alpha, dX \rangle$.

The integral of a $(0,2)$-tensor $b$ on $M$ along $X$ is the image of $b$ under the unique linear mapping $b \mapsto \int b(dX, dX)$ into the space of real continuous processes with finite variation that, for all $f, g \in C^\infty(M)$, satisfies
\[
\int (fb)(dX, dX) = \int (f \circ X) \, d\!\left(\int b(dX, dX)\right) \quad \text{and} \quad \int (df \otimes dg)(dX, dX) = [f \circ X, g \circ X]. \tag{6.4}
\]

If $\alpha \in \Omega(M)$ and $X$ is a semimartingale on $M$, the real semimartingale $\int \langle d_2\alpha, dX \rangle$ is called the Stratonovich integral of $\alpha$ along $X$ and is denoted by $\int \langle \alpha, \delta X \rangle$. This definition can be generalized by taking $\beta$ a $T^*M$-valued semimartingale over $X$ and by defining the Stratonovich integral as the unique real-valued semimartingale that satisfies the properties
\[
\int \langle df, \delta X \rangle = f(X) - f(X_0) \quad \text{and} \quad \int \langle Z\beta, \delta X \rangle = \int Z \, \delta\!\left(\int \langle \beta, \delta X \rangle\right), \tag{6.5}
\]
for any $f \in C^\infty(M)$ and any continuous real-valued semimartingale $Z$. Finally, it can be shown (see [12, Proposition 6.31]) that for any $f, g \in C^\infty(M)$,

\[
\int \langle df \cdot dg, dX \rangle = \tfrac{1}{2}\, [f(X), g(X)]. \tag{6.6}
\]

6.4. Stochastic differential equations on manifolds

The reader interested in the details of the material presented in this section is encouraged to consult Chapter 7 of [12].

Let $M$ and $N$ be two manifolds. A Stratonovich operator from $M$ to $N$ is a family $\{e(x, y)\}_{x \in M,\, y \in N}$ such that $e(x, y) : T_x M \to T_y N$ is a linear mapping that depends smoothly on its two entries. Let $e^*(x, y) : T_y^* N \to T_x^* M$ be the adjoint of $e(x, y)$.



Let $X$ be an $M$-valued semimartingale. We say that an $N$-valued semimartingale $Y$ is a solution of the Stratonovich stochastic differential equation
\[
\delta Y = e(X, Y)\, \delta X \tag{6.7}
\]
if for any $\alpha \in \Omega(N)$ the following equality between Stratonovich integrals holds:
\[
\int \langle \alpha, \delta Y \rangle = \int \langle e^*(X, Y)\alpha, \delta X \rangle .
\]
It can be shown [12, Theorem 7.21] that given a semimartingale $X$ in $M$, an $\mathcal{F}_0$-measurable random variable $Y_0$, and a Stratonovich operator $e$ from $M$ to $N$, there exist a stopping time $\zeta > 0$ and a solution $Y$ of (6.7) with initial condition $Y_0$ defined on the set $\{(t, \omega) \in \mathbb{R}_+ \times \Omega \mid t \in [0, \zeta(\omega))\}$ that has the following maximality and uniqueness property: if $\zeta'$ is another stopping time such that $\zeta' \le \zeta$ and $Y'$ is another solution defined on $\{(t, \omega) \in \mathbb{R}_+ \times \Omega \mid t \in [0, \zeta'(\omega))\}$, then $Y'$ and $Y$ coincide on this set. If $\zeta$ is finite then $Y$ explodes at time $\zeta$, that is, the path $Y_t$ with $t \in [0, \zeta)$ is not contained in any compact subset of $N$.
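To give a concrete, if elementary, illustration that is not part of the original text: take $M = \mathbb{R}^2$ with $X$ a planar Brownian motion, $N = S^1 \subset \mathbb{R}^2$, and the Stratonovich operator $e(x, y)v := (I - y y^{\mathsf{T}})v$, which projects $v$ onto the tangent line to the circle at $y$ (here $e$ happens not to depend on $x$). The solution of $\delta Y = e(X, Y)\,\delta X$ stays on the circle, and this is visible numerically when (6.7) is integrated with a Heun-type predictor-corrector scheme, which is consistent with the Stratonovich interpretation. All numerical choices in the sketch below are ours.

```python
# Brownian motion on the unit circle as a Stratonovich SDE  delta Y = e(X,Y) delta X,
# driven by a planar Brownian motion and integrated with the Heun scheme.
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 20_000
dt = T / n

def e(y, v):
    """Stratonovich operator: project v in R^2 onto the tangent line at y."""
    return v - np.dot(y, v) * y

Y = np.array([1.0, 0.0])                        # initial point on the circle
for _ in range(n):
    dX = rng.normal(0.0, np.sqrt(dt), size=2)   # increment of the driver X
    Y_pred = Y + e(Y, dX)                        # Euler predictor
    Y = Y + 0.5 * (e(Y, dX) + e(Y_pred, dX))     # Heun corrector (Stratonovich)

print("final |Y| =", np.linalg.norm(Y))          # stays close to 1
```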

The stochastic differential equations from the Itô integration point of view require the notion of Schwartz operator, whose construction we briefly review; the reader interested in the details is encouraged to check [12]. Note first that we can associate to any second-order vector field $L \in \mathfrak{X}^2(M)$ a symmetric tensor $\widehat{L} \in \mathfrak{X}(M) \otimes \mathfrak{X}(M)$. Second, given $x \in M$ and $y \in N$, a linear mapping $f$ from $\tau_x M$ into $\tau_y N$ is called a Schwartz morphism whenever $f(T_x M) \subset T_y N$ and $\widehat{f(L)} = \left(f|_{T_x M} \otimes f|_{T_x M}\right)(\widehat{L})$ for any $L \in \tau_x M$. Third, let $M$ and $N$ be two manifolds; a Schwartz operator from $M$ to $N$ is a family $\{f(x, y)\}_{x \in M,\, y \in N}$ such that $f(x, y) : \tau_x M \to \tau_y N$ is a Schwartz morphism that depends smoothly on its two entries. Let $f^*(x, y) : \tau_y^* N \to \tau_x^* M$ be the adjoint of $f(x, y)$. Finally, let $X$ be an $M$-valued semimartingale. We say that an $N$-valued semimartingale $Y$ is a solution of the Itô stochastic differential equation

\[
dY = f(X, Y)\, dX \tag{6.8}
\]

if for any $\alpha \in \Omega^2(N)$ the following equality between Itô integrals holds:
\[
\int \langle \alpha, dY \rangle = \int \langle f^*(X, Y)\alpha, dX \rangle .
\]

There exists an existence and uniqueness result for the solutions of these stochastic differential equations analogous to the one for Stratonovich differential equations.

Given a Stratonovich operator $e$ from $M$ to $N$, there exists a unique Schwartz operator $f : \tau M \times N \to \tau N$ defined as follows. Let $\gamma(t) = (x(t), y(t)) \in M \times N$ be a smooth curve that satisfies $e(x(t), y(t))(\dot{x}(t)) = \dot{y}(t)$ for all $t$. We define $f(x(t), y(t))(L_{x(t)}) := L_{y(t)}$, where the second-order differential operators $L_{x(t)} \in \tau_{x(t)} M$ and $L_{y(t)} \in \tau_{y(t)} N$ are given by
\[
L_{x(t)}[h] := \frac{d^2}{dt^2}\, h(x(t)) \quad \text{and} \quad L_{y(t)}[g] := \frac{d^2}{dt^2}\, g(y(t)),
\]
for any $h \in C^\infty(M)$ and $g \in C^\infty(N)$. This relation completely determines $f$ since the vectors of the form $L_{x(t)}$ span $\tau_{x(t)} M$. Moreover, the Itô and Stratonovich equations $\delta Y = e(X, Y)\, \delta X$ and $dY = f(X, Y)\, dX$ are equivalent, that is, they have the same solutions.

Acknowledgements

We thank Michel Émery and Jean-Claude Zambrini for carefully going through the paper and for their valuable comments and suggestions. We also thank Nawaf Bou-Rabee and Jerry Marsden for stimulating discussions on stochastic variational integrators. The authors acknowledge partial support from the French Agence Nationale de la Recherche, contract number JC05-41465. J.-A. L.-C. acknowledges support from the Spanish Ministerio de Educación y Ciencia grant number BES-2004-4914. He also acknowledges partial support from MEC grant BFM2006-10531 and Gobierno de Aragón grant DGA-grupos consolidados 225-206. J.-P. O. has been partially supported by a "Bonus Qualité Recherche" contract from the Université de Franche-Comté.

REFERENCES

[1] R. Abraham and J. E. Marsden: Foundations of Mechanics, Second edition, Addison-Wesley 1978.
[2] L. Arnold: Random Dynamical Systems, Springer Monographs in Mathematics, Springer 2003.
[3] V. I. Arnold: Mathematical Methods of Classical Mechanics, Second edition, Volume 60 of Graduate Texts in Mathematics, Springer 1989.
[4] J.-M. Bismut: Mécanique Aléatoire, Lecture Notes in Mathematics, vol. 866, Springer 1981.
[5] N. Bou-Rabee and H. Owhadi: Stochastic Variational Integrators, arXiv:0708.2187 (2007).
[6] N. Bou-Rabee and H. Owhadi: Stochastic Variational Partitioned Runge-Kutta Integrators for Constrained Systems, arXiv:0709.2222 (2007).
[7] G. E. P. Box and G. M. Jenkins: Time Series Analysis: Forecasting and Control, Holden-Day 1976.
[8] A. J. Chorin and O. H. Hald: Stochastic Tools in Mathematics and Science, Surveys and Tutorials in the Applied Mathematical Sciences, volume 1, Springer 2006.
[9] K. L. Chung and R. J. Williams: Introduction to Stochastic Integration, Second edition, Probability and its Applications, Birkhäuser 1990.
[10] J. Cresson and S. Darses: Plongement stochastique des systèmes lagrangiens, C. R. Math. Acad. Sci. Paris 342 (2006), 333-336.
[11] R. Durrett: Stochastic Calculus. A Practical Introduction, Probability and Stochastics Series, CRC Press 1996.
[12] M. Émery: Stochastic Calculus in Manifolds, Springer 1989.
[13] M. Émery: On two transfer principles in stochastic differential geometry, Séminaire de Probabilités, XXIV, 1988/89, 407-441, Lecture Notes in Math. 1426, Springer 1990. Correction: Séminaire de Probabilités, XXVI, 633, Lecture Notes in Math. 1526, Springer 1992.
[14] I. I. Gihman: Stability of solutions of stochastic differential equations (Russian), Limit Theorems Statist. Inference (Russian), pp. 14-45, Izdat. "Fan", Tashkent. English translation: Selected Transl. Statist. and Probability 12, Amer. Math. Soc., pp. 125-154. MR 40, number 944 (1966).
[15] R. Z. Hasminskii: Stochastic Stability of Differential Equations, translated from the Russian by D. Louvish, Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis 7, Sijthoff and Noordhoff, Alphen aan den Rijn-Germantown, Md 1980.
[16] E. P. Hsu: Stochastic Analysis on Manifolds, Graduate Studies in Mathematics 38, American Mathematical Society 2002.
[17] N. Ikeda and S. Watanabe: Stochastic Differential Equations and Diffusion Processes, Second edition, North-Holland Mathematical Library 24, North-Holland Publishing Co. 1989.
[18] P. Imkeller and C. Lederer: Some formulas for Lyapunov exponents and rotation numbers in two dimensions and the stability of the harmonic oscillator and the inverted pendulum, Dyn. Syst. 16(1) (2001), 29-61.
[19] H. Kunita: Some extensions of Itô's formula, Séminaire de probabilités de Strasbourg XV, 118-141, Lecture Notes in Mathematics 850, Springer 1981.
[20] J.-F. Le Gall: Mouvement Brownien et Calcul Stochastique, Notes de Cours DEA 1996-97. Available at http://www.dma.ens.fr/~legall/ (1997).
[21] L. D. Landau and E. M. Lifshitz: Mechanics, Volume 1 of Course of Theoretical Physics, Third Edition, Pergamon Press 1976.
[22] X.-M. Li: An averaging principle for integrable stochastic Hamiltonian systems, Preprint available at http://www.lboro.ac.uk/departments/ma/research/preprints/papers06/06-27.pdf.
[23] P.-A. Meyer: Géométrie stochastique sans larmes, Seminar on Probability, XV (Univ. Strasbourg, Strasbourg, 1979/1980), 44-102, Lecture Notes in Math. 850, Springer 1981.
[24] P.-A. Meyer: Géométrie différentielle stochastique, II, Seminar on Probability, XVI, Supplement, pp. 165-207, Lecture Notes in Math. 921, Springer 1982.
[25] E. Nelson: Dynamical Theories of Brownian Motion, Princeton University Press 1967.
[26] B. Øksendal: Stochastic Differential Equations, Sixth Edition, Universitext, Springer 2003.
[27] A. I. Ovseyevich: The stability of an inverted pendulum when there are rapid random oscillations of the suspension point, J. Applied Math. Mech. 70 (2006), 762-768.
[28] B. O'Neill: Semi-Riemannian Geometry. With Applications to Relativity, Pure and Applied Mathematics volume 103, Academic Press 1983.
[29] J.-A. Lázaro-Camí and J.-P. Ortega: Reduction, reconstruction and skew-product decomposition of symmetric stochastic differential equations, Preprint available at http://front.math.ucdavis.edu/0705.3156 (2007).
[30] T. Misawa: Conserved quantities and symmetries related to stochastic dynamical systems, Ann. Inst. Statist. Math. 51(4) (1999), 779-802.
[31] J.-P. Ortega and V. Planas-Bielsa: Dynamics on Leibniz manifolds, J. Geom. Phys. 52(1) (2004), 1-27.
[32] P. Protter: Stochastic Integration and Differential Equations. A New Approach, Applications of Mathematics, volume 21, Second Edition, Springer 2005.
[33] L. Schwartz: Géométrie différentielle du 2ème ordre, semi-martingales et équations différentielles stochastiques sur une variété différentielle, Seminar on Probability, XVI, Supplement, 1-148, Lecture Notes in Math. 921, Springer 1982.
[34] M. Thieullen and J. C. Zambrini: Probability and quantum symmetries. I. The theorem of Noether in Schrödinger's Euclidean quantum mechanics, Ann. Inst. H. Poincaré Phys. Théor. 67(3) (1997), 297-338.
[35] M. Thieullen and J. C. Zambrini: Symmetries in the stochastic calculus of variations, Probab. Theory Related Fields 107(3) (1997), 401-427.
[36] S. Watanabe: Differential and variation for flow of diffeomorphisms defined by stochastic differential equations on manifolds (in Japanese), Sûkaiken Kôkyuroku 391 (1980).
[37] K. Yasue: Stochastic calculus of variations, J. Funct. Anal. 41 (1981), 327-340.
[38] J.-C. Zambrini and K. Yasue: Semi-classical quantum mechanics and stochastic calculus of variations, Annals of Phys. 143 (1982), 54-83.
[39] W. A. Zheng and P.-A. Meyer: Quelques résultats de "mécanique stochastique", Seminar on Probability, XVIII, 223-244, Lecture Notes in Math. 1059, Springer 1984.

