Project IST-2001-38314, COLUMBUS
Design of Embedded Controllers for Safety Critical Systems
WPSHS: Stochastic Control and Analysis of Hybrid Systems

Stochastic Markovian Switching Hybrid Processes

Chenggui Yuan[1] and J. Lygeros

June 17, 2004
Version: 1.1
Task number: SHS1 & SHS2S
Deliverable number: DSHS3
Contract: IST-2001-38314 of the European Commission

[1] University of Cambridge, UK
DOCUMENT CONTROL SHEET

Title of document: Stochastic Markovian Switching Hybrid Processes
Authors of document: C. Yuan and J. Lygeros
Deliverable number: DSHS3
Contract: IST-2001-38314 of the European Commission
Project: Design of Embedded Controllers for Safety Critical Systems (COLUMBUS)

DOCUMENT CHANGE LOG

Version # | Issue Date    | Sections affected | Relevant information
0.1       | June 17, 2004 | All               | First draft

Version 1.0         | Organisation | Signature/Date
Authors: C. Yuan    | UCAM         |
         J. Lygeros | UCAM         |
Internal reviewers  |              |
COLUMBUS, IST-2001-38314
Work Package SHS, Deliverable DSHS3
Stochastic Markovian Switching Hybrid Processes
Chenggui Yuan and John Lygeros
Contents
1 Introduction
  1.1 Background
  1.2 Overview of the Study
  1.3 Stochastic Hybrid Processes
  1.4 Reachability Problem

2 Stabilization of a Class of Stochastic Differential Equations with Markovian Switching
  2.1 Introduction
  2.2 Problem Statement
  2.3 A Class of Systems with Controllable Linearization
  2.4 Minimum Phase Systems

3 On the Exponential Stability and Reachability for Switching Diffusion Processes
  3.1 Introduction
  3.2 Almost Sure Exponential Stability
  3.3 Exponential Stabilization of Linear Switching Diffusion Processes
  3.4 Robust Stability of Linear Switching Diffusion Processes
  3.5 Reachability

4 Invariant Measure of Stochastic Hybrid Processes with Jumps
  4.1 Introduction
  4.2 Stochastic Hybrid Systems with Jumps
  4.3 Transition Probability, Probability Density and Invariant Measure
  4.4 Existence of Invariant Measure

5 Asymptotic Stability and Boundedness of Delay Switching Diffusions
  5.1 Introduction
  5.2 Background on Switching Diffusions
  5.3 Main Results
  5.4 Implications for Boundedness and Stability
  5.5 An Example
Chapter 1
Introduction
1.1 Background
Hybrid systems driven by continuous-time Markov chains have been used to model many practical systems that may experience abrupt changes in structure and parameters, caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. Hybrid systems combine a part of the state that takes continuous values with another part of the state that takes discrete values. Research on jump systems has developed since the pioneering works of [31], [35] and [32], which presented a jump system in which a macroeconomic model of the national economy was used to study the effect of federal housing removal policies on the stabilization of the housing sector. The term describing the influence of interest rates was modeled by a finite-state Markov chain, to provide a quantitative measure of the effect of interest rate uncertainty on optimal policy. [51] studied the moment stability problem. [3] suggested that hybrid systems would become a basic framework for posing and solving control-related issues in Battle Management Command, Control and Communications (BM/C3) systems. Hybrid systems were also considered for the modeling of electric power systems by [72], as well as for the control of a solar thermal central receiver by [66]. In his book [49], Mariton explained that hybrid systems had emerged as a convenient mathematical framework for the formulation of various design problems in different fields such as target tracking (evasive target tracking problems), fault tolerant control, and manufacturing processes.
An important class of hybrid systems is that of stochastic differential equations with Markovian switching (SDEwMSs)
dX(t) = f(X(t), t, r(t))dt + g(X(t), t, r(t))dB(t).
Here the state vector has two components, X(t) and r(t): the first is in general referred to as the state, while the second is regarded as the mode. In its operation, the system switches from one mode to another in a random way, and the switching between the modes is governed by a Markov chain. The study of SDEwMSs has included the optimal regulator, controllability, observability, stability and stabilization, among other topics. Some sufficient conditions for boundedness, recurrence and stability in probability were established by [40]. [4] discussed stability in distribution for semi-linear stochastic differential equations with Markovian switching of the form
dX(t) = A(r(t))X(t)dt + g(X(t), r(t))dB(t).
[23] studied the ergodic control problem for SDEwMSs. [44] and [45] give sufficient conditions for stability in probability and stability in moment for SDEwMSs and for stochastic differential delay equations with Markovian switching (SDDEwMSs).
For more information on hybrid systems the reader is referred to [7, 16, 17, 18, 30, 55, 60, 64, 62, 63, 66, 74] and the references therein.
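The switching dynamics above lend themselves to a simple Euler–Maruyama simulation: integrate the diffusion over a small time step and let the mode jump with probability γij∆ + o(∆). Below is a minimal sketch for a scalar two-mode system; the coefficients f, g and the generator Γ are invented for illustration and are not taken from this deliverable.

```python
import numpy as np

def simulate_sdewms(f, g, Gamma, x0, i0, T=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama for dX = f(X, r)dt + g(X, r)dB with Markov switching r."""
    rng = rng or np.random.default_rng(0)
    x, i = x0, i0
    for _ in range(int(T / dt)):
        # Leave mode i with probability (sum of exit rates) * dt, first order in dt.
        rates = Gamma[i].copy()
        rates[i] = 0.0
        if rng.random() < rates.sum() * dt:
            i = rng.choice(len(rates), p=rates / rates.sum())
        # Euler-Maruyama step for the continuous state in the current mode.
        x = x + f(x, i) * dt + g(x, i) * np.sqrt(dt) * rng.standard_normal()
    return x, i

# Illustrative two-mode system: mode 0 strongly mean-reverting, mode 1 less so.
f = lambda x, i: [-2.0, -0.5][i] * x
g = lambda x, i: [0.1, 0.3][i]
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])   # placeholder generator of r(t)

x_T, i_T = simulate_sdewms(f, g, Gamma, x0=1.0, i0=0, T=5.0)
```

Since both modes are mean-reverting here, the sample path stays near the origin; the same loop applies verbatim to vector-valued X with matrix-valued g.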
1.2 Overview of the Study
Chapter 2 studies the stabilization of a class of stochastic systems with Markovian switching. Even though the stabilization of stochastic control systems has been widely studied [9, 19, 20, 21], relatively
little is known about the stabilization of systems that also involve Markovian switching. In this chapter we investigate the problem of exponential stabilization by state feedback for a class of stochastic differential equations with Markovian switching. After stating the general problem and giving Lyapunov and converse Lyapunov conditions for exponential stability, we restrict attention to two specific classes: systems with controllable linearization, and composite systems that typically arise in the process of linearization by state feedback. We show that under certain assumptions both these classes can be stabilized using a switched, linear state feedback controller. It should be noted that the properties of the class of composite systems considered here have been studied extensively in the deterministic case, for example by Byrnes and Isidori [14], Kokotovic and Sussmann [34], and Saberi et al. [61]. Here we extend the results to stochastic Markovian switching processes.
In Chapter 3 we investigate almost sure exponential stability and reachability for switching diffusion processes. Sufficient criteria for exponential stabilization and robust stability are also established for linear switching diffusion processes.
Chapter 4 develops criteria for the existence of invariant measures for a non-linear stochastic hybrid system

X(t) = ∫_0^t f(X(s), r(s)) ds + ∫_0^t g(X(s), r(s)) dB(s) + ∫_{[0,t]×R^d} h(X(s−), ρ) N(ds, dρ).   (1.1)
It is well known that once the existence of the invariant measure of an SDE is established, we may compute it by solving the associated PDE, known as the forward equation or the Kolmogorov-Fokker-Planck equation [33]. If the system is a linear SDE, one can solve the Fokker-Planck equation explicitly. For nonlinear SDEs, however, and especially in the case of Eq. (1.1), the situation becomes more complex and it is nontrivial to solve the PDEs. As an alternative, we derive the relation between the probability density of Eq. (1.1) and that of the corresponding SDEs.
Most of the existing results on stochastic stability for switching diffusions rely on the existence of a single Lyapunov function. Examples from the hybrid systems literature, however, suggest that even in the deterministic case one can find systems for which a single Lyapunov function does not exist; the systems can nonetheless be shown to be stable if one considers multiple Lyapunov functions. Motivated by this observation, in Chapter 5 we study the stability of delayed switching diffusions using multiple Lyapunov functions in the spirit of [67, 41, 75, 11, 13, 46]; that is, besides the Lyapunov functions V(x, t, i), i ∈ S, there is another Lyapunov function U(x, t) (see Theorem 5.1 and Theorem 5.2), which differs from the approach of [8, 65]. On a more technical note, our new asymptotic stability criteria do not require the diffusion operator associated with the underlying stochastic differential equations to be negative definite, as is the case with most of the existing results.
1.3 Stochastic Hybrid Processes
Throughout this deliverable, unless otherwise specified, we let (Ω, F, F_t, P) be a complete probability space with a filtration F_t satisfying the usual conditions (i.e. it is right continuous and F_0 contains all P-null sets). Let B(t) = (B_t^1, . . . , B_t^m)^T be an m-dimensional Brownian motion defined on this probability space. Let | · | denote the Euclidean norm for vectors or the trace norm for matrices.

The evolution of the process (X(t), r(t)) is governed by the following equations:
dX(t) = f(X(t), t, r(t))dt + g(X(t), t, r(t))dB(t) (1.2)
P{r(t + ∆) = j | r(t) = i} =
    γij ∆ + o(∆)      if i ≠ j,
    1 + γii ∆ + o(∆)  if i = j,   (1.3)
on t ≥ 0 with initial value X(0) = x0 and r(0) = i0, where ∆ > 0, f : R^n × R_+ × S → R^n and g : R^n × R_+ × S → R^{n×m}. If i ≠ j, γij ≥ 0 is the transition rate from i to j, while

γii = −Σ_{j≠i} γij.
{r(t), t ≥ 0} is a right-continuous Markov chain on the probability space taking values in a finite state space S = {1, 2, . . . , N}. Recall that a continuous-time Markov chain r(t) with generator Γ = (γij)_{N×N} can
be represented as a stochastic integral with respect to a Poisson random measure (cf. [4, 22]). Indeed, let ∆ij be consecutive, left-closed, right-open intervals of the real line, each of length γij, such that

∆_12 = [0, γ_12),
∆_13 = [γ_12, γ_12 + γ_13),
...
∆_1N = [Σ_{j=2}^{N−1} γ_1j, Σ_{j=2}^{N} γ_1j),
∆_21 = [Σ_{j=2}^{N} γ_1j, Σ_{j=2}^{N} γ_1j + γ_21),
∆_23 = [Σ_{j=2}^{N} γ_1j + γ_21, Σ_{j=2}^{N} γ_1j + γ_21 + γ_23),
...
∆_2N = [Σ_{j=2}^{N} γ_1j + Σ_{j=1, j≠2}^{N−1} γ_2j, Σ_{j=2}^{N} γ_1j + Σ_{j=1, j≠2}^{N} γ_2j),

and so on. Define a function h : S × R → R by

h(i, y) = j − i   if y ∈ ∆ij,
          0       otherwise.
Then

dr(t) = ∫_R h(r(t−), y) ν(dt, dy),   (1.4)

where ν(dt, dy) is a Poisson random measure with intensity dt × m(dy), in which m is the Lebesgue measure on R.
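The interval construction can be coded directly: lay the lengths γij, j ≠ i, end to end on the real line, and let h(i, y) return the jump size j − i when y falls in ∆ij. A small sketch with an illustrative generator (the rates below are placeholders):

```python
import numpy as np

def build_intervals(Gamma):
    """Consecutive left-closed, right-open intervals Delta_ij of length gamma_ij."""
    N = Gamma.shape[0]
    intervals, left = {}, 0.0
    for i in range(N):
        for j in range(N):
            if j != i and Gamma[i, j] > 0:
                intervals[(i, j)] = (left, left + Gamma[i, j])
                left += Gamma[i, j]
    return intervals

def h(i, y, intervals):
    """h(i, y) = j - i if y lies in Delta_ij, else 0."""
    for (a, j), (lo, hi) in intervals.items():
        if a == i and lo <= y < hi:
            return j - i
    return 0

Gamma = np.array([[-1.0,  0.4,  0.6],
                  [ 0.2, -0.7,  0.5],
                  [ 0.3,  0.3, -0.6]])
intervals = build_intervals(Gamma)
```

A quick sanity check of the construction: the total length of the intervals attached to state i equals the total exit rate −γii, and h returns the correct jump size only for points inside the corresponding interval.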
Let C^{2,1}(R^n × R_+ × S; R_+) denote the family of all non-negative functions V(x, t, i) on R^n × R_+ × S which are continuously twice differentiable in x and once in t. If V ∈ C^{2,1}(R^n × R_+ × S; R_+), define an operator LV from R^n × R_+ × S to R by

LV(x, t, i) = V_t(x, t, i) + V_x(x, t, i) f(x, t, i) + (1/2) trace[g^T(x, t, i) V_xx(x, t, i) g(x, t, i)] + Σ_{j=1}^{N} γij V(x, t, j),   (1.5)

where

V_t(x, t, i) = ∂V(x, t, i)/∂t,   V_x(x, t, i) = (∂V(x, t, i)/∂x_1, . . . , ∂V(x, t, i)/∂x_n)

and

V_xx(x, t, i) = (∂^2 V(x, t, i)/∂x_j ∂x_k)_{n×n}.
As a standing hypothesis we assume that both f and g are sufficiently smooth so that equation (1.2) has a unique solution. We refer the reader to [44] for conditions on the existence and uniqueness of the solution. Fix any x0 and i0 and write X(t; x0, i0) = X(t) for simplicity.

The following generalised Itô formula (cf. [63]) will play an important role in this deliverable and we cite it as a lemma.
Lemma 1.1 If V ∈ C^{2,1}(R^n × R_+ × S; R_+), then for any t ≥ 0

V(X(t), t, r(t)) = V(X(0), 0, r(0)) + ∫_0^t LV(X(s), s, r(s)) ds
    + ∫_0^t V_x(X(s), s, r(s)) g(X(s), s, r(s)) dB(s)
    + ∫_0^t ∫_R [V(X(s), s, r(s) + h(r(s), l)) − V(X(s), s, r(s))] µ(ds, dl),   (1.6)

where the function h is as in (1.4) and µ(ds, dl) = ν(ds, dl) − m(dl)ds is a martingale measure.
Taking the expectation on both sides of (1.6), we obtain the following lemma ([63], Lemma 3 on p. 104).

Lemma 1.2 Let V ∈ C^2(R^n × R_+ × S; R_+) and let τ1, τ2 be bounded stopping times such that 0 ≤ τ1 ≤ τ2 a.s. If V(X(t), t, r(t)) and LV(X(t), t, r(t)) etc. are bounded on t ∈ [τ1, τ2] with probability 1, then

EV(X(τ2), τ2, r(τ2)) = EV(X(τ1), τ1, r(τ1)) + E ∫_{τ1}^{τ2} LV(X(s), s, r(s)) ds.   (1.7)

In this deliverable, whenever we apply this formula we will define the bounded stopping times τ1 and τ2 such that {X(t) : τ1 ≤ t ≤ τ2} is bounded in R^n with probability 1, and hence V(X(t), t, r(t)) etc. become bounded on t ∈ [τ1, τ2].
1.4 Reachability Problem
The mathematical formulation of the reachability problem was given in DSHS1. Since the hybrid systems we are interested in are usually Markov processes, Bujorianu and Lygeros set up the reachability problem for general Markov processes. In this very general setting, they proved some measurability results and gave some estimates for reach set probabilities. In DSHS2 they investigated reachability using the Dirichlet form method. In Section 3.5 of the present deliverable, we discuss reachability for switching diffusion processes using a Lyapunov functional method; interestingly, the result is very similar to the one obtained with the Dirichlet form method in DSHS2.
Chapter 2
Stabilization of a Class of Stochastic Differential Equations with Markovian Switching
2.1 Introduction
The stability and control of Markovian jump systems have recently received a lot of attention. For example, Ji and Chizeck [30] and Mariton [49] studied the stability of linear jump equations of the form
dX(t) = A(r(t))X(t)dt, (2.1)
where r(t) is a Markov chain taking values in S = {1, 2, . . . , N}. Mao [44] investigated exponential stability for general nonlinear stochastic differential equations with Markovian switching
dX(t) = f(X(t), t, r(t))dt + g(X(t), t, r(t))dB(t). (2.2)
Mao et al. [45] and the authors [77] investigated the stability of nonlinear stochastic differential delay equations with Markovian switching. In terms of control, Sworder [69] and Wonham [74] studied the linear quadratic regulator problem, and Mariton and Bertrand [50] extended this to the case of output feedback. Pakshin [59] studied the robust stability and stabilization of linear jump systems. Finally, Ghosh et al. [23] developed a dynamic programming approach to the optimal control of general stochastic differential equations with Markovian switching.

Even though the stabilization of stochastic control systems has been widely studied [9, 19, 20, 21], relatively little is known about the stabilization of systems that also involve Markovian switching. In this chapter we investigate the problem of exponential stabilization by state feedback for a class of stochastic differential equations with Markovian switching. After stating the general problem and giving Lyapunov and converse Lyapunov conditions for exponential stability, we restrict attention to two specific classes: systems with controllable linearization, and composite systems that typically arise in the process of linearization by state feedback. We show that under certain assumptions both these classes can be stabilized using a switched, linear state feedback controller. It should be noted that the properties of the class of composite systems considered here have been studied extensively in the deterministic case, for example by Byrnes and Isidori [14], Kokotovic and Sussmann [34], and Saberi et al. [61]. Here we extend the results to stochastic systems with switching.
2.2 Problem Statement
Consider stochastic differential equations with Markovian switching of the form
dX(t) = f(X(t), r(t))dt + g(X(t), r(t))dB(t) (2.3)
with solutions defined on t ≥ 0 with initial value X(0) = x0 ∈ R^n and r(0) = i0. Here f : R^n × S → R^n, g : R^n × S → R^{n×m}, B(·) is an m-dimensional Brownian motion, and r(·) is a Markov chain as described in Chapter 1; B(·) is independent of r(·).
To ensure the existence and uniqueness of solutions for equation (2.3) we impose the following hypothesis.

Assumption 2.1 f and g are locally Lipschitz and have linear growth. That is, for each k = 1, 2, . . ., there exists an hk > 0 such that

|f(x, i) − f(y, i)| + |g(x, i) − g(y, i)| ≤ hk|x − y|

for all i ∈ S and those x, y ∈ R^n with |x| ∨ |y| ≤ k. There exists moreover an h > 0 such that

|f(x, i)| + |g(x, i)| ≤ h(1 + |x|)

for all x ∈ R^n and i ∈ S.
It can be shown (cf. Mao [44]) that, under Assumption 2.1, equation (2.3) has a unique continuous solution X^{x0,i0} on t ≥ 0 for each initial value.

In this section, we discuss the stability in probability of the trivial solution.

Assumption 2.2 Equation (2.3) has an equilibrium at x = 0, i.e. f(0, i) = 0 and g(0, i) = 0 for all i ∈ S.

Under Assumptions 2.1 and 2.2, equation (2.3) has the unique solution X^{0,i0}(t) = 0 for all t ≥ 0 and i0 ∈ S. This solution is called the trivial solution.
Definition 2.1 The trivial solution of equation (2.3) is said to be exponentially stable in mean square if there exist positive constants λ1 and λ2 such that for all t ≥ 0

E|X^{x0,i0}(t)|^2 ≤ λ1|x0|^2 e^{−λ2 t}.   (2.4)

In [44], Mao extended the Lyapunov approach to establish the following sufficient condition for exponential stability.
Theorem 2.1 If there exist a function V(x, i) ∈ C^2(R^n × S; R_+) and positive constants c1, c2 and c3 such that

c1|x|^2 ≤ V(x, i) ≤ c2|x|^2   (2.5)

and

LV(x, i) < −c3|x|^2   (2.6)

for all (x, i) ∈ R^n × S, then the trivial solution of equation (2.3) is exponentially stable in mean square.
It is interesting to compare this result to Lyapunov theorems for deterministic hybrid systems. It is easy to see that for deterministic hybrid systems, arbitrary switching between stable systems (even stable linear systems) may result in an unstable system (for examples see [11, 12]). To ensure stability of the system, in addition to the requirement that the Lyapunov function decreases along the continuous evolution, one also needs to impose conditions so that the effect of discrete transitions on the value of the Lyapunov function does not undo this decrease. These conditions can take the form of constraints on the value of the Lyapunov function before and after a discrete transition [29], or more generally on the sequence of values taken by the Lyapunov function at the switching times [11, 75, 13], or of a minimum or average "dwell time" in each discrete state [56, 42]; for an overview the reader is referred to [12]. Somewhat surprisingly, none of these conditions seems to be necessary in Theorem 2.1. The reason is that the transition rates of the Markov chain that control the frequency of the switching are included in equation (1.5), and hence in the conditions of the theorem. In other words, the conditions of the theorem implicitly quantify the trade-off between frequency of switching/dwell time and rate of decrease of the Lyapunov function along the solution of the stochastic differential equation (2.3).
Extending the classical converse Lyapunov results [24], one can prove the following theorem, which gives necessary conditions for exponential stability in mean square of the trivial solution of equation (2.3).
Theorem 2.2 If the trivial solution of equation (2.3) is exponentially stable in mean square and the functions f and g have continuous bounded derivatives with respect to x up to second order, then there exists a function V(x, i) ∈ C^2(R^n × S; R) satisfying inequalities (2.5) and (2.6) and also

|∂V/∂x_j| < c4|x|,   |∂^2 V/∂x_j ∂x_k| < c5   (2.7)

for some c4 > 0 and c5 > 0.
Since the proof is similar to the classical converse Lyapunov results, it is omitted.

As an extension of the system (2.3), one can consider controlled stochastic differential equations with Markovian switching, i.e. stochastic differential equations of the form
dX(t) = σ(X(t), r(t), u(t))dt + b(X(t), r(t), u(t))dB(t) (2.8)
where u is an F_t-measurable, R^p-valued control law, σ : R^n × S × R^p → R^n and b : R^n × S × R^p → R^{n×m}.
To ensure the existence and uniqueness of the solutions of (2.8) and the existence of a trivial solution we impose the following assumption.

Assumption 2.3 σ and b are continuous in u, with σ(0, i, 0) ≡ 0 and b(0, i, 0) ≡ 0 for all i ∈ S. σ and b are locally Lipschitz in x and have linear growth. That is, for each k = 1, 2, . . ., there exists an hk > 0 such that

|σ(x, i, u) − σ(y, i, u)| + |b(x, i, u) − b(y, i, u)| ≤ hk|x − y|

for all u ∈ R^p, i ∈ S and those x, y ∈ R^n with |x| ∨ |y| ≤ k. There is moreover an h > 0 such that

|σ(x, i, u)| + |b(x, i, u)| ≤ h(1 + |x|)

for all x ∈ R^n, u ∈ R^p and i ∈ S.
Definition 2.2 An F_t-measurable u : R^n × S → R^p is said to be a stabilizing feedback law for the controlled stochastic system with Markovian switching (2.8) if the trivial solution of the closed-loop system

dX(t) = σ(X(t), r(t), u(X(t), r(t)))dt + b(X(t), r(t), u(X(t), r(t)))dB(t)   (2.9)

is exponentially stable in mean square.
In general, one needs to impose restrictions on the class of controllers considered to ensure that Assumption 2.1 is met for the closed-loop system. Fortunately, this technical issue does not arise for the classes of controllers used in subsequent sections.
2.3 A Class of Systems with Controllable Linearization
We first consider the case where for each i ∈ S, the linearization of equation (2.8) about the equilibrium (0, i, 0) is completely controllable. In this case, after coordinate transformations (and possibly a linear state feedback pre-compensator) equation (2.8) can be written in the form

X(t) = x0 + ∫_0^t [A(r(s))X(s) + H(r(s))u(s) + F(X(s), r(s), u(s))] ds + ∫_0^t G(X(s), r(s), u(s)) dB(s)   (2.10)
where A(i) ∈ R^{n×n} and H(i) ∈ R^{n×p} are matrices in Brunovsky canonical form. In other words, there exist positive integers k_1(i), . . . , k_p(i) with Σ_{j=1}^p k_j(i) = n such that A(i) is the block-diagonal matrix

A(i) = diag(A_{k_1}(i), . . . , A_{k_p}(i)),

where A_{k_j}(i) ∈ R^{k_j×k_j}, 1 ≤ j ≤ p, is given by

A_{k_j}(i) =
[ 0 1 0 . . . 0 0 ]
[ 0 0 1 . . . 0 0 ]
[       . . .     ]
[ 0 0 0 . . . 0 1 ]
[ 0 0 0 . . . 0 0 ].
Likewise, H(i) is the block-diagonal matrix

H(i) = diag(b_{k_1}(i), . . . , b_{k_p}(i)),

where b_{k_j}(i), 1 ≤ j ≤ p, is the column vector in R^{k_j} given by

b_{k_j}(i) = (0, . . . , 0, 1)^T.
In addition we impose the following assumption on the nonlinearities F and G.

Assumption 2.4 F = (F_1, . . . , F_n) : R^n × S × R^p → R^n and G = (G_1, . . . , G_m) : R^n × S × R^p → R^{n×m} are such that there exists a λ > 0 such that for any j, 1 ≤ j ≤ n, x ∈ R^n and u ∈ R^p,

|F_j(x, i, u)| + |G_j(x, i, u)| ≤ λ|π_j(x)|.
Notice that under our earlier assumptions F(0, i, 0) = 0 and G(0, i, 0) = 0.

Let α_i be a real number with α_i > 1 for all i ∈ S, and denote by Φ(i) ∈ R^{n×n} the diagonal matrix

Φ(i) = diag(α_i^{−1}, α_i^{−2}, . . . , α_i^{−n}).
By [20], we obtain the following lemmas.

Lemma 2.1 For any x ∈ R^n and u ∈ R^p, |Φ(i)F(x, i, u)| ≤ √n λ|Φ(i)x| and |Φ(i)G(x, i, u)| ≤ √n λ|Φ(i)x|.
Lemma 2.2 The following three properties hold:

(i) α_i Φ^{−1}(i) A(i) Φ(i) = A(i).

(ii) For any matrix K(i) ∈ R^{p×n}, the matrix

K̄(i) = α_i H^T(i) Φ^{−1}(i) H(i) K(i) Φ(i)

satisfies

H(i) K̄(i) = α_i Φ^{−1}(i) H(i) K(i) Φ(i).

(iii) For any x ∈ R^n,

α_i^{−n}|x| ≤ |Φ(i)x| ≤ α_i^{−1}|x|.
Since the pairs of matrices (A(i), H(i)) are completely controllable, for each i ∈ S there exists a matrix K(i) ∈ R^{p×n} such that A(i) + H(i)K(i) is asymptotically stable. If we let M(i) = A(i) + H(i)K(i), this implies that the Lyapunov equation

M^T(i)P(i) + P(i)M(i) = −I   (2.11)

has a unique symmetric positive definite solution P(i).
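For a given Hurwitz M(i), equation (2.11) is a linear system in the entries of P(i) and can be solved by vectorization with a Kronecker product. The sketch below does this for a small illustrative Hurwitz matrix M (not taken from the deliverable) and checks that the solution is symmetric positive definite.

```python
import numpy as np

def solve_lyapunov(M):
    """Solve M^T P + P M = -I for P via row-major vectorization."""
    n = M.shape[0]
    I = np.eye(n)
    # With row-major vec: vec(M^T P) = kron(M^T, I) vec(P), vec(P M) = kron(I, M^T) vec(P).
    L = np.kron(M.T, I) + np.kron(I, M.T)
    p = np.linalg.solve(L, (-I).reshape(-1))
    return p.reshape(n, n)

M = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1, -2
P = solve_lyapunov(M)
```

Because M is Hurwitz, the solution P is the unique symmetric positive definite matrix satisfying (2.11); the assertions below confirm this numerically.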
Theorem 2.3 If there exist α_i, i ∈ S, such that the Linear Matrix Inequalities (LMIs)

[ −α_i Φ^2(i) + Σ_{j=1}^N γij Φ(j)P(j)Φ(j)    √((2√n λ + nλ^2)‖P‖) Φ(i) ]
[ √((2√n λ + nλ^2)‖P‖) Φ(i)                   −I                        ]  < 0   (2.12)

are satisfied, then there exists a stabilizing feedback law for the stochastic differential equation with Markovian switching (2.10).

The proof is constructive and makes use of the Schur complement (see [10]).
Lemma 2.3 (Schur complement) Let M, N, R be constant matrices of appropriate dimensions such that R = R^T > 0 and M = M^T. Then M + N R^{−1} N^T < 0 if and only if

[ M     N  ]
[ N^T  −R ]  < 0.
Proof of Theorem 2.3 By Lemma 2.2, for each i ∈ S there exists a matrix K̄(i) ∈ R^{p×n} such that

A(i) + H(i)K̄(i) = α_i Φ^{−1}(i) M(i) Φ(i).   (2.13)

We show that the linear control law

u(x, i) = K̄(i)x   (2.14)

is a stabilizing feedback law for the system (2.10). According to Definition 2.2, we need to prove that the trivial solution of the closed-loop system

X(t) = x0 + ∫_0^t [A(r(s))X(s) + H(r(s))K̄(r(s))X(s) + F(X(s), r(s), K̄(r(s))X(s))] ds + ∫_0^t G(X(s), r(s), K̄(r(s))X(s)) dB(s)   (2.15)
is exponentially stable in mean square.

Define

V(x, i) = x^T Φ(i)P(i)Φ(i)x.

Let L denote the infinitesimal generator associated with the closed-loop system (2.15). Then

LV(x, i) = Σ_{j=1}^N γij V(x, j) + V_x(x, i)[A(i)x + H(i)K̄(i)x + F(x, i, K̄(i)x)]
    + (1/2) trace[G^T(x, i, K̄(i)x) V_xx(x, i) G(x, i, K̄(i)x)]
  = 2x^T Φ(i)P(i)Φ(i)F(x, i, K̄(i)x) + x^T [Σ_{j=1}^N γij Φ(j)P(j)Φ(j)] x
    + x^T [(A(i) + H(i)K̄(i))^T Φ(i)P(i)Φ(i) + Φ(i)P(i)Φ(i)(A(i) + H(i)K̄(i))] x
    + trace[G^T(x, i, K̄(i)x) Φ(i)P(i)Φ(i) G(x, i, K̄(i)x)].

Let ‖P‖ = max{‖P(i)‖ : i ∈ S}. By (2.13) and (2.11), we have

LV(x, i) = 2x^T Φ(i)P(i)Φ(i)F(x, i, K̄(i)x) + x^T [Σ_{j=1}^N γij Φ(j)P(j)Φ(j)] x
    + α_i x^T Φ(i)[M^T(i)P(i) + P(i)M(i)]Φ(i)x
    + trace[G^T(x, i, K̄(i)x) Φ(i)P(i)Φ(i) G(x, i, K̄(i)x)]
  = 2x^T Φ(i)P(i)Φ(i)F(x, i, K̄(i)x) + x^T [Σ_{j=1}^N γij Φ(j)P(j)Φ(j)] x
    − α_i x^T Φ(i)Φ(i)x + trace[G^T(x, i, K̄(i)x) Φ(i)P(i)Φ(i) G(x, i, K̄(i)x)]
  ≤ −α_i x^T Φ(i)Φ(i)x + x^T [Σ_{j=1}^N γij Φ(j)P(j)Φ(j)] x
    + ‖P(i)‖[2|Φ(i)x||Φ(i)F(x, i, K̄(i)x)| + |Φ(i)G(x, i, K̄(i)x)|^2].
This implies, by Lemma 2.1, that

LV(x, i) ≤ [−α_i + (2√n λ + nλ^2)‖P‖]|Φ(i)x|^2 + x^T [Σ_{j=1}^N γij Φ(j)P(j)Φ(j)] x
  = x^T [−α_i Φ^2(i) + Σ_{j=1}^N γij Φ(j)P(j)Φ(j) + (2√n λ + nλ^2)‖P‖ Φ^2(i)] x.

By Lemma 2.3, the matrix in brackets is negative definite if and only if the LMI (2.12) holds, so there exists a c > 0 such that

LV(x, i) ≤ −c|x|^2.

The required result follows from Theorem 2.1.
Example 2.1 Let B(t) be a three-dimensional Brownian motion and let r(t) be a right-continuous Markov chain taking values in S = {1, 2} with generator

Γ = (γij)_{2×2} =
[ −0.05    0.05  ]
[  0.001  −0.001 ].

Assume

|F_j(x, i, u)| + |G_j(x, i, u)| ≤ (1/10000)|π_j(x)|.   (2.16)
Let

A(1) =
[ 0 0 0 ]
[ 0 0 1 ]
[ 0 0 0 ],

A(2) =
[ 0 1 0 ]
[ 0 0 0 ]
[ 0 0 0 ],

H(1) =
[ 1 0 ]
[ 0 0 ]
[ 0 1 ],

H(2) =
[ 0 0 ]
[ 1 0 ]
[ 0 1 ].
If

K(1) =
[ −5  0  0 ]
[ −4 −2 −3 ],

K(2) =
[ −3 −4  0 ]
[  3 −1 −4 ],

then M(i) = A(i) + H(i)K(i), i = 1, 2, are asymptotically stable. Solving the Lyapunov equations

M^T(i)P(i) + P(i)M(i) = −I,
we obtain

P(1) =
[  0.1     −0.0095  −0.0476 ]
[ −0.095    1.0508  −0.5000 ]
[ −0.0476  −0.5000   0.5635 ],

P(2) =
[  0.8333  −0.5000   0.6286 ]
[ −0.5000   0.5000  −0.4857 ]
[  0.6286  −0.4857   0.7179 ],

with ‖P(1)‖ = 1.3636 and ‖P(2)‖ = 1.7848. Using this and (2.16), we find that α_1 = 1/2, α_2 = 1/3 is a solution of the LMIs (2.12). By (2.13),
we have

K̄(1) =
[ −2.5   0   0   ]
[ −8    −2  −1.5 ],

K̄(2) =
[  −7.5       0        0   ]
[ −17.8378   −2.9730  −1.5 ].

Therefore, by Theorem 2.3, the closed-loop system is exponentially stable with the feedback u(x, i) = K̄(i)x.
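As a numerical sanity check on Example 2.1, one can form M(i) = A(i) + H(i)K(i) from the data above and verify that both matrices are Hurwitz (all eigenvalues in the open left half-plane), so that the Lyapunov equations (2.11) are indeed solvable.

```python
import numpy as np

# Data from Example 2.1.
A1 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
A2 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
H1 = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
H2 = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
K1 = np.array([[-5, 0, 0], [-4, -2, -3]], dtype=float)
K2 = np.array([[-3, -4, 0], [3, -1, -4]], dtype=float)

# Closed-loop matrices M(i) = A(i) + H(i)K(i).
M1 = A1 + H1 @ K1
M2 = A2 + H2 @ K2
# Both spectra lie strictly in the left half-plane, confirming asymptotic stability.
```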
2.4 Minimum Phase Systems
Consider a system whose continuous state is partitioned into two components (ξ, ζ) with ξ ∈ R^q and ζ ∈ R^r. Assume that the evolution of the continuous state is governed by the equations

ξ(t) = ξ0 + ∫_0^t [A(r(s))ξ(s) + H(r(s))u + F(ξ(s), r(s), u)] ds + ∫_0^t G(ξ(s), r(s), u) dW_1(s),
ζ(t) = ζ0 + ∫_0^t [f(ζ(s), r(s)) + D(ζ(s), r(s), ξ(s))ξ(s)] ds + ∫_0^t g(ζ(s), r(s)) dW_2(s),   (2.17)
where ξ0 ∈ R^q and ζ0 ∈ R^r are given initial conditions, (W_1(t), t ∈ R_+) and (W_2(t), t ∈ R_+) are standard R^{l_1}- and R^{l_2}-valued Wiener processes independent of one another, f : R^r × S → R^r, g : R^r × S → R^{r×l_2}, F = (F_1, . . . , F_q)^T : R^q × S × R^p → R^q, G = (G_1, . . . , G_q)^T : R^q × S × R^p → R^{q×l_1}, D : R^r × S × R^q → R^{r×q}, A(i) ∈ R^{q×q} and H(i) ∈ R^{q×p} for all i ∈ S.

Systems of this type can arise out of a process of feedback linearization [27, 57]. In this case, the
matrices A(i) and H(i) are typically assumed to be in Brunovsky canonical form. With systems of this type one is typically interested in selecting a feedback control law u(x, i) in such a way that the stability of the state ξ is guaranteed. The stability of the overall closed-loop system is then determined by the stability of the so-called zero dynamics
ζ(t) = ζ0 + ∫_0^t f(ζ(s), r(s)) ds + ∫_0^t g(ζ(s), r(s)) dW_2(s).   (2.18)
The system is called minimum phase if the zero dynamics are stable; non-minimum phase systems are notoriously difficult to deal with [5, 25, 68]. Even though this link to feedback linearization will not be fully explored here, it is worth noting that:

• The system of equation (2.17) is slightly more general than the standard normal form obtained by taking Lie derivatives of an output function, since nonlinear perturbations are allowed in the ξ dynamics.

• The relative degree is assumed to be well defined and constant for all i ∈ S; in other words, the partition of the state into ξ and ζ does not depend on the discrete state.

• The Wiener processes enter through vector fields that respect the state partition, in the sense that f is independent of ξ and u, and G is independent of ζ.
To ensure that the system fits into the framework established in earlier sections we introduce the following assumption.

Assumption 2.5 F = (F_1, . . . , F_q)^T and G = (G_1, . . . , G_q)^T are Lipschitz continuous with F(0, i, 0) = 0 and G(0, i, 0) = 0. Moreover, there exists a λ > 0 such that for any j, 1 ≤ j ≤ q, ξ, ξ̄ ∈ R^q and u ∈ R^p,

|F_j(ξ, i, u) − F_j(ξ̄, i, u)| + |G_j(ξ, i, u) − G_j(ξ̄, i, u)| ≤ λ|π_j(ξ − ξ̄)|.

f and g are C^2 functions with bounded derivatives, such that f(0, i) = 0 and g(0, i) = 0. D is a Lipschitz function, bounded by a constant Λ. u is a measurable R^p-valued control law. A(i) and H(i) are in Brunovsky canonical form.
Define matrices K̄(i) in the same way as described in Section 2.3.

Theorem 2.4 Assume that the trivial solution of the zero dynamics (2.18) is exponentially stable in mean square and that the conditions of Theorem 2.3 hold. Then the switched linear control law u(ξ, i) = K̄(i)ξ is an exponentially stabilizing feedback law for the system of equation (2.17).
Proof Since the zero dynamics (2.18) are exponentially stable in mean square, by Theorem 2.2 there exist a Lyapunov function V : R^r × S → R_+ and constants c_i, 1 ≤ i ≤ 5, satisfying (2.5), (2.6) and (2.7). Moreover, by Theorem 2.3 the ξ dynamics are exponentially stable in mean square, so there exist positive constants c6, c7 such that

E|ξ(t)|^2 ≤ c6|ξ(0)|^2 e^{−c7 t}.   (2.19)

Denoting by L^{(1)} the infinitesimal generator associated with the zero dynamics (2.18) and by L^{(2)} the infinitesimal generator associated with the ζ dynamics of (2.17), we have

L^{(2)}V(ζ, i) = L^{(1)}V(ζ, i) + V_ζ(ζ, i)D(ζ, i, ξ)ξ.

For each integer k ≥ 1, define the stopping time

τk = inf{t ≥ 0 : |ζ(t)| ≥ k}.
Obviously, τk → ∞ almost surely as k → ∞. Noting that 0 < |ζ(t)| ≤ k if 0 ≤ t ≤ τk, we can apply the generalized Itô formula and use (2.5), (2.6) and (2.7) to derive that for any t ≥ 0 and ε > 0

E[e^{(c3/c2)(t∧τk)} V(ζ(t∧τk), r(t∧τk))]
  = V(ζ0, i0) + E ∫_0^{t∧τk} e^{(c3/c2)s} [(c3/c2)V(ζ(s), r(s)) + L^{(2)}V(ζ(s), r(s))] ds
  ≤ c2|ζ0|^2 + E ∫_0^{t∧τk} e^{(c3/c2)s} [(c3/c2)c2|ζ(s)|^2 − c3|ζ(s)|^2 + V_ζ(ζ(s), r(s))D(ζ(s), r(s), ξ(s))ξ(s)] ds
  ≤ c2|ζ0|^2 + E ∫_0^{t∧τk} e^{(c3/c2)s} V_ζ(ζ(s), r(s))D(ζ(s), r(s), ξ(s))ξ(s) ds
  ≤ c2|ζ0|^2 + εE ∫_0^{t∧τk} e^{(c3/c2)s} |V_ζ(ζ(s), r(s))|^2 ds + (1/ε)E ∫_0^{t∧τk} e^{(c3/c2)s} |D(ζ(s), r(s), ξ(s))ξ(s)|^2 ds
  ≤ c2|ζ0|^2 + c4εE ∫_0^{t∧τk} e^{(c3/c2)s} |ζ(s)|^2 ds + (Λ^2/ε) ∫_0^t e^{(c3/c2)s} E|ξ(s)|^2 ds.
By (2.5) and (2.19), we then have

c1 E[e^{(c3/c2)(t∧τk)} |ζ(t∧τk)|^2] ≤ c2|ζ0|^2 + c4εE ∫_0^t e^{(c3/c2)s} |ζ(s∧τk)|^2 ds
  + (Λ^2/ε) · (c2c6/(c3 − c2c7)) · [exp(((c3 − c2c7)/c2)t) − 1] E|ξ(0)|^2.
By the Gronwall inequality,
\[
E\big[e^{(c_3/c_2)(t\wedge\tau_k)}|\zeta(t\wedge\tau_k)|^2\big]
\leq \frac{c_2}{c_1}\,e^{(c_4\varepsilon/c_1)t}\bigg[|\zeta_0|^2 + \frac{\Lambda^2}{\varepsilon}\bigg|\frac{c_6}{c_3 - c_2 c_7}\bigg|\Big(\exp\Big(\frac{c_3 - c_2 c_7}{c_2}\,t\Big) + 1\Big)|\xi(0)|^2\bigg]. \qquad (2.20)
\]
Let
\[ \varepsilon = \frac{1}{2}\Big(\frac{c_1 c_7}{c_4} \wedge \frac{c_1 c_3}{c_2 c_4}\Big) \]
and let k → ∞; by (2.20) we then have
\[
\begin{aligned}
E|\zeta(t)|^2 \leq{}& \frac{c_2}{c_1}\bigg[|\zeta_0|^2 + \frac{\Lambda^2}{\varepsilon}\bigg|\frac{c_6}{c_3 - c_2 c_7}\bigg|\,|\xi(0)|^2\bigg]e^{-(c_3/2c_2)t} \\
&+ \frac{c_2}{c_1}\,\frac{\Lambda^2}{\varepsilon}\bigg|\frac{c_6}{c_3 - c_2 c_7}\bigg|\,e^{-(c_7/2)t}|\xi(0)|^2,
\end{aligned}
\]
as required.
Chapter 3
On the Exponential Stability and Reachability for Switching Diffusion Processes
3.1 Introduction
The classical stochastic stability theory deals not only with moment stability but also with almost sure stability (cf. [2, 24, 36, 43, 47, 70]). Mao [44] investigated the almost sure exponential stability of stochastic differential equations with Markovian switching; there, however, almost sure exponential stability is deduced from moment exponential stability. Because almost sure exponential stability may hold even for systems that are not moment stable, it is desirable to obtain direct conditions for almost sure exponential stability that do not rely on moment stability. The present chapter provides such conditions.
This chapter is organised as follows. In Section 3.2, we investigate the almost sure stability of switching diffusion processes. In Section 3.3, the results of Section 3.2 are applied to establish a sufficient criterion for the stabilization of linear switching processes. In Section 3.4, the robustness of stability is discussed.
3.2 Almost sure exponential stability
Consider a stochastic differential equation with Markovian switching of the form
dX(t) = f(X(t), t, r(t))dt + g(X(t), t, r(t))dB(t) (3.1)
on t ≥ 0 with initial data X(0) = x0 and r(0) = i0 ∈ S, where
\[ f : \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}^n \quad \text{and} \quad g : \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}^{n\times m}. \]
In this chapter the following hypothesis is imposed on the coefficients f and g.
Assumption 3.1 Both f and g satisfy the local Lipschitz condition and the linear growth condition. That is, for each k = 1, 2, ..., there is an h_k > 0 such that
\[ |f(x,t,i) - f(y,t,i)| + |g(x,t,i) - g(y,t,i)| \leq h_k|x-y| \]
for all t ≥ 0, i ∈ S and those x, y ∈ R^n with |x| ∨ |y| ≤ k; and there is an h > 0 such that
\[ |f(x,t,i)| + |g(x,t,i)| \leq h(1+|x|) \]
for all x ∈ R^n, i ∈ S and t ≥ 0; moreover,
\[ f(0,t,i) \equiv 0 \quad \text{and} \quad g(0,t,i) \equiv 0 \]
for all i ∈ S and t ≥ 0.
It is known (cf. Mao [44]) that under Assumption 3.1, equation (3.1) has a unique continuous solution X(t; x0, i0) on t ≥ 0 for any given initial data x0 and i0. Furthermore, equation (3.1) has the solution X(t; 0, i0) ≡ 0 corresponding to the initial value x0 = 0, for any i0 ∈ S.
Definition 3.1 The solution of equation (3.1) is said to be almost surely exponentially stable if for all x0 ∈ R^n and i0 ∈ S
\[ \limsup_{t\to\infty}\frac{1}{t}\log|X(t; x_0, i_0)| < 0 \quad \text{a.s.} \]
We can now state our main result.
Theorem 3.1 Let Assumption 3.1 hold. Assume that there exists a function V ∈ C^{2,1}(R^n × R_+ × S; R_+) such that the set {x : V(x,t,i) = 0 for all t ≥ 0, i ∈ S} is either {0} or ∅. Moreover, assume there exist positive constants α, β, ρ1 and ρ2 such that
\[ DV(x,t,i) \leq -\alpha V(x,t,i), \qquad (3.2) \]
\[ \rho_1 \leq \frac{V(x,t,i)}{V(x,t,j)} \leq \rho_2 \quad \forall x \in \mathbb{R}^n,\; i,j \in S, \qquad (3.3) \]
\[ |V_x(x,t,i)g(x,t,i)| \leq \beta V(x,t,i), \qquad (3.4) \]
where
\[
\begin{aligned}
DV(x,t,i) ={}& V_t(x,t,i) + V_x(x,t,i)f(x,t,i) + V(x,t,i)\sum_{j=1}^N \gamma_{ij}\log V(x,t,j) \\
&+ \frac{1}{2}\,\mathrm{trace}\bigg[g^T(x,t,i)\Big(V_{xx}(x,t,i) - \frac{V_x^T(x,t,i)V_x(x,t,i)}{V(x,t,i)}\Big)g(x,t,i)\bigg].
\end{aligned}
\]
Then the solution of equation (3.1) is almost surely exponentially stable.
To prove this theorem, we start with a lemma. The following fact is shown in [44].
Lemma 3.1 Let Assumption 3.1 hold. Then
\[ P\{X(t; x_0, i_0) \neq 0 \ \text{on}\ t \geq 0\} = 1 \qquad (3.5) \]
for all x0 ≠ 0 in R^n and i0 ∈ S. That is, almost all the sample paths of any solution of equation (3.1) starting from a nonzero state will never reach the origin.
Proof of Theorem 3.1 Clearly, the solution of equation (3.1) is almost surely exponentially stable for x0 = 0 since X(t; 0, i0) ≡ 0. Fix any x0 ≠ 0 and write X(t; x0, i0) = X(t). By Lemma 3.1, X(t) ≠ 0 for all t ≥ 0 almost surely. Using the generalized Itô formula (1.6) and (3.2), we have
\[
\begin{aligned}
\log V(X(t), t, r(t)) &= \log V(x_0, 0, i_0) + \int_0^t L\log V(X(s), s, r(s))\,ds + M(t) \\
&= \log V(x_0, 0, i_0) + \int_0^t \frac{DV(X(s), s, r(s))}{V(X(s), s, r(s))}\,ds + M(t) \\
&\leq \log V(x_0, 0, i_0) - \alpha t + M(t), \qquad (3.6)
\end{aligned}
\]
where
\[
\begin{aligned}
M(t) ={}& \int_0^t \frac{V_x(X(s), s, r(s))}{V(X(s), s, r(s))}\,g(X(s), s, r(s))\,dB(s) \\
&+ \int_0^t \int_\Theta \big[\log V(X(s), s, i_0 + h(r(s), l)) - \log V(X(s), s, r(s))\big]\,\mu(ds, dl). \qquad (3.7)
\end{aligned}
\]
Therefore
\[ V(X(t), t, r(t)) \leq V(x_0, 0, i_0)\exp[-\alpha t + M(t)]. \qquad (3.8) \]
By (3.3) and (3.4),
\[
\bigg\langle \int_0^t \frac{V_x(X(s), s, r(s))}{V(X(s), s, r(s))}\,g(X(s), s, r(s))\,dB(s) \bigg\rangle
= \int_0^t \bigg|\frac{V_x(X(s), s, r(s))}{V(X(s), s, r(s))}\,g(X(s), s, r(s))\bigg|^2 ds \leq \beta^2 t
\]
and
\[
\begin{aligned}
\bigg\langle \int_0^t \int_\Theta \big[\log V(X(s), s, i_0 + h(r(s), l)) - \log V(X(s), s, r(s))\big]\,\mu(ds, dl) \bigg\rangle
&= \int_0^t \int_\Theta \big|\log V(X(s), s, i_0 + h(r(s), l)) - \log V(X(s), s, r(s))\big|^2\,ds\,m(dl) \\
&\leq \max\big((\log\rho_1)^2, (\log\rho_2)^2\big)\,m(\Theta)\,t.
\end{aligned}
\]
This implies that there exists a constant C, independent of t, such that
\[ \langle M(t), M(t) \rangle \leq Ct. \]
By the strong law of large numbers for local martingales (cf. Lipster [39]), we then have
\[ \lim_{t\to\infty}\frac{M(t)}{t} = 0 \quad \text{a.s.} \]
This, together with (3.6), yields
\[ \limsup_{t\to\infty}\frac{\log V(X(t), t, r(t))}{t} \leq -\alpha, \]
and the required assertion follows.
Example 3.1 Let B(t) be a scalar Brownian motion. Let r(t) be a right-continuous Markov chain taking values in S = {1, 2} with generator
\[ \Gamma = (\gamma_{ij})_{2\times 2} = \begin{pmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{pmatrix}. \]
Assume that B(t) and r(t) are independent. Consider a one-dimensional stochastic differential equation with Markovian switching of the form
dX(t) = f(X(t), t, r(t))dt + σ(r(t))X(t)dB(t) (3.9)
on t ≥ 0, where
\[ f(x,t,1) = \sin(x)\cos(2t), \qquad f(x,t,2) = 2x, \]
\[ \sigma(1) = 2, \qquad \sigma(2) = 1. \]
Clearly
\[ xf(x,t,1) \leq x^2, \qquad xf(x,t,2) \leq 2x^2 \]
for all (x, t) ∈ R × R_+. To examine the exponential stability, we construct a function V : R × S → R_+ by V(x, i) = β_i|x|^2, with β1 = 1 and β2 = β a constant to be determined. It is easy to show that the operator DV from R × R_+ × S to R has the form
\[
DV(x,t,1) = 2xf(x,t,1) + \frac{1}{2}\Big[2x\Big(2 - \frac{4x^2}{x^2}\Big)2x\Big] + x^2\big(\gamma_{11}\log(x^2) + \gamma_{12}\log(\beta x^2)\big) \leq -(2 - \gamma_{12}\log\beta)x^2.
\]
Similarly, we have
\[ DV(x,t,2) \leq -\beta(\gamma_{21}\log\beta - 3)x^2. \]
Choosing β so that 2 − γ12 log β > 0 and γ21 log β − 3 > 0, which is possible precisely when γ21 > 1.5γ12, we see that (3.2) is satisfied. It is easy to see that (3.3) and (3.4) hold. Applying Theorem 3.1, we can therefore conclude that if γ21 > 1.5γ12, then the solution of equation (3.9) is almost surely exponentially stable.
Let γ12 = 1 and γ21 = 2; then the solution of equation (3.9) tends to zero exponentially with probability one. Choosing X0 = 200 and r0 = 1, the following simulation (log scale) illustrates our theory.
[Figure 3.1: The graph of the simulation (log scale of x against t, t ∈ [0, 10000]) when γ12 = 1, γ21 = 2.]
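A simulation of this kind can be reproduced with a straightforward Euler–Maruyama scheme. The following is a minimal sketch (an illustration of the method, not the code that produced Figure 3.1): it alternates the drift and diffusion of (3.9) according to a simulated two-state Markov chain with γ12 = 1 and γ21 = 2.

```python
import math
import random

def simulate(x0=200.0, mode0=1, T=200.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the switching SDE (3.9):
    mode 1: dX = sin(X)cos(2t) dt + 2X dB,  mode 2: dX = 2X dt + X dB.
    The chain leaves mode 1 at rate gamma12 = 1 and mode 2 at rate gamma21 = 2."""
    rng = random.Random(seed)
    rates = {1: 1.0, 2: 2.0}            # gamma12, gamma21
    x, mode, t = x0, mode0, 0.0
    while t < T:
        dB = rng.gauss(0.0, math.sqrt(dt))
        if mode == 1:
            x += math.sin(x) * math.cos(2 * t) * dt + 2 * x * dB
        else:
            x += 2 * x * dt + x * dB
        # approximate the continuous-time chain: switch with probability rate*dt
        if rng.random() < rates[mode] * dt:
            mode = 3 - mode
        t += dt
    return x

# With gamma21 > 1.5*gamma12 the theory predicts almost sure exponential
# decay, so |X(T)| should be far smaller than |X(0)| for large T.
xT = simulate()
```

Mode 2 alone is unstable (Lyapunov exponent 2 − 1/2 > 0), so the decay visible in a typical run is genuinely an effect of the switching, as the theory predicts.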
3.3 Exponential Stabilization of Linear Switching Diffusion Processes
Stabilisation by feedback control is one of the most important issues in control theory (cf. [17, 21, 38, 54, 73]). So far, the known results on stabilisation for stochastic differential equations with Markovian switching are mainly concerned with the design of feedback controls under which the underlying equations become asymptotically stable in moment, e.g. in mean square (cf. [8, 71]). Results in this direction were also presented in Chapter 2. In this section we discuss stabilisation by feedback control in the sense that the underlying equations become almost surely exponentially stable, rather than stable in moment. We restrict our attention to systems whose dynamics are linear. Our stabilisation criterion will be described in terms of matrix inequalities, which can be solved efficiently.
Let us consider the following linear stochastic differential equation
\[ dX(t) = \big[A(r(t))X(t) + C(r(t))u(t)\big]dt + \sum_{k=1}^m D_k(r(t))X(t)\,dB_k(t) \qquad (3.10) \]
on t ≥ 0 with initial value X(0) = x0 ∈ R^n and r(0) = r0 ∈ S. Here u is an R^p-valued control law. For each mode r(t) = i ∈ S, we write A(i) = A_i etc. for simplicity; A_i and the D_{ki} are n × n constant matrices, while C_i is an n × p matrix.
The main aim of this section is to design a state feedback controller of the form
u(t) = H(r(t))X(t)
based on the state X(t) and the mode r(t), such that the following closed-loop system of (3.10)
\[ dX(t) = \big[A(r(t))X(t) + C(r(t))H(r(t))X(t)\big]dt + \sum_{k=1}^m D_k(r(t))X(t)\,dB_k(t) \qquad (3.11) \]
is exponentially stable. Here, for each mode r(t) = i ∈ S, H(i) = Hi is a p × n matrix.
Theorem 3.2 If
\[
\begin{pmatrix}
\Lambda_i + \alpha Q_i & q_i D_{1i}^T Q_i & q_i D_{2i}^T Q_i & \cdots & q_i D_{mi}^T Q_i \\
q_i Q_i D_{1i} & -I & 0 & \cdots & 0 \\
q_i Q_i D_{2i} & 0 & -I & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
q_i Q_i D_{mi} & 0 & 0 & \cdots & -I
\end{pmatrix} < 0, \quad i \in S, \qquad (3.12)
\]
has solutions α, Q_i and H_i, subject to Q_i = Q_i^T > 0, where I is the n × n identity matrix, q_i = 1/\sqrt{\lambda_{\max}(Q_i)} and
\[ p_i = \sum_{l=1,\,l\neq i}^N \gamma_{il}\log\lambda_{\max}(Q_l) - |\gamma_{ii}|\log\lambda_{\min}(Q_i), \qquad (3.13) \]
\[ \Lambda_i = Q_iA_i + A_i^TQ_i + Q_iC_iH_i + (C_iH_i)^TQ_i + p_iQ_i + \sum_{k=1}^m D_{ki}^TQ_iD_{ki}. \qquad (3.14) \]
Then equation (3.10) is exponentially stable with controller u(t) = H(r(t))X(t).
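Once candidate values of α, Q_i and H_i are in hand, the block matrix in (3.12) can be assembled and tested for negative definiteness numerically. The sketch below assumes NumPy and uses an illustrative scalar, single-mode instance of our own choosing (so that p_i = 0); it is a sanity check, not a solver for the matrix inequality.

```python
import numpy as np

def lmi_block(A, C, H, Ds, Q, p, alpha):
    """Assemble the block matrix of (3.12) for one mode: top-left block
    Lambda_i + alpha*Q_i, off-diagonal blocks q_i*D_ki^T*Q_i, diagonal -I."""
    n = Q.shape[0]
    m = len(Ds)
    q = 1.0 / np.sqrt(np.max(np.linalg.eigvalsh(Q)))
    Lam = (Q @ A + A.T @ Q + Q @ C @ H + (C @ H).T @ Q + p * Q
           + sum(D.T @ Q @ D for D in Ds))
    M = -np.eye(n * (m + 1))
    M[:n, :n] = Lam + alpha * Q
    for k, D in enumerate(Ds):
        r = n * (k + 1)
        M[:n, r:r + n] = q * D.T @ Q
        M[r:r + n, :n] = q * Q @ D
    return M

def is_negative_definite(M):
    return bool(np.max(np.linalg.eigvalsh(M)) < 0.0)

# Hypothetical scalar single-mode data (p_i = 0): stable drift, weak noise.
A = np.array([[-2.0]]); C = np.array([[1.0]]); H = np.array([[0.0]])
Ds = [np.array([[0.1]])]; Q = np.array([[1.0]])
M = lmi_block(A, C, H, Ds, Q, p=0.0, alpha=1.0)
```

Here the top-left entry evaluates to Λ + αQ = −4 + 0.01 + 1 = −2.99, so the 2 × 2 block matrix is negative definite and the candidate passes the test.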
Proof Let
\[ V(x,i) = x^TQ_ix. \]
The operator DV : R^n × S → R has the form
\[
DV(x,i) = 2x^TQ_i(A_i + C_iH_i)x + x^TQ_ix\sum_{l=1}^N \gamma_{il}\log x^TQ_lx + \sum_{k=1}^m x^TD_{ki}^T\Big(Q_i - \frac{Q_ixx^TQ_i}{x^TQ_ix}\Big)D_{ki}x.
\]
Noting that
\[ 2x^TQ_iA_ix = x^T[Q_iA_i + A_i^TQ_i]x, \qquad 2x^TQ_iC_iH_ix = x^T[Q_iC_iH_i + (C_iH_i)^TQ_i]x, \]
we obtain
\[
\begin{aligned}
DV(x,i) \leq{}& x^T[Q_iA_i + A_i^TQ_i + Q_iC_iH_i + (C_iH_i)^TQ_i]x + p_i\,x^TQ_ix \\
&+ x^T\sum_{k=1}^m D_{ki}^TQ_iD_{ki}x - \frac{1}{\lambda_{\max}(Q_i)}x^T\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki}x \\
\leq{}& x^T\Lambda_ix + \frac{1}{\lambda_{\max}(Q_i)}x^T\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki}x. \qquad (3.15)
\end{aligned}
\]
By Lemma 2.3 and (3.12),
\[ \Lambda_i + \alpha Q_i + \frac{1}{\lambda_{\max}(Q_i)}\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki} < 0. \]
This implies
\[ x^T\Big[\Lambda_i + \frac{1}{\lambda_{\max}(Q_i)}\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki}\Big]x \leq -\alpha x^TQ_ix. \qquad (3.16) \]
Using (3.15) and (3.16), we have
\[ DV(x,i) \leq -\alpha V(x,i). \qquad (3.17) \]
Clearly
\[ \rho_1 := \frac{\lambda_{\min}(Q_i)}{\lambda_{\max}(Q_j)} \leq \frac{V(x,i)}{V(x,j)} \leq \rho_2 := \frac{\lambda_{\max}(Q_i)}{\lambda_{\min}(Q_j)} \qquad (3.18) \]
and
\[
\Big|V_x(x,i)\sum_{k=1}^m D_{ki}x\Big| = \Big|x^T\sum_{k=1}^m (Q_iD_{ki} + D_{ki}^TQ_i)x\Big| \leq \frac{\rho\big(\sum_{k=1}^m (Q_iD_{ki} + D_{ki}^TQ_i)\big)}{\lambda_{\min}(Q_i)}\,x^TQ_ix. \qquad (3.19)
\]
By Theorem 3.1 together with (3.17), (3.18) and (3.19), equation (3.10) is exponentially stable.
Let us discuss an example to show how this method can be used to design the controllers.
Example 3.2 Let B(t) be a one-dimensional Brownian motion. Let r(t) be a right-continuous Markov chain taking values in S = {1, 2} with generator
\[ \Gamma = (\gamma_{ij})_{2\times 2} = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}. \]
Assume that B(t) and r(t) are independent. Consider a two-dimensional stochastic differential system with Markovian switching of the form
dX(t) = A(r(t))X(t)dt + C(r(t))u(t)dt + D(r(t))X(t)dB(t) (3.20)
on t ≥ 0, where
\[
A_1 = \begin{pmatrix} -3 & 1 \\ 0.1 & -2.5 \end{pmatrix}, \quad
C_1 = \begin{pmatrix} -2 & 0.1 \\ 0.05 & -4 \end{pmatrix}, \quad
D_1 = \begin{pmatrix} -3 & -0.8 \\ -0.01 & -0.1 \end{pmatrix},
\]
\[
A_2 = \begin{pmatrix} 10 & 1 \\ 2 & 15 \end{pmatrix}, \quad
C_2 = \begin{pmatrix} -20 & 0 \\ 0 & -30 \end{pmatrix}, \quad
D_2 = \begin{pmatrix} -3 & 0.5 \\ 0.1 & -1 \end{pmatrix}.
\]
By [48], we know that the system
\[ dX(t) = A(r(t))X(t)\,dt + D(r(t))X(t)\,dB(t) \qquad (3.21) \]
has the property
\[ \liminf_{t\to\infty}\frac{1}{t}\log|X(t)| > 0, \]
that is, system (3.21) is not stable without control (i.e. if we set u = 0). Therefore it is necessary to design a feedback control in order to stabilise equation (3.20). For this purpose we set, as described in Theorem 3.2,
\[ \Lambda_i = Q_iA_i + A_i^TQ_i + Q_iC_iH_i + (C_iH_i)^TQ_i + p_iQ_i + D_i^TQ_iD_i. \]
It is not difficult to verify that
\[ \begin{pmatrix} \Lambda_i + \alpha Q_i & q_iD_i^TQ_i \\ q_iQ_iD_i & -I \end{pmatrix} < 0, \quad i = 1, 2, \qquad (3.22) \]
have the solutions α = 1 and
\[
Q_1 = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} 5 & 2 \\ 2 & 4 \end{pmatrix}, \quad
H_1 = \begin{pmatrix} 3 & 4 \\ 0 & 0 \end{pmatrix}, \quad
H_2 = \begin{pmatrix} 2 & 0.11 \\ 0.1 & 2 \end{pmatrix}.
\]
Obviously, Q_1 and Q_2 are symmetric, positive-definite matrices. By Theorem 3.2 we can then conclude that equation (3.20) is almost surely exponentially stable with the feedback control u = H(r(t))X(t) and exponential rate −1.
3.4 Robust Stability of Linear Switching Diffusion Processes
In many practical situations the system parameters can only be estimated with a certain degree of uncertainty. The robustness of stability is therefore an important issue in stability theory (cf. [17, 43, 44, 45]).
Let us now consider the following equation
\[ dX(t) = [A(r(t)) + \Delta A(r(t))]X(t)\,dt + \sum_{k=1}^m D_k(r(t))X(t)\,dB_k(t). \qquad (3.23) \]
As before we write A(i) = A_i etc. Assume that
\[ \Delta A_i = M_iH_iN_i, \]
where M_i ∈ R^{n×p} and N_i ∈ R^{q×n} are known real constant matrices, but the H_i are unknown p × q matrices such that
\[ H_i^TH_i \leq I, \quad \forall i \in S. \qquad (3.24) \]
(As usual, by H_i^TH_i ≤ I we mean that I − H_i^TH_i is a nonnegative-definite matrix.) Equation (3.23) can be regarded as a perturbation of the linear jump equation
\[ \dot{X}(t) = A(r(t))X(t) \qquad (3.25) \]
obtained by taking into account the uncertainty of the system parameter matrices as well as the stochastic perturbation. Assuming that system (3.25) is exponentially stable, we are interested in finding conditions under which the system can tolerate the parameter uncertainty and the stochastic perturbation without losing the stability property.
Theorem 3.3 If there exist constants α > 0 and µ_i > 0, i ∈ S, such that
\[
\begin{pmatrix}
\Lambda_i + \alpha Q_i & \mu_iQ_iM_i & \mu_i^{-1}N_i^T & q_iD_{1i}^TQ_i & \cdots & q_iD_{mi}^TQ_i \\
\mu_iM_i^TQ_i & -I & 0 & 0 & \cdots & 0 \\
\mu_i^{-1}N_i & 0 & -I & 0 & \cdots & 0 \\
q_iQ_iD_{1i} & 0 & 0 & -I & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
q_iQ_iD_{mi} & 0 & 0 & 0 & \cdots & -I
\end{pmatrix} < 0, \quad i \in S, \qquad (3.26)
\]
have solutions Q_i, subject to Q_i = Q_i^T > 0, where p_i and q_i are defined as in Theorem 3.2 and
\[ \Lambda_i = Q_iA_i + A_i^TQ_i + p_iQ_i + \sum_{k=1}^m D_{ki}^TQ_iD_{ki}, \qquad (3.27) \]
then equation (3.23) is exponentially stable.
To prove this useful result on the robustness of stability we will need the following well-known lemma on linear matrix inequalities.
Lemma 3.2 Let M, N and H be real-valued matrices of appropriate dimensions such that H^TH ≤ I. Then for any constant µ > 0 we have
\[ MHN + N^TH^TM^T \leq \mu MM^T + \mu^{-1}N^TN. \]
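Lemma 3.2 is a matrix form of Young's inequality, and it is easy to check numerically: for any H with H^TH ≤ I, the difference µMM^T + µ^{-1}N^TN − MHN − N^TH^TM^T should be nonnegative definite. A quick randomized sanity check (illustrative only), assuming NumPy:

```python
import numpy as np

def young_gap(M, N, H, mu):
    """Return mu*M*M^T + (1/mu)*N^T*N - MHN - N^T H^T M^T (should be PSD)."""
    lhs = M @ H @ N + N.T @ H.T @ M.T
    rhs = mu * M @ M.T + (1.0 / mu) * N.T @ N
    return rhs - lhs

rng = np.random.default_rng(0)
n, p, q = 4, 3, 2
M = rng.standard_normal((n, p))
N = rng.standard_normal((q, n))
H = rng.standard_normal((p, q))
H /= max(1.0, np.linalg.norm(H, 2))   # scale spectral norm so H^T H <= I
gap = young_gap(M, N, H, mu=0.7)
min_eig = np.min(np.linalg.eigvalsh(gap))   # >= 0 up to rounding
```

The inequality follows from expanding the square (√µ M − µ^{-1/2} N^T H^T)(√µ M − µ^{-1/2} N^T H^T)^T ≥ 0 and then using H^TH ≤ I, which is why any admissible H and any µ > 0 should pass.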
Proof of Theorem 3.3 Let
\[ V(x,i) = x^TQ_ix. \]
The operator DV : R^n × S → R has the form
\[
DV(x,i) = 2x^TQ_i(A_i + \Delta A_i)x + x^TQ_ix\sum_{l=1}^N \gamma_{il}\log x^TQ_lx + \sum_{k=1}^m x^TD_{ki}^T\Big(Q_i - \frac{Q_ixx^TQ_i}{x^TQ_ix}\Big)D_{ki}x.
\]
Note that
\[ 2x^TQ_iA_ix = x^T[Q_iA_i + A_i^TQ_i]x. \]
It follows from Lemma 3.2 and H_i^TH_i ≤ I that
\[
2x^TQ_i\Delta A_ix = x^T[Q_i\Delta A_i + (\Delta A_i)^TQ_i]x = x^T\big[(Q_iM_i)H_iN_i + N_i^TH_i^T(Q_iM_i)^T\big]x \leq x^T\big[\mu_i^2Q_iM_iM_i^TQ_i + \mu_i^{-2}N_i^TN_i\big]x,
\]
so
\[
\begin{aligned}
DV(x,i) \leq{}& x^T[Q_iA_i + A_i^TQ_i + p_iQ_i]x + x^T\big[\mu_i^2Q_iM_iM_i^TQ_i + \mu_i^{-2}N_i^TN_i\big]x \\
&+ x^T\sum_{k=1}^m D_{ki}^TQ_iD_{ki}x + \frac{1}{\lambda_{\max}(Q_i)}x^T\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki}x \\
={}& x^T\Lambda_ix + x^T\big[\mu_i^2Q_iM_iM_i^TQ_i + \mu_i^{-2}N_i^TN_i\big]x + \frac{1}{\lambda_{\max}(Q_i)}x^T\sum_{k=1}^m D_{ki}^TQ_i^2D_{ki}x. \qquad (3.28)
\end{aligned}
\]
By (3.26), we have
\[ DV(x,i) \leq -\alpha V(x,i). \]
Just as in the proof of Theorem 3.2, V(x,i) satisfies the conditions of Theorem 3.1. Therefore equation (3.23) is exponentially stable.
Let us discuss one more example to close this chapter.
Example 3.3 Let B(t) be a one-dimensional Brownian motion. Let r(t) be a right-continuous Markov chain taking values in S = {1, 2} with generator
\[ \Gamma = (\gamma_{ij})_{2\times 2} = \begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}. \]
Assume that B(t) and r(t) are independent. Consider a two-dimensional stochastic differential system with Markovian switching of the form
\[ dX(t) = [A(r(t)) + \Delta A(r(t))]X(t)\,dt + D(r(t))X(t)\,dB(t) \qquad (3.29) \]
on t ≥ 0, where ΔA_i = M_iH_iN_i, H_i^TH_i ≤ I and
\[
A_1 = \begin{pmatrix} -20 & 1 \\ 2 & -30 \end{pmatrix}, \quad
D_1 = \begin{pmatrix} 0.5 & 0.25 \\ -0.5 & -0.25 \end{pmatrix}, \quad
M_1 = \begin{pmatrix} 2 & 3 \\ 1 & -2 \end{pmatrix}, \quad
N_1 = \begin{pmatrix} 0.25 & -0.25 \\ 0.5 & 0 \end{pmatrix},
\]
\[
A_2 = \begin{pmatrix} -19 & -2 \\ 10 & -9 \end{pmatrix}, \quad
D_2 = \begin{pmatrix} -0.5 & -1 \\ 2 & 2 \end{pmatrix}, \quad
M_2 = \begin{pmatrix} -\tfrac{1}{2} & -\tfrac{1}{3} \\ 0.25 & 0.5 \end{pmatrix}, \quad
N_2 = \begin{pmatrix} -\tfrac{1}{3} & 0 \\ 0.5 & 0.25 \end{pmatrix}.
\]
Let
\[ \Lambda_i := Q_iA_i + A_i^TQ_i + p_iQ_i + D_i^TQ_iD_i, \quad i = 1, 2. \]
The matrix inequalities (3.26) are equivalent to
\[
\begin{pmatrix}
\Lambda_i + \alpha Q_i & \mu_iQ_iM_i & \mu_i^{-1}N_i^T & q_iD_i^TQ_i \\
\mu_iM_i^TQ_i & -I & 0 & 0 \\
\mu_i^{-1}N_i & 0 & -I & 0 \\
q_iQ_iD_i & 0 & 0 & -I
\end{pmatrix} < 0, \quad i = 1, 2. \qquad (3.30)
\]
It is easy to verify that these inequalities have the solutions α = 0.5, µ1 = 1, µ2 = 0.5 and
\[
Q_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}.
\]
By Theorem 3.3 we can conclude that system (3.29) is almost surely exponentially stable with exponential rate −1/2.
3.5 Reachability
Theorem 3.4 Let Assumption 3.1 hold. Assume that there exists a function V ∈ C^{2,1}(R^n × R_+ × S; R_+) such that for some h > 0,
\[ LV(x,t,i) \leq h(1 + V(x,t,i)) \quad \forall (x,t,i) \in \mathbb{R}^n \times \mathbb{R}_+ \times S. \qquad (3.31) \]
Then
\[ P\big(|X(t)| \geq R \ \text{for some}\ t \in [0,T]\big) \leq \frac{e^{hT}}{v_R}\big(V(x_0, 0, i_0) + hT\big), \]
where
\[ v_R = \inf\{V(x,t,i) : |x| \geq R,\ t \geq 0,\ i \in S\}. \]
Proof For a sufficiently large number R, define the stopping time
\[ \theta = \inf\{t \in [0,T] : |X(t)| \geq R\}. \]
Applying the generalized Itô formula to V(X(t), t, r(t)) yields
\[ E[V(X(t\wedge\theta), t\wedge\theta, r(t\wedge\theta))] = V(x_0, 0, i_0) + E\int_0^{t\wedge\theta} LV(X(s), s, r(s))\,ds. \]
By condition (3.31),
\[
\begin{aligned}
E[V(X(t\wedge\theta), t\wedge\theta, r(t\wedge\theta))] &\leq V(x_0, 0, i_0) + hE\int_0^{t\wedge\theta}\big(1 + V(X(s), s, r(s))\big)\,ds \\
&\leq V(x_0, 0, i_0) + hT + h\int_0^t EV(X(s\wedge\theta), s\wedge\theta, r(s\wedge\theta))\,ds.
\end{aligned}
\]
Using the Gronwall inequality, we obtain
\[ E[V(X(T\wedge\theta), T\wedge\theta, r(T\wedge\theta))] \leq (V(x_0, 0, i_0) + hT)e^{hT}. \qquad (3.32) \]
Noting that |X(θ)| = R whenever θ < T, we derive from (3.32) that
\[ (V(x_0, 0, i_0) + hT)e^{hT} \geq E\big[V(X(\theta), \theta, r(\theta))I_{\{\theta<T\}}\big] \geq v_R\,P(\theta < T). \]
That is,
\[ P(\theta < T) \leq \frac{e^{hT}}{v_R}\big(V(x_0, 0, i_0) + hT\big). \]
This implies
\[ P\big(|X(t)| \geq R \ \text{for some}\ t \in [0,T]\big) \leq \frac{e^{hT}}{v_R}\big(V(x_0, 0, i_0) + hT\big), \]
as required. The proof is therefore complete.
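The bound of Theorem 3.4 is easy to probe by Monte Carlo. The sketch below uses an illustrative scalar example of our own choosing (not one from the text): for dX = −X dt + 0.5X dB and V(x) = x², one has LV = −1.75x² ≤ h(1 + x²) with h = 0.25 and v_R = R², so the theorem bounds the exit probability by e^{hT}(x₀² + hT)/R².

```python
import math
import random

def exit_probability(x0=1.0, R=5.0, T=1.0, dt=1e-3, paths=2000, seed=1):
    """Estimate P(|X(t)| >= R for some t in [0,T]) for dX = -X dt + 0.5 X dB."""
    rng = random.Random(seed)
    hits = 0
    steps = int(T / dt)
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            x += -x * dt + 0.5 * x * rng.gauss(0.0, math.sqrt(dt))
            if abs(x) >= R:
                hits += 1
                break
    return hits / paths

h, x0, R, T = 0.25, 1.0, 5.0, 1.0
bound = math.exp(h * T) * (x0**2 + h * T) / R**2   # v_R = R^2 for V(x) = x^2
emp = exit_probability()
# The empirical exit probability should respect the theoretical bound.
```

For these numbers the bound is roughly 0.064, while the empirical probability is essentially zero; the Lyapunov bound is conservative, as expected of an exponential-martingale estimate.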
Theorem 3.5 Let Assumption 3.1 hold, let f and g be one-dimensional and let B(t) be a scalar Brownian motion. Moreover, assume there exists a function V ∈ C^2(R; R_+) satisfying the following conditions:
\[ LV(x,t,i) - \frac{1}{2V(x)}g^T(x,t,i)V_x^T(x)V_x(x)g(x,t,i) \leq -\alpha V(x), \qquad (3.33) \]
\[ |V_x(x)g(x,t,i)| \leq \beta V(x). \qquad (3.34) \]
Then, for any x0 ≠ 0, we have
\[
P\Big(\sup_{0\leq t\leq T} V(X(t)) \geq \delta\Big)
= \frac{1}{2}\bigg\{1 - \mathrm{erf}\Big(\frac{\ln(\delta/V(x_0)) + \alpha T}{\beta\sqrt{2T}}\Big)
+ \exp\Big(-\frac{2\alpha}{\beta^2}\ln\frac{\delta}{V(x_0)}\Big)\Big[1 - \mathrm{erf}\Big(\frac{\ln(\delta/V(x_0)) - \alpha T}{\beta\sqrt{2T}}\Big)\Big]\bigg\},
\]
which is the expression of Lemma 3.3 below with d = α/β and L = ln(δ/V(x_0))/β.
To prove the Theorem, we need the following lemma.
Lemma 3.3 If B(t) is a one-dimensional Brownian motion, then for any positive constants d and T,
\[
P\Big(\sup_{t\in[0,T]}(-dt + B(t)) \geq L\Big)
= \frac{1}{2}\bigg\{1 - \mathrm{erf}\Big(\frac{L + dT}{\sqrt{2T}}\Big) + \exp(-2dL)\Big[1 - \mathrm{erf}\Big(\frac{L - dT}{\sqrt{2T}}\Big)\Big]\bigg\},
\]
where $\mathrm{erf}(a) = \frac{2}{\sqrt{\pi}}\int_0^a \exp(-u^2)\,du$.
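Lemma 3.3 is the classical reflection-principle formula for the running maximum of a drifted Brownian motion, and it can be checked against simulation. A minimal sketch (the parameter values d = 0.5, L = 1, T = 1 are chosen purely for illustration):

```python
import math
import random

def analytic(d, L, T):
    """P(sup_{t<=T} (-d*t + B(t)) >= L) as given by Lemma 3.3."""
    a = 0.5 * (1.0 - math.erf((L + d * T) / math.sqrt(2 * T)))
    b = 0.5 * math.exp(-2 * d * L) * (1.0 - math.erf((L - d * T) / math.sqrt(2 * T)))
    return a + b

def monte_carlo(d, L, T, paths=5000, steps=500, seed=2):
    """Crude Euler estimate of the same barrier-crossing probability."""
    rng = random.Random(seed)
    dt = T / steps
    hits = 0
    for _ in range(paths):
        x = 0.0
        for _ in range(steps):
            x += -d * dt + rng.gauss(0.0, math.sqrt(dt))
            if x >= L:
                hits += 1
                break
    return hits / paths

p_exact = analytic(0.5, 1.0, 1.0)   # about 0.18
p_mc = monte_carlo(0.5, 1.0, 1.0)
```

The discretized maximum slightly underestimates the continuous-time one, so the Monte Carlo figure sits marginally below the analytic value; the two agree to within sampling error.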
Proof of Theorem 3.5 For any x0 ≠ 0, write X(t; x0, i0) = X(t). Applying the generalized Itô formula gives
\[ d\ln V(X(t)) = F(X(t), t, r(t))\,dt + G(X(t), t, r(t))\,dB(t), \qquad (3.35) \]
where
\[ F(x,t,i) = \frac{1}{V(x)}\Big(LV(x,t,i) - \frac{1}{2V(x)}g^T(x,t,i)V_x^T(x)V_x(x)g(x,t,i)\Big), \]
\[ G(x,t,i) = \frac{1}{V(x)}V_x(x)g(x,t,i). \]
Using (3.33) and (3.34), we compute
\[
\begin{aligned}
P\Big(\sup_{0\leq t\leq T} V(X(t)) \geq \delta\Big)
&= P\Big(\sup_{0\leq t\leq T} V(x_0)\exp\Big(\int_0^t F(X(s), s, r(s))\,ds + \int_0^t G(X(s), s, r(s))\,dB(s)\Big) \geq \delta\Big) \\
&= P\Big(\sup_{0\leq t\leq T}\Big[\int_0^t F(X(s), s, r(s))\,ds + \int_0^t G(X(s), s, r(s))\,dB(s)\Big] \geq \ln\frac{\delta}{V(x_0)}\Big) \\
&= P\Big(\sup_{0\leq t\leq T}(-\alpha t + \beta B(t)) \geq \ln\frac{\delta}{V(x_0)}\Big).
\end{aligned}
\]
The required assertion follows from Lemma 3.3.
Chapter 4
Invariant Measure of Stochastic Hybrid Processes with Jumps
4.1 Introduction
In this chapter, our aim is to establish criteria for the existence of an invariant measure for a non-linear stochastic hybrid system with jumps
\[ X(t) = \int_0^t f(X(s), r(s))\,ds + \int_0^t g(X(s), r(s))\,dB(s) + \int_{[0,t]\times\mathbb{R}^d} h(X(s-), \rho)\,N(ds\,d\rho). \qquad (4.1) \]
It is well known that once the existence of the invariant measure of an SDE is established, one may compute it by solving the associated PDE, known as the forward equation or the Kolmogorov–Fokker–Planck equation [33]. If the system is a linear SDE, one can solve the Fokker–Planck equation explicitly. For a nonlinear SDE, however, and especially in the case of Eq. (4.1), the situation is more complex and solving the PDEs is nontrivial. As an alternative, we derive the relation between the probability density of Eq. (4.1) and those of the corresponding SDEs.
In Section 4.2 we give the formal definition of stochastic hybrid systems. In Section 4.3, a generalized Itô formula for Eq. (4.1) is established, and the relations between the probability densities and invariant measures of different stochastic systems are discussed. Section 4.4 provides some sufficient conditions for the existence of an invariant measure in terms of Lyapunov functions.
4.2 Stochastic Hybrid Systems with Jump
Let Π(·) be a probability measure on the Borel subsets of R^d that has compact support Γ. Assume there exist mutually independent sequences of i.i.d. random variables {ν_n, 0 ≤ n < ∞} and {ρ_n, 0 ≤ n < ∞}, where the ν_n are exponentially distributed with mean 1/λ and the ρ_n have distribution Π(·). Assume also that these random variables are independent of B(·) and r(·). Let τ_0 = 0 and τ_{n+1} = τ_n + ν_n. Then the τ_n will be the jump times of the process X(t). Let h : R^n × R^d → R^n be a bounded measurable function. Starting from a given initial condition X(0) = x0, r(0) = i0, on each inter-jump interval [τ_k, τ_{k+1}) the state equations are of the form
\[
\begin{aligned}
dX(t) &= f(X(t), r(t))\,dt + g(X(t), r(t))\,dB(t), \quad t \in [\tau_k, \tau_{k+1}), \\
X(\tau_k) &= X(\tau_k-) + h(X(\tau_k-), \rho_k),
\end{aligned} \qquad (4.2)
\]
where
\[ f : \mathbb{R}^n \times S \to \mathbb{R}^n, \qquad g : \mathbb{R}^n \times S \to \mathbb{R}^{n\times m}. \]
The process thus constructed is defined for all t ≥ 0 since τ_n → ∞ as n → ∞ a.s. The mutual independence of the components used to construct the process implies that (X(t), r(t)) is a time-homogeneous Markov process [37, 26]. In this chapter we shall discuss the invariant measure for stochastic differential equations with Poisson jumps. Assume that the (ν_i, ρ_i) are the point masses of the Poisson random measure. The random variable inf{ν_i − t : ν_i ≥ t} is exponentially distributed with mean 1/λ. Let N(·) be a Poisson random measure with intensity measure λ dt × Π(dy). Given that (ν, ρ) is a point mass of N(·),
ρ is distributed according to Π(·). Eq. (4.2) can then be written in the form [37]
\[ X(t) = \int_0^t f(X(s), r(s))\,ds + \int_0^t g(X(s), r(s))\,dB(s) + \int_{[0,t]\times\mathbb{R}^d} h(X(s-), \rho)\,N(ds\,d\rho). \qquad (4.3) \]
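The construction above — diffuse between exponentially spaced jump times, then apply the bounded reset map h — is straightforward to simulate. A minimal sketch with illustrative dynamics of our own choosing (f(x) = −x, g(x) = 0.1x, h(x, z) = 0.5z with ρ ~ Uniform(−1, 1), and no regime switching, for brevity):

```python
import math
import random

def simulate_jump_diffusion(x0=1.0, lam=1.0, T=10.0, dt=1e-3, seed=3):
    """Simulate (4.2): dX = -X dt + 0.1 X dB between jumps; at each jump time
    tau_k (exponential inter-arrivals, rate lam) set X += h(X, rho) = 0.5*rho."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    jump_times = []
    next_jump = rng.expovariate(lam)
    while t < T:
        if next_jump <= t:
            rho = rng.uniform(-1.0, 1.0)     # compactly supported Pi
            x += 0.5 * rho                   # bounded jump: |h| <= M = 0.5
            jump_times.append(next_jump)
            next_jump += rng.expovariate(lam)
        x += -x * dt + 0.1 * x * rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return x, jump_times

xT, taus = simulate_jump_diffusion()
```

The recorded jump times are exactly the τ_n = τ_{n-1} + ν_n of the construction, so they are strictly increasing and accumulate at rate λ on average.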
For the existence and uniqueness of the solution we shall impose a hypothesis [26]:
Assumption 4.1 f and g satisfy Assumption 2.1. Moreover, there is an M > 0 such that
\[ |h(x,z)| \leq M \]
for all x ∈ R^n and z ∈ R^d.
To state our main result, we need a few more pieces of notation. Let C^2(R^n × S; R_+) denote the family of all non-negative functions V(x, i) on R^n × S which are twice continuously differentiable in x. Let
\[ \lambda(x) = \lambda\int_{\{y :\, h(x,y)\neq 0\}}\Pi(dy) \leq \lambda, \qquad \Pi(x, A) = \int_{\{y :\, h(x,y)\in A,\ h(x,y)\neq 0\}}\Pi(dy). \]
If V ∈ C^2(R^n × S; R_+), define operators from R^n × S to R by
\[ L^{(1)}V(x,i) = V_x(x,i)f(x,i) + \frac{1}{2}\mathrm{trace}\big[g^T(x,i)V_{xx}(x,i)g(x,i)\big], \qquad (4.4) \]
\[ L^{(2)}V(x,i) = \lambda(x)\int_\Gamma \big[V(x+y, i) - V(x,i)\big]\Pi(x, dy), \qquad (4.5) \]
\[ LV(x,i) = L^{(1)}V(x,i) + L^{(2)}V(x,i), \qquad (4.6) \]
\[ \mathcal{L}V(x,i) = LV(x,i) + \sum_{j=1}^N q_{ij}V(x,j). \qquad (4.7) \]
We conclude this section by defining the invariant measure of equation (4.1) ([6]). Let X^{x0,i0} denote the solution with initial data X(0) = x0, r(0) = i0, and let Y^{x0,i0}(t) denote the R^n × S-valued process (X^{x0,i0}(t), r^{i0}(t)). Then Y(t) is a time-homogeneous Markov process. Let P(t, (x,i), dy × {j}) denote the transition probability of the process Y(t).
Definition 4.1 The probability measure µ is said to be an invariant measure of Y(t) = (X(t), r(t)) if for any t > 0
\[ \sum_{l=1}^N \int_{\mathbb{R}^n} P(t, (x,l), A\times\{j\})\,\mu(dx\times\{l\}) = \mu(A\times\{j\}). \]
4.3 Transition Probability, Probability Density and Invariant Measure
In this section, we discuss the relations among transition probabilities, probability densities and invariant measures. For future use we first prove the generalized Itô formula as a lemma. Fix any x0 and i0 and write X^{x0,i0}(t) = X(t) for simplicity.
Lemma 4.1 Let V ∈ C^2(R^n × S; R_+) and let τ1, τ2 be bounded stopping times such that τ1 ≤ τ2 a.s. If V(X(t), r(t)) and $\mathcal{L}V(X(t), r(t))$ etc. are bounded on t ∈ [τ1, τ2], then
\[ EV(X(\tau_2), r(\tau_2)) = EV(X(\tau_1), r(\tau_1)) + E\int_{\tau_1}^{\tau_2} \mathcal{L}V(X(s), r(s))\,ds. \qquad (4.8) \]
Proof For any s < t, define
\[
\begin{aligned}
J_V(i, s, t) ={}& \sum_{s\leq u\leq t}\big[V(X(u), i) - V(X(u-), i)\big] \\
&- \int_s^t \lambda(X(u))\int_\Gamma \big[V(X(u) + y, r(u)) - V(X(u), r(u))\big]\Pi(X(u), dy)\,du.
\end{aligned}
\]
Let s = σ0 < σ1 < ··· < σ_v < t be the times at which r(u) jumps. Applying the Itô formula to V(X(u), i) on the intervals [s, σ1), (σ1, σ2), ..., (σ_v, t], we have that
\[ V(X(\sigma_1), i) - V(X(s), i) = \int_s^{\sigma_1} LV(X(u), i)\,du + \int_s^{\sigma_1} V_x(X(u), i)g(X(u), i)\,dB(u) + J_V(i, s, \sigma_1), \]
\[ V(X(\sigma_{l+1}), i) - V(X(\sigma_l), i) = \int_{\sigma_l}^{\sigma_{l+1}} LV(X(u), i)\,du + \int_{\sigma_l}^{\sigma_{l+1}} V_x(X(u), i)g(X(u), i)\,dB(u) + J_V(i, \sigma_l, \sigma_{l+1}), \quad l = 1, \ldots, v-1, \]
\[ V(X(t), i) - V(X(\sigma_v), i) = \int_{\sigma_v}^t LV(X(u), i)\,du + \int_{\sigma_v}^t V_x(X(u), i)g(X(u), i)\,dB(u) + J_V(i, \sigma_v, t). \]
Setting s = 0 and substituting i = i0 in the first equation, i = r(σ_l) in the second and i = r(σ_v) in the third, we get
\[
\begin{aligned}
V(X(t), r(t)) - V(x_0, i_0) &= \int_0^t LV(X(s), r(s))\,ds + M(t) + \sum_{l=1}^v \big[V(X(\sigma_l), r(\sigma_l)) - V(X(\sigma_l), r(\sigma_l-))\big] \\
&= \int_0^t LV(X(s), r(s))\,ds + M(t) + \sum_{j=1}^N \int_0^t \gamma_{r(s)j}V(X(s), j)\,ds \\
&= \int_0^t \mathcal{L}V(X(s), r(s))\,ds + M(t),
\end{aligned}
\]
where
\[ M(t) = \int_0^t V_x(X(u), r(u))g(X(u), r(u))\,dB(u) + \sum_{l=1}^v J_V(r(\sigma_l), 0, t) \]
is a martingale. The required assertion follows by taking expectations on both sides.
For t ∈ R_+, i ∈ S, x ∈ R^n and A ∈ B(R^n), let P^i(t, x, A) denote the transition probability of the stochastic differential equation
\[ dX(t) = f(X(t), i)\,dt + g(X(t), i)\,dB(t) + \int_{[0,t]\times\mathbb{R}^d} h(X(s-), \rho)\,N(ds\,d\rho), \qquad (4.9) \]
and let P_0^i(t, x, A) denote the transition probability of the stochastic differential equation
\[ dX(t) = f(X(t), i)\,dt + g(X(t), i)\,dB(t). \qquad (4.10) \]
Let σ_i denote the sojourn time of r(t) in state i ∈ S. Then we have the well-known facts that, for t > 0, A ∈ B(R^n) and i, j ∈ S,
\[ P(\sigma_i > t) = e^{\gamma_{ii}t}, \qquad (4.11) \]
\[
\begin{aligned}
P(t, (x,i), A\times\{j\}) &= E_{x,i}\big[I_{\{X(t)\in A,\, r(t)=j\}} \mid \sigma_i > t\big]P_{x,i}(\sigma_i > t) + E_{x,i}\big[I_{\{X(t)\in A,\, r(t)=j\}}I_{\{\sigma_i < t\}}\big] \\
&= e^{\gamma_{ii}t}P^i(t, x, A)\delta_{ij} - \int_0^t \gamma_{ii}e^{\gamma_{ii}s}\bigg[\int_{\mathbb{R}^n} P^i(s, x, dy)\sum_{l\neq i}\gamma_{il}P(t-s, (y,l), A\times\{j\})\bigg]ds \qquad (4.12)
\end{aligned}
\]
and
\[
\begin{aligned}
P(t, (x,i), A\times\{j\}) &= E_{x,i}\big[I_{\{X(t)\in A,\, r(t)=j\}} \mid \sigma_i > t\big]P_{x,i}(\sigma_i > t) + E_{x,i}\big[I_{\{X(t)\in A,\, r(t)=j\}}I_{\{\sigma_i < t\}}\big] \\
&= e^{\gamma_{ii}t}P^i(t, x, A)\delta_{ij} - \int_0^t \gamma_{ii}e^{\gamma_{ii}s}\bigg[\int_{\mathbb{R}^n}\sum_{l\neq i}\gamma_{lj}P(s, (x,i), dy\times\{l\})P^l(t-s, y, A)\bigg]ds. \qquad (4.13)
\end{aligned}
\]
By (4.13), we have the following lemma.
Lemma 4.2 Let µ denote the invariant measure of P(t, (x,i), A × {j}). Then for any A ∈ B(R^n) and j ∈ S,
\[
\begin{aligned}
\mu(A\times\{j\}) ={}& \sum_{i=1}^N \int_{\mathbb{R}^n} e^{q_{ii}t}P^i(t, x, A)\delta_{ij}\,\mu(dx\times\{i\}) \\
&- \sum_{i=1}^N \int_0^t q_{ii}e^{q_{ii}s}\bigg[\int_{\mathbb{R}^n}\sum_{l\neq i} q_{lj}P^l(t-s, x, A)\,\mu(dx\times\{i\})\bigg]ds. \qquad (4.14)
\end{aligned}
\]
From (4.13) and (4.14), if we know the transition probability of Eq. (4.9), then we can obtain the transition probability and invariant measure of Eq. (4.1). Using (4.13) again, the relation between the probability densities is given by the following lemma.
Lemma 4.3 Let p and p^i denote the probability densities of P and P^i, respectively. Then for any t ≥ 0, x, y ∈ R^n and i, j ∈ S,
\[
p(t, (x,i), y\times\{j\}) = e^{\gamma_{ii}t}p^i(t, x, y)\delta_{ij} - \int_0^t \gamma_{ii}e^{\gamma_{ii}s}\bigg[\int_{\mathbb{R}^n}\sum_{l\neq i}\gamma_{lj}\,p(s, (x,i), dz\times\{l\})\,p^l(t-s, z, y)\bigg]ds. \qquad (4.15)
\]
On the other hand, by [63, Theorem 14, p. 35] and noting that Π(R^d) = 1, we compute
\[
\begin{aligned}
P^i(t, x, A) ={}& e^{-t}P_0^i(t, x, A) + \int_0^t du\int\!\!\int e^{-t}P_0^i(t-u, y + h(y,\theta), A)\,P_0^i(u, x, dy)\,\Pi(d\theta) \\
&+ \sum_{n=2}^\infty \int_{0<u_1<\cdots<u_n<t}\int\!\!\int\cdots\int\!\!\int e^{-u_1}P_0^i(u_1, x, dy_1)\Pi(d\theta)\,e^{-(u_2-u_1)} \\
&\quad\times P_0^i(u_2-u_1, y_1 + h(y_1, \theta_1), dy_2)\times\cdots\times e^{-(u_n-u_{n-1})} \\
&\quad\times P_0^i(u_n-u_{n-1}, y_{n-1} + h(y_{n-1}, \theta_{n-1}), dy_n)\,e^{-(t-u_n)} \\
&\quad\times P_0^i(t-u_n, y_n + h(y_n, \theta_n), A)\,\Pi(d\theta_1)\cdots\Pi(d\theta_n)\,du_1\cdots du_n. \qquad (4.16)
\end{aligned}
\]
Let p_0^i denote the probability density of P_0^i. By (4.12) and (4.16), we have
\[
\begin{aligned}
p(t, (x,i), y\times\{j\})
={}& e^{\gamma_{ii}t}\delta_{ij}e^{-t}p_0^i(t, x, y) + \sum_{n=1}^\infty \delta_{ij}\int_{0<u_1<\cdots<u_n<t}\int\!\!\int\cdots\int\!\!\int e^{-u_1} \\
&\quad\times p_0^i(u_1, x, dy_1)\Pi(d\theta)\,e^{-(u_2-u_1)}p_0^i(u_2-u_1, y_1 + h(y_1, \theta_1), dy_2)\times\cdots\times e^{-(u_n-u_{n-1})} \\
&\quad\times p_0^i(u_n-u_{n-1}, y_{n-1} + h(y_{n-1}, \theta_{n-1}), dy_n)\,e^{-(t-u_n)} \\
&\quad\times p_0^i(t-u_n, y_n + h(y_n, \theta_n), y)\,\Pi(d\theta_1)\cdots\Pi(d\theta_n)\,du_1\cdots du_n \\
&- \sum_{l\neq i}\gamma_{il}\gamma_{ii}\int_0^t e^{\gamma_{ii}s}\,ds\int_{\mathbb{R}^n} e^{-s}p_0^i(t, x, dy)\,p(t-s, (y,l), y\times\{j\}) \\
&- \sum_{n=1}^\infty\sum_{l\neq i}\gamma_{il}\gamma_{ii}\int_0^t e^{\gamma_{ii}s}\,ds\int_{\mathbb{R}^n} p(t-s, (y,l), y\times\{j\})\int_{0<u_1<\cdots<u_n<s}\int\!\!\int\cdots\int\!\!\int e^{-u_1} \\
&\quad\times p_0^i(u_1, x, dy_1)\Pi(d\theta)\,e^{-(u_2-u_1)}p_0^i(u_2-u_1, y_1 + h(y_1, \theta_1), dy_2)\times\cdots\times e^{-(u_n-u_{n-1})} \\
&\quad\times p_0^i(u_n-u_{n-1}, y_{n-1} + h(y_{n-1}, \theta_{n-1}), dy_n)\,e^{-(s-u_n)} \\
&\quad\times p_0^i(s-u_n, y_n + h(y_n, \theta_n), dy)\,\Pi(d\theta_1)\cdots\Pi(d\theta_n)\,du_1\cdots du_n. \qquad (4.17)
\end{aligned}
\]
Theorem 4.1 The probability density of Eq. (4.1) and that of the corresponding SDEs can be related by(4.17).
4.4 Existence of Invariant Measure
In this section we discuss the existence of an invariant measure for Eq. (4.1). Let Y(t) denote the R^n × S-valued process (X(t), r(t)). Then Y(t) is a time-homogeneous Markov process. Let P(R^n × S) be the family of all probability measures on R^n × S. Define by L the family of mappings f : R^n × S → R satisfying
\[ |f(x,i) - f(y,j)| \leq |x - y| + d(i,j) \quad \text{and} \quad |f(x,i)| \leq 1, \]
where
\[ d(i,j) = \begin{cases} 1, & \text{if } i \neq j, \\ 0, & \text{if } i = j. \end{cases} \]
For P1, P2 ∈ P(R^n × S) define the metric d_L as follows:
\[ d_{\mathbb{L}}(P_1, P_2) = \sup_{f\in\mathbb{L}}\bigg|\sum_{i=1}^N \int_{\mathbb{R}^n} f(x,i)P_1(dx, i) - \sum_{i=1}^N \int_{\mathbb{R}^n} f(x,i)P_2(dx, i)\bigg|. \]
It is known that weak convergence of probability measures is a metric concept (Ikeda and Watanabe [26, Proposition 2.5]). In other words, a sequence {P_k}_{k≥1} of probability measures in P(R^n × S) converges weakly to a probability measure P0 ∈ P(R^n × S) if and only if
\[ \lim_{k\to\infty} d_{\mathbb{L}}(P_k, P_0) = 0. \]
Let us now define the weak convergence for Y (t).
Definition 4.2 Y(t) is said to converge weakly to π(·,·) ∈ P(R^n × S) if the transition probability measure P(t, (x,i), dy × {j}) converges weakly to π(dy × {j}) as t → ∞ for every (x, i) ∈ R^n × S, that is,
\[ \lim_{t\to\infty}\Big(\sup_{f\in\mathbb{L}}\big|Ef(Y(t)) - E_\pi f\big|\Big) = 0, \]
where
\[ E_\pi f = \sum_{i=1}^N \int_{\mathbb{R}^n} f(y,i)\,\pi(dy, i). \]
Obviously, weak convergence of Y(t) to a probability measure implies the existence of a unique invariant probability measure for Y(t). To establish this property we impose the following assumptions.
Assumption 4.2 For any (x, i) ∈ R^n × S, we have
\[ \sup_{0\leq t<\infty} E|X^{x,i}(t)|^2 < \infty. \qquad (4.18) \]
Assumption 4.3 For any (x, y, i) ∈ R^n × R^n × S, we have
\[ \lim_{t\to\infty} E|X^{x,i}(t) - X^{y,i}(t)|^2 = 0. \qquad (4.19) \]
Using the same method as Yuan and Mao [76], we have the following result.
Theorem 4.2 Under Assumptions 4.1, 4.2 & 4.3, Y (t) converges weakly to a probability measure.
Theorem 4.2 implies that Y(t) has a unique invariant measure under Assumptions 4.2 and 4.3. It is therefore necessary to establish sufficient criteria for these properties so that Theorem 4.2 is applicable. Assumption 4.2 is concerned with boundedness, while Assumption 4.3 is associated with uniform asymptotic stability. The importance of studying both of them is therefore clear.
Lemma 4.4 Assume that there exist a function V ∈ C^2(R^n × S; R_+) and positive numbers c1, c2 and β such that
\[ c_1|x|^2 \leq V(x,i) \qquad (4.20) \]
and
\[ LV(x,i) \leq -c_2V(x,i) + \beta \qquad (4.21) \]
for all (x, i) ∈ R^n × S. Then Assumption 4.2 holds.
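The drift condition (4.21) can be seen at work in simulation: with linearly decaying drift and bounded jumps, E|X(t)|² stays uniformly bounded in time. A minimal sketch with illustrative dynamics of our own choosing (f(x) = −x, g(x) = 0.1x, h(x, z) = 0.5z with ρ ~ Uniform(−1, 1), for which V(x) = x² satisfies (4.20)–(4.21)):

```python
import math
import random

def second_moment(t_grid, paths=500, lam=1.0, dt=1e-2, seed=4):
    """Monte Carlo estimate of E|X(t)|^2 on a time grid for the jump diffusion
    dX = -X dt + 0.1 X dB with bounded jumps X += 0.5*rho at rate lam."""
    rng = random.Random(seed)
    sums = [0.0] * len(t_grid)
    for _ in range(paths):
        t, x = 0.0, 1.0
        next_jump = rng.expovariate(lam)
        idx = 0
        while idx < len(t_grid):
            if next_jump <= t:
                x += 0.5 * rng.uniform(-1.0, 1.0)   # bounded jump, |h| <= 0.5
                next_jump += rng.expovariate(lam)
            x += -x * dt + 0.1 * x * rng.gauss(0.0, math.sqrt(dt))
            t += dt
            while idx < len(t_grid) and t >= t_grid[idx]:
                sums[idx] += x * x
                idx += 1
    return [s / paths for s in sums]

moments = second_moment([1.0, 5.0, 10.0, 20.0])
# The second moment settles near a small stationary value instead of growing.
```

For this example (4.21) holds with c2 just below 2 and β = λ/12 coming from the jump variance, so the estimated second moments decay from |X(0)|² = 1 toward a small stationary level, consistent with the lemma.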
We omit the proof because it is similar to that for SDEs.
In what follows we establish another criterion. Clearly we need to consider the difference between two solutions of Eq. (4.1) starting from different initial values, namely
\[
\begin{aligned}
X^{x,i}(t) - X^{y,i}(t) ={}& x - y + \int_0^t \big[f(X^{x,i}(s), r^i(s)) - f(X^{y,i}(s), r^i(s))\big]\,ds \\
&+ \int_0^t \big[g(X^{x,i}(s), r^i(s)) - g(X^{y,i}(s), r^i(s))\big]\,dB(s) \\
&+ \int_{[0,t]\times\mathbb{R}^d} \big[h(X^{x,i}(s-), \rho) - h(X^{y,i}(s-), \rho)\big]\,N(ds\,d\rho). \qquad (4.22)
\end{aligned}
\]
Let
\[ \lambda(x-y) = \lambda\int_{\{z :\, h(x,z)-h(y,z)\neq 0\}}\Pi(dz), \qquad \Pi(x-y, A) = \int_{\{z :\, h(x,z)-h(y,z)\in A,\ h(x,z)-h(y,z)\neq 0\}}\Pi(dz). \]
For a given function U ∈ C^2(R^n × S; R_+), we define operators by
\[
L^{(1)}U(x,y,i) = U_x(x-y, i)\big[f(x,i) - f(y,i)\big] + \frac{1}{2}\mathrm{trace}\Big(\big[g(x,i) - g(y,i)\big]^T U_{xx}(x-y, i)\big[g(x,i) - g(y,i)\big]\Big), \qquad (4.23)
\]
\[
L^{(2)}U(x,y,i) = \lambda(x-y)\int_{\mathbb{R}^d}\big[U(x-y+z, i) - U(x-y, i)\big]\Pi(x-y, dz), \qquad (4.24)
\]
\[
\mathcal{L}U(x,y,i) = L^{(1)}U(x,y,i) + L^{(2)}U(x,y,i) + \sum_{j=1}^N \gamma_{ij}U(x-y, j). \qquad (4.25)
\]
Lemma 4.5 Assume that there exist a function U ∈ C^2(R^n × S; R_+) and positive numbers c3, c4 such that
\[ U(0, i) = 0 \quad \forall i \in S, \qquad (4.26) \]
\[ c_3|x|^2 \leq U(x, i) \quad \forall (x,i) \in \mathbb{R}^n\times S, \qquad (4.27) \]
\[ \mathcal{L}U(x,y,i) \leq -c_4|x-y|^2 \quad \forall (x,y,i) \in \mathbb{R}^n\times\mathbb{R}^n\times S. \qquad (4.28) \]
Then Assumption 4.3 holds.
Proof We divide the proof into three steps.
Step 1. Let x, y ∈ R^n and i ∈ S. Let l be a positive number and define the stopping time
\[ \tau_l = \inf\{t > 0 : |X^{x,i}(t) - X^{y,i}(t)| \geq l\}. \]
Let t_l = τ_l ∧ t. By the generalized Itô formula,
\[ EU\big(X^{x,i}(t_l) - X^{y,i}(t_l), r^i(t_l)\big) = U(x-y, i) + E\int_0^{t_l} \mathcal{L}U\big(X^{x,i}(s), X^{y,i}(s), r^i(s)\big)\,ds. \]
Using (4.27) and (4.28) and then letting l → ∞ produces
\[ c_3E|X^{x,i}(t) - X^{y,i}(t)|^2 \leq U(x-y, i) - c_4E\int_0^t |X^{x,i}(s) - X^{y,i}(s)|^2\,ds. \qquad (4.29) \]
This implies
\[ E|X^{x,i}(t) - X^{y,i}(t)|^2 \leq \frac{1}{c_3}U(x-y, i) \qquad (4.30) \]
and
\[ \int_0^\infty E|X^{x,i}(t) - X^{y,i}(t)|^2\,dt \leq \frac{1}{c_4}U(x-y, i) < \infty. \qquad (4.31) \]
Step 2. We now claim that
\[ \lim_{t\to\infty} E|X^{x,i}(t) - X^{y,i}(t)|^2 = 0. \qquad (4.32) \]
If this is not true, then
\[ \limsup_{t\to\infty} E|X^{x,i}(t) - X^{y,i}(t)|^2 > 4\varepsilon \]
for some ε > 0. Thus there exists a sequence {t_k}_{k≥1} with t_{k+1} > t_k + 1 such that
\[ E|X^{x,i}(t_k) - X^{y,i}(t_k)|^2 \geq 4\varepsilon \quad \forall k \geq 1. \qquad (4.33) \]
For t_k ≤ t ≤ t_k + 1, it is easy to see from (4.22) that
\[
\begin{aligned}
E|X^{x,i}(t) - X^{y,i}(t)|^2 \geq{}& \frac{1}{4}E|X^{x,i}(t_k) - X^{y,i}(t_k)|^2 \\
&- E\Big|\int_{t_k}^t \big[f(X^{x,i}(s), r^i(s)) - f(X^{y,i}(s), r^i(s))\big]\,ds\Big|^2 \\
&- E\Big|\int_{t_k}^t \big[g(X^{x,i}(s), r^i(s)) - g(X^{y,i}(s), r^i(s))\big]\,dB(s)\Big|^2 \\
&- E\Big|\int_{[t_k,t]\times\mathbb{R}^d} \big[h(X^{x,i}(s), \rho) - h(X^{y,i}(s), \rho)\big]\,N(ds\,d\rho)\Big|^2. \qquad (4.34)
\end{aligned}
\]
However, by the Hölder inequality and hypothesis (H), we derive that
\[
\begin{aligned}
E\Big|\int_{t_k}^t \big[f(X^{x,i}(s), r^i(s)) - f(X^{y,i}(s), r^i(s))\big]\,ds\Big|^2
&\leq E\int_{t_k}^t \big|f(X^{x,i}(s), r^i(s)) - f(X^{y,i}(s), r^i(s))\big|^2\,ds \\
&\leq C_1\int_{t_k}^t \big[1 + E|X^{x,i}(s)|^2 + E|X^{y,i}(s)|^2\big]\,ds, \qquad (4.35)
\end{aligned}
\]
where C1 is a constant depending on K (described in (H)). Moreover,
\[
\begin{aligned}
E\Big|\int_{t_k}^t \big[g(X^{x,i}(s), r^i(s)) - g(X^{y,i}(s), r^i(s))\big]\,dB(s)\Big|^2
&= E\int_{t_k}^t \big|g(X^{x,i}(s), r^i(s)) - g(X^{y,i}(s), r^i(s))\big|^2\,ds \\
&\leq C_1\int_{t_k}^t \big[1 + E|X^{x,i}(s)|^2 + E|X^{y,i}(s)|^2\big]\,ds. \qquad (4.36)
\end{aligned}
\]
Since h is bounded by M (described in (H)),
\[ E\Big|\int_{[t_k,t]\times\mathbb{R}^d} \big[h(X^{x,i}(s), \rho) - h(X^{y,i}(s), \rho)\big]\,N(ds\,d\rho)\Big|^2 \leq C_2|t - t_k|, \qquad (4.37) \]
where C2 depends on M. By Assumption 4.2 we therefore see from (4.35), (4.36) and (4.37) that there is δ ∈ (0, 1) such that
\[
\begin{aligned}
&E\Big|\int_{t_k}^t \big[f(X^{x,i}(s), r^i(s)) - f(X^{y,i}(s), r^i(s))\big]\,ds\Big|^2 \\
&\quad+ E\Big|\int_{t_k}^t \big[g(X^{x,i}(s), r^i(s)) - g(X^{y,i}(s), r^i(s))\big]\,dB(s)\Big|^2 \\
&\quad+ E\Big|\int_{[t_k,t]\times\mathbb{R}^d} \big[h(X^{x,i}(s), \rho) - h(X^{y,i}(s), \rho)\big]\,N(ds\,d\rho)\Big|^2 \leq \frac{\varepsilon}{2} \quad \forall t \in [t_k, t_k+\delta],\ k \geq 1.
\end{aligned}
\]
It therefore follows from (4.34) and (4.33) that
\[ E|X^{x,i}(t) - X^{y,i}(t)|^2 \geq \frac{\varepsilon}{2} \quad \forall t \in [t_k, t_k+\delta],\ k \geq 1. \qquad (4.38) \]
Consequently,
\[ \int_0^\infty E|X^{x,i}(t) - X^{y,i}(t)|^2\,dt \geq \sum_{k=1}^\infty \int_{t_k}^{t_k+\delta}\frac{\varepsilon}{2}\,dt = \infty, \]
but this contradicts (4.31), so (4.32) must hold.
Step 3. We can now show Assumption 4.3, namely (4.19). Let ε > 0 be arbitrary. It is easy to
observe from (4.30) that there is a δ > 0 such that
E|Xx,i(t) − Xy,i(t)|2 <ε
9∀t ≥ 0 (4.39)
provided x, y ∈ Rn with |x − y| < δ.
Now, given any compact subset $K$ of $\mathbb{R}^n$, we can find finitely many vectors $x_1, \ldots, x_u \in K$ such that $\cup_{k=1}^u B(x_k, \delta) \supseteq K$, where $B(x_k, \delta) = \{x \in \mathbb{R}^n : |x - x_k| < \delta\}$. From Step 2 we observe that there is a $T > 0$ such that
\[
E|X^{x_k,i}(t) - X^{x_l,i}(t)|^2 < \frac{\varepsilon}{9} \quad \forall t \ge T,\ 1 \le k, l \le u,\ i \in S. \tag{4.40}
\]
Consequently, for any $(x, y, i) \in K \times K \times S$, find $x_l, x_k$ with $|x - x_l| < \delta$ and $|y - x_k| < \delta$. It then follows from (4.39) and (4.40) that
\[
\begin{aligned}
E|X^{x,i}(t) - X^{y,i}(t)|^2
&\le 3\big(E|X^{x,i}(t) - X^{x_l,i}(t)|^2 + E|X^{y,i}(t) - X^{x_k,i}(t)|^2 + E|X^{x_k,i}(t) - X^{x_l,i}(t)|^2\big) \\
&< \varepsilon \quad \forall t \ge T,
\end{aligned}
\]
as required.
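The decay property claimed in Step 2, $E|X^{x,i}(t) - X^{y,i}(t)|^2 \to 0$, can be checked numerically by a coupling argument: run two solutions from different initial points with the same Brownian path and the same Markov chain, and average the squared gap over sample paths. The sketch below is an illustrative assumption, not the chapter's model: it uses a toy scalar switching diffusion without the Poisson jump term, with hypothetical mode-dependent rates `A` and transition rates `GAMMA`.

```python
import math
import random

# Toy coupled simulation (assumed coefficients, no jump term): both solutions
# share the same Brownian increments dB and the same chain r(t).
A = {1: 1.0, 2: 2.0}                  # assumed mode-dependent contraction rates
GAMMA = {1: {2: 1.0}, 2: {1: 2.0}}    # assumed transition rates gamma_ij

def coupled_gap(x0, y0, i0=1, T=5.0, dt=2e-3, runs=100, seed=0):
    """Monte Carlo estimate of E|X^{x0}(T) - X^{y0}(T)|^2 under coupling."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(runs):
        x, y, r = x0, y0, i0
        for _ in range(int(round(T / dt))):
            dB = rng.gauss(0.0, math.sqrt(dt))
            x += -A[r] * x * dt + 0.3 * x * dB   # same noise for both solutions
            y += -A[r] * y * dt + 0.3 * y * dB
            j = 2 if r == 1 else 1
            if rng.random() < GAMMA[r][j] * dt:  # first-order chain sampling
                r = j
        acc += (x - y) ** 2
    return acc / runs

# The estimated mean-square gap should shrink as T grows, mirroring (4.32).
print(coupled_gap(1.0, -1.0, T=1.0), coupled_gap(1.0, -1.0, T=5.0))
```

Because the two solutions see identical noise and identical switching, their difference satisfies a contracting linear equation, so the estimate decays geometrically in $T$.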
Chapter 5

Asymptotic Stability and Boundedness of Delay Switching Diffusions
5.1 Introduction
The stability of stochastic systems was an active area of research from the late 1950's to the early 1970's. The most often used concept of stochastic stability is that of the existence of an invariant measure. In this chapter we restrict attention to almost sure convergence of the state process to the zero value. Control engineering intuition suggests that time delays are common in practical systems and are often the cause of instability and/or poor performance. Moreover, it is usually difficult to obtain accurate values for the delay, and conservative estimates often have to be used. The importance of time delay has already motivated several studies on the stability of switching diffusions with time delay; see, for example, [8, 15, 28, 45, 58].
Most of the existing results on stochastic stability for switching diffusions rely on the existence of a single Lyapunov function. Examples from the hybrid systems literature, however, suggest that even in the deterministic case one can find systems for which a single Lyapunov function does not exist; the systems can nonetheless be shown to be stable if one considers multiple Lyapunov functions. Motivated by this observation, in this chapter we study the stability of delayed switching diffusions using multiple Lyapunov functions in the spirit of [67, 41, 75, 11, 13, 46]: besides the Lyapunov functions $V(x, t, i)$, $i \in S$, there is another Lyapunov function $U(x, t)$ (see Theorems 5.1 and 5.2), which differs from the approach of [8, 65]. On a more technical note, our new asymptotic stability criteria do not require the diffusion operator associated with the underlying stochastic differential equations to be negative definite, as is the case with most of the existing results.
5.2 Background on Switching Diffusions
Let $\tau > 0$ and let $C([-\tau, 0]; \mathbb{R}^n)$ denote the family of all continuous $\mathbb{R}^n$-valued functions on $[-\tau, 0]$. Let $C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ be the family of all $\mathcal{F}_0$-measurable bounded $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\xi = \{\xi(\theta) : -\tau \le \theta \le 0\}$. If $K$ is a subset of $\mathbb{R}^n$ and $x \in \mathbb{R}^n$, denote the distance from $x$ to $K$ by $d(x, K) = \inf_{y \in K} |x - y|$. Let $\mathcal{K}$ denote the class of continuous increasing functions $\alpha : \mathbb{R}_+ \to \mathbb{R}_+$ with $\alpha(0) = 0$. We also denote by $L^1(\mathbb{R}_+; \mathbb{R}_+)$ the family of all functions $\lambda : \mathbb{R}_+ \to \mathbb{R}_+$ such that $\int_0^\infty \lambda(t)\,dt < \infty$, and by $D(\mathbb{R}_+; \mathbb{R}_+)$ the family of all continuous functions $\eta : \mathbb{R}_+ \to \mathbb{R}_+$ such that $\int_0^\infty \eta(t)\,dt = \infty$.

Switching diffusions are a class of stochastic hybrid systems that arise in numerous applications of systems with multiple modes; examples include fault-tolerant control systems, multiple target tracking, flexible manufacturing systems, etc. The state of the system at time $t$ is given by two components $(X(t), r(t)) \in \mathbb{R}^n \times S$, $S = \{1, 2, \ldots, N\}$. For simplicity, our attention is limited to a single delay; the results can be extended to multiple delays. The evolution of the process $(X(t), r(t))$ is governed by the following
equations:
\[
dX(t) = f(X(t), X(t-\tau), t, r(t))\,dt + g(X(t), X(t-\tau), t, r(t))\,dB(t) \tag{5.1}
\]
\[
P\{r(t+\Delta) = j \mid r(t) = i\} =
\begin{cases}
\gamma_{ij}(X(t))\Delta + o(\Delta) & \text{if } i \ne j, \\
1 + \gamma_{ii}(X(t))\Delta + o(\Delta) & \text{if } i = j,
\end{cases} \tag{5.2}
\]
on $t \ge 0$ with initial value $X(\theta) = \xi(\theta)$, $-\tau \le \theta \le 0$, and $r(0) = i_0$, where $\Delta > 0$, $f : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}^n$ and $g : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}^{n \times m}$. If $i \ne j$, $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$, while
\[
\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}.
\]
Here $\{r(t), t \ge 0\}$ is a right-continuous Markov chain on the probability space taking values in the finite state space $S = \{1, 2, \ldots, N\}$. As in Chapter 1, define $\Delta_{ij}$ and a function $h : \mathbb{R}^n \times S \times \mathbb{R} \to \mathbb{R}$ by
\[
h(x, i, y) =
\begin{cases}
j - i & \text{if } y \in \Delta_{ij}(x), \\
0 & \text{otherwise.}
\end{cases}
\]
Then
\[
dr(t) = \int_{\mathbb{R}} h(X(t), r(t-), y)\,\nu(dt, dy), \tag{5.3}
\]
where $\nu(dt, dy)$ is a Poisson random measure with intensity $dt \times m(dy)$, in which $m$ is the Lebesgue measure on $\mathbb{R}$.
We assume the following.

Assumption 5.1 Both $f$ and $g$ satisfy a local Lipschitz condition and a linear growth condition. That is, for each $k = 1, 2, \ldots$, there is an $h_k > 0$ such that
\[
|f(x, y, t, i) - f(\bar{x}, \bar{y}, t, i)| + |g(x, y, t, i) - g(\bar{x}, \bar{y}, t, i)| \le h_k(|x - \bar{x}| + |y - \bar{y}|)
\]
for all $i \in S$, $t \ge 0$ and those $x, y, \bar{x}, \bar{y} \in \mathbb{R}^n$ with $|x| \vee |y| \vee |\bar{x}| \vee |\bar{y}| \le k$; and there is an $h > 0$ such that
\[
|f(x, y, t, i)| + |g(x, y, t, i)| \le h(1 + |x| + |y|)
\]
for all $x, y \in \mathbb{R}^n$, $i \in S$ and $t \ge 0$. The $\gamma_{ij}(x)$ are Lipschitz, i.e. there is a $\gamma > 0$ such that for all $x, y \in \mathbb{R}^n$
\[
|\gamma_{ij}(x) - \gamma_{ij}(y)| \le \gamma |x - y|.
\]
Moreover, $\nu(\cdot, \cdot)$ and $B(\cdot)$ are independent.
Under this hypothesis, it follows from [22, 45, 63, 53] that equations (5.1) and (5.3) have a unique continuous solution $(X(t), r(t))$ on $t \ge 0$. We omit the proof here and will report it elsewhere.
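The model (5.1)-(5.2) lends itself to a simple Euler-Maruyama discretization: advance the continuous state with the drift and diffusion of the current mode, and switch modes over each step with probability $\gamma_{ij}(X(t))\Delta + o(\Delta)$. The sketch below is illustrative only; the scalar coefficients `f`, `g` and the state-dependent rates `gamma` are hypothetical choices, not taken from the text.

```python
import math
import random

# Hypothetical two-mode example (assumed, not from the text): mode-dependent
# drift f(x, y, t, i) and diffusion g(x, y, t, i), where y = X(t - tau).
def f(x, y, t, i):
    a = {1: 2.0, 2: 3.0}[i]          # assumed mode-dependent decay rates
    return -a * x + 0.5 * y

def g(x, y, t, i):
    return {1: 0.3, 2: 0.5}[i] * x   # assumed mode-dependent noise intensity

# State-dependent transition rates gamma_ij(x); gamma_ii = -sum_{j != i}.
def gamma(i, j, x):
    rates = {(1, 2): 1.0 + min(x * x, 1.0), (2, 1): 2.0}
    return rates.get((i, j), 0.0)

def simulate(T=10.0, dt=1e-3, tau=0.5, x0=1.0, i0=1, seed=0):
    """Euler-Maruyama for (5.1) plus one-step sampling of the chain (5.2)."""
    rng = random.Random(seed)
    lag = int(round(tau / dt))
    xs, r = [x0] * (lag + 1), i0       # constant initial segment on [-tau, 0]
    for k in range(int(round(T / dt))):
        x, y, t = xs[-1], xs[-1 - lag], k * dt
        dB = rng.gauss(0.0, math.sqrt(dt))
        xs.append(x + f(x, y, t, r) * dt + g(x, y, t, r) * dB)
        # Switch with probability gamma_ij(x) * dt, cf. (5.2).
        for j in (1, 2):
            if j != r and rng.random() < gamma(r, j, x) * dt:
                r = j
                break
    return xs, r

xs, r_final = simulate()
print(xs[-1], r_final)
```

The one-step switching probability is only first-order accurate in `dt`; for small `dt` this matches (5.2) up to the $o(\Delta)$ term.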
Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ denote the family of all non-negative functions $V(x, t, i) : \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}_+$ which are continuously twice differentiable in $x$ and once in $t$. For $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, define the operator $LV : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}$ by
\[
\begin{aligned}
LV(x, y, t, i) = {}& V_t(x, t, i) + V_x(x, t, i) f(x, y, t, i) + \sum_{j=1}^N \gamma_{ij} V(x, t, j) \\
&+ \tfrac{1}{2}\,\mathrm{trace}\big[g^T(x, y, t, i)\, V_{xx}(x, t, i)\, g(x, y, t, i)\big]. \tag{5.4}
\end{aligned}
\]
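The four terms of (5.4) can be checked numerically in the scalar case ($n = m = 1$) by comparing a finite-difference evaluation against a closed form. Everything below is an illustrative assumption, not from the text: a quadratic $V(x,t,i) = \beta_i x^2$, hypothetical drift and diffusion, and an assumed two-state generator.

```python
# Numerical sanity check of the operator LV in (5.4) for scalar x.
# V, f, g and GAMMA are illustrative choices (assumptions, not from the text).
BETA = {1: 2.0, 2: 1.0}
GAMMA = {1: {1: -1.0, 2: 1.0}, 2: {1: 2.0, 2: -2.0}}  # assumed generator

def V(x, t, i):
    return BETA[i] * x * x

def f(x, y, t, i):
    return -2.0 * x + 0.5 * y      # assumed drift

def g(x, y, t, i):
    return 0.4 * x                 # assumed diffusion

def LV_fd(x, y, t, i, eps=1e-5):
    """Finite-difference evaluation of (5.4):
    V_t + V_x f + sum_j gamma_ij V(., j) + (1/2) g^2 V_xx."""
    Vt = (V(x, t + eps, i) - V(x, t - eps, i)) / (2 * eps)
    Vx = (V(x + eps, t, i) - V(x - eps, t, i)) / (2 * eps)
    Vxx = (V(x + eps, t, i) - 2 * V(x, t, i) + V(x - eps, t, i)) / eps ** 2
    jump = sum(GAMMA[i][j] * V(x, t, j) for j in (1, 2))
    return Vt + Vx * f(x, y, t, i) + jump + 0.5 * g(x, y, t, i) ** 2 * Vxx

def LV_exact(x, y, t, i):
    # Closed form for V = beta_i x^2:
    #   2 beta_i x f + beta_i g^2 + (sum_j gamma_ij beta_j) x^2.
    b = BETA[i]
    jump = sum(GAMMA[i][j] * BETA[j] for j in (1, 2)) * x * x
    return 2 * b * x * f(x, y, t, i) + b * g(x, y, t, i) ** 2 + jump

print(LV_fd(1.0, 0.5, 0.0, 1), LV_exact(1.0, 0.5, 0.0, 1))
```

For a quadratic $V$ the central differences are essentially exact, so the two evaluations agree to the accuracy of the step `eps`.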
Fix any $\xi$ and $i_0$ and write $X(t; \xi, i_0) = X(t)$ for simplicity. For the convenience of the reader we also cite the generalized Itô formula for this class of stochastic hybrid processes (cf. [22, 63]). If $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, then for any $t \ge 0$
\[
\begin{aligned}
V(X(t), t, r(t)) = {}& V(X(0), 0, r(0)) + \int_0^t LV(X(s), X(s-\tau), s, r(s))\,ds \\
&+ \int_0^t V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))\,dB(s) \\
&+ \int_0^t\!\int_{\mathbb{R}} \big( V(X(s), s, i_0 + h(X(s), r(s), l)) - V(X(s), s, r(s)) \big)\,\mu(ds, dl),
\end{aligned} \tag{5.5}
\]
where $\mu(ds, dl) = \nu(ds, dl) - m(dl)\,ds$ is a martingale measure. Since $r(t)$ can be written as a stochastic integral, the value of the term $i_0 + h$ remains in $S$.
5.3 Main Results

In this chapter we show the following two results.

Theorem 5.1 Assume that there are functions $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, $\lambda \in L^1(\mathbb{R}_+; \mathbb{R}_+)$, $U \in C(\mathbb{R}^n \times [-\tau, \infty); \mathbb{R}_+)$, $\eta \in D(\mathbb{R}_+; \mathbb{R}_+)$ and $w \in C(\mathbb{R}^n; \mathbb{R}_+)$, such that
\[
\lambda(t) - LV(x, y, t, i) - U(x, t) + U(y, t-\tau) \ge \max\{0,\ \eta(t) w(x) - |V_x(x, t, i) g(x, y, t, i)|^2\} \tag{5.6}
\]
for all $(x, y, t, i) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S$ and
\[
\lim_{|x| \to \infty} \Big[ \inf_{t \ge 0,\, i \in S} V(x, t, i) \Big] = \infty. \tag{5.7}
\]
Then $D_w = \{x \in \mathbb{R}^n : w(x) = 0\} \ne \emptyset$ and, moreover, for every initial datum $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, $i_0 \in S$,
\[
\liminf_{t \to \infty} d(X(t; \xi, i_0), D_w) = 0 \quad \text{a.s.} \tag{5.8}
\]
Theorem 5.1 shows that the solutions of equations (5.1) and (5.2) visit any neighborhood of $D_w$ infinitely many times with probability 1. In other words, $D_w$ attracts the solutions infinitely many times, so we may say that $D_w$ is a weak attractor for the solutions. However, the theorem does not guarantee that the solution will eventually be attracted by $D_w$. For this stronger property, we need to impose additional conditions.
Theorem 5.2 In addition to the conditions of Theorem 5.1 assume:

(i) For some constants $\delta > 0$ and $p \ge 1$,
\[
\lambda(t) - LV(x, y, t, i) - U(x, t) + U(y, t-\tau) + |V_x(x, t, i) g(x, y, t, i)|^2 \ge \delta U^p(x, t) \tag{5.9}
\]
holds for all $(x, y, t, i) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S$.

(ii) $D_w = \cup_{l \in \mathcal{I}} F_l$, where $\mathcal{I}$ is an index set which may be finite or countably infinite, and the $F_l$ are mutually disjoint closed subsets of $\mathbb{R}^n$. For each $l \in \mathcal{I}$ there are an open neighborhood $G_l$ of $F_l$, two functions $\alpha_l, \beta_l \in \mathcal{K}$ and a constant $c_l$ such that
\[
\lim_{t \to \infty} V(x, t, i) = c_l \quad \text{uniformly in } (x, i) \in F_l \times S \tag{5.10}
\]
while
\[
\alpha_l(d(x, F_l)) \le |V(x, t, i) - c_l| \le \beta_l(d(x, F_l)) \tag{5.11}
\]
for $(x, t, i) \in (G_l - F_l) \times \mathbb{R}_+ \times S$.

Then
\[
\lim_{t \to \infty} d(X(t; \xi, i_0), D_w) = 0 \quad \text{a.s.} \tag{5.12}
\]
The proofs make use of the nonnegative semimartingale convergence theorem [39].

Lemma 5.1 Let $A_1(t)$ and $A_2(t)$ be two continuous adapted increasing processes on $t \ge 0$ with $A_1(0) = A_2(0) = 0$ a.s. Let $M(t)$ be a real-valued continuous local martingale with $M(0) = 0$ a.s. Let $\zeta$ be a nonnegative $\mathcal{F}_0$-measurable random variable such that $E\zeta < \infty$. Define
\[
X(t) = \zeta + A_1(t) - A_2(t) + M(t) \quad \text{for } t \ge 0.
\]
If $X(t)$ is nonnegative, then
\[
\Big\{ \lim_{t \to \infty} A_1(t) < \infty \Big\} \subset \Big\{ \lim_{t \to \infty} X(t) < \infty \Big\} \cap \Big\{ \lim_{t \to \infty} A_2(t) < \infty \Big\} \quad \text{a.s.},
\]
where $C \subset D$ a.s. means $P(C \cap D^c) = 0$. In particular, if $\lim_{t \to \infty} A_1(t) < \infty$ a.s., then for almost all $\omega \in \Omega$, $\lim_{t \to \infty} X(t, \omega) < \infty$, $\lim_{t \to \infty} A_2(t, \omega) < \infty$ and $-\infty < \lim_{t \to \infty} M(t, \omega) < \infty$. That is, all three processes $X(t)$, $A_2(t)$ and $M(t)$ converge to finite random variables.
The following lemma also plays a key role in the proofs.

Lemma 5.2 Assume that there are functions $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, $\lambda \in L^1(\mathbb{R}_+; \mathbb{R}_+)$ and $U \in C(\mathbb{R}^n \times [-\tau, \infty); \mathbb{R}_+)$ such that
\[
LV(x, y, t, i) \le \lambda(t) - U(x, t) + U(y, t-\tau) \tag{5.13}
\]
for all $(x, y, t, i) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S$. Then, for every initial value $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, $i_0 \in S$, the solution $X(t) = X(t; \xi, i_0)$ of equations (5.1) and (5.2) has the properties that
\[
\lim_{t \to \infty} \Big[ V(X(t), t, r(t)) + \int_{t-\tau}^t U(X(s), s)\,ds \Big] < \infty \quad \text{a.s.} \tag{5.14}
\]
and
\[
\begin{aligned}
\int_0^\infty \big[ \lambda(t) &- LV(X(t), X(t-\tau), t, r(t)) - U(X(t), t) + U(X(t-\tau), t-\tau) \\
&+ |V_x(X(t), t, r(t))\, g(X(t), X(t-\tau), t, r(t))|^2 \big]\,dt < \infty \quad \text{a.s.}
\end{aligned} \tag{5.15}
\]
Proof. By the Itô formula,
\[
\begin{aligned}
V(X(t), t, r(t)) = {}& V(X(0), 0, r(0)) + \int_0^t LV(X(s), X(s-\tau), s, r(s))\,ds \\
&+ \int_0^t V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))\,dB(s) \\
&+ \int_0^t\!\int_{\mathbb{R}} \big( V(X(s), s, i_0 + h(X(s), r(s), l)) - V(X(s), s, r(s)) \big)\,\mu(ds, dl).
\end{aligned}
\]
Note that
\[
\int_{t-\tau}^t U(X(s), s)\,ds = \int_{-\tau}^0 U(X(s), s)\,ds + \int_0^t \big( U(X(s), s) - U(X(s-\tau), s-\tau) \big)\,ds.
\]
So
\[
\begin{aligned}
&V(X(t), t, r(t)) + \int_{t-\tau}^t U(X(s), s)\,ds \\
&= V(X(0), 0, r(0)) + \int_{-\tau}^0 U(\xi(\theta), \theta)\,d\theta + \int_0^t \lambda(s)\,ds \\
&\quad - \int_0^t \big[ \lambda(s) - LV(X(s), X(s-\tau), s, r(s)) - U(X(s), s) + U(X(s-\tau), s-\tau) \big]\,ds \\
&\quad + \int_0^t V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))\,dB(s) \\
&\quad + \int_0^t\!\int_{\mathbb{R}} \big( V(X(s), s, i_0 + h(X(s), r(s), l)) - V(X(s), s, r(s)) \big)\,\mu(ds, dl).
\end{aligned}
\]
Noting that $\int_0^t \lambda(s)\,ds < \infty$ a.s. and (5.13), we can apply Lemma 5.1 to get the assertion (5.14) and
\[
\int_0^\infty \big[ \lambda(s) - LV(X(s), X(s-\tau), s, r(s)) - U(X(s), s) + U(X(s-\tau), s-\tau) \big]\,ds < \infty \quad \text{a.s.}
\]
Moreover,
\[
-\infty < \lim_{t \to \infty} M(t) < \infty \quad \text{a.s.}, \tag{5.16}
\]
where
\[
\begin{aligned}
M(t) = {}& \int_0^t V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))\,dB(s) \\
&+ \int_0^t\!\int_{\mathbb{R}} \big( V(X(s), s, i_0 + h(X(s), r(s), l)) - V(X(s), s, r(s)) \big)\,\mu(ds, dl) \\
=: {}& M_1(t) + M_2(t).
\end{aligned}
\]
For every integer $N \ge 1$, define the stopping time
\[
\tau_N = \inf\{t \ge 0 : |M(t)| \ge N\},
\]
where here and throughout this deliverable we set $\inf \emptyset = \infty$. Obviously $\tau_N$ is increasing in $N$. In particular, by (5.16), there is a subset $\Omega_1$ of $\Omega$ with $P(\Omega_1) = 1$ such that for every $\omega \in \Omega_1$ there is a finite number $N(\omega)$ such that $\tau_N(\omega) = \infty$ for all $N \ge N(\omega)$. On the other hand, since $M_1(t)$ is a continuous martingale and $M_2(t)$ is a discontinuous martingale, we have, for any $t > 0$, $E[M_1(t \wedge \tau_N) M_2(t \wedge \tau_N)] = 0$. Therefore
\[
\begin{aligned}
N^2 &\ge E|M(t \wedge \tau_N)|^2 = E|M_1(t \wedge \tau_N) + M_2(t \wedge \tau_N)|^2 \\
&= E|M_1(t \wedge \tau_N)|^2 + E|M_2(t \wedge \tau_N)|^2 \ge E|M_1(t \wedge \tau_N)|^2 \\
&= E\int_0^{t \wedge \tau_N} |V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))|^2\,ds.
\end{aligned}
\]
Letting $t \to \infty$ yields
\[
E\int_0^{\tau_N} |V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))|^2\,ds \le N^2,
\]
which implies that
\[
\int_0^{\tau_N} |V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))|^2\,ds < \infty \tag{5.17}
\]
holds with probability 1. Hence there is another subset $\Omega_2$ of $\Omega$ with $P(\Omega_2) = 1$ such that if $\omega \in \Omega_2$, (5.17) holds for every $N \ge 1$. Therefore, for any $\omega \in \Omega_1 \cap \Omega_2$, we have
\[
\int_0^\infty |V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))|^2\,ds
= \int_0^{\tau_{N(\omega)}(\omega)} |V_x(X(s), s, r(s))\, g(X(s), X(s-\tau), s, r(s))|^2\,ds < \infty.
\]
Since $P(\Omega_1 \cap \Omega_2) = 1$, we must have (5.15).

Proof of Theorem 5.1. Write $X(t; \xi, i_0) = X(t)$. By Lemma 5.2 and (5.6) we observe that there is a subset $\bar{\Omega}$ of $\Omega$ with $P(\bar{\Omega}) = 1$ such that for every $\omega \in \bar{\Omega}$
\[
\limsup_{t \to \infty} \min_{i \in S} V(X(t, \omega), t, i) < \infty \tag{5.18}
\]
while
\[
\int_0^\infty \eta(t)\, w(X(t, \omega))\,dt < \infty. \tag{5.19}
\]
By (5.7) and (5.18), the path $\{X(t, \omega) : t \ge 0\}$ is bounded; moreover, since $\eta \in D(\mathbb{R}_+; \mathbb{R}_+)$, (5.19) implies that
\[
\liminf_{t \to \infty} w(X(t, \omega)) = 0.
\]
So there exists a divergent sequence $\{t_k\}$ such that
\[
\lim_{k \to \infty} w(X(t_k, \omega)) = 0.
\]
But, due to the boundedness of $\{X(t_k, \omega)\}$, there must be a subsequence (still denoted $\{X(t_k, \omega)\}$) that converges to some $\bar{x} \in \mathbb{R}^n$. Since $w$ is continuous, we must have
\[
w(\bar{x}) = \lim_{k \to \infty} w(X(t_k, \omega)) = 0.
\]
In other words, $\bar{x} \in D_w$, so $D_w \ne \emptyset$ and the required assertion (5.8) follows.

Proof of Theorem 5.2. Write $X(t; \xi, i_0) = X(t)$. By (5.15) and (5.9), there exists a subset $\bar{\Omega}$ of $\Omega$ with $P(\bar{\Omega}) = 1$ such that for every $\omega \in \bar{\Omega}$
\[
\int_0^\infty U^p(X(t, \omega), t)\,dt < \infty.
\]
This implies
\[
\lim_{t \to \infty} \int_{t-\tau}^t U^p(X(s, \omega), s)\,ds = 0.
\]
Using the Hölder inequality,
\[
\limsup_{t \to \infty} \int_{t-\tau}^t U(X(s, \omega), s)\,ds = 0.
\]
This, together with (5.14), yields
\[
\lim_{t \to \infty} V(X(t, \omega), t, r(t)) < \infty. \tag{5.20}
\]
As in the proof of Theorem 5.1, there exist a sequence $\{X(t_k)\}$ and a vector $\bar{x} \in D_w$, say $\bar{x} \in F_l$ for some $l \in \mathcal{I}$, such that
\[
\lim_{k \to \infty} X(t_k, \omega) = \bar{x}. \tag{5.21}
\]
The standard method of the proof of Theorem 2.2 in [46] then leads to
\[
\lim_{t \to \infty} d(X(t, \omega), F_l) = 0. \tag{5.22}
\]
The required assertion (5.12) follows.
5.4 Implications for Boundedness and Stability

The results of Section 5.3 can be used to establish sufficient criteria for the boundedness and the almost sure stability of switching diffusions. From the proof of Theorem 5.1, the following result is almost immediate.

Corollary 5.1 Under the assumptions of Theorem 5.1, the solutions of equations (5.1) and (5.2) are bounded in the sense that
\[
\sup_{-\tau \le t < \infty} |X(t; \xi, i_0)| < \infty \quad \text{a.s.}
\]
Moreover, one can also show the following.

Corollary 5.2 Assume (5.6), (5.7) and (5.9) hold and $D_w$ is bounded, i.e.
\[
\sup_{x \in D_w} |x| \le K < \infty.
\]
Then for any initial data $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, $i_0 \in S$, the solution of equations (5.1) and (5.2) has the property that
\[
\limsup_{t \to \infty} |X(t; \xi, i_0)| \le K \quad \text{a.s.}
\]
Proof. Since $D_w$ is bounded, there exists a sequence $\{y_l : l \in \mathcal{I}\}$ which is dense in $D_w$. By (5.20) we know that for each $l \in \mathcal{I}$ there are an open neighborhood $G_l$ of $y_l$ and $c_l$, $\alpha_l$, $\beta_l$ such that (5.11) holds. Therefore, we must have
\[
\lim_{t \to \infty} d(X(t; \xi, i_0), D_w) = 0.
\]
The required assertion follows.
Corollary 5.3 Assume (5.6), (5.7) and (5.9) hold, $D_w = \{0\}$ and, moreover, there are an open neighborhood $G$ of the origin and two functions $\alpha, \beta \in \mathcal{K}$ such that
\[
\alpha(|x|) \le V(x, t, i) \le \beta(|x|), \quad (x, t, i) \in G \times \mathbb{R}_+ \times S.
\]
Then for any initial data $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, $i_0 \in S$, the solution of equations (5.1) and (5.2) has the property that
\[
\lim_{t \to \infty} X(t; \xi, i_0) = 0 \quad \text{a.s.} \tag{5.23}
\]
5.5 An Example

To illustrate the results listed in the previous sections we consider a simple example with a one-dimensional continuous state. Let $B(t)$ be a scalar Brownian motion and $r(t)$ a right-continuous Markov chain taking values in $S = \{1, 2\}$ with generator
\[
\Gamma = (\gamma_{ij})_{2 \times 2} = \begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}.
\]
Assume that $B(t)$ and $r(t)$ are independent. Consider a one-dimensional stochastic differential delay equation with Markovian switching of the form
\[
dX(t) = [-a(t, r(t)) X(t) + b(t, r(t)) X(t-\tau)]\,dt + \sigma(t, r(t)) X(t)\,dB(t) \tag{5.24}
\]
on $t \ge 0$ with initial data $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R})$ and $r(0) = i_0$. Assume that $a(t, i)$, $b(t, i)$, $\sigma(t, i)$ are all bounded continuous functions, $\underline{\sigma} = \inf_{t \ge 0,\, i \in S} \sigma(t, i) > 0$, and
\[
4a(t, 1) + 1 - \sigma^2(t, 1) \ge 2b(t, 1) + 2b(t-\tau, 1),
\]
\[
2a(t, 2) - 2 - \sigma^2(t, 2) \ge b(t, 2) + b(t-\tau, 2).
\]
Define
\[
V(x, t, i) = \beta_i x^2, \qquad w(x) = x^2,
\]
with $\beta_2 = 1$ and $\beta_1 = 2$. It is easy to show that the operator $LV$ from $\mathbb{R} \times \mathbb{R} \times \mathbb{R}_+ \times S$ to $\mathbb{R}$ has the form
\[
LV(x, y, t, i) = 2\beta_i x [-a(t, i) x + b(t, i) y] + \beta_i \sigma^2(t, i) x^2 + (\gamma_{i1} \beta_1 + \gamma_{i2} \beta_2) x^2.
\]
By the conditions, we then compute
\[
\begin{aligned}
LV(x, y, t, 1) &\le -[4a(t, 1) - 2b(t, 1) + 1 - 2\sigma^2(t, 1)]|x|^2 + 2b(t, 1) y^2 \\
&\le -2b(t-\tau, 1) w(x) + 2b(t, 1) w(y)
\end{aligned}
\]
and
\[
\begin{aligned}
LV(x, y, t, 2) &\le -[2a(t, 2) - b(t, 2) - 2 - \sigma^2(t, 2)]|x|^2 + b(t, 2) y^2 \\
&\le -b(t-\tau, 2) w(x) + b(t, 2) w(y).
\end{aligned}
\]
On the other hand,
\[
|V_x(x, t, i)\, \sigma(t, i)\, x|^2 = 4\beta_i^2 \sigma^2(t, i) x^4 \ge 4\underline{\sigma}^2 w^2(x).
\]
Therefore, by the conditions, we must have
\[
-LV(x, y, t, 1) - 2b(t-\tau, 1) w(x) + 2b(t, 1) w(y) \ge 0 \ge 4\underline{\sigma}^2 w^2(x) - |V_x(x, t, 1)\, \sigma(t, 1)\, x|^2 \tag{5.25}
\]
and
\[
-LV(x, y, t, 2) - b(t-\tau, 2) w(x) + b(t, 2) w(y) \ge 0 \ge 4\underline{\sigma}^2 w^2(x) - |V_x(x, t, 2)\, \sigma(t, 2)\, x|^2. \tag{5.26}
\]
By Lemma 5.2, (5.25) and (5.26), we have
\[
\lim_{t \to \infty} \Big[ V(X(t), t, r(t)) + \int_{t-\tau}^t b(s, r(s))\, w(X(s))\,ds \Big] < \infty \quad \text{a.s.} \tag{5.27}
\]
and
\[
\int_0^\infty w^2(X(s))\,ds < \infty \quad \text{a.s.} \tag{5.28}
\]
By the Hölder inequality and the boundedness of $b(t, i)$ we obtain
\[
\int_{t-\tau}^t b(s, r(s))\, w(X(s))\,ds < \infty \quad \text{a.s.}
\]
This, together with (5.27) and (5.28), yields
\[
\lim_{t \to \infty} X^2(t) < \infty \quad \text{and} \quad \int_0^\infty X^4(t)\,dt < \infty.
\]
So we must have
\[
\lim_{t \to \infty} X(t) = 0.
\]
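The conclusion can be checked numerically. The sketch below simulates (5.24) with the generator $\Gamma$ given above; the constant coefficients $a(t,1)=2$, $a(t,2)=3$, $b(t,i)=0.5$, $\sigma(t,i)=1$ are an illustrative assumption chosen so that the stated conditions hold ($4\cdot 2 + 1 - 1 = 8 \ge 2$ and $2\cdot 3 - 2 - 1 = 3 \ge 1$).

```python
import math
import random

# Euler-Maruyama sketch of example (5.24).  The generator matches the text
# (gamma_12 = 1, gamma_21 = 2); the constant a, b, sigma are assumed values
# satisfying the conditions of Section 5.5.
A = {1: 2.0, 2: 3.0}
B_COEF = 0.5
SIGMA = 1.0
GAMMA = {1: {2: 1.0}, 2: {1: 2.0}}

def simulate(T=50.0, dt=1e-3, tau=0.5, x0=1.0, i0=1, seed=1):
    rng = random.Random(seed)
    lag = int(round(tau / dt))
    xs, r = [x0] * (lag + 1), i0         # constant initial segment xi on [-tau, 0]
    for _ in range(int(round(T / dt))):
        x, y = xs[-1], xs[-1 - lag]      # y approximates X(t - tau)
        dB = rng.gauss(0.0, math.sqrt(dt))
        xs.append(x + (-A[r] * x + B_COEF * y) * dt + SIGMA * x * dB)
        j = 2 if r == 1 else 1
        if rng.random() < GAMMA[r][j] * dt:  # switch with prob gamma_rj * dt
            r = j
    return xs

path = simulate()
print(abs(path[-1]))   # should be very close to 0, consistent with (5.23)
```

For these coefficients the pathwise Lyapunov exponent is strictly negative, so $|X(t)|$ decays to zero along each simulated path, as the analysis predicts.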
Bibliography
[1] W.J. Anderson, Continuous-Time Markov Chains, Springer-Verlag, 1991.
[2] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, 1972.
[3] M. Athans, Command and control (C2) theory: A challenge to control science, IEEE Trans. Automat. Control 32 (1987), 286-293.

[4] G.K. Basak, A. Bisi and M.K. Ghosh, Stability of a random diffusion with linear drift, J. Math. Anal. Appl. 202 (1996), 604–622.

[5] L. Benvenuti, M.D. Di Benedetto and J.W. Grizzle, Approximate output tracking for nonlinear non-minimum phase systems with an application to flight control, J. Nonlin. Robust Cont. 4 (1993), 397-414.

[6] R.N. Bhattacharya, Criteria for recurrence and existence of invariant measures for multidimensional diffusions, Ann. Probab. 6 (1978), 541-553.
[7] E.K. Boukas, Control of systems with controlled jump Markov disturbances, Control Theory and Advanced Technology 9 (1993), 577–597.

[8] E.K. Boukas and Z.K. Liu, Robust H∞ control of discrete-time Markovian jump linear systems with mode-dependent time-delays, IEEE Trans. Automat. Control 46 (2001), 1918-1924.

[9] C. Boulanger, Stabilization of a class of nonlinear composite stochastic systems, Stoch. Anal. Appl. 18 (2000), 723-753.

[10] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.

[11] M.S. Branicky, Multiple Lyapunov functions and other analysis tools for switched and hybrid systems, IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 475–482, 1998.

[12] R. De Carlo, M. Branicky and S. Pettersson, Perspectives and results on the stability and stabilizability of hybrid systems, Proceedings of the IEEE 88 (2000), 1069-1082.
[13] A.N. Michel and B. Hu, Towards a stability theory for hybrid dynamical systems, Automatica, vol.35, pp. 371–384, 1999.
[14] C.I. Byrnes and A. Isidori, New results and examples in nonlinear feedback stabilization, Systems Contr. Lett. 12 (1989), 437-442.

[15] Y.Y. Cao, Y.X. Sun and J. Lam, Delay-dependent robust H∞ control for uncertain systems with time-varying delays, IEE Proceedings - Control Theory and Applications, 145, 338-344.

[16] O.L.V. Costa and K. Boukas, Necessary and sufficient condition for robust stability and stabilizability of continuous-time linear systems with Markovian jumps, J. Optimization Theory Appl. 99 (1998), 1155–1167.

[17] V. Dragan and T. Morozan, Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise, Stoch. Anal. Appl. 20 (2002), 33–92.
[18] X. Feng, K.A. Loparo, Y. Ji and H.J. Chizeck, Stochastic stability properties of jump linear systems,IEEE Trans. Automat. Control 37 (1992), 38-53.
[19] P. Florchinger, Global stabilization of composite stochastic systems, Computers Math. Applic. 33 (1997), 127-135.

[20] P. Florchinger, A. Iggidr and G. Sallet, Stabilization of a class of nonlinear stochastic systems, Stoch. Proc. Appl. 50 (1994), 235-243.

[21] Z.Y. Gao and N.U. Ahmed, Feedback stabilizability of nonlinear stochastic systems with state-dependent noise, International Journal of Control 45 (1987), 729-737.

[22] M.K. Ghosh, A. Arapostathis and S.I. Marcus, Optimal control of switching diffusions with applications to flexible manufacturing systems, SIAM J. Control Optim. 31 (1993), 1183-1204.

[23] M.K. Ghosh, A. Arapostathis and S.I. Marcus, Ergodic control of switching diffusions, SIAM J. Control Optim. 35 (1997), 1952-1988.
[24] R.Z. Has’minskii, Stochastic Stability of Differential Equations, Sijthoff and Noordhoff, Alphen, 1981.
[25] J. Hauser, S. Sastry and P. Kokotovic, Nonlinear control via approximate input-output linearization: the ball and beam example, IEEE Trans. Automat. Contr. 37 (1992), 392-398.

[26] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland Publishing Company, 1981.

[27] A. Isidori, Nonlinear Control Systems, Springer-Verlag, London, 1989.

[28] E.T. Jeung, J.H. Kim and H.B. Park, H∞ output feedback controller design for linear systems with time-varying delayed state, IEEE Trans. Automat. Control 43 (1998), 971-974.

[29] M. Johansson and A. Rantzer, Computation of piecewise quadratic Lyapunov functions for hybrid systems, IEEE Trans. Automat. Contr. 43 (1998), 555-559.

[30] Y. Ji and H.J. Chizeck, Controllability, stabilizability and continuous-time Markovian jump linear quadratic control, IEEE Trans. Automat. Control 35 (1990), 777–788.

[31] I.Ya. Kats and N.N. Krasovskii, On stability of systems with random parameters, Prikladnaya Matematika i Mekhanika 27 (1960), 809-823 (in Russian).
[32] T. Kazangey and D.D. Sworder, Effective federal policies for regulating residential housing, Proc.Summer Computer Simulation Conference (1971), 1120–1128.
[33] P. Kokotovic, A. Bensoussan and G. Blankenship, eds., Singular Perturbations and Asymptotic Analysis in Control Systems, Lecture Notes in Control and Inform. Sciences Series, 90, Springer-Verlag, Berlin, 1987.

[34] P. Kokotovic and H.J. Sussmann, A positive real condition for global stabilization of nonlinear systems, Systems Contr. Lett. 13 (1989), 125-133.

[35] N.N. Krasovskii and E.A. Lidskii, Analytic design of controllers in systems with random attributes, I, II, III, Automation Remote Control 22 (1961), 1021-1289.
[36] H. Kushner, Stochastic Stability and Control, Academic Press, 1967.
[37] H.J. Kushner and P. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time, Springer, 2001.

[38] Y. Lin and E.D. Sontag, A universal formula for stabilization with bounded controls, Systems and Control Lett. 16 (1991), 393-397.

[39] R.Sh. Liptser and A.N. Shiryayev, Theory of Martingales, Horwood, Chichester, UK, 1989.
[40] M. Lewin, On the boundedness, recurrence and stability of solutions of an Ito equation perturbed by a Markov chain, Stochastic Analysis and Its Applications 4 (1986), 431-487.

[41] G.S. Ladde and V. Lakshmikantham, Random Differential Inequalities, Academic Press, 1980.
[42] D. Liberzon, J.P. Hespanha and A.S. Morse, Stability of switched systems: A Lie algebraic approach,Systems Contr. Lett. 37 (1999), 117-122.
[43] X. Mao, Stochastic Differential Equations and Applications, Horwood, England, 1997.
[44] X. Mao, Stability of stochastic differential equations with Markovian switching, Stoch. Proc. Appl. 79 (1999), 45–67.
[45] X. Mao, A. Matasov and A.B. Piunovskiy, Stochastic differential delay equations with Markovianswitching, Bernoulli 6 (2000), 73–90.
[46] X. Mao, Some contributions to stochastic asymptotic stability and boundedness via multiple Lyapunov functions, J. Math. Anal. Appl. 260 (2001), 325-340.

[47] X. Mao, A note on the LaSalle-type theorems for stochastic differential delay equations, Journal of Mathematical Analysis and Applications 268 (2002), 125-142.
[48] X. Mao, G. Yin and C. Yuan, Stabilization and destabilization of hybrid stochastic differentialequations, preprint.
[49] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.
[50] M. Mariton and P. Bertrand, Output feedback for a class of linear systems with stochastic jump parameters, IEEE Trans. Automat. Control 30 (1985), 898-900.

[51] G.N. Milstein, Mean square stability of linear systems driven by Markov chain, Prikladnaya Matematika i Mekhanika 36 (1972), 537-545 (in Russian).

[52] G.N. Milstein, Numerical Integration of Stochastic Differential Equations, Kluwer Academic Publishers, 1995.

[53] S.-E.A. Mohammed, Stochastic Functional Differential Equations, Longman, Harlow/New York, 1984.

[54] D.D. Moerder, N. Halyo, J.R. Braussard and A.K. Caglayan, Application of precomputed control laws in a reconfigurable aircraft flight control system, J. Guidance, Control Dyn. 12 (1989), 325-333.
[55] T. Morozan, Parametrized Riccati equations for controlled linear differential systems with jumpMarkov perturbations, Stoch. Anal. Appl. 16 (1998), 661–682.
[56] A.S. Morse, Control using logic based switching, in: A. Isidori, ed., Trends in Control, Springer-Verlag, London, 1995, 69-114.

[57] H. Nijmeijer and A.J. van der Schaft, Nonlinear Dynamical Control Systems, Springer-Verlag, 1991.

[58] P. Park, A delay-dependent stability criterion for systems with uncertain time-invariant delays, IEEE Trans. Automat. Control 44 (1999).

[59] P.V. Pakshin, Robust stability and stabilization of a family of jumping stochastic systems, Nonlinear Analysis 30 (1997), 2855-2866.
[60] G. Pan and Y. Bar-Shalom, Stabilization of jump linear Gaussian systems without mode observations,Int. J. Contr. 64 (1996), 631–661.
[61] A. Saberi, P.V. Kokotovic and H.J. Sussmann, Global stabilization of partially linear composite systems, SIAM J. Control Optim. 28 (1990), 1491-1503.

[62] L. Shaikhet, Stability of stochastic hereditary systems with Markov switching, Theory of Stochastic Processes 2 (1996), 180–184.
[63] A.V. Skorohod, Asymptotic Methods in the Theory of Stochastic Differential Equations, AmericanMathematical Society, Providence, 1989.
[64] C.E. de Souza and M.D. Fragoso, H∞ control for linear systems with Markovian jumping parameters, Control Theory and Advanced Technology 9 (1993), 457-466.
[65] P. Seiler and R. Sengupta, A Bounded Real Lemma for Jump Systems. IEEE Trans. Automat.Control 48(2003), 1651-1654.
[66] D.D. Sworder and V.G. Robinson, Feedback regulators for jump parameter systems with state and control dependent transition rates, IEEE Transactions on Automatic Control 18 (1973), 355-360.

[67] L. Salvadori, Some contributions to asymptotic stability theory, Ann. Soc. Sci. Bruxelles 88 (1974), 183-194.

[68] C.J. Tomlin, J. Lygeros, L. Benvenuti and S.S. Sastry, Output tracking for a non-minimum phase dynamic CTOL aircraft model, Proceedings of the IEEE Conference on Decision and Control (CDC), 1995, 1867-1872.
[69] D.D. Sworder and R.O. Rogers, An LQ-solution to a control problem associated with solar thermalcentral receiver, IEEE Trans. Automat. Control 28 (1983), 971–978.
[70] E.I. Verriest, Stability and Control of Time-delay Systems, Lecture Notes in Control and Information Sciences 228, Springer-Verlag, 1998.
[71] Z. Wang, H. Qian and K.J. Burnham, On Stabilization of Bilinear Uncertain Time-Delay StochasticSystems with Markovian Jumping Parameters, IEEE Trans. Automat. Control 47(2002), 640-646.
[72] A.S. Willsky and B.C. Levy, Stochastic stability research for complex power systems, DOE Contract Report ET-76-C-01-2295, LIDS, MIT, 1979.

[73] J.L. Willems and J.C. Willems, Feedback stabilizability for stochastic systems with state and control dependent noise, Automatica 12 (1976), 343-357.

[74] W.M. Wonham, Random differential equations in control theory, in: A.T. Bharucha-Reid (Ed.), Probabilistic Methods in Applied Mathematics, Vol. 2, Academic Press, 1970, 131–212.

[75] H. Ye, A. Michel and L. Hou, Stability theory for hybrid dynamical systems, IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 461–474, 1998.
[76] C. Yuan and X. Mao, Asymptotic stability in distribution of stochastic differential equations withMarkovian switching, Stoch. Proc. Appl. 103 (2003), 277–291.
[77] C. Yuan and J. Lygeros, Asymptotic stability and boundedness of delay switching diffusions, 7th International Workshop on Hybrid Systems: Computation and Control (HSCC 2004), 646-659.

[78] C. Yuan and J. Lygeros, Stabilization of a class of stochastic differential equations with Markovian switching, Proceedings of MTNS 2004. In press.