Transcript

    Markov Semigroups

    Doctoral School, Academic Year 2011/12

    Paolo Guiotto

Contents

1 Introduction
2 Preliminaries: functional setting
3 Markov Processes
4 Feller processes
5 Strongly continuous semigroups on Banach spaces
6 Hille-Yosida theorem
7 Generators of Feller semigroups
8 Examples
  8.2 Brownian motion
    8.2.1 Case d = 1
    8.3.1 Case d ≥ 2
  8.11 Ising model
9 Exponential Formula

    1 Introduction

The main aim of these notes is to connect Markov processes to semigroups of linear operators on function spaces, an important connection that provides a very useful and natural way to define Markov processes through their associated semigroup.

There are lots of different definitions of Markov process in the literature. If this creates a little bit of confusion at first sight, all of them are based of course on the same idea: a Markov process is some evolution phenomenon whose future depends upon the past only through the present. Actually, in most applications we are interested in families of Markov processes living in some state space $E$, characterized by a parameter $x \in E$ which represents the starting point for the various processes of the family. Moreover, we could define the processes as usual stochastic processes (that is, functions of time and of some random parameter) or, as we prefer here, through their laws, that is, measures on the path space that,


for a general Markov process, is the space $D_E[0,+\infty[$ of $E$-valued functions continuous from the right and with limits from the left (so they may have jumps).

As for ordinary dynamical systems, a possibly nonlinear dynamics naturally induces a linear dynamics on observables, that is, numerical functions defined on the state space $E$. We gain something in description (linearity), but we have to move to an infinite-dimensional context (function spaces). A priori this is neither better nor worse, but for some questions it may be preferable to work in a possibly infinite-dimensional but linear setting. In many applications (e.g. Markov processes arising as diffusions or interacting particle systems) this approach gives a very quick way to define the process itself by defining just a linear (generally unbounded) operator on observables: the so-called infinitesimal generator.

    2 Preliminaries: functional setting

Throughout this section we provide the preliminaries about the main settings in which we will work. In all that follows, $(E,d)$ will play the role of state space and will be a locally compact metric space. We will call:

$\mathcal{B}(E)$ the $\sigma$-algebra of Borel sets of $E$;

$B(E)$ the set of bounded and measurable real-valued functions on $E$: in particular we recall that a function $\varphi : E \to \mathbb{R}$ is called measurable if $\varphi^{-1}(A) \in \mathcal{B}(E)$ for any Borel set $A \subset \mathbb{R}$;

$C_0(E)$ the set of continuous real-valued functions on $E$ vanishing at infinity. By this we mean, in particular, that for a fixed $x_0 \in E$,
\[
\forall \varepsilon > 0, \ \exists R(\varepsilon) > 0 \ : \ |\varphi(x)| \le \varepsilon, \quad \forall x \in E \ : \ d(x,x_0) \ge R(\varepsilon). \tag{2.1}
\]

Of course $C_0(E) \subset B(E)$. On these spaces a natural norm is defined,
\[
\|\varphi\|_\infty := \sup_{x\in E} |\varphi(x)|, \quad \varphi \in B(E).
\]
It is standard work to check that $B(E)$ and $C_0(E)$ are Banach spaces with this norm. In general, a function $\varphi : E \to \mathbb{R}$ will be called an observable. Moreover, if $\varphi \in C_0(E)$ the sup norm is actually a true maximum, as is easily proved by applying the Weierstrass theorem, $E$ being locally compact. Sometimes it will be useful to recall the

Theorem 2.1 (Riesz). The topological dual of $C_0(E)$ is the space of all bounded real-valued measures on $\mathcal{B}(E)$. In particular,
\[
\langle \mu, \varphi \rangle = \int_E \varphi(x)\, \mu(dx).
\]
Moreover, $C_0(E)^* \supset \overline{\{\delta_x : x \in E\}}$, where $\langle \delta_x, \varphi \rangle = \varphi(x)$.

The natural space for trajectories of $E$-valued Markov processes is the space
\[
D_E[0,+\infty[ \ := \{\omega : [0,+\infty[ \to E, \ \text{right continuous and with left limits}\}.
\]
Frenchmen call this type of trajectory càdlàg: continue à droite et avec limite à gauche. The space $E$ is called the state space. We also define the classical coordinate mappings
\[
\pi_t : D_E[0,+\infty[ \to E, \quad \pi_t(\omega) := \omega(t), \quad t \ge 0.
\]
Moreover, we will define


$\mathcal{F}$, the smallest $\sigma$-algebra on $D_E[0,+\infty[$ such that all the $\pi_t$ are measurable;

$\mathcal{F}_t$, the smallest $\sigma$-algebra on $D_E[0,+\infty[$ such that all the $\pi_s$ for $s \le t$ are measurable.

Clearly $(\mathcal{F}_t)$ is an increasing family of $\sigma$-algebras.

    3 Markov Processes

Definition 3.1. Let $(E,d)$ be a metric space. A family $(P_x)_{x\in E}$ of probability measures on the path space $(D_E[0,+\infty[, \mathcal{F})$ is called a Markov process if

i) $P_x(\omega(0) = x) = 1$, for any $x \in E$.

ii) (Markov property) $P_x(\omega(t+\cdot) \in F \mid \mathcal{F}_t) = P_{\omega(t)}(F)$, for any $F \in \mathcal{F}$ and $t \ge 0$.

iii) the mapping $x \mapsto P_x(F)$ is measurable for any $F \in \mathcal{F}$.

Let $(P_x)_{x\in E}$ be a Markov process. We denote by $E_x$ the expectation w.r.t. $P_x$, that is,
\[
E_x[\Phi] = \int_{D_E[0,+\infty[} \Phi \, dP_x, \quad \Phi \in L^1(D_E[0,+\infty[, \mathcal{F}, P_x).
\]
The Markov property has a more flexible and general form by means of conditional expectations:
\[
E_x[\Phi(\omega(t+\cdot)) \mid \mathcal{F}_t] = E_{\omega(t)}[\Phi], \quad \forall \Phi \ \text{bounded measurable}. \tag{3.1}
\]

We now introduce the fundamental object of our investigations: let us define
\[
S(t)\varphi(x) := E_x[\varphi(\omega(t))] = \int_{D_E[0,+\infty[} \varphi(\omega(t))\, dP_x(\omega), \quad \varphi \in B(E). \tag{3.2}
\]
We will see immediately that each $S(t)$ is well defined for $t \ge 0$. The family $(S(t))_{t\ge0}$ is called the Markov semigroup associated to the process $(P_x)_{x\in E}$. This is because of the following

Proposition 3.2. Let $(P_x)_{x\in E}$ be a Markov process on $E$ and $(S(t))_{t\ge0}$ the associated Markov semigroup. Then:

i) $S(t) : B(E) \to B(E)$ is a bounded linear operator for any $t \ge 0$, and $\|S(t)\varphi\|_\infty \le \|\varphi\|_\infty$ for any $\varphi \in B(E)$, $t \ge 0$ (that is, $\|S(t)\| \le 1$ for any $t \ge 0$).

ii) $S(0) = I$.

iii) $S(t+r) = S(t)S(r)$, for any $t, r \ge 0$.

iv) $S(t)\varphi \ge 0$ a.e. if $\varphi \ge 0$ a.e.: in particular, if $\varphi \le \psi$ a.e., then $S(t)\varphi \le S(t)\psi$ a.e.

v) $S(t)\mathbf{1} = \mathbf{1}$ a.e. (here $\mathbf{1}$ is the function constantly equal to $1$).

  • 8/12/2019 MarkovSemgroups

    4/26

    4

Proof. i) It is standard (for the measurability, proceed by approximation: the statement is true for $\varphi = \chi_A$, $A \in \mathcal{B}(E)$, by iii) of the definition of Markov process, because
\[
S(t)\chi_A(x) = P_x(\omega(t) \in A) = P_x(\pi_t^{-1}(A)),
\]
and $F := \pi_t^{-1}(A) \in \mathcal{F}$; hence it holds for sums of such $\chi_A$, that is, for simple functions; for general $\varphi$ take first $\varphi \ge 0$ and approximate it by an increasing sequence of simple functions). Linearity follows from the linearity of the integral. Clearly
\[
|S(t)\varphi(x)| \le E_x[|\varphi(\omega(t))|] \le \|\varphi\|_\infty, \ \forall x \in E, \ \Longrightarrow \ \|S(t)\varphi\|_\infty \le \|\varphi\|_\infty, \ \forall t \ge 0.
\]
In other words $S(t) \in L(B(E))$ and $\|S(t)\| \le 1$.

ii) Evident.

iii) This involves the Markov property:
\[
S(t+r)\varphi(x) = E_x[\varphi(\omega(t+r))] = E_x\big[E_x[\varphi(\omega(t+r)) \mid \mathcal{F}_t]\big] \overset{(3.1)}{=} E_x\big[E_{\omega(t)}[\varphi(\omega(r))]\big] = E_x\big[(S(r)\varphi)(\omega(t))\big] = S(t)[S(r)\varphi](x).
\]

iv), v) Evident. $\square$
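When the state space is finite, $E = \{1,\dots,n\}$, all of this becomes concrete linear algebra: the process is a Markov jump chain with a rate matrix $Q$ (nonnegative off-diagonal entries, zero row sums), and the associated semigroup is $S(t) = e^{tQ}$ acting on observables $\varphi : E \to \mathbb{R}$, i.e. on vectors. The following is a minimal numerical sketch of Proposition 3.2; the matrix $Q$ is an arbitrary illustrative choice.

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential by truncated power series (fine for small ||M||)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Rate matrix of a 3-state Markov jump process: off-diagonal entries are
# jump rates, rows sum to zero; the associated semigroup is S(t) = e^{tQ}.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.2, 0.3, -0.5]])
S = lambda t: expm(t * Q)

t, r = 0.7, 1.3
phi = np.array([2.0, -1.0, 0.5])                   # an observable phi: E -> R
assert np.allclose(S(t + r), S(t) @ S(r))          # iii) semigroup law
assert (S(t) >= -1e-12).all()                      # iv) positivity
assert np.allclose(S(t) @ np.ones(3), np.ones(3))  # v)  S(t)1 = 1
assert np.abs(S(t) @ phi).max() <= np.abs(phi).max() + 1e-12  # i) contraction
```

Here $S(t)\varphi$ is just a matrix-vector product, and the four assertions mirror properties i), iii), iv), v) of the proposition.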

    4 Feller processes

Working with bounded measurable observables is in general quite difficult because of their poor properties, so it is better to restrict to continuous observables:

Definition 4.1 (Feller property). Let $(S(t))_{t\ge0}$ be the Markov semigroup associated to a Markov process $(P_x)_{x\in E}$, where $(E,d)$ is locally compact. We say that the semigroup fulfills the Feller property if
\[
S(t)f \in C_0(E), \quad \forall f \in C_0(E), \ \forall t \ge 0.
\]

This property turns out to yield the strong continuity of the semigroup:

Theorem 4.2 (strong continuity). Let $(S(t))_{t\ge0}$ be the Markov semigroup associated to a Markov process $(P_x)_{x\in E}$, where $(E,d)$ is locally compact. If $(S(t))_{t\ge0}$ fulfills the Feller property, then it is strongly continuous on $C_0(E)$, that is,
\[
S(\cdot)\varphi \in C([0,+\infty[; C_0(E)), \quad \forall \varphi \in C_0(E).
\]

Proof. First we prove right weak continuity, that is,
\[
\lim_{t\to t_0+} S(t)\varphi(x) = S(t_0)\varphi(x), \quad \forall x \in E, \ \forall \varphi \in C_b(E).
\]

This follows immediately as an application of dominated convergence and because trajectories are right continuous. Indeed,
\[
\lim_{t\to t_0+} S(t)\varphi(x) = \lim_{t\to t_0+} \int_{D_E[0,+\infty[} \varphi(\omega(t))\, P_x(d\omega).
\]


Now, $|\varphi(\omega(t))| \le \|\varphi\|_\infty$, which is $P_x$-integrable, and $\omega(t) \to \omega(t_0)$ as $t \to t_0+$ because $\omega \in D_E[0,+\infty[$. In a similar way we obtain the existence of
\[
\lim_{t\to t_0-} S(t)\varphi(x), \quad \forall x \in E, \ \forall \varphi \in C_0(E), \ \forall t_0 > 0.
\]
So, fixing $x \in E$, the function $t \mapsto S(t)\varphi(x)$ has a limit from the left and from the right at any point of $[0,+\infty[$. It is a standard first-year Analysis exercise to deduce that $S(\cdot)\varphi(x)$ has at most a countable number of discontinuities, hence is measurable.

Now, define
\[
R_\lambda\varphi(x) := \int_0^{+\infty} e^{-\lambda t} S(t)\varphi(x)\, dt, \quad \lambda > 0, \ x \in E.
\]
We will see later what the meaning of this is. The integral is well defined and convergent because
\[
|e^{-\lambda t} S(t)\varphi(x)| = e^{-\lambda t}|S(t)\varphi(x)| \le e^{-\lambda t}\|\varphi\|_\infty.
\]
We claim that $R_\lambda\varphi \in C_0(E)$ if $\varphi \in C_0(E)$. Indeed, everything follows immediately as an application of dominated convergence, because each $S(t)\varphi \in C_0(E)$ and because of the usual bound $|S(t)\varphi(x)| \le \|\varphi\|_\infty$.

We will now show strong continuity for $\varphi$ of the type $R_\lambda\psi$, that is, $S(\cdot)R_\lambda\psi \in C([0,+\infty[; C_0(E))$. This will be done in steps: the first is to prove strong right continuity, that is,
\[
S(t)R_\lambda\psi \longrightarrow S(t_0)R_\lambda\psi \quad \text{in } C_0(E), \ \text{as } t \to t_0+.
\]
We start with the case $t_0 = 0$. Notice that
\[
S(t)R_\lambda\psi(x) = S(t)\int_0^{+\infty} e^{-\lambda r} S(r)\psi(x)\, dr = \int_0^{+\infty} e^{-\lambda r} S(r+t)\psi(x)\, dr = e^{\lambda t}\int_t^{+\infty} e^{-\lambda r} S(r)\psi(x)\, dr,
\]
hence
\[
S(t)R_\lambda\psi(x) - R_\lambda\psi(x) = (e^{\lambda t}-1)\int_t^{+\infty} e^{-\lambda r} S(r)\psi(x)\, dr - \int_0^t e^{-\lambda r} S(r)\psi(x)\, dr,
\]
therefore
\[
\|S(t)R_\lambda\psi - R_\lambda\psi\|_\infty \le \Big[(e^{\lambda t}-1)\int_t^{+\infty} e^{-\lambda r}\, dr + \int_0^t e^{-\lambda r}\, dr\Big]\|\psi\|_\infty \le \Big[\frac{e^{\lambda t}-1}{\lambda} + t\Big]\|\psi\|_\infty \longrightarrow 0,
\]
as $t \to 0+$. For generic $t_0$ we have
\[
\|S(t)R_\lambda\psi - S(t_0)R_\lambda\psi\|_\infty = \|S(t_0)\big(S(t-t_0)R_\lambda\psi - R_\lambda\psi\big)\|_\infty \le \|S(t-t_0)R_\lambda\psi - R_\lambda\psi\|_\infty \longrightarrow 0,
\]
as $t \to t_0+$. Now we can prove the left continuity at $t_0$: assuming now $t < t_0$,
\[
\|S(t)R_\lambda\psi - S(t_0)R_\lambda\psi\|_\infty = \|S(t)\big(R_\lambda\psi - S(t_0-t)R_\lambda\psi\big)\|_\infty \le \|S(t_0-t)R_\lambda\psi - R_\lambda\psi\|_\infty \longrightarrow 0, \quad t \to t_0-.
\]

We will now show that the set of the $R_\lambda\varphi$ ($\lambda > 0$, $\varphi \in C_0(E)$) is dense in $C_0(E)$. To this aim take $\mu \in C_0(E)^*$ such that
\[
0 = \langle \mu, R_\lambda\varphi \rangle = \int_E R_\lambda\varphi(x)\, d\mu(x), \quad \forall \varphi \in C_0(E), \ \forall \lambda > 0.
\]
The aim is to prove that $\mu = 0$. Now notice that
\[
\lambda R_\lambda\varphi(x) = \lambda\int_0^{+\infty} e^{-\lambda t} S(t)\varphi(x)\, dt = \int_0^{+\infty} e^{-r} S\Big(\frac{r}{\lambda}\Big)\varphi(x)\, dr.
\]
Applying the right continuity at $0$ of the semigroup and dominated convergence, it is easy to deduce that, as $\lambda \to +\infty$,
\[
\lambda R_\lambda\varphi(x) \longrightarrow \int_0^{+\infty} e^{-r} S(0)\varphi(x)\, dr = S(0)\varphi(x)\int_0^{+\infty} e^{-r}\, dr = \varphi(x).
\]


Moreover, always by the previous formula,
\[
\|\lambda R_\lambda\varphi\|_\infty \le \|\varphi\|_\infty \int_0^{+\infty} e^{-r}\, dr = \|\varphi\|_\infty.
\]
Therefore, applying dominated convergence ($\mu$ is a finite measure), we have
\[
0 = \lambda\int_E R_\lambda\varphi\, d\mu \longrightarrow \int_E \varphi\, d\mu, \quad \forall \varphi \in C_0(E).
\]

But then $\langle \mu, \varphi \rangle = 0$ for any $\varphi \in C_0(E)$, and this means $\mu = 0$.

Conclusion: given $\varphi \in C_0(E)$ and $\varepsilon > 0$, take $\varphi_\varepsilon \in R_\lambda(C_0(E))$ such that $\|\varphi - \varphi_\varepsilon\|_\infty \le \varepsilon$ (such a $\varphi_\varepsilon$ exists by the previous step). Each $S(\cdot)\varphi_\varepsilon \in C([0,+\infty[; C_0(E))$ by the first step. Therefore
\[
\|S(t)\varphi - S(t_0)\varphi\|_\infty \le \|S(t)\varphi - S(t)\varphi_\varepsilon\|_\infty + \|S(t)\varphi_\varepsilon - S(t_0)\varphi_\varepsilon\|_\infty + \|S(t_0)\varphi_\varepsilon - S(t_0)\varphi\|_\infty \le 2\varepsilon + \|S(t)\varphi_\varepsilon - S(t_0)\varphi_\varepsilon\|_\infty,
\]
therefore
\[
\limsup_{t\to t_0} \|S(t)\varphi - S(t_0)\varphi\|_\infty \le 2\varepsilon + \limsup_{t\to t_0} \|S(t)\varphi_\varepsilon - S(t_0)\varphi_\varepsilon\|_\infty = 2\varepsilon,
\]
and because $\varepsilon$ is arbitrary the conclusion follows. $\square$

Because this will be the main subject of our investigations, here we introduce the

Definition 4.3. A family of linear operators $(S(t))_{t\ge0} \subset L(C_0(E))$, $(E,d)$ a locally compact metric space, is called a Feller semigroup if

i) $S(0) = I$;

ii) $S(t+r) = S(t)S(r)$ for any $t, r \ge 0$;

iii) $S(\cdot)\varphi \in C([0,+\infty[; C_0(E))$ for any $\varphi \in C_0(E)$;

iv) $S(t)\varphi \ge 0$ for any $\varphi \in C_0(E)$, $\varphi \ge 0$, and for all $t \ge 0$;

v) $S(t)\varphi \le 1$ for any $\varphi \in C_0(E)$, $\varphi \le 1$, and for all $t \ge 0$.

We say that a Feller semigroup is conservative if, for any sequence
\[
(\varphi_n) \subset C_0(E) \ : \ \varphi_n \uparrow 1 \ \text{on } E, \ \Longrightarrow \ S(t)\varphi_n \uparrow 1 \ \text{on } E.
\]

Remark 4.4. Notice in particular that from the previous properties it follows that $\|S(t)\| \le 1$. Indeed, if $\|\varphi\|_\infty \le 1$, in particular $|\varphi(x)| \le 1$, that is, $-1 \le \varphi(x) \le 1$ for any $x \in E$. By iv) and v) we have
\[
-1 \le \varphi \le 1 \ \Longrightarrow \ -1 \le S(t)\varphi(x) \le 1, \ \forall x \in E \ \Longrightarrow \ |S(t)\varphi(x)| \le 1, \ \forall x \in E \ \Longrightarrow \ \|S(t)\varphi\|_\infty \le 1.
\]
But this means exactly $\|S(t)\| \le 1$.

It is natural to ask whether from a Feller semigroup it is possible to construct a Markov process. This is actually true, and the first step is the construction of a transition probability function:


Proposition 4.5. Let $(S(t))_{t\ge0}$ be a conservative Feller semigroup on $C_0(E)$, $(E,d)$ a locally compact metric space. For any $t \ge 0$, $x \in E$ there exists a probability measure
\[
P_t(x,\cdot) : \mathcal{B}(E) \to [0,1],
\]
such that
\[
S(t)\varphi(x) = \int_E \varphi(y)\, P_t(x,dy), \quad \forall \varphi \in C_0(E).
\]
Moreover:

i) $x \mapsto P_t(x,F) \in B(E)$ for any $t \ge 0$, $F \in \mathcal{B}(E)$.

ii) (Chapman-Kolmogorov equation) for any $t, r \ge 0$,
\[
P_{t+r}(x,F) = \int_E P_t(x,dy)\, P_r(y,F). \tag{4.1}
\]
$P_t(x,dy)$ is called a transition probability.

Proof. Fix $t \ge 0$ and $x \in E$ and consider the functional $\varphi \mapsto S(t)\varphi(x)$. It is clearly linear and continuous, so by the Riesz representation theorem 2.1 there exists a finite Borel measure $P_t(x,\cdot)$ on $E$ such that
\[
S(t)\varphi(x) = \int_E \varphi(y)\, P_t(x,dy), \quad \forall \varphi \in C_0(E).
\]
Moreover, because $S(t)\varphi \ge 0$ if $\varphi \ge 0$, the measure $P_t(x,\cdot)$ is positive. Moreover, by conservativity and monotone convergence,
\[
P_t(x,E) = \int_E P_t(x,dy) = \lim_n \int_E \varphi_n(y)\, P_t(x,dy) = \lim_n S(t)\varphi_n(x) = 1, \quad \forall x \in E, \ t \ge 0,
\]
so $P_t(x,dy)$ turns out to be a probability measure. By standard approximation methods i) follows. ii) follows by the semigroup property: first notice that
\[
\int_E \varphi(z)\, P_{t+r}(x,dz) = S(t+r)\varphi(x) = S(t)S(r)\varphi(x) = \int_E S(r)\varphi(y)\, P_t(x,dy) = \int_E \Big(\int_E \varphi(z)\, P_r(y,dz)\Big) P_t(x,dy).
\]
This holds for any $\varphi \in C_0(E)$, hence, by approximation, for any $\varphi \in B(E)$, therefore also in the case $\varphi = \chi_F$. In this case we obtain
\[
P_{t+r}(x,F) = \int_E \Big(\int_E \chi_F(z)\, P_r(y,dz)\Big) P_t(x,dy) = \int_E P_r(y,F)\, P_t(x,dy),
\]
which is just the Chapman-Kolmogorov equation. $\square$
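For Brownian motion on $E = \mathbb{R}$ the transition probabilities have the Gaussian density $p_t(x,y) = (2\pi t)^{-1/2} e^{-(y-x)^2/2t}$, and (4.1) becomes the statement that Gaussian kernels convolve. A quick numerical check of this instance of Chapman-Kolmogorov (the particular points and times are arbitrary choices):

```python
import numpy as np

def p(t, x, y):
    # Gaussian transition density of Brownian motion: N(x, t) evaluated at y
    return np.exp(-(y - x) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t, r, x, z = 0.5, 0.8, 0.3, -0.4
ys = np.linspace(-12.0, 12.0, 4001)          # integration grid for y
dy = ys[1] - ys[0]

lhs = p(t + r, x, z)                         # density of P_{t+r}(x, dz)
rhs = (p(t, x, ys) * p(r, ys, z)).sum() * dy # int p_t(x,y) p_r(y,z) dy
assert abs(lhs - rhs) < 1e-8
```

The agreement is essentially exact because the Gaussian decays fast enough that the Riemann sum on a wide grid commits a negligible error.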

Remark 4.6. If $S(t)$ is not conservative, $P_t(x,E)$ could possibly be strictly less than $1$. This corresponds to the case in which the underlying process we are trying to reconstruct, leaving at time $0$ from the state $x$, may exit the state space before time $t$, so that part of the probability mass is lost.

5 Strongly continuous semigroups on Banach spaces

Definition 5.2. Let $(S(t))_{t\ge0}$ be a strongly continuous semigroup on a Banach space $X$. The operator
\[
A\varphi := \lim_{h\to0+} \frac{S(h)\varphi - \varphi}{h}, \quad \varphi \in D(A) := \Big\{\varphi \in X : \lim_{h\to0+} \frac{S(h)\varphi - \varphi}{h} \ \text{exists}\Big\},
\]
is called the infinitesimal generator of $(S(t))_{t\ge0}$.
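For a matrix semigroup $S(t) = e^{tQ}$ the limit in this definition is attained with first-order speed in $h$, since $(e^{hQ}-I)/h = Q + \frac{h}{2}Q^2 + \dots$ This is easy to watch numerically (the $2\times2$ generator below is an arbitrary illustrative choice):

```python
import numpy as np

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])     # illustrative generator
w, V = np.linalg.eig(Q)
Vinv = np.linalg.inv(V)
S = lambda t: (V @ np.diag(np.exp(t * w)) @ Vinv).real   # e^{tQ} via eigendecomposition

phi = np.array([1.0, -0.5])
for h in (1e-2, 1e-3, 1e-4):
    dq = (S(h) @ phi - phi) / h              # the difference quotient of Def. 5.2
    # (S(h) - I)/h = Q + h Q^2/2 + ..., so the error is O(h)
    assert np.abs(dq - Q @ phi).max() < 6.0 * h
```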

The reason why we call it the infinitesimal generator will be clear from the Hille-Yosida theorem: we will first characterize some properties that an infinitesimal generator satisfies; once we have these properties, we will see that they are enough to construct a strongly continuous semigroup from an operator which satisfies them. A first set of properties is given by the

Theorem 5.3. Let $X$ be a Banach space, $(S(t))_{t\ge0}$ a contraction semigroup and $A$ its infinitesimal generator. Then

i) $D(A)$ is dense in $X$;

ii) $A$ is closed (i.e. $G(A) := \{(\varphi, A\varphi) \in X \times X : \varphi \in D(A)\}$ is closed in the product topology of $X \times X$).

Proof. i) Let $\varphi \in X$ and define $\varphi_\tau := \frac{1}{\tau}\int_0^\tau S(r)\varphi\, dr$. Clearly $\varphi_\tau \to \varphi$ as $\tau \to 0$ (mean value theorem). Let us prove that $\varphi_\tau \in D(A)$ for all $\tau > 0$. We have
\[
\frac{1}{h}(S(h)\varphi_\tau - \varphi_\tau) = \frac{1}{\tau}\frac{1}{h}\Big[S(h)\int_0^\tau S(r)\varphi\, dr - \int_0^\tau S(r)\varphi\, dr\Big] = \frac{1}{\tau}\frac{1}{h}\Big[\int_h^{\tau+h} S(r)\varphi\, dr - \int_0^\tau S(r)\varphi\, dr\Big]
\]
\[
= \frac{1}{\tau}\Big[\frac{1}{h}\int_\tau^{\tau+h} S(r)\varphi\, dr - \frac{1}{h}\int_0^h S(r)\varphi\, dr\Big] \longrightarrow \frac{1}{\tau}(S(\tau)\varphi - \varphi),
\]
always in force of the mean value theorem. This proves i).

ii) We have to prove that if $(\varphi_n, A\varphi_n) \to (\varphi, \psi)$, then $(\varphi, \psi) \in G(A)$, i.e. $\varphi \in D(A)$ and $\psi = A\varphi$. In other words, we have to prove that
\[
\lim_{h\to0+} \frac{S(h)\varphi - \varphi}{h} = \psi.
\]
Now
\[
S(h)\varphi - \varphi = \lim_n (S(h)\varphi_n - \varphi_n).
\]

Because $\varphi_n \in D(A)$ we know that $t \mapsto S(t)\varphi_n$ is differentiable at $t = 0$ from the right. More:

Lemma 5.4. If $\varphi \in D(A)$ then
\[
\frac{d}{dt}S(t)\varphi = S(t)A\varphi = AS(t)\varphi, \quad \forall t \ge 0.
\]

Proof of the Lemma. We have to prove that $t \mapsto S(t)\varphi$ is differentiable for all $t > 0$ and that $S(t)\varphi \in D(A)$. We start with the first.

First step: $\frac{d^+}{dt}S(t)\varphi$ exists. Indeed,
\[
\frac{d^+}{dt}S(t)\varphi = \lim_{h\to0+} \frac{S(t+h)\varphi - S(t)\varphi}{h} = S(t)\lim_{h\to0+} \frac{S(h)\varphi - \varphi}{h} = S(t)A\varphi.
\]


Second step: $\frac{d^-}{dt}S(t)\varphi$ exists. Indeed,
\[
\frac{d^-}{dt}S(t)\varphi = \lim_{h\to0+} \frac{S(t)\varphi - S(t-h)\varphi}{h} = \lim_{h\to0+} S(t-h)\frac{S(h)\varphi - \varphi}{h}.
\]
Now,
\[
\Big\|S(t-h)\frac{S(h)\varphi - \varphi}{h} - S(t)A\varphi\Big\| \le \Big\|S(t-h)\Big[\frac{S(h)\varphi - \varphi}{h} - A\varphi\Big]\Big\| + \|S(t-h)A\varphi - S(t)A\varphi\|.
\]
Clearly, by strong continuity, the second term converges to $0$. For the first, by the estimate $\|S(t)\| \le 1$ we obtain
\[
\Big\|S(t-h)\Big[\frac{S(h)\varphi - \varphi}{h} - A\varphi\Big]\Big\| \le \Big\|\frac{S(h)\varphi - \varphi}{h} - A\varphi\Big\| \longrightarrow 0, \quad h \to 0+.
\]
By this the conclusion follows.

Third step: $S(t)\varphi \in D(A)$. Indeed,
\[
\frac{S(h)S(t)\varphi - S(t)\varphi}{h} = S(t)\frac{S(h)\varphi - \varphi}{h} \longrightarrow S(t)A\varphi, \ \Longrightarrow \ AS(t)\varphi = S(t)A\varphi. \ \square
\]

Coming back to the proof of the Theorem,
\[
S(h)\varphi_n - \varphi_n = \int_0^h \frac{d}{dr}S(r)\varphi_n\, dr = \int_0^h S(r)A\varphi_n\, dr \longrightarrow \int_0^h S(r)\psi\, dr.
\]
To justify the last passage, notice that
\[
\Big\|\int_0^h S(r)A\varphi_n\, dr - \int_0^h S(r)\psi\, dr\Big\| \le \int_0^h \|S(r)(A\varphi_n - \psi)\|\, dr \le h\|A\varphi_n - \psi\| \longrightarrow 0.
\]
Finally:
\[
S(h)\varphi - \varphi = \int_0^h S(r)\psi\, dr.
\]
Therefore,
\[
\frac{S(h)\varphi - \varphi}{h} = \frac{1}{h}\int_0^h S(r)\psi\, dr \longrightarrow \psi, \ \Longrightarrow \ \varphi \in D(A) \ \text{and} \ A\varphi = \psi. \ \square
\]

The previous result gives the first indications about the properties of the infinitesimal generator. Of particular interest is the weak continuity property expressed by the closedness of the operator $A$. This property is weaker than continuity and can be fulfilled by unbounded operators. For instance:

Example 5.5. Let $X = C([0,1])$ be endowed with the sup norm and $A$ given by
\[
D(A) := \{\varphi \in C^1([0,1]) : \varphi(0) = 0\}, \quad A\varphi = \varphi',
\]
then $A$ is closed.

Sol. Indeed, if $(\varphi_n) \subset D(A)$ is such that $\varphi_n \to \varphi$ in $X$ with $A\varphi_n = \varphi_n' \to \psi$ in $X$ (that is, uniformly on $[0,1]$), by a well-known result $\varphi \in C^1([0,1])$ and $\varphi' = \psi$, that is, $A\varphi = \psi$. To finish, just notice that $\varphi(0) = \lim_n \varphi_n(0) = 0$, therefore $\varphi \in D(A)$.


Unfortunately, $A$ being unbounded in general, it seems difficult to give a good meaning to the exponential series
\[
S(t) = e^{tA} = \sum_{n=0}^{\infty} \frac{(tA)^n}{n!},
\]
as a way to define the semigroup from a given $A$. There is, however, another possible formula to define the exponential, that is,
\[
e^{tA} = \lim_{n\to+\infty} \Big(I + \frac{tA}{n}\Big)^n = \lim_{n\to+\infty} \Big(I - \frac{tA}{n}\Big)^{-n}. \tag{5.1}
\]
While the first limit seems bad, because $I + \frac{t}{n}A$ and its powers seem bad, the second is much more interesting, because we may expect that $(I - \frac{t}{n}A)^{-1}$ is nicer than $A$.
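The second limit in (5.1) is exactly the backward Euler scheme for $\dot u = Au$: $n$ implicit steps of size $t/n$. For a matrix $A$ (an arbitrary illustrative choice) the convergence is easy to watch numerically:

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.5, -0.5]])     # illustrative bounded generator
t = 2.0
w, V = np.linalg.eig(A)
exact = (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real   # e^{tA}

def euler_resolvent(n):
    # (I - tA/n)^{-n}: n backward Euler steps of size t/n
    step = np.linalg.inv(np.eye(2) - (t / n) * A)
    return np.linalg.matrix_power(step, n)

err = lambda n: np.abs(euler_resolvent(n) - exact).max()
assert err(4000) < err(100) < err(2)         # errors decrease with n
assert err(4000) < 1e-3
```

This is also a preview of Section 9 (Exponential Formula), where the same limit is proved for unbounded generators.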

Example 5.6. Compute $(\lambda I - A)^{-1}$ in the case of $A$ defined in the previous example.

Sol. We have
\[
(\lambda I - A)\varphi = \lambda\varphi - \varphi' = \psi, \quad \varphi = (\lambda I - A)^{-1}\psi.
\]
We may think that, given $\psi$, we have to solve the differential equation $\varphi' = \lambda\varphi - \psi$, whose general solution is
\[
\varphi(x) = e^{\lambda x}\Big(C - \int_0^x e^{-\lambda y}\psi(y)\, dy\Big).
\]
Now, imposing that $\varphi \in D(A)$, we have
\[
\varphi(0) = 0 \ \Longrightarrow \ C = 0 \ \Longrightarrow \ \varphi(x) = (\lambda I - A)^{-1}\psi(x) = -e^{\lambda x}\int_0^x e^{-\lambda y}\psi(y)\, dy.
\]
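A numerical sanity check of this resolvent formula (with the sign written out explicitly): for an arbitrary $\psi$, the function $\varphi(x) = -e^{\lambda x}\int_0^x e^{-\lambda y}\psi(y)\,dy$ should satisfy $\lambda\varphi - \varphi' = \psi$ and $\varphi(0) = 0$.

```python
import numpy as np

lam = 2.0
psi = lambda x: np.cos(3.0 * x)          # arbitrary continuous right-hand side
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]

# phi(x) = -e^{lam x} int_0^x e^{-lam y} psi(y) dy, by cumulative trapezoid rule
g = np.exp(-lam * xs) * psi(xs)
cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0) * dx))
phi = -np.exp(lam * xs) * cum

resid = lam * phi - np.gradient(phi, dx) - psi(xs)   # should vanish
assert abs(phi[0]) < 1e-12
assert np.abs(resid[5:-5]).max() < 1e-4
```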

Writing
\[
(\lambda I - A)^{-1} = \frac{1}{\lambda}\Big(I - \frac{1}{\lambda}A\Big)^{-1},
\]
we find a very familiar concept of Functional Analysis:

Definition 5.7. Let $\lambda \in \mathbb{C}$. If $R_\lambda := (\lambda I - A)^{-1} \in L(X)$ we say that $\lambda \in \rho(A)$ (resolvent set) and we call $R_\lambda$ the resolvent operator. The set $\sigma(A) := \mathbb{C} \setminus \rho(A)$ is called the spectrum of $A$. Analytically the spectrum is divided into:

the point spectrum, denoted by $\sigma_p(A)$, that is, the set of $\lambda \in \mathbb{C}$ such that $\lambda I - A$ is not injective (in other words: the elements of $\sigma_p(A)$ are the eigenvalues);

the continuous spectrum, denoted by $\sigma_c(A)$, that is, the set of $\lambda \in \mathbb{C}$ such that $\lambda I - A$ is bijective but the inverse is not continuous;

the residual spectrum, what remains, that is, $\sigma(A) \setminus \{\sigma_p(A) \cup \sigma_c(A)\}$.

Looking at the second limit in (5.1), we would expect that $\rho(A) \supset [\lambda_0, +\infty[$ for some $\lambda_0 \ge 0$. To this aim we need a relationship between the resolvent operator and the semigroup. This is formally easy: writing $S(t) = e^{tA}$ and treating $A$ as if it were a negative number, we have
\[
\int_0^{+\infty} e^{-\lambda r} S(r)\, dr = \int_0^{+\infty} e^{r(A-\lambda)}\, dr = \Big[\frac{e^{r(A-\lambda)}}{A-\lambda}\Big]_{r=0}^{r=+\infty} = (\lambda I - A)^{-1}.
\]
This formula turns out to be true. Precisely, we have the


Theorem 5.8. Let $(S(t))_{t\ge0}$ be a contraction semigroup on a Banach space $X$ with generator $A$. Then:

i) $\rho(A) \supset \{\lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0\}$ and
\[
R_\lambda\varphi = \int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr, \quad \forall \lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0, \ \forall \varphi \in X; \tag{5.2}
\]

ii) the following estimate holds:
\[
\|R_\lambda\| \le \frac{1}{\mathrm{Re}\,\lambda}, \quad \forall \lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0. \tag{5.3}
\]

[Figure 1: the spectrum of $A$ is contained in the half plane $\mathrm{Re}\,\lambda \le 0$.]

Proof. By the contraction property, $\|e^{-\lambda r} S(r)\varphi\| \le e^{-r\,\mathrm{Re}\,\lambda}\|\varphi\|$; therefore, $r \mapsto e^{-\lambda r}S(r)\varphi$ being in $C([0,+\infty[; X)$, the integral is well defined. By the same estimate we get
\[
\Big\|\int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr\Big\| \le \int_0^{+\infty} e^{-r\,\mathrm{Re}\,\lambda}\, dr\, \|\varphi\| = \frac{1}{\mathrm{Re}\,\lambda}\|\varphi\|, \quad \forall \varphi \in X,
\]

that is, (5.3). It remains to prove that the integral operator is indeed the resolvent operator, that is:

a) $R_\lambda\varphi \in D(A)$, $\forall \varphi \in X$; b) $(\lambda I - A)R_\lambda\varphi = \varphi$, $\forall \varphi \in X$; c) $R_\lambda(\lambda I - A)\varphi = \varphi$, $\forall \varphi \in D(A)$.

We start from the first. Notice that
\[
\frac{S(h)R_\lambda\varphi - R_\lambda\varphi}{h} = \frac{1}{h}\Big[S(h)\int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr - \int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr\Big]
= \frac{1}{h}\Big[\int_h^{+\infty} e^{-\lambda(r-h)} S(r)\varphi\, dr - \int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr\Big]
\]
\[
= \frac{e^{\lambda h}-1}{h}\int_h^{+\infty} e^{-\lambda r} S(r)\varphi\, dr - \frac{1}{h}\int_0^h e^{-\lambda r} S(r)\varphi\, dr \longrightarrow \lambda R_\lambda\varphi - \varphi, \quad h \to 0+.
\]


Therefore, $R_\lambda\varphi \in D(A)$ and
\[
AR_\lambda\varphi = \lambda R_\lambda\varphi - \varphi, \ \Longrightarrow \ (\lambda I - A)R_\lambda\varphi = \varphi, \quad \forall \varphi \in X,
\]
that is, $(\lambda I - A)R_\lambda = I$, that is, b). To finish, we have to prove c). Let $\varphi \in D(A)$. Notice that
\[
R_\lambda(\lambda I - A)\varphi = \lambda\int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr - \int_0^{+\infty} e^{-\lambda r} S(r)A\varphi\, dr.
\]
Because $\varphi \in D(A)$, by Lemma 5.4 we have $S(r)A\varphi = (S(r)\varphi)'$. Therefore, integrating by parts,
\[
\int_0^{+\infty} e^{-\lambda r} S(r)A\varphi\, dr = \int_0^{+\infty} e^{-\lambda r} (S(r)\varphi)'\, dr = \big[e^{-\lambda r} S(r)\varphi\big]_{r=0}^{r=+\infty} + \lambda\int_0^{+\infty} e^{-\lambda r} S(r)\varphi\, dr = -\varphi + \lambda R_\lambda\varphi,
\]
that is the conclusion. $\square$
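Formula (5.2) can be checked numerically in the matrix case: the Laplace transform of $e^{tQ}$, computed by brute-force quadrature, reproduces $(\lambda I - Q)^{-1}$, and the bound (5.3) holds in the sup-operator norm (max absolute row sum). The $2\times2$ generator below is an arbitrary illustrative choice.

```python
import numpy as np

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])     # conservative generator
w, V = np.linalg.eig(Q)
Vinv = np.linalg.inv(V)

lam, dt, T = 1.5, 2e-3, 20.0
R_quad = np.zeros((2, 2))
for t in np.arange(dt / 2.0, T, dt):         # midpoint rule for int_0^oo e^{-lam t} e^{tQ} dt
    R_quad += np.exp(-lam * t) * (V @ np.diag(np.exp(t * w)) @ Vinv).real * dt

R_exact = np.linalg.inv(lam * np.eye(2) - Q)
assert np.allclose(R_quad, R_exact, atol=1e-3)
# the estimate (5.3): ||R_lam|| <= 1/lam in the sup-operator norm
assert np.abs(R_exact).sum(axis=1).max() <= 1.0 / lam + 1e-12
```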

6 Hille-Yosida theorem

In the previous section we have seen that:

Corollary 6.1. The infinitesimal generator $A : D(A) \subset X \to X$ of a contraction semigroup fulfills the following properties:

i) $A$ is densely defined and closed;

ii) $\rho(A) \supset \{\lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0\}$ and $\|R_\lambda\| = \|(\lambda I - A)^{-1}\| \le \frac{1}{\mathrm{Re}\,\lambda}$ for any $\lambda \in \mathbb{C}$ such that $\mathrm{Re}\,\lambda > 0$.

Actually we may notice that closedness is redundant, because of the following general fact:

Proposition 6.2. Let $A : D(A) \subset X \to X$ be a linear operator such that $\rho(A) \ne \emptyset$. Then $A$ is closed.

Proof. Let $(\varphi_n) \subset D(A)$ be such that $\varphi_n \to \varphi$, $A\varphi_n \to \psi$. We need to prove $\varphi \in D(A)$ and $A\varphi = \psi$. Now let $\lambda \in \rho(A)$ and consider
\[
\lambda\varphi_n - A\varphi_n = (\lambda I - A)\varphi_n, \quad \varphi_n = (\lambda I - A)^{-1}(\lambda\varphi_n - A\varphi_n) \longrightarrow (\lambda I - A)^{-1}(\lambda\varphi - \psi),
\]
because $(\lambda I - A)^{-1} \in L(X)$. In particular $\varphi = (\lambda I - A)^{-1}(\lambda\varphi - \psi) \in D(A)$, and $\lambda\varphi - A\varphi = \lambda\varphi - \psi$, i.e. $A\varphi = \psi$. $\square$

In this section we will see that these conditions are sufficient to construct a unique contraction semigroup whose generator is exactly $A$. The idea is to construct $e^{tA}$ by approximations,
\[
e^{tA} = \lim_{\lambda\to+\infty} e^{tA_\lambda},
\]
where the $A_\lambda \in L(X)$ are suitable approximations of $A$. Such approximations, which will be called Yosida regularizations, are extraordinarily intuitive because they are given by
\[
A_\lambda := \frac{A}{I - \frac{1}{\lambda}A} \approx A.
\]
Let us introduce these operators formally:


Definition 6.3. Let $A : D(A) \subset X \to X$ be a densely defined linear operator such that
\[
\rho(A) \supset \{\lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0\}, \quad \|R_\lambda\| = \|(\lambda I - A)^{-1}\| \le \frac{1}{\mathrm{Re}\,\lambda}, \quad \forall \lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0.
\]
We call the Yosida regularization of $A$ the family $(A_\lambda)_{\lambda>0} \subset L(X)$ defined as
\[
A_\lambda := \lambda A R_\lambda = \lambda A(\lambda I - A)^{-1}, \quad \lambda > 0.
\]

Remark 6.4. At first sight it may not be evident that $A_\lambda \in L(X)$: indeed,
\[
(\lambda I - A)R_\lambda = I_X \ \Longrightarrow \ A_\lambda = \lambda AR_\lambda = \lambda(\lambda R_\lambda - I_X) \in L(X).
\]

Morally $A_\lambda \to A$ as $\lambda \to +\infty$. Indeed we have the

Lemma 6.5. Let $A : D(A) \subset X \to X$ be an operator fulfilling the hypotheses of Definition 6.3 on a Banach space $X$. Then
\[
\lim_{\lambda\to+\infty} A_\lambda\varphi = A\varphi, \quad \forall \varphi \in D(A).
\]

Proof. Because, as in the remark, $AR_\lambda = \lambda R_\lambda - I_X$, we can write $A_\lambda = \lambda AR_\lambda = \lambda^2 R_\lambda - \lambda I_X$. Therefore, if $\varphi \in D(A)$ we have $A_\lambda\varphi = \lambda(\lambda R_\lambda\varphi - \varphi)$. But $R_\lambda(\lambda I - A)\varphi = \varphi$ on $D(A)$, so
\[
\lambda R_\lambda\varphi - \varphi = R_\lambda A\varphi, \ \Longrightarrow \ A_\lambda\varphi = \lambda R_\lambda A\varphi.
\]
Set $\psi := A\varphi$. If we prove that
\[
\lim_{\lambda\to+\infty} \lambda R_\lambda\psi = \psi, \quad \forall \psi \in X, \tag{6.1}
\]
we are done. Assume first that $\psi \in D(A)$: by the same identity as before,
\[
\lambda R_\lambda\psi - \psi = R_\lambda A\psi, \ \Longrightarrow \ \|\lambda R_\lambda\psi - \psi\| = \|R_\lambda A\psi\| \le \frac{1}{\lambda}\|A\psi\| \longrightarrow 0, \quad \lambda\to+\infty.
\]
In the general case $\psi \in X$, by density of $D(A)$ in $X$ there exists $\psi_\varepsilon \in D(A)$ such that $\|\psi - \psi_\varepsilon\| \le \varepsilon$. Therefore,
\[
\|\lambda R_\lambda\psi - \psi\| \le \|\lambda R_\lambda(\psi - \psi_\varepsilon)\| + \|\lambda R_\lambda\psi_\varepsilon - \psi_\varepsilon\| + \|\psi_\varepsilon - \psi\| \le 2\varepsilon + \|\lambda R_\lambda\psi_\varepsilon - \psi_\varepsilon\|,
\]
and by this the conclusion follows easily. $\square$
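For a matrix $A$ whose spectrum lies in $]-\infty,0[$, everything in the Lemma can be computed directly: since $A_\lambda\varphi - A\varphi = R_\lambda A^2\varphi$, the error decays like $1/\lambda$. A small sketch with an arbitrary symmetric example:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -1.0]])     # symmetric, eigenvalues < 0
I = np.eye(2)

def yosida(lam):
    R = np.linalg.inv(lam * I - A)           # resolvent R_lambda
    return lam * A @ R                       # A_lambda = lambda A R_lambda

phi = np.array([1.0, 2.0])
errs = [np.linalg.norm(yosida(lam) @ phi - A @ phi)
        for lam in (1.0, 10.0, 100.0, 1000.0)]
# A_lambda phi - A phi = R_lambda A^2 phi, so the error decays like 1/lambda
assert all(b < a for a, b in zip(errs, errs[1:]))
assert errs[-1] < 1e-2
```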

We are now ready for the main result:

Theorem 6.6 (Hille-Yosida). Let $X$ be a Banach space, $A : D(A) \subset X \to X$ fulfilling the hypotheses of Definition 6.3, that is:

i) $A$ is densely defined;

ii) $\rho(A) \supset \{\lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0\}$ and
\[
\|R_\lambda\| = \|(\lambda I - A)^{-1}\| \le \frac{1}{\mathrm{Re}\,\lambda}, \quad \forall \lambda \in \mathbb{C} : \mathrm{Re}\,\lambda > 0. \tag{6.2}
\]


Then there exists a unique contraction semigroup $(S(t))_{t\ge0}$ whose generator is $A$. This semigroup will be denoted by $(e^{tA})_{t\ge0}$.

Proof. As announced, we want to define $S(t)\varphi := \lim_{\lambda\to+\infty} e^{tA_\lambda}\varphi$.

First step: construction of the semigroup. Because $A_\lambda \in L(X)$, the uniformly continuous group $e^{tA_\lambda}$ is well defined. By the fundamental theorem of calculus,
\[
e^{tA_\lambda} - e^{tA_\mu} = \int_0^1 \frac{d}{dr}\Big[e^{trA_\lambda} e^{t(1-r)A_\mu}\Big]\, dr = \int_0^1 \Big[e^{trA_\lambda}(tA_\lambda)e^{t(1-r)A_\mu} - e^{trA_\lambda}e^{t(1-r)A_\mu}(tA_\mu)\Big]\, dr.
\]
Clearly, $A_\lambda A_\mu = A_\mu A_\lambda$. Therefore
\[
e^{tA_\lambda}\varphi - e^{tA_\mu}\varphi = t\int_0^1 e^{trA_\lambda} e^{t(1-r)A_\mu}(A_\lambda - A_\mu)\varphi\, dr,
\]
hence
\[
\|e^{tA_\lambda}\varphi - e^{tA_\mu}\varphi\| \le t\int_0^1 \|e^{trA_\lambda}\|\,\|e^{t(1-r)A_\mu}\|\, dr\, \|(A_\lambda - A_\mu)\varphi\|.
\]
Let us estimate $\|e^{uA_\lambda}\|$. Recall first that $A_\lambda = \lambda^2 R_\lambda - \lambda I_X$. Therefore (1)
\[
e^{uA_\lambda} = e^{u(\lambda^2 R_\lambda - \lambda I_X)} = e^{-u\lambda} e^{u\lambda^2 R_\lambda},
\]
so
\[
\|e^{uA_\lambda}\| = e^{-u\lambda}\|e^{u\lambda^2 R_\lambda}\| \le e^{-u\lambda} e^{u\lambda^2\|R_\lambda\|} \le e^{-u\lambda} e^{u\lambda} = 1. \tag{6.3}
\]
We deduce from this that
\[
\|e^{tA_\lambda}\varphi - e^{tA_\mu}\varphi\| \le t\,\|(A_\lambda - A_\mu)\varphi\|. \tag{6.4}
\]
Now, if $\varphi \in D(A)$, by the previous Lemma $A_\lambda\varphi \to A\varphi$. With this in hand it is easy to deduce that the family of functions $(e^{\cdot A_\lambda}\varphi)_{\lambda>0}$ is uniformly Cauchy on every interval $[0,T]$, for all $T > 0$. Call $S(\cdot)\varphi$ the uniform limit function, defined on $[0,+\infty[$. Passing to the limit in (6.4) we have
\[
\|e^{tA_\lambda}\varphi - S(t)\varphi\| \le t\,\|A_\lambda\varphi - A\varphi\|, \quad \forall \varphi \in D(A). \tag{6.5}
\]
In this way, for all $t \ge 0$, we have defined $S(t)\varphi := \lim_{\lambda\to+\infty} e^{tA_\lambda}\varphi$ for all $\varphi \in D(A)$. Clearly:

i) $S(t) : D(A) \subset X \to X$ is linear;

ii) $S(0)\varphi = \varphi$, $\forall \varphi \in D(A)$;

iii) $S(t+r)\varphi = S(t)S(r)\varphi$, $\forall \varphi \in D(A)$;

iv) $t \mapsto S(t)\varphi \in C([0,+\infty[; X)$, $\forall \varphi \in D(A)$.

Moreover $\|S(t)\varphi\| = \lim_{\lambda\to+\infty}\|e^{tA_\lambda}\varphi\| \le \|\varphi\|$.

$X$ being complete and $D(A)$ dense, it is easy to conclude that $S(t)$ extends to all of $X$ with $\|S(t)\| \le 1$. Of course, ii) and iii) hold true on all of $X$. It remains to prove that iv) also extends to all of $X$. Fix $\varphi \in X$ and let $\varphi_\varepsilon \in D(A)$ be such that $\|\varphi - \varphi_\varepsilon\| \le \varepsilon$. Then, if $h > 0$,
\[
\|S(t+h)\varphi - S(t)\varphi\| \le \|S(t+h)(\varphi - \varphi_\varepsilon)\| + \|S(t+h)\varphi_\varepsilon - S(t)\varphi_\varepsilon\| + \|S(t)\varphi_\varepsilon - S(t)\varphi\| \le 2\varepsilon + \|S(t+h)\varphi_\varepsilon - S(t)\varphi_\varepsilon\|.
\]

(1) Here we are using the property $e^{A+B} = e^A e^B$. Of course in general this is false, but if $A$ and $B$ commute, as in the present case, it is true.


and this means exactly that $D(A) \subset D(B)$ and $B\varphi = A\varphi$ for $\varphi \in D(A)$.

The reverse inclusion is much softer. Indeed: because $B$ is the generator of a contraction semigroup, it fulfills conditions i) and ii) of Theorem 5.8. In particular, $1 \in \rho(B)$, that is, $(I-B)^{-1} : X \to D(B)$ is bounded and $D(B) = (I-B)^{-1}X$. On the other hand, we have seen that $D(A) \subset D(B)$ and $B|_{D(A)} = A$. In particular, $(I-B)D(A) = (I-A)D(A)$. By our assumption, $1 \in \rho(A)$, so again $D(A) = (I-A)^{-1}X$, that is, $X = (I-A)D(A)$. Hence $(I-B)D(A) = X$, and this is possible iff $D(B) \subset D(A)$. By this the conclusion follows easily. $\square$
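The Hille-Yosida construction can be watched numerically in the matrix case: the semigroups $e^{tA_\lambda}$ built from the Yosida regularizations $A_\lambda = \lambda A(\lambda I - A)^{-1}$ converge to $e^{tA}$ as $\lambda \to +\infty$. The symmetric matrix below is an arbitrary illustrative choice.

```python
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -1.0]])     # symmetric, negative definite
I = np.eye(2)

def expm(M):
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

t = 1.0
exact = expm(t * A)

def approx(lam):
    A_lam = lam * A @ np.linalg.inv(lam * I - A)   # Yosida regularization
    return expm(t * A_lam)

errs = [np.linalg.norm(approx(lam) - exact) for lam in (1.0, 10.0, 100.0, 1000.0)]
assert all(b < a for a, b in zip(errs, errs[1:]))  # convergence in lambda
assert errs[-1] < 1e-2
```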

    7 Generators of Feller semigroups

What we have seen in the previous two sections are general results that involve the first three properties in Definition 4.3 of a Feller semigroup. Now, clearly, the question is: which other properties do we need in order that, in the specific case of the Banach space $X = C_0(E)$, a linear operator $A$ be the generator of a Feller semigroup? Knowing the general connection between a semigroup and its generator, we have immediately the

Proposition 7.1. Let $(e^{tA})_{t\ge0}$ be a Feller semigroup on $C_0(E)$, $(E,d)$ a locally compact metric space. Then the positive maximum principle holds:

if $\varphi \in D(A)$ has a maximum at $x_0$ with $\varphi(x_0) \ge 0$, then $A\varphi(x_0) \le 0$.

Proof. Let $x_0$ be a positive maximum point for $\varphi$. Notice first that by definition
\[
A\varphi = \lim_{h\to0+} \frac{S(h)\varphi - \varphi}{h}.
\]
The limit is intended in $C_0(E)$, hence in the uniform convergence. This means, in particular, that
\[
A\varphi(x_0) = \lim_{h\to0+} \frac{S(h)\varphi(x_0) - \varphi(x_0)}{h}.
\]
Now, setting $\varphi^+ := \max\{\varphi, 0\} \in C_0(E)$, we have
\[
S(h)\varphi \le S(h)\varphi^+ \le \|S(h)\varphi^+\|_\infty \le \|\varphi^+\|_\infty = \varphi(x_0).
\]
Therefore, if $h > 0$,
\[
\frac{S(h)\varphi(x_0) - \varphi(x_0)}{h} \le \frac{\varphi(x_0) - \varphi(x_0)}{h} = 0, \ \Longrightarrow \ A\varphi(x_0) \le 0. \ \square
\]
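For the prototypical generator $A\varphi = \frac{1}{2}\varphi''$ the positive maximum principle is visible already at the discrete level: the centered second difference of any grid function at an interior maximum is automatically $\le 0$. A small sketch with an arbitrary smooth observable:

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)
h = xs[1] - xs[0]
phi = np.exp(-xs ** 2) * np.cos(xs)          # arbitrary observable, max at x = 0

i0 = int(np.argmax(phi))                     # interior positive maximum
assert 0 < i0 < len(xs) - 1 and phi[i0] >= 0.0
# discretized (A phi)(x0) with A = (1/2) d^2/dx^2
Aphi_x0 = (phi[i0 - 1] - 2.0 * phi[i0] + phi[i0 + 1]) / (2.0 * h * h)
assert Aphi_x0 <= 0.0
```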

The maximum principle basically yields the estimate (6.2). Actually:

Proposition 7.2. Let $A : D(A) \subset C_0(E) \to C_0(E)$. If $A$ fulfills the positive maximum principle then it is dissipative, that is,
\[
\|\lambda\varphi - A\varphi\|_\infty \ge \lambda\|\varphi\|_\infty, \quad \forall \varphi \in D(A), \ \forall \lambda > 0. \tag{7.1}
\]

Proof. Because $\varphi \in C_0(E)$, by the Weierstrass theorem there exists $x_0 \in E$ such that
\[
|\varphi(x_0)| = \max_{x\in E}|\varphi(x)| = \|\varphi\|_\infty.
\]


We may assume $\varphi(x_0) \ge 0$ (otherwise replace $\varphi$ with $-\varphi$). Therefore
\[
\|\lambda\varphi - A\varphi\|_\infty = \sup_{x\in E}|\lambda\varphi(x) - A\varphi(x)| \ge |\lambda\varphi(x_0) - A\varphi(x_0)|.
\]
By the positive maximum principle, $A\varphi(x_0) \le 0$, therefore $\lambda\varphi(x_0) - A\varphi(x_0) \ge 0$, hence
\[
\|\lambda\varphi - A\varphi\|_\infty \ge |\lambda\varphi(x_0) - A\varphi(x_0)| = \lambda\varphi(x_0) - A\varphi(x_0) \ge \lambda\varphi(x_0) = \lambda\|\varphi\|_\infty. \ \square
\]

Notice in particular that:

Lemma 7.3. Let $A : D(A) \subset X \to X$, $X$ a normed space. If $A$ is dissipative then $\lambda I - A$ is injective for $\lambda > 0$. If it is also surjective, then $\lambda \in \rho(A)$ and $\|R_\lambda\| \le \frac{1}{\lambda}$ for any $\lambda > 0$.

Proof. Immediate. $\square$

Dissipativity gives an immediate simplification in checking ii) of the Hille-Yosida theorem.

Theorem 7.4 (Phillips). Let $X$ be a Banach space, $A : D(A) \subset X \to X$ a densely defined linear operator. Suppose that

i) $A$ is dissipative;

ii) $R(\lambda_0 I - A) = X$ for some $\lambda_0 > 0$ (that is, $\lambda_0 I - A$ is surjective).

Then $A$ generates a strongly continuous semigroup of contractions. (2)

Proof. By the Hille-Yosida Theorem, we have to prove that
\[
\rho(A) \supset \ ]0,+\infty[, \quad \text{and} \quad \|R_\lambda\| \le \frac{1}{\lambda}, \ \forall \lambda > 0.
\]
By Lemma 7.3 it is clear that $\lambda_0 \in \rho(A)$. Therefore the unique thing to prove is that $\rho(A) \supset \ ]0,+\infty[$, that is: every $\lambda > 0$ belongs to the resolvent set. The idea is to prove that $\rho(A)\ \cap\ ]0,+\infty[$ (which is non-empty by what we have just seen) is open and closed in $]0,+\infty[$: by connectedness it is therefore equal to all of $]0,+\infty[$.

First: $\rho(A)$ is open, therefore $\rho(A)\ \cap\ ]0,+\infty[$ is open in $]0,+\infty[$. This is a general fact, so we state it separately:

Lemma 7.5. The resolvent set of any linear (possibly unbounded) operator is open in $\mathbb{C}$ and $R_{(\cdot)} \in C(\rho(A); L(X))$. In particular: if $\mu \in \rho(A)$ then $B\big(\mu, \frac{1}{2\|R_\mu\|}\big) \subset \rho(A)$.

Proof. The argument is based on the following formal identity: for fixed $\mu \in \rho(A)$ and $\lambda \in \mathbb{C}$,
\[
R_\lambda = \frac{1}{\lambda - A} = \frac{1}{(\lambda - \mu) + (\mu - A)} = \frac{1}{\mu - A}\cdot\frac{1}{I + \frac{\lambda-\mu}{\mu - A}} = R_\mu\big(I + (\lambda-\mu)R_\mu\big)^{-1}. \tag{7.2}
\]
It is easy to check that if the right-hand side makes sense as a bounded linear operator, then it is exactly $R_\lambda$. To give a meaning to the right-hand side, we have to justify that $B = I + (\lambda-\mu)R_\mu$ is invertible with continuous inverse. To this aim recall that
\[
\text{if } \|I - B\| < 1, \ \Longrightarrow \ B^{-1} = \sum_{n=0}^{\infty}(I - B)^n \in L(X).
\]

(2) Actually, if the space $X$ is reflexive, the Phillips theorem is an iff.


In our case
\[
\|I - B\| = \|(\lambda-\mu)R_\mu\| = |\lambda-\mu|\,\|R_\mu\| < 1 \quad \Longleftrightarrow \quad |\lambda-\mu| < \frac{1}{\|R_\mu\|},
\]
which proves the Lemma. $\square$

Second: $\rho(A)\ \cap\ ]0,+\infty[$ is closed in $]0,+\infty[$. Let $\lambda_n \in \rho(A)\ \cap\ ]0,+\infty[$ with $\lambda_n \to \lambda > 0$; we have to show that, for every $\psi \in X$, the equation $\lambda\varphi - A\varphi = \psi$ has a solution $\varphi \in D(A)$. Take $\varphi_n := R_{\lambda_n}\psi$, that is, $\lambda_n\varphi_n - A\varphi_n = \psi$. Suppose we have proved that $(\varphi_n)$ is bounded, that is, $\|\varphi_n\| \le K$ for all $n$: then we are done because, in this case (by the resolvent identity),
\[
\|\varphi_n - \varphi_m\| \le K'\,|\lambda_n - \lambda_m|, \quad \forall n, m \ge N_0,
\]
i.e. $(\varphi_n)$ would be a Cauchy sequence. Boundedness of $(\varphi_n)$ follows by dissipativity: indeed,
\[
\lambda_n\|\varphi_n\| \le \|\lambda_n\varphi_n - A\varphi_n\| = \|\psi\|, \ \Longrightarrow \ \|\varphi_n\| \le \frac{\|\psi\|}{\lambda_n}, \quad \forall n \in \mathbb{N}.
\]
Summarizing: $(\varphi_n)$ is convergent, say $\varphi_n \to \varphi$. Then, because $A\varphi_n = \lambda_n\varphi_n - \psi$, also $(A\varphi_n)$ is convergent. Now, recall that if $\rho(A) \ne \emptyset$ then $A$ is closed. This being the case in our context, we have that $\varphi \in D(A)$, $A\varphi_n \to A\varphi$, and passing to the limit in the equation we get $\lambda\varphi - A\varphi = \psi$, which is the conclusion. With this the proof is finished. $\square$

Definition 7.6. Let $A : D(A) \subset C_0(E) \to C_0(E)$ be a linear operator, $(E,d)$ a locally compact metric space. We say that $A$ is a Markov generator if


i) $D(A)$ is dense in $C_0(E)$.

ii) $A$ fulfills the positive maximum principle.

iii) $R(\lambda_0 I - A) = C_0(E)$ for some $\lambda_0 > 0$.

In other words, combining the Hille-Yosida theorem with the Phillips theorem, we have the

Corollary 7.7. A linear operator $A$ generates a Markov semigroup on $C_0(E)$ iff $A$ is a Markov generator.

    8 Examples

To check that an operator $A$ is a Markov generator is not, in general, an easy business. In particular, it is the third condition that is usually a little difficult to check, and often it is useful to have a sort of mild version of it. The problem is that we have, in particular, to solve the equation
\[
\lambda\varphi - A\varphi = \psi,
\]
for a given $\psi \in C_0(E)$. This is not generally easy. Let's see a first example.

Example 8.1. The operator
\[
A : D(A) \subset C_0([0,1]) \to C_0([0,1]), \quad A\varphi := \varphi'', \quad D(A) := \big\{\varphi \in C^2([0,1]) \cap C_0([0,1]) : \varphi'' \in C_0([0,1])\big\},
\]
is a Markov generator.

Sol. Clearly the first two properties hold. Let's see the third. Take $\psi \in C_0([0,1])$ and consider the equation
\[
\lambda\varphi(x) - \varphi''(x) = \psi(x), \quad x \in [0,1].
\]
As is well known from ODE theory, the general solution of the previous equation is
\[
\varphi(x) = c_1 w_1(x) + c_2 w_2(x) + U(x),
\]
where $(w_1, w_2)$ is a fundamental system for the homogeneous equation $\varphi'' - \lambda\varphi = 0$. If $\lambda > 0$ (as in our case) we have $w_{1,2}(x) = e^{\pm\sqrt{\lambda}x}$. By the variation of constants formula,
\[
U(x) = -\Big(\frac{1}{2\sqrt{\lambda}}\int_0^x \psi(y)e^{-\sqrt{\lambda}y}\, dy\Big)e^{\sqrt{\lambda}x} + \Big(\frac{1}{2\sqrt{\lambda}}\int_0^x \psi(y)e^{\sqrt{\lambda}y}\, dy\Big)e^{-\sqrt{\lambda}x}
= -\frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\,\frac{e^{\sqrt{\lambda}(x-y)} - e^{-\sqrt{\lambda}(x-y)}}{2}\, dy = -\frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\sinh\big(\sqrt{\lambda}(x-y)\big)\, dy.
\]
Therefore
\[
\varphi(x) = c_1 e^{\sqrt{\lambda}x} + c_2 e^{-\sqrt{\lambda}x} - \frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\sinh\big(\sqrt{\lambda}(x-y)\big)\, dy.
\]
Here there is a first problem we meet: if $\psi \in C([0,1])$ it is not evident that $\varphi \in C^2([0,1])$. Indeed, it is easy to check that $\varphi \in C^1([0,1])$ and
\[
\varphi'(x) = \sqrt{\lambda}\,c_1 e^{\sqrt{\lambda}x} - \sqrt{\lambda}\,c_2 e^{-\sqrt{\lambda}x} - \frac{1}{\sqrt{\lambda}}\Big[\psi(x)\sinh(0) + \sqrt{\lambda}\int_0^x \psi(y)\cosh\big(\sqrt{\lambda}(x-y)\big)\, dy\Big]
= \sqrt{\lambda}\,c_1 e^{\sqrt{\lambda}x} - \sqrt{\lambda}\,c_2 e^{-\sqrt{\lambda}x} - \int_0^x \psi(y)\cosh\big(\sqrt{\lambda}(x-y)\big)\, dy.
\]

Here there's a first problem we meet: if ψ ∈ C([0,1]) it is not evident that φ ∈ C²([0,1]). Indeed, it is easy to check that φ ∈ C¹([0,1]) and

φ'(x) = c₁√λ e^{√λ x} − c₂√λ e^{−√λ x} − (1/√λ) (ψ(x) sinh(0) + √λ ∫₀ˣ ψ(y) cosh(√λ(x−y)) dy)

      = c₁√λ e^{√λ x} − c₂√λ e^{−√λ x} − ∫₀ˣ ψ(y) cosh(√λ(x−y)) dy.


Repeating the procedure we see that φ ∈ C²([0,1]). Let's now impose φ, φ'' ∈ C₀([0,1]), that is

φ(0) = φ(1) = 0,  φ''(0) = φ''(1) = 0.

Notice that by the equation we have

φ''(x) = λφ(x) − ψ(x),

therefore once we know φ ∈ C₀([0,1]) we get φ'' ∈ C₀([0,1]) as well, because ψ ∈ C₀([0,1]). So the previous conditions reduce to the first pair, φ(0) = φ(1) = 0, that is

c₁ + c₂ = 0,

c₁e^{√λ} + c₂e^{−√λ} − (1/√λ) ∫₀¹ ψ(y) sinh(√λ(1−y)) dy = 0.

This is a 2×2 system in (c₁, c₂) with determinant e^{−√λ} − e^{√λ} = −2 sinh √λ ≠ 0 for λ > 0, so it admits a unique solution (c₁, c₂). This means that there exists a unique φ ∈ D(A) solving λφ − Aφ = ψ.
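The construction can be checked numerically. The sketch below is our own illustration, not part of the notes: the choices ψ(x) = sin(πx) and λ = 2, as well as the grids, are arbitrary. For this particular ψ the resolvent is known in closed form, φ(x) = sin(πx)/(λ + π²), so we can rebuild φ from the particular solution U and the 2×2 system for (c₁, c₂) and compare.

```python
import numpy as np

# Rebuild phi solving lam*phi - phi'' = psi on [0,1], phi(0) = phi(1) = 0.
# Illustrative choices: psi(x) = sin(pi*x), lam = 2; for this psi the exact
# solution is phi(x) = sin(pi*x)/(lam + pi^2).
lam = 2.0
s = np.sqrt(lam)
psi = lambda t: np.sin(np.pi * t)

def trap(f, xs):                     # plain trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * (xs[1:] - xs[:-1])) / 2)

def U(t):                            # -(1/sqrt(lam)) int_0^t psi(y) sinh(sqrt(lam)(t-y)) dy
    y = np.linspace(0.0, t, 2001)
    return -trap(psi(y) * np.sinh(s * (t - y)), y) / s

# boundary conditions: c1 + c2 = 0 and c1*e^s + c2*e^{-s} + U(1) = 0
M = np.array([[1.0, 1.0], [np.exp(s), np.exp(-s)]])
c1, c2 = np.linalg.solve(M, np.array([0.0, -U(1.0)]))

x = np.linspace(0.0, 1.0, 401)
phi = c1 * np.exp(s * x) + c2 * np.exp(-s * x) + np.array([U(t) for t in x])
err = np.max(np.abs(phi - psi(x) / (lam + np.pi**2)))
print("max deviation from exact resolvent:", err)
```

The deviation is at the level of the quadrature error, confirming that imposing the boundary conditions selects the unique φ ∈ D(A).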

As you see, most of the difficulties are due to checking that λI − A is surjective, because this involves solving a more or less complicated equation. Another problem is that in general it is not easy to describe a generator by giving its exact domain. We will meet this problem in the next examples.

Moreover, when we define processes starting from their infinitesimal generators, we need a clear understanding of the meaning of the generator for the underlying process. To this aim notice that we could write

S(h)φ − φ = hAφ + o(h),  that is,  E_x[φ(ξ(h)) − φ(ξ(0))] = hAφ(x) + o(h),

or, more generally, because by the Markov property E_x[φ(ξ(t+h)) | F_t] = (S(h)φ)(ξ(t)), we could write

E_x[φ(ξ(t+h)) − φ(ξ(t)) | F_t] = hAφ(ξ(t)) + o(h). (8.1)

Therefore, we can think of Aφ as the rate of infinitesimal variation of the observable φ along the infinitesimal displacement of the state from ξ(t) to ξ(t+h).
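This infinitesimal description can be tested on a concrete process. The Monte Carlo sketch below is our own illustration: it checks (8.1) at t = 0 for one-dimensional Brownian motion, whose generator is ½ d²/dx² (see the next subsection); the observable sin x, the step h, the sample size and the seed are arbitrary choices.

```python
import numpy as np

# Check (8.1) at t = 0 for 1-d Brownian motion, where A phi = phi''/2.
# phi(x) = sin(x), h, n and the seed are illustrative choices.
phi = np.sin
x, h, n = 1.0, 0.01, 10**6

rng = np.random.default_rng(0)
xi_h = x + np.sqrt(h) * rng.standard_normal(n)   # xi(h) for BM started at x

rate = (phi(xi_h).mean() - phi(x)) / h           # (E_x[phi(xi(h))] - phi(x)) / h
exact = -0.5 * np.sin(x)                         # A phi(x) = phi''(x)/2
print("empirical rate:", rate, " A phi(x):", exact)
```

Up to Monte Carlo noise and the o(h) term, the empirical rate matches ½φ''(x).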

    8.2 Brownian motion

As is well known, the case of BM on the state space E := ℝ^d corresponds to the choice

A := (1/2)Δ.

Actually, there are some technical aspects that complicate the discussion, mainly due to the problem of solving the equation

λφ − Aφ = ψ,  that is,  λφ − (1/2)Δφ = ψ.

We discuss this equation separately for d = 1 and d ≥ 2.


8.2.1 Case d = 1

Let's start by solving the equation

λφ − (1/2)φ'' = ψ,  ⟺  2λφ − φ'' = 2ψ,  ψ ∈ C₀(ℝ).

Because the equation is basically the same as in Example 8.1, the general solution is

φ(x) = c₁e^{√(2λ) x} + c₂e^{−√(2λ) x} − (2/√(2λ)) ∫₀ˣ ψ(y) sinh(√(2λ)(x−y)) dy.

By the same argument as in Example 8.1 we deduce that φ ∈ C²(ℝ). Let's see when φ ∈ C₀(ℝ), that is, when φ(±∞) = 0. To this aim rewrite the solution in the form

φ(x) = (c₁ − (1/√(2λ)) ∫₀ˣ ψ(y)e^{−√(2λ)y} dy) e^{√(2λ) x} + (c₂ + (1/√(2λ)) ∫₀ˣ ψ(y)e^{√(2λ)y} dy) e^{−√(2λ) x}.

Notice that there is a unique choice of c₁ compatible with φ(+∞) = 0, namely

c₁ − (1/√(2λ)) ∫₀^∞ ψ(y)e^{−√(2λ)y} dy = 0. (8.2)

Indeed: first notice that

c₂e^{−√(2λ)x} → 0,  x → +∞,

and

e^{−√(2λ)x} ∫₀ˣ ψ(y)e^{√(2λ)y} dy = (∫₀ˣ ψ(y)e^{√(2λ)y} dy) / e^{√(2λ)x},

whose limit as x → +∞, by de l'Hôpital's rule, equals the limit of

ψ(x)e^{√(2λ)x} / (√(2λ) e^{√(2λ)x}) = ψ(x)/√(2λ) → 0,  x → +∞,

because ψ ∈ C₀(ℝ). Moreover, the integral in (8.2) is convergent, because |ψ(y)e^{−√(2λ)y}| ≤ ‖ψ‖_∞ e^{−√(2λ)y}. Therefore, if the left-hand side of (8.2) were different from 0, by the previous considerations we would have

|φ(x)| ∼ |c₁ − (1/√(2λ)) ∫₀ˣ ψ(y)e^{−√(2λ)y} dy| · e^{√(2λ)x} → +∞.

So the unique possibility is that (8.2) holds true. In that case, applying again de l'Hôpital's rule, we get

(c₁ − (1/√(2λ)) ∫₀ˣ ψ(y)e^{−√(2λ)y} dy) e^{√(2λ)x} = (c₁ − (1/√(2λ)) ∫₀ˣ ψ(y)e^{−√(2λ)y} dy) / e^{−√(2λ)x},

whose limit as x → +∞ equals the limit of

(−(1/√(2λ)) ψ(x)e^{−√(2λ)x}) / (−√(2λ) e^{−√(2λ)x}) = ψ(x)/(2λ) → 0,  x → +∞,

again because ψ ∈ C₀(ℝ). The moral is: the unique possible choice of c₁ such that φ(+∞) = 0 is given by (8.2). Similarly, at −∞ we find

c₂ − (1/√(2λ)) ∫_{−∞}^0 ψ(y)e^{√(2λ)y} dy = 0. (8.3)

This means that there is a unique φ ∈ C²(ℝ) ∩ C₀(ℝ), with φ'' ∈ C₀(ℝ), such that λφ − Aφ = ψ. We can summarize the discussion with the following statement.

Theorem 8.3. The operator

A := (1/2) d²/dx²,  D(A) := {φ ∈ C²(ℝ) ∩ C₀(ℝ) : φ'' ∈ C₀(ℝ)},

is a Markov generator.
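The d = 1 construction can be verified numerically as well. The sketch below is our own illustration: we pick a target φ(x) = e^{−x²} (so that φ, φ'' ∈ C₀(ℝ)), form ψ := λφ − ½φ'', and check that the particular solution together with the constants c₁, c₂ from (8.2)–(8.3) reproduces φ. The value λ = 1/2, the truncation of ℝ to [−4, 4] and the grids are arbitrary choices.

```python
import numpy as np

# Rebuild phi from psi := lam*phi - phi''/2 via (8.2)-(8.3).
# Illustrative choices: phi(x) = exp(-x^2), lam = 1/2, domain cut to [-4, 4].
lam = 0.5
mu = np.sqrt(2 * lam)                       # mu = sqrt(2*lam) = 1 here

psi = lambda y: (lam + 1 - 2 * y**2) * np.exp(-y**2)   # lam*phi - phi''/2

def trap(f, xs):                            # plain trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * (xs[1:] - xs[:-1])) / 2)

L = 4.0
y = np.linspace(0.0, L, 4001)               # (8.2): c1 = (1/mu) int_0^oo psi(y) e^{-mu y} dy
c1 = trap(psi(y) * np.exp(-mu * y), y) / mu
y = np.linspace(-L, 0.0, 4001)              # (8.3): c2 = (1/mu) int_{-oo}^0 psi(y) e^{mu y} dy
c2 = trap(psi(y) * np.exp(mu * y), y) / mu

def U(t):                                   # -(2/mu) int_0^t psi(y) sinh(mu(t-y)) dy
    y = np.linspace(0.0, t, 2001)
    return -2 / mu * trap(psi(y) * np.sinh(mu * (t - y)), y)

x = np.linspace(-L, L, 801)
phi = c1 * np.exp(mu * x) + c2 * np.exp(-mu * x) + np.array([U(t) for t in x])
err = np.max(np.abs(phi - np.exp(-x**2)))
print("max |phi - exp(-x^2)| =", err)
```

The growing exponentials cancel exactly because of (8.2)–(8.3): any other choice of c₁, c₂ would make φ blow up at ±∞.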


8.3.1 Case d ≥ 2

This case is sensibly different from the previous one, because now the equation λφ − Aφ = ψ is a PDE:

λφ − (1/2)Δφ = ψ. (8.4)

To solve this equation we invoke the Fourier transform. To this aim recall the Schwartz space

S(ℝ^d) := {φ ∈ C^∞(ℝ^d) : sup_{x∈ℝ^d} (1 + |x|)ⁿ |∂^α φ(x)| < +∞, ∀n ∈ ℕ, ∀α ∈ ℕ^d},

where ∂^α stands for the usual multi-index notation for derivatives. It is well known that the Fourier transform is a bijection of S(ℝ^d) onto itself. Solving (8.4) in the Schwartz space is a standard application of the Fourier transform. Indeed: take ψ ∈ S(ℝ^d) and look for a solution φ ∈ S(ℝ^d). Applying the Fourier transform to both sides we have

λφ̂(ξ) + (1/2)·4π²|ξ|² φ̂(ξ) = ψ̂(ξ),  ⟺  (λ + 2π²|ξ|²) φ̂(ξ) = ψ̂(ξ),  ⟺  φ̂(ξ) = ψ̂(ξ) / (λ + 2π²|ξ|²).

Now it is easy to check that if ψ̂ ∈ S(ℝ^d) then ψ̂/(λ + 2π²|ξ|²) ∈ S(ℝ^d): therefore, inverting the Fourier transform, there exists a unique φ ∈ S(ℝ^d) such that the previous equation holds. So if we define

A := (1/2)Δ,  D(A) := S(ℝ^d),

we have R(λI − A) ⊇ S(ℝ^d) =: Y.
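The Fourier recipe above lends itself to a direct numerical experiment via the FFT. The sketch below is our own illustration: it replaces ℝ² by a large periodic box (an approximation, not part of the notes), takes the exact pair φ(x) = e^{−|x|²}, ψ := λφ − ½Δφ = (λ + 2 − 2|x|²)e^{−|x|²}, and recovers φ by dividing ψ̂ by λ + 2π²|ξ|² and inverting.

```python
import numpy as np

# FFT solve of (8.4) in d = 2: phi_hat = psi_hat / (lam + 2*pi^2*|xi|^2).
# Illustrative choices: lam = 1, box [-6, 6)^2, 256^2 grid, Gaussian test pair.
lam, L, n = 1.0, 6.0, 256
h = 2 * L / n
x = -L + h * np.arange(n)
X, Y = np.meshgrid(x, x, indexing="ij")
R2 = X**2 + Y**2

phi_exact = np.exp(-R2)
psi = (lam + 2 - 2 * R2) * np.exp(-R2)      # lam*phi - (1/2)*Delta(phi)

# frequencies in cycles per unit length, matching F[Delta phi] = -4 pi^2 |xi|^2 phi_hat
k = np.fft.fftfreq(n, d=h)
KX, KY = np.meshgrid(k, k, indexing="ij")

phi = np.real(np.fft.ifft2(np.fft.fft2(psi) / (lam + 2 * np.pi**2 * (KX**2 + KY**2))))
err = np.max(np.abs(phi - phi_exact))
print("max |phi - exp(-|x|^2)| =", err)
```

Since the Gaussian is effectively band-limited and compactly supported at this resolution, the discrete answer matches the continuous one to near machine precision.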

With respect to our space X := C₀(ℝ^d), we have that S(ℝ^d) is dense in X. So (1/2)Δ fulfills the first two conditions of a Markov generator, while the third holds only in the weaker form:

R(λI − A) is dense in C₀(ℝ^d),

for any λ > 0. Indeed, it seems hardly possible that starting with ψ ∈ C₀(ℝ^d) we always get φ ∈ S(ℝ^d). In other words: the domain of A seems too small to be the natural domain of A (and indeed A involves only second order derivatives, whereas S(ℝ^d) is made of very regular functions). On the other hand, it is not at all easy to solve the equation directly in C²(ℝ^d). So, how can we resolve this impasse? The point is that there is, in a suitable sense, a unique possible extension of A which is a Markov generator. Let's first treat this question in general.

We start by introducing some definitions in order to make clear what "extension" means here. It is not natural to talk about continuous extensions, because our operators won't be, in general, continuous. The right concept turns out to be that of closed extension.

Definition 8.4. Let A : D(A) ⊂ X → X be a linear operator. We say that A is closable if the closure of G(A) in X × X is the graph of some linear (clearly closed) operator. We denote this operator by Ā; in particular, G(Ā) is the closure of G(A).

Lemma 8.5. Let A : D(A) ⊂ X → X be a densely defined dissipative linear operator on a Banach space X. Then

i) A is closable;

ii) R(λI − Ā) coincides with the closure of R(λI − A), for every λ > 0. (8.5)


In particular we deduce the following.

Theorem 8.6. Let A : D(A) ⊂ X → X be a densely defined, dissipative linear operator such that

R(λ₀I − A) is dense in X for some λ₀ > 0.

Then A is closable and Ā generates a strongly continuous semigroup of contractions on X.

Proof. By the Lemma, Ā is well defined, densely defined and dissipative. By (8.5) and our assumption,

R(λ₀I − Ā) = X,

so the conclusion follows by the Phillips theorem.

Let's now particularize the discussion to the case of Markov processes. It is useful to introduce the following.

Definition 8.7. Let (E, d) be a locally compact metric space and A : D(A) ⊂ C₀(E) → C₀(E) be a linear operator. We say that A is a Markov pregenerator if

i) D(A) is dense in C₀(E).

ii) A fulfills the positive maximum principle.

iii) R(λ₀I − A) is dense in C₀(E) for some λ₀ > 0.

By the previous theorem it follows that if A is a Markov pregenerator, then A is closable and Ā generates a strongly continuous semigroup of contractions. Is it a Markov semigroup? We need to check that Ā fulfills the positive maximum principle:

Proposition 8.8. If A is a Markov pregenerator, then Ā is a Markov generator.

Proof. Exercise.

Therefore, applying this result to our case, we can say the following: A = (1/2)Δ defined on D(A) = S(ℝ^d) is closable and Ā generates a Markov semigroup. Of course Ā coincides with A on S(ℝ^d).

Now the further question is: is it possible that A is the restriction of some other operator B ≠ Ā generating a strongly continuous semigroup of contractions? In other words: does A identify the Markov semigroup uniquely? To this aim we introduce the following concept:

Definition 8.9. Let A : D(A) ⊂ X → X be a linear operator. A set D ⊂ D(A) is called a core for A if A coincides with the closure of its restriction to D, that is, if the closure of G(A|_D) equals G(A). Here

G(A|_D) := {(φ, Aφ) : φ ∈ D},

and of course the closure is intended in the space X × X.


It is clear that if A and B are two densely defined closed linear operators such that D is a core for both and they coincide on D, then A = B. So if D = S(ℝ^d) is a core for Ā, any other generator B having D as a core and coinciding with (1/2)Δ on D will generate the same semigroup as Ā. How, then, can we check whether D is a core? The following proposition is sometimes useful:

Proposition 8.10. Let A be the generator of a strongly continuous semigroup of contractions on a Banach space X. A set D ⊂ D(A) is a core for A iff

i) D is dense in X;

ii) R(λ₀I − A|_D) is dense in X for some λ₀ > 0.

Proof. (⟹) Assume D is a core. Because A is densely defined (being a generator), given ψ ∈ X there exists φ ∈ D(A) such that ‖ψ − φ‖ ≤ ε. But G(A) is the closure of G(A|_D): in particular there exists (η, Aη) ∈ G(A|_D) close to (φ, Aφ). This means ‖φ − η‖ ≤ ε, therefore ‖ψ − η‖ ≤ 2ε, with η ∈ D. This proves i). About ii): fix ψ ∈ X and consider φ ∈ D(A) such that λ₀φ − Aφ = ψ. By assumption there exists (φₙ) ⊂ D such that (φₙ, Aφₙ) → (φ, Aφ). Clearly λ₀φₙ − Aφₙ → ψ, that is, ψ belongs to the closure of R(λ₀I − A|_D).

(⟸) Take (φ, Aφ) ∈ G(A) and set ψ := λ₀φ − Aφ. By ii) there exists (φₙ) ⊂ D such that ψₙ := λ₀φₙ − Aφₙ → ψ. Being φₙ = R_{λ₀}ψₙ with R_{λ₀} bounded, we deduce φₙ → R_{λ₀}ψ = φ. Therefore Aφₙ = λ₀φₙ − ψₙ → λ₀φ − ψ = Aφ. But then (φₙ, Aφₙ) → (φ, Aφ), that is, G(A) is contained in the closure of G(A|_D). The other inclusion is evident.

In particular, in our context, D = S(ℝ^d) is a core for the generator of Brownian motion.

It is in general not easy to characterize the domain of the generator of a strongly continuous semigroup. In the present case it is possible to show that C₀²(ℝ^d) is contained in the domain, but it is not, however, the full domain. Actually, it is possible to show that the domain of the generator is the set of φ ∈ C₀(ℝ^d) such that Δφ ∈ C₀(ℝ^d) in the distributional sense. We don't enter into these details.