
Statistical Properties of Extended Systems with Random Jumps

Filiz Tumel

North American University, 3203 N Sam Houston Pkwy W Houston, TX 77038, USA.

Email: [email protected], Phone: 832-367-6066

Abstract. In this paper we prove statistical properties of dynamical systems on a lattice with randomly occurring jumps. We use Perturbation Theory to derive the drift rate and the averaged Central Limit Theorem when the jumps happen on a union of countably many intervals. We obtain an upper bound for the speed of convergence in the Central Limit Theorem and prove that the convergence is with tight maxima. We prove Large Deviation results. We also prove the quenched Central Limit Theorem. Finally, we extend the drift rate results and the averaged Central Limit Theorem to certain non-uniformly expanding systems.

1. Introduction

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic bulk properties of materials that can be observed in everyday life. The ability to make macroscopic predictions based on microscopic properties is the main goal of statistical mechanics. Particle systems, as they appear in statistical mechanics, have been an important model motivating much development in the field of Dynamical Systems. While these are deterministic systems at the microscopic level, the evolution law is too complicated. Instead one uses a stochastic approach to such systems.

More generally, ideas from statistical mechanics have been brought to the setting of dynamical systems by Y. Sinai [15], D. Ruelle [13] and R. Bowen [3] in the 1970s. The objects they introduced are called SRB measures and they play an important role in the ergodic theory of dissipative dynamical systems. On the other hand, corresponding problems have been studied in the context of the theory of random maps, which was much developed by Y. Kifer [7] and L. Arnold [1]. The main idea of their approach is that the evolution of many physical systems can be better described by compositions of different maps rather than by repeated application of exactly the same transformation. In some recent papers, like in [5, 8, 10], the



two approaches are combined in a hybrid model. The common goal is closing the gap between deterministic and stochastic dynamics. On the other hand, the main question is "What can we say about the global behavior of the particle on the lattice by looking at the local behavior?". In that sense these models are attempts to understand particle systems.

Our main interest is the hybrid model introduced by E. Kobre and L. S. Young in [8]. The model has an extended phase space given by a lattice structure and moving particles on that lattice. The local dynamics is given by iterating the same expanding circle map. They introduce random jumps from one node of the lattice to another. The jumps give the macroscopic behavior of the particle and depend on the state of the local dynamics. Their results include the drift rate of the system and the averaged Central Limit Theorem (CLT). They use the corresponding random dynamical system with piecewise uniformly expanding maps, together with the method of [9] and reverse martingale differences.

The first goal of our work is to generalize the local dynamics of the model given in [8]. First, we are able to prove the drift rate and the CLT results for a process that leads to a random dynamical system whose constituent maps are countably piecewise uniformly expanding interval maps, by using the method of [14]. Then we extend the results to a more general class of local maps. These maps may have non-expanding parts, but they induce uniformly expanding interval maps. The second goal of our work is to obtain more information about the statistical behavior of these models. We show that the speed of the convergence in the CLT is 1/√n. We prove that the convergence in the CLT has a property called "tight maxima", which is stronger than convergence to a normal distribution alone. We also give the Large Deviation estimate. Our work uses a different method than the one in [8]. We use Perturbation Theory, which is in general used for deterministic dynamical systems. We show that with some modifications the traditional methods for deterministic dynamical systems can be used in the stochastic set-up. Lastly, we prove the CLT in the quenched sense, which helps to understand more of the behavior of the process.

2. Setting and Results

We first introduce the setting by following the notation used in [8], and then state our results.

Let τ : I → I be a one dimensional mapping on I = [0, 1]. At each site i ∈ ℤ we place a copy of τ : I → I. Although we have identical mappings at each site, we will denote the I at the ith site by I_i to emphasize the location on the lattice. The phase space is ⋃_{i∈ℤ} I_i and every point in the phase space is given by (i, x), meaning x ∈ I_i. We define the jumps only to be into the neighboring sites, but all the results can be generalized to any finite number of jumps. For that we define the right jump map φ_r : U_r → I associated with the right jump probability p_r, and the left jump map φ_ℓ : U_ℓ → I associated with the left jump probability p_ℓ, where U_r, U_ℓ ⊂ I are open subsets with U_r ∩ U_ℓ = ∅ and p_r, p_ℓ ∈ (0, 1).


Now, we describe the Markov process on the phase space ⋃_{i∈ℤ} I_i to be the pairs (X_k, Y_k) = (i, x) with i ∈ ℤ and x ∈ I_i, where the transitions are given as follows:

(2.1)
\[
(X_{k+1}, Y_{k+1}) =
\begin{cases}
(i, \tau(x)) & \text{if } \tau(x) \notin U_r \cup U_\ell,\\
(i+1, \varphi_r(\tau(x))) \text{ with probability } p_r,\quad (i, \tau(x)) \text{ with probability } 1-p_r & \text{if } \tau(x) \in U_r,\\
(i-1, \varphi_\ell(\tau(x))) \text{ with probability } p_\ell,\quad (i, \tau(x)) \text{ with probability } 1-p_\ell & \text{if } \tau(x) \in U_\ell.
\end{cases}
\]
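The transition rule (2.1) can be simulated directly. The sketch below is illustrative only: the expanding map τ(x) = 3x mod 1, the single-interval jump windows U_r, U_ℓ, the affine jump maps φ_r, φ_ℓ, and all numerical values are assumptions chosen for the demo, not choices made in the paper.

```python
import random

# Illustrative simulation of the transition rule (2.1).
# Assumed ingredients (not from the paper): tau(x) = 3x mod 1,
# single-interval jump windows U_r, U_l, and affine C^1 jump maps
# phi_r, phi_l that rescale a jump window onto [0, 1).
def tau(x):
    return (3.0 * x) % 1.0

U_R, U_L = (0.10, 0.30), (0.60, 0.80)   # hypothetical jump windows
P_R, P_L = 0.5, 0.5                     # jump probabilities p_r, p_l

def phi(y, window):
    a, b = window
    return (y - a) / (b - a)            # affine embedding of the window onto [0, 1)

def step(i, x, rng):
    """One transition (X_k, Y_k) = (i, x) -> (X_{k+1}, Y_{k+1})."""
    y = tau(x)
    if U_R[0] <= y < U_R[1] and rng.random() < P_R:
        return i + 1, phi(y, U_R)       # jump one site to the right
    if U_L[0] <= y < U_L[1] and rng.random() < P_L:
        return i - 1, phi(y, U_L)       # jump one site to the left
    return i, y                         # stay at site i

def drift_estimate(n, x0=0.2345678, seed=7):
    rng = random.Random(seed)
    i, x = 0, x0
    for _ in range(n):
        i, x = step(i, x, rng)
    return i / n                        # empirical X_n / n

print("empirical drift rate:", drift_estimate(100_000))
```

With these roughly balanced windows and probabilities the empirical drift is small; shrinking U_ℓ or lowering p_ℓ biases the walk to the right, which is the drift rate α of (2.2) below.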

We require τ : I → I to be C¹ uniformly expanding, U_r, U_ℓ to be unions of intervals in I, and φ_r, φ_ℓ to be C¹ embeddings with

\[ \min\{|\varphi_r'|,\ |\varphi_\ell'|\}\cdot|\tau'| > \lambda_0 \]

for some λ_0 > 1.

Let m be the Lebesgue measure. For the Markov process (X_k, Y_k), k ≥ 0, defined above with X_k ∈ ℤ and Y_k ∈ I_{X_k}, it is known that there exists α ∈ ℝ such that for m-a.e. x ∈ I, if X_0 = 0 and Y_0 = x then

(2.2) \[ \frac{X_n}{n} \longrightarrow \alpha \quad \text{a.s.} \]

if U_r, U_ℓ are taken to be intervals, see [8]. The limit value α is called the drift rate of the Markov process and it depends neither on Y_0 = x nor on the jump choices made when iterating the system. A similar approach can be used to show the same drift rate result for the Markov process with jump sets that are unions of intervals.

The results we prove below make it possible to understand the statistical behavior of the Markov process.

Theorem 2.1 (Central Limit Theorem). If E[|X_0|] < ∞ and Y_0 has a density on [0, 1], then for every interval J ⊂ ℝ we have

(2.3) \[ \lim_{n\to\infty} P\left( \frac{X_n - n\alpha}{\sqrt{n}} \in J \right) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_J e^{-u^2/(2\sigma^2)}\, du \]

for some σ > 0, where the convergence is with tight maxima and the speed of convergence is O(n^{-1/2}).

The convergence to a normal distribution in the above result is proved in [8], using different techniques, for jump intervals that lead to finitely piecewise expanding maps. Our technique makes it possible not only to prove convergence to a normal distribution for a model that corresponds to countably piecewise expanding maps, but also to obtain the speed of the convergence.


Our next result gives the rate of convergence in Equation 2.2 to the drift rate. The probabilities of being away from the drift rate decay exponentially, which will be useful to prove the quenched CLT.

Theorem 2.2 (Large Deviations). There exists A > 0 such that for all a ∈ (0, A) we have

(2.4) \[ P\left( \frac{|X_n - n\alpha|}{n} > a \right) \le C e^{-C a^2 n}, \quad \text{for some } C > 0. \]

The quenched CLT is proved for random toral automorphisms in [2]. Being able to view the Markov process in our model as random interval maps makes it possible to use some of the techniques in [2] to prove the quenched CLT for the Markov process, as stated below.

Theorem 2.3 (Quenched Central Limit Theorem). If E[|X_0|] < ∞ and Y_0 has a density on [0, 1], then for every interval J ⊂ ℝ and for almost every sequence of jump choices that is made, we have

(2.5) \[ \lim_{n\to\infty} m\left( \frac{X_n - n\alpha}{\sqrt{n}} \in J \right) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_J e^{-u^2/(2\sigma^2)}\, du \]

for some σ > 0, where m is the Lebesgue measure.

One of the modern techniques in dynamical systems is to introduce an induced system; by using the properties of the induced system, we can understand the original system. We introduce the same concept of inducing here for the given Markov process. Let X_n be the Markov process as above with state space ⋃_{i∈ℤ} I_i and local dynamics given by τ : I → I. If there exists a subset J ⊂ I which is a union of intervals J_j and contains the jump intervals, and a function R : J → ℕ⁺ such that R|_{J_j} is constant and gives the number of iterations of τ needed to return to J, then the Markov process X_n^R with local dynamics τ^R : J → J, and with the same jump intervals and jump probabilities, is called the induced process of X_n. Note that the reduction is only on the deterministic part; all the randomness carries over to the induced system. We can now state our result.
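The deterministic part of the inducing can be illustrated numerically. In the sketch below the local map is taken to be τ(x) = 3x mod 1 and the inducing set J = [0, 1/3); both are illustrative assumptions, not the paper's choices. Since τ preserves Lebesgue measure and is ergodic, Kac's lemma predicts a mean return time over J of 1/m(J) = 3.

```python
import random

# Illustrative first-return time R(x) to J = [0, 1/3) under tau(x) = 3x mod 1.
# Both the map and the set J are assumptions chosen for the demo.
J = (0.0, 1.0 / 3.0)

def tau(x):
    return (3.0 * x) % 1.0

def return_time(x, cap=10_000):
    """Number of iterations of tau needed for x in J to come back to J."""
    y, r = tau(x), 1
    while not (J[0] <= y < J[1]):
        y, r = tau(y), r + 1
        if r >= cap:                 # safety cap; returns are a.s. finite
            break
    return r

rng = random.Random(0)
samples = [return_time(rng.uniform(*J)) for _ in range(20_000)]
print("mean return time:", sum(samples) / len(samples))   # Kac: ~ 1/m(J) = 3
```

The induced map τ^R : J → J then applies τ exactly R(x) times, which is the local dynamics of the induced process X_n^R.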

Theorem 2.4 (Extending the Central Limit Theorem). Let X_n^R be the induced process of X_n with summable deterministic return time. If X_n^R satisfies the CLT with tight maxima, then X_n also satisfies the CLT.

3. Random Maps and Their Invariant Measure

Now we view the process as a random map, as given in [8]. The new space is defined by adjoining the jump intervals to I, so we get Î = I ∪ I_r ∪ I_ℓ. Here I_r and I_ℓ are images of U_r and U_ℓ under isomorphisms ι_r and ι_ℓ respectively. We define maps φ̂_r : I_r → I and φ̂_ℓ : I_ℓ → I so that φ̂_r ∘ ι_r = φ_r and φ̂_ℓ ∘ ι_ℓ = φ_ℓ. Now we can introduce the random maps T_r, T_ℓ, T_{rℓ} and T_* on the extended space Î. First we define them on I, for x ∈ I:

\[
T_r(x) = \begin{cases} \tau(x), & x \notin \tau^{-1}(U_r)\\ \iota_r(\tau(x)), & x \in \tau^{-1}(U_r)\end{cases}
\qquad
T_\ell(x) = \begin{cases} \tau(x), & x \notin \tau^{-1}(U_\ell)\\ \iota_\ell(\tau(x)), & x \in \tau^{-1}(U_\ell)\end{cases}
\]
\[
T_{r\ell}(x) = \begin{cases} \tau(x), & x \notin \tau^{-1}(U_r \cup U_\ell)\\ \iota_r(\tau(x)), & x \in \tau^{-1}(U_r)\\ \iota_\ell(\tau(x)), & x \in \tau^{-1}(U_\ell)\end{cases}
\qquad
T_*(x) = \tau(x), \quad x \in I.
\]

Then for every T ∈ {T_r, T_ℓ, T_{rℓ}, T_*} defined on I as above, we define T on I_r and I_ℓ as follows:

\[
T(x) = \begin{cases} T(\hat\varphi_r(x)), & x \in I_r\\ T(\hat\varphi_\ell(x)), & x \in I_\ell.\end{cases}
\]

The corresponding random maps system is then given by Ω = {T_r, T_ℓ, T_{rℓ}, T_*} with a probability distribution P given by {p_r(1−p_ℓ), p_ℓ(1−p_r), p_r p_ℓ, (1−p_r)(1−p_ℓ)} respectively. We denote the random system by T, where we iterate a point x ∈ Î by choosing a map from Ω with respect to P. T represents the dynamics of the Markov process (X_n, Y_n), see [8].

Since each constituent map of T is countably piecewise expanding, we can use the inequality of Rychlik [14] to prove the existence of a unique invariant probability measure absolutely continuous with respect to the Lebesgue measure, abbreviated as a.c.i.p.m. The proof is very similar to the finitely piecewise expanding case in [8], where the Lasota–Yorke inequality [9] is used instead; so we skip the proof here and refer the reader to [16]. The density h of μ is bounded away from 0 on Î: h can be shown to be in BV(Î) with h ≥ 0, so there exists an interval J ⊂ Î on which h is bounded away from 0, and there exists a sequence of maps such that T^k(J) = Î since τ is expanding, so μ is unique.

The drift rate result also follows as in [8] by considering X_n to be the number of visits to I_r minus the number of visits to I_ℓ under T, so by the Ergodic Theorem we have

\[
\frac{X_n}{n} = \frac{1}{n}\sum_{i=0}^{n}(\chi_{I_r} - \chi_{I_\ell}) \circ T^i(x) \longrightarrow \mu(I_r) - \mu(I_\ell), \quad \text{as } n \to \infty.
\]
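The ergodic-average formula above can be checked by simulating the random system T directly on the extended space Î = I ∪ I_r ∪ I_ℓ. Everything concrete below (the map τ(x) = 3x mod 1, the jump windows, the realization of ι_r, ι_ℓ) is an illustrative assumption. Choosing the jumps by two independent coins with probabilities p_r, p_ℓ reproduces exactly the distribution {p_r(1−p_ℓ), p_ℓ(1−p_r), p_r p_ℓ, (1−p_r)(1−p_ℓ)} on {T_r, T_ℓ, T_{rℓ}, T_*}.

```python
import random

# Birkhoff-average drift estimate on the extended space I-hat = I ∪ I_r ∪ I_l.
# Illustrative assumptions: tau(x) = 3x mod 1, single-interval jump windows,
# and iota_r, iota_l realized by tagging a point of U_r (resp. U_l) with a label.
def tau(x):
    return (3.0 * x) % 1.0

U_R, U_L = (0.10, 0.30), (0.60, 0.80)
P_R, P_L = 0.7, 0.3                       # asymmetric, to get a visible drift

def apply_random_map(state, rng):
    """One application of a map drawn from {T_r, T_l, T_rl, T_*}."""
    seg, u = state
    # On I_r and I_l first project back to I via phi-hat = phi ∘ iota^{-1};
    # here phi-hat just rescales the tagged window onto [0, 1).
    if seg == "Ir":
        a, b = U_R; x = (u - a) / (b - a)
    elif seg == "Il":
        a, b = U_L; x = (u - a) / (b - a)
    else:
        x = u
    y = tau(x)
    # Two independent coins select among T_r, T_l, T_rl, T_* with the
    # probabilities p_r(1-p_l), p_l(1-p_r), p_r p_l, (1-p_r)(1-p_l).
    jump_r, jump_l = rng.random() < P_R, rng.random() < P_L
    if jump_r and U_R[0] <= y < U_R[1]:
        return ("Ir", y)                  # iota_r sends U_r into the adjoined copy I_r
    if jump_l and U_L[0] <= y < U_L[1]:
        return ("Il", y)
    return ("I", y)

def drift(n=200_000, seed=3):
    rng = random.Random(seed)
    state = ("I", 0.2345678)
    visits_r = visits_l = 0
    for _ in range(n):
        state = apply_random_map(state, rng)
        visits_r += state[0] == "Ir"
        visits_l += state[0] == "Il"
    return (visits_r - visits_l) / n      # estimates mu(I_r) - mu(I_l)

print("estimated drift mu(I_r) - mu(I_l):", drift())
```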

The rest of the paper is devoted to the proofs of our results. The proof of Theorem 2.1, the averaged CLT with tight maxima together with its speed, is given in Section 4.1; Theorem 2.2, the Large Deviation estimate, is proved in Section 4.2; Theorem 2.3, the quenched CLT, is proved in Section 4.3; and the proof of Theorem 2.4, which extends the CLT from an induced process, is given in Section 5.

4. Statistical Properties

To prove the CLT and other statistical properties, we use the characteristic function operator, which also allows us to prove the speed of the convergence.

Let X = {f ∈ L¹(Î) : Var(f) < ∞}, where "Var" is the total variation. Let T be a random map on Î with constituent maps from Ω and probability distribution P, and let μ be the unique a.c.i.p.m. The random transfer operator for T is given by L_T = Σ_{T∈Ω} p_T L_T, where p_T = P(T) and L_T is the transfer operator of the single map T. Let h be the density of μ; the conjugate transfer operator for T is defined by L = M_h^{-1} L_T M_h, where M_h is the multiplication operator by h. Let f ∈ X. The characteristic operator for T with f ∈ X and θ ∈ ℝ is denoted by L_{f,θ} and given by

\[ L_{f,\theta}(g) = L(e^{i\theta f} g) \quad \text{for } g \in X. \]
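For intuition, the characteristic operator can be discretized numerically. The sketch below uses a single deterministic constituent map, so L is the ordinary transfer operator of τ(x) = 3x mod 1, whose a.c.i.p.m. is Lebesgue (h ≡ 1, so L coincides with its conjugate), with the illustrative observable f(x) = cos(2πx); map, observable and grid are all assumptions made for the demo, not the paper's objects.

```python
import math, cmath

# Discretized characteristic (twisted transfer) operator
# L_{f,theta}(g) = L(e^{i theta f} g) for the single map tau(x) = 3x mod 1
# and the illustrative observable f(x) = cos(2 pi x).
N = 300
grid = [(j + 0.5) / N for j in range(N)]
f = [math.cos(2 * math.pi * x) for x in grid]

def apply_L(g, theta):
    # (L_{f,theta} g)(x) = (1/3) * sum over the 3 preimages y = (x+k)/3
    # of e^{i theta f(y)} g(y); nearest-cell evaluation on the grid.
    out = []
    for x in grid:
        s = 0j
        for k in range(3):
            j = min(int((x + k) / 3.0 * N), N - 1)
            s += cmath.exp(1j * theta * f[j]) * g[j]
        out.append(s / 3.0)
    return out

def leading_eigenvalue(theta, iters=60):
    g = [1 + 0j] * N
    lam = 1 + 0j
    for _ in range(iters):
        h = apply_L(g, theta)
        lam = sum(h) / sum(g)          # Rayleigh-type quotient for the top eigenvalue
        m = max(abs(z) for z in h)
        g = [z / m for z in h]         # renormalize the iterate
    return lam

print("|lambda(0)|   =", abs(leading_eigenvalue(0.0)))   # equals 1: L1 = 1
print("|lambda(0.2)| =", abs(leading_eigenvalue(0.2)))   # expected strictly below 1
```

This reproduces the spectral picture used below: at θ = 0 the leading eigenvalue is 1, and for small θ ≠ 0 the leading eigenvalue moves strictly inside the unit disk.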

The following two results are as in the deterministic case. The reason that they also hold in the random case is that they depend only on the characteristic operator, not on the maps used to iterate the system. See [12] for the proofs in the deterministic case and [16] for the modifications of the proofs for the random case.

Proposition 4.1. For θ ∈ ℝ small enough, L_{f,θ} is a quasi-compact operator on the Banach space (X, ‖·‖); it has a unique eigenvalue of modulus 1, and its eigenspace is one dimensional.

Let Ω^ℕ be the set of one sided sequences of constituent maps and P^ℕ the product measure on Ω^ℕ. For a fixed sequence ω = (ω₁, ω₂, …) ∈ Ω^ℕ and x ∈ Î, we define the nth sum of x under f to be

(4.1) \[ S_n^{\omega} f(x) = \sum_{i=0}^{n-1} f \circ \omega_i \circ \cdots \circ \omega_1(x). \]

Lemma 4.1. Let f, g ∈ X, and let μ be the invariant measure for the random system with constituent maps in Ω. For every n ≥ 1 and θ ∈ ℝ we have

(4.2) \[ \mu\left( L_{f,\theta}^n(g)(x) \right) = \mu\left( \mathbb{E}_{\Omega}\left[ e^{i\theta S_n^{\omega} f(x)}\, g(x) \right] \right), \]

where 𝔼_Ω is the expected value with respect to the measure P^ℕ.

4.1. Central Limit Theorem. First we prove the CLT for the corresponding random maps system of the Markov process, in Proposition 4.2 below, and then deduce Theorem 2.1 from Proposition 4.2.

Proposition 4.2. Let f ∈ X admit no solution to f = k + ψ∘T − ψ for ψ ∈ X and k ∈ ℝ. Then the CLT holds for f ∘ ω_n ∘ ⋯ ∘ ω_1, n = 1, 2, …, with respect to the measure P^ℕ × μ. The variance is given by

\[ \sigma^2 = \int L(g^2) - (Lg)^2 \, d\mu \quad \text{where } g = (I - L)^{-1} f. \]
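The variance formula can be evaluated numerically in a simple deterministic special case. Below, L is the transfer operator of τ(x) = 3x mod 1 with μ Lebesgue, and f(x) = cos(2πx); these choices and the grid are assumptions for the demo. For this f one can check that Lf = 0, so g = (I − L)^{-1} f = f and the formula reduces to σ² = ∫ f² dμ = 1/2.

```python
import math

# Numerical check of sigma^2 = ∫ L(g^2) - (Lg)^2 dmu, g = (I - L)^{-1} f,
# for the illustrative deterministic case tau(x) = 3x mod 1, mu = Lebesgue,
# f(x) = cos(2*pi*x). All concrete choices are assumptions for this demo.
N = 3000
grid = [(j + 0.5) / N for j in range(N)]

def L(g):
    # Transfer operator of 3x mod 1: average over the three preimages.
    out = []
    for x in grid:
        s = 0.0
        for k in range(3):
            j = min(int((x + k) / 3.0 * N), N - 1)
            s += g[j]
        out.append(s / 3.0)
    return out

f = [math.cos(2 * math.pi * x) for x in grid]

# g = (I - L)^{-1} f via the truncated Neumann series sum_{n>=0} L^n f.
g = [0.0] * N
term = f[:]
for _ in range(40):
    g = [a + b for a, b in zip(g, term)]
    term = L(term)

Lg = L(g)
Lg2 = L([v * v for v in g])
sigma2 = sum(a - b * b for a, b in zip(Lg2, Lg)) / N   # ∫ L(g^2) - (Lg)^2 dm
print("sigma^2 ≈", sigma2)                             # analytic value here: 1/2
```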

To prove the CLT we show that the characteristic function of the process, which by Lemma 4.1 can be written as μ(L^n_{f, t/√n}(1)(x)), converges to the characteristic function of the Gaussian distribution, e^{−t²σ²/2}, as n → ∞. For an operator satisfying Proposition 4.1 we recall the following result:

Theorem 4.1 ([12]). There exists a > 0 such that for |θ| < a we have

(4.3) \[ L_{f,\theta}^n(g) = \lambda^n(i\theta)\, N_\theta(g) + M_\theta^n(g) \]

for every g ∈ X and n ≥ 1, where λ(iθ) is the unique greatest eigenvalue of L_{f,θ} and N_θ is the projection operator onto the eigenspace of λ(iθ); M_θ is an operator on X with spectral radius less than 1 and it maps the eigenspace of λ(iθ) to zero. The functions θ ↦ λ(iθ), θ ↦ N_θ and θ ↦ M_θ are analytic.

Proof of Proposition 4.2. The proof follows from Theorem 4.1 with the choice g = 1 and θ = t/√n. The characteristic function of the process can then be written as

(4.4) \[ \mu\left(L^n_{f,\, t/\sqrt n}(1)\right) = \lambda^n\!\left(\tfrac{it}{\sqrt n}\right)\mu\left(N_{t/\sqrt n}(1)\right) + \mu\left(M^n_{t/\sqrt n}(1)\right), \]

where lim_{n→∞} |μ(M^n_{t/√n}(g))| = 0. Furthermore, since they are analytic, we can expand the functions θ ↦ λ(iθ) and θ ↦ N_θ around zero with θ = t/√n, and get

(4.5) \[ N_{t/\sqrt n}(1) = N_0(1) + \frac{it}{\sqrt n}\, N_0^{(1)}(1) + \frac{1}{2}\left(\frac{it}{\sqrt n}\right)^{\!2} N_0^{(2)}(1) + \left(\frac{it}{\sqrt n}\right)^{\!2} \tilde N_{t/\sqrt n}(1), \]

(4.6) \[ \lambda\!\left(\frac{it}{\sqrt n}\right) = \lambda(0) + \frac{it}{\sqrt n}\,\lambda'(0) + \frac{1}{2}\left(\frac{it}{\sqrt n}\right)^{\!2}\lambda''(0) + \left(\frac{it}{\sqrt n}\right)^{\!2} \tilde\lambda\!\left(\frac{it}{\sqrt n}\right), \]

where the remainders \(\tilde N_\theta(1)\) and \(\tilde\lambda(\theta)\) tend to zero as θ → 0. Here λ(0) = 1 and μ(N_0(1)) = 1, since 1 is the unique greatest eigenvalue of the conjugate transfer operator, with eigenfunction 1. The terms other than N_0(1) in Equation 4.5 converge to zero as n → ∞, so μ(N_{t/√n}(1)) → 1. The term λ'(0) in Equation 4.6 can be shown to equal μ(f), by using Equation 4.4 with the scaling t/n in place of t/√n together with Lemma 4.1. Let us assume that μ(f) = 0 to simplify the calculations. Thus, as n → ∞, Equation 4.4 becomes

\[ \lim_{n\to\infty} \mu\left(L^n_{f,\, t/\sqrt n}(1)\right) = \lim_{n\to\infty}\left(1 - \frac{t^2}{2n}\,\lambda''(0) - \frac{t^2}{n}\,\tilde\lambda\!\left(\frac{it}{\sqrt n}\right)\right)^{\!n}, \]

where the last term tends to zero, so the limit is e^{−λ''(0)t²/2}. Now we only need to show that λ''(0) is the limiting variance of the process. It is again easy to see, using Equation 4.4 together with Lemma 4.1, that the second derivative of

\[ \mu\left(\mathbb E_\Omega\left[e^{i\frac{t}{\sqrt n} S_n^\omega f(x)}\right]\right) \]

with respect to t, evaluated at t = 0, is

\[ -\mu\left(\mathbb E_\Omega\left[\left(\frac{S_n^\omega f(x)}{\sqrt n}\right)^{\!2}\right]\right), \]

the negative of the variance of the process. If we take the second derivative of the left hand side of Equation 4.2, written as in Equation 4.4, and evaluate it at t = 0 using Equations 4.5 and 4.6, we get −λ''(0), which concludes the proof of convergence to the Gaussian distribution for f ∈ X. The variance formula can be calculated directly, see [16], page 85, and σ² > 0 is a consequence of the assumption that f ∈ X admits no solution to f = k + ψ∘T − ψ, see [16], page 93. □

The speed of the above convergence is a result of Esseen's inequality applied to the process.


Lemma 4.2 ([11]). There exists a real number a > 0 such that for every |t| < a√n we have

\[ \left| \mu\left(\mathbb E_\Omega\left[e^{\frac{it S_n^\omega f}{\sigma\sqrt n}}\right]\right) - e^{-t^2/2} \right| \le e^{-t^2/4}\left(\frac{2A|t|^3}{\sigma^3\sqrt n} + \frac{B|t|}{\sigma\sqrt n}\right) + \left(\frac{C|t|}{\sigma\sqrt n}\right) r^n, \]

where r = (1 + 2ρ(M_0))/3, with ρ(M_0) the spectral radius of M_0. Furthermore, for some K > 0 we have

\[ \sup_{s\in\mathbb R}\left| \mu\left(\mathbb E_\Omega\left[\chi_{\left\{\frac{S_n^\omega f}{\sigma\sqrt n} \le s\right\}}\right]\right) - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{s} e^{-t^2/2}\,dt \right| \le \frac{K}{a\sqrt n} + \frac{1}{\sqrt n}\int_{-a\sqrt n}^{a\sqrt n}\left[e^{-t^2/2}\left(2At^2 + \frac{B}{\sigma}\right) + \frac{C r^n}{\sigma\sqrt n}\right]dt. \]

Now we are ready to prove Theorem 2.1.

Proof of Theorem 2.1. Proposition 4.2 also holds for non-invariant measures that are absolutely continuous with respect to μ, see [8]. Then the CLT result for the Markov process is given by Proposition 4.2 with the choice f = (χ_{I_r} − χ_{I_ℓ}) − (μ(I_r) − μ(I_ℓ)), where μ is the a.c.i.p.m. for the process. The speed of the CLT is n^{-1/2} by Lemma 4.2.

The convergence with tight maxima follows from previously known results about the Markov process. We say that the convergence of the random process S_n^ω f/√n to the Gaussian distribution is with tight maxima if for every ε > 0 there exists c > 0 such that for every n ≥ 1 we have

(4.7) \[ \mu\left(\mathbb E_\Omega\left[\chi_{\left\{\max_{1\le k\le n} \frac{S_k^\omega f}{\sqrt n} > c\right\}}\right]\right) \le \varepsilon. \]

It is known by [4] that if an L¹-bounded process that is a sum of reverse martingale differences converges to the Gaussian distribution, then the convergence is with tight maxima; and we know by [8] that our random process is given by a sum of reverse martingale differences, which concludes the proof. □

4.2. Large Deviation Estimate. The goal of this section is to prove Theorem 2.2, the Large Deviation estimate, by using Lemma 4.1. The proof is again given first for the corresponding random maps system; Theorem 2.2 is then deduced as in the proof of the CLT, by choosing f = (χ_{I_r} − χ_{I_ℓ}) − (μ(I_r) − μ(I_ℓ)).

Proof of Theorem 2.2. For any n ≥ 1 and t > 0 we have

\[
\mu\left(\mathbb E_\Omega\left[\chi_{\left\{\frac{S_n^\omega f}{n} > a\right\}}\right]\right) \le e^{-at}\,\mu\left(\mathbb E_\Omega\left[e^{t S_n^\omega f/n}\right]\right) = e^{-at}\,\mu\left(L^n_{f,\, t/n}(1)\right) \quad \text{by Lemma 4.1,}
\]
\[
= e^{-at}\left(\lambda^n\!\left(\tfrac{it}{n}\right)\left(1 + O\!\left(\tfrac{|t|}{n}\right)\right) + O(\rho^n)\right) \quad \text{by Theorem 4.1.}
\]

Define the function A(a) = sup_{|θ|≤C} {aθ − ln λ(iθ)} for some C > 0. The function attains its maximum at the point θ_0 = a/λ''(0), and we know λ''(0) = σ², so for t = θ_0 n we get

\[
\mu\left(\mathbb E_\Omega\left[\chi_{\left\{\frac{S_n^\omega f}{n} > a\right\}}\right]\right) \le e^{-a\theta_0 n}\, e^{n(a\theta_0 - A(a))}\left(1 + C|\theta_0|\right) + O(\rho^n)
= e^{-nA(a)}(1 + C|\theta_0|) + O(\rho^n)
\le 2 e^{-a^2 n(1-\varepsilon)/2\sigma^2} \le C e^{-C a^2 n},
\]

for a ≤ C_ε with C_ε small enough. □
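The exponential decay can be seen in the same toy setting as before (τ(x) = 3x mod 1, f(x) = cos(2πx); all illustrative assumptions): the empirical probability that the time average |S_n f/n| exceeds a fixed threshold a shrinks rapidly as n grows.

```python
import math, random

# Empirical large-deviation check for the illustrative system
# tau(x) = 3x mod 1 with f(x) = cos(2*pi*x): estimate
# P(|S_n f / n| > a) for increasing n and watch it decay.
def tau(x):
    return (3.0 * x) % 1.0

def time_average(x, n):
    s = 0.0
    for _ in range(n):
        s += math.cos(2 * math.pi * x)
        x = tau(x)
    return s / n

def tail_prob(n, a=0.05, samples=3000, seed=0):
    rng = random.Random(seed)
    hits = sum(abs(time_average(rng.random(), n)) > a for _ in range(samples))
    return hits / samples

p100, p400 = tail_prob(100), tail_prob(400)
print("P(|S_n/n| > 0.05):", {"n=100": p100, "n=400": p400})
```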

4.3. Quenched Central Limit Theorem. The proof of the quenched CLT is given first for the corresponding random maps system; the statement for random maps is Proposition 4.3. To prove Proposition 4.3 we use the speed of convergence of the averaged CLT and the Large Deviation estimate. We also require the process to satisfy the averaged CLT in 2D. We introduce the 2D random maps system in Definition 4.1 and show that the 2D random maps system also satisfies the Lasota–Yorke inequality in Lemma 4.3, by using the methods of [6] without coupling, modifying the proofs suitably for a random maps system. The rest of the arguments then give the CLT in 2D; finally we prove Proposition 4.3 and deduce the quenched CLT for the Markov process.

Proposition 4.3. Let f ∈ X admit no solution to f = k + ψ∘T − ψ for ψ ∈ X and k ∈ ℝ. Then for P^ℕ almost every ω ∈ Ω^ℕ, the CLT holds for f ∘ ω_n ∘ ⋯ ∘ ω_1, n = 1, 2, …, with respect to the measure μ, with variance as in Proposition 4.2.

Now we can introduce the 2D random maps.

Definition 4.1. Let T be a piecewise expanding map on I. The corresponding 2D map of T is denoted by A_T : I² → I² and given by A_T(x, y) = (T(x), T(y)). The transfer operator of A_T, acting on f : I² → ℝ, is given by

\[ L_{A_T} f(x, y) = \sum_{A_T(a,b) = (x,y)} \frac{f(a,b)}{|T'(a)\, T'(b)|}, \]

where the sum is over the preimages (a, b), one from each rectangular region on which the map A_T is C².

Write I² = I₁ × I₂ to specify the coordinates. We define Var(f) to be the maximum of the coordinate-wise variations Var₁(f) and Var₂(f), given by

\[ \mathrm{Var}_1(f) = \int_{I_2} \mathrm{Var}_{I_1} f(x,y)\, dy \quad \text{and} \quad \mathrm{Var}_2(f) = \int_{I_1} \mathrm{Var}_{I_2} f(x,y)\, dx, \]

where Var_{I₁}(f) is the variation of f(x, y) with respect to the first coordinate, fixing the second coordinate and considering f as a function of x only; similarly Var_{I₂}(f) is the variation of f as a function of y only, for fixed x. The suitable set of observables is then X_{I²} = {f ∈ L¹(I²) : Var(f) < ∞}.

Now again we can define A_T to be the random map on I², where A_T is the corresponding 2D map of a random map T with constituent maps from Ω, the random maps corresponding to the Markov process, with probability distribution P as in Section 3. We define the random transfer operator L_{A_T} = Σ_{T∈Ω} p_T L_{A_T}.

The following lemma is for a deterministic system given by a map A_T : I² → I² as defined above. We modify the proof of the lemma to get the Lasota–Yorke inequality for a random maps system in 2D, and then the CLT in 2D in Proposition 4.4.

Lemma 4.3 ([6]). For f ∈ X_{I²} and n ≥ 1, there exist constants C, R > 0 and r ∈ (0, 1) such that

(4.8) \[ \| L^n_{A_T}(f) \|_{BV} \le C r^n \|f\|_{BV} + R \|f\|_1. \]

Proposition 4.4. Let f ∈ X_{I²} admit no solution to f = k + ψ∘A_T − ψ for ψ ∈ X_{I²} and k ∈ ℝ. Then the CLT holds for f ∘ ω'_n ∘ ⋯ ∘ ω'_1, n = 1, 2, …, with respect to the measure P^ℕ × μ_{A_T}, where ω'_i = A_T for some T ∈ Ω, for i = 1, 2, ….

Proof. Lemma 4.3 implies the Lasota–Yorke inequality for the random Perron–Frobenius operator L_{A_T}, since L_{A_T} is the weighted average of all the operators L_{A_T}; so we can continue as in the deterministic case and deduce that A_T has an a.c.i.p.m. μ_{A_T}. It is unique and bounded away from 0 everywhere on I² by the same arguments as in 1D: invariant sets with positive measure can be shown to have full measure, since A_T is expanding.

Note that to prove the CLT for the 1D random maps system we only used that the random Perron–Frobenius operator satisfies Equation 4.3 with respect to the measure P^ℕ × μ, which depends on the Lasota–Yorke inequality and a unique ergodic measure. Since this is also the case for the random Perron–Frobenius operator of A_T, we can follow the same steps and conclude that the random system A_T on I² satisfies the CLT with variance σ²_{A_T}.

The CLT in 2D, as in the 1D case, also holds with respect to non-invariant measures that are absolutely continuous with respect to μ_{A_T}. If we simply choose the Lebesgue measures, m on I and m_A on I², then it can be calculated easily that σ²_{A_T} = 2σ², where σ² is the variance for the CLT in the 1D random dynamical system with maps T ∈ Ω. □
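The factor 2 in σ²_{A_T} = 2σ² can be sketched directly when the 2D observable is the lift F(x, y) = f(x) + f(y) — an assumption about the choice of observable made here for illustration; the same count holds for f(x) − f(y). Under m_A = m × m the two coordinates are independent conditionally on the realization ω, so

\[
\mathrm{Var}\big(S_n^{\omega} F\big) = \mathrm{Var}\big(S_n^{\omega} f(x)\big) + \mathrm{Var}\big(S_n^{\omega} f(y)\big) + 2\,\mathrm{Var}_{\omega}\!\left(\int_I S_n^{\omega} f \, dm\right),
\]

and the cross term, the variance over ω of the conditional mean, is of lower order than n — it is exactly the annealed-versus-quenched discrepancy controlled by estimates of the type of Equation 4.9 below — so dividing by n gives σ²_{A_T} = σ² + σ² = 2σ² in the limit.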

Now the same approach as in [2] can be used to prove the quenched CLT for the random maps system. We use the deterministic representation of T: for a fixed sequence ω ∈ Ω^ℕ, f ∈ X and x ∈ Î, the process is given by S_n^ω f(x) as in Equation 4.1.

Proof of Proposition 4.3. We know that S_n^ω f(x) satisfies the averaged CLT with respect to P^ℕ × m with constituent maps T ∈ Ω. Furthermore, the process S_n^ω F(x) with constituent maps A_T also satisfies the CLT. Therefore the characteristic functions of the processes converge to the characteristic function of the normal distribution, that is,

\[ \lim_{n\to\infty} \int_{\Omega^{\mathbb N}} \int_I e^{\frac{it S_n^\omega f(x)}{\sqrt n}}\, dm(x)\, dP^{\mathbb N} = e^{-\frac{1}{2} t^2 \sigma^2}. \]


The rest of the proof can be divided into two main steps. First, the L² estimate

(4.9) \[ \mathbb E_{P^{\mathbb N}}\left[ \left| \int_I e^{\frac{it S_n^\omega f(x)}{\sqrt n}}\, dm(x) - e^{-\frac{1}{2}t^2\sigma^2} \right|^2 \right] = O\!\left( \frac{1 + |t|^3}{\sqrt n} \right), \]

where 𝔼_{P^ℕ} is the expected value with respect to P^ℕ. It can be shown by using Lemma 4.2, the speed of the CLT for both random systems given by T and A_T, see [2].

The second part of the proof uses Chebyshev's inequality together with Equation 4.9 to get

(4.10) \[ P^{\mathbb N}\left\{ \omega : \left| \int_I e^{\frac{it S_n^\omega f(x)}{\sqrt n}}\, dm(x) - e^{-\frac{1}{2}t^2\sigma^2} \right| > \varepsilon \right\} \le \frac{C}{\varepsilon^2} \cdot \frac{1 + |t|^3}{\sqrt n} \]

for some C > 0, see [2]. Then simple calculations on suitable subsequences of the left hand side of Equation 4.10 with respect to n and t, see [2], lead to the conclusion that the sum over n of Equation 4.10 is finite for every t ∈ ℝ. We can then apply the Borel–Cantelli Lemma to conclude that the set of sequences in Ω^ℕ that do not satisfy the CLT has zero P^ℕ-measure. □

Proof of Theorem 2.3. Note that "almost every random sequence" in Proposition 4.3 corresponds to "almost every choice of jumps made throughout the Markov process" in Theorem 2.3. Then again with the choice f = (χ_{I_r} − χ_{I_ℓ}) − (μ(I_r) − μ(I_ℓ)), for α = μ(I_r) − μ(I_ℓ) we get

\[ \lim_{n\to\infty} m\left( \frac{X_n - n\alpha}{\sqrt n} \in J \right) = \lim_{n\to\infty} m\left( \frac{S_n^\omega f(x)}{\sqrt n} \in J \right) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_J e^{-u^2/(2\sigma^2)}\, du \]

for P^ℕ almost every ω ∈ Ω^ℕ, i.e. for almost every sequence of jump choices that is made. □

5. Induced Random Maps

Let T denote a random maps system with constituent maps Ω = {T₁, …, T_K} on I and distribution P. We say T induces the random maps system T^R deterministically, with constituent maps Ω₀ = {T₁^R, …, T_K^R} on J ⊂ I and distribution P, if there exists a partition {J_j} of J and a function R : J → ℕ⁺ such that R|_{J_j} is constant on each partition element and T_k^R(x) = T_k^{R(x)}(x) for every k = 1, …, K. The function R is called the return time; it does not depend on the choice of the map, which is why we say the inducing is deterministic.

Let X_n be the Markov process as given in Section 2, except that τ may have non-expanding parts, and let T denote the corresponding random maps system as in Section 3. If X_n^R is the induced process of X_n with return time R as in Section 2, then the corresponding random maps system of X_n^R is the induced random maps system T^R of T. The one-to-one correspondence between the inducing of the Markov process and the inducing of the random maps system can be given by


(5.1) \[
\begin{array}{ccc}
X_n & \xrightarrow{\;\approx\;} & \displaystyle\sum_{i=0}^{n} (\chi_{I_r} - \chi_{I_\ell}) \circ T^i(x) \\[4pt]
\Big\downarrow{\scriptstyle\pi_1} & & \Big\downarrow{\scriptstyle\pi_2} \\[4pt]
X_m^R & \xrightarrow{\;\approx\;} & \displaystyle\sum_{i=0}^{m} (\chi_{I_r} - \chi_{I_\ell}) \circ (T^R)^i(x)
\end{array}
\]

where π₁X_n = X_m^R with m the number of visits of the process X_n to the subset J in the first n iterates. Although the total number of jumps to the right minus jumps to the left does not change, the induced process accumulates its jumps faster in its index, since all the iterations spent outside of J are compressed into a single iteration of the induced process. Here π₂ is simply the deterministic inducing with return time R on each constituent map: for fixed x₀ = x ∈ I and a fixed sequence (ω₁, ω₂, …) of random maps of T, we define the sequence of maps of T^R to be \((\omega_{R(x_0)}^{R(x_0)},\ \omega_{R(x_0)+R(x_1)}^{R(x_1)},\ \ldots)\), where for any n ∈ ℕ we define

(5.2) \[
x_0 = x,\quad
x_1 = \omega_{R(x_0)}^{R(x_0)}(x_0),\quad
x_2 = \omega_{R(x_0)+R(x_1)}^{R(x_1)}(x_1),\quad
x_3 = \omega_{R(x_0)+R(x_1)+R(x_2)}^{R(x_2)}(x_2),
\]

and so on; therefore

\[ \omega_n \circ \cdots \circ \omega_2 \circ \omega_1(x) = \omega_n \circ \omega_{n-1} \circ \cdots \circ \omega_{R(x_0)+\dots+R(x_{m-1})+1}(x_m), \]

with m ∈ ℕ such that R(x₀) + … + R(x_{m−1}) ≤ n < R(x₀) + … + R(x_m). Again the index m depends on n, the initial point x ∈ I and the choices of jumps of the Markov process, i.e. the choices of the random maps for the random maps system.

5.1. Induced Central Limit Theorem. The proof of Theorem 2.4 is again given for the corresponding random maps system; deducing the result for the Markov process via the correspondence in Equation 5.1 is clear. We prove that if the corresponding induced random maps system with summable return time satisfies the CLT with tight maxima, then the original random maps system also satisfies the CLT. The function χ_{I_r} − χ_{I_ℓ} in Equation 5.1 on I will in general be replaced by f ∈ X; the induced function f^R on J is then defined to be f^R(x) = f(x) + f∘T(x) + … + f∘T^{R(x)−1}(x). Again note that f^R can be defined deterministically, since any random map T takes the same values as long as the orbit has not returned to J; and for f = χ_{I_r} − χ_{I_ℓ} we have f^R = χ_{I_r} − χ_{I_ℓ}, since the jump intervals are in J.

Now we will use the abstract tower model of [17], constructed on the induced random maps system T^R, to recover the original random maps system T. Note that, with the random maps defined in Section 3, since the jump intervals are in J, if T(x) is not back in J then the value of T(x) does not depend on the choice of T; namely T_k(x) = T_l(x) for every k, l ∈ {1, …, K} (where K = 4 for the case with jumps only to the right and left, but we give the proof for the more general case where there can be jumps to further sites). Therefore, for any sequence ω ∈ Ω^ℕ, ω_k ∘ ⋯ ∘ ω_1(x) = ω_1^k(x) for k < R(x) − 1. The iterations are also given in Equation 5.2. The deterministic behavior of the random maps system T outside of J makes it possible to use the abstract tower model on T^R : J → J. For that we define the tower ∆ to be

\[ \Delta = \{ (x, \ell) \in J \times \mathbb N : \ell < R(x) \}, \]

where R : J → ℕ⁺ is the return time, which does not depend on the choice of constituent maps. Let F denote a random maps system with constituent maps Ω_∆ = {F₁, …, F_K} on ∆, given by

\[ F_k(x, \ell) = \begin{cases} (x, \ell + 1) & \text{if } \ell + 1 < R(x), \\ (T_k^R(x), 0) & \text{if } \ell + 1 = R(x), \end{cases} \]

where T_k^R ∈ Ω₀. The correspondence between the random maps system F on the tower ∆ and the random maps system T on I can be given by π : Ω_∆^ℕ × ∆ → Ω^ℕ × I, π(ω_∆, x, ℓ) = (ω, T₁^ℓ(x)), where we define (ω)_i = T_k if (ω_∆)_i = F_k, for i ∈ ℕ⁺ and k ∈ {1, …, K}. Note that instead of T₁ any other constituent map of T could be used, since all have the same image for ℓ < R(x); and although π is not onto in the first coordinate Ω^ℕ, it covers the whole dynamics in I, so the following diagram commutes:

(5.3)
\[
\begin{array}{ccc}
(\omega_\Delta,\, x,\, \ell) & \xrightarrow{\;F\;} & (\sigma(\omega_\Delta),\, (\omega_\Delta)_1(x,\ell)) \\[4pt]
\big\downarrow{\scriptstyle\pi} & & \big\downarrow{\scriptstyle\pi} \\[4pt]
(\omega,\, T_1^{\ell}(x)) & \xrightarrow{\;T\;} & (\sigma(\omega),\, \omega_1(T_1^{\ell}(x)))
\end{array}
\]
where σ is the left shift map. We define S_n^F f(x) = Σ_{i=0}^{n} f∘F^i(x).
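As a sketch, the tower dynamics F_k defined above (climb one level in the column, or apply the induced map at the top) can be written directly; the return time R and the constituent maps T_k^R below are hypothetical placeholders, not the paper's actual maps:

```python
import random

# Sketch of the tower map F_k on Delta = {(x, l) : l < R(x)}, with a toy
# return time R and toy induced constituent maps T_k^R (both hypothetical).

def R(x):
    return 1 + int(3 * x) % 3           # hypothetical return time, values in {1, 2, 3}

def TR(k, x):
    return (x + 0.1 * k) % 1.0          # hypothetical induced constituent map T_k^R

def F(k, x, l):
    """One step of the tower dynamics: climb the column, or return via T_k^R."""
    if l + 1 < R(x):
        return (x, l + 1)               # move up the tower
    return (TR(k, x), 0)                # at the top: apply the induced map, drop to the base

# iterate the tower with an i.i.d. choice of constituent map
# (the random sequence omega_Delta), with K = 4 as in the left/right-jump case
state = (0.37, 0)
for _ in range(5):
    k = random.choice([1, 2, 3, 4])
    state = F(k, *state)
```

The column height R(x) is read off the base point x, so the randomness enters only at the return step, exactly as in the definition above.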

Lemma 5.1. Assume ∫ R dm < ∞. If T^R has an a.c.i.p.m. on J whose density is uniformly bounded, then the random maps system F has an a.c.i.p.m. on ∆.

Proof. Let μ_0 be the a.c.i.p.m. for T^R on J. Define μ_k = Σ_{ℓ=0}^∞ (F_k)_*^ℓ (μ_0 | R > ℓ), where (F_k)_*^ℓ μ_0(E) = μ_0(F_k^{−ℓ}E), for k ∈ {1, . . . , K}. Then the desired a.c.i.p.m. is the normalization of the measure μ′ = Σ_{k=1}^K P(F_k) μ_k, where P is the probability distribution on Ω_∆. □
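For completeness, the normalizing constant can be computed from the definitions above; each push-forward preserves total mass, so

```latex
\mu_k(\Delta)
  \;=\; \sum_{\ell=0}^{\infty} \mu_0(R > \ell)
  \;=\; \int_J R \, d\mu_0 ,
\qquad\text{hence}\qquad
\mu'(\Delta) \;=\; \sum_{k=1}^{K} P(F_k)\,\mu_k(\Delta)
  \;=\; \int_J R \, d\mu_0 ,
```

since Σ_k P(F_k) = 1. The assumption ∫ R dm < ∞, together with the bounded density of μ_0, makes this quantity finite, so the normalization is well defined.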

Let μ be the a.c.i.p.m. for F on ∆. Again, for f : ∆ → R, if we define f^R : J → R to be f^R(x) = Σ_{i=0}^{R(x)−1} f∘F^i(x), then one can calculate that

\[ \int_\Delta f \, d\mu = \frac{\int_J f^R \, d\mu_0}{\int_J R \, d\mu_0}. \]

So, to prove the CLT on ∆ for f, assume that ∫ f dμ = 0 to simplify the calculations. As with π_2 in Equation 5.1, we can define the inducing from ∆ to J for a fixed sequence ω_∆ to be π_3 : Ω_∆^N × ∆ → Ω_0^N × J, π_3(ω_∆, x, ℓ) = (ω_0, x), where

(ω_0)_1 = ((π^{(1)}ω_∆)_{R(x)−ℓ})^R,  (ω_0)_2 = ((π^{(1)}ω_∆)_{R(x)−ℓ+R(x_1)})^R,  (ω_0)_3 = ((π^{(1)}ω_∆)_{R(x)−ℓ+R(x_1)+R(x_2)})^R,

and so on, where x_i is as in Equation 5.2, and π^{(1)} is the restriction of π to the first coordinate, so π^{(1)} : Ω_∆^N → Ω^N. Then, for f on ∆ and a fixed sequence of random maps of F, the sum Σ_{i=0}^n f∘F^i(x) is equal to Σ_{i=0}^m f^R∘(T^R)^i(x), where the sequence of random maps of T^R is the projection of the fixed sequence by π_3. The integer m = m_n(x, F) is the largest integer with R(x) − ℓ + R(x_1) + . . . + R(x_m) < n, so it depends on x and the fixed sequence of F.
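The equality of the two sums can be checked numerically in the deterministic toy case (doubling map and J = [0, 1/2), both hypothetical stand-ins): summing f along the orbit up to the m-th return agrees with summing the induced observable f^R along the induced orbit.

```python
# Numerical check of the identity behind S_n^F f = S_m^{T^R} fR in the
# deterministic toy case.  The doubling map and J = [0, 1/2) are hypothetical.

def T(x):
    return (2.0 * x) % 1.0

def in_J(x):
    return x < 0.5

def first_return(x):
    """Return (R(x), T^{R(x)} x) for x in J."""
    y, R = T(x), 1
    while not in_J(y):
        y, R = T(y), R + 1
    return R, y

f = lambda x: x - 0.5                   # an arbitrary observable

def check(x, m):
    # induced side: m steps of T^R, summing fR over each excursion
    induced_sum, y, n = 0.0, x, 0
    for _ in range(m):
        R, y_next = first_return(y)
        z = y
        for _ in range(R):              # fR(y) = sum of f over one excursion
            induced_sum += f(z)
            z = T(z)
        y, n = y_next, n + R
    # direct side: n = R_0 + ... + R_{m-1} steps of T, summing f
    direct_sum, z = 0.0, x
    for _ in range(n):
        direct_sum += f(z)
        z = T(z)
    return abs(direct_sum - induced_sum) < 1e-9

print(check(0.123456, 10))
```

Both sides visit the same orbit points in the same order, so the agreement is exact up to floating-point bookkeeping; the induced side merely regroups the terms excursion by excursion.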

Now we are ready to prove Theorem 2.4. First we give the random version of a theorem proved in [4] for deterministic systems. We skip the proof here, since it follows the same steps as the deterministic version by averaging over all random maps; see [16], page 127, for the proof. Then, as a corollary of that theorem, we prove the CLT on the tower ∆. We then deduce the CLT on I and conclude Theorem 2.4.

Theorem 5.1 ([4]). Let (J, T^R) be an ergodic random dynamical system with invariant probability measure μ_0, with T^R ∈ Ω_0, and let g : J → R. Assume that the process S_n^{T^R} g(x) satisfies the CLT with tight maxima. Let (∆, F) be the random dynamical system with F ∈ Ω_∆ as above, which induces the random system (J, T^R), and let m_1, m_2, . . . be a sequence of integer-valued functions on Ω_∆^N × ∆ such that m_n/n converges in probability to a positive constant. Then S_{m_n}^{T^R} g(x) also satisfies the CLT with respect to any measure μ′ that is absolutely continuous with respect to μ_0.

Let S_n^F f(x, ℓ) be the process on the tower ∆ and S_m^{T^R} f^R(x) be the induced sum, where T is the sequence corresponding to F as given in Equation 5.3, and m = m_n(x, F) as above. It is easy to see that

\[ S_n^F f(x, \ell) = S_m^{T^R} f^R(x). \]

We also assume that ∫ f^R dμ_0 = 0. Now we can state and prove the corollary.

Corollary 5.1. If the process S_m^{T^R} f^R(x)/√m satisfies the CLT with respect to μ_0, then the process S_n^F f(x, ℓ)/√n also satisfies the CLT with respect to μ, where the averages over random maps sequences are taken with respect to P^N.

Proof. We give the proof for ℓ = 0, since the general case adds finitely many steps to the process, whose contribution ℓ/√n → 0. Let R_i(x) = R(x) + R(x_1) + . . . + R(x_i), with x = x_0, to simplify the notation. Furthermore, we may choose n so that the nth iteration ends up in J, since for any n ∈ N, |S_n^F f(x, 0) − S_{R_m(x)}^F f(x, 0)|/√n converges to zero in μ for P^N-almost every F-sequence; see [4].

Now, for n = R_m(x), since S_n^F f(x, 0) = S_m^{T^R} f^R(x), it is enough to show the CLT for S_m^{T^R} f^R(x)/√n with respect to the measure μ′ given by dμ′ = χ_{I_R} dμ. For that we show how n and m_n are related, and use Theorem 5.1. By the Ergodic Theorem we have

\[ \frac{n}{m} = \frac{1}{m} R_m(x) = \frac{1}{m} \sum_{i=0}^{m-1} R \circ (T^R)^i(x) \longrightarrow \int R \, d\mu_0, \quad \text{as } n \to \infty, \]

since R is summable. Since the system is ergodic, ∫ R dμ_0 = 1/μ(J) by Kac's Lemma, so we can approximate n by m_n/μ(J), that is, m_n(x, F) ≈ ⌊μ(J) n⌋. Since m_n/n converges in probability to a positive constant, Theorem 5.1 yields that S_m^{T^R} f^R(x)/√n satisfies the CLT. □
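The relation n/m_n → ∫ R dμ_0 can be illustrated by Monte Carlo in the toy doubling-map case (a hypothetical stand-in), where Lebesgue measure is invariant and Kac's Lemma gives a mean return time to J = [0, 1/2) of 1/m(J) = 2:

```python
import random

# Monte Carlo check of the Kac relation used above: for the doubling map with
# its invariant (Lebesgue) measure and J = [0, 1/2), the mean return time is
# 1/m(J) = 2, so n/m_n -> \int R dmu_0 = 2.  Toy stand-in for the paper's system.

def T(x):
    return (2.0 * x) % 1.0

def return_time(x):
    y, R = T(x), 1
    while y >= 0.5:                     # iterate until the orbit re-enters J
        y, R = T(y), R + 1
    return R

random.seed(0)
# draw x from mu_0, the normalized invariant measure restricted to J
samples = [return_time(random.random() / 2) for _ in range(200_000)]
mean_R = sum(samples) / len(samples)
print(mean_R)                           # close to 2
```

Here the return-time distribution is geometric, P(R = k) = 2^{-k}, so the empirical mean converges to Σ k·2^{-k} = 2, matching the Kac prediction.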

Proof of Theorem 2.4. Note that if the induced Markov process satisfies the CLT on J, then the corresponding random maps process S_m^{T^R} f^R(x)/√m satisfies the CLT with f^R = χ_{I_r} − χ_{I_ℓ} − α on J. By Corollary 5.1, the random maps process S_n^F f(x, ℓ)/√n on the tower ∆ also satisfies the CLT, where f = f^R since the jump intervals are on the base J. This process corresponds to the random maps process S_n^T f(x)/√n on I, so it also satisfies the CLT. Finally, S_n^T f(x)/√n is the random maps process corresponding to the Markov process X_n on I, so the result follows. □

6. Conclusion

We used reduction techniques for extended systems with deterministic local dynamics to prove the statistical properties. We used methods from deterministic dynamical systems to extend the results to more general lattice systems where the local dynamics may have non-expanding points. Although the tower construction restricts the jumps to land in the base only, it is still general enough, since the base can be chosen to be the whole set; see [17]. To prove the quenched Central Limit Theorem, a method for deterministic dynamical systems is again generalized to random dynamical systems in Lemma 4.3, which also applies to statistical properties in higher dimensions.

Here we used the identical local map at each site, but techniques similar to those of [6] can be used to analyze systems with weakly coupled interval maps as the local dynamics at each site. Another potential generalization of such systems is to use a local map on a more general manifold than an interval. Since most of the results depend on the properties of the random transfer operator, the chaotic maps that are already known to behave nicely are the natural candidates for the local dynamics.

References

1. L. Arnold, Random Dynamical Systems, Springer, 2003.
2. A. Ayyer, C. Liverani, and M. Stenlund, Quenched CLT for random toral automorphism, Discrete Contin. Dyn. Syst. 24 (2009), no. 2, 331–348.
3. R. Bowen, Markov Partitions for Axiom A Diffeomorphisms, Amer. J. Math. 92 (1970), 725–747.
4. J.-R. Chazottes and S. Gouëzel, On almost-sure versions of classical limit theorems for dynamical systems, Probab. Theory Related Fields 138 (2007), no. 1-2, 195–234.
5. N. Chernov and D. Dolgopyat, The Galton Board: Limit Theorems and Recurrence, J. Amer. Math. Soc. 22 (2009), 821–858.
6. G. Keller and C. Liverani, A Spectral Gap for a One-dimensional Lattice of Coupled Piecewise Expanding Interval Maps, Lecture Notes in Physics 671 (2005), 115–151.
7. Y. Kifer, Random Perturbations of Dynamical Systems, Birkhäuser, 1988.
8. E. Kobre and L. S. Young, Extended Systems with Deterministic Local Dynamics and Random Jumps, Comm. Math. Phys. 275 (2007), no. 3, 709–720.
9. A. Lasota and J. A. Yorke, On the Existence of Invariant Measures for Piecewise Monotonic Transformations, Trans. Amer. Math. Soc. 186 (1973), 481–488.
10. M. Lenci, Typicality of Recurrence for Lorentz Gases, Ergodic Theory Dynam. Systems 26 (2006), 799–820.
11. V. V. Petrov, Sums of Independent Random Variables, Springer-Verlag, 1975.
12. J. Rousseau-Egele, Un théorème de la limite locale pour une classe de transformations dilatantes et monotones par morceaux, Ann. Probab. 11 (1983), no. 3, 772–788.
13. D. Ruelle, Thermodynamic Formalism, Addison-Wesley, Reading, MA, 1978.
14. M. Rychlik, Bounded Variation and Invariant Measures, Studia Math. 76 (1983), no. 1, 69–80.
15. Ya. G. Sinai, Markov Partitions and C-Diffeomorphisms, Func. Anal. and its Appl. 2 (1968), 64–89.
16. F. Tumel, Random Walks on a Lattice with Deterministic Local Dynamics, Ph.D. thesis, University of Houston, 2012.
17. L. S. Young, Recurrence Times and Rates of Mixing, Israel J. Math. 110 (1999), no. 1, 153–188.

Department of Education, North American University, Houston, USA
E-mail address: [email protected]

