
arXiv:1110.4862v1 [math-ph] 21 Oct 2011

Correlated Markov Quantum Walks

Eman Hamza∗ Alain Joye†‡

Abstract

We consider the discrete time unitary dynamics given by a quantum walk on $\mathbb{Z}^d$ performed by a particle with internal degree of freedom, called coin state, according to the following iterated rule: a unitary update of the coin state takes place, followed by a shift on the lattice, conditioned on the coin state of the particle. We study the large time behavior of the quantum mechanical probability distribution of the position observable in $\mathbb{Z}^d$ for random updates of the coin states of the following form. The random sequences of unitary updates are given by a site dependent function of a Markov chain in time, with the following properties: on each site, they share the same stationary Markovian distribution and, for each fixed time, they form a deterministic periodic pattern on the lattice.

We prove a Feynman-Kac formula to express the characteristic function of the averaged distribution over the randomness at time $n$ in terms of the $n$th power of an operator $M$. By analyzing the spectrum of $M$, we show that this distribution possesses a drift proportional to the time and that its centered counterpart displays a diffusive behavior with a diffusion matrix we compute. Moderate and large deviations principles are also proven to hold for the averaged distribution, and the limit of the suitably rescaled corresponding characteristic function is shown to satisfy a diffusion equation.

An example of random updates for which the analysis of the distribution can be performed without averaging is worked out. The random distribution displays a deterministic drift proportional to time and its centered counterpart gives rise to a random diffusion matrix whose law we compute. We complete the picture by presenting an uncorrelated example.

1 Introduction

Quantum walks are simple models of discrete time quantum evolution taking place on a $d$-dimensional lattice, whose implementation yields a unitary discrete dynamical system on a Hilbert space. The dynamics describes the motion of a quantum particle with internal degree of freedom on an infinite $d$-dimensional lattice according to the following rules. The one-step motion consists in an update of the internal degree of freedom by means of a unitary transform in the relevant part of the Hilbert space, followed by a finite range shift on the lattice, conditioned on the internal degree of freedom of the particle. Due to their

∗Department of Physics, Faculty of Science, Cairo University, Cairo 12613, Egypt
†UJF-Grenoble 1, CNRS Institut Fourier UMR 5582, Grenoble, 38402, France
‡Partially supported by the Agence Nationale de la Recherche, grant ANR-09-BLAN-0098-01

similarity with classical random walks on a lattice, quantum walks constructed this way are often considered as their quantum analogs. In this context, the space of the internal degree of freedom is called coin space, the degree of freedom is the coin state, and the unitary operators performing the update are coin matrices.

Quantum walks have become quite popular in the quantum computing community in recent years, due to the role they play in computer science, and in particular in quantum search algorithms. See for example [33], [4], [26], [28], [39], [5], [32] and the review [36]. Also, quantum walks are used as effective dynamics of quantum systems in certain asymptotic regimes; see e.g. [14], [1], [33], [31], [11], [35] for a few models of this type, and [7], [10], [15], [17], [6] for their mathematical analysis. Moreover, quantum walk dynamics have been shown to describe experimental reality for systems of cold atoms trapped in suitably monitored optical lattices [24], and ions caught in monitored Paul traps [42].

The literature contains several variants of the quantum dynamics on a lattice described above, which may include decoherence effects and/or more general graphs; see e.g. the reviews and papers [4], [26], [3], [8]. In this work, we consider the case where the evolution of the walker is unitary, and where the underlying lattice is $\mathbb{Z}^d$ with coin space of dimension $2d$, which is, in a sense, the closest to the classical random walk.

We are interested in the long time behavior of quantum mechanical expectation values of observables that are non-trivial on the lattice only, i.e. that do not depend on the internal degree of freedom of the quantum walker. Equivalently, this amounts to studying a family of random vectors $X_n$ on the lattice $\mathbb{Z}^d$, indexed by the discrete time variable, with probability laws $\mathbb{P}(X_n = k) = W_k(n)$ defined by the prescriptions of quantum mechanics. The initial state of the quantum walker is described by a density matrix.

As is well known, when the unitary update of the coin variable is performed at each time step by means of the same coin matrix, this leads to a ballistic behavior of the expectation of the position variable, characterized by $\mathbb{E}_{W(n)}(X_n)\simeq nV$ when $n$ is large, for some vector $V$, and by fluctuations of the centered random variable $X_n - nV$ of order $n$, see e.g. [28].

The case where the coin matrices used to update the coin variable depend on the time step in a random fashion, a situation of temporal disorder, is dealt with in [21], see also [3]. All coin variables are updated simultaneously and in the same way, independently of the position on the lattice. This yields a random distribution $W^\omega_\cdot(n)$, corresponding to the random variable $X^\omega_n$ which, once centered and averaged over the disorder, displays a diffusive behavior in the long time limit.

If the coin matrices depend on the site of the lattice $\mathbb{Z}^d$ but not on time, i.e. a case of spatial disorder, one expects dynamical localization, characterized by finite values of all moments, uniformly bounded in time $n$, for (almost) all realizations. In dimension $d = 1$, this was proven in [20] for certain sets of random coin matrices, results which were further generalized in [2]. See also [27], [38] for related aspects. The higher dimensional case is open.

The situation addressed here is that of correlated spatio-temporal disorder. We consider random coin matrices which depend both on time and space in the following way: the random coin matrix at site $x\in\mathbb{Z}^d$ and time $n\in\mathbb{N}$ is given by $C^\omega_n(x) = \sigma_x(\omega(n))$, where $\{\omega(j)\}_{j\in\mathbb{N}}$ is a temporally stationary Markov chain on a finite set $\Omega$ of unitary matrices on $\mathbb{C}^{2d}$, and $\mathbb{Z}^d\ni x\mapsto\sigma_x$ is a given representation of $\mathbb{Z}^d$ in terms of measure invariant bijections on $\Omega$. In particular, $\sigma_0 = \mathrm{Id}$, the identity on $\Omega$, and $\Gamma = \{y\in\mathbb{Z}^d\ \text{s.t.}\ \sigma_y = \mathrm{Id}\}$ forms a periodic sub-lattice of $\mathbb{Z}^d$. Therefore, at each site $x\in\mathbb{Z}^d$, the sequence $\{C^\omega_j(x)\}_{j\in\mathbb{N}}$ is Markovian with a distribution independent of $x$, and at each time $n\in\mathbb{N}$, the set $\{C^\omega_n(x),\ x\in\mathbb{Z}^d\}$ is $\Gamma$-periodic. This is a natural generalization of the case studied in [21], which displays a deterministic non trivial periodic structure in the spatial patterns of random coin matrices at each time step.

This setup is an analog of the one addressed in [34], [22], [18], where the dynamics is generated by a quantum Hamiltonian with a time dependent potential generated by a random process. For quantum walks, the role of the random time dependent potential is played by the random coin operators, whereas the role of the deterministic kinetic energy is played by the shift.

We address the problem by an analysis of the large $n$ behavior of the characteristic function of the distribution $w_\cdot(n)$, $\Phi_n(y) = \mathbb{E}_{w(n)}(e^{iyX_n})$, where $w_\cdot(n) = \mathbb{E}(W^\omega_\cdot(n))$ is the averaged quantum mechanical distribution on $\mathbb{Z}^d$, with initial condition $\rho_0$, a density matrix on $l^2(\mathbb{Z}^d)\otimes\mathbb{C}^{2d}$. By adapting the strategy of [22], [18], inspired by [34], to our discrete time unitary setup, we first establish a Feynman-Kac type formula to express $w_\cdot(n)$ in terms of (some matrix element of) the $n$th power of a contraction operator $M$ acting on an extended Hilbert space which involves a space of (density) matrices and the probability space of coin matrices. Then, we analyze the spectral properties of $M$, making use of the periodicity and invariance properties of $\sigma_x$, which yield a fiber decomposition of a generalized Fourier transform of $M$. In turn, this allows us to provide a detailed description of the large $n$ behavior of the characteristic function $\Phi_n(y)$ in the diffusive regime $y\to y/\sqrt{n}$, and at $y$ fixed, in terms of the spectral data of $M$ and their perturbative behavior.

The foregoing is the main technical result of the paper, from which several consequences

can be drawn, by arguments similar to those used in [21]. Under natural assumptions on the spectrum of $M$, the averaged distribution $w_\cdot(n)$ displays a diffusive behavior characterized by the following data: a deterministic drift vector $r\in\mathbb{R}^d$ and a diffusion matrix $D$, which we compute, such that, for $n$ large and $i,j = 1,2,\dots,d$,
$$\mathbb{E}_{w(n)}(X_n)\simeq nr, \qquad \mathbb{E}_{w(n)}\big((X_n-nr)_i(X_n-nr)_j\big)\simeq nD_{ij}.$$
Moreover, we get convergence of the properly rescaled characteristic function of $X_n - nr$, namely $e^{-i[tn]ry/\sqrt{n}}\,\Phi_{[tn]}(y/\sqrt{n})$, to the Fourier transform of superpositions of solutions to a diffusion equation, of the form $\int_{\mathbb{T}^d} e^{-\frac{t}{2}\langle y|D(p)y\rangle}\,dp/(2\pi)^d$, with diffusion matrix $D(p)$, $p\in\mathbb{T}^d$, the $d$-dimensional torus. Also, we get moderate deviations results of the type
$$\mathbb{P}\big(X_n - nr\in n^{(\alpha+1)/2}\,\Gamma\big)\simeq e^{-n^{\alpha}\inf_{x\in\Gamma}\Lambda^*(x)} \quad\text{as } n\to\infty, \qquad (1.1)$$
for any set $\Gamma\subset\mathbb{R}^d$ and any $0<\alpha<1$, with a rate function $\Lambda^*:\mathbb{R}^d\to[0,\infty]$ we determine. Finally, we improve on [21] by establishing large deviations results for sets in a certain neighborhood of the origin, under stronger hypotheses. Informally, there exists an open ball $B$ centered at the origin such that for all sets $\Gamma\subset B$,
$$\mathbb{P}\big(X_n - nr\in n\Gamma\big)\simeq e^{-n\inf_{x\in\Gamma}\Lambda^*(x)} \quad\text{as } n\to\infty, \qquad (1.2)$$
where $\Lambda^*:\mathbb{R}^d\to[0,\infty]$ is another rate function we determine. By Bryc's argument, [13], a central limit theorem for $X_n$ holds under the same conditions.


To complete the picture, we work out an example introduced in [21] where the distribution of coin matrices is supported on the set of unitary permutation matrices. This case allows us to analyze the random distribution $W^\omega(n)$ without averaging over the disorder. We show that under our hypotheses, in this case $W^\omega(n)$ coincides with the distribution of a classical walk on the lattice, with increments that are neither stationary nor Markovian. Nevertheless, we can apply spectral methods as well to study the long time asymptotics of the corresponding random characteristic function, which allows us to get the existence of a random diffusion matrix $D^\omega$ such that
$$\mathbb{E}_{W^\omega(n)}\big((X^\omega_n - nr)_i(X^\omega_n - nr)_j\big)\simeq nD^\omega_{ij}, \qquad i,j = 1,2,\dots,d,$$
and whose matrix elements $D^\omega_{ij}$ are distributed according to the law of $X^\omega_i X^\omega_j$, where the vector $X^\omega$ is distributed according to $\mathcal{N}(0,\Sigma)$, for a matrix $\Sigma$ we determine.

We also consider the completely decorrelated case where the coin matrices at each site are i.i.d., i.e. a situation where no spatial structure is present in the pattern of coin matrices.

Acknowledgements E.H. wishes to thank the CNRS and the Institut Fourier for support in the Fall of 2010, when this work was initiated, and A.J. wishes to thank the CRM for support in July 2011, when part of this work was done.

2 General Setup

Let $\mathcal{H} = \mathbb{C}^{2d}\otimes l^2(\mathbb{Z}^d)$ be the Hilbert space of the quantum walker in $\mathbb{Z}^d$ with $2d$ internal degrees of freedom. We denote the canonical basis of $\mathbb{C}^{2d}$ by $\{|\tau\rangle\}_{\tau\in I_\pm}$, where $I_\pm = \{\pm 1,\pm 2,\dots,\pm d\}$, so that the orthogonal projectors on the basis vectors are denoted by $P_\tau = |\tau\rangle\langle\tau|$, $\tau\in I_\pm$. We shall denote the canonical basis of $l^2(\mathbb{Z}^d)$ by $\{|x\rangle\}_{x\in\mathbb{Z}^d}$, or by $\{\delta_x\}_{x\in\mathbb{Z}^d}$. For a vector $\psi\in\mathcal{H}$ we shall write $\psi = \sum_{x\in\mathbb{Z}^d}\psi(x)|x\rangle$, where $\psi(x) = \langle x|\psi\rangle\in\mathbb{C}^{2d}$ and $\sum_{x\in\mathbb{Z}^d}\|\psi(x)\|^2_{\mathbb{C}^{2d}} = \|\psi\|^2 < \infty$. We shall abuse notation by using the same symbols $\langle\cdot|\cdot\rangle$ for scalar products and the corresponding "bra" and "ket" vectors on $\mathcal{H}$, $\mathbb{C}^{2d}$ and $l^2(\mathbb{Z}^d)$, the context allowing us to determine which spaces we are talking about. Also, we will often drop the subscript $\mathbb{C}^{2d}$ of the norm.

A coin matrix acting on the internal degrees of freedom, or coin state, is a unitary matrix $C\in M_{2d}(\mathbb{C})$, and a jump function is a function $r: I_\pm\to\mathbb{Z}^d$. The shift $S$ is defined on $\mathcal{H}$ by
$$S = \sum_{x\in\mathbb{Z}^d}\sum_{\tau\in I_\pm}P_\tau\otimes|x+r(\tau)\rangle\langle x|. \qquad (2.1)$$
By construction, a walker at site $y$ with internal degree of freedom $\tau$, represented by the vector $|\tau\rangle\otimes|y\rangle\in\mathcal{H}$, is simply sent by $S$ to one of the neighboring sites, determined by the jump function $r(\tau)$:
$$S\,|\tau\rangle\otimes|y\rangle = |\tau\rangle\otimes|y+r(\tau)\rangle. \qquad (2.2)$$

The composition with $C(y)\otimes\mathbb{I}$, where the coin matrix $C(y)$ is allowed to depend on the site $y$, reshuffles or updates the coin state so that the pieces of the wave function corresponding to different internal states are shifted in different directions, depending on the internal state. The corresponding one step unitary evolution $U$ of the walker on $\mathcal{H} = \mathbb{C}^{2d}\otimes l^2(\mathbb{Z}^d)$ is given by
$$U = \sum_{x\in\mathbb{Z}^d}\sum_{\tau\in I_\pm}P_\tau C(x)\otimes|x+r(\tau)\rangle\langle x|. \qquad (2.3)$$

Given a set of $n > 0$ site-dependent unitary coin matrices $C_k(x)\in M_{2d}(\mathbb{C})$, $k = 1,\dots,n$ and $x\in\mathbb{Z}^d$, we construct an evolution operator $U(n,0)$ from time $0$ to time $n$, characterized at time $k$ by $U_k$ defined in (2.3) with $\{C_k(x)\}_{x\in\mathbb{Z}^d}$, via
$$U(n,0) = U_nU_{n-1}\cdots U_1. \qquad (2.4)$$
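For concreteness, here is a minimal numerical sketch (not part of the original analysis) of the operators in (2.1)-(2.4) for $d = 1$: coin space $\mathbb{C}^2$, jump function $r(\pm 1) = \pm 1$, a truncated lattice window, and a walker started at the origin. The window size and the choice of coin matrices are illustrative assumptions.

```python
import numpy as np

d, L = 1, 41                      # d = 1, lattice window {-20, ..., 20}
x0 = L // 2                       # index of the origin
r = {0: +1, 1: -1}                # jump function: coin basis index -> step

def one_step(Ck_of_x, psi):
    """Apply U_k = sum_{x,tau} P_tau C_k(x) (x) |x+r(tau)><x| to psi[coin, site]."""
    out = np.zeros_like(psi)
    for x in range(L):
        v = Ck_of_x(x) @ psi[:, x]           # coin update at site x
        for tau in (0, 1):
            y = x + r[tau]
            if 0 <= y < L:                   # truncation: drop jumps off the window
                out[tau, y] += v[tau]
    return out

# Example: the same Hadamard coin at every site and every time step
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coin = lambda x: H

psi = np.zeros((2, L), dtype=complex)
psi[:, x0] = np.array([1, 1j]) / np.sqrt(2)  # phi_0 (x) |0>

n = 15
for _ in range(n):                            # U(n,0) = U_n ... U_1
    psi = one_step(coin, psi)

W = np.sum(np.abs(psi) ** 2, axis=0)          # W_k(n) = ||J^0_k(n) phi_0||^2, cf. (2.7)
print("total probability:", W.sum())          # stays 1 (unitarity, no boundary reached)
print("mean position:", np.dot(np.arange(L) - x0, W))
```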

Let $f:\mathbb{Z}^d\to\mathbb{C}$ and define the multiplication operator $F: D(F)\to\mathcal{H}$ on its domain $D(F)\subset\mathcal{H}$ by $(F\psi)(x) = f(x)\psi(x)$, $\forall x\in\mathbb{Z}^d$, where $\psi\in D(F)$ is equivalent to $\sum_{x\in\mathbb{Z}^d}|f(x)|^2\|\psi(x)\|^2_{\mathbb{C}^{2d}} < \infty$. Note that $F$ acts trivially on the coin state. When $f$ is real valued, $F$ is self-adjoint and will be called a lattice observable.

In particular, consider a walker characterized at time zero by the normalized vector $\psi_0 = \varphi_0\otimes|0\rangle$, i.e. which sits on site $0$ with coin state $\varphi_0$. The quantum mechanical expectation value of a lattice observable $F$ at time $n$ is given by $\langle F\rangle_{\psi_0}(n) = \langle\psi_0|U(n,0)^*F\,U(n,0)\psi_0\rangle$.

A straightforward computation yields the following expression for the corresponding discrete evolution from time zero to time $n$.

Lemma 2.1 With the notations above,
$$U(n,0) = \sum_{x\in\mathbb{Z}^d}\sum_{k\in\mathbb{Z}^d}J^x_k(n)\otimes|x+k\rangle\langle x|, \qquad (2.5)$$
where
$$J^x_k(n) = \sum_{\substack{\tau_1,\tau_2,\dots,\tau_n\in I_\pm\\ \sum_{s=1}^n r(\tau_s)=k}}P_{\tau_n}C_n\Big(x+\sum_{s=1}^{n-1}r(\tau_s)\Big)P_{\tau_{n-1}}C_{n-1}\Big(x+\sum_{s=1}^{n-2}r(\tau_s)\Big)\cdots P_{\tau_1}C_1(x)\ \in M_{2d}(\mathbb{C}), \qquad (2.6)$$
with the convention $J^x_k(n) = 0$ if no $(\tau_1,\dots,\tau_n)$ satisfies $\sum_{s=1}^n r(\tau_s) = k$. Moreover, for any lattice observable $F$, and any normalized vector $\psi_0 = \varphi_0\otimes|0\rangle$,
$$\langle F\rangle_{\psi_0}(n) = \langle\psi_0|U^*(n,0)F\,U(n,0)\psi_0\rangle = \sum_{k\in\mathbb{Z}^d}f(k)\,\langle\varphi_0|J^0_k(n)^*J^0_k(n)\varphi_0\rangle \equiv \sum_{k\in\mathbb{Z}^d}f(k)\,W_k(n), \qquad (2.7)$$
where $W_k(n) = \|J^0_k(n)\varphi_0\|^2_{\mathbb{C}^{2d}}$ satisfy
$$\sum_{k\in\mathbb{Z}^d}W_k(n) = \sum_{k\in\mathbb{Z}^d}\|J^0_k(n)\varphi_0\|^2_{\mathbb{C}^{2d}} = \|\psi_0\|^2_{\mathcal{H}} = 1. \qquad (2.8)$$


Remark 2.2 We view the non-negative quantities $\{W_k(n)\}_{n\in\mathbb{N}^*}$ as the probability distributions of a sequence of $\mathbb{Z}^d$-valued random variables $\{X_n\}_{n\in\mathbb{N}^*}$ with
$$\mathrm{Prob}(X_n = k) = W_k(n) = \langle\psi_0|U(n,0)^*(\mathbb{I}\otimes|k\rangle\langle k|)U(n,0)\psi_0\rangle = \|J^0_k(n)\varphi_0\|^2_{\mathbb{C}^{2d}}, \qquad (2.9)$$
in keeping with (2.7). In particular, $\langle F\rangle_{\psi_0}(n) = \mathbb{E}_{W_k(n)}(f(X_n))$. We shall use both notations freely.

Remark 2.3 All sums over $k\in\mathbb{Z}^d$ are finite, since $J^x_k(n) = 0$ if $\max_{j=1,\dots,d}|k_j| > \rho n$ for some $\rho > 0$ independent of $x\in\mathbb{Z}^d$, the jump function having finite range.

We are particularly interested in the long time behavior, $n\gg 1$, of $\langle X^2\rangle_{\psi_0}(n)$, the expectation of the observable $X^2$ corresponding to the function $f(x) = x^2$ on $\mathbb{Z}^d$ with initial condition $\psi_0$; in other words, in the second moments of the distributions $\{W_k(n)\}_{n\in\mathbb{N}^*}$.

Let us proceed by expressing the probabilities $W_k(n)$ in terms of the $C_k$'s, $k = 1,\dots,n$. We need to introduce some more notation. Let $I_n(k) = \{(\tau_1,\dots,\tau_n)\ :\ \tau_l\in I_\pm,\ l = 1,\dots,n,\ \sum_{l=1}^n r(\tau_l) = k\}$. In other words, $I_n(k)$ denotes the set of paths that link the origin to $k\in\mathbb{Z}^d$ in $n$ steps via the jump function $r$. Let us write $\varphi_0 = \sum_{\tau\in I_\pm}a_\tau|\tau\rangle$.

Lemma 2.4
$$W_k(n) = \sum_{\substack{(\tau_0,\tau_1,\dots,\tau_n),\ (\tau'_0,\tau'_1,\dots,\tau'_n)\\ (\tau_1,\dots,\tau_n),\,(\tau'_1,\dots,\tau'_n)\in I_n(k),\ \tau_n=\tau'_n}}\overline{a_{\tau'_0}}\,a_{\tau_0}\,\langle\tau'_0|C^*_1(0)\,\tau'_1\rangle\langle\tau_1|C_1(0)\,\tau_0\rangle\,\times \qquad (2.10)$$
$$\times\prod_{s=2}^{n}\Big\langle\tau'_{s-1}\Big|C^*_s\Big(\sum_{j=1}^{s-1}r(\tau'_j)\Big)\tau'_s\Big\rangle\Big\langle\tau_s\Big|C_s\Big(\sum_{j=1}^{s-1}r(\tau_j)\Big)\tau_{s-1}\Big\rangle.$$

We approach the problem through the characteristic functions $\Phi_n$ of the probability distributions $\{W_\cdot(n)\}_{n\in\mathbb{N}^*}$, defined by the periodic function
$$\Phi_n(y) = \mathbb{E}_{W(n)}(e^{iyX_n}) = \sum_{k\in\mathbb{Z}^d}W_k(n)\,e^{iyk}, \qquad y\in[0,2\pi)^d. \qquad (2.11)$$
To emphasize the dependence on the initial state, we will sometimes write $\Phi^{\varphi_0}_n$ and/or $W^{\varphi_0}_k(n)$. All periodic functions will be viewed as functions defined on the torus, i.e. $[0,2\pi)^d\simeq\mathbb{T}^d$. The asymptotic properties of the quantum walk emerge from the analysis of the limit, in an appropriate sense as $n\to\infty$, of the characteristic function in the diffusive scaling
$$\lim_{n\to\infty}\Phi_n(y/\sqrt{n}). \qquad (2.12)$$
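Continuing the numerical sketch above (same illustrative assumptions on $d$, the coin and the truncation), the characteristic function (2.11) and its diffusive rescaling (2.12) can be evaluated directly from the probabilities $W_k(n)$:

```python
import numpy as np

def char_fn(W, y, x0):
    """Phi_n(y) = sum_k W_k(n) exp(i y k) for a 1-d probability array W indexed from -x0."""
    k = np.arange(len(W)) - x0
    return np.sum(W * np.exp(1j * y * k))

# With W, n and x0 from the previous sketch, the diffusively rescaled value of (2.12) is
# char_fn(W, y / np.sqrt(n), x0); scanning n shows how it approaches its limit.
```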

3 Correlated Markovian Random Framework

We give here the hypotheses we make on the randomness of the model.

Assumption C:


Let $\Omega = \{C_1, C_2,\dots, C_F\}$ be a finite set of unitary coin matrices on $\mathbb{C}^{2d}$ and let $\omega\in\Omega^{\mathbb{N}}$ be a Markov chain with stationary initial distribution $p$ and transition matrix $\mathbb{P}$ s.t. $\mathbb{P}(\eta,\zeta) = \mathrm{Prob}(\omega(n+1) = \zeta\,|\,\omega(n) = \eta)$, for all $n\in\mathbb{N}$. Let $\sigma$ be a representation of $\mathbb{Z}^d$, $x\mapsto\sigma_x$, in terms of measure preserving maps $\sigma_x:\Omega\to\Omega$ such that $p(\sigma_x\zeta) = p(\zeta)$ and $\mathbb{P}(\sigma_x\zeta,\sigma_x\eta) = \mathbb{P}(\zeta,\eta)$.

Remarks 3.1 i) This is equivalent to saying that the paths of $\sigma_x(\omega(\cdot))$ have the same distribution as the paths of $\omega(\cdot)$, for all $x\in\mathbb{Z}^d$.
ii) Because $x\mapsto\sigma_x$ is a representation of $\mathbb{Z}^d$, $\sigma_x$ is a bijection of the finite set $\Omega$ for any $x\in\mathbb{Z}^d$, and $\sigma_0 = \mathrm{Id}$. Moreover, the finite set of bijections $\{\sigma_x\}_{x\in\mathbb{Z}^d}$ must commute with one another.
iii) Let $\Gamma = \{x\in\mathbb{Z}^d\ \text{s.t.}\ \sigma_x = \mathrm{Id}\}$. Then $\sigma_x = \sigma_y$ is equivalent to $x-y\in\Gamma$. If $g\in\mathbb{N}^*$ denotes the cardinality of the group $\{\sigma_x\}_{x\in\mathbb{Z}^d}$, then for any $j\in\{1,\dots,d\}$, the vector $(0,\dots,0,g,0,\dots,0)^T\in\mathbb{Z}^d$, where $g$ sits in the $j$th slot, belongs to $\Gamma$. Hence the lattice $\Gamma$ is of dimension $d$.
iv) We choose $B_\Gamma\subset\mathbb{Z}^d$ such that $0\in B_\Gamma$ and such that $x\mapsto\sigma_x$ restricted to $B_\Gamma$ is a bijection onto the set of bijections $\{\sigma_x\}_{x\in\mathbb{Z}^d}$. For any $x\in\mathbb{Z}^d$, we have a unique decomposition $x = x_0+\eta$, with $x_0\in B_\Gamma$ and $\eta\in\Gamma$.

We consider the random evolution obtained from sequences of coin matrices defined on site $x\in\mathbb{Z}^d$ at time $n\geq 0$ by
$$C^\omega_n(x) = \sigma_x(\omega(n)). \qquad (3.1)$$
This means that while the coin matrices at different sites all have the same distribution as $C^\omega_n(0) = \omega(n)$, they can take different, correlated values depending on $\sigma_x$.
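As an illustration of Assumption C and of the rule (3.1), here is a minimal sketch, with assumed and purely illustrative choices of $\Omega$, $\mathbb{P}$ and $\sigma$, for $d = 1$: $\Omega$ consists of two coin matrices, the representation $\sigma_x$ swaps them when $x$ is odd (so $\Gamma = 2\mathbb{Z}$), and the chain is sampled from its uniform stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Omega = {H, X}: two assumed coin matrices
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Omega = [H, X]

P = np.array([[0.7, 0.3],                        # transition matrix of the chain omega(n);
              [0.3, 0.7]])                       # symmetric, so p = (1/2, 1/2) is stationary

def sigma(x, eta):
    """Representation of Z: sigma_x swaps the two labels iff x is odd (Gamma = 2Z)."""
    return eta if x % 2 == 0 else 1 - eta

def sample_chain(n):
    """Sample omega(1), ..., omega(n) from the stationary Markov chain."""
    omega = [rng.choice(2)]                      # stationary p = (1/2, 1/2)
    for _ in range(n - 1):
        omega.append(rng.choice(2, p=P[omega[-1]]))
    return omega

n = 5
omega = sample_chain(n)
# Coin matrix at site x and time k: C^omega_k(x) = sigma_x(omega(k)), cf. (3.1)
coin = lambda k, x: Omega[sigma(x, omega[k - 1])]
print([["H", "X"][sigma(x, omega[0])] for x in range(-3, 4)])   # 2-periodic pattern at time 1
```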

It is more natural in this setting to carry out the analysis in terms of density matrices. The set of density matrices, DM, consists of the trace one non-negative operators on $\mathbb{C}^{2d}\otimes l^2(\mathbb{Z}^d)$. Any bounded operator on $\mathcal{H} = \mathbb{C}^{2d}\otimes l^2(\mathbb{Z}^d)$ can be represented by its kernel as
$$\rho = \sum_{(x,y)\in\mathbb{Z}^{2d}}\rho(x,y)\otimes|x\rangle\langle y|, \quad\text{where } \rho(x,y)\in M_{2d}(\mathbb{C}). \qquad (3.2)$$

A non-negative operator $\rho$ on $\mathcal{H}$ is trace class iff
$$\sum_{x\in\mathbb{Z}^d}\|\rho(x,x)\| < \infty. \qquad (3.3)$$
We say that $\rho$ belongs to $l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))$ when
$$\sum_{(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d}\|\rho(x,y)\|^2 < \infty. \qquad (3.4)$$

Note that (3.4) is the finiteness of the Hilbert-Schmidt norm induced by the scalar product on $l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))$,
$$\langle\eta,\rho\rangle = \mathrm{Tr}(\eta^*\rho) = \sum_{(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d}\mathrm{Tr}\big(\eta(x,y)^*\rho(x,y)\big), \qquad (3.5)$$
where we use the same symbol "Tr" for the trace in different spaces; this scalar product makes $l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))$ a Hilbert space. We also note that if $\rho$ is non-negative, then for any $x,y\in\mathbb{Z}^d$ (see [21] and Lemma 1.21 in [41])
$$\rho(x,y) = \rho(y,x)^*, \qquad \rho(x,x)\geq 0, \qquad\text{and}\qquad \|\rho(x,y)\|\leq\|\rho(x,x)\|^{1/2}\|\rho(y,y)\|^{1/2}. \qquad (3.6)$$
Thus DM and the set of non-negative trace-class operators belong to $l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))$.

If $\rho_0$ denotes the initial density matrix, its evolution at time $n$ under $U(n,0)$ defined by (2.5) is given by
$$\rho_n = U(n,0)\,\rho_0\,U^*(n,0). \qquad (3.7)$$
The kernel of $\rho_n$ reads
$$\rho_n(x,y) = \sum_{(k,k')\in\mathbb{Z}^d\times\mathbb{Z}^d}J^{x-k}_k(n)\,\rho_0(x-k,\,y-k')\,J^{y-k'}_{k'}{}^*(n), \qquad (3.8)$$

and the expectation of the lattice observable $F = \mathbb{I}\otimes f$ is given by
$$\langle F\rangle_{\rho_0}(n) = \mathrm{Tr}\big(\rho_n(\mathbb{I}\otimes f)\big) = \sum_{x\in\mathbb{Z}^d}\mathrm{Tr}(\rho_n(x,x))\,f(x), \qquad (3.9)$$
if it exists. Again, we can express $\langle F\rangle_{\rho_0}(n)$ as the expectation of a random variable on the lattice $\mathbb{Z}^d$:
$$\langle F\rangle_{\rho_0}(n) = \mathbb{E}_{W(n)}(f(X_n)), \qquad\text{with}\quad \mathrm{Prob}(X_n = k) = W_k(n) = \mathrm{Tr}(\rho_n(k,k)). \qquad (3.10)$$

In case the evolution is random, the distribution $W^\omega(n)$ is random and the density matrix $\rho^\omega_n$ is random as well. We thus consider the expectation with respect to the randomness, denoted $\mathbb{E}$, of the quantum mechanical expectation of the lattice observable, i.e.
$$\mathbb{E}\big(\langle F\rangle^\omega_{\rho_0}(n)\big) = \mathbb{E}\,\mathbb{E}_{W^\omega(n)}(f(X_n)) \equiv \mathbb{E}_{w(n)}(f(X_n)), \qquad (3.11)$$
where the distribution $w(n)$ on $\mathbb{Z}^d$ is given by
$$\mathrm{Prob}(X_n = k) = w_k(n) = \mathbb{E}(W^\omega_k(n)) = \mathbb{E}\,\mathrm{Tr}(\rho^\omega_n(k,k)). \qquad (3.12)$$

The corresponding characteristic function is defined by
$$\Phi^{\rho_0}_n(y) = \mathbb{E}_{w(n)}(e^{iyX_n}) = \sum_{x\in\mathbb{Z}^d}e^{iyx}\,\mathrm{Tr}\big(\mathbb{E}(\rho^\omega_n(x,x))\big). \qquad (3.13)$$

The following assumption gives the required regularity properties of the lattice observable $F = \mathbb{I}\otimes f$ and of the initial density matrix $\rho_0$ to justify the manipulations that follow.

Assumption R:

a) The lattice observable is such that, for any $\mu < \infty$, there exists $C_\mu < \infty$ such that
$$|f(x+y)|\leq C_\mu|f(x)|, \qquad \forall\,(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d \text{ with } \|y\|\leq\mu. \qquad (3.14)$$
b) The kernel $\rho_0(x,y)$ is such that
$$\sum_{(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d}\|\rho_0(x,y)\| < \infty, \qquad (3.15)$$
$$\sum_{x\in\mathbb{Z}^d}|f(x)|\,\|\rho_0(x,x)\| < \infty. \qquad (3.16)$$

Lemma 2.11 of [21] applies here as well, with the same proof, to ensure that for any $n\in\mathbb{N}$, the kernel $\rho_n(x,y)$ satisfies Assumption R if the kernel $\rho_0$ does. For more discussion of the properties of density matrices we refer to [21].

3.1 Feynman-Kac-Pillet formula

We denote by $l^2(\Omega; M_{2d}(\mathbb{C}))$ the finite dimensional Hilbert space of $M_{2d}(\mathbb{C})$-valued functions defined on $\Omega$, with scalar product
$$\langle\varphi,\psi\rangle = \sum_{\eta\in\Omega}p(\eta)\,\mathrm{Tr}\big(\varphi^*(\eta)\psi(\eta)\big), \qquad (3.17)$$
where the measure $p$ on $\Omega$ is the stationary initial distribution. We denote by $|\tau\rangle\langle\tau'|\in l^2(\Omega; M_{2d}(\mathbb{C}))$ the constant map which assigns $|\tau\rangle\langle\tau'|$ to any $\eta\in\Omega$, and stress that the $\tau,\tau'$ element of a matrix $\rho\in M_{2d}(\mathbb{C})$, $\tau,\tau'\in I_\pm$, can be expressed as
$$(\rho)_{\tau,\tau'} = \mathrm{Tr}\big(|\tau'\rangle\langle\tau|\,\rho\big) = \mathrm{Tr}\big((|\tau\rangle\langle\tau'|)^*\rho\big). \qquad (3.18)$$

Consider now the extended Hilbert space $l^2(\mathbb{Z}^d\times\mathbb{Z}^d;\,l^2(\Omega; M_{2d}(\mathbb{C})))\simeq l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$. Any $\rho\in l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$ can be expressed as
$$\rho = \big(\rho(x,y;\eta)\big)_{(x,y;\eta)\in\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega}, \quad\text{where } \rho(x,y;\eta)\in M_{2d}(\mathbb{C}) \qquad (3.19)$$
satisfies
$$\sum_{\eta\in\Omega}\sum_{(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d}p(\eta)\,\mathrm{Tr}\big(\rho(x,y;\eta)^*\rho(x,y;\eta)\big) < \infty. \qquad (3.20)$$

The following is a version of the Feynman-Kac-Pillet formula in the current setting. Let $\rho_0\in l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))$ denote the initial density matrix; its evolution at time $n$ under the random evolution operator $U(n,0)$ defined by (2.5) and (2.6) is given by
$$\rho_n = U(n,0)\,\rho_0\,U^*(n,0). \qquad (3.21)$$
Since $l^2(\mathbb{Z}^d\times\mathbb{Z}^d; M_{2d}(\mathbb{C}))\hookrightarrow l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$, we can consider $\rho_0$ as an element of $l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$, keeping the same notation. With the notation $\delta_x = |x\rangle$ we have

Proposition 3.2 Let $\mathcal{K} = l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$ and assume C holds. Then, if $\rho_0\in\mathcal{K}$, we have for any $n\in\mathbb{N}$ and any $\tau,\tau'\in I_\pm$,
$$\mathbb{E}\big(\rho_n(x,y)\big)_{\tau,\tau'} = \big\langle\delta_x\otimes\delta_y\otimes|\tau\rangle\langle\tau'|\,,\,M^n\rho_0\big\rangle_{\mathcal{K}}, \qquad (3.22)$$


where the single step operator $M:\mathcal{K}\to\mathcal{K}$ is given by
$$(M\rho)(x,y;\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}Q(\eta,\zeta)\,P_\tau\,(\sigma_{x-r(\tau)}\eta)\,\rho\big(x-r(\tau),\,y-r(\tau'),\,\zeta\big)\,(\sigma_{y-r(\tau')}\eta)^*\,P_{\tau'}, \qquad (3.23)$$
where $\rho\in l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))$ and $Q(\eta,\zeta) = \mathrm{Prob}(\omega(0) = \eta\,|\,\omega(1) = \zeta)$.

Remarks 3.3 i) Using that the initial distribution is stationary, it is easy to see that
$$Q(\zeta,\eta) = \frac{p(\eta)}{p(\zeta)}\,\mathbb{P}(\eta,\zeta). \qquad (3.24)$$
ii) In view of (3.12), the averaged distribution $w(n)$ reads
$$w_x(n) = \sum_{\tau\in I_\pm}\mathbb{E}\big(\rho_n(x,x)\big)_{\tau,\tau} = \big\langle\Psi_x,\,M^n\rho_0\big\rangle, \quad\text{where } \Psi_x = \delta_x\otimes\delta_x\otimes\mathrm{Id}. \qquad (3.25)$$

iii) The adjoint $M^*$ of $M$ acts as follows:
$$(M^*\rho)(x,y;\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}\mathbb{P}(\eta,\zeta)\,(\sigma_x\zeta)^*P_\tau\,\rho\big(x+r(\tau),\,y+r(\tau'),\,\zeta\big)\,P_{\tau'}(\sigma_y\zeta). \qquad (3.26)$$
iv) If $\{\rho(x,y;\eta)\}_{x,y\in\mathbb{Z}^d}$ is self-adjoint, the same is true for $\{(M\rho)(x,y;\eta)\}_{x,y\in\mathbb{Z}^d}$. Such initial conditions $\rho$ yield real valued quantities $w_x(n) = \langle\Psi_x,\,M^n\rho_0\rangle$.

Proof: First note that
$$\big\langle\delta_x\otimes\delta_y\otimes|\tau\rangle\langle\tau'|\,,\,M^n\rho_0\big\rangle_{\mathcal{K}} = \sum_{\zeta\in\Omega}p(\zeta)\,\big((M^n\rho_0)(x,y;\zeta)\big)_{\tau,\tau'}. \qquad (3.27)$$
Let $t_i = \sum_{s=i}^n r(\tau_s)$ and $t'_i = \sum_{s=i}^n r(\tau'_s)$. Using the definition of $M$, we see that
$$(M^n\rho_0)(x,y;\zeta) = \sum_{\substack{\tau_1,\dots,\tau_n\in I_\pm\\ \tau'_1,\dots,\tau'_n\in I_\pm\\ \eta_1,\eta_2,\dots,\eta_n\in\Omega}}Q(\zeta,\eta_n)Q(\eta_n,\eta_{n-1})\cdots Q(\eta_2,\eta_1)\,\times$$
$$P_{\tau_n}(\sigma_{x-t_n}\zeta)P_{\tau_{n-1}}(\sigma_{x-t_{n-1}}\eta_n)\cdots P_{\tau_1}(\sigma_{x-t_1}\eta_2)\,\rho_0(x-t_1,\,y-t'_1)\,(\sigma_{y-t'_1}\eta_2)^*P_{\tau'_1}\cdots(\sigma_{y-t'_{n-1}}\eta_n)^*P_{\tau'_{n-1}}(\sigma_{y-t'_n}\zeta)^*P_{\tau'_n}.$$

Since the initial distribution $p$ is stationary, a straightforward computation shows that
$$p(\zeta)Q(\zeta,\eta_n)Q(\eta_n,\eta_{n-1})\cdots Q(\eta_2,\eta_1) = p(\eta_1)\mathbb{P}(\eta_1,\eta_2)\cdots\mathbb{P}(\eta_{n-1},\eta_n)\mathbb{P}(\eta_n,\zeta). \qquad (3.28)$$

Therefore,
$$\big\langle\delta_x\otimes\delta_y\otimes|\tau\rangle\langle\tau'|\,,\,M^n\rho_0\big\rangle_{\mathcal{K}} = \sum_{\substack{\tau_1,\dots,\tau_n\in I_\pm\\ \tau'_1,\dots,\tau'_n\in I_\pm\\ \eta_1,\dots,\eta_n,\zeta\in\Omega}}p(\eta_1)\mathbb{P}(\eta_1,\eta_2)\cdots\mathbb{P}(\eta_{n-1},\eta_n)\mathbb{P}(\eta_n,\zeta)\,\times$$
$$\langle\tau|P_{\tau_n}(\sigma_{x-t_n}\zeta)P_{\tau_{n-1}}(\sigma_{x-t_{n-1}}\eta_n)\cdots P_{\tau_1}(\sigma_{x-t_1}\eta_2)\,\rho_0(x-t_1,\,y-t'_1)\,(\sigma_{y-t'_1}\eta_2)^*P_{\tau'_1}\cdots(\sigma_{y-t'_{n-1}}\eta_n)^*P_{\tau'_{n-1}}(\sigma_{y-t'_n}\zeta)^*P_{\tau'_n}\,\tau'\rangle. \qquad (3.29)$$


On the other hand,
$$\mathbb{E}\big(\rho_n(x,y)\big) = \sum_{\substack{\tau_1,\dots,\tau_n\in I_\pm\\ \tau'_1,\dots,\tau'_n\in I_\pm}}\sum_{\substack{\eta_1,\dots,\eta_n\in\Omega\\ \eta'_1,\dots,\eta'_n\in\Omega}}\mathrm{Prob}\big(\sigma_{x-t_i}\omega(i) = \eta_i,\ \sigma_{y-t'_i}\omega(i) = \eta'_i\ \text{for all } i\in\{1,\dots,n\}\big)$$
$$\times\,P_{\tau_n}\eta_nP_{\tau_{n-1}}\eta_{n-1}\cdots P_{\tau_1}\eta_1\,\rho_0(x-t_1,\,y-t'_1)\,\eta'^*_1P_{\tau'_1}\cdots P_{\tau'_{n-1}}\eta'^*_nP_{\tau'_n}. \qquad (3.30)$$

However, it is easy to see that
$$\mathrm{Prob}\big(\sigma_{x-t_i}\omega(i) = \eta_i,\ \sigma_{y-t'_i}\omega(i) = \eta'_i\ \text{for all } i\in\{1,\dots,n\}\big)$$
$$= \begin{cases}\mathrm{Prob}\big(\omega(i) = \sigma^{-1}_{x-t_i}\eta_i\ \text{for all } i\in\{1,\dots,n\}\big) & \text{if } \sigma^{-1}_{x-t_i}\eta_i = \sigma^{-1}_{y-t'_i}\eta'_i,\\[1mm] 0 & \text{otherwise.}\end{cases} \qquad (3.31)$$

Now, letting $\alpha_i = \sigma^{-1}_{x-t_i}\eta_i$ and using that $\omega$ is a Markov chain on $\Omega$, we get
$$\mathbb{E}\big(\rho_n(x,y)\big) = \sum_{\substack{\tau_1,\dots,\tau_n\in I_\pm\\ \tau'_1,\dots,\tau'_n\in I_\pm\\ \alpha_0,\alpha_1,\dots,\alpha_n\in\Omega}}p(\alpha_0)\mathbb{P}(\alpha_0,\alpha_1)\mathbb{P}(\alpha_1,\alpha_2)\cdots\mathbb{P}(\alpha_{n-1},\alpha_n)\,\times \qquad (3.32)$$
$$P_{\tau_n}(\sigma_{x-t_n}\alpha_n)P_{\tau_{n-1}}(\sigma_{x-t_{n-1}}\alpha_{n-1})\cdots P_{\tau_1}(\sigma_{x-t_1}\alpha_1)\,\rho_0(x-t_1,\,y-t'_1)\,(\sigma_{y-t'_1}\alpha_1)^*P_{\tau'_1}\cdots P_{\tau'_{n-1}}(\sigma_{y-t'_n}\alpha_n)^*P_{\tau'_n}.$$
Comparing (3.29) and (3.32) completes the proof.

3.2 Spectral Analysis of M

By the Feynman-Kac-Pillet formula, the study of the time evolution of our systems relies on the spectral analysis of the "single-step" operator $M$ defined in (3.23). In order to do so, we first take a closer look at the underlying symmetries of the systems. The operator $M$ commutes with a group $G$ of unitary operators generated by translations:

1. Simultaneous translation of position and disorder by an arbitrary element $\xi$ of $\mathbb{Z}^d$:
$$S_\xi\,\rho(x,y;\omega) = \rho(x-\xi,\,y-\xi;\sigma_\xi\omega).$$
2. For $\eta\in\Gamma\subset\mathbb{Z}^d$ such that $\sigma_\eta = \mathrm{Id}$, $M$ commutes with translation of the first position coordinate by $\eta$:
$$S^{(1)}_\eta\,\rho(x,y;\omega) = \rho(x-\eta,\,y;\omega).$$
Note that $S_\xi S^{(1)}_\eta = S^{(1)}_\eta S_\xi$, so the group of symmetries $G$ is isomorphic to $\mathbb{Z}^d\times\mathbb{Z}^d$. We have chosen to use translation of the first position in the definition of $S^{(1)}$; however, since $\sigma_\eta = \mathrm{Id}$, we have $S_\eta S^{(1)}_{-\eta}\rho(x,y;\omega) = \rho(x,\,y-\eta;\omega)$.

Remark 3.4 For any $\eta\in\Gamma = \{\xi\in\mathbb{Z}^d : \sigma_\xi = \mathrm{Id}\}$ and any $x\in\mathbb{Z}^d$, we have $\sigma_{x+\eta} = \sigma_x$. Moreover, for any $x\in\mathbb{Z}^d$, there exists a unique $x_0\in B_\Gamma$ such that $\sigma_x = \sigma_{x_0}$.


In order to take these symmetries into account in the spectral analysis of $M$, we define a generalized Fourier transform similar to [18]. We shall use the following notation:
$$L^2(X; M_{2d}(\mathbb{C})) = \Big\{f: X\to M_{2d}(\mathbb{C})\ :\ \|f\|^2 = \int_X dm(x)\,\mathrm{Tr}\big(f^*(x)f(x)\big) < \infty\Big\},$$
where $m$ is a locally finite positive measure on $X$. Also, we introduce $\Gamma^* = \{p^*\in\mathbb{R}^d\ |\ p^*\gamma\in\mathbb{Z},\ \forall\gamma\in\Gamma\}$. If $\{\gamma_j\}_{j=1}^d$ is a basis of $\Gamma$, let $\{p^*_j\}_{j=1}^d$ be the basis of $\Gamma^*$ defined by $p^*_j\gamma_i = \delta_{i,j}$. We have $\mathbb{Z}^d\subset\Gamma^*$. We set $\mathbb{T}^d_\Gamma = \{p = \sum_{j=1}^d p_jp^*_j\ |\ p_j\in[0,2\pi),\ j = 1,\dots,d\}$. With the linear map $P:\mathbb{R}^d\to\mathbb{R}^d$ defined by its action on the vectors of the canonical basis as $Pe_j = p^*_j$, $j = 1,\dots,d$, we have $\mathbb{T}^d_\Gamma = P\mathbb{T}^d$, where $\mathbb{T}^d$ denotes the $d$-dimensional torus. In particular, for any $f$ defined on $\mathbb{T}^d_\Gamma$,
$$\int_{\mathbb{T}^d_\Gamma}f(p)\,dp = \int_{\mathbb{T}^d}f(Pt)\,|P|\,dt, \qquad\text{and}\qquad \mathrm{Vol}(\mathbb{T}^d_\Gamma) = (2\pi)^d|P|, \qquad (3.33)$$
where $|\cdot|$ denotes the Jacobian determinant here. We denote by $dp/((2\pi)^d|P|)$ the normalized measure on $\mathbb{T}^d_\Gamma$.

We are ready to introduce the map $\mathcal{F}: l^2(\mathbb{Z}^d\times\mathbb{Z}^d\times\Omega; M_{2d}(\mathbb{C}))\to L^2(B_\Gamma\times\mathbb{T}^d\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$, defined by
$$(\mathcal{F}\Psi)(x,k,p;\zeta) := \widehat\Psi(x,k,p;\zeta) = \sum_{\xi\in\mathbb{Z}^d}\sum_{\eta\in\Gamma}e^{ip\cdot(x-\eta)-ik\cdot\xi}\,\Psi(x-\xi-\eta,\,-\xi,\,\sigma_\xi\zeta). \qquad (3.34)$$

Since we can add to $x$ any vector of $\Gamma$ without changing the right hand side, $\widehat\Psi$ actually depends on $x_0\in B_\Gamma$ defined according to Remark 3.4. One checks that this generalized Fourier transform is a unitary operator with inverse
$$(\mathcal{F}^{-1}\chi)(x,y;\zeta) = \int_{\mathbb{T}^d\times\mathbb{T}^d_\Gamma}e^{-iyk}\,e^{-ip(x-y)}\,\chi(x-y,\,k,\,p;\sigma_y\zeta)\,dk\,dp, \qquad (3.35)$$
where $dk\,dp$ is the normalized measure on $\mathbb{T}^d\times\mathbb{T}^d_\Gamma$.

Remarks 3.5 i) If $(\mathcal{F}\Psi)(x,k,p;\zeta) = \widehat\Psi(x_0,k,p;\zeta)$, it satisfies, for any $p^*\in\Gamma^*$, any $\eta\in\Gamma$ and any $k^*\in\mathbb{Z}^d$,
$$\widehat\Psi(x_0+\eta,\,k+2\pi k^*,\,p+2\pi p^*;\zeta) = e^{i2\pi p^*x_0}\,\widehat\Psi(x_0,k,p;\zeta). \qquad (3.36)$$
ii) The operator $\{\psi(x,y,\zeta)\}_{x,y\in\mathbb{Z}^d}$ is self-adjoint, i.e. $\psi(x,y,\zeta)^* = \psi(y,x,\zeta)$, if and only if
$$\widehat\Psi(x_0,k,p;\zeta) = \widehat\Psi\big((-x)_0,\,-k,\,p-k;\sigma_{x_0}\zeta\big)^*.$$

Because of the symmetries of $M$, its expression $\mathcal{F}M\mathcal{F}^{-1}$ in Fourier space admits a fiber decomposition of the form
$$\mathcal{F}M\mathcal{F}^{-1} = \int^{\oplus}_{\mathbb{T}^d\times\mathbb{T}^d_\Gamma}M(k,p)\,dk\,dp, \qquad (3.37)$$


where $M(k,p)$ is an operator on $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$ which becomes a multiplication operator in the variables $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$, which we compute. The following expression holds for the $(k,p)$ dependent "single-step" operator $M(k,p)$ on $L^2(B_\Gamma\times\mathbb{T}^d\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$:
$$(M(k,p)\Psi)(x,k,p;\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}Q(\eta,\zeta)\,e^{ikr(\tau')}\,e^{ip(r(\tau)-r(\tau'))}\,\times \qquad (3.38)$$
$$P_\tau\,(\sigma_{x-r(\tau)}\eta)\,\Psi\big(x-r(\tau)+r(\tau'),\,k,\,p,\,\sigma_{-r(\tau')}\zeta\big)\,(\sigma_{-r(\tau')}\eta)^*\,P_{\tau'}.$$
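To make (3.38) concrete, here is a minimal numerical sketch under the simplifying, purely illustrative assumption of a trivial representation $\sigma_x\equiv\mathrm{Id}$ (so $B_\Gamma = \{0\}$), with $d = 1$ and an assumed two-element $\Omega$ with symmetric $\mathbb{P}$ (so that $Q = \mathbb{P}$ by (3.24)). It builds $M(k,p)$ as a matrix on $l^2(\Omega; M_2(\mathbb{C}))$ and checks the norm bound and the eigenvector property of the constant identity map stated in Proposition 3.7 below.

```python
import numpy as np

# d = 1: coin space C^2, I_pm = {+1, -1}, jump r(+1) = +1, r(-1) = -1
r = np.array([+1, -1])
P_tau = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # projectors P_{+1}, P_{-1}

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # assumed Omega = {H, X}
Xm = np.array([[0.0, 1.0], [1.0, 0.0]])
Omega = [H, Xm]

P = np.array([[0.7, 0.3], [0.3, 0.7]])               # symmetric => p uniform and Q = P
Q = P

def M_kp(k, p):
    """Matrix of M(k,p) on l^2(Omega; M_2(C)) ~ C^8, trivial representation sigma_x = Id."""
    dim = len(Omega) * 4
    M = np.zeros((dim, dim), dtype=complex)
    for j in range(dim):                              # act on basis vectors, column by column
        Psi = np.zeros(dim, dtype=complex); Psi[j] = 1.0
        Psi = Psi.reshape(len(Omega), 2, 2)
        out = np.zeros_like(Psi)
        for e, eta in enumerate(Omega):
            for z, _ in enumerate(Omega):
                for t in range(2):
                    for tp in range(2):
                        phase = np.exp(1j * k * r[tp]) * np.exp(1j * p * (r[t] - r[tp]))
                        out[e] += Q[e, z] * phase * (
                            P_tau[t] @ eta @ Psi[z] @ eta.conj().T @ P_tau[tp])
        M[:, j] = out.reshape(dim)
    return M

k, p = 0.4, 1.1
print("||M(k,p)|| <= 1 :", np.linalg.norm(M_kp(k, p), 2) <= 1 + 1e-12)

Psi1 = np.stack([np.eye(2), np.eye(2)]).reshape(-1)   # constant identity map, cf. Psi_1
print("M(0,p) Psi1 = Psi1 :", np.allclose(M_kp(0.0, p) @ Psi1, Psi1))
```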

Remark 3.6 The action of the adjoint of $M(k,p)$, denoted by $M(k,p)^*$, reads
$$(M(k,p)^*\Psi)(x,k,p;\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}\mathbb{P}(\eta,\zeta)\,e^{-ikr(\tau')}\,e^{-ip(r(\tau)-r(\tau'))}\,\times \qquad (3.39)$$
$$(\sigma_x\zeta)^*P_\tau\,\Psi\big(x+r(\tau)-r(\tau'),\,k,\,p,\,\sigma_{r(\tau')}\zeta\big)\,P_{\tau'}\,\zeta.$$

Let us now consider the operator $M(k,p)$, for $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$ fixed, as an operator on $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$. As $B_\Gamma$ and $\Omega$ are finite, $M(k,p)$ can be represented by a square matrix of dimension $4d^2|B_\Gamma||\Omega|$ depending parametrically on $(k,p)$. Moreover, the map $(k,p)\mapsto M(k,p)$ is analytic on $\mathbb{C}^d\times\mathbb{C}^d$. We denote the norm on $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$ by $\|\cdot\|_{l^2}$.

Proposition 3.7 Let Spr denote the spectral radius. Then, for all $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$,
$$\mathrm{Spr}\,(M(k,p))\leq\|M(k,p)\|_{l^2}\leq 1. \qquad (3.40)$$
On the other hand, for $k = 0$ and all $p\in\mathbb{T}^d_\Gamma$,
$$\mathrm{Spr}\,(M(0,p)) = \|M(0,p)\|_{l^2} = 1. \qquad (3.41)$$

Remark 3.8 It follows that $\mathrm{Spr}\,(M) = \|M\| = 1$, where $M$ is viewed as an operator on $L^2(B_\Gamma\times\mathbb{T}^d\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$.

Proof: First note that $M(k,p)$ can be written as $M(k,p) = \Sigma SQ$, where
$$(Q\Psi)(x,k,p;\eta) = \sum_{\zeta\in\Omega}Q(\eta,\zeta)\,\Psi(x,k,p;\zeta),$$
$$(S\Psi)(x,k,p;\eta) = (\sigma_x\eta)\,\Psi(x,k,p;\eta)\,(\sigma_0\eta)^*, \qquad (3.42)$$
$$(\Sigma\Psi)(x,k,p;\zeta) = \sum_{\tau,\tau'\in I_\pm}e^{ikr(\tau')}\,e^{ip(r(\tau)-r(\tau'))}\,P_\tau\,\Psi\big(x-r(\tau)+r(\tau'),\,k,\,p;\sigma_{-r(\tau')}\zeta\big)\,P_{\tau'}.$$

We fix $(k,p)$ and consider these operators on $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$. Now $Q = \mathrm{Id}\otimes\mathcal{Q}$, where $\mathcal{Q}: l^2(\Omega;\mathbb{C})\to l^2(\Omega;\mathbb{C})$ is given by $\mathcal{Q}f(\eta) = \sum_{\zeta\in\Omega}Q(\eta,\zeta)f(\zeta)$ and Id denotes the identity on $l^2(B_\Gamma; M_{2d}(\mathbb{C}))$. An easy calculation using Jensen's inequality shows that for all $f\in l^2(\Omega;\mathbb{C})$,
$$\|\mathcal{Q}f\|^2 = \sum_{\eta\in\Omega}p(\eta)\Big|\sum_{\zeta\in\Omega}Q(\eta,\zeta)f(\zeta)\Big|^2 \leq \sum_{\eta\in\Omega}p(\eta)\sum_{\zeta\in\Omega}Q(\eta,\zeta)|f(\zeta)|^2 \qquad (3.43)$$
$$= \sum_{\zeta\in\Omega}p(\zeta)|f(\zeta)|^2 = \|f\|^2. \qquad (3.44)$$


Therefore, we have $\|Q\|_{l^2}\leq 1$. On the other hand, for all $\Psi\in l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$,
$$\|S\Psi\|^2_{l^2} = \sum_{\substack{x\in B_\Gamma\\ \zeta\in\Omega}}p(\zeta)\,\mathrm{Tr}\left[\zeta\,\Psi^*(x;\zeta)\,(\sigma_x\zeta)^*(\sigma_x\zeta)\,\Psi(x;\zeta)\,\zeta^*\right] = \sum_{\substack{x\in B_\Gamma\\ \zeta\in\Omega}}p(\zeta)\,\mathrm{Tr}\left[\Psi^*(x;\zeta)\Psi(x;\zeta)\right] = \|\Psi\|^2_{l^2}, \qquad (3.45)$$

where we used the cyclicity of the trace and the fact that the elements of $\Omega$ are unitary matrices. Finally, to see that for any $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$, $\|\Sigma\| = 1$, we notice that
$$\mathrm{Tr}\left[(\Sigma\Psi)^*(x;\zeta)(\Sigma\Psi)(x;\zeta)\right] = \sum_{\tau,\alpha\in I_\pm}\langle\alpha|\Psi^*(x-r(\tau)+r(\alpha);\sigma_{-r(\alpha)}\zeta)\,\tau\rangle\langle\tau|\Psi(x-r(\tau)+r(\alpha);\sigma_{-r(\alpha)}\zeta)\,\alpha\rangle. \qquad (3.46)$$

Now, for fixed $\alpha$, $\tau$, let $y = x-r(\tau)+r(\alpha)$ and $\eta = \sigma_{-r(\alpha)}\zeta$. Using that the $\sigma_x$ are measure preserving transformations on $\Omega$, we have
$$\|\Sigma\Psi\|^2_{l^2} = \sum_{\tau,\alpha\in I_\pm}\sum_{\substack{y\in B_\Gamma\\ \eta\in\Omega}}p(\eta)\,\langle\alpha|\Psi^*(y;\eta)\,\tau\rangle\langle\tau|\Psi(y;\eta)\,\alpha\rangle \qquad (3.47)$$
$$= \sum_{\substack{y\in B_\Gamma\\ \eta\in\Omega}}p(\eta)\,\mathrm{Tr}\left[\Psi^*(y;\eta)\Psi(y;\eta)\right] = \|\Psi\|^2. \qquad (3.48)$$
Putting the estimates on the norms of $Q$, $S$ and $\Sigma$ together, we get the required bound on the norm of $M(k,p)$ for all $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$.

Now consider $\Psi_1(x;\zeta) = \delta_0\otimes\mathrm{Id}$, where $\mathrm{Id}\in l^2(\Omega; M_{2d}(\mathbb{C}))$ takes the constant value $\mathbb{I}$. We compute
$$(M(k,p)\Psi_1)(x;\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}e^{ikr(\tau')}\,e^{ip(r(\tau)-r(\tau'))}\,Q(\eta,\zeta)\,P_\tau(\sigma_{x-r(\tau)}\eta)(\sigma_{-r(\tau')}\eta)^*P_{\tau'}\,\delta_0\big(x-r(\tau)+r(\tau')\big)$$
$$= \sum_{\tau,\tau'\in I_\pm}e^{ikr(\tau')}\,e^{ip(r(\tau)-r(\tau'))}\,P_\tau\,\delta_{\tau,\tau'}\,\delta_0\big(x-r(\tau)+r(\tau')\big) = \sum_{\tau\in I_\pm}e^{ikr(\tau)}\,P_\tau\,\delta_0(x), \qquad (3.49)$$
where we used $\sum_{\zeta\in\Omega}Q(\eta,\zeta) = 1$. From this it is clear that $\Psi_1$ is an eigenvector of $M(0,p)$ with eigenvalue 1 for all $p\in\mathbb{T}^d_\Gamma$. Therefore,
$$\mathrm{Spr}\,(M(0,p)) = \|M(0,p)\|_{l^2} = 1. \qquad (3.50)$$


Remark 3.9 A similar computation shows that
$$M(0,p)^*\Psi_1 = \Psi_1. \qquad (3.51)$$
If $\Psi_1$ is considered as a vector of $L^2(B_\Gamma\times\mathbb{T}^d\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$, it corresponds to
$$\Psi_1(x,y;\eta) = \mathcal{F}^{-1}\Psi_1(x,y;\eta) = \delta_0(x)\otimes\delta_0(y)\otimes\mathrm{Id}\simeq\mathbb{I}\otimes|0\rangle\langle 0|. \qquad (3.52)$$
Also, with the definition (3.25),
$$\mathcal{F}\Psi_x(x,k,p,\eta) = e^{ikx}\,\delta_0(x)\otimes\mathrm{Id} = e^{ikx}\,\Psi_1(x). \qquad (3.53)$$

At this point we note that the characteristic function $\Phi^{\rho_0}_n(y)$ of the distribution $w(n)$ satisfies, see (3.25) and (3.53),
$$\Phi^{\rho_0}_n(y) = \sum_{x\in\mathbb{Z}^d}e^{iyx}\,\big\langle\Psi_x,\,M^n\rho_0\big\rangle = \sum_{x\in\mathbb{Z}^d}e^{iyx}\,\big\langle\mathcal{F}\Psi_x,\,M(\cdot,\cdot)^n\widehat\rho_0\big\rangle \qquad (3.54)$$
$$= \sum_{x\in\mathbb{Z}^d}e^{iyx}\int_{\mathbb{T}^d}e^{-ikx}\,\big\langle\Psi_1,\,M(k,\cdot)^n\widehat\rho_0(k)\big\rangle_{L^2(B_\Gamma\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))}\,dk.$$

In other words, slightly abusing notation,
$$\Phi^{\rho_0}_n(y) = \big\langle\Psi_1,\,M(y,\cdot)^n\widehat\rho_0(y)\big\rangle_{L^2(B_\Gamma\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))} = \big\langle(M(y,\cdot)^*)^n\,\delta_0\otimes\mathrm{Id},\,\widehat\rho_0(y)\big\rangle_{L^2(B_\Gamma\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))}$$
$$= \int_{\mathbb{T}^d_\Gamma}\big\langle\delta_0\otimes\mathrm{Id},\,M(y,p)^n\widehat\rho_0(y,p)\big\rangle_{l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))}\,dp = \sum_{\eta\in\Omega}p(\eta)\int_{\mathbb{T}^d_\Gamma}\mathrm{Tr}\big(M(y,p)^n\widehat\rho_0\big)(0,y,p,\eta)\,dp, \qquad (3.55)$$
where
$$\widehat\rho_0(x,k,p,\zeta) = \sum_{\xi\in\mathbb{Z}^d}\sum_{\eta\in\Gamma}e^{ip(x-\eta)-ik\xi}\,\rho_0(x-\xi-\eta,\,-\xi)\equiv\widehat\rho_0(x,k,p) \qquad (3.56)$$
is independent of $\zeta$.

Remark 3.10 If
$$\rho_0 = |\varphi_0\rangle\langle\varphi_0|\otimes|0\rangle\langle 0|\simeq\delta_0\otimes\delta_0\otimes|\varphi_0\rangle\langle\varphi_0|, \quad\text{where } \varphi_0\in\mathbb{C}^{2d}\text{ is normalized}, \qquad (3.57)$$
then
$$\widehat\rho_0(x,k,p,\eta) = \delta_0(x)\otimes|\varphi_0\rangle\langle\varphi_0| := R_0(x) \quad\text{is independent of } (k,p,\eta), \qquad (3.58)$$
and
$$\Phi^{\varphi_0}_n(y) = \sum_{\eta\in\Omega}p(\eta)\int_{\mathbb{T}^d_\Gamma}\mathrm{Tr}\big(M(y,p)^n|\varphi_0\rangle\langle\varphi_0|\big)(0,y,p,\eta)\,dp. \qquad (3.59)$$


Hence, in the diffusive scaling, we need to control the large $n$ behavior of the vectors $M^*(y/\sqrt{n},p)^n\,\delta_0\otimes\mathrm{Id}$ and $\widehat\rho_0(y/\sqrt{n})$ in $L^2(B_\Gamma\times\mathbb{T}^d_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$. This can be done, following the arguments of [21], under some spectral hypothesis. We shall discuss the validity of this hypothesis for specific cases later on, and proceed by showing that it is sufficient to prove the diffusive character of the (averaged) dynamics, arguing as in [21]. We shall refrain from spelling out all details, referring the reader to the above mentioned paper.

We work under the following spectral hypothesis on the matrix $M(0,p)$ on $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$. Let $D(z,r)$ be the open disc of radius $r > 0$ centered at $z\in\mathbb{C}$.

Assumption S: For all $p\in\mathbb{T}^d_\Gamma$, we have
$$\sigma(M(0,p))\cap\partial D(0,1) = \{1\}, \quad\text{and this eigenvalue is simple.} \qquad (3.60)$$

Remark 3.11 Actually, because of (3.55), it is enough to assume that $M(0,p)|_{\mathcal{I}}$ satisfies Assumption S, where $\mathcal{I}$ is the $M(k,p)^*$-cyclic subspace generated by $\delta_0\otimes\mathrm{Id}$, for all $(k,p)\in\mathbb{T}^d\times\mathbb{T}^d_\Gamma$.

By analytic perturbation theory, there exist $\delta > 0$, $\nu(\delta) > 0$ and $\kappa(\delta) > 0$ such that for all $(k,p)\in B^d_\kappa\times T^d_\nu$, where $B^d_\kappa = \{y\in\mathbb{C}^d\ |\ \|y\| < \kappa\}$ and $T^d_\nu = \{y = y_1+iy_2\ |\ y_1\in\mathbb{T}^d_\Gamma,\ y_2\in\mathbb{R}^d\text{ with }\|y_2\| < \nu\}$, the following holds:
$$\sigma(M(k,p))\cap D(1,\delta) = \{\lambda_1(k,p)\}, \qquad \sigma(M(k,p))\setminus\{\lambda_1(k,p)\}\subset D(0,1-\delta), \qquad (3.61)$$
and the eigenvalue $\lambda_1(k,p)$ is simple. For such values of the parameters $(k,p)$ we have the corresponding spectral decomposition
$$M(k,p) = \lambda_1(k,p)P_1(k,p) + M_{\bar P_1}(k,p), \qquad (3.62)$$
where $M_{\bar P_1}(k,p) = \bar P_1(k,p)M(k,p)\bar P_1(k,p)$ and $\bar P_1(k,p) = \mathbb{I} - P_1(k,p)$.

The simple eigenvalue $\lambda_1(k,p)$, the corresponding spectral projector $P_1(k,p)$ and the restriction $M_{\bar P_1}(k,p)$ are all analytic on $B^d_\kappa\times T^d_\nu$ and $\mathrm{Spr}\,(M_{\bar P_1}(k,p)) < 1-\delta$. Moreover, for any $p\in\mathbb{T}^d_\Gamma$,
$$\lim_{k\to 0}\lambda_1(k,p) = 1, \quad\text{and}\quad \lim_{k\to 0}P_1(k,p) = |\Psi_0\rangle\langle\Psi_0|\equiv\Pi, \qquad (3.63)$$
where $\Psi_0 = \Psi_1/\|\Psi_1\| = \frac{1}{\sqrt{2d}}\,\delta_0\otimes\mathrm{Id}$, see (3.51).

Taking into account the fact that $w_x(n)$ is real valued for any self-adjoint and trace class $\rho_0$, we have
$$\Phi^{\rho_0}_n(k) = \overline{\Phi^{\rho_0}_n(-k)}, \quad\text{for all } k\in\mathbb{T}^d. \qquad (3.64)$$

This yields a symmetry of $\lambda_1$:

Lemma 3.12 For all $k\in B^d_\kappa$ and all $p\in\mathbb{T}^d_\Gamma$, the following identity holds:
$$\lambda_1(k,p) = \overline{\lambda_1(-k,\,p-k)}. \qquad (3.65)$$


Proof: It follows from (3.55) that for any $k\in\mathbb{R}^d$ and any self-adjoint trace class $\rho_0$,
$$\Phi^{\rho_0}_n(k) = \int_{\mathbb{T}^d_\Gamma}\big\langle\delta_0\otimes\mathrm{Id},\,M(k,p)^n\widehat\rho_0(k,p)\big\rangle_{l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))}\,dp = \overline{\int_{\mathbb{T}^d_\Gamma}\big\langle\delta_0\otimes\mathrm{Id},\,M(-k,p)^n\widehat\rho_0(-k,p)\big\rangle_{l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))}\,dp}. \qquad (3.66)$$
The first step consists in showing the pointwise identity of the smooth scalar products in $l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$ for $\widehat\rho_0 = \delta_0(x)\otimes|\varphi_0\rangle\langle\varphi_0| = R_0(x)$:
$$\big\langle\delta_0\otimes\mathrm{Id},\,M(k,p)^nR_0\big\rangle - \overline{\big\langle\delta_0\otimes\mathrm{Id},\,M(-k,p-k)^nR_0\big\rangle} = 0. \qquad (3.67)$$

Identity (3.66) holds for any self-adjoint $\rho_0$, thus in particular for $\widehat\rho_0(x_0,k,p) = b(k,p)R_0(x)$, where $b$ belongs to the vector space of periodic functions satisfying
$$b:\mathbb{T}^d\times\mathbb{T}^d_\Gamma\to\mathbb{C}, \quad\text{such that}\quad b(k,p) = \overline{b(-k,\,p-k)}, \qquad (3.68)$$
see Remarks 3.5. Therefore, we get for any such $b$
$$0 = \int_{\mathbb{T}^d_\Gamma}\big\langle\delta_0\otimes\mathrm{Id},\,M(k,p)^nR_0\big\rangle\,b(k,p) - \overline{\big\langle\delta_0\otimes\mathrm{Id},\,M(-k,p)^nR_0\big\rangle}\,b(k,p+k)\,dp$$
$$= \int_{\mathbb{T}^d_\Gamma}\Big(\big\langle\delta_0\otimes\mathrm{Id},\,M(k,p)^nR_0\big\rangle - \overline{\big\langle\delta_0\otimes\mathrm{Id},\,M(-k,p-k)^nR_0\big\rangle}\Big)\,b(k,p)\,dp. \qquad (3.69)$$

An example of a smooth function $b$ satisfying our requirements is $b_1(k,p) = f_1(k)\,g_1(2p-k)$ with
$$g_1:\mathbb{R}^d\to\mathbb{R},\ 2\pi\Gamma^*\text{-periodic}, \quad\text{and}\quad f_1:\mathbb{R}^d\to\mathbb{R},\ 2\pi\mathbb{Z}^d\text{-periodic and even}. \qquad (3.70)$$
Note that $\mathbb{Z}^d\subset\Gamma^*$ ensures $\mathbb{Z}^d$-periodicity of $b_1$ in $k$, and that $b_1(k,\,p+2\pi\gamma^*/2) = b_1(k,p)$, for all $k$ and $\gamma^*\in\Gamma^*$. Another slightly more complicated choice is constructed with
$$g_2:\mathbb{R}^d\to\mathbb{R},\ 2\pi\Gamma^*\text{-anti-periodic, i.e. } g_2(x+2\pi\gamma^*_i) = -g_2(x),\ \forall i = 1,\dots,d, \qquad (3.71)$$
where $\{\gamma^*_j\}_{j=1}^d$ is the basis of $\Gamma^*$. Then $f_2:\mathbb{R}^d\to\mathbb{R}$ is defined as follows: for $j = 1,\dots,d$, write $e_j = \sum_{i=1}^d m_i(j)\gamma^*_i$, where $m_i(j)\in\mathbb{Z}$, and set
$$f_2(x+2\pi e_j) = (-1)^{\sum_{i=1}^d m_i(j)}f_2(x), \quad\forall x\in\mathbb{R}^d,\ \forall j = 1,\dots,d. \qquad (3.72)$$
That is, $f_2$ is $2\pi$-periodic or $2\pi$-anti-periodic in the direction $e_j$, depending on the components of the corresponding vector $\gamma^*_j$. Then, by construction, $b_2(k,p) = f_2(k)g_2(2p-k)$ satisfies our requirements and, moreover, $b_2(k,\,p+2\pi\gamma^*_j/2) = -b_2(k,p)$.

Now, assume (3.67) does not hold at some $p_0\in\mathbb{T}^d_\Gamma$. By a suitable choice of $g_1$ and $g_2$ as above, we can construct a smooth $b(k,p) = b_1(k,p)+b_2(k,p)$ that is non-zero in a small neighborhood of $p_0$ only, so that (3.69) fails, which yields a contradiction.


Then one exploits the spectral decomposition (3.62) and (3.63), with $\langle\delta_0\otimes\mathrm{Id},\,R_0\rangle = 1$, to deduce from the above that if $\|k\|$ is small enough,
$$\lambda_1(k,p) = \lim_{n\to\infty}\Big(\big\langle\delta_0\otimes\mathrm{Id},\,M(k,p)^nR_0\big\rangle\Big)^{1/n} = \lim_{n\to\infty}\Big(\overline{\big\langle\delta_0\otimes\mathrm{Id},\,M(-k,p-k)^nR_0\big\rangle}\Big)^{1/n} = \overline{\lambda_1(-k,\,p-k)}. \qquad (3.73)$$
The result extends to complex $k$ by analyticity of $\lambda_1(\cdot,p)$ in $B^d_\kappa$.

We now compute a second order expansion of $\lambda_1(k,p) = \mathrm{Tr}(P_1(k,p)M(k,p))$ around $k = 0$, using the decomposition (3.42),
$$M(k,p) = \Sigma(k)\,SQ, \qquad (3.74)$$
where only the unitary map $\Sigma$ depends on $k$ (and $p$), as stressed in the notation. We expand $\Sigma(k)$ as
$$\Sigma(k) = \Sigma(0) + \Sigma_1(k) + \Sigma_2(k) + O_p(\|k\|^3), \qquad (3.75)$$
where $\Sigma_j(k)$ is of order $j = 1,2$ in $k$ and the remainder is $O_p(\|k\|^3)$, uniformly in $p\in T^d_\nu$. Explicitly,
$$(\Sigma_1(k)+\Sigma_2(k))\Psi(x,k,p;\zeta) = \sum_{\tau,\tau'\in I_\pm}\big(ikr(\tau') - (kr(\tau'))^2/2\big)\,e^{ip(r(\tau)-r(\tau'))}\,P_\tau\,\Psi\big(x-r(\tau)+r(\tau'),\,k,\,p;\sigma_{-r(\tau')}\zeta\big)\,P_{\tau'}. \qquad (3.76)$$

Then, in terms of the unperturbed reduced resolvent $S_p(z)$, defined for any $p\in T^d_\nu$ and $z$ in a neighborhood of $1$ by
$$(M(0,p)-z)^{-1} = \frac{\Pi}{1-z} + S_p(z), \qquad (3.77)$$
we have, see [23], p. 79,
$$\lambda_1(k,p) = 1 + \mathrm{Tr}\big(\Sigma_1(k)SQ\,\Pi\big) + \mathrm{Tr}\big(\Sigma_2(k)SQ\,\Pi - \Sigma_1(k)SQ\,S_p(1)\,\Sigma_1(k)SQ\,\Pi\big) + O_p(\|k\|^3). \qquad (3.78)$$
Explicit computations making use of $SQ\Psi_0 = \Psi_0$, $SQ\,S_p(1) = \Sigma(0)^{-1}(\mathbb{I}-\Pi+S_p(1))$ and
$$(\Sigma(0)^{-1}\Phi)(x,\eta) = \sum_{(\tau,\tau')\in I_\pm^2}e^{-ip(r(\tau)-r(\tau'))}\,P_\tau\,\Phi\big(x+r(\tau)-r(\tau'),\,\sigma_{r(\tau')}\eta\big)\,P_{\tau'} \qquad (3.79)$$
yield

yield

Lemma 3.13 For all p ∈ T dν and k ∈ B(0, κ), there exists a symmetric matrix D(p) ∈

Md(C) such that

λ1(k, p) = 1 +i

2d

τ∈I±kr(τ) +Op(‖k‖3)

+1

2d

τ∈I±

(kr(τ))2

2+

τ,τ ′∈I±(kr(τ))(kr(τ ′))

⟨δ0 ⊗ Pτ ′ |(Sp(1)δ0 ⊗ Pτ )

⟩l2− 1

2d

≡ 1 +i

2d

τ∈I±kr(τ)− 1

2〈k|D(p)k〉 +Op(‖k‖3). (3.80)


The map $p\mapsto D(p)$ is analytic on $T^d_\nu$; when $p\in\mathbb{T}^d_\Gamma$, $D(p)\in M_d(\mathbb{R})$ is non-negative and $D(p)_{i,j} = \frac{\partial^2}{\partial k_i\partial k_j}\lambda(0,p)$, $i,j\in\{1,2,\dots,d\}$. Moreover, $O_p(\|k\|^3)$ is uniform in $p\in T^d_\nu$.

Proof: Existence and analyticity in $p$ of $D(p)$ follow from the analyticity of $\lambda_1$ in $k$ and the analyticity of $S_p(1)$ in $p$, see (3.77). Since $D(p)_{i,j} = \frac{\partial^2}{\partial k_i\partial k_j}\lambda(0,p)$, the matrix is symmetric. For $p\in\mathbb{T}^d_\Gamma$, Lemma 3.12 implies that $\lambda(0,p)$ is real valued, hence the matrix elements of $D(p)$ for $p\in\mathbb{T}^d_\Gamma$ are real as well. Finally, (3.40) implies that $\langle k|D(p)k\rangle\geq 0$ for all $k\in\mathbb{T}^d$.

As a consequence of the spectral analysis above, it follows exactly as in [21] that

Proposition 3.14 Under assumption S, uniformly in $p\in T^d_\nu$, in $y$ in compact sets of $\mathbb{C}^d$ and in $t$ in compact sets of $\mathbb{R}^*_+$,
$$\lim_{n\to\infty}M(y/n,p)^{[tn]} = e^{ityr}\,\Pi, \qquad (3.81)$$
$$\lim_{n\to\infty}M(y/\sqrt{n},p)^{[tn]}\,e^{-i[tn]ry/\sqrt{n}} = e^{-\frac{t}{2}\langle y|D(p)y\rangle}\,\Pi. \qquad (3.82)$$

4 Diffusion Properties

These technical results lead to the main results of this section, namely the existence of a diffusion matrix and central limit type behaviors in the diffusive scaling, as in [21], with the same proofs, which we do not repeat.

Let $\mathcal{N}(0,\Sigma)$ denote the centered normal law in $\mathbb{R}^d$ with positive definite covariance matrix $\Sigma$, and let us write $X^\omega\simeq\mathcal{N}(0,\Sigma)$ for a random vector $X^\omega\in\mathbb{R}^d$ with distribution $\mathcal{N}(0,\Sigma)$. The superscript $\omega$ can be thought of as a vector in $\mathbb{R}^d$ such that for any Borel set $A\subset\mathbb{R}^d$
$$\mathbb{P}(X^\omega\in A) = \frac{1}{(2\pi)^{d/2}\sqrt{\det(\Sigma)}}\int_A e^{-\frac{1}{2}\langle\omega|\Sigma^{-1}\omega\rangle}\,d\omega. \qquad (4.1)$$
The corresponding characteristic function is $\Phi_{\mathcal{N}}(y) = \mathbb{E}(e^{iyX^\omega}) = e^{-\frac{1}{2}\langle y|\Sigma y\rangle}$.

The first result concerning the asymptotics of the random variable $X_n$ reads as follows, for an initial density matrix of the form $\rho_0 = |\varphi_0\rangle\langle\varphi_0|\otimes|0\rangle\langle 0|$:

Theorem 4.1 Under Assumptions C and S, uniformly in $y$ in compact sets of $\mathbb{C}^d$ and in $t$ in compact sets of $\mathbb{R}^*_+$,
$$\lim_{n\to\infty}\Phi^{\varphi_0}_{[tn]}(y/n) = e^{ityr}, \qquad (4.2)$$
$$\lim_{n\to\infty}e^{-i[tn]\frac{ry}{\sqrt{n}}}\,\Phi^{\varphi_0}_{[tn]}(y/\sqrt{n}) = \int_{\mathbb{T}^d_\Gamma}e^{-\frac{t}{2}\langle y|D(p)y\rangle}\,dp, \qquad (4.3)$$
where the right hand side admits an analytic continuation in $(t,y)\in\mathbb{C}\times\mathbb{C}^d$. In particular, for any $(i,j)\in\{1,2,\dots,d\}^2$,
$$\lim_{n\to\infty}\frac{\langle X_i\rangle_{\psi_0}(n)}{n} = r_i, \qquad (4.4)$$
$$\lim_{n\to\infty}\frac{\langle(X-nr)_i(X-nr)_j\rangle_{\psi_0}(n)}{n} = \int_{\mathbb{T}^d_\Gamma}D_{ij}(p)\,dp. \qquad (4.5)$$


If $D(p) = D > 0$ is independent of $p\in\mathbb{T}^d_\Gamma$, then, for any initial vector $\psi_0 = \varphi_0\otimes|0\rangle$, we have as $n\to\infty$, with convergence in law,
$$\frac{X_n - nr}{\sqrt{n}}\ \xrightarrow{\ \mathcal{D}\ }\ X^\omega\simeq\mathcal{N}(0,D). \qquad (4.6)$$

Remark 4.2 We will call both $D(p)$ and $D = \int_{\mathbb{T}^d_\Gamma}D(p)\,dp$ diffusion matrices.

Remark 4.3 We prove below that a central limit theorem for $X_n$ may hold in cases where $D$ depends on $p$; see Theorem 6.4.

For initial conditions corresponding to a density matrix ρ0, we have

Corollary 4.4 Under Assumptions C, S and R for the observable $X^2$, we have for any $t\geq 0$, $y\in\mathbb{C}^d$,
$$\lim_{n\to\infty}\Phi^{\rho_0}_{[tn]}(y/n) = e^{ityr}, \qquad (4.7)$$
$$\lim_{n\to\infty}e^{-i[tn]\frac{ry}{\sqrt{n}}}\,\Phi^{\rho_0}_{[tn]}(y/\sqrt{n}) = \int_{\mathbb{T}^d_\Gamma}e^{-\frac{t}{2}\langle y|D(p)y\rangle}\,\big\langle\Psi_1\big|\Pi\,\widehat\rho_0(\cdot,0,p,\cdot)\big\rangle_{L^2(B_\Gamma\times\mathbb{T}^d\times\Omega; M_{2d}(\mathbb{C}))}\,dp$$
$$= \int_{\mathbb{T}^d_\Gamma}e^{-\frac{t}{2}\langle y|D(p)y\rangle}\,\mathrm{Tr}(\widehat\rho_0)(0,0,p)\,dp, \qquad (4.8)$$
where, see (3.56),
$$\widehat\rho_0(0,0,p) = \sum_{\xi\in\mathbb{Z}^d}\sum_{\zeta\in\Gamma}e^{-ip\zeta}\,\rho_0(\xi-\zeta,\,\xi). \qquad (4.9)$$
Also, for any $(i,j)\in\{1,2,\dots,d\}^2$,
$$\lim_{n\to\infty}\frac{\langle X_i\rangle_{\rho_0}(n)}{n} = r_i, \qquad (4.10)$$
$$\lim_{n\to\infty}\frac{\langle(X-nr)_i(X-nr)_j\rangle_{\rho_0}(n)}{n} = \int_{\mathbb{T}^d_\Gamma}D_{ij}(p)\,\mathrm{Tr}(\widehat\rho_0)(0,0,p)\,dp. \qquad (4.11)$$

From Corollary 4.4 and Theorem 4.1, we gather that the characteristic function of the centered variable $X_n - nr$ in the diffusive scaling $T = nt$, $Y = y/\sqrt{n}$, where $n\to\infty$, converges to
$$\int_{\mathbb{T}^d_\Gamma}\mathcal{F}\Big(\frac{e^{-\frac{1}{2t}\langle\cdot|D^{-1}(p)\cdot\rangle}}{(2\pi t)^{d/2}\sqrt{\det D(p)}}\Big)(y)\,\mathrm{Tr}(\widehat\rho_0)(0,0,p)\,dp, \qquad (4.12)$$
where the function under the Fourier transform symbol $\mathcal{F}$ is a solution to the diffusion equation
$$\frac{\partial\varphi}{\partial t} = \frac{1}{2}\sum_{i,j=1}^d D_{ij}(p)\,\frac{\partial^2\varphi}{\partial x_i\partial x_j}. \qquad (4.13)$$
As explained in [22], [18], it follows that the position space density $\sum_k w_k([nt])\,\delta(\sqrt{n}x-k)$ converges in the sense of distributions to a superposition of solutions of the diffusion equations (4.13) as $n\to\infty$.
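As a quick numerical check of (4.13) in dimension one (with an arbitrary, assumed scalar diffusion coefficient), the Gaussian kernel appearing in (4.12) satisfies the diffusion equation up to discretization error:

```python
import numpy as np

D = 0.7                                   # assumed scalar diffusion coefficient (d = 1)
phi = lambda t, x: np.exp(-x**2 / (2 * D * t)) / np.sqrt(2 * np.pi * D * t)

t, x, h = 2.0, 0.4, 1e-4                  # check (4.13) by centered finite differences
dt  = (phi(t + h, x) - phi(t - h, x)) / (2 * h)
dxx = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2
print(dt, 0.5 * D * dxx)                  # the two sides agree up to O(h^2)
```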


5 Moderate Deviations

It is shown in [21] that the spectral properties of the matrix $M(k,p)$ proven in Section 3.2 allow us to obtain further results on the behavior in $n$ of the distribution of the random variable $X_n$ defined by (3.12), with localized initial condition $\rho_0 = |\varphi_0\rangle\langle\varphi_0|\otimes|0\rangle\langle 0|$, corresponding to the vector $R_0\in l^2(B_\Gamma\times\Omega; M_{2d}(\mathbb{C}))$, see (3.58). This section is devoted to establishing some moderate deviations results on the centered random variable $X_n - nr$. Again, since all proofs are identical to those given in [21], we merely state the results.

Moderate deviations results depend on asymptotic behaviors, in different regimes, of the logarithmic generating function of $X_n - nr$, defined for $y\in\mathbb{R}^d$ by
$$\Lambda_n(y) = \ln\big(\mathbb{E}_{w(n)}(e^{y(X_n-nr)})\big)\in(-\infty,\infty]. \qquad (5.1)$$
This function $\Lambda_n$ is convex and $\Lambda_n(0) = 0$. Let $\{a_n\}_{n\in\mathbb{N}}$ be a positive valued sequence such that
$$\lim_{n\to\infty}a_n = \infty, \quad\text{and}\quad \lim_{n\to\infty}a_n/n = 0. \qquad (5.2)$$
Define $Y_n = (X_n - nr)/\sqrt{na_n}$ and, for any $y\in\mathbb{R}^d$, let $\Lambda_n(y) = \ln(\mathbb{E}_{w(n)}(e^{yY_n}))$ be the logarithmic generating function of $Y_n$.

Proposition 5.1 Assume C and S and further suppose $D(p) > 0$ for all $p\in\mathbb{T}^d$. Let $y\in\mathbb{R}^d\setminus\{0\}$ and assume the real analytic map $\mathbb{T}^d\ni p\mapsto\langle y|D(p)y\rangle\in\mathbb{R}^+_*$ is either constant or admits a finite set $\{p_j(y)\}_{j=1,\dots,J}$ of non-degenerate maximum points in $\mathbb{T}^d$. Then, for any $y\in\mathbb{R}^d$,
$$\lim_{n\to\infty}\frac{1}{a_n}\Lambda_n(a_ny) = \frac{1}{2}\langle y|D(p_1(y))y\rangle, \qquad (5.3)$$
which is a smooth convex function of $y$.

Let us introduce a few more definitions and notations. A rate function $I$ is a lower semicontinuous map from $\mathbb{R}^d$ to $[0,\infty]$ s.t. for all $\alpha\geq 0$ the level sets $\{x\ |\ I(x)\leq\alpha\}$ are closed. When the level sets are compact, the rate function $I$ is called good. For any set $\Gamma\subset\mathbb{R}^d$, $\Gamma^0$ denotes the interior of $\Gamma$, while $\overline\Gamma$ denotes its closure.

As a direct consequence of the Gärtner-Ellis Theorem, see [16] Section 2.3, we get

Theorem 5.2 Define $\Lambda^*(x) = \sup_{y\in\mathbb{R}^d}\big(\langle y|x\rangle - \frac{1}{2}\langle y|D(p_1(y))y\rangle\big)$, for all $x\in\mathbb{R}^d$. Then $\Lambda^*$ is a good rate function and, for any positive valued sequence $\{a_n\}_{n\in\mathbb{N}}$ satisfying (5.2) and all Borel sets $\Gamma\subset\mathbb{R}^d$,
$$-\inf_{x\in\Gamma^0}\Lambda^*(x)\leq\liminf_{n\to\infty}\frac{1}{a_n}\ln\big(\mathbb{P}((X_n-nr)\in\sqrt{na_n}\,\Gamma)\big) \leq\limsup_{n\to\infty}\frac{1}{a_n}\ln\big(\mathbb{P}((X_n-nr)\in\sqrt{na_n}\,\Gamma)\big)\leq-\inf_{x\in\overline\Gamma}\Lambda^*(x). \qquad (5.4)$$

Remark 5.3 As a particular case, when $D(p) = D > 0$ is constant, we get
$$\Lambda^*(x) = \frac{1}{2}\langle x|D^{-1}x\rangle. \qquad (5.5)$$
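As a quick numerical illustration of Remark 5.3 (with an arbitrary, assumed positive definite constant $D$), the Legendre transform defining $\Lambda^*$ can be evaluated on a grid and compared with the closed form $\frac{1}{2}\langle x|D^{-1}x\rangle$:

```python
import numpy as np

D = np.array([[1.0, 0.3],    # assumed constant, positive definite diffusion matrix (d = 2)
              [0.3, 0.5]])

def Lambda_star(x, grid=np.linspace(-20, 20, 401)):
    """Numerical Legendre transform sup_y (<y,x> - 0.5 <y, D y>) over a finite grid of y."""
    best = -np.inf
    for y1 in grid:
        for y2 in grid:
            y = np.array([y1, y2])
            best = max(best, y @ x - 0.5 * y @ D @ y)
    return best

x = np.array([0.4, -0.2])
print(Lambda_star(x), 0.5 * x @ np.linalg.solve(D, x))   # the two values agree closely
```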


Remark 5.4 Specializing the sequence $\{a_n\}_{n\in\mathbb{N}}$ to a power law, i.e. taking $a_n = n^\alpha$, we can express the content of Theorem 5.2 informally as follows. For $0 < \alpha < 1$,
$$\mathbb{P}\big((X_n-nr)\in n^{(\alpha+1)/2}\,\Gamma\big)\simeq e^{-n^{\alpha}\inf_{x\in\Gamma}\Lambda^*(x)}. \qquad (5.6)$$
For $\alpha$ close to zero, we get results compatible with the central limit theorem, and for $\alpha$ close to one, results compatible with those obtained from a large deviation principle.

6 Large Deviations

In this section, we push further the analysis of the large $n$ behavior of the distribution of the random variable $X_n$ (defined by (3.12) with localized initial condition $\rho_0 = |\varphi_0\rangle\langle\varphi_0|\otimes|0\rangle\langle 0|$) by proving large deviations estimates and a central limit theorem under stronger assumptions on the spectral properties of the matrix $M(k,p)$.

We change scales and define, for $n\in\mathbb{N}^*$ and $y\in\mathbb{R}^d$, a rescaled random variable and the corresponding convex logarithmic generating function
$$Y_n = \frac{X_n - nr}{n} \quad\text{and}\quad \Lambda_n(y) = \ln\mathbb{E}_{w(n)}(e^{yY_n})\in(-\infty,\infty]. \qquad (6.1)$$
Because of the new scale $n$, the existence of $\lim_{n\to\infty}\frac{\Lambda_n(ny)}{n} = \lim_{n\to\infty}\frac{\ln\mathbb{E}_{w(n)}(e^{yX_n})}{n} - yr$ is not granted for all $y$, by contrast with the previous section. However, because $\|Y_n\|\leq c_0$ for some $c_0 < \infty$, we have for any $y\in\mathbb{C}^d$, and a fortiori for any $y\in\mathbb{R}^d$,
$$\frac{|\Lambda_n(ny)|}{n}\leq\|y\|\,c_0. \qquad (6.2)$$

Moreover, as the next proposition states, the limit exists for $\|y\|$ small enough, under more global, yet reasonable, hypotheses:

Proposition 6.1 Let $y\in\mathbb{R}^d\cap B(0,\kappa)$ be fixed and assume the function $\mathbb{T}^d_\Gamma\ni p\mapsto|\lambda_1(-iy,p)|$ is either constant or admits a finite set of non-degenerate global maxima $\{p_j(y)\}_{j=1,\dots,N}$ in $\mathbb{T}^d_\Gamma$. Further assume $\nabla_p\lambda_1(-iy,p_j(y)) = 0$, for all $j = 1,\dots,N$. Then, for $\kappa > 0$ small enough,
$$\lim_{n\to\infty}\frac{\Lambda_n(ny)}{n} = -yr + \ln\big(\lambda_1(-iy,p_1(y))\big) \qquad (6.3)$$
is a smooth real valued convex function of $y\in B(0,\kappa)\cap\mathbb{R}^d$.

Remarks 6.2 i) In case $\lambda_1(-iy,p)\equiv\lambda_1(-iy,0)$ is independent of $p\in\mathbb{T}^d$, the right hand side of (6.3) equals $-yr+\ln(\lambda_1(-iy,0))$.
ii) The assumption $\nabla_p\lambda_1(-iy,p_j(y)) = 0$ may be too strong to deal with certain cases. However, if it does not hold, in which case $\nabla_p\lambda_1(-iy,p_j(y))\in i\mathbb{R}^d$, the asymptotics of the integral that yields $\Lambda_n(ny)/n$ is out of reach of a steepest descent method without further information on the behavior of $\lambda_1(-iy,p)$ for $p$ away from $\mathbb{T}^d_\Gamma$.


The proof is a straightforward alteration of that of Proposition 5.1, based on Laplace's method to evaluate the asymptotics of the integral
$$\mathbb{E}_{w(n)}(e^{nyY_n}) = e^{-nyr}\int_{\mathbb{T}^d_\Gamma}\big\langle\Psi_1\big|M^n(-iy,p)R_0\big\rangle_{l^2(B_\Gamma\times\Omega,M_{2d}(\mathbb{C}))}\,dp \qquad (6.4)$$
$$= e^{-nyr}\int_{\mathbb{T}^d_\Gamma}e^{n\ln(\lambda_1(-iy,p))}\big\langle\Psi_1\big|P(-iy,p)R_0\big\rangle_{l^2(B_\Gamma\times\Omega,M_{2d}(\mathbb{C}))}\,dp + O_p(e^{-n\gamma}),$$

where $\gamma > 0$ and the prefactor is non-zero, due to the smallness of $\|y\|$. For completeness, we briefly recall the argument in the case where there is only one maximum at $p_1\in\mathbb{T}^d_\Gamma$. Dropping the variable $y$ from the notation and writing $\ln(\lambda_1(p)) = a(p)+ib(p)$, $\mathcal{P}(p) = \big\langle\Psi_1\big|P(-iy,p)R_0\big\rangle_{l^2(B_\Gamma\times\Omega,M_{2d}(\mathbb{C}))}$, we have, in a neighborhood of $p_1\in\mathbb{T}^d_\Gamma$ determined by $\nabla a(p_1) = 0$ and $D^2a(p_1) < 0$,
$$e^{n\ln(\lambda_1(p))}\mathcal{P}(p) = e^{n\ln(\lambda_1(p_1))}\mathcal{P}(p_1)\,e^{in\langle\nabla b(p_1)|(p-p_1)\rangle}\,e^{n\langle(p-p_1)|(D^2a(p_1)+iD^2b(p_1))(p-p_1)\rangle/2}\,e^{nO(\|p-p_1\|^3)}\,\big(1+O(\|p-p_1\|)\big). \qquad (6.5)$$
Making use of $D^2a(p_1) < 0$, we can restrict the integration range in (6.4) to $B(p_1,\mu(n))\subset\mathbb{R}^d$, with $1/\sqrt{n}\ll\mu(n)\ll 1/n^{1/3}$, at the cost of an error of order $e^{-n\mu(n)^2c}$, for some $c > 0$, so that we are led to
$$\int_{B(0,\mu(n))}e^{in\langle\nabla b(p_1)|p\rangle}\,e^{n\langle p|(D^2a(p_1)+iD^2b(p_1))p\rangle/2}\,dp\ \big(1+O(n\mu(n)^3)+O(\mu(n))\big). \qquad (6.6)$$

When $\nabla b(p_1)\neq 0$, the analysis of the large $n$ behavior of (6.4) and (6.6) requires global information about the analytic properties of $\lambda_1$ for $p$ far from the real set $\mathbb{T}^d_\Gamma$, hence we require $\nabla b(p_1) = 0$. Since $\lambda_1 = 1+O(\|y\|)\neq 0$, we have
$$\nabla a(p_1) = 0 \iff \Re\lambda_1(p_1)\nabla\Re\lambda_1(p_1) + \Im\lambda_1(p_1)\nabla\Im\lambda_1(p_1) = 0, \qquad (6.7)$$
$$\nabla b(p_1) = \nabla\arg(\lambda_1(p))\big|_{p_1} = \frac{\nabla\Im\lambda_1(p_1)}{\Re\lambda_1(p_1)}, \qquad (6.8)$$
so that the hypothesis $\nabla b(p_1) = 0$ implies $\nabla\lambda_1(p_1) = 0$. Now, at the cost of another error of order $e^{-n\mu(n)^2c}$, (6.6) equals
$$\int_{\mathbb{R}^d}e^{n\langle p|(D^2a(p_1)+iD^2b(p_1))p\rangle/2}\,dp\ \big(1+O(n\mu(n)^3)+O(\mu(n))\big) + O(e^{-n\mu(n)^2c}), \qquad (6.9)$$

where a Gaussian integral yields
$$\int_{\mathbb{R}^d}e^{n\langle p|(D^2a(p_1)+iD^2b(p_1))p\rangle/2}\,dp = \frac{G}{n^{d/2}}, \qquad\text{where}\quad G = \frac{(2\pi)^{d/2}}{\sqrt{\det(-D^2a(p_1)-iD^2b(p_1))}}. \qquad (6.10)$$

Altogether, we get
$$\frac{\ln\big(\mathbb{E}_{w(n)}(e^{nyY_n})\big)}{n} = \ln\big(\lambda_1(-iy,p_1)\big) + \frac{1}{n}\ln\Big(\frac{G\,\mathcal{P}(p_1)}{n^{d/2}}\big(1+O(n\mu(n)^3)+O(\mu(n))+O(e^{-n\mu(n)^2c})\big)\Big), \qquad (6.11)$$


which yields the result in the limit n→ ∞.

We set for all $y\in\mathbb{R}^d$
$$\overline\Lambda(y) = \limsup_{n\to\infty}\frac{\Lambda_n(ny)}{n}\in(-\infty,\infty), \qquad (6.12)$$
which is convex, finite everywhere and bounded by $c_0\|y\|$. Moreover, for $\|y\| < \kappa$, $\overline\Lambda(y)$ equals the right hand side of (6.3) and is thus smooth, and $\overline\Lambda(0) = \Lambda(0) = 0$. Let us consider the Legendre transform of $\overline\Lambda$,
$$\overline\Lambda^*(x) = \sup_{y\in\mathbb{R}^d}\big(\langle y|x\rangle - \overline\Lambda(y)\big)\geq 0, \quad\text{for all } x\in\mathbb{R}^d. \qquad (6.13)$$

We are now in a position to state our large deviations results via the Gärtner-Ellis Theorem.

Theorem 6.3 Assume the hypotheses of Proposition 6.1. Let $\overline\Lambda$ and $\overline\Lambda^*$ be defined by (6.12) and (6.13). Further assume $\overline\Lambda$ is strictly convex in a neighborhood of the origin. Then $\overline\Lambda^*$ is a good rate function and there exists $\eta > 0$ such that for any Borel set $\Gamma\subset\mathbb{R}^d$,
$$\limsup_{n\to\infty}\frac{1}{n}\ln\big(\mathbb{P}((X_n-nr)\in n\Gamma)\big)\leq-\inf_{x\in\overline\Gamma}\overline\Lambda^*(x), \qquad (6.14)$$
$$\liminf_{n\to\infty}\frac{1}{n}\ln\big(\mathbb{P}((X_n-nr)\in n\Gamma)\big)\geq-\inf_{x\in\Gamma^0\cap B(0,\eta)}\overline\Lambda^*(x). \qquad (6.15)$$

Proof: Exercise 2.3.25, p. 54 in [16], shows that since $\overline\Lambda$ is finite on $\mathbb{R}^d$, $\overline\Lambda^*$ is a good rate function and (6.14) holds. To show that (6.15) holds, we invoke Baldi's Theorem, Thm 4.5.20 in [16]. First, Exercise 4.1.10 of [16], point c), shows that the law of $Y_n$ is exponentially tight, as a consequence of $\overline\Lambda^*$ being a good rate function and of (6.14) holding true. Then, by Exercise 2.3.25 again, if $x = \nabla\overline\Lambda(y) = \nabla\Lambda(y)$ for some $y\in B(0,\kappa)$, then $x\in F$, where $F$ is the set of exposed points of $\overline\Lambda^*$ with exposing hyperplane $y$. Let us recall that this means that, for all $z\neq x$, $yx-\overline\Lambda^*(x) > yz-\overline\Lambda^*(z)$. Now, since $\Lambda$ is strictly convex at the origin, its Hessian at zero is positive definite and $\nabla\Lambda(0) = 0$. It thus follows from the implicit function theorem that, for some $\eta > 0$, the map $y\mapsto\nabla\Lambda(y)$ is a bijection onto $B(0,\eta)$. Hence $B(0,\eta)$ is included in the set of exposed points of $\overline\Lambda^*$. Also, the corresponding set of exposing hyperplanes belongs to $B(0,\kappa)$, where $\overline\Lambda$ coincides with $\Lambda$, which is finite everywhere. Hence, all hypotheses of Baldi's Theorem are met, so that (6.15) holds.

Another direct consequence of Proposition 6.1 together with (6.2) is a central limit theorem for $X_n$, as proven by Bryc, [13]. A vector valued version of Bryc's Theorem suited for our purpose can be found in [19].

Theorem 6.4 Under the assumptions of Proposition 6.1, we have, with convergence in law,
$$\frac{X_n - nr}{n^{1/2}}\ \longrightarrow\ \mathcal{N}(0,D), \qquad (6.16)$$
where $D_{i,j} = \frac{\partial^2}{\partial y_i\partial y_j}\Lambda(0)\geq 0$.


Remark 6.5 The results of this section carry over to the cases considered in [21], see alsoSection 9.

7 Example

Let us consider here a fairly general situation in which the spectral hypotheses we need canbe checked explicitly.

We work in $\mathbb{Z}^d$ and consider a model characterized by a representation of $\mathbb{Z}^d$, $x\mapsto\sigma_x$, in terms of measure preserving maps, a jump function $r: I_\pm\to\mathbb{Z}^d$ such that
$$r(\tau)-r(\tau')\in\Gamma, \quad\forall\tau,\tau'\in I_\pm, \qquad (7.1)$$
a kernel $\mathbb{P}$ with identical rows,
$$\mathbb{P}(\eta,\zeta) = \mathbb{P}(\zeta), \quad\forall\eta\in\Omega, \qquad (7.2)$$
and a set of unitary matrices $\{\eta\}_{\eta\in\Omega}$ with trivial commutant,
$$\{\eta\}'_{\eta\in\Omega} = \{c\,\mathbb{I},\ c\in\mathbb{C}\}. \qquad (7.3)$$
This implies that the corresponding stationary distribution is $p(\zeta) = \mathbb{P}(\zeta)$. We address the simplicity of the eigenvalue 1 of $M(0,p)|_{\mathcal{I}}$, see Remark 3.11.

simplicity of the eigenvalue 1 of M(0, p)|I , see Remark 3.11.

Proposition 7.1 Under assumptions (7.1), (7.2) and (7.3), M(k, p)|I is independent of p

and M(0, p)|I admits 1 as a simple eigenvalue.

Proof: The simplicity of the eigenvalue 1 of $M(0,p)|_{\mathcal{I}}$ is equivalent to the simplicity of the eigenvalue 1 of $M(0,p)^*|_{\mathcal{I}}$. We first observe that $M(k,p)^*$ leaves the subspace
$$\mathcal{J}\equiv\mathrm{span}\{\delta_0\otimes A\ |\ A:\Omega\to M_{2d}(\mathbb{C})\ \text{is constant}\} \qquad (7.4)$$
invariant:
$$(M(k,p)^*\,\delta_0\otimes A)(x,\eta) = \sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}p(\zeta)\,e^{-ikr(\tau')}\,e^{-ip(r(\tau)-r(\tau'))}\,(\sigma_x\zeta)^*\,\delta_0\big(x+r(\tau)-r(\tau')\big)\,P_\tau AP_{\tau'}\,\zeta \qquad (7.5)$$
$$= e^{ipx}\,\delta_0(x)\sum_{\substack{\tau,\tau'\in I_\pm\\ \zeta\in\Omega}}p(\zeta)\,(\sigma_x\zeta)^*\,P_\tau AP_{\tau'}\,e^{-ikr(\tau')}\,\zeta = \delta_0(x)\sum_{\zeta\in\Omega}p(\zeta)\,\zeta^*A\,U(k)\,\zeta,$$


where $U(k) = \sum_{\tau'\in I_\pm}P_{\tau'}e^{-ikr(\tau')}$. Hence we have $\mathcal{I}\subset\mathcal{J}$ and $M(k,p)^*|_{\mathcal{J}}$ is independent of $p\in\mathbb{T}^d_\Gamma$. Thus we can consider $M(0,p)^*|_{\mathcal{J}}$. Note that $U(0) = \mathbb{I}$ and that $M(0,p)^*|_{\mathcal{J}}\,\delta_0\otimes A = \delta_0\otimes A$ is equivalent to $\mathcal{M}(A) = A$, where
$$\mathcal{M}(A) := \sum_{\zeta\in\Omega}p(\zeta)\,\zeta^*A\,\zeta, \quad\forall A\in M_{2d}(\mathbb{C}). \qquad (7.6)$$
With the scalar product $\langle A|B\rangle = \mathrm{Tr}(A^*B)$ on $M_{2d}(\mathbb{C})$ we have
$$\|\mathcal{M}(A)\|^2 = \sum_{(\zeta,\eta)\in\Omega^2}p(\zeta)p(\eta)\,\langle\eta^*A\eta|\zeta^*A\zeta\rangle, \qquad (7.7)$$
where $|\langle\eta^*A\eta|\zeta^*A\zeta\rangle|\leq\|A\|^2$, with equality if and only if $\eta^*A\eta = e^{i\theta_{\eta\zeta}}\zeta^*A\zeta$, for some $\theta_{\eta\zeta}\in\mathbb{R}$. Hence $\|\mathcal{M}(A)\| = \|A\|$ if and only if $\eta^*A\eta = \zeta^*A\zeta$, for all $\eta,\zeta$. Thus, any matrix invariant under $\mathcal{M}$ satisfies
$$\mathcal{M}(A) = \eta^*A\eta = A, \quad\forall\eta\in\Omega. \qquad (7.8)$$
Since the commutant of $\{\eta\}_{\eta\in\Omega}$ is assumed to be reduced to $\{c\,\mathbb{I},\ c\in\mathbb{C}\}$, we get the result.

8 Examples of diffusive random dynamics

In this section we consider a specific example of a measure $d\mu$ on $U(2d)$, the set of coin matrices, for which we can prove convergence results on the random quantum dynamical system associated with (3.1) for large times. This example is a generalization of the example considered in [21] for site-independent coin matrices. While the following results hold for vector and density matrix initial conditions, we only consider here the vector case, for brevity.

8.1 Permutation matrices

We start by recalling a few deterministic facts. Let $S_{2d}$ be the set of permutations of the $2d$ elements of $I_\pm = \{\pm 1,\pm 2,\dots,\pm d\}$. For $\pi\in S_{2d}$, define
$$C(\pi) = \sum_{\tau\in I_\pm}|\pi(\tau)\rangle\langle\tau|\in U(2d), \quad\text{so that}\quad C_{\sigma\tau}(\pi) = \delta_{\sigma,\pi(\tau)}, \qquad (8.1)$$
i.e. $C(\pi)$ is the permutation matrix associated with $\pi$. Note the elementary properties: for any $\pi,\sigma\in S_{2d}$,
$$C(\mathbb{I}) = \mathbb{I}, \qquad C^*(\pi) = C^T(\pi) = C(\pi^{-1}), \qquad C(\pi)C(\sigma) = C(\pi\sigma). \qquad (8.2)$$

The matrices $C(\pi)$ allow for explicit computations of the relevant quantities introduced in Section 2. Given a sequence $\{C_j = C(\pi_j)\}_{j=1,\dots,n}$ of such matrices, a direct computation shows that, with the definition $\tau_j = \pi_j(\tau_{j-1})$, $J^0_k(n)$ takes the form
$$J^0_k(n) = \sum_{\substack{\tau_1\in I_\pm\ \text{s.t.}\\ \sum_{j=1}^n r(\tau_j)=k}}|\tau_n\rangle\langle\pi_1^{-1}(\tau_1)|, \qquad (8.3)$$
and $J^0_k(n) = 0$ if no $\tau_1\in I_\pm$ yields $\sum_{j=1}^n r(\tau_j) = k$.

Consequently, the non-zero probabilities $W_k(n)$ on $\mathbb{Z}^d$ read, for any normalized internal state vector $\varphi_0$,
$$W^{\varphi_0}_k(n) = \|J^0_k(n)\varphi_0\|^2 = \sum_{\substack{\tau_1\in I_\pm\ \text{s.t.}\\ \sum_{j=1}^n r(\tau_j)=k}}|\langle\pi_1^{-1}(\tau_1)|\varphi_0\rangle|^2. \qquad (8.4)$$

Moreover, with $\tau_1 = \pi_1(\tau_0)$ we get
$$\varphi_0 = \sum_{\tau_0\in I_\pm}a_{\tau_0}|\tau_0\rangle \quad\Rightarrow\quad |\langle\pi_1^{-1}(\tau_1)|\varphi_0\rangle|^2 = \sum_{\tau_0\in I_\pm}|a_{\tau_0}|^2\,\delta_{\tau_1,\pi_1(\tau_0)}. \qquad (8.5)$$
Hence $W^{\varphi_0}_k(n) = \sum_{\tau_0\in I_\pm}|a_{\tau_0}|^2\,\delta_{\sum_{j=1}^n r(\tau_j),\,k}$, so that for $F = \mathbb{I}\otimes f$ and $\psi_0 = \varphi_0\otimes|0\rangle$,
$$\langle F\rangle_{\psi_0}(n) = \sum_{k\in\mathbb{Z}^d}W^{\varphi_0}_k(n)\,f(k) = \sum_{\tau_0\in I_\pm}|a_{\tau_0}|^2\,f\Big(\sum_{j=1}^n r(\tau_j)\Big). \qquad (8.6)$$

Remarks 8.1 In other words, given a set of $n$ permutations, there is no more quantum randomness in the variable $X_n$, except in the initial state. If one generalizes the set of matrices by adding phases to the matrix elements of the permutation matrices, the probability distribution $\{W^{\varphi_0}_k(n)\}_{k\in\mathbb{Z}^d}$ does not change, see [21].

Therefore the characteristic functions take the form

Corollary 8.2 With $\tau_j = (\pi_j\pi_{j-1}\cdots\pi_1)(\tau_0)$, for $j = 1,\dots,n$,
$$\Phi^{\varphi_0}_n(y) = \sum_{\tau_0\in I_\pm}e^{iy\sum_{j=1}^n r(\tau_j)}\,|a_{\tau_0}|^2. \qquad (8.7)$$
The dynamical information is contained in the sum $S_n = \sum_{j=1}^n r(\tau_j)$ which appears in the phase. The next section is devoted to its study, in the random version of this model where the coin matrices are random variables with values in $\{C(\pi),\ \pi\in S_{2d}\}$, distributed according to (3.1).

8.2 Random Setup

We consider that the permutation matrices are given by the process defined by (3.1), and we identify $C(\pi)$ and $\pi$:

Assumption M:

Let $\{\omega(n)\}_{n\in\mathbb{N}}$ be a finite state space Markov chain on $\Omega\subset S_{2d}$ with transition matrix $\mathbb{P}$ and stationary initial distribution $p$, and let $\sigma$ be a representation of $\mathbb{Z}^d$ of the form $x\mapsto\sigma_x$, where for each $x\in\mathbb{Z}^d$, $\sigma_x:\Omega\to\Omega$, with $\Omega\subset S_{2d}$, is a measure preserving bijection. We set $C^\omega_n(x) = \sigma_x(\omega(n))$, with $C^\omega_n(0) = \omega(n)$.

Thus, for every $x\in\mathbb{Z}^d$, the set of random matrices/permutations $\{\pi^\omega_n(x)\}_{n\in\mathbb{N}} = \{\sigma_x(\omega(n))\}_{n\in\mathbb{N}}$, with $\omega(n)\in\Omega\subset S_{2d}$, is distributed as the Markov chain.


Given a set of random permutation matrices as above, we start at time zero on site $0\in\mathbb{Z}^d$, with initial vector $|\tau_0\rangle\otimes|0\rangle$. The dynamics induced by the permutation matrices sends this state at time $n\geq 1$ to the state $|\sum_{s=1}^n r(\tau_s)\rangle\otimes|\tau_n\rangle$, where $\tau_j = \sigma_{\sum_{s=1}^{j-1}r(\tau_s)}(\omega(j))\,\tau_{j-1}$. Hence, in view of (8.7), we introduce the random variables $S_n(\omega) = \sum_{j=1}^n r(\tau_j(\omega))\in\mathbb{Z}^d$ and $r(\tau_j(\omega))$, where $\tau_j(\omega)$ is defined for $j = 1,\dots,n$ by
$$\tau_1(\omega) = \sigma_0(\omega(1))\,\tau_0, \qquad \tau_j(\omega) = \sigma_{\sum_{s=1}^{j-1}r(\tau_s(\omega))}(\omega(j))\,\tau_{j-1}(\omega), \qquad (8.8)$$
for a given $\tau_0$. Note that $\tau_j(\omega) = \tau_j(\omega(j),\omega(j-1),\dots,\omega(1))$. These random variables have the following properties.
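The following is a minimal Monte Carlo sketch of the process (8.8), with assumed illustrative choices: $d = 1$, two permutations of $I_\pm = \{+1,-1\}$ for $\Omega$, a trivial representation $\sigma_x\equiv\mathrm{Id}$, and a symmetric transition matrix. It samples $S_n(\omega)$ over the disorder and estimates its drift and diffusive scaling, in the spirit of Proposition 8.5 below.

```python
import numpy as np

rng = np.random.default_rng(1)

# d = 1: internal states indexed 0 ~ +1, 1 ~ -1; jump r(+1) = +1, r(-1) = -1
r = np.array([+1, -1])
Omega = [np.array([0, 1]),            # identity permutation
         np.array([1, 0])]            # transposition (+1 <-> -1)
P = np.array([[0.8, 0.2],
              [0.2, 0.8]])            # symmetric transition matrix, p = (1/2, 1/2)

def sample_Sn(n, tau0=0):
    """One realization of S_n = sum_j r(tau_j), with sigma_x = Id (trivial representation)."""
    w = rng.choice(2)                 # omega(1) from the stationary distribution
    tau, S = tau0, 0
    for _ in range(n):
        tau = Omega[w][tau]           # tau_j = omega(j)(tau_{j-1}), cf. (8.8)
        S += r[tau]
        w = rng.choice(2, p=P[w])     # advance the Markov chain
    return S

n, samples = 2000, 2000
S = np.array([sample_Sn(n) for _ in range(samples)])
print("S_n / n       ~", S.mean() / n)   # drift, close to r = 0 for this jump function
print("Var(S_n) / n  ~", S.var() / n)    # estimates the variance Sigma, cf. (8.23)/(8.25)
```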

Lemma 8.3 Let $\varphi_0 = \sum_{\tau_0}a_{\tau_0}|\tau_0\rangle$ be the initial vector, and assume M holds. Let $\{\tau_j(\omega)\}_{j\in\mathbb{N}}$ be the $I_\pm$-valued process defined by (8.8). Then, with the notation
$$\mathrm{Prob}\big((\tau_n(\omega),\dots,\tau_1(\omega),\tau_0(\omega)) = (\tau_n,\dots,\tau_1,\tau_0)\big) = T(\tau_n,\dots,\tau_1,\tau_0), \quad n\in\mathbb{N}, \qquad (8.9)$$
we have
$$T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2\sum_{\pi_1,\dots,\pi_n\in\Omega}p(\pi_1)\,\mathbb{P}\big(\sigma_{r(\tau_1)}(\pi_1),\pi_2\big)\cdots\mathbb{P}\big(\sigma_{r(\tau_{n-1})}(\pi_{n-1}),\pi_n\big)\,\times$$
$$\times\langle\tau_n|C(\pi_n)\tau_{n-1}\rangle\cdots\langle\tau_1|C(\pi_1)\tau_0\rangle. \qquad (8.10)$$

Proof: We start with $T(\tau_0) = |a_{\tau_0}|^2$, according to the initial condition, and
$$T(\tau_1,\tau_0) = |a_{\tau_0}|^2\,\mathrm{Prob}\big(\omega_1\ \text{s.t.}\ \sigma_0(\omega_1)(\tau_0) = \tau_1\big) = |a_{\tau_0}|^2\sum_{\pi_1\in\Omega}\delta_{\tau_1,\sigma_0(\pi_1)(\tau_0)}\,p(\pi_1) = |a_{\tau_0}|^2\sum_{\pi_1\in\Omega}\langle\tau_1|C(\sigma_0(\pi_1))\,\tau_0\rangle\,p(\pi_1). \qquad (8.11)$$

Note that since $\sigma_0$ is the identity, $T(\tau_1,\tau_0) = \mathbb{E}_p\big(\langle\tau_1|C(\omega)\,\tau_0\rangle\big)\,|a_{\tau_0}|^2$. Then
$$T(\tau_2,\tau_1,\tau_0) = |a_{\tau_0}|^2\,\mathrm{Prob}\big((\omega_1,\omega_2)\ \text{s.t.}\ \sigma_0(\omega_1)(\tau_0) = \tau_1\ \text{and}\ \sigma_{r(\tau_1)}(\omega_2)(\tau_1) = \tau_2\big) \qquad (8.12)$$
$$= |a_{\tau_0}|^2\sum_{\pi_1,\pi_2\in\Omega}\delta_{\tau_2,\sigma_{r(\tau_1)}(\pi_2)(\tau_1)}\,\delta_{\tau_1,\sigma_0(\pi_1)(\tau_0)}\,p(\pi_1)\mathbb{P}(\pi_1,\pi_2) = |a_{\tau_0}|^2\sum_{\pi_1,\pi_2\in\Omega}\langle\tau_2|C(\sigma_{r(\tau_1)}(\pi_2))\,\tau_1\rangle\langle\tau_1|C(\sigma_0(\pi_1))\,\tau_0\rangle\,p(\pi_1)\mathbb{P}(\pi_1,\pi_2),$$

and, by induction,
$$
T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2 \sum_{\pi_1,\dots,\pi_n\in\Omega} p(\pi_1)\,\mathbb P(\pi_1,\pi_2)\cdots\mathbb P(\pi_{n-1},\pi_n)\;\langle\tau_n|C(\sigma_{\sum_{s=1}^{n-1} r(\tau_s)}(\pi_n))\,\tau_{n-1}\rangle\cdots\langle\tau_1|C(\sigma_0(\pi_1))\,\tau_0\rangle. \qquad (8.13)
$$

Using the properties of $\sigma$, the measure preserving representation of $\mathbb Z^d$, we get for any $j\ge 1$, with the change of variables $\tilde\pi_j = \sigma_{\sum_{s=1}^{j-1} r(\tau_s)}(\pi_j)$,
$$
\sum_{\pi_j\in\Omega} \mathbb P(\pi_{j-1},\pi_j)\,\langle\tau_j|C\big(\sigma_{\sum_{s=1}^{j-1} r(\tau_s)}(\pi_j)\big)\,\tau_{j-1}\rangle
= \sum_{\tilde\pi_j\in\Omega} \mathbb P\big(\sigma_{r(\tau_{j-1})}(\tilde\pi_{j-1}),\tilde\pi_j\big)\,\langle\tau_j|C(\tilde\pi_j)\,\tau_{j-1}\rangle, \qquad (8.14)
$$


which ends the proof.
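The following sketch simulates one realization of the process (8.8) and of $S_n(\omega)$; all concrete choices ($d=1$, $\Omega = S_2$, uniform $p$ and $\mathbb P$, and a site action $\sigma_x$ taken as a cyclic shift of the labels of $\Omega$, which is measure preserving for uniform $p$ and satisfies $\sigma_0=\mathrm{id}$) are illustrative and not taken from the text. It also prints the empirical mean and variance of $S_n(\omega)/n$ over many realizations.

```python
import numpy as np
from itertools import permutations

# Illustrative sketch of the process (8.8); the choices of d, Omega, p, P and sigma_x
# below are ours, not the paper's.
rng = np.random.default_rng(1)
Omega = [np.array(s) for s in permutations(range(2))]    # S_2 = {id, swap}
m = len(Omega)
p = np.full(m, 1.0 / m)                                  # stationary initial distribution
P = np.full((m, m), 1.0 / m)                             # transition matrix of omega(n)
r = np.array([+1, -1])

def sigma(x, k):
    """A measure-preserving bijection of (the labels of) Omega for each site x."""
    return (k + x) % m

def sample_S_n(n, tau0):
    """One realization of S_n(omega) = sum_j r(tau_j(omega)), with tau_j given by (8.8)."""
    k = rng.choice(m, p=p)                               # omega(1)
    tau, pos, S = tau0, 0, 0
    for _ in range(n):
        pi = Omega[sigma(pos, k)]                        # C_j^omega(x) = sigma_x(omega(j))
        tau = pi[tau]
        S += r[tau]
        pos = S                                          # site reached so far
        k = rng.choice(m, p=P[k])                        # next step of the chain
    return S

samples = np.array([sample_S_n(200, 0) for _ in range(2000)])
print(samples.mean() / 200, samples.var() / 200)         # empirical drift and diffusion per step
```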

The distribution of $\{\tau_j(\omega)\}_{j\in\mathbb N}$ is neither stationary nor Markovian, in general. But we can express it in a more convenient way as follows.

Consider the space $\mathbb C^{2d}\otimes\mathbb C^{|\Omega|}$ with orthonormal basis denoted by $\{|\tau\otimes\pi\rangle\}_{\tau\in I_\pm,\,\pi\in\Omega}$. Let $N\in M_{2d|\Omega|}(\mathbb R^+)$ be defined by its matrix elements
$$
\langle\tau'\otimes\pi'|N\,\tau\otimes\pi\rangle = \langle\tau'|C(\pi')\,\tau\rangle\,\mathbb P(\sigma_{r(\tau)}(\pi),\pi') = \delta_{\tau',\pi'(\tau)}\,\mathbb P(\sigma_{r(\tau)}(\pi),\pi'), \qquad (8.15)
$$

and the vectors $\Psi_1 = \sum_{\tau\in I_\pm,\,\pi\in\Omega} |\tau\otimes\pi\rangle$ and $A(\tau_0) = \sum_{\pi,\tau} |a_{\tau_0}|^2\, p(\pi)\,\langle\tau|C(\pi)\,\tau_0\rangle\, |\tau\otimes\pi\rangle$. Then, (8.10) reads
$$
T(\tau_n,\dots,\tau_1,\tau_0) = \big\langle \Psi_1 \big|\, (|\tau_n\rangle\langle\tau_n|\otimes\mathbb I)\,N\,(|\tau_{n-1}\rangle\langle\tau_{n-1}|\otimes\mathbb I)\cdots N\,(|\tau_1\rangle\langle\tau_1|\otimes\mathbb I)\,A(\tau_0)\big\rangle. \qquad (8.16)
$$

Introducing also the matrices $D(y)$ and $N(y)$ on $\mathbb C^{2d}\otimes\mathbb C^{|\Omega|}$, with $y\in\mathbb T^d$, by
$$
D(y) = d(y)\otimes\mathbb I, \quad\text{where } d(y) = \sum_{\tau\in I_\pm} e^{iy r(\tau)}|\tau\rangle\langle\tau| \quad\text{and}\quad N(y) = D(y)\,N, \qquad (8.17)
$$

we can express the characteristic function $\Phi^T_n : \mathbb T^d\to\mathbb C$ of the random variable $S_n(\omega) = \sum_{j=1}^{n} r(\tau_j(\omega))$ as
$$
\Phi^T_n(y) = \sum_{\tau_n,\tau_{n-1},\dots,\tau_0\in I_\pm} e^{iy\sum_{j=1}^{n} r(\tau_j)}\, T(\tau_n,\dots,\tau_1,\tau_0) = \big\langle\Psi_1\big|\,(N(y))^{n-1} B(y)\big\rangle, \quad\text{where } B(y) = D(y)\sum_{\tau_0\in I_\pm} A(\tau_0). \qquad (8.18)
$$
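For readers who wish to experiment, the sketch below assembles the matrix $N$ of (8.15), the phase matrix $D(y)$ of (8.17), and evaluates $\Phi^T_n(y)$ through (8.18), for the same kind of illustrative toy model as above (uniform $p$, uniform $\mathbb P$, cyclic $\sigma_x$); these concrete choices are ours, not the paper's. As a sanity check, $\Phi^T_n(0)$ should equal $1$.

```python
import numpy as np
from itertools import permutations

# Illustrative construction of N, D(y), N(y) of (8.15)-(8.17) for d = 1; the choices of
# Omega, p, P and sigma_x only serve to make the formulas concrete.
d = 1
r = np.array([+1, -1])
Omega = [np.array(s) for s in permutations(range(2 * d))]
m = len(Omega)
p = np.full(m, 1.0 / m)
P = np.full((m, m), 1.0 / m)                         # transition kernel P(pi, pi')
sigma = lambda x, k: (k + x) % m                     # site action on (labels of) Omega

def idx(tau, k):                                     # basis ordering of |tau (x) pi_k>
    return tau * m + k

N = np.zeros((2 * d * m, 2 * d * m))
for tau in range(2 * d):
    for k in range(m):
        for kp in range(m):
            taup = Omega[kp][tau]                    # delta_{tau', pi'(tau)}
            N[idx(taup, kp), idx(tau, k)] = P[sigma(r[tau], k), kp]

def Phi_T(y, a, n):
    """Characteristic function (8.18) of S_n(omega) for initial coin amplitudes a."""
    D = np.kron(np.diag(np.exp(1j * y * r)), np.eye(m))
    Ny = D @ N
    Psi1 = np.ones(2 * d * m)
    B = np.zeros(2 * d * m, dtype=complex)
    for tau0 in range(2 * d):                        # B(y) = D(y) sum_{tau0} A(tau0)
        for k in range(m):
            B[idx(Omega[k][tau0], k)] += abs(a[tau0])**2 * p[k]
    B = D @ B
    return Psi1 @ np.linalg.matrix_power(Ny, n - 1) @ B

a = np.array([1.0, 1.0]) / np.sqrt(2)
print(Phi_T(0.0, a, 10))                             # should be (approximately) 1
print(Phi_T(0.2, a, 10))
```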

At this point, we can apply the same methods as above to describe the large $n$ behavior of $S_n(\omega)$ by studying the asymptotic behavior of the suitably rescaled characteristic function $\Phi^T_n(y)$, under appropriate spectral assumptions on $N$.

Note that $N$ is a stochastic matrix and that $\mathbb P$ and $p$ are invariant under $\sigma_x$, so that we have
$$
N^T\Psi_1 = \Psi_1, \qquad N\chi_1 = \chi_1, \qquad\text{and}\qquad \|N\| = \mathrm{Spr}(N) = 1, \qquad (8.19)
$$
with
$$
\Psi_1 = \sum_{\tau\in I_\pm,\,\pi\in\Omega} |\tau\otimes\pi\rangle \qquad\text{and}\qquad \chi_1 = \sum_{\tau\in I_\pm,\,\pi\in\Omega} p(\pi)\,|\tau\otimes\pi\rangle. \qquad (8.20)
$$

Also, $D(y)$ being unitary for $y$ real, we have $\|N(y)\|\le 1$ for all $y\in\mathbb T^d$.

Assumption S:
$$
\sigma(N)\cap\partial D(0,1) = \{1\}, \ \text{ and this eigenvalue is simple.} \qquad (8.21)
$$

Remarks 8.4 The corresponding spectral projector of $N$ reads $|\chi_1\rangle\langle\Psi_1|/(2d)$. Again, it is enough to assume that S holds for the restriction of $N$ to the $N(y)^*$-cyclic subspace generated by $\Psi_1$.


The perturbative arguments given in Section 4, leading to Corollary 4.1 by means of Lévy's Theorem, apply here. For $y\in\mathbb C^d$ in a neighborhood of the origin, let $\lambda_1(y)$ be the simple analytic eigenvalue of $N(y)$ emanating from 1 at $y=0$. Let $v\in\mathbb R^d$ and the non-negative matrix $\Sigma\in M_d(\mathbb R)$ be defined by the expansion
$$
\lambda_1(y) = 1 + iyv - \tfrac12\langle y|\Sigma y\rangle + O(\|y\|^3). \qquad (8.22)
$$
Explicit computations yield, for any $y\in\mathbb C^d$,
$$
v = \frac{1}{2d}\sum_{\tau\in I_\pm} r(\tau) \equiv r,
$$
$$
\langle y|\Sigma y\rangle = -\frac{1}{d}\sum_{\tau\in I_\pm}\frac{(y\,r(\tau))^2}{2}
- \frac{1}{d}\sum_{\tau,\tau'\in I_\pm} (y\,r(\tau))(y\,r(\tau'))\Big(\langle\tau\otimes\eta_1|S(1)\,\tau'\otimes\eta_p\rangle - \frac{1}{2d}\Big), \qquad (8.23)
$$

where $S(1)$ is the reduced resolvent of $N$ at 1, $\eta_1 = \sum_\pi |\pi\rangle$ and $\eta_p = \sum_\pi p(\pi)|\pi\rangle$.

Proposition 8.5 Let $\varphi_0 = \sum_{\tau_0\in I_\pm} a_{\tau_0}|\tau_0\rangle$ and let $S_n(\omega) = \sum_{j=1}^{n} r(\tau_j(\omega))$, with $\tau_j(\omega)$ defined by (8.8). Assume M and S, and let $\Sigma$ be defined by (8.23). Then, if $\Sigma > 0$, we have as $n\to\infty$
$$
\frac{S_n(\omega)}{n} \xrightarrow{\ \mathcal D\ } r, \qquad (8.24)
$$
$$
\frac{S_n(\omega) - n v}{\sqrt n} \xrightarrow{\ \mathcal D\ } X^\omega \simeq \mathcal N(0,\Sigma). \qquad (8.25)
$$

As a consequence, for any sample of random coin matrices, we obtain the following long time asymptotics of the quantum mechanical random probability distribution $W^{\varphi_0}_\cdot(n)$ of the variable $X^\omega_n$, whose characteristic function is defined by (8.7).

Theorem 8.6 Under the assumptions of Proposition 8.5, the following random variables converge in distribution as $n\to\infty$:
$$
e^{-iyr\sqrt n}\,\Phi^{\varphi_0}_n(y/\sqrt n) = \sum_{\tau_0\in I_\pm}|a_{\tau_0}|^2\, e^{iy\frac{1}{\sqrt n}(S_n(\omega)-nr)} \ \longrightarrow\ e^{iyX^\omega}, \qquad (8.26)
$$

where $X^\omega\simeq\mathcal N(0,\Sigma)$. Moreover, for any $(i,j)\in\{1,2,\dots,d\}^2$, as $n\to\infty$, we have in distribution
$$
\frac{\langle X_i\rangle^\omega_{\psi_0}(n)}{n} \longrightarrow r_i, \qquad (8.27)
$$
$$
\frac{\langle (X-nr)_i (X-nr)_j\rangle^\omega_{\psi_0}(n)}{n} \longrightarrow D^\omega_{ij}, \qquad (8.28)
$$
where $D^\omega_{ij}$ is distributed according to the law of $X^\omega_i X^\omega_j$, where $X^\omega\simeq\mathcal N(0,\Sigma)$.

Proof: Identical to that of Corollary 6.8 in [21].
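Assuming S holds (it does for the toy model below, by the argument of Section 8.3), $v$ and $\Sigma$ can also be extracted numerically from $\lambda_1(y)$ by finite differences, since (8.22) gives, in dimension $d=1$, $v=\operatorname{Im}\lambda_1'(0)$ and $\Sigma=-\lambda_1''(0)$. The sketch below does this for the illustrative i.i.d. uniform model used earlier, where the increments are i.i.d. symmetric $\pm1$, so one expects $v\approx 0$ and $\Sigma\approx 1$.

```python
import numpy as np
from itertools import permutations

# Hedged numerical sketch: drift v and diffusion constant Sigma of (8.22) from the
# leading eigenvalue of N(y), for the same kind of toy model as above (d = 1,
# uniform p and P, cyclic sigma_x); all these concrete choices are illustrative.
d, r = 1, np.array([+1, -1])
Omega = [np.array(s) for s in permutations(range(2 * d))]
m = len(Omega)
P = np.full((m, m), 1.0 / m)
idx = lambda tau, k: tau * m + k

N = np.zeros((2 * d * m, 2 * d * m))
for tau in range(2 * d):
    for k in range(m):
        for kp in range(m):
            N[idx(Omega[kp][tau], kp), idx(tau, k)] = P[(k + r[tau]) % m, kp]

def lam1(y):
    """Eigenvalue of N(y) = D(y) N of largest modulus (simple under assumption S)."""
    D = np.kron(np.diag(np.exp(1j * y * r)), np.eye(m))
    ev = np.linalg.eigvals(D @ N)
    return ev[np.argmax(np.abs(ev))]

h = 1e-3
v = np.imag((lam1(h) - lam1(-h)) / (2 * h))                    # v = Im lambda_1'(0)
Sigma = -np.real((lam1(h) - 2 * lam1(0.0) + lam1(-h)) / h**2)  # Sigma = -lambda_1''(0)
print(v, Sigma)                                                # expect about 0 and 1 here
```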


8.3 Specific Case

Let us close this section by providing an example that satisfies assumption S. It is the case where the kernel $\mathbb P$ depends on the second index only, i.e., when the permutations $\{\omega(j)\}_{j\in\mathbb N}$ are i.i.d. and distributed according to $p$.

Proposition 8.7 Assume M with a kernel $\mathbb P$ satisfying $\mathbb P(\pi',\pi) = p(\pi)$. Let $P$ be the bi-stochastic matrix acting on $\mathbb C^{2d}$ defined by
$$
P = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi) \equiv \mathbb E_p\big(C^T(\omega)\big), \qquad (8.29)
$$
and assume it is irreducible and aperiodic. Then S holds and Theorem 8.6 applies with $\Sigma$ given by
$$
\Sigma_{ij} = -\frac{1}{2d}\langle r_i|r_j\rangle + r_i r_j - \frac{1}{2d}\big(\langle r_i|S(1)\, r_j\rangle + \langle r_j|S(1)\, r_i\rangle\big), \qquad (8.30)
$$
with $S(1)$ the reduced resolvent of $P$ at 1 and, for $j\in\{1,\dots,d\}$, $r_j = \sum_{\tau\in I_\pm} r_j(\tau)|\tau\rangle\in\mathbb C^{2d}$.

Proof: In this case, (8.15) reduces to
$$
\langle\tau'\otimes\pi'|N\,\tau\otimes\pi\rangle = \langle\tau'|C(\pi')\,\tau\rangle\, p(\pi'), \qquad (8.31)
$$
so that we can write, with $\eta_1 = \sum_\pi|\pi\rangle$,
$$
N^T = \sum_\pi p(\pi)\, C^T(\pi)\otimes|\eta_1\rangle\langle\pi|. \qquad (8.32)
$$
Accordingly, for any $\xi\in\mathbb C^{2d}\otimes\mathbb C^{|\Omega|}$, we have
$$
N^T\xi = \zeta(\xi)\otimes|\eta_1\rangle, \quad\text{with}\quad \zeta(\xi) = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi)\,\langle\pi|\xi\rangle_{\mathbb C^{|\Omega|}}, \qquad \langle\pi|\xi\rangle_{\mathbb C^{|\Omega|}}\in\mathbb C^{2d}. \qquad (8.33)
$$

Hence, any eigenvector $\Psi$ with eigenvalue $e^{i\theta}$, $\theta\in\mathbb R$, needs to be of the form $\Psi = \psi\otimes\eta_1$ with
$$
e^{i\theta}\psi = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi)\,\psi = P\psi. \qquad (8.34)
$$
The matrix $P$ being bi-stochastic, irreducible and aperiodic, there exists only one solution to (8.34), given by $\psi = \sum_{\tau\in I_\pm}|\tau\rangle$ and $e^{i\theta}=1$, which shows that S holds.

The expectation $v$ and correlation matrix $\Sigma$ can be obtained from Theorem 6.6 in [21]. Indeed, under our assumptions, Lemma 8.3 shows that the process $\{\tau_j(\omega)\}_{j=1,\dots,n}$ is a Markov chain on $I_\pm$, with kernel $P = \mathbb E_p(C^T(\omega))$ and initial distribution $p_0(\tau_0) = |a_{\tau_0}|^2$:
$$
T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2 \sum_{\pi_1,\dots,\pi_n\in\Omega} p(\pi_1)p(\pi_2)\cdots p(\pi_n)\,\langle\tau_n|C(\pi_n)\,\tau_{n-1}\rangle\cdots\langle\tau_1|C(\pi_1)\,\tau_0\rangle = P(\tau_n,\tau_{n-1})\cdots P(\tau_1,\tau_0)\, p_0(\tau_0). \qquad (8.35)
$$


The aforementioned result provides the characteristics $v$ and $\Sigma$ (8.30) of the functional central limit theorem for the Markov chain $\{\tau_j(\omega)\}_{j=1,\dots,n}$ corresponding to the random variable $S_n(\omega) = \sum_{j=1}^{n} r(\tau_j(\omega))$.

Proof: With (3.57), (8.10) reads
$$
T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2 \sum_{\pi_1,\dots,\pi_n\in\Omega} p(\pi_1)p(\pi_2)\cdots p(\pi_n)\;\langle\tau_n|C(\sigma_{\sum_{s=1}^{n-1} r(\tau_s)}(\pi_n))\,\tau_{n-1}\rangle\cdots\langle\tau_1|C(\sigma_0(\pi_1))\,\tau_0\rangle, \qquad (8.36)
$$
where, for all $j\ge 1$, thanks to the fact that $\sigma_x$ is measure preserving,
$$
\sum_{\pi_j} p(\pi_j)\,\langle\tau_j|C(\sigma_{\sum_{s=1}^{j-1} r(\tau_s)}(\pi_j))\,\tau_{j-1}\rangle = \sum_{\pi_j} p(\pi_j)\,\langle\tau_j|C(\pi_j)\,\tau_{j-1}\rangle. \qquad (8.37)
$$
Setting $P(\tau',\tau) = \mathbb E_p\big(\langle\tau'|C^T(\omega)\,\tau\rangle\big)$ and $p_0(\tau) = |a_\tau|^2$, we can write
$$
T(\tau_n,\dots,\tau_1,\tau_0) = P(\tau_n,\tau_{n-1})\cdots P(\tau_1,\tau_0)\, p_0(\tau_0), \qquad (8.38)
$$
which proves the claim.
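The sketch below is a direct numerical transcription of (8.29)-(8.30) for $d=1$, $\Omega = S_2$ and an illustrative non-uniform $p$; the reduced resolvent is taken in Kato's convention, $S(1) = (P-1)^{-1}$ on the spectral complement of the eigenvalue $1$, which is how (8.30) appears to be meant. For this choice of $p$ the output agrees with the value $3/7$ obtained from the elementary two-state Markov chain computation.

```python
import numpy as np

# Hedged transcription of (8.29)-(8.30) for d = 1, Omega = S_2 and an illustrative p.
d = 1
r_vec = np.array([+1.0, -1.0])                        # r_1 = sum_tau r_1(tau)|tau>
C = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]   # C(pi) for pi = id, swap
p = np.array([0.3, 0.7])

P = sum(p_k * Ck.T for p_k, Ck in zip(p, C))          # P = E_p(C^T(omega)), bi-stochastic
Pi = np.full((2 * d, 2 * d), 1.0 / (2 * d))           # spectral projection of P at 1
Q = np.eye(2 * d) - Pi
S1 = Q @ np.linalg.inv(P - np.eye(2 * d) + Pi) @ Q    # reduced resolvent of P at 1 (Kato convention)

rbar = r_vec.sum() / (2 * d)
Sigma = (-(r_vec @ r_vec) / (2 * d) + rbar * rbar
         - (r_vec @ S1 @ r_vec + r_vec @ S1 @ r_vec) / (2 * d))
print(Sigma)                                          # 3/7 for this choice of p
```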

Remark 8.8 Actually, a strong law of large numbers holds in this case, i.e. $\lim_{n\to\infty} S_n(\omega)/n = r$ almost surely.

9 Uncorrelated example

In this last section, we briefly present two cases where the random coin matrices are chosen in an uncorrelated way, in order to complete the picture. In a sense, this can be viewed as the limiting case where the representation $\sigma$ of $\mathbb Z^d$ is such that the periodicity lattice $\Gamma$ is infinite. This is the complete opposite of the situation considered in [21], where all coin matrices were identical in space, at all time steps. Nevertheless, the methods developed in that paper apply here too.

We recall some notations used in Section 2.1 in [21]: let $x_s = \sum_{j=1}^{s-1} r(\tau_j)$ and $x'_s = \sum_{j=1}^{s-1} r(\tau'_j)$; then the generic term in Lemma 2.4 reads
$$
\langle\tau'_{s-1}|C^*_s(x'_s)\,\tau'_s\rangle\,\langle\tau_s|C_s(x_s)\,\tau_{s-1}\rangle
= \overline{\langle\tau'_s|C_s(x'_s)\,\tau'_{s-1}\rangle}\,\langle\tau_s|C_s(x_s)\,\tau_{s-1}\rangle \qquad (9.1)
$$
$$
\equiv \langle\tau_s\otimes\tau'_s|\big(C_s(x_s)\otimes\overline{C_s(x'_s)}\big)\,\tau_{s-1}\otimes\tau'_{s-1}\rangle. \qquad (9.2)
$$

Let us introduce the unitary tensor product
$$
V_s(x,y) \equiv C_s(x)\otimes\overline{C_s(y)} \quad\text{in }\ \mathbb C^{2d}\otimes\mathbb C^{2d}. \qquad (9.3)
$$

Now consider the set of paths $G_n(K)$ in $\mathbb Z^{2d}$ from the origin to $K = \binom{k}{k'}\in\mathbb Z^{2d}$ via the (extended) jump function defined by
$$
R : I_\pm^2 \to \mathbb Z^{2d}, \qquad R\binom{\tau_s}{\tau'_s} = \binom{r(\tau_s)}{r(\tau'_s)}, \qquad (9.4)
$$


that is, paths of the form $(T_1,\dots,T_{n-1},T_n)$, where $T_s = \binom{\tau_s}{\tau'_s}\in I_\pm^2$, $s=1,2,\dots,n$, and $\sum_{s=1}^{n} R(T_s) = K$. For $s\ge 2$, let $X_s = \sum_{j=1}^{s-1} R(T_j)$, while $X_1 = 0$. This last condition states that we start the walk at the origin.

With these notations, we consider the complex weight of $n$-step paths in $\mathbb Z^{2d}$ from the origin to $K$, with last step $T$, defined by
$$
W^T_K(n) = \sum_{\substack{(T_1,\dots,T_{n-1})\in (I_\pm^2)^{n-1}\ \mathrm{s.t.}\\ (T_1,\dots,T_{n-1},T)\in G_n(K)}} \langle T|V_n(X_n)\,T_{n-1}\rangle\cdots\langle T_2|V_2(X_2)\,T_1\rangle\,\langle T_1|V_1(0)\,\chi_0\rangle, \qquad (9.5)
$$

with $\chi_0$ defined via the decomposition
$$
\varphi_0 = \sum_{\tau\in I_\pm} a_\tau|\tau\rangle \ \Rightarrow\ \chi_0 = \varphi_0\otimes\overline{\varphi_0} = \sum_{(\tau,\tau')\in I_\pm^2} a_\tau\,\overline{a_{\tau'}}\,|\tau\otimes\tau'\rangle. \qquad (9.6)
$$

The expectation of this complex weight is the key quantity to analyze the averaged characteristic function (3.13), see [21]. Under certain assumptions on the distributions of the matrices $C_j(x)\in\Omega$, with $\Omega$ finite for simplicity, some cases can be readily studied using this method.

Assumption A:

a) The matrices $V^\omega_j(X)$ are distributed so that
$$
\mathbb P\big(V^\omega_n(X_n) = Z_n,\ V^\omega_{n-1}(X_{n-1}) = Z_{n-1},\ \dots,\ V^\omega_1(X_1) = Z_1\big) = \prod_{j=1}^{n} \mathbb P\big(V^\omega_j(X_j) = Z_j\big). \qquad (9.7)
$$

b) The expectation $\mathbb E(V^\omega_k(X))$ is independent of the position $X$:
$$
Q_k = \sum_{Z\in\Omega\otimes\Omega} Z\,\mathbb P\big(V^\omega_k(X) = Z\big) = \mathbb E\big(V^\omega_k(X)\big).
$$

Assumption A is clearly satisfied in the following cases:

Case 1: Assuming that the distributions of the matrices $C_s(x)$ are i.i.d. in time and position, requirement a) is satisfied with $\mathbb P(V^\omega_j(X) = Z)$ independent of $j$. Moreover, $\mathbb P(V^\omega(x,y) = Z) = P_O(Z)$ for all $x\neq y$, and $\mathbb P(V^\omega(x,x) = Z) = P_D(Z)$ for all $x$. Further assuming
$$
\sum_{Z\in\Omega\otimes\Omega} Z\,P_O(Z) = \sum_{Z\in\Omega\otimes\Omega} Z\,P_D(Z) \equiv Q, \qquad (9.8)
$$
we meet requirement b) as well.

Case 2: The following holds:
i) For $X\in\mathbb Z^{2d}$, $V^\omega_j(X)$ is a Markov chain in time on $\Omega\otimes\Omega$ with initial distribution $p_X$ and transition matrix $P_X$. Moreover, for $X\neq Y$, the random variables $V^\omega(X)$, $V^\omega(Y)$ are independent.
ii) The jump function $R : I_\pm^2\to\mathbb Z^{2d}$ is one to one, and any $X\in\mathbb Z^{2d}$ belonging to $\{\sum_{T\in I_\pm^2}\alpha_T R(T),\ \alpha_T\in\mathbb N\}\subset\mathbb Z^{2d}$ can be reached at most once along any path $X_s = \sum_{j=1}^{s-1} R(T_j)$, $s\in\mathbb N$.
iii) $\mathbb E(V^\omega_j(X)) = \sum_{Z\in\Omega\otimes\Omega} Z\,\langle p_X|P_X^{j-1} Z\rangle \equiv Q_j$ is independent of $X$, for any $j\in\mathbb N$.

Under assumption A, we get the following expression for the expectation of $W^T_K(n)$:
$$
\mathbb E\big(W^T_K(n)\big) = \sum_{\substack{(T_1,\dots,T_{n-1})\in (I_\pm^2)^{n-1}\ \mathrm{s.t.}\\ (T_1,\dots,T_{n-1},T)\in G_n(K)}} \langle T|Q_n T_{n-1}\rangle \prod_{j=2}^{n-1}\langle T_j|Q_j T_{j-1}\rangle\; \langle T_1|Q_1\chi_0\rangle. \qquad (9.9)
$$

Now we proceed as in [21]. Introduce, for $Y\in\mathbb T^{2d}$ and $n\ge 0$, the vectors in $\mathbb C^{4d^2}\simeq\mathbb C^{2d}\otimes\mathbb C^{2d}$
$$
\Phi_n(Y) = \sum_{T\in I_\pm^2}\sum_{K\in\mathbb Z^{2d}} e^{iYK}\, W^T_K(n)\,|T\rangle \quad\text{and}\quad \Phi_0 = \sum_{T\in I_\pm^2} A_T\,|T\rangle. \qquad (9.10)
$$

Using the notation
$$
D(Y) = \sum_{T\in I_\pm^2} e^{iY R(T)}\,|T\rangle\langle T|, \quad\text{with } Y\in\mathbb T^{2d}, \quad\text{and}\quad M_k(Y) = D(Y)\, Q_k, \qquad (9.11)
$$

we obtain the following expression for the expectation:
$$
\mathbb E\big(\Phi_n(Y)\big) = M_n(Y)\, M_{n-1}(Y)\cdots M_1(Y)\,\Phi_0. \qquad (9.12)
$$
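As an illustration of (9.10)-(9.12) in Case 1, where $Q_k = Q$ and hence $M_j(Y) = M(Y)$ for all $j$, the sketch below builds $Q = \mathbb E(C\otimes\overline C)$ and $M(Y) = D(Y)Q$ for an illustrative two-element set of coin matrices and evaluates $\mathbb E(\Phi_n(Y)) = M(Y)^n\Phi_0$; the coin set, its distribution and the initial state are our own choices, not the paper's.

```python
import numpy as np

# Hedged sketch of (9.10)-(9.12) in Case 1: all coin matrices i.i.d., so M_j(Y) = M(Y).
d = 1
r = np.array([+1, -1])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Omega = [np.eye(2), H]                     # two possible coin matrices (illustrative)
probs = [0.5, 0.5]

Q = sum(q * np.kron(C, C.conj()) for q, C in zip(probs, Omega))   # E(C (x) C-bar)

def M(Y):
    """M(Y) = D(Y) Q with D(Y) = sum_T exp(i Y.R(T)) |T><T|, T = (tau, tau')."""
    phases = np.array([np.exp(1j * (Y[0] * r[t] + Y[1] * r[tp]))
                       for t in range(2) for tp in range(2)])
    return np.diag(phases) @ Q

phi0 = np.array([1.0, 0.0])                # initial coin state
Phi0 = np.kron(phi0, phi0.conj())          # chi_0 = phi_0 (x) phi_0-bar
Y = np.array([0.3, -0.1])
E_Phi_n = np.linalg.matrix_power(M(Y), 20) @ Phi0
print(E_Phi_n)
```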

We get the following expression for the expectation of the characteristic function (Proposition 2.9 in [21]):
$$
\mathbb E\big(\Phi^{\varphi_0}_n(y)\big) = \int_{\mathbb T^d} \langle\Psi_1|M_n(Y_v)\, M_{n-1}(Y_v)\cdots M_1(Y_v)\,\Phi_0\rangle\, dv, \qquad (9.13)
$$

where
$$
\Psi_1 = \sum_{T\in H_\pm} |T\rangle = \sum_{\tau\in I_\pm} |\tau\otimes\tau\rangle \quad\text{and}\quad Y_v = \binom{y-v}{v}\in\mathbb R^{2d}. \qquad (9.14)
$$

At this stage, the exact dependence of the matrix $M_j$ on the time $j$ becomes crucial. In Case 1, $M_j = M$ for all $j$, so that we are directly led to the asymptotic study of
$$
\int_{\mathbb T^d} \langle\Psi_1|M(Y_v)^n\,\Phi_0\rangle\, dv, \qquad (9.15)
$$
as in [21], which allows us to get diffusion properties and deviation estimates as in Sections 4, 5, 6, provided $M(Y_v)$ satisfies the required spectral properties.
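For completeness, the averaged characteristic function (9.13), which in Case 1 reduces to (9.15), can be approximated by a Riemann sum over $v$; the sketch below does this for the same illustrative model, with $dv$ normalized to total mass one. At $y=0$ the result should be approximately $1$, since it is the total probability.

```python
import numpy as np

# Hedged numerical illustration of (9.13)/(9.15) in Case 1 for d = 1; the coin
# distribution and the initial state are the same illustrative choices as above.
d = 1
r = np.array([+1, -1])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Q = 0.5 * np.kron(np.eye(2), np.eye(2)) + 0.5 * np.kron(H, H.conj())

def M(Y):
    phases = np.array([np.exp(1j * (Y[0] * r[t] + Y[1] * r[tp]))
                       for t in range(2) for tp in range(2)])
    return np.diag(phases) @ Q

phi0 = np.array([1.0, 0.0])
Phi0 = np.kron(phi0, phi0.conj())
Psi1 = np.array([1.0, 0.0, 0.0, 1.0])        # sum_tau |tau (x) tau>, cf. (9.14)

def averaged_char(y, n, grid=200):
    """Riemann-sum approximation of the v-integral in (9.13) with normalized measure."""
    vs = 2 * np.pi * np.arange(grid) / grid
    vals = [Psi1 @ np.linalg.matrix_power(M(np.array([y - v, v])), n) @ Phi0 for v in vs]
    return np.mean(vals)

print(abs(averaged_char(0.0, 30)))           # should be about 1 (total probability)
print(averaged_char(0.4, 30))
```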

In order to deal with Case 2 for a non-stationary initial distribution $p_X$, an analysis of the large-$j$ behavior of $Q_j$, based on the spectral properties of the transition matrix $P_X$, is in order. This should provide the necessary information to reach conclusions similar to those of Case 1.


References

[1] Y. Aharonov, L. Davidovich, N. Zagury, Quantum random walks, Phys. Rev. A, 48, 1687-1690, (1993).

[2] A. Ahlbrecht, V.B. Scholz, A.H. Werner, Disordered quantum walks in one lattice dimension, arXiv:1101.2298.

[3] A. Ahlbrecht, H. Vogts, A.H. Werner, and R.F. Werner, Asymptotic evolution of quantum walks with random coin, J. Math. Phys., 52, 042201 (2011).

[4] A. Ambainis, D. Aharonov, J. Kempe, U. Vazirani, Quantum Walks on Graphs, Proc. 33rd ACM STOC, 50-59 (2001).

[5] A. Ambainis, J. Kempe, A. Rivosh, Coins make quantum walks faster, Proceedings of SODA'05, 1099-1108 (2005).

[6] J. Asch, O. Bourget and A. Joye, Localization Properties of the Chalker-Coddington Model, Ann. H. Poincaré, 11, 1341-1373, (2010).

[7] J. Asch, P. Duclos and P. Exner, Stability of driven systems with growing gaps, quantum rings, and Wannier ladders, J. Stat. Phys. 92, 1053-1070 (1998).

[8] S. Attal, F. Petruccione, C. Sabot, I. Sinayski, Open Quantum Random Walks, preprint.

[9] P. Billingsley, Convergence of Probability Measures, John Wiley and Sons, 1968.

[10] O. Bourget, J. S. Howland and A. Joye, Spectral analysis of unitary band matrices, Commun. Math. Phys. 234, 191-227 (2003).

[11] G. Blatter and D. Browne, Zener tunneling and localization in small conducting rings, Phys. Rev. B 37, 3856 (1988).

[12] L. Bruneau, A. Joye and M. Merkli, Infinite Products of Random Matrices and Repeated Interaction Dynamics, Ann. Inst. Henri Poincaré (B) Prob. Stat., 46, 442-464, (2010).

[13] W. Bryc, A remark on the connection between the large deviation principle and the central limit theorem, Statist. Probab. Lett. 18, 253-256, (1993).

[14] J.T. Chalker, P.D. Coddington, Percolation, quantum tunneling and the integer Hall effect, J. Phys. C 21, 2665-2679, (1988).

[15] C. R. de Oliveira and M. S. Simsen, A Floquet Operator with Purely Point Spectrum and Energy Instability, Ann. H. Poincaré 7, 1255-1277 (2008).

[16] A. Dembo, O. Zeitouni, Large Deviations Techniques and Applications, Springer, 1998.

[17] E. Hamza, A. Joye and G. Stolz, Dynamical Localization for Unitary Anderson Models, Math. Phys., Anal. Geom., 12, (2009), 381-444.

[18] E. Hamza, Y. Kang, J. Schenker, Diffusive propagation of wave packets in a fluctuating periodic potential, Lett. Math. Phys., 95, 53-66, (2011).

[19] V. Jaksic, Y. Ogata, Y. Pautrat, C.-A. Pillet, Entropic Fluctuations in Quantum Statistical Mechanics. An Introduction, arXiv:1106.3786v1.


[20] A. Joye, M. Merkli, Dynamical Localization of Quantum Walks in Random Environments, J. Stat. Phys., 140, 1025-1053, (2010).

[21] A. Joye, Random Time-Dependent Quantum Walks, Commun. Math. Phys., 307, 65-100, (2011).

[22] Y. Kang, J. Schenker, Diffusion of wave packets in a Markov random potential, J. Stat. Phys., 134, 1005-1022, (2009).

[23] T. Kato, Perturbation Theory for Linear Operators, Springer, 1980.

[24] M. Karski, L. Förster, J.M. Choi, A. Steffen, W. Alt, D. Meschede, A. Widera, Quantum Walk in Position Space with Single Optically Trapped Atoms, Science, 325, 174-177, (2009).

[25] J. P. Keating, N. Linden, J. C. F. Matthews, and A. Winter, Localization and its consequences for quantum walk algorithms and quantum communication, Phys. Rev. A 76, 012315 (2007).

[26] J. Kempe, Quantum random walks - an introductory overview, Contemp. Phys., 44, 307-327, (2003).

[27] N. Konno, One-dimensional discrete-time quantum walks on random environments, Quantum Inf. Process. 8, 387-399, (2009).

[28] N. Konno, Quantum Walks, in "Quantum Potential Theory", Franz, Schürmann Edts, Lecture Notes in Mathematics, 1954, 309-452, (2009).

[29] J. Košík, V. Bužek, M. Hillery, Quantum walks with random phase shifts, Phys. Rev. A 74, 022310, (2006).

[30] C. Landim, Central Limit Theorem for Markov Processes, in: From Classical to Modern Probability, CIMPA Summer School 2001, P. Picco, J. San Martin (Eds.), Progress in Probability 54, 147-207, Birkhäuser, 2003.

[31] D. Lenstra and W. van Haeringen, Elastic scattering in a normal-metal loop causing resistive electronic behavior, Phys. Rev. Lett. 57, 1623-1626 (1986).

[32] F. Magniez, A. Nayak, P.C. Richter, M. Santha, On the hitting times of quantum versus random walks, 20th SODA, 86-95, (2009).

[33] D. Meyer, From quantum cellular automata to quantum lattice gases, J. Stat. Phys. 85, 551-574, (1996).

[34] C.A. Pillet, Some Results on the Quantum Dynamics of a Particle in a Markovian Potential, Commun. Math. Phys., 102, 237-254, (1985).

[35] J.-W. Ryu, G. Hur, and S. W. Kim, Quantum Localization in Open Chaotic Systems, Phys. Rev. E, 037201 (2008).

[36] M. Santha, Quantum walk based search algorithms, 5th TAMC, LNCS 4978, 31-46, 2008.

[37] D. Shapira, O. Biham, A.J. Bracken, M. Hackett, One dimensional quantum walk with unitary noise, Phys. Rev. A, 68, 062315, (2003).


[38] Y. Shikano, H. Katsura, Localization and fractality in inhomogeneous quantum walks with self-duality, Phys. Rev. E 82, 031122, (2010).

[39] N. Shenvi, J. Kempe, and K. B. Whaley, Quantum random-walk search algorithm, Phys. Rev. A 67, 052307 (2003).

[40] Y. Yin, D.E. Katsanos and S.N. Evangelou, Quantum Walks on a Random Environment, Phys. Rev. A 77, 022302 (2008).

[41] X. Zhan, Matrix Inequalities, LNM 1790, Springer (2002).

[42] F. Zähringer, G. Kirchmair, R. Gerritsma, E. Solano, R. Blatt, C. F. Roos, Realization of a quantum walk with one and two trapped ions, Phys. Rev. Lett. 104, 100503 (2010).
