Upload
others
View
2
Download
0
Embed Size (px)
Citation preview
Dynamical Localization of Quantum Walks
in Random Environments
Alain Joyeââ Marco MerkliâĄÂ§
Abstract
The dynamics of a one dimensional quantum walker on the lattice with two internaldegrees of freedom, the coin states, is considered. The discrete time unitary dynamics isdetermined by the repeated action of a coin operator in U(2) on the internal degrees offreedom followed by a one step shift to the right or left, conditioned on the state of thecoin. For a fixed coin operator, the dynamics is known to be ballistic.
We prove that when the coin operator depends on the position of the walker and isgiven by a certain i.i.d. random process, the phenomenon of Anderson localization takesplace in its dynamical form. When the coin operator depends on the time variable only andis determined by an i.i.d. random process, the averaged motion is known to be diffusiveand we compute the diffusion constants for all moments of the position.
1 Introduction
The dynamics of Quantum Walks (QW for short) have become a popular topic in theQuantum Computing community as the simplest quantum generalization of classical ran-dom walks, see for example the reviews [3], [21], [24]. In the same way classical randomwalks play an important role in theoretical computer science, typically in search algorithms,QW provide a natural and fruitful extension in the study of quantum search algorithms,see e.g. [33], [4], [27] and the review [31] and references therein. On the other hand, QWalso appeal to physicists interested in quantum dynamics. Indeed, QW can be consideredas simple discrete dynamical systems governed by an effective unitary operator, not nec-essarily given as the exact exponential of i times a microscopic Hamiltonian. For a fewmodels of this type, see e.g. [1], [28], [26], [10], [30], and [6], [9], [14], [17] for their math-ematical analysis. Moreover, there are recent experimental realizations of QW dynamics:[19] showed that cold atoms trapped in optical lattices exhibit a QW for suitably monitoredoptical lattices and [35] show that the same is true for ions caught in monitored Paul traps.
While several types of QW have been defined and studied in different contexts, we willfocus on the simplest one dimensional, discrete time QW on the lattice, defined in analogywith the classical random walk on the lattice. Consider a quantum walker on the latticecarrying two internal degrees of freedom (spin states), called the coin states in this context.The unit time step dynamics is defined by the action of a coin operator in U(2) on theinternal degrees of freedom followed by a one step shift to the right or left, conditionedon the state of the coin degree of freedom, see (2.5) below. As is well known, when thecoin operator is identical at each time step, the QW typically exhibits ballistic dynamics
âInstitut Fourier, UMR 5582, CNRS-Universite de Grenoble I BP 74, 38402 Saint-Martin dâHeres, France.â Partially supported by the Agence Nationale de la Recherche, grant ANR-09-BLAN-0098-01âĄDepartment of Mathematics, Memorial University of Newfoundland, Canada.§Partly supported by NSERC Discovery Grant 205247, and by the Institut Fourier through a one-month
stay as a professeur invite.
due to translation invariance. Moreover, if the coin operator is chosen at random at eachtime step, which yields a non autonomous random dynamical system, then, typically, theaveraged motion is diffusive, see e.g. [24], [25], see also [32].
In this paper we address the time-homogeneous case where the coin operator dependson the position of the walker on the lattice and is given by a sequence of random matricesin U(2). This defines a QW in a random environment. Such situations were first consid-ered numerically in [20], [34] for continuous time QW on graphs, that is, for an evolutionoperators generated by some random discrete Hamiltonian. The outcome of these numeri-cal works provides evidence that Anderson localization, in a strong dynamical form, takesplace. Discrete time QW in random environments were then considered in [22]. By con-trast, the author establishes that a certain choice of random coin operators does not leadto Anderson localization. The random coin operators in [22] are given by a fixed unitarymatrix whose diagonal elements carry random phases.
We consider here a family of i.i.d. random coin operators characterized by generalrequirements on the amplitude and transition probabilities to the right and to the leftexpressed as Assumptions (a), (b) and (c) below. This family is indexed by a real deter-ministic parameter and the randomness is determined by i.i.d. phases carried by all matrixelements of the coin operator, see (2.8). We prove that for all values of the deterministicparameter, dynamical localization takes place everywhere in the spectrum, for almost allrealizations of coin operators.
The lack of Anderson localization in the model considered in [22] is explained by theexistence of a gauge transformation that fully eliminates the randomness of the model. Theset of spatially random unitary matrices we consider coincides with the one considered by[25] in the study of temporal randomness, in one space dimension.
For completeness, we briefly reconsider the study [25] of the non-autonomous case wherethe coin operators are random in time. We relate the averaged motion to that of a persistentrandom walk and provide a simple proof of the fact that for all L â N, the moments of order2L, at time n, behave as D(2L)nL, for large n, with an explicit formula for the diffusionconstants D(2L) > 0.
2 Setup and Main Results
The Hilbert space of pure statesH = C
2 â l2(Z) (2.1)
represents two internal degrees of freedom, also called a coin, with Hilbert space C2, and
the walker whose position Hilbert space is l2(Z). We fix a canonical basis of C2 denotedby |âă, |âă, and the position basis consisting of vectors denoted by |nă, n â Z (eigenvectorsof the position operator, the operator of multiplication by the variable n).
The dynamics of the system is composed of discrete steps, each step consisting of aunitary evolution of the coin (operator C on C2) followed by the motion of the walker,conditioned on the state of the coin. The latter step is determined by the action
|âă â |nă 7â |âă â |n+ 1ă (2.2)
|âă â |nă 7â |âă â |nâ 1ă (2.3)
extended by linearity to H. This means that if the coin is pointing up the walker will moveto the right one step, and if the coin points down the walker moves to the left. The actionof (2.2), (2.3) is implemented by the unitary operator
S =â
kâZ
Pâ â |k + 1ăăk| + Pâ â |k â 1ăăk|
2
where we have introduced the orthogonal projections
Pâ = |âăăâ | and Pâ = |âăăâ |. (2.4)
The one step dynamics consists in tossing the quantum coin and then performing thecoin dependent shift
U = S(C â I) with C =
[a bc d
]s.t. Câ = Câ1. (2.5)
Hence, if one starts form the state | âă â |nă, the (quantum) probability to reach, in onetime step, the site |n+1ă equals |a|2 whereas that to reach |nâ1ă equals 1â|a|2. Similarly,starting form | âă â |nă, the probability to reach the site |n â 1ă equals |a|2 and that toreach |n+ 1ă is 1 â |a|2. The evolution operator at time n reads Un.
Despite the similarity of this dynamics with that of a classical random walk, there isnothing random in the quantum dynamical system at hand. The dynamics is invariantunder translations on the lattice Z, which hints at ballistic transport properties.
More precisely, letX = Iâx be the operator defined on its maximal domain in C2âl2(Z),where x is the position operator given by x|kă = k|kă, for all k â Z. For any L > 0 andany Κ in the domain of XL, we define
ăXLăΚ(n) :=âšÎš, UânXLUnΚ
â©. (2.6)
The analog definition holds for ă|X |LăΚ(n). We recall in the appendix the proof that inour one dimensional setup we have
Lemma 2.1 (Deterministic walk) Let Κ belong to the domain of X2. Then
limnââ
ăX2ăΚ(n)
n2= B â„ 0
with B = 0 iff C is off diagonal.
Note: When C is off diagonal, one has complete localization, see the remarks followingTheorem 2.2 below.
A QW in a non-trivial environment is characterized by coin operators that depends onthe position of the walker: for every k â Z we have a unitary Ck on C2, and the one stepdynamics is given by
U =â
kâZ
PâCk â |k + 1ăăk| + PâCk â |k â 1ăăk| . (2.7)
We consider a random environment in which the coin operator Ck is a random element ofU(2), satisfying the following requirements:
Assumptions:(a) CkkâZ are independent and identically distributed U(2)-valued random variables.(b) The quantum amplitudes of the transitions to the right and to the left are independentrandom variables.(c) The quantum transition probabilities between neighbouring sites are deterministic andindependent of the site.
The first assumption means that the sites k â Z are independent. If the walker is local-ized at site n, then the state of the system is of the form Ïâ|nă for some normalized Ï â C
2.The probability amplitudes for transitions to states Ï â |n + 1ă and Ï â |n â 1ă (Ï â C2
3
normalized) are ăÏ, âă ăâ, CnÏă and ăÏ, âă ăâ, CnÏă, respectively. The second requirementsays that these transitions are statistically independent. Finally, the last requirement saysthat randomness appears as phases (see after (2.5)) and that, in a certain sense, we remainclose to a classical asymmetric random walk on the lattice.
We show in Lemma 2.5 below that under assumptions (a), (b) and (c), we can considerwithout loss of generality
Ck =
[eâiÏâ
kt âeâiÏâkr
eâiÏâkr eâiÏâ
k t
], with 0 †r, t †1 and r2 + t2 = 1 (2.8)
and with ÏâkkâZ âȘ Ïâ
kkâZ i.i.d. random variables. Thus, the action of the randomoperator UÏ is
UÏ|âă â |nă = eâiÏânt|âă â |n+ 1ă + eâiÏâ
nr|âă â |nâ 1ă (2.9)
UÏ|âă â |nă = âeâiÏânr|âă â |n+ 1ă + eâiÏâ
nt|âă â |nâ 1ă.
The elimination of the randomness in the model [22] is a consequence of Lemma 2.5 also.
To define the random phases properly, we introduce the probability space (Ω,F ,P)where Ω = TZ, (T = R/2ÏZ), F is the Ï-algebra generated by cylinders of Borel sets,and P = âkâZ”, where ” is a probability measure on T. We note expectation values withrespect to P by E.
The random variables Ïâk and Ïâ
k on (Ω,F ,P) are defined by
Ïâk : Ω â T, Ïâ
k = Ï2k , Ïâk : Ω â T, Ïâ
k = Ï2k+1. (2.10)
A realization is thus denoted by Ï = (· · · , Ïââ1, Ï
ââ1, Ï
â0 , Ï
â0 , Ï
â1 , Ï
â1 , · · · ) â Ω.
Main result. Our main result is Theorem 2.2 and its corollary. Let UÏ be the onestep dynamics of a QW in a random environment defined by (2.7) with Ck, k â Z given by
(2.8), where Ï#k kâN,#ââ,â are the i.i.d. random variables defined in (2.10), distributed
according to a measure ” on T.
Theorem 2.2 (Spatial disorder) Suppose that ” is an absolutely continuous measured”(Ï) = Ï(Ï)dÏ, with density Ï â Lâ(T), and that the support of ” contains a non-emptyopen set. Then, for any r â (0, 1), there exist C < â, α > 0 such that for any j, k â Z
and any Ï, Ï â â, â
E
[sup
fâC(S), âfâââ€1
âŁâŁâŁ ăÏ â j, f(UÏ) Ï â kăâŁâŁâŁ]†Ceâα|jâk|. (2.11)
Remarks:1) The extreme cases r = 1, t = 0 and r = 0, t = 1 lead to deterministic results that areaddressed in the remarks following Lemma 2.5. As seen from (2.9), when r = 0, the upand down components are independent and propagate respectively to the right and to theleft. We get ballistic behaviour for any deterministic choice of phases. When t = 0, thewalker oscillates back and forth, switching its coin state. Hence localization takes place forany deterministic choice of phases.2) Specializing the result to the function f(z) = zn, z â S, (2.11) implies the followingalmost sure result on the evolution in time of the quantum moments of the QW, see [17].
4
Corollary 2.3 There exists a set Ω0 of probability one, such that for any Ï â Ω0, anyL > 0 and for any Κ â C2 â l2(Z) of finite support, there exists CÏ <â with
supnâZ
ă|X |LăΚ(n) < CÏ . (2.12)
Remarks:1) Another Corollary of Theorem 2.2 is that the spectrum of UÏ is pure point, almostsurely, see [17].2) The proof of this localization result leans on the fact that the unitary operator UÏ canbe viewed as a doubly infinite five-diagonal band matrix on l2(Z), a set of operators firstconsidered in [9]. Moreover, under Assumptions a), b), c), the randomness appears insuch a way that it is possible to adapt the Aizenman-Molchanov method [2] developed forthe study of Anderson localization in the self-adjoint case to this unitary setup, along thelines of [17]. Indeed, we show below that UÏ takes the form DÏS in an adapted basis, whereDÏ is a diagonal unitary operator whose elements are i.i.d. phases and S is a deterministicunitary band matrix. This is the starting point of [17]. Nevertheless, the random operatorat hand differs from those considered in [9, 17] by the form of the deterministic matrixS. This forces us to revisit the arguments based on the analysis of products of randomtransfer matrices and the associated Lyapunov exponents, coupled with fractional momentestimates of finite volume restrictions of the associated resolvent.
For the sake of comparison, we compute the asymptotics of the expectation of all(integer) moments of the position operator for a dynamics that is random in time. Wefollow [25] and consider the random evolution operator at time n given by
UÏ(n, 0) = UnUnâ1 · · ·U2U1, where Uk = S(Ck â I), (2.13)
where the i.i.d. random variables CkkâZ are defined by (2.8). The random phases areassumed in this case to satisfy
E(eâiÏâk) = E(eâiÏ
âk) = 0, (2.14)
but their common probability measure ” is otherwise arbitrary. We extend the results of[25] on the diffusive behavior of the QW in this setup by proving the following
Proposition 2.4 (Temporal disorder) Assume the evolution operator is given by (2.13)with random phases (2.10) distributed according to a measure d” such that (2.14) holds.Let Κ0 = Ï0 â |0ă be of norm one. Then, for any L â N, we have
limnââ
E(ăXLăÏ0(n))
nL/2= D(L) (2.15)
where
D(L) =
0 if L is odd
1 · 3 · 5 · · · (Lâ 1)(t2/r2)L/2 if L is even.(2.16)
Remark:As observed in [25], assumption (2.14) allows one to express the averaged motion as apersistent classical random walk on Z, which is known to display a diffusive behaviour.Actually, using results showing convergence of certain Markov chains to Brownian motion,e.g. [7], one can deduce the above result. Nevertheless, by considering the associated gener-ating function and by analyzing its large times asymptotics, we provide a short elementaryderivation of the values of all diffusion constants.
Before we turn to the proofs of these results, we briefly investigate the invariance prop-erties of the model stemming from its structure (2.7) and assumptions (a), (b) and (c).
5
2.1 Structure of UÏ
Lemma 2.5 Under the assumptions (a), (b) and (c), the operator UÏ defined by (2.7) isunitarily equivalent to the one defined by the choice
[eâiÏâ
kt âeâiÏâkr
eâiÏâkr eâiÏâ
kt
]where 0 †t, r †1 and r2 + t2 = 1 (2.17)
and ÏâkkâZ âȘ Ïâ
kkâZ are iid random variables defined by (2.10), up to multiplication bya global deterministic phase.
Proof. We first note that assumption (a) implies that we can consider one Ck at a timeand (b) and (c) imply that the randomness appears as phases only (see after (2.5)). Theindependence of the right and left probability amplitudes implies that the rows of Ck areindependent random variables.
As Ck must be unitary for all realizations, the scalar product of its columns equals zero.This shows that the random phases of the elements on the same line are identical, i.e.
Ck =
[eâiÏâ
ka eâiÏâkb
eâiÏâkc eâiÏâ
kd
]=
[eâiÏâ
k 0
0 eâiÏâk
][a bc d
], (2.18)
with C âĄ[a bc d
]unitary and independent of k. Parametrizing C as
C = eâiΞ[teâiα ireiÎł
ireâiÎł teiα
], with 0 †r, t †1 and α, Îł, Ξ â T, (2.19)
one sees that at the cost of multiplying UÏ by eiΞ, we can assume Ξ = 0. Moreover, with
ÎŁ =
[1 00 âieiÎł
], we compute
ÎŁCÎŁâ1 =
[teâiα ârr teiα
]. (2.20)
Up to unitary equivalence, we can assume that C has the form of the r.h.s. of (2.20).
To get rid of the last phase α, we make use of the representation of UÏ in the orderedbasis . . . , |âă â |nâ 1ă, |âă â |nâ 1ă, |âă â |nă, |âă â |nă, . . . as the band matrix
UÏ = DÏS, with S =
. . . r teiα
0 00 0 r teiα
teâiα âr 0 00 0 r teiα
teâiα âr 0 00 0 .. .teâiα âr
, (2.21)
where (upon relabeling the indices of the random phases) DÏ is diagonal with i.i.d. entries,DÏ = diag(. . . , eâiÏk , eâiÏk+1 , . . .), and the diagonal of S consists of zeroes. Our conventionis to fix a labeling of the canonical basis en of that space so that the odd rows containr, teiα and the even rows contain teâiα,âr.
6
Following [9], we introduce the unitary operator V defined by V en = eiζnen, whereζn â T, n â Z. We compute, for any k â Z,
V â1UÏV e2k = ei(ζ2kâζ2kâ1)eâiÏ2kâ1re2kâ1 + ei(ζ2kâζ2k+2)eâiÏ2k+2teâiαe2k+2 (2.22)
V â1UÏV e2k+1 = ei(ζ2k+1âζ2kâ1)eâiÏ2kâ1teiαe2kâ1 â ei(ζ2k+1âζ2k+2)eâiÏ2k+2re2k+2.
Choosing for any k â Z
ζ2k+2 = ζ2k+1 = âkα, (2.23)
yields the result.
Remarks:1) The last argument of the proof above can be adapted to show that the randomness ofthe model of QW in random environment considered by [22] can be gauged away, whichexplains the absence of localization. Indeed, this model is characterized by random coinmatrices of the form (with t = r = 1/
â2)
Ck =
[teiÏk rr âteâiÏk
], where ÏkkâZ are i.i.d. random variables on T. (2.24)
The corresponding operator UÏ then satisfies
V â1UÏV e2k = ei(ζ2kâζ2kâ1)re2kâ1 + ei(ζ2kâζ2k+2)eâiÏ2kte2k+2
V â1UÏV e2k+1 = âei(ζ2k+1âζ2kâ1)eiÏ2kte2kâ1 + ei(ζ2k+1âζ2k+2)re2k+2. (2.25)
Choosing ζ0 = 0, ζâ1 = 0 and, for any k â Z+,
ζ2k+2 = â(Ï2k + Ï2kâ2 + · · · + Ï0),
ζâ2k = Ïâ2k + Ïâ2kâ2 + · · · + Ïâ2,
ζ2k+1 = â(Ï2k + Ï2kâ2 + · · · + Ï0),
ζâ(2k+1) = Ïâ2k + Ïâ2kâ2 + · · · + Ïâ2, (2.26)
we see that UÏ is unitarily equivalent to the deterministic operator U0, characterized by
the the deterministic coin operator C =
[t rr ât
].
2) We will consider from now on UÏ to be defined by (2.21) (with α = 0) acting on theHilbert space l2(Z), with canonical basis e2k = | âă â |kă, e2k+1 = | âă â |kă. We notethat the position operator x on l2(Z) is related to X defined by (2.6) on C2 â l2(Z) by(xâ 1)/2 †X †x/2, so that dynamical localization is equivalent in both representations.3) In case r = 0, t = 1, (2.21) shows that UÏ is equivalent to a direct sum of two shifts,which, in other words, corresponds to Ck = I, k â Z. This leads to ballistic behaviour. Incase r = 1, t = 0, the two-dimensional subspaces spane2kâ1, e2k, k â Z, are invariant,which forbids any kind of transport.4) The appealing five diagonal representation (2.21) of QW on N is also related so calledCMV matrices associated with certain orthogonal polynomials on the unit circle, whenrestricted to l2(Z+). This fact is used in [11] to study certain properties of deterministicQW.
Finally note that the structure of S can be described also via the two unitary operators(âevenâ and âoddâ)
Be =
[r tt âr
]and Bo =
[0 11 0
]. (2.27)
7
S is the product of SoSe, where Se is block-diagonal, with blocks Be placed so that the(1, 1) entry of Be lies on even indices of the diagonal, and So is block-diagonal with blocksBo placed so that the (1, 1) entry of Bo lies on odd diagonal elements. Obviously Se andSo are unitary, and thus so is S and hence UÏ.
3 Proofs
We make use of the structure just described to define finite volume approximations of Sand thus UÏ.
3.0.1 Finite and semifinite volume truncation
On l2(2n0, 2m0) we define the matrices
So =
1Bo
. . .
Bo
, Se =
Be. . .
Be1
, (3.1)
where each 1 is a 1 Ă 1 block, and the 2 Ă 2 blocks Bo, Be are given in (2.27). The finitevolume propagator is
UÏ = DÏS, with S = SoSe =
r t0 0 r tt âr 0 0
0 0 r tt âr 0 0
0 0t âr
. . .
r t 00 0 00 0 1t âr 0
. (3.2)
Here, DÏ = diag(eâiÏ2n0 , . . . , eâiÏ2m0 ).Similarly we define U±
Ï = DÂ±Ï S
± acting on the Hilbert spaces l2(2n0, . . .) and l2(. . . , 2m0).The matrix S+ is defined as in (3.2), but where the 2 Ă 4 blocks are repeated indefinitelytowards the bottom right. Similarly we define Sâ, and the D±
Ï are simply the correspond-ing semifinite truncations of the diagonal matrix DÏ. Analogous definitions hold for theHilbert spaces l2(2n0 + 1, . . .), l2(. . . , 2m0 + 1).
3.0.2 Transfer matrix
Infinite volume. For z â C\0, consider the equation (U â z)Ï = 0 (dropping thesubscript Ï in the notation). In what follows we always consider z 6= 0; an analysis forz = 0 can be done in a similar way, but is not of any use in this work. Due to the bandstructure of U , we obtain a recurrence relation for the components of vectors solving theabove equation. Denote by Ïn the component of Ï along the basis element en. A vector Ï
8
solves (U â z)Ï = 0 if and only if its components satisfy
[Ï2n+1
Ï2n
]= Tz(Ï2n, Ï2nâ1)
[Ï2nâ1
Ï2nâ2
], (3.3)
where the transfer matrix is given by
Tz(Ï2n, Ï2nâ1) =eâiÏ2n
zt
[z2ei(Ï2nâ1+Ï2n) + r2 ârt
ârt t2
]. (3.4)
We havedetTz = eâi(Ï2nâÏ2nâ1), (3.5)
so equation (3.3) can be inverted. It follows that if we choose Ï1 and Ï0 then all componentsof Ï are fixed, by (3.3). A priori therefore, each eigenvalue could be doubly degenerate;but this does not happen as we see from the following argument. Suppose that Ï and Ïare two eigenvectors with the same eigenvalue z. Then we have
[Ï2n+1 Ï2n+1
Ï2n Ï2n
]= Tz(Ï, n)
[Ï1 Ï1
Ï0 Ï0
], (3.6)
whereTz(Ï, n) := Tz(Ï2n, Ï2nâ1) · · ·Tz(Ï2, Ï1). (3.7)
The determinant of the matrix on the left side of (3.6) tends to zero as n â â, since Ïand Ï are eigenvectors and hence belong to l2. On the other hand, | detTz(Ï, n)| = 1 forall n. We conclude that the columns of the matrix to the right in (3.6) are multiples ofeach other, and hence so are the vectors Ï and Ï.
Solutions ϱ. Take 0 6= z â C from the resolvent set of U . Then there are unique (upto multiplication by a scalar) solutions ϱ â l2± to (U â z)Ï = 0. Here, l2± = Ï â l2 :ânâ„1 |ϱn|2 < â. To see existence we set Ï0 = (U â z)â1e0, so that (U â z)Ï0 = e0,
and we choose Ï+k = [Ï0]k for k â„ 1, Ïâ
k = [Ï0]k for k †â1 . We then define ϱk for the
remaining components k by applying the transfer matrix. Uniqueness of ϱ is shown justas in the above argument following (3.6).
Similarly to (3.6), we have
[Ï+
2n Ïâ2n
Ï+2nâ1 Ïâ
2nâ1
]= T (Ï, z, n)
[Ï+
2 Ïâ2
Ï+1 Ïâ
1
], (3.8)
for a matrix satisfying | detT (Ï, z, n)| = 1. The columns of the matrix to the right arelinearly independent, for otherwise Ï+ and Ïâ would be multiples of each other. This inturn would mean that ϱ â l2, which is not the case since z is not an eigenvalue of U . Bytaking the determinant in (3.8) we see that for all n â Z,
Ï+2nÏ
â2nâ1 â Ï+
2nâ1Ïâ2n 6= 0. (3.9)
Semifinite volume. Consider U+ acting on l2(2n0, . . .). We solve the equation (U+ âz)Ï = 0 recursively and obtain for n â„ n0 + 2
[Ï2n+1
Ï2n
]= T+
z (Ï, n)
[â rt
zt2 eiÏ2n0+1(zeiÏ2n0 â r)
1 0
] [Ï2n0+2
Ï2n0
], (3.10)
and
Ï2n0+1 =eiÏ2n0
t(z â reâiÏ2n0 )Ï2n0
. (3.11)
9
Here, we have set T+z (Ï, n) = Tz(Ï2n, Ï2nâ1) · · ·Tz(Ï2n0+4, Ï2n0+3). A similar argument
as in the infinite volume case shows that each eigenvalue has multiplicity one.
Lyapunov exponent. Products of transfer matrices yield generalized eigenvectors Ï solu-tions to (U â z)Ï = 0 and their asymptotic behaviour at infinity is related to the associ-ated Lyapunov exponents. The following theorem is an important ingredient in the proofof Lemma 3.8 below (see [15], Appendix A, for a proof of that lemma, which we do notreproduce in the present paper.
Theorem 3.1 Let ” be absolutely continuous with density Ï â Lâ(T) and having supportwith nonempty interior. Then, there is an Ç« > 0 such that for every nonzero z â C with1 â Ç« < |z| < 1 + Ç«, the limit
limnââ
1
nlog âTz(Ï2n, Ï2nâ1) · · ·Tz(Ï2, Ï1)â = Îł(z) (3.12)
exists almost surely, Îł(z) is deterministic and strictly positive. Moreover, z 7â Îł(z) iscontinuous.
Remark: The limit Îł(z) is called the Lyapunov exponent.The proof of this theorem is given in the appendix.
3.0.3 Greenâs function
Infinite volume. Let z 6= 0 be in the resolvent set of U , and let ϱ be the unique solutionsof (U â z)ϱ = 0 in l2±. We define the resolvent by
GÏz = (UÏ â z)â1 (3.13)
(leaving sometimes out the superscript Ï). Using the explicit form of U we solve theequation [(U â z)Gzen]k = ÎŽk,n (Kronecker symbol) for Gzen and hence obtain the matrixelements of the resolvent. The following is the result. For all n â Z, we have
ăek, GÏz e2nă =1
z
1
Ï+2nâ1Ï
â2n â Ï+
2nÏâ2nâ1
Ï+
2nâ1 Ïâk , k †2nâ 1
Ï+k Ïâ
2nâ1, k â„ 2n(3.14)
and
ăek, GÏz e2nâ1ă =1
z
1
Ï+2nâ1Ï
â2n â Ï+
2nÏâ2nâ1
Ï+
2n Ïâk , k †2nâ 1
Ï+k Ïâ
2n, k â„ 2n(3.15)
Notice that these matrix elements are well defined due to (3.9).
Semifinite volume. One can easily construct a vector Ïa with components Ïak, k â Z,satisfying (U â z)Ïa = 0 and [(U+ â z)Ïa]k = 0 for all k â„ a := 2n0. Here, U+ is actingon l2(2n0, . . .). The components of Ïa are obtained by the transfer matrix for a suitableâinitial conditionâ Ïa2n0
and Ïa2n0+1.In the same way, one can construct Ïb with components Ïbk, k â Z, satisfying (Uâz)Ïb =
0 and [(Uâ â z)Ïb]k = 0 for all k †b := 2m0. Here, Uâ acts on l2(. . . , 2m0).The Greenâs functions on the semi-finite spaces are defined by
G±z = (U± â z)â1, (3.16)
for |z| 6= 1 and where the operator is acting on the l2 space over the appropriate half-lineof integers. Proceeding as in the infinite-volume case, one shows that ăek, G+
z e2nă andăek, G+
z e2nâ1ă, for k â„ 2n0, are given by the r.h.s. of (3.14) and (3.15) respectively, where
10
Ïâ is replaced by Ïa. Furthermore, ăek, Gâz e2nă and ăek, Gâ
z e2nâ1ă, for k †2m0, are givenby the r.h.s. of (3.14) and (3.15) respectively, where Ï+ is replaced by Ïb.
Finite volume. On l2(2n0, . . . , 2m0) the reduced unitary is given by (3.2). The matrix
elements of the Greenâs function G[2n0,2m0]z = (U [2n0,2m0] â z)â1 is easily obtained in terms
of the vectors Ïa, Ïb defined in the previous paragraph. We mention explicitly only theformula
âše2n0
, G[2n0,2m0]z e2m0
â©= â1
z
1
Ïa2m0â1Ïb2m0
â Ïb2m0â1Ïa2m0
Ïb2m0â1Ïa2n0
. (3.17)
3.1 Bound on fractional moments of Greenâs function
For |z| 6= 1 denote the matrix elements of the resolvent of UÏ by
âšek, (UÏ â z)â1el
â©= GÏz (k, l). (3.18)
Theorem 3.2 Assume that the random variables Ïk, k â Z, are iid with distributiond”(Ï0) = Ï(Ï0)dÏ0 with Ï â Lâ(T). Let 0 < s < 1 and 0 < Ç« < 1. Then there exists aconstant 0 < C(s, Ç«) <â such that
E[|GÏz (k, l)|s
]†C(s, ǫ), (3.19)
for all z â ζ â C : 1 â Ç« < |ζ| <â, |ζ| 6= 1 and all k, l â Z.
Remark: The result holds for the semifinite volume restrictions G±z (k, l) as well.
The proof of this result is given in Hamzaâs thesis (Theorem 6.1) and [17]. It only dependson the fact that UÏ is the productDÏS, where S is unitary andDÏ = diag(. . . , Ïk, Ïk+1, . . .)has the specified random properties.
3.2 Exponential decay of fractional moments of Greenâs function
Theorem 3.3 Assume that the random variables Ïk, k â Z, are iid with distributiond”(Ï0) = Ï(Ï0)dÏ0 with Ï â Lâ(T) and that the support of ” contains a non-empty openset. Let 0 < Ç« < 1. There exists an s with 0 < s < 1/2, and there exist constants0 < C <â, α > 0, such that
E[|GÏz (k, l)|s
]†Ceâα|kâl|, (3.20)
for all z â C satisfying |z| 6= 1 and 11+Ç« < |z| < 1 + Ç«, and for all k, l â Z.
The proof is done in three steps that we now describe.
3.2.1 Reduction to even matrix elements
Theorem 3.4 Suppose that |k â l| â„ 4 and let m,n be the unique integers s.t. k â2m, 2m+ 1 and l â 2nâ 1, 2n. Then we have
|GÏz (k, l)| †|z| + t
r
[ |z|r
+|z|2 + r2
rt
] â
p1, p2â0,1
|GÏz (2mâ 2p1, 2n+ 2p2)|. (3.21)
11
It follows that for any s > 0
|GÏz (k, l)|s †C(z, t)sâ
p1, p2â0,1
|GÏz (2mâ 2p1, 2n+ 2p2)|s, (3.22)
where the constant C is that in front of the sum in (3.21) and where we have we used that(a+ b)s †[as + bs] for a, b ℠0 and 0 < s < 1.
Proof of Theorem 3.4. We write simply G for GÏz in this proof. We first show that fork 6= n,
G(2k + 1, 2n) =z
reiÏ2kâ1G(2k â 2, 2n) â z2eâi(Ï2k+Ï2kâ1) + r2
rtG(2k, 2n), (3.23)
and that for k â„ 2n+ 2 or k †2nâ 1,
G(k, 2nâ 1) =z
re+iÏ2nâ1G(k, 2n) â t
r
dn+1
dnG(k, 2n+ 2), (3.24)
where dn = [Ï+2nâ1Ï
â2n âÏ+
2nÏâ2nâ1]
â1 (c.f. (3.9)). Combining (3.23) and (3.24), and using
the fact that | dn
dn+1| = 1, the bound (3.21) is easily obtained. The latter fact follows from
dâ1n+1 = â det
[Ï+
2n+2 Ïâ2n+2
Ï+2n+1 Ïâ
2n+1
]= â det
(T
[Ï+
2n Ïâ2n
Ï+2nâ1 Ïâ
2nâ1
])= dâ1
n detT,
where | detT | = 1.Let us now derive (3.24) for k †2nâ 1. We have G(k, 2nâ 1) = 1
zdnÏ+2nÏ
âk , and Ï+
2n is
given, using the transfer matrix as in (3.3), by Ï+2n = eâiÏ2n
zt
(ârtÏ+
2nâ1 + t2Ï+2nâ2
). In this
relation, we replace Ï+2nâ2 by solving (3.3) (using the inverse transfer matrix), yielding the
expression Ï+2nâ2 = r
z eâiÏ2nâ1Ï+
2n+1 + eâiÏ2nâ1
zt [z2ei(Ï2nâ1+Ï2n) + r2]Ï+2n. It follows that
G(k, 2nâ 1) = â rzeâiÏ2nG(k, 2n) +
rt
z2eâi(Ï2n+Ï2nâ1)
dn+1
dnG(k, 2n+ 2)
+1
z2eâi(Ï2n+Ï2nâ1)[z2ei(Ï2nâ1+Ï2n) + r2]G(k, 2nâ 1),
which yields (3.24). If k â„ 2n, the same arguments with Ïâ2n in place of Ï+
2n give the result.Expression (3.23) is obtained in an analogous manner.
3.2.2 Reduction to finite volume
Theorem 3.5 Assume the hypotheses of Theorem 3.2, with 0 < s < 1/2. For any pairof integers m 6= n let k+ = maxm,n and kâ = minm,n. There exists a constantC”(s, t, Ç«) <â s.t. we have
E[|Gz(2m, 2n)|s
]2 †C”(s, t, Ç«)E[|G[2kâ,2k+]
z (2kâ, 2k+)|2s], (3.25)
for all z satisfying |z| 6= 1 and 1 â Ç«1+Ç« < |z| < 1 + Ç«.
Proof. For a given m, we decompose l2(Z) = l2(. . . , 2mâ 1)â l2(2m, . . .), and set
U = Uâ â U+ + Î, where Uâ = DÏS(ââ,2mâ1]o S
(ââ,2mâ1]e (of course, the diagonal DÏ is
12
restricted to the correct half-space) and where U+ = DÏS[2m,â)o S
[2m,â)e . The operator Î
is explicitly given by
ăek,Îelă = Î(k, l) =
reâiÏ2mâ1 , (k, l) = (2mâ 1, 2mâ 1), (2mâ 1, 2m)âreâiÏ2m , (k, l) = (2m, 2mâ 1), (2m, 2m)âteâiÏ2mâ1 , (k, l) = (2mâ 1, 2mâ 2)teâiÏ2mâ1 , (k, l) = (2mâ 1, 2m+ 1)teâiÏ2m , (k, l) = (2m, 2mâ 2)âteâiÏ2m , (k, l) = (2m, 2m+ 1)
(3.26)
and Î(k, l) = 0 for all other values k, l. Denote
Gmz := (Uâ â U+ â z)â1 = G(ââ,2mâ1]z âG[2m,â)
z , (3.27)
then the second resolvent identity gives Gz = Gmz â GzÎGmz . Suppose that m †n â 1.
Then Gmz (2m, 2n) = G[2m,â)z (2m, 2n) and we have
Gz(2m, 2n) = G[2m,â)z (2m, 2n) â
â
k,l
Gz(2m, k)Î(k, l)Gmz (l, 2n). (3.28)
According to (3.26), the matrix elements Î(k, l) vanish unless l = 2m, 2mâ1, 2mâ2, 2m+1.However, for l = 2mâ1, 2mâ2 we have Gmz (l, 2n) = 0 (by the block diagonal form (3.27)),so the only terms in the sum in (3.28) are with l = 2m, 2m+1. It follows that (still m < n)
|Gz(2m, 2n)| (3.29)
â€
1 + 2[1 + (|z| + r)tâ1] maxk=2m,2mâ1
|Gz(2m, k)||G[2m,â)
z (2m, 2n)|.
To arrive at this bound we use in (3.28) the relation
Gmz (2m+ 1, 2n) =eiÏ2m
t[z â reâiÏ2m ] Gmz (2m, 2n)
which follows readily from the explicit expressions of the semi-finite volume Greenâs func-tion, see (3.11).
In a next step, we estimate |G[2m,â)z (2m, 2n)| in (3.29) from above by the finite-volume
Greenâs function |G[2m,2n]z (2m, 2n)|. We split the space as l2(2m, . . .â) = l2(2m, 2n)â
l2(2n+ 1, . . .â), so that U [2m,â) = U [2m,2n] â U [2n+1,â) + Î, with
âšek, Îel
â©= Î(k, l) =
(r â 1)eâiÏ2nâ1, (k, l) = (2nâ 1, 2n),teâiÏ2nâ1 , (k, l) = (2nâ 1, 2n+ 1),teâiÏ2n+2, (k, l) = (2n+ 2, 2n)â(r + 1)teâiÏ2n+2, (k, l) = (2n+ 2, 2n+ 1)
(3.30)
and Î(k, l) = 0 for all other values k, l. By the second resolvent identity, we have G[2m,â)z =
(G[2m,2n]z âG
[2n+1,â)z )(1â ÎG
[2m,â)z ). Taking the matrix elements, with m < n, we obtain
G[2m,â)z (2m, 2n)
= G[2m,2n]z (2m, 2n)â
â
k,l
âše2m, G
[2m,2n]z âG[2n+1,â)
z ek
â©Î(k, l)G[2m,â)
z (l, 2n).
Note that due to (3.30), the sum over k is really only over k = 2nâ 1, 2n+2, and the termwith k = 2n+ 2 is absent since the scalar product in the sum vanishes for this value of k.
13
By using additionally that G[2m,2n](2m, 2nâ 1) = zeiÏ2nâ1G[2m,2n](2m, 2n), which followsfrom the explicit formulas for the Greenâs function, we obtain the bound
|G[2m,â)z (2m, 2n)| â€
1 + 3|z| max
k=2n,2n+1|G[2m,â)
z (k, 2n)||G[2m,2n]
z (2m, 2n)|. (3.31)
Combining (3.31) with (3.29) yields, for $m \le n-1$,
$$|G_z(2m,2n)| \le \Big[1 + 2\big(1 + (|z|+r)t^{-1}\big)\max_{k=2m,2m-1}|G_z(2m,k)|\Big] \times \Big[1 + 3|z|\max_{k=2n,2n+1}\big|G_z^{[2m,\infty)}(k,2n)\big|\Big]\, \big|G_z^{[2m,2n]}(2m,2n)\big|. \qquad (3.32)$$
We take the expectation of inequality (3.32), use Hölder's inequality and Theorem 3.2 to arrive at the bound ($m \le n-1$)
$$\mathbb{E}\big[|G_z(2m,2n)|^s\big]^2 \le C_\mu(s,t,\epsilon,z)\, \mathbb{E}\big[|G_z^{[2m,2n]}(2m,2n)|^{2s}\big], \qquad (3.33)$$
for $0 < s < 1/2$, $0 < \epsilon < 1$, $|z| \neq 1$ such that $1-\epsilon < |z| < 1+\epsilon$, and where $C_\mu(s,t,\epsilon,z)$ is bounded uniformly in compact regions of $z$.
Next we deal with matrix elements of the resolvent with $m \ge n+1$. One can proceed as in the above argument, or instead use the following path. Since $U$ is unitary,
$$|G_z(k,l)| = \big|\langle e_k, (U-z)^{-1} e_l\rangle\big| = \big|\langle e_l, (U^{-1} - \bar z)^{-1} e_k\rangle\big|,$$
and $(U^{-1} - \bar z)^{-1} = -1/\bar z - (1/\bar z)^2\, (U - 1/\bar z)^{-1}$. Hence we have, for $m \ge n+1$,
$$|G_z(2m,2n)| = \frac{1}{|z|^2}\, \big|G_{1/\bar z}(2n,2m)\big|. \qquad (3.34)$$
We want to use the bound (3.33) on the right-hand side of (3.34). The condition $1-\epsilon < 1/|z| < 1+\epsilon$ is equivalent to $1 - \frac{\epsilon}{1+\epsilon} < |z| < 1 + \frac{\epsilon}{1-\epsilon}$. It follows that under the last condition on $|z|$ we have, for $m \ge n+1$,
$$\mathbb{E}\big[|G_z(2m,2n)|^s\big]^2 \le C_\mu(s,t,\epsilon,1/\bar z)\, \frac{1}{|z|^{4s}}\, \mathbb{E}\big[|G_{1/\bar z}^{[2n,2m]}(2n,2m)|^{2s}\big]. \qquad (3.35)$$
Combining (3.33) and (3.35) yields the bound (3.25), provided $|z| \neq 1$, $1-\epsilon < |z| < 1+\epsilon$ and $1 - \frac{\epsilon}{1+\epsilon} < |z| < 1 + \frac{\epsilon}{1-\epsilon}$.
3.2.3 Exponential decay in finite volume
We assume that the support of the measure $\mu$ contains a non-empty open set in $[0,2\pi)$. This implies positivity of the Lyapunov exponent, see Theorem 3.1, a result we use implicitly in this section.
Theorem 3.6 There are numbers $\alpha, s$ satisfying $\alpha > 0$, $0 < s < 1$, such that for all $z \in \mathbb{C}$ with $|z| \neq 0,1$ and all $n < m$, we have
$$\mathbb{E}\big[|G_z^{[2n,2m]}(2n,2m)|^s\big] \le C\, e^{-\alpha(m-n)}. \qquad (3.36)$$
The constant $C$ depends on $s, z, t$ and $\mu$, but not on $m, n$. It is furthermore uniform in $z$, for $|z| \neq 1$ restricted to compact sets of $\mathbb{C}\setminus\{0\}$ (see the explicit bound in the proof).
Proof of Theorem 3.6. The proof is based on the following two Lemmas.
Lemma 3.7 Let $0 < s < 1$. Then there is a constant $0 < C_\mu(s) < \infty$ such that for all $|z| \neq 0,1$,
$$\mathbb{E}\big[|G_z^{[2n,2m]}(2n,2m)|^s\big] \le |z|^{-s}\,(1 + |z|^{-s})(1 + 2^s C_\mu(s))\; \mathbb{E}\left[\left\| \begin{pmatrix} \varphi^a_{2m} \\ \varphi^a_{2m-1} \end{pmatrix} \right\|^{-s}\right]. \qquad (3.37)$$
Here, $\varphi^a$ is the solution satisfying the left boundary condition at $a = 2n$, see also before (3.16).
We present a proof of this lemma at the end of the present section. The following a priori bound has been adapted from the self-adjoint Anderson model situation (see [12], Lemma 5.1). Its proof uses the strict positivity and continuity of the Lyapunov exponent, which we establish in Theorem 3.1; since it is very similar to the proof given for the self-adjoint Anderson model in [12], we omit it and refer to Appendix A of [15] for details.
Lemma 3.8 Let $\Lambda \subset \mathbb{C}$ be a compact set not containing the origin. There are numbers $\alpha > 0$ and $0 < s < 1$ (depending on $\Lambda$) such that the products of transfer matrices (3.4) satisfy, for all $m > n$,
$$\mathbb{E}\big[\|T_z(\omega_{2m},\omega_{2m-1}) \cdots T_z(\omega_{2n},\omega_{2n-1})\, v\|^{-s}\big] \le C\, e^{-\alpha(m-n)}, \qquad (3.38)$$
for any normalized vector $v \in \mathbb{C}^2$. The constant $C$ is independent of $z \in \Lambda$ and $v$.
A combination of these two lemmas yields a proof of Theorem 3.6, as follows. Solving equation (3.3) for $\varphi^a_{2m-1}$ gives the relation $\varphi^a_{2m-1} = e^{-i\omega_{2m-1}}[t\varphi^a_{2m+1} + r\varphi^a_{2m}]/z$, and therefore
$$\begin{pmatrix} \varphi^a_{2m+1} \\ \varphi^a_{2m} \end{pmatrix} = \frac{1}{t} \begin{pmatrix} -r & z e^{i\omega_{2m-1}} \\ t & 0 \end{pmatrix} \begin{pmatrix} \varphi^a_{2m} \\ \varphi^a_{2m-1} \end{pmatrix} \equiv T \begin{pmatrix} \varphi^a_{2m} \\ \varphi^a_{2m-1} \end{pmatrix}.$$
We have $\|T\| \le C(1+|z|)/t$, for some $C$ independent of $z, r, t$ and any of the phases. Using this expression and the definition of the transfer matrices (3.3), we obtain
$$\left\| \begin{pmatrix} \varphi^a_{2m} \\ \varphi^a_{2m-1} \end{pmatrix} \right\| \ge \big\|T_z(\omega_{2m},\omega_{2m-1}) \cdots T_z(\omega_{2n+2},\omega_{2n+1})\, v^a\big\|\; \frac{t}{C(1+|z|)}, \qquad (3.39)$$
where we chose $\begin{pmatrix} \varphi^a_{2n+1} \\ \varphi^a_{2n} \end{pmatrix}$ to be the normalized vector
$$v^a = \frac{1}{\sqrt{t^2 + |z e^{i\omega_{2n}} - r|^2}} \begin{pmatrix} z e^{i\omega_{2n}} - r \\ t \end{pmatrix}.$$
Combining (3.39) with (3.37) and (3.38) proves the bound (3.36), and hence Theorem 3.6. It remains to give the
Proof of Lemma 3.7. The components of $\varphi^b$, with $b = 2m$, satisfy $-z\varphi^b_{2m-1} + e^{-i\omega_{2m-1}}\varphi^b_{2m} = 0$, see before (3.16). Moreover, since $\varphi^b$ is defined only modulo a multiplicative factor, we set $\varphi^b_{2m-1} = 1$ (note that $\varphi^b_{2m-1} = 0$ would imply $\varphi^b = 0$, so the normalization $\varphi^b_{2m-1} = 1$ is possible). Therefore we obtain from (3.17) the following expression for the Green's function:
$$G_z^{[2n,2m]}(2n,2m) = \frac{1}{z}\; \frac{1}{\varphi^a_{2m} - z e^{i\omega_{2m-1}}\, \varphi^a_{2m-1}}. \qquad (3.40)$$
Note that $\varphi^a_{2m}$ and $\varphi^a_{2m-1}$ cannot both vanish, since otherwise we would have $\varphi^a = 0$. It follows from (3.3) that the components of $\varphi^a$ satisfy
$$\begin{pmatrix} \varphi^a_{2m-1} \\ \varphi^a_{2m-2} \end{pmatrix} = T_z(\omega_{2m-2},\omega_{2m-3}) \begin{pmatrix} \varphi^a_{2m-3} \\ \varphi^a_{2m-4} \end{pmatrix}, \qquad (3.41)$$
and furthermore that $\varphi^a_{2m} = \frac{e^{-i\omega_{2m}}}{z}\big[t\varphi^a_{2m-2} - r\varphi^a_{2m-1}\big]$. Consequently, $\varphi^a_{2m}$ depends only on $\omega_{2m}$ and on the $\omega_j$ with $j \le 2m-2$. We have
$$\mathbb{E}\big[|G_z^{[2n,2m]}(2n,2m)|^s\big] = |z|^{-s}\; \widetilde{\mathbb{E}}\left[\int_0^{2\pi} d\mu(\omega_{2m-1})\, \frac{1}{|\varphi^a_{2m} - z e^{i\omega_{2m-1}}\varphi^a_{2m-1}|^s}\right], \qquad (3.42)$$
where $\widetilde{\mathbb{E}}$ is the expectation over all $\omega_j$, $j = 2n,\dots,2m-2$ and $j = 2m$. The dependence on $\omega_{2m-1}$ of the integrand in
$$I := \int_0^{2\pi} d\mu(\omega_{2m-1})\, \frac{1}{|\varphi^a_{2m} - z e^{i\omega_{2m-1}}\varphi^a_{2m-1}|^s} \qquad (3.43)$$
is concentrated exclusively in $e^{i\omega_{2m-1}}$. Let us define the vector
$$v_m := \begin{pmatrix} \varphi^a_{2m} \\ \varphi^a_{2m-1} \end{pmatrix}. \qquad (3.44)$$
If $\varphi^a_{2m-1} = 0$ then we have $I = |\varphi^a_{2m}|^{-s} = \|v_m\|^{-s}$. If $\varphi^a_{2m} = 0$ then we have $I = |z|^{-s}|\varphi^a_{2m-1}|^{-s} = |z|^{-s}\|v_m\|^{-s}$. Next suppose that both $\varphi^a_{2m-1}$ and $\varphi^a_{2m}$ are nonzero. We distinguish two cases: either $|\varphi^a_{2m}| \ge |\varphi^a_{2m-1}|$ or $|\varphi^a_{2m}| < |\varphi^a_{2m-1}|$. In the former case we have $\|v_m\| \le |\varphi^a_{2m}| + |\varphi^a_{2m-1}| \le 2|\varphi^a_{2m}|$, and
$$I = |\varphi^a_{2m}|^{-s} \int_0^{2\pi} d\mu(\omega_{2m-1})\, \big|e^{-i\omega_{2m-1}} - z\varphi^a_{2m-1}/\varphi^a_{2m}\big|^{-s} \le C_\mu(s)\,|\varphi^a_{2m}|^{-s} \le 2^s C_\mu(s)\, \|v_m\|^{-s}. \qquad (3.45)$$
Here, we have used that for all $0 < s < 1$ there exists $0 < C_\mu(s) < \infty$ such that for all $\beta \in \mathbb{C}$,
$$\int_0^{2\pi} d\mu(\omega)\, |e^{\pm i\omega} - \beta|^{-s} \le C_\mu(s),$$
see [18]. Next we consider the case $|\varphi^a_{2m}| < |\varphi^a_{2m-1}|$, in which case we have $\|v_m\| \le |\varphi^a_{2m}| + |\varphi^a_{2m-1}| \le 2|\varphi^a_{2m-1}|$, and
$$I = |\varphi^a_{2m-1}|^{-s}|z|^{-s} \int_0^{2\pi} d\mu(\omega_{2m-1})\, \big|e^{i\omega_{2m-1}} - z^{-1}\varphi^a_{2m}/\varphi^a_{2m-1}\big|^{-s} \le |z|^{-s} C_\mu(s)\, |\varphi^a_{2m-1}|^{-s} \le |z|^{-s}\, 2^s C_\mu(s)\, \|v_m\|^{-s}. \qquad (3.46)$$
Combining these estimates, we see that in any event,
$$I \le (1 + |z|^{-s})(1 + 2^s C_\mu(s))\, \|v_m\|^{-s}. \qquad (3.47)$$
This bound, together with (3.42), yields (3.37) with $\widetilde{\mathbb{E}}$ in place of $\mathbb{E}$. But both expressions are the same, since $v_m$ does not depend on $\omega_{2m-1}$.
This completes the proof of Lemma 3.7 and with that the proof of Theorem 3.6.
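The uniform bound from [18] invoked in (3.45)-(3.46) is easy to probe numerically. The sketch below is our illustration, not part of the paper: it takes $\mu$ to be the uniform measure on $[0,2\pi)$ (one admissible choice) and evaluates the integral over a grid of $\beta$; the worst case occurs when $|\beta| = 1$, where the integrand is singular but still integrable for $s < 1$.

```python
import numpy as np

def frac_moment(beta, s, n=200_000):
    """Midpoint-rule value of int_0^{2pi} |e^{iw} - beta|^{-s} dmu(w), mu uniform."""
    w = (np.arange(n) + 0.5) * (2 * np.pi / n)   # midpoints avoid hitting the pole exactly
    return np.mean(np.abs(np.exp(1j * w) - beta) ** (-s))

s = 0.5
# scan beta over several moduli and phases; only |beta| = 1 puts the pole on the circle
vals = [frac_moment(rho * np.exp(1j * phi), s)
        for rho in (0.0, 0.5, 0.9, 1.0, 1.1, 2.0)
        for phi in (0.0, 1.0, 2.5)]
C_mu = max(vals)   # numerical stand-in for C_mu(s); remains finite
```

For $s = 1/2$ the supremum over $\beta$ is attained on the unit circle and is of order one; as $s \uparrow 1$ the constant blows up, consistent with the restriction $0 < s < 1$.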
3.3 Proof of Theorem 3.3
Let $s, \alpha, C$ be as in Theorem 3.6. Combining the latter theorem with Theorem 3.5, we obtain the bound
$$\mathbb{E}\big[|G_z(2n,2m)|^{s/2}\big] \le C_1\, e^{-\frac{\alpha}{2}|m-n|}, \qquad (3.48)$$
for all $|z| \neq 0,1$ satisfying the bound indicated in Theorem 3.5. The constant $C_1$ and all further constants $C_j$ introduced in this proof depend on $z, s, \epsilon$ and $t$, and are uniform in $z$ in compacts of $\mathbb{C}\setminus\{0\}$. We use the inequality (3.22) after Theorem 3.4 to arrive at the following bound for $|k-l| \ge 4$:
$$\mathbb{E}\big[|G_z(k,l)|^{s/2}\big] \le C_2 \sum_{p_1,p_2 \in \{0,1\}} \mathbb{E}\big[|G_z(2m-2p_1,\, 2n+2p_2)|^{s/2}\big]. \qquad (3.49)$$
Next, since $|m - p_1 - (n+p_2)| \ge |m-n| - 2$ and $|k-l| \le |2m-2n| + 2$, combining (3.48) and (3.49) gives
$$\mathbb{E}\big[|G_z(k,l)|^{s/2}\big] \le C_3\, e^{-\frac{\alpha}{4}|k-l|}, \qquad (3.50)$$
provided $|k-l| \ge 4$ and $|z| \neq 1$, $\frac{1}{1+\epsilon} < |z| < 1+\epsilon$. Finally, if $|k-l| < 4$, the bound (3.50) is implied by Theorem 3.2. This completes the proof of Theorem 3.3.
3.4 Proof of Theorem 2.2
Exponential decay of fractional moments of the Green's function implies dynamical localization for band matrices of the type $U_\omega = D_\omega S$, as shown in the following result.
Theorem 3.9 ([17], Theorem 3.2) Assume that the random variables $\omega_k$ satisfy the conditions of Theorem 2.2, and that for some $s \in (0,1)$, $C < \infty$, $\alpha > 0$ and $\epsilon > 0$,
$$\mathbb{E}\big(|G_z(k,l)|^s\big) \le C\, e^{-\alpha|k-l|}$$
for all $k,l \in \mathbb{Z}$ and all $z \in \mathbb{C}$ such that $1-\epsilon < |z| < 1$. Then there is a $C' < \infty$ such that
$$\mathbb{E}\Big[\sup_{f \in C(S),\, \|f\|_\infty \le 1} \big|\langle e_k, f(U_\omega)\, e_l\rangle\big|\Big] \le C'\, e^{-\alpha|k-l|/4}$$
for all $k,l \in \mathbb{Z}$.
With the identification of the basis elements $e_{2k} = |\!\uparrow\rangle \otimes |k\rangle$, $e_{2k+1} = |\!\downarrow\rangle \otimes |k\rangle$, see after (2.21), we immediately obtain that Theorems 3.3 and 3.9 imply Theorem 2.2.
4 Temporal disorder
For the sake of comparison, we briefly consider in this section the case where the disorder is introduced in the model through the time variable. This means that at each time step, the evolution of the coin variable is randomly chosen within a set of i.i.d. unitary $2\times 2$ matrices $\{C_k\}_{k\in\mathbb{Z}}$. Therefore, the random evolution after $n$ steps reads
$$U(n,0) = U_n U_{n-1} \cdots U_2 U_1, \quad \text{where } U_k = S(C_k \otimes I). \qquad (4.1)$$
As we will see, our choice of random unitary matrices $\{C_k\}_{k\in\mathbb{Z}}$,
$$C_k = \begin{pmatrix} e^{-i\omega^+_k}\, t & -e^{-i\omega^+_k}\, r \\ e^{-i\omega^-_k}\, r & e^{-i\omega^-_k}\, t \end{pmatrix}, \qquad (4.2)$$
with $\{\omega^+_k\}_{k\in\mathbb{Z}} \cup \{\omega^-_k\}_{k\in\mathbb{Z}}$ i.i.d. and subject to the condition
$$\mathbb{E}\big(e^{i\omega^+_k}\big) = \mathbb{E}\big(e^{i\omega^-_k}\big) = 0, \qquad (4.3)$$
naturally leads to the study of (classical) persistent random walks. For notational reasons which will become clear below, we change notation to $\uparrow\, = +1$, $\downarrow\, = -1$, so that
$$|\!\uparrow\rangle = |+1\rangle, \quad |\!\downarrow\rangle = |-1\rangle, \quad P_\uparrow = P_{+1}, \quad P_\downarrow = P_{-1}. \qquad (4.4)$$
We first state a deterministic result dealing with expectation values of position operators at time $n$.
Let $X$ denote the position operator on $\mathbb{C}^2 \otimes \ell^2(\mathbb{Z})$ defined by $(X\Psi)(x) = x\,\Psi(x)$, $x \in \mathbb{Z}$, on its maximal domain
$$D = \Big\{\Psi \in \mathbb{C}^2 \otimes \ell^2(\mathbb{Z}) \ \text{s.t.} \ \sum_{x\in\mathbb{Z}}\; \sum_{\sigma\in\{+1,-1\}} \|x\, P_\sigma\, \Psi(x)\|^2_{\mathbb{C}^2} < \infty\Big\}.$$
For $f : \mathbb{Z} \to \mathbb{C}$, we define the operator $F(X)$ on $\mathbb{C}^2 \otimes \ell^2(\mathbb{Z})$ by $(F(X)\Psi)(x) = f(x)\Psi(x)$ on its maximal domain $D_F$, via the spectral theorem. For any $\Psi_0 \in \mathbb{C}^2 \otimes \ell^2(\mathbb{Z})$ such that $U(n,0)\Psi_0 \in D_F$, we write
$$\langle \Psi_0, U(n,0)^*\, F(X)\, U(n,0)\, \Psi_0\rangle \equiv \langle F(X)\rangle_{\Psi_0}(n). \qquad (4.5)$$
Explicit computations with $P_{\pm 1} = |\pm 1\rangle\langle\pm 1|$ yield the following lemma:
Lemma 4.1 With the notation above,
$$U_n U_{n-1} \cdots U_1 = \sum_{\substack{\sigma_n,\dots,\sigma_1 \\ \sigma_j \in \{-1,1\}}} \sum_{x\in\mathbb{Z}} P_{\sigma_n} C_n \cdots P_{\sigma_1} C_1 \otimes \Big|x + \sum_{j=1}^n \sigma_j\Big\rangle\langle x| = \sum_{x\in\mathbb{Z}} \sum_{k=-n}^{n} J_k(n) \otimes |x+k\rangle\langle x|, \qquad (4.6)$$
where
$$J_k(n) = \sum_{\substack{\sigma_n,\dots,\sigma_1 \\ \sum_{j=1}^n \sigma_j = k}} P_{\sigma_n} C_n \cdots P_{\sigma_1} C_1 \in M_2(\mathbb{C}) \qquad (4.7)$$
satisfies $J_k(n) = 0$ if $k$ and $n$ have different parities, or if $k > n$ or $k < -n$.
Moreover, if $\Psi_0 = \psi_0 \otimes |0\rangle$, we have for any $f : \mathbb{Z} \to \mathbb{C}$ and all $n \in \mathbb{N}$,
$$\langle F(X)\rangle_{\Psi_0}(n) = \sum_{k=-n}^{n} f(k)\, \langle \psi_0, J_k(n)^* J_k(n)\, \psi_0\rangle_{\mathbb{C}^2}, \qquad (4.8)$$
where $W_k(n) \equiv \langle \psi_0, J_k(n)^* J_k(n)\, \psi_0\rangle_{\mathbb{C}^2}$ satisfies
$$W_k(n) \ge 0 \quad \text{and} \quad \sum_{k=-n}^{n} W_k(n) = \|\psi_0\|^2. \qquad (4.9)$$
Remarks:
i) If the initial coin vector $\psi_0$ is normalized, which we assume from now on, then the quantum mechanical expectation value of the operator $F(X)$ at time $n$ coincides with the expectation value of the function $f$ with respect to the classical discrete probability distribution $\{W_k(n)\}_{k\in\{-n,\dots,n\}}$ on $\{-n,\dots,n\} \subset \mathbb{Z}$. The quantity $W_k(n)$ is interpreted as the probability to reach site $k \in \mathbb{Z}$ in $n$ steps.
ii) The probabilities $\{W_k(n)\}_{k\in\{-n,\dots,n\}}$ are actually $\psi_0$-dependent random variables for randomly chosen coin operators $\{C_k\}_{k\in\mathbb{Z}}$ with arbitrary distribution. Taking the expectation with respect to the distribution of the $C_k$'s, we get a new discrete probability distribution on $\{-n,\dots,n\} \subset \mathbb{Z}$, given by $\{w_k(n)\}_{k\in\{-n,\dots,n\}}$ with
$$w_k(n) = \mathbb{E}(W_k(n)), \quad k \in \{-n,\dots,n\}, \ \forall n \in \mathbb{N}, \qquad (4.10)$$
with the same interpretation in terms of a classical random walk on $\mathbb{Z}$.
We shall focus on the expectation value of the quantum mechanical moments of the position operator for our choice (4.2) of $\{C_k\}_{k\in\mathbb{Z}}$ under condition (4.3), for a normalized initial condition of the form $\Psi_0 = \psi_0 \otimes |0\rangle$, i.e. on
$$\mathbb{E}\big(\langle X^L\rangle_{\Psi_0}(n)\big) = \sum_{k=-n}^{n} k^L\, w_k(n) \quad \text{as } n \to \infty. \qquad (4.11)$$
We remind the reader that a persistent, or correlated, random walk on $\mathbb{Z}$ is determined by a probability $p$ that the walker moves one unit in the same direction as in the previous step, and a probability $1-p$ that the walker moves one unit in the opposite direction to the last step. See e.g. [29].
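Such a walk is straightforward to simulate. The sketch below is an illustration (the persistence probability $p = 0.8$ and all sizes are arbitrary choices of ours); it exhibits the classical diffusive behaviour $\mathrm{Var}(S_n)/n \to p/(1-p)$, which matches the constant $t^2/r^2$ of the quantum model upon setting $p = t^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def persistent_final_positions(n_steps, p, n_paths, rng):
    """Final positions S_n of independent persistent walks with persistence p."""
    first = rng.choice([-1, 1], size=(n_paths, 1))              # unbiased first step
    keep = np.where(rng.random((n_paths, n_steps - 1)) < p, 1, -1)  # +1 repeat, -1 reverse
    steps = first * np.cumprod(np.hstack([np.ones((n_paths, 1), dtype=int), keep]), axis=1)
    return steps.sum(axis=1)

p, n_steps, n_paths = 0.8, 1000, 4000
finals = persistent_final_positions(n_steps, p, n_paths, rng)
drift = finals.mean() / n_steps        # should vanish
diffusion = finals.var() / n_steps     # should approach p / (1 - p) = 4 here
```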
We let $S_n = \sum_{j=1}^n \sigma_j$ with $\sigma_j \in \{-1,+1\}$ be the random walk defined by $\mathbb{P}(S_n = k) = w_k(n)$. This random walk is characterized as follows.
Proposition 4.2 Assume the matrices $\{C_k\}_{k\in\mathbb{Z}}$ are given by (4.2) and that (4.3) holds. Take $\Psi_0 = \psi_0 \otimes |0\rangle$ with $\psi_0 = \alpha|\!\uparrow\rangle + \beta|\!\downarrow\rangle \in \mathbb{C}^2$ normalized. Then $S_n$ is a persistent random walk with parameters
$$\begin{aligned} \mathbb{P}(\sigma_1 = +1) &= |\alpha|^2 t^2 + |\beta|^2 r^2 - 2\Re(\bar\alpha\beta)\, rt := a \\ \mathbb{P}(\sigma_1 = -1) &= |\alpha|^2 r^2 + |\beta|^2 t^2 + 2\Re(\bar\alpha\beta)\, rt := b = 1 - a \\ \mathbb{P}(\sigma_j = +1\,|\,\sigma_{j-1} = +1) &= \mathbb{P}(\sigma_j = -1\,|\,\sigma_{j-1} = -1) = t^2 \\ \mathbb{P}(\sigma_j = -1\,|\,\sigma_{j-1} = +1) &= \mathbb{P}(\sigma_j = +1\,|\,\sigma_{j-1} = -1) = r^2 = 1 - t^2, \end{aligned}$$
for $j \ge 2$.
Proof: For $n \in \mathbb{N}$, let $\sigma = (\sigma_1,\sigma_2,\dots,\sigma_n) \in \{-1,+1\}^n$. We need to consider
$$w_k(n) = \sum_{\substack{\sigma,\sigma' \in \{-1,+1\}^n \\ \sum_{j=1}^n \sigma_j = \sum_{j=1}^n \sigma'_j = k}} \mathbb{E}\Big(\big\langle \psi_0,\, C_1^* P_{\sigma'_1} C_2^* P_{\sigma'_2} \cdots C_n^* P_{\sigma'_n}\, P_{\sigma_n} C_n \cdots P_{\sigma_1} C_1\, \psi_0\big\rangle\Big), \qquad (4.12)$$
where $\sigma_n = \sigma'_n$. With $P_\sigma = |\sigma\rangle\langle\sigma|$, $\sigma = \pm 1$, and after reorganizing the product, the scalar product under the sum equals
$$\overline{\langle \sigma'_1, C_1\psi_0\rangle}\; \langle \sigma_1, C_1\psi_0\rangle \prod_{j=2}^{n} \overline{\langle \sigma'_j, C_j\, \sigma'_{j-1}\rangle}\; \langle \sigma_j, C_j\, \sigma_{j-1}\rangle,$$
19
where the first two factors can be further expanded as
$$\overline{\big(\alpha\langle \sigma'_1, C_1 (+1)\rangle + \beta\langle \sigma'_1, C_1 (-1)\rangle\big)}\; \big(\alpha\langle \sigma_1, C_1 (+1)\rangle + \beta\langle \sigma_1, C_1 (-1)\rangle\big).$$
By independence, the expectation factorizes and, with the notation $\langle \sigma, C\tau\rangle = C_{\sigma,\tau}$, $\sigma,\tau \in \{+1,-1\}$, we immediately get from (4.2) and (4.3) that
$$\mathbb{E}\big(\overline{C_{\sigma'_j,\sigma'_{j-1}}}\; C_{\sigma_j,\sigma_{j-1}}\big) = 0 \quad \text{if } \sigma'_j \neq \sigma_j, \ \forall j \ge 2.$$
This, together with the conditions $\sigma_n = \sigma'_n$ and $\sum_{j=1}^n \sigma_j = \sum_{j=1}^n \sigma'_j = k$ in (4.12), imposes $\sigma_j = \sigma'_j$ for all $j \ge 1$. Hence, (4.12) reduces to
$$w_k(n) = \sum_{\substack{\sigma \in \{-1,+1\}^n \\ \sum_{j=1}^n \sigma_j = k}} \mathbb{E}\Big(|\alpha|^2 |C_{\sigma_1,+1}|^2 + |\beta|^2 |C_{\sigma_1,-1}|^2 + 2\Re\big(\alpha\bar\beta\; \overline{C_{\sigma_1,-1}}\, C_{\sigma_1,+1}\big)\Big) \prod_{j=2}^{n} \mathbb{E}\big(|C_{\sigma_j,\sigma_{j-1}}|^2\big), \qquad (4.13)$$
where
$$\begin{aligned} \mathbb{E}\big(|C_{+1,+1}|^2\big) &= \mathbb{E}\big(|C_{-1,-1}|^2\big) = t^2 \\ \mathbb{E}\big(|C_{+1,-1}|^2\big) &= \mathbb{E}\big(|C_{-1,+1}|^2\big) = r^2 \\ \mathbb{E}\big(\overline{C_{-1,-1}}\, C_{-1,+1}\big) &= -\mathbb{E}\big(\overline{C_{+1,-1}}\, C_{+1,+1}\big) = rt. \end{aligned} \qquad (4.14)$$
The result then follows from the definition of $S_n = \sum_{j=1}^n \sigma_j$. $\Box$
Persistent, or correlated, random walks are well known and have been studied in great detail and in greater generality, ours being the simplest instance; see e.g. [29], [13] and the references therein. In particular, when $r = t = 1/\sqrt{2}$, the persistent and symmetric random walks are equivalent. It is also known that the first moment is finite and that the second moment is proportional to $n$ for $n$ large. This leads to the following diffusive behaviour:
$$\lim_{n\to\infty} \frac{\mathbb{E}\big(\langle X\rangle_{\Psi_0}(n)\big)}{n} = 0, \qquad \lim_{n\to\infty} \frac{\mathbb{E}\big(\langle X^2\rangle_{\Psi_0}(n)\big)}{n} = t^2/r^2. \qquad (4.15)$$
For completeness, we provide below a simple proof of the fact that all moments of the persistent random walk display a diffusive behaviour, a statement that we could not find as such in the literature, although it is certainly well known.
Proof of Proposition 2.4. We assume the hypotheses of Proposition 4.2 and use the familiar setup of generating functions, together with a classical scaling argument. We consider only the situation $r \neq t$, which differs from the usual symmetric random walk. Let $w^\pm_k(n)$ be the probabilities
$$w^\pm_k(n) = \mathbb{P}(S_n = k,\ \sigma_n = \pm 1), \quad \text{so that } w^+_k(n) + w^-_k(n) = w_k(n).$$
Thus we have, for $n \ge 1$ and $|k| \le n$,
$$\begin{aligned} w^+_k(n+1) &= r^2\, w^-_{k-1}(n) + t^2\, w^+_{k-1}(n) \\ w^-_k(n+1) &= t^2\, w^-_{k+1}(n) + r^2\, w^+_{k+1}(n), \end{aligned} \qquad (4.16)$$
with
$$w^+_1(1) = a, \qquad w^-_{-1}(1) = b. \qquad (4.17)$$
Moreover, $w^\pm_k(n) = 0$ if $|k| > n$. We introduce the generating functions $\Phi^\pm_n$ and $\Psi_n$ by
$$\Phi^\pm_n(z) = \sum_{k=-n}^{n} e^{izk}\, w^\pm_k(n), \qquad \Psi_n(z) = \Phi^+_n(z) + \Phi^-_n(z), \quad \forall z \in \mathbb{C}. \qquad (4.18)$$
As a consequence of (4.16), introducing $\Phi_n(z) = (\Phi^+_n(z), \Phi^-_n(z))^T$, we have
$$\Phi_{n+1}(z) = M(z)\, \Phi_n(z), \quad \text{with} \qquad (4.19)$$
$$M(z) = \begin{pmatrix} t^2 e^{iz} & r^2 e^{iz} \\ r^2 e^{-iz} & t^2 e^{-iz} \end{pmatrix} \quad \text{and} \quad \Phi_1(z) = \begin{pmatrix} a\, e^{iz} \\ b\, e^{-iz} \end{pmatrix}.$$
This allows us to determine $\Phi_n(z)$ explicitly, and
$$\Psi_n(z) = \Big\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\, M^{n-1}(z)\, \Phi_1(z) \Big\rangle, \quad n \ge 1, \qquad (4.20)$$
and, in turn, all moments of the probability distribution $\{w_k(n)\}_{k\in\mathbb{Z}}$.
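As a quick sanity check (not needed for the proof), one can iterate the recursion (4.16) exactly and verify that $\sum_k e^{izk} w_k(n)$ reproduces the matrix formula (4.20). The parameter values below are arbitrary, with $t^2 + r^2 = 1$; taking $a = t^2$, $b = r^2$ corresponds to $\psi_0 = |\!\uparrow\rangle$ in Proposition 4.2.

```python
import numpy as np

t2, r2 = 0.36, 0.64                  # t^2 and r^2 = 1 - t^2
a, b = t2, r2                        # initial-step probabilities (psi_0 = |up>)

def w_plus_minus(n):
    """w_k^{+-}(n) from (4.16)-(4.17); site k is stored at array index k + n."""
    wp = np.zeros(2 * n + 1); wm = np.zeros(2 * n + 1)
    wp[n + 1] = a                    # w^+_{+1}(1) = a
    wm[n - 1] = b                    # w^-_{-1}(1) = b
    for _ in range(n - 1):
        new_p = np.zeros_like(wp); new_m = np.zeros_like(wm)
        new_p[1:] = t2 * wp[:-1] + r2 * wm[:-1]     # step from k-1 to k, moving +1
        new_m[:-1] = t2 * wm[1:] + r2 * wp[1:]      # step from k+1 to k, moving -1
        wp, wm = new_p, new_m
    return wp, wm

n, z = 30, 0.37
wp, wm = w_plus_minus(n)
w = wp + wm
k = np.arange(-n, n + 1)
psi_from_recursion = np.sum(np.exp(1j * z * k) * w)

M = np.array([[t2 * np.exp(1j * z), r2 * np.exp(1j * z)],
              [r2 * np.exp(-1j * z), t2 * np.exp(-1j * z)]])
phi1 = np.array([a * np.exp(1j * z), b * np.exp(-1j * z)])
psi_from_matrix = np.ones(2) @ np.linalg.matrix_power(M, n - 1) @ phi1
```

The two values agree to machine precision, and the weights $w_k(n)$ sum to one, as (4.9) requires.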
We now consider the diffusive scaling, introducing the macroscopic time variable $N = n\tau$, where $\tau \gg 1$, $\tau \in \mathbb{N}$, and the macroscopic space variable $K = k/\sqrt{\tau}$, such that $K/\sqrt{n} = k/\sqrt{N}$ remains finite. Expecting a probability distribution $\{w_k(n)\}_{k\in\mathbb{Z}}$ asymptotically invariant under this scaling, we are led to the study of the generating function at $z = y/\sqrt{\tau}$ in the limit $\tau \to \infty$:
Lemma 4.3
$$\lim_{\substack{\tau\to\infty \\ \tau\in\mathbb{N}}}\; \sum_{k=-\tau n}^{\tau n} e^{i\frac{y}{\sqrt{\tau}} k}\, w_k(\tau n) = e^{-n \frac{t^2}{2r^2} y^2}, \qquad (4.21)$$
uniformly in $y$ in any compact set of $\mathbb{C}$.
Since the functions of $y$ involved are entire and the convergence is uniform on compact sets, we can differentiate the above identity with respect to $y$ as many times as we wish. In particular, differentiating $L$ times with $n = 1$ and setting $y = 0$ immediately yields the following corollary, which ends the proof of Proposition 2.4:
Corollary 4.4
$$\lim_{\substack{\tau\to\infty \\ \tau\in\mathbb{N}}}\; \sum_{k=-\tau}^{\tau} \frac{(ik)^L}{\tau^{L/2}}\, w_k(\tau) = \Big(\frac{t^2}{r^2}\Big)^{L/2} H_L(0)\, (-1)^L, \qquad (4.22)$$
where $H_L$ denotes the Hermite polynomial $H_L(z) = (-1)^L e^{z^2/2} \big(\frac{d}{dz}\big)^L e^{-z^2/2}$.
Proof of Lemma 4.3. Note that the left side of (4.21) equals
$$\lim_{\tau\to\infty} \Psi_{\tau n}(y/\sqrt{\tau}) = \lim_{\tau\to\infty} \Big\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\, M^{\tau n - 1}(y/\sqrt{\tau})\, \Phi_1(y/\sqrt{\tau}) \Big\rangle, \qquad (4.23)$$
so that we are interested in small values of $z = y/\sqrt{\tau}$. For $|z|$ small enough, we compute the spectrum of the analytic matrix $M(z)$:
$$\sigma(M(z)) = \{\lambda_1(z), \lambda_2(z)\}, \qquad (4.24)$$
with distinct eigenvalues
$$\lambda_{1,2}(z) = t^2\cos(z) \pm \sqrt{t^4\cos^2(z) + 1 - 2t^2}. \qquad (4.25)$$
Hence, for $|z|$ small, there exists an invertible matrix $R(z)$, analytic in $z$, such that
$$M^n(z) = R^{-1}(z) \begin{pmatrix} \lambda_1^n(z) & 0 \\ 0 & \lambda_2^n(z) \end{pmatrix} R(z). \qquad (4.26)$$
For $|z|$ small enough, we have
$$\begin{aligned} \lambda_1(z) &= 1 - z^2\, \frac{t^2}{2r^2} + O(z^4) \\ \lambda_2(z) &= (t^2 - r^2) + z^2\, \frac{t^2(t^2-r^2)}{2r^2} + O(z^4), \end{aligned} \qquad (4.27)$$
whereas
$$M^{-1}(0) = M^{-1}(0)^* = \frac{1}{t^2 - r^2} \begin{pmatrix} t^2 & -r^2 \\ -r^2 & t^2 \end{pmatrix} \quad \text{satisfies} \quad M^{-1}(0) \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$
and
$$R(0) = R^*(0) = R^{-1}(0) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$
Since (4.27) yields, for any $y \in \mathbb{C}$ and $\tau$ large enough,
$$\begin{aligned} \lambda_1(y/\sqrt{\tau})^{\tau n} &= e^{-n y^2 \frac{t^2}{2r^2} + O(n y^4/\tau)} \\ \lambda_2(y/\sqrt{\tau})^{\tau n} &= (t^2 - r^2)^{\tau n}\, e^{n y^2 \frac{t^2}{2r^2} + O(n y^4/\tau)}, \end{aligned} \qquad (4.28)$$
we eventually obtain, since $|r^2 - t^2| < 1$ and $a + b = 1$,
$$\lim_{\tau\to\infty} \Big\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\, M^{\tau n - 1}(y/\sqrt{\tau})\, \Phi_1(y/\sqrt{\tau}) \Big\rangle = \Big\langle \begin{pmatrix} e^{-n y^2 \frac{t^2}{2r^2}} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix},\, \begin{pmatrix} a+b \\ a-b \end{pmatrix} \Big\rangle = e^{-n y^2 \frac{t^2}{2r^2}}. \qquad (4.29)$$
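The spectral computation in this proof is easy to verify numerically (a sketch with an arbitrary value $t^2 = 0.36$): for small $z$, the exact eigenvalues of $M(z)$ agree with the closed form (4.25) and with the expansions (4.27) up to $O(z^4)$.

```python
import numpy as np

t2 = 0.36; r2 = 1.0 - t2
z = 1e-3

M = np.array([[t2 * np.exp(1j * z), r2 * np.exp(1j * z)],
              [r2 * np.exp(-1j * z), t2 * np.exp(-1j * z)]])
ev = sorted(np.linalg.eigvals(M), key=lambda lam: lam.real)   # [lambda_2, lambda_1]

# closed form (4.25) and the small-z expansions (4.27)
disc = np.sqrt(t2**2 * np.cos(z)**2 + 1 - 2 * t2)
l1_exact, l2_exact = t2 * np.cos(z) + disc, t2 * np.cos(z) - disc
l1_approx = 1 - z**2 * t2 / (2 * r2)
l2_approx = (t2 - r2) + z**2 * t2 * (t2 - r2) / (2 * r2)
```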
A Appendix
A.1 Proof of Lemma 2.1
Introducing the discrete Fourier transform $\mathcal{F} : L^2([0,2\pi), \mathbb{C}^2) \to \mathbb{C}^2 \otimes \ell^2(\mathbb{Z})$ by
$$(\mathcal{F}f)(k) = \hat f(k) = \frac{1}{2\pi} \int_0^{2\pi} f(x)\, e^{-ikx}\, dx, \qquad (A.1)$$
we get that $U$ is unitarily equivalent to the operator of multiplication by a matrix $V(x)$:
$$\mathcal{F}^{-1} U \mathcal{F} = \mathrm{mult}\, V(x) = \mathrm{mult} \begin{pmatrix} e^{-ix} & 0 \\ 0 & e^{ix} \end{pmatrix} C. \qquad (A.2)$$
The matrix $V(x)$ is entire in $x$, and unitary for $x$ real, which means that its eigenvalues $\{\alpha_1(x), \alpha_2(x)\}$ and eigenprojectors $\{P_1(x), P_2(x)\}$ are also analytic functions of $x \in \mathbb{R}$, even at the possible crossings of eigenvalues for $x \in \mathbb{R}$. Moreover, for any $n \in \mathbb{Z}$,
$$V^n(x) = P_1(x)\, \alpha_1^n(x) + P_2(x)\, \alpha_2^n(x). \qquad (A.3)$$
Similarly, the position operator $K = (I \otimes k)$ is unitarily equivalent to differentiation with respect to $x$ (on its natural domain):
$$\mathcal{F}^{-1} K \mathcal{F} = -i\partial_x. \qquad (A.4)$$
In particular, for $\Psi \in \mathbb{C}^2 \otimes \ell^2(\mathbb{Z})$ such that $\mathcal{F}^{-1}\Psi = f \in L^2([0,2\pi), \mathbb{C}^2)$ and $\Psi$ in the domain of $K^2 = (I \otimes k)^2$, we have for all $n \in \mathbb{Z}$,
$$\langle K^2\rangle_\Psi(n) := \big\langle \Psi,\, U^{-n} K^2 U^n \Psi\big\rangle = \int_0^{2\pi} \big\|\partial_x\big(V^n(x) f(x)\big)\big\|^2_{\mathbb{C}^2}\, dx. \qquad (A.5)$$
Now, it is not difficult to see by explicit computation that this quantity behaves like $n^2$ as $n \to \infty$, unless the analytic eigenvalues $\alpha_1(x), \alpha_2(x)$ of $V(x)$ are independent of $x$. We have
$$\alpha_1(x)\alpha_2(x) = \det V(x) = \det C \quad \text{and} \quad \alpha_1(x) + \alpha_2(x) = \mathrm{tr}\, V(x) = e^{-ix} a + e^{ix} d. \qquad (A.6)$$
Hence the eigenvalues $\alpha_j(x)$ are independent of $x$ iff $a = d = 0$.
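Since $\det V(x) = \det C$ is constant, the eigenvalues are $x$-independent exactly when the trace in (A.6) is. A small numerical illustration (the two coins below are our examples): the Hadamard coin has $a = d \neq 0$ and a dispersive trace, while an antidiagonal coin with $a = d = 0$ gives flat bands.

```python
import numpy as np

def trace_V(C, xs):
    # tr V(x) = e^{-ix} a + e^{ix} d, with a, d the diagonal entries of the coin C
    return np.exp(-1j * xs) * C[0, 0] + np.exp(1j * xs) * C[1, 1]

xs = np.linspace(0.0, 2 * np.pi, 256)
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # a = d != 0  -> ballistic motion
antidiag = np.array([[0, 1], [1, 0]], dtype=complex)    # a = d = 0   -> flat bands

spread_hadamard = np.abs(trace_V(hadamard, xs) - trace_V(hadamard, xs)[0]).max()
spread_antidiag = np.abs(trace_V(antidiag, xs) - trace_V(antidiag, xs)[0]).max()
```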
A.2 Positivity and continuity of Lyapunov exponent
This Section is devoted to the proof of Theorem 3.1.
Recall that we take $\mu$ to be absolutely continuous with density $\rho \in L^\infty(\mathbb{T})$ having support with nonempty interior, and that the transfer matrices $T_z(\theta,\eta)$ are given in (3.4) (with $\omega_{2n} = \theta$ and $\omega_{2n-1} = \eta$).
To prove Theorem 3.1, we use Furstenberg's theorem, which is developed for real square matrices. We thus map $T_z \in M_2(\mathbb{C})$ (square $2\times 2$ matrices with complex entries) into $\xi(T_z) \in M_4(\mathbb{R})$ using the bijection
$$\xi : \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} I\,\mathrm{Re}\,a + J\,\mathrm{Im}\,a & I\,\mathrm{Re}\,b + J\,\mathrm{Im}\,b \\ I\,\mathrm{Re}\,c + J\,\mathrm{Im}\,c & I\,\mathrm{Re}\,d + J\,\mathrm{Im}\,d \end{pmatrix}, \qquad (A.7)$$
where
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
We refer to [9] for more detail on this transformation. In particular, we have
$$\|\xi(A)\| = \sqrt{2}\, \|A\|, \qquad (A.8)$$
where the norms are given by $\|X\|^2 = \mathrm{Tr}(X^* X)$, for $X \in M_2(\mathbb{C})$ or $X \in M_4(\mathbb{R})$; moreover, if $A \in M_2(\mathbb{C})$ satisfies $|\det(A)| = 1$, then $|\det \xi(A)| = 1$. Due to (3.5), we have $|\det \xi(T_z(\theta,\eta))| = 1$, for all $z \neq 0$ and all $\theta, \eta$.
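A compact way to convince oneself of these properties is to implement $\xi$ directly and test, on random matrices, the norm doubling (A.8), the multiplicativity $\xi(AB) = \xi(A)\xi(B)$, and the determinant property (a sketch; the test matrices are arbitrary):

```python
import numpy as np

I2 = np.eye(2)
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def xi(A):
    """Real 4x4 image of a complex 2x2 matrix, per (A.7)."""
    return np.block([[I2 * A[i, j].real + J2 * A[i, j].imag for j in range(2)]
                     for i in range(2)])

def hs_norm(X):
    """Hilbert-Schmidt norm, ||X||^2 = Tr(X* X)."""
    return np.sqrt(np.trace(X.conj().T @ X).real)

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

norm_ratio = hs_norm(xi(A)) / hs_norm(A)            # should equal sqrt(2), eq. (A.8)
mult_err = np.abs(xi(A @ B) - xi(A) @ xi(B)).max()  # should vanish
A_unit = A / np.sqrt(abs(np.linalg.det(A)))         # normalized so |det A_unit| = 1
det_xi = np.linalg.det(xi(A_unit))                  # then |det xi(A_unit)| = 1
```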
Relation (A.8), together with the fact that $\xi(AB) = \xi(A)\xi(B)$, shows that the statement of Theorem 3.1 is equivalent to $\lim_{n\to\infty} \frac{1}{n} \log\|\xi(T_z(\theta_n,\eta_n)) \cdots \xi(T_z(\theta_1,\eta_1))\| = \gamma$ almost surely, for the same deterministic $\gamma > 0$ as in Theorem 3.1.
The measure $\mu$ on $\mathbb{T}$ induces a measure on $M_4(\mathbb{R})$, supported on the subset
$$\mathcal{M} := \{\xi(T_z(\theta,\eta)),\ \theta,\eta \in \mathrm{supp}\,\mu\}. \qquad (A.9)$$
We call the induced measure again $\mu$. Let $Y_1, Y_2, \dots$ be i.i.d. random matrices in $M_4(\mathbb{R})$ with common distribution $\mu$. If the integrability condition
$$\mathbb{E}\big[\max\{\log\|Y_1\|,\, 0\}\big] < \infty \qquad (A.10)$$
is satisfied, then the upper Lyapunov exponent $\gamma \in \mathbb{R} \cup \{-\infty\}$ is defined as
$$\gamma := \lim_{n\to\infty} \frac{1}{n}\, \mathbb{E}\big[\log\|Y_n \cdots Y_1\|\big].$$
The theorem of Furstenberg and Kesten ([8], Theorem 4.1) states that if in addition the matrices $Y_j$ are invertible ($Y_j \in GL_4(\mathbb{R})$), then
$$\lim_{n\to\infty} \frac{1}{n} \log\|Y_n \cdots Y_1\| = \gamma \qquad (A.11)$$
almost surely. In our case, $Y_j = \xi(T_z(\theta_j,\eta_j))$ is invertible, and $\|Y_1\| \le C$ is uniformly bounded in $\theta_1, \eta_1$ (see (3.4)), so that (A.10) is trivially satisfied. This implies that (3.12) holds almost surely, with a deterministic $\gamma$. Moreover, since $S = T_z(\theta_n,\eta_n) \cdots T_z(\theta_1,\eta_1)$ is a $2\times 2$ invertible matrix with $|\det S| = 1$, we have $\|S\| = \|S^{-1}\| \ge 1$, and therefore $\gamma \ge 0$. The remaining part of the proof of Theorem 3.1 consists in proving that $\gamma$ is strictly positive and continuous.
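The almost-sure limit (A.11) can also be observed numerically. The sketch below is purely illustrative: it uses the explicit form of $T_z(\theta,\eta)$ written out in the proof of Lemma A.2 below, takes $z$ on the unit circle, and draws the phases uniformly from an arc (one admissible choice of $\mu$ with support of nonempty interior); the running average $\frac{1}{n}\log\|T_z(\theta_n,\eta_n)\cdots T_z(\theta_1,\eta_1)v\|$ settles at a strictly positive value.

```python
import numpy as np

def transfer(z, theta, eta, r, t):
    """T_z(theta, eta) as in the proof of Lemma A.2, with z = R e^{i alpha}."""
    R, alpha = abs(z), np.angle(z)
    pre = np.exp(-1j * (theta + alpha)) / (R * t)
    return pre * np.array([[R**2 * np.exp(1j * (theta + eta + 2 * alpha)) + r**2, -r * t],
                           [-r * t, t**2]])

rng = np.random.default_rng(2)
r, t = 0.8, 0.6
z = np.exp(0.3j)                                  # spectral parameter on the unit circle
n = 20_000
v = np.array([1.0 + 0j, 0.0])
log_growth = 0.0
for _ in range(n):
    theta, eta = rng.uniform(0.0, np.pi, size=2)  # mu: uniform on an arc (illustrative)
    v = transfer(z, theta, eta, r, t) @ v
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)                    # accumulate log-norm, then renormalize
    v /= norm
gamma_hat = log_growth / n                        # estimate of the Lyapunov exponent
```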
Let $G_\mu \subset GL_4(\mathbb{R})$ be the (multiplicative) group of matrices generated by the $\xi(T_z(\theta,\eta))$, where $\theta, \eta$ vary throughout the support of the measure $\mu$. Here, $z$ is fixed and not displayed in $G_\mu$.
Theorem A.1 (Furstenberg, [8] Thm. 6.3) Suppose that $G_\mu$ is strongly irreducible and non-compact. Then the upper Lyapunov exponent associated with any sequence of random matrices $Y_1, Y_2, \dots$ in $SL_4(\mathbb{R}) \cap \mathrm{supp}\,\mu$, i.i.d. with common distribution $\mu$, is strictly positive. This means that (A.11) holds with $\gamma > 0$.
Now we show that $G_\mu$ is strongly irreducible and non-compact.

Lemma A.2 If $\mathrm{supp}\,\mu$ contains two distinct points, then $G_\mu$ is not compact, for all $z \neq 0$.
Proof. For fixed $z = Re^{i\alpha}$, we have
$$T_z(\theta,\eta) = \frac{e^{-i(\theta+\alpha)}}{Rt} \begin{pmatrix} R^2 e^{i(\theta+\eta+2\alpha)} + r^2 & -rt \\ -rt & t^2 \end{pmatrix},$$
and a direct calculation shows that
$$T_z(\eta,\theta)\, T_z(\eta,\eta)^{-1} = \big(T_z(\theta,\theta)^{-1}\, T_z(\theta,\eta)\big)^* = \begin{pmatrix} e^{i(\theta-\eta)} & \frac{r}{t}\big(e^{i(\theta-\eta)} - 1\big) \\ 0 & 1 \end{pmatrix}.$$
Both operators $T_z(\eta,\theta)\, T_z(\eta,\eta)^{-1}$ and $T_z(\theta,\theta)^{-1}\, T_z(\theta,\eta)$ belong to $G_\mu$, and hence so does their positive-semidefinite product,
$$0 \le M := T_z(\eta,\theta)\, T_z(\eta,\eta)^{-1}\, T_z(\theta,\theta)^{-1}\, T_z(\theta,\eta) = 1\!\mathrm{l} + \frac{r}{t} \begin{pmatrix} \frac{r}{t}\,|e^{i(\theta-\eta)} - 1|^2 & e^{i(\theta-\eta)} - 1 \\ e^{-i(\theta-\eta)} - 1 & 0 \end{pmatrix}.$$
Note that the right-hand side does not depend on $z$ at all. Since the determinant of the last matrix is $-\frac{r^2}{t^2}\,|e^{i(\theta-\eta)} - 1|^2 < 0$ (for $\theta \neq \eta$), $G_\mu$ contains a non-negative matrix $M$ having an eigenvalue strictly larger than one. It follows that $\|M^n\| \to \infty$ as $n \to \infty$. Thus the group generated by the $\xi(T_z(\theta,\eta))$, $\theta,\eta \in \mathrm{supp}\,\mu$, is not compact. $\Box$
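The two algebraic identities used in this proof can be verified numerically (a sketch, implementing the transfer matrix exactly as displayed above, with arbitrary sample values of $z, \theta, \eta$):

```python
import numpy as np

def transfer(z, theta, eta, r, t):
    R, alpha = abs(z), np.angle(z)
    pre = np.exp(-1j * (theta + alpha)) / (R * t)
    return pre * np.array([[R**2 * np.exp(1j * (theta + eta + 2 * alpha)) + r**2, -r * t],
                           [-r * t, t**2]])

r, t = 0.8, 0.6
z = 1.3 * np.exp(0.7j)
theta, eta = 2.1, 0.4

lhs = transfer(z, eta, theta, r, t) @ np.linalg.inv(transfer(z, eta, eta, r, t))
u = np.exp(1j * (theta - eta))
rhs = np.array([[u, (r / t) * (u - 1)], [0.0, 1.0]])    # the displayed triangular matrix

M = lhs @ np.linalg.inv(transfer(z, theta, theta, r, t)) @ transfer(z, theta, eta, r, t)
top_eig = np.linalg.eigvalsh(M).max()   # M is Hermitian, >= 0, with an eigenvalue > 1
```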
Finally, we prove strong irreducibility of $G_\mu$. Let $J$ be a non-empty open interval, $J \subset \mathrm{supp}\,\mu$. Since the subset of matrices $\mathcal{M}'$ obtained from $\mathcal{M}$, (A.9), by restricting $\theta, \eta \in J$ is a continuous image of the connected set $J \times J \subset \mathbb{R}^2$, $\mathcal{M}'$ is a connected set in $M_4(\mathbb{R})$. Strong irreducibility of $\mathcal{M}'$ (which implies strong irreducibility of $\mathcal{M}$) is then equivalent to irreducibility of $\mathcal{M}'$, see [8], Exercise IV.2.9.
Lemma A.3 The only subspaces $V \subset \mathbb{R}^4$ invariant under the action of $\mathcal{M}'$ are $V = \{0\}$ and $V = \mathbb{R}^4$. Hence $\mathcal{M}'$ is strongly irreducible. Since $\mathcal{M}' \subset G_\mu$, we have that $G_\mu$ is strongly irreducible.
Proof. We just need to show that $V = \{0\}$ and $V = \mathbb{R}^4$ are the only invariant subspaces of $\mathcal{M}'$. Let $z = Re^{i\alpha} \neq 0$ be fixed. One easily finds that
$$\xi(T_z) = \frac{1}{Rt}\big[T_1\cos\varphi + T_2\sin\varphi + T_3\cos\psi + T_4\sin\psi\big], \qquad (A.12)$$
where $\varphi = \alpha + \eta$ and $\psi = \alpha + \theta$ vary in the open interval $\alpha + J$, and the matrices $T_j$ are given by
where Ï = α+ η, Ï = α+ Ξ vary in the open interval α+ J , and the matrices Tj are givenby
T1 = R2
1 0 0 00 1 0 00 0 0 00 0 0 0
, T2 = R2
0 1 0 0â1 0 0 00 0 0 00 0 0 0
,
T3 =
r2 0 ârt 00 r2 0 ârt
ârt 0 t2 00 ârt 0 t2
, T4 =
0 âr2 0 rtr2 0 ârt 00 rt 0 ât2
ârt 0 t2 0
.
By taking derivatives in the angles, we see that if $V$ is invariant under $\mathcal{M}'$, then $V$ is invariant also under the action of $-T_1\sin\varphi + T_2\cos\varphi$ (and $-T_3\sin\psi + T_4\cos\psi$), and hence (by again differentiating) $V$ is as well invariant under the action of $T_1\cos\varphi + T_2\sin\varphi$ (and $T_3\cos\psi + T_4\sin\psi$). Therefore, $-T_1\sin^2\varphi + T_2\sin\varphi\cos\varphi$ and $T_1\cos^2\varphi + T_2\sin\varphi\cos\varphi$ leave $V$ invariant, and hence so does $T_1$ (their difference) and consequently $T_2$ as well. Similarly one sees that $T_3, T_4$ leave $V$ invariant, too. Hence any subspace $V$ invariant under $\mathcal{M}'$
must be invariant separately under $T_1, T_2, T_3$ and $T_4$.
Consider first $V$ with $\dim V = 1$, i.e., $V = \langle v\rangle$ (real span). It is easy to see that the only possibility for $v$ resulting in a $V$ invariant under $T_1 + T_2$ and $T_1 - T_2$ is $v = \alpha e_3 + \beta e_4$ (canonical basis elements of $\mathbb{R}^4$). But such a $V$ is not left invariant under the action of $T_3$. Consequently, no one-dimensional subspace is invariant under $\mathcal{M}'$.
Consider next $V$ with $\dim V = 2$. Since $T_3$ is real symmetric and must leave $V$ invariant, $V$ must be spanned by two eigenvectors of $T_3$. One easily finds that $T_3$ has eigenvalues $0, 1$, both twice degenerate, that a basis for the kernel of $T_3$ is $\{[1,0,r/t,0]^t, [0,1,0,r/t]^t\}$ (transpose), and a basis of the eigenspace with eigenvalue $1$ is $\{[1,0,-t/r,0]^t, [0,1,0,-t/r]^t\}$. There are three possible cases: 1. both eigenvectors spanning $V$ belong to the kernel of $T_3$; 2. both eigenvectors spanning $V$ belong to the eigenspace with eigenvalue $1$ of $T_3$; or 3. one eigenvector belongs to the kernel of $T_3$ and the other one belongs to the other spectral subspace. Each of these three cases can be analyzed separately, and one finds that none of the thus formed spaces $V$ with $\dim V = 2$ is invariant under all of the $T_j$, $j = 1,2,3,4$. In conclusion, no two-dimensional subspace is invariant under $\mathcal{M}'$.
Consider now $V$ with $\dim V = 3$. Then $V^\perp$ has dimension $1$ and is invariant under $T_3$, since the latter is real symmetric. Hence $V^\perp$ is spanned by one of the eigenvectors of $T_3$. In the same way, one sees that $V^\perp$ must also be invariant under $T_1$, and one finds easily that this implies $V^\perp = \{0\}$, a contradiction to $\dim V = 3$. This shows that there is no three-dimensional subspace invariant under $\mathcal{M}'$. Lemma A.3 follows. $\Box$
This proves all assertions of Theorem 3.1, except for the continuity of $z \mapsto \gamma(z)$. However, the latter has been shown to hold in Section VII of [16] (Theorem 7.1).
References

[1] Y. Aharonov, L. Davidovich, N. Zagury, Quantum random walks, Phys. Rev. A 48, 1687-1690 (1993)
[2] M. Aizenman, S. Molchanov, Localization at large disorder and at extreme energies: an elementary derivation, Commun. Math. Phys. 157, 245-278 (1993)
[3] A. Ambainis, D. Aharonov, J. Kempe, U. Vazirani, Quantum Walks on Graphs, Proc. 33rd ACM STOC, 50-59 (2001)
[4] A. Ambainis, J. Kempe, A. Rivosh, Coins make quantum walks faster, Proceedings of SODA'05, 1099-1108 (2005)
[5] P. Ao, Absence of localization in energy space of a Bloch electron driven by a constant electric force, Phys. Rev. B 41, 3998-4001 (1989)
[6] J. Asch, P. Duclos, P. Exner, Stability of driven systems with growing gaps, quantum rings, and Wannier ladders, J. Stat. Phys. 92, 1053-1070 (1998)
[7] P. Billingsley, Convergence of Probability Measures, John Wiley and Sons, 1968
[8] P. Bougerol, J. Lacroix, Products of Random Matrices with Applications to Schrödinger Operators, Birkhäuser, Progress in Probability and Statistics, 1985
[9] O. Bourget, J. S. Howland, A. Joye, Spectral analysis of unitary band matrices, Commun. Math. Phys. 234, 191-227 (2003)
[10] G. Blatter, D. Browne, Zener tunneling and localization in small conducting rings, Phys. Rev. B 37, 3856 (1988)
[11] M. J. Cantero, F. A. Grünbaum, L. Moral, L. Velázquez, Matrix Valued Szegő Polynomials and Quantum Random Walks, Commun. Pure Appl. Math. 63, 464-507 (2009)
[12] R. Carmona, A. Klein, F. Martinelli, Anderson Localization for Bernoulli and Other Singular Potentials, Commun. Math. Phys. 108, 41-66 (1987)
[13] A. Chen, E. Renshaw, The Gillis-Domb-Fisher correlated random walk, Journal of Applied Probability 29, 792-813 (1992)
[14] C. R. de Oliveira, M. S. Simsen, A Floquet Operator with Purely Point Spectrum and Energy Instability, Ann. H. Poincaré 7, 1255-1277 (2008)
[15] E. Hamza, Localization properties for the unitary Anderson model, PhD thesis, University of Alabama at Birmingham, 2007. https://www.mhsl.uab.edu/dt/2008r/hamza.pdf
[16] E. Hamza, G. Stolz, Lyapunov exponents for unitary Anderson models, J. Math. Phys. 48, 043301 (2007)
[17] E. Hamza, A. Joye, G. Stolz, Dynamical Localization for Unitary Anderson Models, Math. Phys., Anal. Geom. 12, 381-444 (2009)
[18] A. Joye, Fractional moment estimates for random unitary operators, Lett. Math. Phys. 72, no. 1, 51-64 (2005)
[19] M. Karski, L. Förster, J.-M. Choi, A. Steffen, W. Alt, D. Meschede, A. Widera, Quantum Walk in Position Space with Single Optically Trapped Atoms, Science 325, 174-177 (2009)
[20] J. P. Keating, N. Linden, J. C. F. Matthews, A. Winter, Localization and its consequences for quantum walk algorithms and quantum communication, Phys. Rev. A 76, 012315 (2007)
[21] J. Kempe, Quantum random walks - an introductory overview, Contemp. Phys. 44, 307-327 (2003)
[22] N. Konno, One-dimensional discrete-time quantum walks on random environments, Quantum Inf. Process. 8, 387-399 (2009)
[23] N. Konno, Localization of an inhomogeneous discrete-time quantum walk on the line, Quantum Information Processing, to appear
[24] N. Konno, Quantum Walks, in "Quantum Potential Theory", Franz, Schürmann (eds.), Lecture Notes in Mathematics 1954, 309-452 (2009)
[25] J. Košík, V. Bužek, M. Hillery, Quantum walks with random phase shifts, Phys. Rev. A 74, 022310 (2006)
[26] D. Lenstra, W. van Haeringen, Elastic scattering in a normal-metal loop causing resistive electronic behavior, Phys. Rev. Lett. 57, 1623-1626 (1986)
[27] F. Magniez, A. Nayak, P. C. Richter, M. Santha, On the hitting times of quantum versus random walks, 20th SODA, 86-95 (2009)
[28] D. Meyer, From quantum cellular automata to quantum lattice gases, J. Stat. Phys. 85, 551-574 (1996)
[29] E. Renshaw, R. Henderson, The correlated random walk, Journal of Applied Probability 18, 403-414 (1981)
[30] J.-W. Ryu, G. Hur, S. W. Kim, Quantum Localization in Open Chaotic Systems, Phys. Rev. E, 037201 (2008)
[31] M. Santha, Quantum walk based search algorithms, 5th TAMC, LNCS 4978, 31-46 (2008)
[32] D. Shapira, O. Biham, A. J. Bracken, M. Hackett, One dimensional quantum walk with unitary noise, Phys. Rev. A 68, 062315 (2003)
[33] N. Shenvi, J. Kempe, K. B. Whaley, Quantum random-walk search algorithm, Phys. Rev. A 67, 052307 (2003)
[34] Y. Yin, D. E. Katsanos, S. N. Evangelou, Quantum Walks on a Random Environment, Phys. Rev. A 77, 022302 (2008)
[35] F. Zähringer, G. Kirchmair, R. Gerritsma, E. Solano, R. Blatt, C. F. Roos, Realization of a quantum walk with one and two trapped ions, arXiv:0911.1876v1 [quant-ph]