

Uniform spanning forests and the bi-Laplacian Gaussian field

Gregory F. Lawler∗

University of Chicago

Xin Sun
MIT

Wei Wu
New York University

Abstract

For N = n (log n)^{1/4}, on the ball in Z^4 with radius N we construct a ±1 spin model coupled with the wired spanning forests. We show that as n → ∞ the spin field has the bi-Laplacian Gaussian field on R^4 as its scaling limit. For d ≥ 5, the model can be defined on Z^d directly without the N-cutoff, and the same scaling limit result holds. Our proof is based on Wilson's algorithm and a fine analysis of intersections of simple random walks and loop-erased walks. In particular, we improve results on the intersection of a random walk with a loop-erased walk in four dimensions. To our knowledge, this is the first natural discrete model (besides the discrete bi-Laplacian Gaussian field) that has been proved to converge to the bi-Laplacian Gaussian field.

Acknowledgment: We are grateful to Scott Sheffield for motivating this problem and helpful discussions. We thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Random Geometry, where part of this project was undertaken.

1 Introduction

This paper connects two natural objects that have four as the critical dimension: the bi-Laplacian Gaussian field and uniform spanning trees (UST) (and the associated loop-erased random walk (LERW)). We will construct a sequence of random fields on the integer lattice Z^d (d ≥ 4) using LERW and UST and show that they converge in distribution to the bi-Laplacian field.

*Research supported by National Science Foundation grant DMS-1513036.

We will now describe the construction. For each positive integer n, let N = N_n = n (log n)^{1/4}. Let A_N = {x ∈ Z^d : |x| ≤ N}. We will construct a ±1 valued random field on A_N as follows. Recall that a wired spanning tree on A_N is a tree on the graph A_N ∪ ∂A_N, where we view the boundary ∂A_N as "wired" to a single point. Such a tree produces a spanning forest on A_N by removing the edges connected to ∂A_N. We define the uniform spanning forest (USF) on A_N to be the forest obtained by choosing the wired spanning tree of A_N ∪ ∂A_N from the uniform distribution. (Note this is not the same thing as choosing a spanning forest uniformly among all spanning forests of A_N.) The field is then defined as follows. Let a_n be a sequence of positive numbers (we will be more precise later).

  • Choose a USF on A_N. This partitions A_N into (connected) components.

  • For each component of the forest, flip a coin and assign each vertex in the component value 1 or −1 based on the outcome. This gives a field of spins {Y_{x,n} : x ∈ A_N}. If we wish, we can extend this to a field on x ∈ Z^d by setting Y_{x,n} = 1 for x ∉ A_N.

  • Let φ_n(x) = a_n Y_{nx,n}, which is a field defined on L_n := n^{−1} Z^d.
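The coin-flip step above is easy to express in code. The following is an illustrative sketch of our own (the function name and the representation of the forest as an edge list are not from the paper): given the forest, each connected component receives a single fair ±1 coin flip, and every vertex inherits its component's sign.

```python
import random

def spin_field(vertices, forest_edges, seed=0):
    """Assign one fair +/-1 coin flip to each connected component of the
    forest and give every vertex the sign of its component."""
    rng = random.Random(seed)
    adj = {v: [] for v in vertices}
    for a, b in forest_edges:
        adj[a].append(b)
        adj[b].append(a)
    spins, seen = {}, set()
    for v in vertices:
        if v in seen:
            continue
        sign = rng.choice((1, -1))      # one coin flip per component
        seen.add(v)
        stack = [v]
        while stack:                    # depth-first sweep of the component
            w = stack.pop()
            spins[w] = sign
            for u in adj[w]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
    return spins

# toy forest with two components {0, 1, 2} and {3, 4}
field = spin_field([0, 1, 2, 3, 4], [(0, 1), (1, 2), (3, 4)])
```

Rescaling such a field by a_n, as in the last bullet, gives φ_n.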

This random function is constructed in a manner similar to the Edwards-Sokal coupling of the FK-Ising model [7]. The content of our main theorem, which we now state, is that for d ≥ 4 we can choose a_n so that φ_n converges to the bi-Laplacian Gaussian field on R^d. If h ∈ C_0^∞(R^d), we write

⟨h, φ_n⟩ = n^{−d} Σ_{x∈L_n} h(x) φ_n(x).

Theorem 1.1.

  • If d ≥ 5, there exists a such that if a_n = a n^{(d−4)/2}, then for every h_1, ..., h_m ∈ C_0^∞(R^d), the random variables ⟨h_j, φ_n⟩ converge in distribution to centered jointly Gaussian random variables with covariance

∫∫ h_j(z) h_k(w) |z − w|^{4−d} dz dw.


  • If d = 4 and a_n = 1/√(3 log n), then for every h_1, ..., h_m ∈ C_0^∞(R^4) with

∫ h_j(z) dz = 0,  j = 1, ..., m,

the random variables ⟨h_j, φ_n⟩ converge in distribution to centered jointly Gaussian random variables with covariance

−∫∫ h_j(z) h_k(w) log |z − w| dz dw.

Remark 1.2.

  • For d = 4, we could choose the cutoff N = n (log n)^α for any α > 0. For d > 4, we could do the same construction with no cutoff (N = ∞) and get the same result. For convenience, we choose a specific value of α.

  • We will prove the result for m = 1, h_1 = h. The general result follows by applying the m = 1 result to any linear combination of h_1, ..., h_m.

  • We will do the proof only for d = 4; the d > 4 case is similar but easier (see [22] for the d ≥ 5 case done in detail). In the d = 4 proof, we will have a slightly different definition of a_n, but we will show that a_n ∼ 1/√(3 log n).

Our main result relies strongly on the fact that the scaling limit of the loop-erased random walk in four or more dimensions is Brownian motion, that is, has Gaussian limits [15, 14]. Although we do not use the results of those papers explicitly, they are used implicitly in our calculations, where the probability that a random walk and a loop-erased walk avoid each other is comparable to that of two simple random walks and can be given by the expected number of intersections times a non-random quantity. This expected number of intersections is the discrete biharmonic function that gives the covariance structure of the bi-Laplacian random field.

Gaussian fluctuations have been observed and studied for numerous physical systems. Many statistical physics models with long correlations are known to converge to the Gaussian free field. Typical examples come from domino tilings [9], random matrix theory [3, 19] and random growth models [6]. Our model can be viewed as an analogue of critical statistical mechanics


models in four dimensions, in the sense that the correlation functions are scale invariant.

The discrete bi-Laplacian Gaussian field (in the physics literature, this is known as the membrane model) is studied in [20, 10, 11]; its continuous counterpart is the bi-Laplacian Gaussian field. Our model can be viewed as another natural discrete object that converges to the bi-Laplacian Gaussian field. In the one-dimensional case, Hammond and Sheffield constructed a reinforced random walk with long range memory [8], which can be associated with a spanning forest attached to Z. Our construction can also be viewed as a higher-dimensional analogue of "forest random walks".

1.1 Uniform spanning trees

Here we review some facts about the uniform spanning forest (that is, wired spanning trees) on finite subsets of Z^d and loop-erased random walks (LERW) on Z^d. Most of the facts extend to general graphs as well. For more details, see [16, Chapter 9].

Given a finite subset A ⊂ Z^d, the uniform wired spanning tree in A is a subgraph of the graph A ∪ ∂A, chosen uniformly at random among all spanning trees of A ∪ ∂A. (A spanning tree T is a subgraph such that any two vertices in T are connected by a unique simple path in T.) We define the uniform spanning forest (USF) on A to be the uniform wired spanning tree restricted to the edges in A. One can also consider the uniform spanning forest on all of Z^d [18, 2], but we will not need this construction.

The uniform wired spanning tree, and hence the USF, on A can be generated by Wilson's algorithm [23], which we recall here:

  • Order the elements of A = {x_1, ..., x_k}.

  • Start a simple random walk at x_1 and stop it when it reaches ∂A, giving a nearest neighbor path ω. Erase the loops chronologically to produce a self-avoiding path η = LE(ω). Add all the edges of η to the tree, which now gives a tree T_1 on a subset of A ∪ ∂A that includes ∂A.

  • Choose the vertex of smallest index that has not been included and run a simple random walk until it reaches a vertex in T_1. Erase the loops and add the new edges to the tree, giving a tree T_2.

• Continue until all vertices are included in the tree.
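The steps above can be sketched in code. The following is an illustrative implementation of our own on a small ball in Z^4 (all names are ours, not the paper's); the chronological loop erasure is done by truncating the current path whenever the walk revisits a site on it, and edges meeting the wired boundary are dropped at the end, which turns the wired spanning tree into the spanning forest on A.

```python
import itertools
import random

def wilsons_usf(radius, dim=4, seed=1):
    """Wired uniform spanning forest on A = {x in Z^dim : |x| <= radius},
    generated by Wilson's algorithm with the boundary wired to one root."""
    rng = random.Random(seed)
    inside = lambda x: sum(c * c for c in x) <= radius * radius
    steps = [t for i in range(dim)
             for t in (tuple(int(j == i) for j in range(dim)),
                       tuple(-int(j == i) for j in range(dim)))]
    A = [x for x in itertools.product(range(-radius, radius + 1), repeat=dim)
         if inside(x)]
    in_tree, forest = set(), []
    for start in A:                      # any fixed ordering of A works
        if start in in_tree:
            continue
        path = [start]                   # current loop-erased path
        while inside(path[-1]) and path[-1] not in in_tree:
            step = rng.choice(steps)
            nxt = tuple(a + b for a, b in zip(path[-1], step))
            if nxt in path:              # the walk closed a loop: erase it
                path = path[:path.index(nxt) + 1]
            else:
                path.append(nxt)
        # keep the loop-erased edges; edges touching the wired boundary
        # are dropped, turning the wired tree into a forest on A
        forest.extend((a, b) for a, b in zip(path, path[1:])
                      if inside(a) and inside(b))
        in_tree.update(p for p in path if inside(p))
    return A, forest

A, forest = wilsons_usf(2)               # small ball in Z^4
```

Combining this with a coin flip per component, as described earlier in this section, produces a sample of the spin field.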


Wilson's theorem states that the distribution of the tree is independent of the order in which the vertices were chosen and is uniform among all spanning trees. In particular we get the following. Suppose that at each x ∈ A, we have an independent simple random walk path ω^x from x stopped when it reaches ∂A.

  • If x, y ∈ A, then the probability that x, y are in the same component of the USF is the same as

P{LE(ω^x) ∩ ω^y = ∅}. (1)

Using this characterization, we can see the three regimes for the dimension d. Let us first consider the probabilities that neighboring points are in the same component. Let q_N be the probability that a nearest neighbor of the origin is in a different component from the origin.

q_∞ := lim_{N→∞} q_N > 0,  d ≥ 5,

q_N ≍ (log N)^{−1/3},  d = 4.

For d < 4, q_N decays like a power of N.

  • If d > 4 and |x| = n, the probability that 0 and x are in the same component is comparable to |x|^{4−d}. This is true even if N = ∞.

  • If d = 4 and |x| = n, the probability that 0 and x are in the same component is comparable to 1/log n. However, if we chose N = ∞, the probability would equal one.

The last fact can be used to show that the uniform spanning tree in all of Z^4 is, in fact, a tree. For d < 4, the probability that 0 and x are in the same component is asymptotic to 1 and our construction is not interesting. This is why we restrict to d ≥ 4.

1.2 Intersection probabilities for loop-erased walk

The hardest part of the analysis of this model is estimation of the intersection probability (1) for d = 4. We will review what is known and then state the new results of this paper. We will be somewhat informal here; see Section 2 for precise statements.


Let ω denote a simple random walk path starting at the origin and let η = LE(ω) denote its loop erasure. We can write

η = {0} ∪ η_1 ∪ η_2 ∪ ···,

where η_j denotes the path from the first visit to {|z| ≥ e^{j−1}} stopped at the step immediately before the first visit to {|z| ≥ e^j}. Let S be an independent simple random walk starting at the origin, write S for the set S[1,∞), and let us define

Z_n = Z_n(η) = P{S ∩ ({0} ∪ η_1 ∪ η_2 ∪ ··· ∪ η_n) = ∅ | η},

Z̄_n = Z̄_n(η) = P{S ∩ (η_1 ∪ η_2 ∪ ··· ∪ η_n) = ∅ | η},

Δ_n = Δ_n(η) = P{S ∩ η_n ≠ ∅ | η}.

With probability one, Z_∞ = 0, and hence E[Z_n] → 0 as n → ∞. In [14], building on the work on slowly recurrent sets [13], it was shown that for integers r, s ≥ 0,

p_{n,r,s} := E[Z_n^r Z̄_n^s] ≍ n^{−(r+s)/3},

where ≍ means that the ratio of the two sides is bounded away from 0 and ∞ (with constants depending on r, s). The proof uses the "slowly recurrent" nature of the paths to derive a relation

p_{n,r,s} ∼ p_{n−1,r,s} (1 − (r + s) E[Δ_n]). (2)

We improve on this result by establishing an asymptotic result:

lim_{n→∞} n^{(r+s)/3} p_{n,r,s} = c_{r,s} ∈ (0,∞).

The proof does not compute the constant c_{r,s} except in one special case, r = 2, s = 1. Let p_n = p_{n,2,1}. We show that

p_n ∼ π²/(24n).

This particular value of r, s comes from a path decomposition in trying to estimate

P{S ∩ η_n ≠ ∅} = E[Δ_n].

On the event {S ∩ η_n ≠ ∅} there are many intersections of the paths. By focusing on a particular one, using a generalization of a first-passage decomposition, it turns out that

P{S ∩ η_n ≠ ∅} ∼ G²_n p_n,

where G²_n denotes the expected number of intersections of S with a simple random walk starting at the sphere of radius e^{n−1} stopped at the sphere of radius e^n. This expectation can be estimated well, giving G²_n ∼ 8/π², and hence we get a relation

P{S ∩ η_n ≠ ∅} ∼ (8/π²) p_n.

Combining this with (2), we get a consistency relation,

p_n ∼ p_{n−1} [1 − (24/π²) p_n],

which leads to n p_n ∼ π²/24 and

P{S ∩ η_n ≠ ∅} ∼ 1/(3n).

From this we also get the asymptotic value of (1),

P{LE(ω^x) ∩ ω^y = ∅} ∼ 1/(3 log N).
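The step from the consistency relation to n p_n ∼ π²/24 can be checked numerically: the recursion p_n = p_{n−1}/(1 + c p_{n−1}), which is the exact form of p_n ∼ p_{n−1}[1 − c p_n] with c = 24/π², satisfies 1/p_n = 1/p_0 + cn, so n p_n → 1/c regardless of the starting value. A quick iteration (our own illustration, not from the paper):

```python
import math

c = 24 / math.pi ** 2
p = 1.0                      # any O(1) initial value works
n_steps = 200000
for _ in range(n_steps):
    p = p / (1 + c * p)      # p_n = p_{n-1} / (1 + c * p_{n-1})
n_p = n_steps * p
# n_p approaches 1/c = pi^2 / 24 = 0.41123...
```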

1.3 Estimates for Green’s functions

Let G_N(x, y) denote the usual random walk Green's function on A_N, and let

G²(x, y) = Σ_{z∈Z^d} G(x, z) G(z, y),

G²_N(x, y) = Σ_{z∈A_N} G_N(x, z) G_N(z, y),

Ḡ²_N(x, y) = Σ_{z∈A_N} G_N(x, z) G(z, y).

Note that

G²_N(x, y) ≤ Ḡ²_N(x, y) ≤ G²(x, y).

Also, G²(x, y) < ∞ if and only if d > 4. The Green's function G(x, y) is well understood for d > 2. We will use the following asymptotic estimate [16, Theorem 4.3.1]:


G(x) = C_d |x|^{2−d} + O(|x|^{−d}),  where C_d = dΓ(d/2) / ((d − 2) π^{d/2}). (3)

(Here and throughout we use the convention that if we say that a function on Z^d is O(|x|^{−r}) with r > 0, we still imply that it is finite at every point. In other words, for lattice functions, O(|x|^{−r}) really means O(1 ∧ |x|^{−r}). We do not make this assumption for functions on R^d, which could blow up at the origin.) It follows immediately, and will be more convenient for us, that

G(x) = C_d I_d(x) [1 + O(|x|^{−2})], (4)

where we write

I_d(x) = ∫_V d^dζ / |ζ − x|^{d−2},

and V = V_d = {(x_1, ..., x_d) ∈ R^d : |x_j| ≤ 1/2} denotes the d-dimensional cube of side length one centered at the origin. One aesthetic advantage is that I_d(0) is not equal to infinity.
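As a quick numeric sanity check (ours, not the paper's), the constant C_d in (3) can be evaluated directly; for d = 4 it equals 2/π², the leading coefficient of the Green's function on Z^4 used in Section 2.

```python
import math

def C(d):
    """The constant C_d = d * Gamma(d/2) / ((d - 2) * pi^(d/2)) from (3)."""
    return d * math.gamma(d / 2) / ((d - 2) * math.pi ** (d / 2))

# d = 4: C_4 = 4 * Gamma(2) / (2 * pi^2) = 2 / pi^2
c4 = C(4)
```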

Proposition 1.3. If d ≥ 5, there exists c_d such that

G²(x, y) = c_d |x − y|^{4−d} + O(|x − y|^{2−d}).

Moreover, there exists c < ∞ such that if |x|, |y| ≤ N/2, then

G²(x, y) − G²_N(x, y) ≤ c N^{4−d}.

We note that it follows immediately that

G²(x, y) = c_d J_d(x − y) [1 + O(|x − y|^{−2})],

where

J_d(x) = ∫_V d^dζ / |ζ − x|^{d−4}.

The second assertion shows that it does not matter in the limit whether we use G²(x, y) or G²_N(x, y) as our covariance matrix.

Proof. We may assume y = 0. We use (3) to write

G²(0, x) = C_d² Σ_{z∈Z^d} I_d(z) I_d(z − x) [1 + O(|z|^{−2}) + O(|z − x|^{−2})]

= C_d² ∫∫ dζ_1 dζ_2 / (|ζ_1|^{d−2} |ζ_2 − x|^{d−2}) + O(|x|^{2−d})

= C_d² |x|^{4−d} ∫∫ dζ_1 dζ_2 / (|ζ_1|^{d−2} |ζ_2 − u|^{d−2}) + O(|x|^{2−d}),

where u denotes a unit vector in R^d. This gives the first expression with

c_d = C_d² ∫∫ dζ_1 dζ_2 / (|ζ_1|^{d−2} |ζ_2 − u|^{d−2}).

For the second assertion, let S^x, S^y be independent simple random walks starting at x, y, stopped at times T^x, T^y, the first times that they reach ∂A_N. Then

G²(x, y) − G²_N(x, y) ≤ E[G²(S^x(T^x), y)] + E[G²(x, S^y(T^y))],

and we can apply the first result to the right-hand side.

Lemma 1.4. If d = 4, there exists c_0 ∈ R such that if |x|, |y|, |x − y| ≤ N/2, then

G²_N(x, y) = (8/π²) log[N/|x − y|] + c_0 + O((|x| + |y| + 1)/N + 1/|x − y|).

Proof. Using the martingale M_t = |S_t|² − t, we see that

Σ_{w∈A_N} G_N(x, w) = N² − |x|² + O(N). (5)

Using (4), we see that

Σ_{w∈A_N} G(0, w) = O(N) + (2/π²) ∫_{|s|≤N} ds/|s|² = 2N² + O(N).
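Identity (5) has a probabilistic reading: Σ_{w∈A_N} G_N(x, w) is the expected number of steps before the walk started at x leaves A_N, so for x = 0 it should be N² + O(N). A small Monte Carlo sketch (ours; the sample size is arbitrary) agrees:

```python
import random

def mean_exit_time(N, trials=2000, seed=7):
    """Average number of steps a simple random walk in Z^4 started at 0
    needs to leave the ball {|x| <= N}; by (5) this is N^2 + O(N)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = [0, 0, 0, 0]
        steps = 0
        while x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2 <= N * N:
            i = rng.randrange(4)          # pick a coordinate direction
            x[i] += rng.choice((-1, 1))   # move one lattice step
            steps += 1
        total += steps
    return total / trials

est = mean_exit_time(20)   # near 20^2 = 400, up to O(N) and sampling noise
```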

Let δ = N^{−1}[1 + |x| + |y|] and note that |x − y| ≤ δN. Since

Σ_{|w|<N(1−δ)} G_N(x − y, w) G_N(0, w) ≤ Σ_{w∈A_N} G_N(x, w) G_N(y, w) ≤ Σ_{|w|≤N(1+δ)} G_N(x − y, w) G_N(0, w),

it suffices to prove the result when y = 0, in which case δ = (1 + |x|)/N. Recall that

min_{N≤|z|≤N+1} G(x, z) ≤ G(x, w) − G_N(x, w) ≤ max_{N≤|z|≤N+1} G(x, z).


In particular,

G_N(0, w) = G(0, w) − 2/(π²N²) + O(N^{−3}),

G_N(x, w) = G(x, w) − 2/(π²N²) + O(δN^{−2}).

Using (5), we see that

Σ_w G_N(x, w) G_N(0, w) = Σ_{w∈A_N} [G(x, w) − 2/(π²N²) + O(δN^{−2})] G_N(0, w)

= O(δ) − 2/π² + Σ_{w∈A_N} G(x, w) G_N(0, w).

Similarly, we write

Σ_{w∈A_N} G(x, w) G_N(0, w) = Σ_{w∈A_N} G(x, w) [G(0, w) − 2/(π²N²) + O(N^{−3})],

and use (5) to see that

Σ_{w∈A_N} G(x, w) G_N(0, w) = −2/π² + O(δ²) + Σ_{w∈A_N} G(x, w) G(0, w).

Hence it suffices to show that there exists c′ such that

Σ_{w∈A_N} G(x, w) G(0, w) = (8/π²) log(N/|x|) + c′ + O(|x|^{−1}) + O(δ).

Define ε(x, w) by

ε(x, w) = G(x, w) G(0, w) − (2/π²)² ∫_{S_w} ds / (|x − s|² |s|²),

where S_w = w + V is the unit cube centered at w. The estimate for the Green's function implies that

|ε(x, w)| ≤ c |w|^{−3} |x|^{−2},  |w| ≤ |x|/2,

|ε(x, w)| ≤ c |w|^{−2} |x − w|^{−3},  |x − w| ≤ |x|/2,

|ε(x, w)| ≤ c |w|^{−5},  other w,


and hence,

Σ_{w∈A_N} G(x, w) G(0, w) = O(|x|^{−1}) + Σ_{w∈A_N} (2/π²)² ∫_{S_w} ds / (|x − s|² |s|²)

= O(|x|^{−1}) + (2/π²)² ∫_{|s|≤N} ds / (|x − s|² |s|²).

(The second equality had an error term of O(N^{−1}), but this is smaller than the O(|x|^{−1}) term already there.) It is straightforward to check that there exists c′ such that

∫_{|s|≤N} d⁴s / (|x − s|² |s|²) = 2π² log(N/|x|) + c′ + O(|x|/N).

1.4 Proof of Theorem 1.1

Here we give the proof of the theorem, leaving the proof of the most important lemma for Section 2. The proofs for d = 4 and d > 4 are similar, with a few added difficulties for d = 4. We will only consider the d = 4 case here. We fix h ∈ C_0^∞ with ∫h = 0 and allow implicit constants to depend on h. We will write just Y_x for Y_{x,n}. Let K be such that h(x) = 0 for |x| ≥ K. Let us write L_n = n^{−1} Z^4 ∩ {|x| ≤ K} and

⟨h, φ_n⟩ = n^{−4} a_n Σ_{x∈L_n} h(x) Y_{nx} = n^{−4} a_n Σ_{nx∈A_{nK}} h(x) Y_{nx},

where a_n is still to be specified; we will now give its value. Let ω be a simple random walk from 0 to ∂A_N in A_N and let η = LE(ω) be its loop-erasure. Let ω_1 be another simple random walk starting at the origin, stopped when it reaches ∂A_N, and let ω′_1 be ω_1 with the initial step removed. Let

where an is still to be specified. We will now give the value. Let ω be aLERW from 0 to ∂AN in AN and let η = LE(ω) be its loop-erasure. Letω1 be another simple random walk starting at the origin stopped when itreaches ∂AN and let ω′1 be ω1 with the initial step removed. Let

u(η) = Pω1 ∩ η = 0 | η,

u(η) = Pω′1 ∩ η = ∅ | η.We define

pn = E[u(η) u(η)2

].

In [] it was shown that pn 1/ log n if d = 4. One of the main goals ofSection 2 is to give the following improvement.


Lemma 1.5. If d = 4 and p_n is defined as above, then

p_n ∼ π²/(24 log n).

We can write the right-hand side as

(1/(8/π²)) · (1/(3 log n)),

where 8/π² is the nonuniversal constant in Lemma 1.4 and 1/3 is a universal constant for loop-erased walks. If d ≥ 5, then p_n ∼ c n^{4−d}. We do not prove this here.

For d = 4, we define

a_n = √((8/π²) p_n) ∼ 1/√(3 log n).

Let δ_n = exp{−(log log n)²}; this is a function that decays faster than any power of log n. Let L^k_{n,∗} be the set of x̄ = (x_1, ..., x_k) ∈ L^k_n such that |x_j| ≤ K for all j and |x_i − x_j| ≥ δ_n for each i ≠ j. Note that #(L^k_n) ≍ n^{4k} and #(L^k_n \ L^k_{n,∗}) ≤ c k² n^{4k} δ_n. In particular,

n^{−8k} a_n^{2k} Σ_{x̄ ∈ L^{2k}_n \ L^{2k}_{n,∗}} h(x_1) h(x_2) ··· h(x_{2k}) = o(√δ_n).

Let q_N(x, y) be the probability that x, y are in the same component of the USF on A_N, and note that

E[Y_{x,n} Y_{y,n}] = q_N(x, y),

E[⟨h, φ_n⟩²] = n^{−8} Σ_{x,y∈L_n} h(x) h(y) a_n² q_N(nx, ny).

An upper bound for q_N(x, y) can be given in terms of the probability that the paths of two independent random walks starting at x, y, stopped when they leave A_N, intersect. This gives

q_N(x, y) ≤ c log[N/|x − y|] / log N.


In particular,

q_N(x, y) ≤ c (log log n)² / log n,  |x − y| ≥ n δ_n. (6)

To estimate E[⟨h, φ_n⟩²] we will use the following lemma, the proof of which will be delayed.

Lemma 1.6. There exist a sequence r_n with r_n ≤ O(log log n) and a sequence ε_n ↓ 0 such that if x, y ∈ L_n with |x − y| ≥ 1/√n,

|a_n² q_N(nx, ny) − r_n + log |x − y|| ≤ ε_n.

It follows from the lemma, (6), and the trivial inequality q_N ≤ 1 that

E[⟨h, φ_n⟩²] = o(1) + n^{−2d} Σ_{x,y∈L_n} [r_n − log |x − y|] h(x) h(y)

= o(1) − n^{−2d} Σ_{x,y∈L_n} log |x − y| h(x) h(y)

= o(1) − ∫∫ h(x) h(y) log |x − y| dx dy,

which shows that the second moment has the correct limit. The second equality uses ∫h = 0 to conclude that

n^{−2d} r_n Σ_{x,y∈L_n} h(x) h(y) = o(1).

We now consider the higher moments. It is immediate from the construction that the odd moments of ⟨h, φ_n⟩ are identically zero, so it suffices to consider the even moments E[⟨h, φ_n⟩^{2k}]. We fix k ≥ 2 and allow implicit constants to depend on k as well. Let L_∗ = L^{2k}_{n,∗} be the set of x̄ = (x_1, ..., x_{2k}) ∈ L^{2k}_n such that |x_j| ≤ K for all j and |x_i − x_j| ≥ δ_n for each i ≠ j. We write h(x̄) = h(x_1) ··· h(x_{2k}). Then we see that

E[⟨h, φ_n⟩^{2k}] = n^{−2kd} a_n^{2k} Σ_{x̄∈L^{2k}_n} h(x̄) E[Y_{nx_1} ··· Y_{nx_{2k}}]

= O(√δ_n) + n^{−2kd} a_n^{2k} Σ_{x̄∈L_∗} h(x̄) E[Y_{nx_1} ··· Y_{nx_{2k}}].


Lemma 1.7. For each k, there exists c < ∞ such that the following holds. Suppose x̄ ∈ L^{2k}_{n,∗} and let ω_1, ..., ω_{2k} be independent simple random walks started at nx_1, ..., nx_{2k}, stopped when they reach ∂A_N. Let N denote the number of integers j ∈ {2, 3, ..., 2k} such that

ω_j ∩ (ω_1 ∪ ··· ∪ ω_{j−1}) ≠ ∅.

Then,

P{N ≥ k + 1} ≤ c [(log log n)³ / log n]^{k+1}.

We write y_j = nx_j and write Y_j for Y_{y_j}. To calculate E[Y_1 ··· Y_{2k}] we first sample our USF, which gives a random partition P of {y_1, ..., y_{2k}}. Note that E[Y_1 ··· Y_{2k} | P] equals 1 if P is an "even" partition in the sense that each set has an even number of elements. Otherwise, E[Y_1 ··· Y_{2k} | P] = 0. Any even partition, other than a partition into k sets of cardinality 2, will have N ≥ k + 1. Hence

E[Y_1 ··· Y_{2k}] = O([(log log n)³ / log n]^{k+1}) + Σ_P P(P_ȳ),

where the sum is over the (2k − 1)!! perfect matchings of {1, 2, ..., 2k} and P(P_ȳ) denotes the probability of getting this matching for the USF for the vertices y_1, ..., y_{2k}.

Let us consider one of these perfect matchings, which for convenience we will assume is y_1 ↔ y_2, y_3 ↔ y_4, ..., y_{2k−1} ↔ y_{2k}. We claim that

P(y_1 ↔ y_2, y_3 ↔ y_4, ..., y_{2k−1} ↔ y_{2k}) = O([(log log n)³ / log n]^{k+1}) + P(y_1 ↔ y_2) P(y_3 ↔ y_4) ··· P(y_{2k−1} ↔ y_{2k}).

Indeed, this is just inclusion-exclusion using our estimate on P{N ≥ k + 1}. If we write ε_n = ε_{n,k} = [(log log n)³ / log n]^{k+1}, we now see from symmetry that

E[⟨h, φ_n⟩^{2k}] = O(ε_n) + n^{−2kd} a_n^{2k} (2k − 1)!! Σ_{x̄∈L_∗} h(x̄) P{nx_1 ↔ nx_2, ..., nx_{2k−1} ↔ nx_{2k}}

= O(ε_n) + (2k − 1)!! [E(⟨h, φ_n⟩²)]^k.


1.5 Proof of Lemma 1.7

Here we fix k and let y_1, ..., y_{2k} be points with |y_j| ≤ Kn and |y_i − y_j| ≥ n δ_n, where we recall log δ_n = −(log log n)². Let ω_1, ..., ω_{2k} be independent simple random walks, ω_j starting at y_j, stopped when they reach ∂A_N. We let E_{i,j} denote the event that ω_i ∩ ω_j ≠ ∅, and let R_{i,j} = P(E_{i,j} | ω_j).

Lemma 1.8. There exists c < ∞ such that for all i, j and all n sufficiently large,

P{R_{i,j} ≥ c (log log n)³ / log n} ≤ 1/(log n)^{4k}.

Proof. We know that there exists c < ∞ such that if |y − z| ≥ n δ_n², then the probability that simple random walks starting at y, z, stopped when they reach ∂A_N, intersect is at most c (log log n)²/log n. Hence there exists c_1 such that

P{R_{i,j} ≤ c_1 (log log n)² / log n} ≥ 1/2. (7)

Start a random walk at z and run it until one of three things happens:

  • it reaches ∂A_N;

  • it gets within distance n δ_n² of y;

  • the path is such that the probability that a simple random walk starting at y intersects the path before reaching ∂A_N is greater than c_1 (log log n)²/log n.

If the third option occurs, then we restart the walk at the current site and do this operation again. Eventually one of the first two options will occur. Suppose it takes r trials of this process until one of the first two events occurs. Then either R_{i,j} ≤ r c_1 (log log n)²/log n or the original path starting at z gets within distance n δ_n² of y. The latter event occurs with probability O(δ_n) = o((log n)^{−4k}). Also, using (7), we can see that the probability that it took at least r trials is bounded by (1/2)^r. By choosing r = c_2 log log n, we can make this probability less than 1/(log n)^{4k}.

Proof of Lemma 1.7. Let R be the maximum of R_{i,j} over all i ≠ j in {1, ..., 2k}. Then, at least for n sufficiently large,

P{R ≥ c (log log n)³ / log n} ≤ 1/(log n)^{3k}.


Let

E_{·,j} = ⋃_{i=1}^{j−1} E_{i,j}.

On the event {R < c (log log n)³/log n}, we have

P{E_{·,j} | ω_1, ..., ω_{j−1}} ≤ c (j − 1) (log log n)³ / log n.

If N denotes the number of j for which E_{·,j} occurs, we see that

P{N ≥ k + 1} ≤ c [(log log n)³ / log n]^{k+1}.

2 Loop-erased walk in Z^4

The purpose of the remainder of this paper is to improve results about the escape probability for loop-erased walk in four dimensions.

We start by setting up our notation. Let S be a simple random walk in Z^4 defined on the probability space (Ω, F, P) starting at the origin. Let G(x, y) be the Green's function for simple random walk, which we recall satisfies

G(x) = 2/(π²|x|²) + O(|x|^{−4}),  |x| → ∞.

Using this we see that there exists a constant λ such that as r → ∞,

Σ_{|x|≤r} G(x)² = (8/π²) log r + λ + O(r^{−1}). (8)
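Relation (8) predicts that the contribution of a dyadic shell r < |x| ≤ 2r to the sum is (8/π²) log 2 + O(r^{−1}). A crude lattice check (our own illustration), using only the leading term 2/(π²|x|²) of G, reproduces this within a few percent:

```python
import itertools
import math

def shell_sum(r1, r2):
    """Sum of (2 / (pi^2 |x|^2))^2 over lattice points x in Z^4
    with r1 < |x| <= r2."""
    total = 0.0
    for x in itertools.product(range(-r2, r2 + 1), repeat=4):
        s = x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2
        if r1 * r1 < s <= r2 * r2:
            total += (2.0 / (math.pi ** 2 * s)) ** 2
    return total

val = shell_sum(6, 12)
predicted = (8 / math.pi ** 2) * math.log(2)   # about 0.562
```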

It will be easiest to work on geometric scales, and we let C_n, A_n be the discrete balls and annuli defined by

C_n = {z ∈ Z^4 : |z| < e^n},  A_n = C_n \ C_{n−1} = {z ∈ C_n : |z| ≥ e^{n−1}}.

Let

σ_n = min{j ≥ 0 : S_j ∉ C_n},


and let F_n denote the σ-algebra generated by {S_j : j ≤ σ_n}. If V ⊂ Z^4, we write

H(x, V) = P^x{S[0,∞) ∩ V ≠ ∅},  H(V) = H(0, V),  Es(V) = 1 − H(V),

H̄(x, V) = P^x{S[1,∞) ∩ V ≠ ∅},  Ēs(V) = 1 − H̄(0, V).

Note that Es(V) = Ēs(V) if 0 ∉ V; otherwise a standard last-exit decomposition shows that

Es(V⁰) = G_{Z^4\V⁰}(0, 0) Ēs(V),

where V⁰ = V \ {0}.

We have to be a little careful about the definitions of the loop-erasure of the random walk and loop-erasures of subpaths of the walk. We will use the following notations.

  • Ŝ[0,∞) denotes the (forward) loop-erasure of S[0,∞), and

Γ = Ŝ[1,∞) = Ŝ[0,∞) \ {0}.

  • ω_n denotes the finite random walk path S[σ_{n−1}, σ_n].

  • η_n = LE(ω_n) denotes the loop-erasure of S[σ_{n−1}, σ_n].

  • Γ_n = LE(S[0, σ_n]) \ {0}; that is, Γ_n is the loop-erasure of S[0, σ_n] with the origin removed.

Note that S[1,∞) is the concatenation of the paths ω_1, ω_2, .... However, it is not true that Γ is the concatenation of η_1, η_2, ..., and that is one of the technical issues that must be addressed in the proof.

Let Y_n, Z_n be the F_n-measurable random variables

Y_n = H(η_n),  Z_n = Es[Γ_n],  G_n = G_{Z^4\Γ_n}(0, 0).

As noted above,

Ēs(Γ_n ∪ {0}) = G_n^{−1} Z_n.

It is easy to see that 1 ≤ G_n ≤ 8, and using transience, we can see that with probability one,

lim_{n→∞} G_n = G_∞ := G_{Z^4\Γ}(0, 0).


Theorem 2.1. For every 0 ≤ r, s < ∞, there exists c_{r,s} < ∞ such that

E[Z_n^r G_n^{−s}] ∼ c_{r,s} n^{−r/3}.

Moreover, c_{3,2} = π²/24.

Our methods do not compute the constant c_{r,s} except in the case r = 3, s = 2. The proof of this theorem requires several steps, which we will outline now. For the remainder of this paper we fix 0 < r ≤ r_0 and allow constants to depend on r_0 but not otherwise on r. We write

p_n = E[Z_n^r],  p̄_n = E[Z_n³ G_n^{−2}].

Here are the steps in order, with a reference to the section where each argument appears. Let

φ_n = ∏_{j=1}^n exp{−E[H(η_j)]}.

  • Section 2.1. Show that E[H(η_n)] = O(n^{−1}), and hence

φ_n = φ_{n−1} exp{−E[H(η_n)]} = φ_{n−1} [1 + O(1/n)].

  • Section 2.2. Show that

p_n = p_{n−1} [1 + O(log⁴ n / n)]. (9)

  • Section 2.4. Find a function φ̄_n such that there exist c_r with

p_{n⁴} = c_r φ̄_n^r [1 + O(log² n / n)].

  • Section 2.5. Find u, c > 0 such that

φ̄_n = c φ_{n⁴} [1 + O(n^{−u})].

  • Combining the previous estimates, we see that there exist c′ = c′_r and u such that

p_n = c′ φ_n^r [1 + O(log⁴ n / n^{1/4})].


  • Section 2.6. Show that there exist c′_{r,s} and u such that

E[Z_n^r G_n^{−s}] = c′_{r,s} φ_n^r [1 + O(log⁴ n / n^{1/4})].

  • Section 2.7. Use a path decomposition to show that

E[H(η_n)] = (8/π²) p̄_n [1 + O(n^{−u})],

and combine the last two estimates to conclude that

p̄_n ∼ π²/(24n).

The error estimates are probably not optimal, but they suffice for proving our theorem. We say that a sequence ε_n of positive numbers is fast decaying if it decays faster than every power of n, that is,

lim_{n→∞} n^k ε_n = 0

for every k > 0. We will write ε_n for fast decaying sequences. As is the convention for constants, the exact value of ε_n may change from line to line. We will use implicitly the fact that if ε_n is fast decaying, then so is ε′_n, where

ε′_n = Σ_{m≥n} ε_m.

2.1 Preliminaries

In this subsection we prove some necessary lemmas about simple random walk in Z^4. The reader could skip this section now and refer back as necessary. We will use the following facts about intersections of random walks in Z^4; see [12, 13].

Proposition 2.2. There exist 0 < c_1 < c_2 < ∞ such that the following is true. Suppose S is a simple random walk starting at the origin in Z^4 and a > 2. Then

c_1/√(log n) ≤ P{S[0, n] ∩ S[n + 1, ∞) = ∅} ≤ P{S[0, n] ∩ S[n + 1, 2n] = ∅} ≤ c_2/√(log n),

P{S[0, n] ∩ S[n(1 + a^{−1}), ∞) ≠ ∅} ≤ c_2 log a / log n.

Moreover, if S¹ is an independent simple random walk starting at z ∈ Z^4,

P{S[0, n] ∩ S¹[0,∞) ≠ ∅} ≤ c_2 log α / log n, (10)

where

α = max{2, √n/|z|}.

An important corollary of the proposition is that

sup_n n E[H(η_n)] ≤ sup_n n E[H(ω_n)] < ∞, (11)

and hence

exp{−E[H(η_n)]} = 1 − E[H(η_n)] + O(n^{−2}),

and if m < n,

φ_n = φ_m [1 + O(m^{−1})] ∏_{j=m+1}^n [1 − E[H(η_j)]].

Corollary 2.3. There exists c < ∞ such that if n > 0, α ≥ 2, and we let m = m_{n,α} = (1 + α^{−1})n,

Y = Y_{n,α} = max_{j≥m} H(S_j, S[0, n]),

Y′ = Y′_{n,α} = max_{0≤j≤n} H(S_j, S[m, 2n]),

then for every 0 < u < 1,

P{Y ≥ log α/(log n)^u} ≤ c/(log n)^{1−u},  P{Y′ ≥ log α/(log n)^u} ≤ c/(log n)^{1−u}.

Proof. Let

τ = min{j ≥ m : H(S_j, S[0, n]) ≥ log α/(log n)^u}.


The strong Markov property implies that

P{S[0, n] ∩ S[m,∞) ≠ ∅ | τ < ∞} ≥ log α/(log n)^u,

and hence Proposition 2.2 implies that

P{τ < ∞} ≤ c_2/(log n)^{1−u}.

This gives the first inequality, and the second can be done similarly by looking at the reversed path.

Lemma 2.4. There exists c < ∞ such that for all n:

  • if m < n,

P{S[σ_n,∞) ∩ C_{n−m} ≠ ∅ | F_n} ≤ c e^{−2m}; (12)

  • if φ is a positive (discrete) harmonic function on C_n and x ∈ C_{n−1},

|log[φ(x)/φ(0)]| ≤ c |x| e^{−n}. (13)

Proof. These are standard estimates; see, e.g., [16, Theorem 6.3.8, Proposition 6.4.2].

Lemma 2.5. Let

σ_n^− = σ_n − ⌊n^{−1/4} e^{2n}⌋,  σ_n^+ = σ_n + ⌊n^{−1/4} e^{2n}⌋,

S_n^− = S[0, σ_n^−],  S_n^+ = S[σ_n^+, ∞),

R = R_n = max_{x∈S_n^−} H(x, S_n^+) + max_{y∈S_n^+} H(y, S_n^−).

Then, for all n sufficiently large,

P{R_n ≥ n^{−1/3}} ≤ n^{−1/3}. (14)

Our proof will actually give a stronger estimate, but (14) is all that we need and makes for a somewhat cleaner statement.


Proof. Using the fact that P{σ_n ≤ (k + 1) e^{2n} | σ_n ≥ k e^{2n}} is bounded uniformly away from zero, we can see that there exists c_0 with

P{σ_n ≥ c_0 e^{2n} log n} ≤ O(n^{−1}).

Let N = N_n = ⌈c_0 e^{2n} log n⌉, k = k_n = ⌊n^{−1/4} e^{2n}/4⌋, and let E_j = E_{j,n} be the event that either

max_{i≤jk} H(S_i, S[(j + 1)k, N]) ≥ log n / n^{1/4}

or

max_{(j+1)k≤i≤N} H(S_i, S[0, jk]) ≥ log n / n^{1/4}.

Using the previous corollary, we see that P(E_j) ≤ O(n^{−3/4}), and hence

P[⋃_{jk≤N} E_j] ≤ O(log n / n^{1/2}).

Lemma 2.6. Let

M_n = max |S_j − S_k|,

where the maximum is over all 0 ≤ j ≤ k ≤ n e^{2n} with |j − k| ≤ n^{−1/4} e^{2n}. Then

P{M_n ≥ n^{−1/5} e^n}

is fast decaying.

Proof. This is a standard large deviation estimate for random walk; see, e.g., [16, Corollary A.2.6].

Lemma 2.7. Let U_n be the event that there exists k ≥ σ_n with

LE(S[0, k]) ∩ C_{n−log² n} ≠ Γ ∩ C_{n−log² n}.

Then P(U_n) is fast decaying.

Proof. By the loop-erasing process, we can see that the event U_n is contained in the event that either

S[σ_{n−(1/2) log² n},∞) ∩ C_{n−log² n} ≠ ∅

or

S[σ_n,∞) ∩ C_{n−(1/2) log² n} ≠ ∅.

The probability that either of these happens is fast decaying by (12).


Proposition 2.8. If Λ(m, n) = S[0,∞) ∩ A(m, n), where A(m, n) = C_n \ C_m, then the sequences

P{H[Λ(n − 1, n)] ≥ log² n / n},  P{H(ω_n) ≥ log⁴ n / n}

are fast decaying.

Proof. Using Proposition 2.2, we can see that there exists c < ∞ such that for all |z| ≥ e^{n−1},

P^z{H(S[0,∞)) ≥ c/n} ≤ 1/2.

Using this and the strong Markov property, we can show by induction that for every positive integer k,

P{H[Λ(n − 1, n)] ≥ ck/n} ≤ 2^{−k}.

Setting k = ⌊c^{−1} log² n⌋, we see that the first sequence is fast decaying. For the second, we use (12) to see that

P{ω_n ⊄ A(n − log² n, n)}

is fast decaying, and if ω_n ⊂ A(n − log² n, n),

H(ω_n) ≤ Σ_{n−log² n≤j≤n} H[Λ(j − 1, j)].

2.2 Sets in X_n

Simple random walk paths in Z^4 are "slowly recurrent" sets in the terminology of [13]. In this section we will consider subcollections X_n of the collection of slowly recurrent sets and give uniform bounds for escape probabilities for such sets. We will then apply these uniform bounds to random walk paths and their loop-erasures.

If V ⊂ Z^4, we write

V_n = V ∩ A(n − 1, n),  h_n = H(V_n).

Using the Harnack inequality, we can see that there exist 0 < c_1 < c_2 < ∞ such that

c_1 h_n ≤ H(z, V_n) ≤ c_2 h_n,  z ∈ C_{n−2} ∪ A(n + 1, n + 2).

Let X_n denote the collection of subsets V of Z^4 such that for all integers m ≥ √n,

H(V_m) ≤ log² m / m.

Proposition 2.9. $\mathbf{P}\{S[0,\infty) \notin \mathcal X_n\}$ is a fast decaying sequence.

Proof. This is an immediate corollary of Proposition 2.8.

Let $E_n$ denote the event
\[ E_n = E_{n,V} = \{S[1, \sigma_n] \cap V = \emptyset\}. \]

Proposition 2.10. There exists $c < \infty$ such that if $V \in \mathcal X_n$, $m \ge n/10$, and $\mathbf{P}(E_{m+1} \mid E_m) \ge 1/2$, then
\[ \mathbf{P}(E^c_{m+2} \mid E_{m+1}) \le \frac{c \log^2 n}{n}. \]

Proof. For $z \in \partial C_{m+1}$, let
\[ u(z) = \mathbf{P}\{S(\sigma_{m+1}) = z \mid E_{m+1}\}. \]
Then,
\[ \mathbf{P}(E^c_{m+2} \mid E_{m+1}) = \sum_{z \in \partial C_{m+1}} u(z)\, \mathbf{P}^z\{S[0, \sigma_{m+2}] \cap V \ne \emptyset\}. \]
Note that
\[ u(z) = \frac{\mathbf{P}\{S(\sigma_{m+1}) = z,\, E_{m+1}\}}{\mathbf{P}(E_{m+1})} \le \frac{2\, \mathbf{P}\{S(\sigma_{m+1}) = z,\, E_{m+1}\}}{\mathbf{P}(E_m)} \le 2\, \mathbf{P}\{S(\sigma_{m+1}) = z \mid E_m\}. \]
Using the Harnack principle, we see that
\[ \mathbf{P}\{S(\sigma_{m+1}) = z \mid E_m\} \le \sup_{w \in \partial C_m} \mathbf{P}^w\{S(\sigma_{m+1}) = z\} \le c\, \mathbf{P}\{S(\sigma_{m+1}) = z\}. \]


Therefore,
\[ \mathbf{P}(E^c_{m+2} \mid E_{m+1}) \le c\, \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap V \ne \emptyset\} \le c \sum_{k=1}^{m+2} \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap V_k \ne \emptyset\}. \]
Note that
\[ \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap (V_m \cup V_{m+1} \cup V_{m+2}) \ne \emptyset\} \le H(V_m \cup V_{m+1} \cup V_{m+2}) \le c\, \frac{\log^2 n}{n}. \]
Using (12), we see that for $\lambda$ large enough,
\[ \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap C_{m - \lambda \log m} \ne \emptyset\} \le c\, n^{-2}. \]
For $m - \lambda \log m \le k \le m - 1$, we estimate
\[ \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap V_k \ne \emptyset\} \le \mathbf{P}[E']\, \mathbf{P}\{S[\sigma_{m+1}, \sigma_{m+2}] \cap V_k \ne \emptyset \mid E'\}, \]
where $E' = \{S[\sigma_{m+1}, \sigma_{m+2}] \cap C_{k+1} \ne \emptyset\}$. The first probability on the right-hand side is $\exp\{-O(m-k)\}$ and the second is $O(\log^2 n / n)$. Summing over $k$ we get the result.

Proposition 2.11. There exists $c < \infty$ and a fast decaying sequence $\varepsilon_n$ such that if $V \in \mathcal X_n$ and $\mathbf{P}(E_n) \ge \varepsilon_n$, then
\[ \mathbf{P}(E^c_{j+1} \mid E_j) \le \frac{c \log^2 n}{n}, \qquad \frac{3n}{4} \le j \le n. \]

Proof. It suffices to consider $n$ sufficiently large. If $\mathbf{P}(E_{m+1} \mid E_m) \le 1/2$ for all $n/4 \le m \le n/2$, then $\mathbf{P}(E_n) \le (1/2)^{n/4}$, which is fast decaying. If $\mathbf{P}(E_{m+1} \mid E_m) \ge 1/2$ for some $n/4 \le m \le n/2$, then (for $n$ sufficiently large) we can use the previous proposition and induction to conclude that $\mathbf{P}(E_{k+1} \mid E_k) \ge 1/2$ for $m \le k \le n$.

Corollary 2.12. There exists $c$ such that for all $n$,
\[ |\log(p_n / p_{n+1})| \le \frac{c \log^4 n}{n}. \]


In particular, there exists $c$ such that if $n^4 \le m \le (n+1)^4$,
\[ |\log(p_m / p_{n^4})| \le \frac{c \log^4 n}{n}. \]

Proof. Except for an event with fast decaying probability, we have
\[ \Gamma_n \,\triangle\, \Gamma_{n+1} \subset \Lambda(n - \log^2 n, n+1) \]
and
\[ H\big(\Lambda(n - \log^2 n, n+1)\big) \le \frac{c \log^4 n}{n}. \]
This implies that there exists a fast decaying sequence $\varepsilon_n$ such that, except for an event of probability at most $\varepsilon_n$, either $Z_n \le \varepsilon_n$ or
\[ |\log Z_n - \log Z_{n+1}| \le \frac{c \log^4 n}{n}. \]

2.3 Loop-free times

One of the technical nuisances in the analysis of the loop-erased random walk is that if $j < k$, it is not necessarily the case that
\[ \mathrm{LE}(S[j,k]) = \mathrm{LE}(S[0,\infty)) \cap S[j,k]. \]
However, this is the case for special times called loop-free times. We say that $j$ is a (global) loop-free time if
\[ S[0,j] \cap S[j+1,\infty) = \emptyset. \]
Proposition 2.2 shows that the probability that $j$ is loop-free is comparable to $1/(\log j)^{1/2}$. From the definition of chronological loop erasing we can see the following:

• If $j < k$ and $j, k$ are loop-free times, then for all $m \le j < k \le n$,
\[ \mathrm{LE}(S[m,n]) \cap S[j,k] = \mathrm{LE}(S[0,\infty)) \cap S[j,k] = \mathrm{LE}(S[j,k]). \tag{15} \]
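The chronological loop-erasing procedure and the set identity in (15) can be illustrated concretely on a finite path. The following is a minimal sketch (not part of the paper; the helper names `loop_erase` and `srw_z4` are ours): it erases loops in the order they are created, locates the loop-free times of a finite walk in $\mathbb Z^4$, and checks that the sites of the global loop erasure inside a window between consecutive loop-free times agree with the loop erasure of that window.

```python
import random

def loop_erase(path):
    # Chronological loop erasure: scan the path in time order and,
    # whenever a site is revisited, erase the loop just closed,
    # keeping the earlier occurrence of the site.
    le = []
    for v in path:
        if v in le:
            del le[le.index(v) + 1:]
        else:
            le.append(v)
    return le

def srw_z4(n, rng):
    # Simple random walk path of n steps in Z^4 started at the origin.
    steps = [(1, 0, 0, 0), (-1, 0, 0, 0), (0, 1, 0, 0), (0, -1, 0, 0),
             (0, 0, 1, 0), (0, 0, -1, 0), (0, 0, 0, 1), (0, 0, 0, -1)]
    path = [(0, 0, 0, 0)]
    for _ in range(n):
        dx = rng.choice(steps)
        path.append(tuple(a + b for a, b in zip(path[-1], dx)))
    return path

rng = random.Random(0)
path = srw_z4(300, rng)
N = len(path) - 1

# j is a loop-free time if S[0,j] and S[j+1,N] share no site.
loop_free = [j for j in range(N)
             if not set(path[:j + 1]) & set(path[j + 1:])]

# Check the set version of (15) for consecutive loop-free times j < k:
# the sites of LE(S[0,N]) lying in S[j,k] are exactly those of LE(S[j,k]).
full_le = set(loop_erase(path))
for j, k in zip(loop_free, loop_free[1:]):
    window = set(path[j:k + 1])
    assert full_le & window == set(loop_erase(path[j:k + 1]))
```

In four dimensions self-intersections are sparse, so loop-free times are plentiful even for short paths, which is what makes this check cheap.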


It will be important for us to give upper bounds on the probability that there is no loop-free time in a certain interval of time. If $m \le j < k \le n$, let $I(j,k;m,n)$ denote the event that for all $j \le i \le k-1$,
\[ S[m,i] \cap S[i+1,n] \ne \emptyset. \]
Proposition 2.2 gives a lower bound on $\mathbf{P}[I(n,2n;0,3n)]$,
\[ \mathbf{P}[I(n,2n;0,3n)] \ge \mathbf{P}\{S[0,n] \cap S[2n,3n] \ne \emptyset\} \asymp \frac{1}{\log n}. \]
The next lemma shows that $\mathbf{P}[I(n,2n;0,3n)] \asymp 1/\log n$ by giving the upper bound.

Lemma 2.13. There exists $c < \infty$ such that
\[ \mathbf{P}[I(n,2n;0,\infty)] \le \frac{c}{\log n}. \]

Proof. Let $E = E_n$ denote the complement of $I(n,2n;0,\infty)$. We need to show that $\mathbf{P}(E) \ge 1 - O(1/\log n)$.

Let $k_n = \lfloor n/(\log n)^{3/4} \rfloor$ and let $A_i = A_{i,n}$ be the event
\[ A_i = \big\{S[n + (2i-1)k_n,\, n + 2ik_n] \cap S[n + 2ik_n + 1,\, n + (2i+1)k_n] = \emptyset\big\}, \]
and consider the events $A_1, A_2, \ldots, A_r$ where $r = \lfloor (\log n)^{3/4}/4 \rfloor$. These are $r$ independent events, each with probability greater than $c\,(\log n)^{-1/2}$. Hence the probability that none of them occurs is $\exp\{-c(\log n)^{1/4}\} = o((\log n)^{-3})$, that is,
\[ \mathbf{P}(A_1 \cup \cdots \cup A_r) \ge 1 - o\Big(\frac{1}{\log^3 n}\Big). \]
Let $B_i = B_{i,n}$ be the event
\[ B_i = \big\{S[0, n + (2i-1)k_n] \cap S[n + 2ik_n, \infty) = \emptyset\big\}, \]
and note that
\[ E \supset (A_1 \cup \cdots \cup A_r) \cap (B_1 \cap \cdots \cap B_r). \]
Since $\mathbf{P}(B_i^c) \le c \log\log n/\log n$, we see that
\[ \mathbf{P}(B_1 \cap \cdots \cap B_r) \ge 1 - \frac{c\, r \log\log n}{\log n} \ge 1 - O\Big(\frac{\log\log n}{(\log n)^{1/4}}\Big), \]


and hence,
\[ \mathbf{P}(E) \ge \mathbf{P}\big[(A_1 \cup \cdots \cup A_r) \cap (B_1 \cap \cdots \cap B_r)\big] \ge 1 - O\Big(\frac{\log\log n}{(\log n)^{1/4}}\Big). \tag{16} \]

This is a good estimate, but we need to improve on it. Let $C_j$, $j = 1, \ldots, 5$, denote the independent events (depending on $n$)
\[ I\Big(n\Big[1 + \frac{3(j-1)+1}{15}\Big],\, n\Big[1 + \frac{3(j-1)+2}{15}\Big];\; n + \frac{(j-1)n}{5},\, n + \frac{jn}{5}\Big). \]
By (16) we see that $\mathbf{P}[C_j] \le o\big(1/(\log n)^{1/5}\big)$, and hence
\[ \mathbf{P}(C_1 \cap \cdots \cap C_5) \le o\Big(\frac{1}{\log n}\Big). \]

Let $D = D_n$ denote the event that at least one of the following ten things happens:
\[ S\Big[0,\, n\Big(1 + \frac{j-1}{5}\Big)\Big] \cap S\Big[n\Big(1 + \frac{3(j-1)+1}{15}\Big), \infty\Big) \ne \emptyset, \qquad j = 1, \ldots, 5, \]
\[ S\Big[0,\, n\Big(1 + \frac{3(j-1)+2}{15}\Big)\Big] \cap S\Big[n\Big(1 + \frac{j}{5}\Big), \infty\Big) \ne \emptyset, \qquad j = 1, \ldots, 5. \]
Each of these events has probability comparable to $1/\log n$ and hence $\mathbf{P}(D) \asymp 1/\log n$. Also,
\[ I(n,2n;0,\infty) \subset (C_1 \cap \cdots \cap C_5) \cup D. \]
Therefore, $\mathbf{P}[I(n,2n;0,\infty)] \le c/\log n$.

Corollary 2.14.

1. There exists $c < \infty$ such that if $0 \le j \le j+k \le n$, then
\[ \mathbf{P}[I(j, j+k; 0, n)] \le \frac{c \log(n/k)}{\log n}. \]

2. There exists $c < \infty$ such that if $0 < \delta < 1$ and
\[ I_{\delta,n} = \bigcup_{j=0}^{n-1} I(j, j + \delta n; 0, n), \]
then
\[ \mathbf{P}[I_{\delta,n}] \le \frac{c \log(1/\delta)}{\delta \log n}. \]


Proof.

1. We will assume that $k \ge n^{1/2}$, since otherwise $\log(n/k)/\log n \ge 1/2$. Note that
\[ I(j, j+k; 0, n) \subset I(j, j+k; j-k, j+2k) \cup \{S[0, j-k] \cap S[j+2k, n] \ne \emptyset\}, \]
and
\[ \mathbf{P}[I(j, j+k; j-k, j+2k)] \le \frac{c}{\log k} = O\Big(\frac{1}{\log n}\Big), \]
\[ \mathbf{P}\{S[0, j-k] \cap S[j+2k, n] \ne \emptyset\} \le \frac{c \log(n/k)}{\log n}. \]

2. We can cover $I_{\delta,n}$ by the union of $O(1/\delta)$ events, each of probability $O(\log(1/\delta)/\log n)$.

2.4 Along a subsequence

It will be easier to prove the main result first along the subsequence $n^4$. We let $b_n = n^4$, $\tau_n = \sigma_{n^4}$, $q_n = p_{n^4}$, $\mathcal G_n = \mathcal F_{n^4}$. Let $\eta_n$ denote the (forward) loop-erasure of $S[\sigma_{(n-1)^4 + (n-1)}, \sigma_{n^4 - n}]$, and
\[ \Gamma_n = \eta_n \cap A(b_{n-1} + 4(n-1),\, b_n - 4n), \]
\[ \Gamma^*_n = \Gamma \cap A(b_{n-1} + 4(n-1),\, b_n - 4n). \]

We state the main result that we will prove in this subsection.

Proposition 2.15. There exists $c_0$ such that as $n \to \infty$,
\[ q_n = \Big[c_0 + O\Big(\frac{\log^4 n}{n^2}\Big)\Big] \exp\Big\{-r \sum_{j=1}^n h_j\Big\}, \]
where
\[ h_j = \mathbf{E}\big[H(\Gamma_j)\big]. \]


Lemma 2.16. There exists $c < \infty$ such that
\[ \mathbf{P}\{\Gamma_n \ne \Gamma^*_n\} \le \frac{c}{n^4}, \qquad \mathbf{P}\{\exists\, m \ge n \text{ with } \Gamma_m \ne \Gamma^*_m\} \le \frac{c}{n^3}. \]

Proof. The first follows from Corollary 2.14, and the second is obtained by summing over $m \ge n$.

Let $S'$ be another simple random walk, defined on another probability space $(\Omega', \mathbf{P}')$, with corresponding stopping times $\tau'_n$. Let
\[ Q_n = \mathrm{Es}\big[\Gamma_1 \cup \cdots \cup \Gamma_n\big], \qquad Y_n = H(\Gamma_n). \]
Using only the fact that
\[ \Gamma_n \subset \big\{\exp\{b_{n-1} + 4(n-1)\} \le |z| \le \exp\{b_n - 4n\}\big\}, \]
we can see that
\[ Q_{n+1} = Q_n\,[1 - Y_n]\,[1 + O(\varepsilon_n)] \tag{17} \]

for some fast decaying sequence $\varepsilon_n$. On the event that $S[0,\infty) \in \mathcal X_n$,
\[ \mathrm{Es}[\Gamma_{n+1}] = \mathrm{Es}[\Gamma_n]\Big[1 - Y_n + O\Big(\frac{\log^4 n}{n^3}\Big)\Big], \qquad H(\eta_n) = Y_n + O\Big(\frac{\log^4 n}{n^3}\Big), \]
and for $m \ge n$,
\[ Z_{m^4} = Z_{n^4} \exp\Big\{-\sum_{j=n+1}^m Y_j\Big\}\Big[1 + O\Big(\frac{\log^4 n}{n^2}\Big)\Big]. \tag{18} \]

Lemma 2.17. There exists a fast decaying sequence $\varepsilon_n$ such that we can define $\Gamma_1, \Gamma_2, \ldots$ and $\Gamma'_1, \Gamma'_2, \ldots$ on the same probability space so that $\Gamma_j$ has the same distribution as $\Gamma'_j$ for each $j$; $\Gamma'_1, \Gamma'_2, \ldots$ are independent; and
\[ \mathbf{P}\{\Gamma_n \ne \Gamma'_n\} \le \varepsilon_n. \]


In particular, if $Y'_n = H(\Gamma'_n)$,
\[ \mathbf{E}\Big[\prod_{j=n}^m [1 - Y_j] \,\Big|\, \mathcal G_{n-1}\Big] = [1 + O(\varepsilon_n)] \prod_{j=n}^m \mathbf{E}[1 - Y_j] = \Big[1 + O\Big(\frac{\log^4 n}{n^4}\Big)\Big] \exp\Big\{-\sum_{j=n}^m \mathbf{E}(Y_j)\Big\}. \]
Also, using (18), except on an event of fast decaying probability,
\[ \mathbf{E}\big[Z^r_{m^4} \mid \mathcal G_n\big] = Z^r_{n^4} \exp\Big\{-r \sum_{j=n}^m \mathbf{E}(Y_j)\Big\}\Big[1 + O\Big(\frac{\log^4 n}{n^2}\Big)\Big]. \]
Taking expectations, we see that
\[ q_m = q_n \exp\Big\{-r \sum_{j=n}^m \mathbf{E}(Y_j)\Big\}\Big[1 + O\Big(\frac{\log^4 n}{n^2}\Big)\Big], \]
which implies the proposition.

2.5 Comparing hitting probabilities

The goal of this section is to prove the following. Recall that $b_n = n^4$ and $h_n \asymp n^{-1}$.

Proposition 2.18. There exists $c < \infty$, $u > 0$ such that
\[ \Big|h_n - \sum_{j=b_{n-1}+1}^{b_n} h_j\Big| \le \frac{c}{n^{1+u}}. \]

Proof. For a given $n$ we will define a set
\[ U = U_{b_{n-1}+1} \cup \cdots \cup U_{b_n} \]
such that the following four conditions hold:
\[ U \subset \Gamma_n, \tag{19} \]
\[ U_j \subset \eta_j, \qquad j = b_{n-1}+1, \ldots, b_n, \tag{20} \]
\[ \mathbf{E}\big[H(\Gamma_n \setminus U)\big] + \sum_{j=b_{n-1}+1}^{b_n} \mathbf{E}\big[H(\eta_j \setminus U_j)\big] \le O(n^{-(1+u)}), \tag{21} \]
\[ \max_{b_{n-1} < j \le b_n}\; \max_{x \in U_j} H(x, U \setminus U_j) \le n^{-u}. \tag{22} \]

We claim that finding such a set proves the result. Indeed,
\[ H(U) \le H(\Gamma_n) \le H(U) + H(\Gamma_n \setminus U), \]
\[ \sum_{j=b_{n-1}+1}^{b_n} H(U_j) \le \sum_{j=b_{n-1}+1}^{b_n} H(\eta_j) \le \sum_{j=b_{n-1}+1}^{b_n} \big[H(U_j) + H(\eta_j \setminus U_j)\big]. \]
Also it follows from (22) and the strong Markov property that
\[ H(U) = \big[1 - O(n^{-u})\big] \sum_{j=b_{n-1}+1}^{b_n} H(U_j) = -O(n^{-1-u}) + \sum_{j=b_{n-1}+1}^{b_n} H(U_j). \]
Taking expectations and using (19)–(21) we get
\[ h_n = \mathbf{E}\big[H(\Gamma_n)\big] = O(n^{-1-u}) + \sum_{j=b_{n-1}+1}^{b_n} \mathbf{E}\big[H(\eta_j)\big] = O(n^{-1-u}) + \sum_{j=b_{n-1}+1}^{b_n} h_j. \]

Hence, we need only find the sets $U_j$. Recall that $\sigma^\pm_j = \sigma_j \pm \lfloor j^{-1/4} e^{2j} \rfloor$ and let $\bar\omega_j = S[\sigma^+_{j-1}, \sigma^-_j]$. Using Lemma 2.6, we can see that
\[ \mathbf{P}\big\{\mathrm{diam}\big(S[\sigma^-_j, \sigma^+_j]\big) \ge j^{-1/5}\, e^j\big\} \]
is fast decaying. We will set $U_j$ equal to $\eta_j \cap \bar\omega_j$ unless one of the following six events occurs, in which case we set $U_j = \emptyset$. We assume $(n-1)^4 < j \le n^4$.

1. If $j \le (n-1)^4 + 8n$ or $j \ge n^4 - 8n$.

2. If $H(\omega_j) \ge n^{-4} \log^2 n$.

3. If $\omega_j \cap C_{j - 8\log n} \ne \emptyset$.

4. If $H(\omega_j \setminus \bar\omega_j) \ge n^{-4-u}$.

5. If it is not true that there exist loop-free times in both $[\sigma_{j-1}, \sigma^+_{j-1}]$ and $[\sigma^-_j, \sigma_j]$.

6. If
\[ \sup_{x \in \bar\omega_j} H\big(x, S[0,\infty) \setminus \bar\omega_j\big) \ge n^{-u}. \]

The definition of $U_j$ immediately implies (20). Combining conditions 1 and 3, we see (for $n$ sufficiently large, which we assume throughout) that $U_j \subset A(b_{n-1} + 6n,\, b_n - 6n)$. Moreover, if there exist loop-free times in $[\sigma_{j-1}, \sigma^+_{j-1}]$ and $[\sigma^-_j, \sigma_j]$, then $\eta_n \cap \bar\omega_j = \eta_j \cap \bar\omega_j$. Therefore, (19) holds. Also, condition 6 immediately gives (22).

In order to establish (21) we first note that
\[ (\Gamma_n \cup \eta_{b_{n-1}+1} \cup \cdots \cup \eta_{b_n}) \setminus U \subset \bigcup_{j=b_{n-1}+1}^{b_n} V_j, \]
where
\[ V_j = \begin{cases} \omega_j & \text{if } U_j = \emptyset, \\ \omega_j \setminus \bar\omega_j & \text{if } U_j = \eta_j \cap \bar\omega_j. \end{cases} \]

Hence, it suffices to show that
\[ \sum_{j=b_{n-1}+1}^{b_n} \mathbf{E}\big[H(\omega_j);\, U_j = \emptyset\big] + \sum_{j=b_{n-1}+1}^{b_n} \mathbf{E}\big[H(\omega_j \setminus \bar\omega_j)\big] \le c\, n^{-1-u}. \]
We write the event $\{U_j = \emptyset\}$ as a union of six disjoint events
\[ \{U_j = \emptyset\} = E^1_j \cup \cdots \cup E^6_j, \]
where $E^i_j$ is the event that the $i$th condition above holds but none of the previous ones hold. It suffices to show that for $b_{n-1} < j \le b_n$,
\[ \mathbf{E}\big[H(\omega_j \setminus \bar\omega_j)\big] + \sum_{i=1}^6 \mathbf{E}\big[H(\omega_j)\, 1_{E^i_j}\big] \le c\, n^{-4-u}. \]

• In order to estimate $\mathbf{E}[H(\omega_j \setminus \bar\omega_j)]$, we use standard estimates on random walk to show that for $\delta > 0$, except perhaps on an event of probability $o(n^{-2})$,
\[ \mathrm{Cap}[\omega_j \setminus \bar\omega_j] \le O(n^{-\frac18 + \delta})\, n^{-1/2}\, e^n, \qquad \mathrm{diam}[\omega_j \setminus \bar\omega_j] \le n^{-\frac12 + \delta}\, e^n, \]
and by choosing $\delta = 1/64$, we have off the bad event,
\[ H(\omega_j \setminus \bar\omega_j) \le O(n^{-1 - \frac1{16}}). \tag{23} \]


We now consider the events $E^1_j, \ldots, E^6_j$.

1. Since for each $j$, $\mathbf{E}[H(\omega_j)] \asymp n^{-4}$,
\[ \sum_{j \le (n-1)^4 + 8n \text{ or } j \ge n^4 - 8n} \mathbf{E}[H(\omega_j)] = O(n^{-3}). \]

2. We have already seen that
\[ \mathbf{P}\Big\{H(\omega_j) \ge \frac{\log^2 n}{n^4}\Big\} \le O(\varepsilon_n). \]
We note that given this, for $i > 2$,
\[ \mathbf{E}\big[H(\omega_j)\, 1_{E^i_j}\big] \le \frac{\log^2 n}{n^4}\, \mathbf{P}(E^i_j). \]
Hence it suffices for $i = 3, \ldots, 6$ to prove that $\mathbf{P}(E^i_j) \le n^{-u}$ for some $u$.

3. The standard estimate gives
\[ \mathbf{P}\{\omega_j \cap C_{j - \log j} \ne \emptyset\} \le O(j^{-2}). \]

4. This we did in (23).

5. This follows from Corollary 2.14 for any $u < 1/4$.

6. This follows from Lemma 2.5.

2.6 $s \ne 0$

The next proposition is easy, but it is important for us.

Proposition 2.19. For every $r, s$ there exists $c_{r,s}$ such that
\[ \mathbf{E}\big[Z^r_n G^{-s}_n\big] = c_{r,s}\, \phi^r_n \Big[1 + O\Big(\frac{\log^4 n}{n^{1/4}}\Big)\Big]. \]


Proof. Using Lemma 2.7, we can see that there exists a fast decaying sequence $\varepsilon_n$ such that if $m \ge n$,
\[ \mathbf{P}\{|G_n - G_m| \ge \varepsilon_n\} \le \varepsilon_n. \]
Since $1 \le G_n \le 8$, this implies that
\[ \mathbf{E}\big[\phi^{-1}_n Z^r_n G^{-s}_n\big] - \mathbf{E}\big[\phi^{-1}_n Z^r_n G^{-s}_\infty\big] \]
is fast decaying and
\[ \Big|\mathbf{E}\big(\phi^{-1}_n Z^r_n G^{-s}_\infty\big) - \mathbf{E}\big(\phi^{-1}_m Z^r_m G^{-s}_\infty\big)\Big| \le c\, \big|\mathbf{E}\big(\phi^{-1}_n Z^r_n\big) - \mathbf{E}\big(\phi^{-1}_m Z^r_m\big)\big|. \]

2.7 Determining the constant

The goal of this section is to prove
\[ p_n \sim \frac{\pi^2}{24\, n}. \tag{24} \]
We will give the outline of the proof using the lemmas that we will prove afterwards. By combining the results of the previous sections, we see that
\[ \mathbf{E}\big[Z^r_n G^{-s}_n\big] = c'_{r,s}\big[1 + O(n^{-u})\big] \exp\Big\{-r \sum_{j=1}^n h_j\Big\}, \]
where $h_j = \mathbf{E}[H(\eta_j)]$, and $c'_{r,s}$ are unknown constants. In particular,
\[ p_n = c'_{3,2}\big[1 + O(n^{-u})\big] \exp\Big\{-3 \sum_{j=1}^n h_j\Big\}. \tag{25} \]
We show in Section 2.7.2 that
\[ h_n = \frac{8}{\pi^2}\, p_n + O(n^{-1-u}). \]
It follows that the limit
\[ \lim_{n\to\infty} \Big[\log p_n + \frac{24}{\pi^2} \sum_{j=1}^n p_j\Big] \]


exists and is finite. Using this and $p_{n+1} = p_n\big[1 + O(\log^4 n/n)\big]$ (see Corollary 2.12), we can conclude the result (see Lemma 2.20). We note that it follows from this and (25) that there exists $c$ such that
\[ \phi^3_n = \exp\Big\{-3 \sum_{j=1}^n h_j\Big\} \sim \frac{c}{n}. \]
Let $W$ denote a simple random walk independent of $S$. Then, $h_n = \mathbf{P}(E)$, where
\[ E = E_n = \{W[0,\infty) \cap \eta_n \ne \emptyset\}. \]

2.7.1 A lemma about a sequence

Lemma 2.20. Suppose $p_1, p_2, \ldots$ is a sequence of positive numbers with $p_{n+1}/p_n \to 1$ and such that
\[ \lim_{n\to\infty} \Big[\log p_n + \beta \sum_{j=1}^n p_j\Big] \]
exists and is finite. Then
\[ \lim_{n\to\infty} n\, p_n = 1/\beta. \]

Proof. It suffices to prove the result for $\beta = 1$, for otherwise we can consider $\tilde p_n = \beta\, p_n$. Let
\[ a_n = \log p_n + \sum_{j=1}^n p_j. \]
We will use the fact that $a_n$ is a Cauchy sequence.

We first claim that for every $\delta > 0$, there exists $N_\delta > 0$ such that if $n \ge N_\delta$ and $p_n = (1+2\varepsilon)/n$ with $\delta \le \varepsilon$, then there does not exist $r > n$ with
\[ p_k \ge \frac1k, \quad k = n, \ldots, r-1, \qquad p_r \ge \frac{1+3\varepsilon}{r}. \]
Indeed, suppose these inequalities hold for some $n, r$. Then,
\[ \log(p_r/p_n) \ge \log\frac{1+3\varepsilon}{1+2\varepsilon} - \log(r/n), \qquad \sum_{j=n+1}^r p_j \ge \log(r/n) - O(n^{-1}), \]
and hence for $n$ sufficiently large,
\[ a_r - a_n \ge \frac12 \log\frac{1+3\varepsilon}{1+2\varepsilon} \ge \frac12 \log\frac{1+3\delta}{1+2\delta}. \]

Since $a_n$ is a Cauchy sequence, this cannot be true for large $n$.

We next claim that for every $\delta > 0$, there exists $N_\delta > 0$ such that if $n \ge N_\delta$ and $p_n = (1+2\varepsilon)/n$ with $\delta \le \varepsilon$, then there exists $r$ such that
\[ \frac{1+\varepsilon}{k} \le p_k < \frac{1+3\varepsilon}{k}, \quad k = n, \ldots, r-1, \qquad p_r < \frac{1+\varepsilon}{r}. \tag{26} \]
To see this, we consider the first $r > n$ such that $p_r \le (1+\varepsilon)/r$. By the previous claim, if such an $r$ exists, then (26) holds for $n$ large enough. If no such $r$ exists, then by the argument above, for all $r > n$,
\[ a_r - a_n \ge \log\frac{1+\varepsilon}{1+2\varepsilon} + \frac{\varepsilon}{2} \log(r/n) - (1+\varepsilon)\, O(n^{-1}). \]
Since the right-hand side goes to infinity as $r \to \infty$, this contradicts the fact that $a_n$ is a Cauchy sequence.

By iterating the last assertion, we can see that for every δ > 0, thereexists Nδ > 0 such that if n ≥ Nδ and pn = (1 + 2ε)/n with δ ≤ ε, then thereexists r > n such that

pr <1 + 2δ

r,

pk ≤1 + 3ε

k, k = n, . . . , r − 1.

Let s be the first index greater than r (if it exists) such that either

pk ≤1

kor pk ≥

1 + 2δ

k.

Using pn+1/pn → 1, we can see, perhaps by choosing a larger Nδ if necessary,that

1− δk≤ ps ≤

1 + 4δ

k.

37

If ps ≥ (1 + 2δ)/k, then we can iterate this argument with ε ≤ 2δ to see that

lim supn→∞

n pn ≤ 1 + 6δ.

The lim inf can be done similarly.
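Lemma 2.20 can be sanity-checked numerically. The sketch below is not from the paper; the test sequence $p_n = 1/(\beta(n+7))$ with $\beta = 3$ is an arbitrary choice satisfying the hypotheses ($p_{n+1}/p_n \to 1$ and $a_n = \log p_n + \beta \sum_{j \le n} p_j$ convergent), and one checks that $a_n$ stabilizes while $n\, p_n$ approaches $1/\beta$.

```python
import math

beta = 3.0

def p(n):
    # A test sequence satisfying the hypotheses of Lemma 2.20:
    # p_{n+1}/p_n -> 1 and log p_n + beta * sum_{j<=n} p_j converges.
    return 1.0 / (beta * (n + 7))

def a(n):
    # a_n = log p_n + beta * sum_{j=1}^n p_j  (the "Cauchy" quantity in the proof)
    return math.log(p(n)) + beta * sum(p(j) for j in range(1, n + 1))

# a_n stabilizes, while n * p_n -> 1/beta, as the lemma asserts.
print(a(10**4), a(10**6))
print(10**6 * p(10**6), 1 / beta)
```

The near-equality of `a(10**4)` and `a(10**6)` reflects the Cauchy property that drives the proof above.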

2.7.2 Exact relation

We will prove the following. Let $S, W$ be simple random walks with corresponding stopping times $\sigma_n$, and let $G_n = G_{C_n}$. We will assume that $S_0 = w$, $W_0 = 0$. Let $\eta = \mathrm{LE}(S[0, \sigma_n])$. Let
\[ G2_n(w) = \sum_{z \in \mathbb Z^4} G(0,z)\, G_n(w,z) = \sum_{z \in C_n} G(0,z)\, G_n(w,z). \]
Note that we are stopping the random walk $S$ at time $\sigma_n$ but we are allowing the random walk $W$ to run for infinite time. If $w \in \partial C_{n-1}$, then
\[ G2_n(w) = \frac{8}{\pi^2} + O(e^{-n}). \tag{27} \]

This is a special case of the next lemma.

Lemma 2.21. If $w \in C_n$, then
\[ G2_n(w) = \frac{8}{\pi^2}\,[n - \log|w|] + O(e^{-n}) + O(|w|^{-2} \log|w|). \]

Proof. Let $f(x) = \frac{8}{\pi^2} \log|x|$ and note that
\[ \Delta f(x) = \frac{2}{\pi^2 |x|^2} + O(|x|^{-4}) = G(x) + O(|x|^{-4}), \]
where $\Delta$ denotes the discrete Laplacian. Also, we know that for any function $f$,
\[ f(w) = \mathbf{E}^w[f(S_{\sigma_n})] - \sum_{z \in C_n} G_n(w,z)\, \Delta f(z). \]
Since $e^n \le |S_{\sigma_n}| \le e^n + 1$,
\[ \mathbf{E}^w[f(S_{\sigma_n})] = \frac{8n}{\pi^2} + O(e^{-n}). \]
Therefore,
\[ \sum_{z \in C_n} G_n(w,z)\, G(z) = \frac{8}{\pi^2}\,[n - \log|w|] + O(e^{-n}) + \varepsilon, \]
where
\[ |\varepsilon| \le \sum_{z \in C_n} G_n(w,z)\, O(|z|^{-4}) \le \sum_{z \in C_n} O(|w-z|^{-2})\, O(|z|^{-4}). \]
We split the sum into three pieces:
\[ \sum_{|z| \le |w|/2} O(|w-z|^{-2})\, O(|z|^{-4}) \le c\, |w|^{-2} \sum_{|z| \le |w|/2} O(|z|^{-4}) \le c\, |w|^{-2} \log|w|, \]
\[ \sum_{|z-w| \le |w|/2} O(|w-z|^{-2})\, O(|z|^{-4}) \le c\, |w|^{-4} \sum_{|x| \le |w|/2} O(|x|^{-2}) \le c\, |w|^{-2}. \]
If we let $C'_n$ be the set of $z \in C_n$ with $|z| > |w|/2$ and $|z-w| > |w|/2$, then
\[ \sum_{z \in C'_n} O(|w-z|^{-2})\, O(|z|^{-4}) \le \sum_{|z| > |w|/2} O(|z|^{-6}) \le c\, |w|^{-2}. \]
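The expansion $\Delta f(x) = \frac{2}{\pi^2|x|^2} + O(|x|^{-4})$ for $f(x) = \frac{8}{\pi^2}\log|x|$ used in the proof above can be checked numerically. The following sketch is not from the paper; it evaluates the discrete Laplacian on $\mathbb Z^4$ in the averaged convention $\Delta g(x) = \frac18 \sum_{|e|=1} [g(x+e) - g(x)]$ at a test point and compares against $2/(\pi^2 |x|^2)$.

```python
import math

def f(x):
    # f(x) = (8 / pi^2) * log|x|, using log|x| = (1/2) * log(|x|^2)
    return (8 / math.pi**2) * 0.5 * math.log(sum(c * c for c in x))

def discrete_laplacian(g, x):
    # Delta g(x) = (1/8) * sum over the 8 nearest neighbors of [g(y) - g(x)]
    total = 0.0
    for i in range(4):
        for s in (1, -1):
            y = list(x)
            y[i] += s
            total += g(tuple(y)) - g(x)
    return total / 8.0

x = (10, 0, 0, 0)
lap = discrete_laplacian(f, x)
target = 2 / (math.pi**2 * 100)   # 2 / (pi^2 |x|^2)
print(lap, target)
```

At $|x| = 10$ the two values agree to about one percent, consistent with the relative error being $O(|x|^{-2})$ as the expansion claims.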

Proposition 2.22. There exists $\alpha < \infty$ such that if $n^{-1} \le e^{-n}|w| \le 1 - n^{-1}$, then
\[ \Big|\log \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\} - \log\big[G2_n(w)\, p_n\big]\Big| \le c\, \frac{\log^\alpha n}{n}. \]
In particular,
\[ \mathbf{E}[H(\eta_n)] = \frac{8\, p_n}{\pi^2}\Big[1 + O\Big(\frac{\log^\alpha n}{n}\Big)\Big]. \]

The second assertion follows immediately from the first and (27). We can write the conclusion of the proposition as
\[ \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\} = G2_n(w)\, p_n\, [1 + \theta_n], \]
where $\theta_n$ throughout this subsection denotes an error term that decays at least as fast as $\log^\alpha n/n$ for some $\alpha$ (with the implicit uniformity of the estimate over all $n^{-1} \le e^{-n}|w| \le 1 - n^{-1}$).

We start by giving the sketch of the proof, which is fairly straightforward. On the event $\{W[0,\infty) \cap \eta \ne \emptyset\}$ there are typically many points in $W[0,\infty) \cap \eta$. We focus on a particular one. This is analogous to the situation when one is studying the probability that a random walk visits a set. In the latter case, one usually focuses on the first or last visit. In our case with two paths, the notions of “first” and “last” are a little ambiguous, so we have to take some care. We will consider the first point on $\eta$ that is visited by $W$ and then focus on the last visit by $W$ to this first point on $\eta$.

To be precise, we write $\eta = [\eta_0, \ldots, \eta_m]$ and let
\[ i = \min\{t : \eta_t \in W[0,\infty)\}, \qquad \rho = \max\{t \ge 0 : S_t = \eta_i\}, \qquad \lambda = \max\{t : W_t = \eta_i\}. \]
Then the event $\{\rho = j;\, \lambda = k;\, S_\rho = W_\lambda = z\}$ is the event that:
\[ \mathrm{I}: \quad S_j = z, \quad W_k = z, \]
\[ \mathrm{II}: \quad \mathrm{LE}(S[0,j]) \cap \big(S[j+1, \sigma_n] \cup W[0,k] \cup W[k+1,\infty)\big) = \{z\}, \]
\[ \mathrm{III}: \quad z \notin S[j+1, \sigma_n] \cup W[k+1,\infty). \]
Using the slowly recurrent nature of the random walk paths, we expect that as long as $z$ is not too close to $0$, $w$, or $\partial C_n$, then I is almost independent of (II and III) and
\[ \mathbf{P}\{\text{II and III}\} \sim p_n. \]
This then gives
\[ \mathbf{P}\{\rho = j;\, \lambda = k;\, S_\rho = W_\lambda = z\} \sim \mathbf{P}\{S_j = W_k = z\}\, p_n, \]
and summing over $j, k, z$ gives
\[ \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\} \sim p_n\, G2_n(w). \]
The rest of this section is devoted to making this precise.


Let $V$ be the event
\[ V = \{w \notin S[1, \sigma_n],\; 0 \notin W[1,\infty)\}. \]
A simple argument, which we omit, shows that there is a fast decaying sequence $\varepsilon_n$ such that
\[ |\mathbf{P}(V) - G(0,0)^{-2}| \le \varepsilon_n, \]
\[ \big|\mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset \mid V\} - \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\}\big| \le \varepsilon_n. \]
Hence it will suffice to show that
\[ \mathbf{P}\big[V \cap \{W[0,\infty) \cap \eta \ne \emptyset\}\big] = \frac{G2_n(w)}{G(0,0)^2}\, p_n\, [1 + \theta_n]. \tag{28} \]
Let $E(j,k,z)$, $E_z$ be the events
\[ E(j,k,z) = V \cap \{\rho = j;\, \lambda = k;\, S_\rho = W_\lambda = z\}, \qquad E_z = \bigcup_{j,k=0}^\infty E(j,k,z). \]
Then,
\[ \mathbf{P}\big[V \cap \{W[0,\infty) \cap \eta \ne \emptyset\}\big] = \sum_{z \in C_n} \mathbf{P}(E_z). \]
Let
\[ C'_n = C'_{n,w} = \{z \in C_n : |z| \ge n^{-4} e^n,\; |z-w| \ge n^{-4} e^n,\; |z| \le (1 - n^{-4}) e^n\}. \]
We can use the easy estimate
\[ \mathbf{P}(E_z) \le G_n(w,z)\, G(0,z) \]
to see that
\[ \sum_{z \in C_n \setminus C'_n} \mathbf{P}(E_z) \le O(n^{-6}), \]

so it suffices to estimate $\mathbf{P}(E_z)$ for $z \in C'_n$.

We will translate so that $z$ is the origin and will reverse the paths $W[0,\lambda]$, $S[0,\rho]$. Using the fact that reverse loop-erasing has the same distribution as loop-erasing, we see that $\mathbf{P}[E(j,k,z)]$ can be given as the probability of the following event, where $\omega^1, \ldots, \omega^4$ are independent simple random walks starting at the origin and $l^i$ denotes the smallest index $l$ such that $|\omega^i_l - y| \ge e^n$:
\[ \big(\omega^3[1, l^3] \cup \omega^4[1,\infty)\big) \cap \mathrm{LE}(\omega^1[0,j]) = \emptyset, \qquad \omega^2[0,k] \cap \mathrm{LE}(\omega^1[0,j]) = \{0\}, \]
\[ j < l^1, \quad \omega^1(j) = x, \quad x \notin \omega^1[0, j-1], \qquad \omega^2(k) = y, \quad y \notin \omega^2[0, k-1], \]
where $x = w - z$, $y = -z$. Note that
\[ n^{-1} e^n \le |y|,\, |x-y|,\, |x| \le e^n\,[1 - n^{-1}]. \]

We now rewrite this. We fix $x, y$ and let $C^y_n = y + C_n$. Let $W^1, W^2, W^3, W^4$ be independent random walks starting at the origin and let $T^i = T^i_n = \min\{j : W^i_j \notin C^y_n\}$,
\[ \tau^1 = \min\{j : W^1_j = x\}, \qquad \tau^2 = \min\{k : W^2_k = y\}. \]
Let $\Gamma = \Gamma_n = \mathrm{LE}(W^1[0, \tau^1])$. Note that
\[ \mathbf{P}\{\tau^1 < T^1\} = \frac{G_n(0,x)}{G_n(x,x)} = \frac{G_n(0,x)}{G(0,0)} + o(e^{-n}), \qquad \mathbf{P}\{\tau^2 < \infty\} = \frac{G(0,y)}{G(y,y)}. \]
Let $p'_n(x,y)$ be the probability of the event
\[ \Gamma \cap \big(W^2[1, \tau^2] \cup W^3[1, T^3]\big) = \emptyset, \qquad \Gamma \cap W^4[0, T^4] = \{0\}, \]
where $W^1, W^2, W^3, W^4$ are independent starting at the origin; $W^3, W^4$ are usual random walks; $W^1$ has the distribution of random walk conditioned that $\tau^1 < T^1$, stopped at $\tau^1$; and $W^2$ has the distribution of random walk conditioned that $\tau^2 < \infty$, stopped at $\tau^2$. Then in order to prove (28), it suffices to prove that
\[ p'_n(x,y) = p_n\, [1 + \theta_n]. \]

We write $Q$ for the conditional distribution. Then for any path $\omega = [0, \omega_1, \ldots, \omega_k]$ with $0, \omega_1, \ldots, \omega_{k-1} \in C^y_n \setminus \{x\}$,
\[ Q\big\{[W^1_0, \ldots, W^1_k] = \omega\big\} = \mathbf{P}\big\{[S_0, \ldots, S_k] = \omega\big\}\, \frac{\phi_x(\omega_k)}{\phi_x(0)}, \tag{29} \]
where
\[ \phi_x(\zeta) = \phi_{x,n}(\zeta) = \mathbf{P}^\zeta\{\tau^1 < T^1_n\}. \]
Similarly, if $\phi_y(\zeta) = \mathbf{P}^\zeta\{\tau^2 < \infty\}$ and $\omega = [0, \omega_1, \ldots, \omega_k]$ is a path with $y \notin \{0, \omega_1, \ldots, \omega_{k-1}\}$,
\[ Q\big\{[W^2_0, \ldots, W^2_k] = \omega\big\} = \mathbf{P}\big\{[S_0, \ldots, S_k] = \omega\big\}\, \frac{\phi_y(\omega_k)}{\phi_y(0)}. \]
Using (13), we can see that if $\zeta \in \{x, y\}$ and if $|z| \le e^n e^{-\log^2 n}$,
\[ \phi_\zeta(z) = \phi_\zeta(0)\, [1 + O(\varepsilon_n)], \]

where $\varepsilon_n$ is fast decaying.

Let $\sigma_k$ be defined for $W^1$ as before,
\[ \sigma_k = \min\{j : |W^1_j| \ge e^k\}. \]
Using this we see that we can couple $W^1, S$ on the same probability space such that, except perhaps on an event of probability $O(\varepsilon_n)$,
\[ \Gamma \cap C_{n - \log^3 n} = \mathrm{LE}(S[0,\infty)) \cap C_{n - \log^3 n}. \]
Using this we see that
\[ q_n \le p_{n - \log^3 n} + O(\varepsilon_n) \le p_n\, [1 + \theta_n]. \]
Hence,
\[ \Gamma \cap C_{n - \log^3 n} \subset \Gamma \subset \Theta_n \cup (\Gamma \cap C_{n - \log^3 n}), \]
where
\[ \Theta_n = W^1[0, \tau^1] \cap A(n - \log^3 n, 2n). \]

We consider two events. Let $\mathcal W = W^2[1, \tau^2] \cup W^3[1, T^3] \cup W^4[0, T^4]$ and let $E_0, E_1, E_2$ be the events
\[ E_0 = \{0 \notin W^2[1, \tau^2] \cup W^3[1, T^3]\}, \]
\[ E_1 = E_0 \cap \big\{\mathcal W \cap \Gamma \cap C_{n - \log^3 n} = \{0\}\big\}, \]
\[ E_2 = E_1 \cap \{\mathcal W \cap \Theta_n = \emptyset\}. \]


Then we show that
\[ Q(E_1) = p_n\, [1 + \theta_n], \qquad Q(E_1 \setminus E_2) \le n^{-1} \theta_n, \]
in order to show that
\[ Q(E_2) = p_n\, [1 + \theta_n]. \]

Find a fast decaying sequence $\varepsilon_n$ as in Proposition 2.8 such that
\[ \mathbf{P}\Big\{H\big(S[0,\infty) \cap A(m-1,m)\big) \ge \frac{\log^2 m}{m}\Big\} \le \varepsilon_m, \qquad \forall\, m \ge \sqrt n. \]
Let $\delta_n = \varepsilon_n^{1/10}$, which is also fast decaying, and let $\rho = \rho_n = \min\{j : |W^1_j - x| \le e^n \delta_n\}$. Using the Markov property we see that
\[ \mathbf{P}\big\{W^1[\rho, \tau^1] \not\subset \{|z - x| \le e^n \sqrt{\delta_n}\}\big\} = O(\delta_n), \]
and since $|x| \ge e^n n^{-1}$, we know that
\[ H\big[\{|z - x| \le e^n \sqrt{\delta_n}\}\big] \le n^2\, \delta_n. \]
Hence, we see that there is a fast decaying sequence $\delta_n$ such that
\[ \mathbf{P}\Big\{H\big(W^1[0, \tau^1] \cap A(m-1,m)\big) \ge \frac{\log^2 m}{m}\Big\} \le \delta_m, \qquad \forall\, m \ge \sqrt n. \]

In particular, using (29), we see that
\[ Q\Big\{H\big(W^1[0, \tau^1] \cap A(n - \log^4 n, n+1)\big) \ge \frac{\log^7 n}{n}\Big\} \]
is fast decaying. Then using Proposition 2.11 we see that there exists a fast decaying sequence $\varepsilon_n$ such that, except for an event of $Q$-probability at most $\varepsilon_n$, either $\mathrm{Es}[\Gamma] \le \varepsilon_n$ or
\[ \mathrm{Es}[\Gamma] = \mathrm{Es}[\Gamma \cap C_{n - \log^3 n}]\, [1 + \theta_n]. \]
This suffices for $W^3$ and $W^4$, but we need a similar result for $W^2$; that is, if
\[ \mathrm{Es}_y[\Gamma] = Q\big\{W^2[1, \tau^2] \cap \Gamma = \emptyset \mid \Gamma\big\}, \]
then we need that, except for an event of $Q$-probability at most $\varepsilon_n$, either $\mathrm{Es}[\Gamma] \le \varepsilon_n$ or
\[ \mathrm{Es}_y[\Gamma] = \mathrm{Es}[\Gamma \cap C_{n - \log^3 n}]\, [1 + \theta_n]. \]
Since the measure $Q$ is almost the same as $\mathbf{P}$ for the paths near zero, we can use Proposition 2.11 to reduce this to showing that
\[ Q\big\{W^2[0, \tau^2] \cap (\Gamma \setminus C_{n - \log^3 n}) \ne \emptyset \mid \Gamma\big\} \le \theta_n. \]
This can be done with a similar argument, using the fact that the probability that random walks starting at $y$ and $x$ ever intersect is less than $\theta_n$.

Corollary 2.23. If $n^{-1} \le e^{-n}|w| \le 1 - n^{-1}$, then
\[ \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\} \sim \frac{\pi^2\, G2_n(w)}{24\, n}. \]
More precisely,
\[ \lim_{n\to\infty}\; \max_{n^{-1} \le e^{-n}|w| \le 1 - n^{-1}} \Big| 24\, n\, \mathbf{P}\{W[0,\infty) \cap \eta \ne \emptyset\} - \pi^2\, G2_n(w) \Big| = 0. \]

By a very similar proof one can show the following. Let
\[ \widehat{G2}_n(w) = \sum_{z \in \mathbb Z^4} G_n(0,z)\, G_n(w,z) = \sum_{z \in C_n} G_n(0,z)\, G_n(w,z), \]
and
\[ \sigma^W_n = \min\{j : W_j \notin C_n\}. \]
We note that
\[ G2_n(w) - \widehat{G2}_n(w) = \sum_{z \in C_n} [G(0,z) - G_n(0,z)]\, G_n(w,z) = O(1). \]

The following can be proved in the same way.

Proposition 2.24. There exists $\alpha < \infty$ such that if $n^{-1} \le e^{-n}|w| \le 1 - n^{-1}$, then
\[ \big|\log \mathbf{P}\{W[0, \sigma^W_n] \cap \eta \ne \emptyset\} - \log\big[\widehat{G2}_n(w)\, p_n\big]\big| \le c\, \frac{\log^\alpha n}{n}. \]

We note that if $|w| = n^{-r} e^n$, then
\[ G2_n(w) \sim \widehat{G2}_n(w) \sim \frac{8\, r \log n}{\pi^2}. \]
This gives Lemma 1.6 in the case that $x$ or $y$ equals zero. The general case can be done similarly.


References

[1] Itai Benjamini, Harry Kesten, Yuval Peres, and Oded Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, . . . . Ann. of Math. (2), 160(2):465–491, 2004.

[2] Itai Benjamini, Russell Lyons, Yuval Peres, and Oded Schramm. Uniform spanning forests. Ann. Probab., 29(1):1–65, 2001.

[3] Alexei Borodin and Vadim Gorin. General beta Jacobi corners process and the Gaussian free field. arXiv preprint arXiv:1305.3627, 2013.

[4] Maury Bramson and Rick Durrett, editors. Perplexing problems in probability, volume 44 of Progress in Probability. Birkhäuser Boston, Inc., Boston, MA, 1999. Festschrift in honor of Harry Kesten.

[5] Bertrand Duplantier, Rémi Rhodes, Scott Sheffield, and Vincent Vargas. Log-correlated Gaussian fields: an overview. arXiv preprint arXiv:1407.5605, 2014.

[6] Patrik L. Ferrari and Alexei Borodin. Anisotropic growth of random surfaces in 2+1 dimensions. arXiv preprint arXiv:0804.3035, 2008.

[7] Geoffrey Grimmett. The random-cluster model. Springer, 2004.

[8] Alan Hammond and Scott Sheffield. Power law Pólya's urn and fractional Brownian motion. Probability Theory and Related Fields, pages 1–29, 2009.

[9] Richard Kenyon. Dominos and the Gaussian free field. Annals of Probability, pages 1128–1137, 2001.

[10] Noemi Kurt. Entropic repulsion for a class of Gaussian interface models in high dimensions. Stochastic Processes and their Applications, 117(1):23–34, 2007.

[11] Noemi Kurt. Maximum and entropic repulsion for a Gaussian membrane model in the critical dimension. The Annals of Probability, 37(2):687–725, 2009.

[12] Gregory F. Lawler. Intersections of random walks in four dimensions II. Comm. Math. Phys., 97:583–594, 1985.

[13] Gregory F. Lawler. Escape probabilities for slowly recurrent sets. Probability Theory and Related Fields, 94:91–117, 1992.

[14] Gregory F. Lawler. The logarithmic correction for loop-erased walk in four dimensions. In Proceedings of the Conference in Honor of Jean-Pierre Kahane (Orsay, 1993), number Special Issue, pages 347–361, 1995.

[15] Gregory F. Lawler. Intersections of random walks. Modern Birkhäuser Classics. Birkhäuser/Springer, New York, 2013. Reprint of the 1996 edition.

[16] Gregory F. Lawler and Vlada Limic. Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010.

[17] Asad Lodhia, Scott Sheffield, Xin Sun, and Samuel S. Watson. Fractional Gaussian fields: a survey. arXiv preprint arXiv:1407.5598, 2014.

[18] Robin Pemantle. Choosing a spanning tree for the integer lattice uniformly. The Annals of Probability, pages 1559–1574, 1991.

[19] Brian Rider and Bálint Virág. The noise in the circular law and the Gaussian free field. arXiv preprint math/0606663, 2006.

[20] Hironobu Sakagawa. Entropic repulsion for a Gaussian lattice field with certain finite range interaction. Journal of Mathematical Physics, 44:2939, 2003.

[21] Scott Sheffield. Gaussian free fields for mathematicians. Probability Theory and Related Fields, 139(3-4):521–541, 2007.

[22] Xin Sun and Wei Wu. Uniform spanning forests and bi-Laplacian Gaussian field. arXiv preprint arXiv:1312.0059, 2013.

[23] David Bruce Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 296–303. ACM, 1996.