The Uniform Spanning Tree and Related Models


  • 8/10/2019 The Uniform Spanning Tree and Related Models

    1/65

The Uniform Spanning Tree and related models

    Antal A. Jarai

    December 18, 2009

    1 Introduction

The following type of question is characteristic of discrete probability: given a large finite set of objects, defined by some combinatorial property, what does a typical element look like, that is, can we describe some of its statistical features? Simple as it is to state such questions, they can lead to surprisingly deep and difficult mathematical problems.

In this course we are going to look at a very special example: random spanning trees of graphs (see precise definitions later in this introduction). Spanning trees have been well known in combinatorics for a long time; see [6]. However, the study of random spanning trees is relatively recent. The topic proves to be very interesting from several points of view. There is a surprising connection between spanning trees, random walk and electric circuits. The connection with random walk yields efficient algorithms to generate a spanning tree of a graph uniformly at random. We will see two such beautiful algorithms. Uniform spanning trees are also interesting from the point of view of statistical physics, as they are a special case of so-called random cluster measures; see Exercise 1.1. The notion of amenability, which comes from group theory, will enter some of our discussions. Another interesting aspect of random spanning trees is that they are an example of a determinantal process (see its definition later in this course).

    We now introduce some basic notions.

1.1 Spanning Trees

A graph is called a tree if it is connected and contains no cycles. A graph that contains no cycles, but is not necessarily connected, is called a forest (its connected components are trees). Let G = (V, E) be a connected graph.


    Figure 1: A subgraph of the square grid and one of its spanning trees

A spanning subgraph of G is one that contains every vertex of V (some of these may be isolated vertices not incident on any edge of the subgraph). A spanning tree of G is a spanning subgraph of G that is a tree. Similarly we define a spanning forest. See Figure 1. We can identify a spanning subgraph of G with an element of the space $\Omega = \{0,1\}^E$, by writing a 1 if an edge is present, and 0 if it is not. That is, $\omega = (\omega_e)_{e \in E}$ represents the spanning subgraph which contains precisely those edges $e \in E$ for which $\omega_e = 1$. Assume now that G is a finite graph. By the Uniform Spanning Tree on G, we mean the probability distribution UST on $\Omega$ that assigns equal mass to each spanning tree of G, and no mass to subgraphs that are not spanning trees:

$$\mathrm{UST}[\omega] = \begin{cases} \dfrac{1}{N(G)} & \text{if } \omega \text{ is a spanning tree of } G; \\ 0 & \text{otherwise,} \end{cases}$$

where N(G) is the number of spanning trees of G. By drawing an example or looking at Figure 1, you can convince yourself that a general connected graph usually has many spanning trees, so this definition is meaningful.

Having introduced the space $\Omega$, we can view the Uniform Spanning Tree as a 0-1 valued random process $(\omega_e)$ indexed by the edges of G. It should be clear that this is very far from an i.i.d. process, since cycles are forbidden, and the subgraph is constrained to be connected. We will write $T = \{e \in E : \omega_e = 1\}$. Some questions we will consider: what is the probability that a given edge $e \in E$ belongs to T? More generally, given $K \subset E$, what is $\mathrm{UST}[K \subset T]$? A different type of question that proves to be very interesting is as follows. Suppose now that G is the d-dimensional integer lattice: $G = (\mathbb{Z}^d, \mathbb{E}^d)$, $d \ge 2$, with $x, y \in \mathbb{Z}^d$ connected by an edge if $|x - y| = 1$, that is, $\mathbb{E}^d = \{\{x, y\} : x, y \in \mathbb{Z}^d,\ |x - y| = 1\}$. This G has infinitely many spanning trees, hence it is not clear what should be meant by picking one uniformly at random. It turns out, as will be discussed later in the course, that there is a very natural way this can be done.

Consider a sequence $V_1 \subset V_2 \subset \cdots \subset \mathbb{Z}^d$ of finite connected subsets of $\mathbb{Z}^d$, such that $\bigcup_n V_n = \mathbb{Z}^d$. Let $G_n = (V_n, E_n)$ be the subgraph induced by $V_n$, that is, the graph with vertex set $V_n$ and edge set the set of all edges of G that connect vertices in $V_n$ (Figure 1 shows an example $V_n$ when d = 2). On each $G_n$, we can construct the measure $\mathrm{UST}_n$. Do these approximate, in a suitable sense, a measure on $\Omega = \{0,1\}^{\mathbb{E}^d}$? Making this precise: is it true that for a fixed $e \in \mathbb{E}^d$, the probabilities $\mathrm{UST}_n[e \in T]$ converge to some limit (noting that the probability is indeed well defined as soon as $e \in E_n$)? More generally, for $K \subset \mathbb{E}^d$ finite, does $\mathrm{UST}_n[K \subset T]$ converge to some limit? The reader is invited to give some thought to what it would involve to prove such convergence, before moving on. Once we prove that the limit exists, we will examine in detail what type of graph the limiting T is.
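On a small finite graph, quantities like $\mathrm{UST}[e \in T]$ can be computed by direct enumeration, which is a useful sanity check for everything that follows. A brute-force sketch in Python (the 4-cycle-plus-diagonal example graph is our own choice, not from the notes):

```python
from fractions import Fraction
from itertools import combinations

# Toy example (ours, not from the notes): the 4-cycle 0-1-2-3 plus the diagonal {0,2}.
V = range(4)
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def is_spanning_tree(edges):
    """|V|-1 edges and no cycle <=> spanning tree (checked via union-find)."""
    parent = list(V)
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False  # this edge would close a cycle
        parent[ru] = rw
    return True

trees = [t for t in combinations(E, len(V) - 1) if is_spanning_tree(t)]
# UST[e in T] = (# spanning trees containing e) / N(G)
p = Fraction(sum((0, 1) in t for t in trees), len(trees))
print(len(trees), p)   # 8 spanning trees; UST[{0,1} in T] = 5/8
```

Note how the edge probabilities already fail independence: the five edges cannot all have probability close to each other, since every spanning tree uses exactly |V| − 1 of them.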

Exercise 1.1 (Random Cluster Model). Let G = (V, E) be a finite graph, and consider the following probability measure on $\Omega = \{0,1\}^E$:

$$\mu_{p,q}(\omega) = \frac{1}{Z_{p,q}}\, p^{n(\omega)} (1-p)^{|E| - n(\omega)} q^{K(\omega)}, \qquad \omega \in \Omega,$$

where $0 < p < 1$ and $q > 0$ are parameters, $n(\omega) = \sum_{e \in E} \omega_e$, $K(\omega)$ is the number of connected components of the graph $\omega$, and $Z_{p,q}$ is a constant normalizing $\mu_{p,q}$ to be a probability distribution. This is called the Random Cluster Measure on G.
(a) Check that when q = 1, the edges are i.i.d. This is called the Percolation Model on G with edge density p.
(b) Prove that as $p \to 0$ and $q/p \to 0$, $\mu_{p,q}[\omega] \to \mathrm{UST}[\omega]$.


    2 The Aldous-Broder algorithm

A large graph usually has many spanning trees; often exponentially many in the number of edges of the graph. (The famous Matrix-Tree Theorem in combinatorics [6, Corollary II.13] gives the exact number of them.) Hence for a general graph, it is not obvious how to simulate a uniformly random spanning tree in reasonable time, that is, polynomial in the number of edges. The algorithm below, which achieves this, was discovered independently by Aldous [1] and Broder [7].
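The count N(G) from the Matrix-Tree Theorem is easy to compute in practice: take any cofactor of the graph Laplacian. A sketch in Python (exact rational arithmetic; the helper name is ours):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """N(G) via the Matrix-Tree Theorem: any cofactor of the graph Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, w in edges:
        L[u][u] += 1
        L[w][w] += 1
        L[u][w] -= 1
        L[w][u] -= 1
    # determinant of the Laplacian with the last row and column deleted
    M = [row[:n - 1] for row in L[:n - 1]]
    det = Fraction(1)
    for c in range(n - 1):
        piv = next((r for r in range(c, n - 1) if M[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return det

# K4 has 4^{4-2} = 16 spanning trees (Cayley's formula)
assert spanning_tree_count(4, [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]) == 16
```

Counting is thus polynomial; the point of the algorithms below is that *sampling* uniformly can also be done efficiently, without listing the trees.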

Let G = (V, E) be a connected finite graph. We will write deg(u) for the number of edges incident on u. Let $(X_n)_{n \ge 0}$ be the simple random walk on G, that is, the Markov chain with state space V and transition probabilities

$$p_{uv} = \begin{cases} \dfrac{1}{\deg(u)} & \text{if } \{u, v\} \in E; \\ 0 & \text{otherwise.} \end{cases}$$

Since G is connected, $(X_n)$ is irreducible. Let $T_v$ be the first hitting time of v:

$$T_v := \inf\{n \ge 0 : X_n = v\}.$$

Let C be the cover time of the random walk:

$$C := \max_v T_v = \inf\{n \ge 0 : \{X_0, X_1, \ldots, X_n\} = V\}.$$

By irreducibility, C is finite with probability 1.

Algorithm 2.1 (Aldous-Broder Algorithm). Choose $X_0$ arbitrarily. Run simple random walk on G up to the cover time C. Consider the set of edges:

$$T := \{\{X_{T_w - 1}, X_{T_w}\} : w \ne X_0\}.$$

That is, every time you visit a vertex that you have not seen before, record which edge was used to enter that vertex. Then T is a spanning tree, and it is uniformly distributed over all spanning trees of G.
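Algorithm 2.1 translates directly into code. A minimal sketch (the 6-cycle test graph and the helper name are ours):

```python
import random

def aldous_broder(adj, rng):
    """Run simple random walk until every vertex is visited (the cover time),
    recording, for each vertex, the edge used to enter it for the first time."""
    v = next(iter(adj))            # X0, chosen arbitrarily
    visited = {v}
    tree = set()
    while len(visited) < len(adj):
        w = rng.choice(adj[v])     # one step of simple random walk
        if w not in visited:
            visited.add(w)
            tree.add(frozenset((v, w)))   # the entry edge of w
        v = w
    return tree

rng = random.Random(0)
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # the 6-cycle
t = aldous_broder(cycle, rng)
assert len(t) == 5   # 6 vertices, 5 edges: always a spanning tree
```

That the output is always a spanning tree is immediate from the code; the uniformity is the non-trivial claim proved below.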

Let us note that it is easy to see that T is a spanning tree: we always draw edges to vertices that have not been visited before, so no cycles are formed during the process. By induction, T is connected, since every edge has an endvertex that has been visited already.

The starting point for the proof that the algorithm works is a beautiful and surprising theorem that shows a connection between spanning trees and Markov chains.


    2.1 The Markov Chain-Tree Theorem

Let us put aside simple random walk for a moment, and let $(X_n)_{n \ge 0}$ be an arbitrary irreducible finite state Markov chain on a state space V, with transition matrix $P = (p_{xy})$. We associate to it a directed graph G = (V, E), where $[x, y] \in E$ if $p_{xy} > 0$. That is, we draw an edge from x to y if it is possible to move from x to y in one step. By a rooted tree (t, r), we mean a tree t with a distinguished vertex r, called the root. From every vertex of the tree, there is a unique path to the root that involves no backtracking. Thus for every vertex v of a rooted tree, there is a unique edge {v, w} of the tree that leads one step closer to the root, and we orient this edge towards the root, that is, as [v, w]. It is easy to see that this way each edge of the rooted tree gets an orientation.

Consider a rooted spanning tree (t, r) of G. We assign to this the weight

$$q(t, r) = \prod_{[v,w] \in E(t,r)} p_{vw}.$$

That is, the weight of a rooted spanning tree is the product of the transition probabilities along the edges of the tree. Let

$$p(v) := \sum_{(t,v)} q(t, v), \qquad v \in V,$$

where the sum is over all rooted spanning trees of G with root v.

Theorem 2.1 (Markov Chain-Tree Theorem). The stationary distribution of X is proportional to p(v).

Proof. The key to the proof is that one can build a Markov chain on rooted trees on top of $(X_n)$. Consider the following operation. Given a rooted spanning tree (t, v) of G, and an edge $[v, v'] \in E$, we define a new rooted spanning tree $(t', v')$. We first add the edge $[v, v']$ to t. Since t already contains a directed path from $v'$ to v, this together with the edge $[v, v']$ forms a directed cycle. Note that due to the orientation of the edges in t, all other edges are oriented towards this cycle. Hence if we now remove the outgoing edge from $v'$, say $[v', w]$, then we obtain a rooted spanning tree $t'$ rooted at the new vertex $v'$. See Figure 2. Note that it may happen that $w = v$, in which case the cycle consists of the edges $[v', v]$, $[v, v']$. It is also no problem if $p_{vv} > 0$, and hence the loop-edge [v, v] is present: in this case we add [v, v] and then remove it. Let us note here what the backwards operation is. If $(t', v')$ and w are specified, then adding the edge $[v', w]$ we get back the directed cycle, and then v is uniquely determined as the vertex preceding

$v'$ in this cycle, and hence also t is uniquely determined. Also note that for fixed $(t', v')$, to any w with $[v', w] \in E$ there corresponds a precursor tree (t, v): again, adding $[v', w]$ creates a unique directed cycle, etc. When $(t', v')$ is fixed, let us write $(t, v) = \mathrm{prec}((t', v'), w)$ to indicate the relationship between (t, v), w and $(t', v')$.

Figure 2: The operation that changes a spanning tree rooted at v into a spanning tree rooted at $v'$. The dashed lines indicate the unique oriented cycle that is created by adding the edge $[v, v']$.

We turn the above operation into a Markov chain $(Y_n)$, by letting the root perform the underlying Markov chain $(X_n)$. That is,

$$P[Y_{n+1} = (t', v') \mid Y_n = (t, v)] := P[X_{n+1} = v' \mid X_n = v] = p_{vv'}.$$

We will see in Exercise 2.1 that $(Y_n)$ is irreducible.

We claim that the stationary distribution of $(Y_n)$ is proportional to q(t, v). We can write

$$q(t, v) = \sum_{w : [v,w] \in E} p_{vw}\, q(t, v). \tag{1}$$

For a given w, let $(t', v') = \mathrm{prec}((t, v), w)$ denote the corresponding precursor tree. Then

$$p_{vw}\, q(t, v) = p_{v'v}\, q(t', v').$$

Hence the right-hand side of (1) can be rewritten as

$$\sum_{(t', v')} p_{v'v}\, q(t', v'). \tag{2}$$

Since $p_{v'v}$ is the transition probability for moving from $(t', v')$ to (t, v), this proves the claim (assuming Exercise 2.1).

The Theorem now follows, since p(v) is proportional to the long run fraction of time that the root equals v, which is the long run fraction of time $X_n = v$.
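The theorem can be verified numerically for a small chain by enumerating all rooted spanning trees with their weights q(t, v) and comparing with the stationary distribution obtained by power iteration. A sketch (the 3-state transition matrix is an arbitrary example of ours):

```python
import itertools

# An arbitrary irreducible 3-state chain (our example, not from the notes)
P = {'a': {'b': 0.5, 'c': 0.5},
     'b': {'a': 0.3, 'c': 0.7},
     'c': {'a': 0.9, 'b': 0.1}}
V = list(P)

def _reaches(v, f, root):
    seen = set()
    while v != root:
        if v in seen:
            return False
        seen.add(v)
        v = f[v]
    return True

def p_weight(root):
    """Sum of q(t, root) over rooted spanning trees: each non-root vertex
    picks a successor, and following successors must always reach the root."""
    total = 0.0
    others = [v for v in V if v != root]
    for choice in itertools.product(*(list(P[v]) for v in others)):
        f = dict(zip(others, choice))
        if all(_reaches(v, f, root) for v in others):
            w = 1.0
            for v in others:
                w *= P[v][f[v]]
            total += w
    return total

p = {v: p_weight(v) for v in V}
pi = {v: 1.0 / len(V) for v in V}   # stationary distribution by power iteration
for _ in range(500):
    pi = {v: sum(pi[u] * P[u].get(v, 0.0) for u in V) for v in V}
s = sum(p.values())
for v in V:
    assert abs(p[v] / s - pi[v]) < 1e-12
print({v: round(p[v] / s, 4) for v in V})  # -> {'a': 0.3991, 'b': 0.2361, 'c': 0.3648}
```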


Exercise 2.1. Show that given a rooted tree (t, v), and for any $X_0$ and $Y_0$, there exist N and a sequence $X_1, \ldots, X_N$ of allowed steps such that $Y_N = (t, v)$. Conclude that the chain $(Y_n)$ is irreducible. (Hint: traverse the edges of t in a suitable order.)

    2.2 The backwards tree construction

Let $\ldots, Y_{-1}, Y_0, Y_1, \ldots$ be the stationary tree-chain run from time $-\infty$. We will write P for the probability measure governing this chain. Then $X_n$, the root of $Y_n$, is the stationary X-chain.

We define the backward tree at time n as follows. For $w \ne X_n$, let

$$L_w^n = \sup\{k : k < n,\; X_k = w\}.$$

Due to irreducibility, $L_w^n > -\infty$ with probability 1. Then we have

$$Y_n = \{[X_{L_w^n}, X_{L_w^n + 1}] : w \ne X_n\}.$$

That is, if for every $w \ne X_n$ we mark the edge that was used on the last exit from w prior to time n, we get $Y_n$. This is easily verified from the definition of the Y-chain.

    2.3 Verification of the Aldous-Broder algorithm

We now return to the task of generating a uniform spanning tree of a connected graph G = (V, E). This is done by reversing time in the backwards tree construction.

Let $X_n$ be the simple random walk on G. Then the directed graph constructed from X is the same as G, with each edge being present with both orientations. It is easy to check that the stationary distribution is proportional to deg(u). We still write P for the probability measure corresponding to the stationary chain.

The simple random walk is reversible; for the stationary chain this means that $P[X_0 = x_0, \ldots, X_k = x_k] = P[X_k = x_0, \ldots, X_0 = x_k]$. In particular,

$$P[X_1 = x_1, \ldots, X_k = x_k \mid X_0 = v] = P[X_{-k} = x_k, \ldots, X_{-1} = x_1 \mid X_0 = v]. \tag{3}$$

Proof [the Aldous-Broder algorithm works]. Orient each edge of T towards the root. Let us prove that T is uniformly distributed. For simple random walk, the weight of a rooted tree (t, v) is

$$q(t, v) = \prod_{w \ne v} \frac{1}{\deg(w)} = \mathrm{const} \cdot \deg(v). \tag{4}$$


It is clear from the definitions of the backwards tree and T that the forward tree constructed from the sequence $v, x_1, x_2, \ldots$ will be (t, v) if and only if the backwards tree constructed from $\ldots, x_2, x_1, v$ is (t, v). Hence (3) gives

$$P[T = (t, v) \mid X_0 = v] = P[Y_0 = (t, v) \mid X_0 = v] = \frac{P[Y_0 = (t, v)]}{P[X_0 = v]} = \frac{\mathrm{const}\cdot q(t, v)}{\mathrm{const}\cdot \deg(v)}.$$

Due to (4), the right-hand side is a constant independent of (t, v). Hence each tree rooted at v is equally likely to result from the algorithm. Forgetting the root then yields a uniformly distributed spanning tree.


3 Wilson's algorithm

In this section, we present a second algorithm to generate random spanning trees, due to Wilson [34]. It turns out to be extremely useful not only as a simulation tool, but also for theoretical analysis. Before presenting the algorithm, we generalize the notion of random spanning trees from the uniform case to a weighted case.

    3.1 Some terminology

Let G = (V, E) be an unoriented graph. We allow multiple edges or loops in the graph. For many purposes we will be able to disregard loops, as they can never be contained in a tree. Often we will use oriented edges, and in this case each edge will be present in E with both orientations. For an oriented edge e we write $e = [e^-, e^+]$, and call $e^-$ the tail of e and $e^+$ the head of e. We define $-e := [e^+, e^-]$, the reversal of e.

By a network, we mean a pair (G, C), where $C : E \to (0, \infty)$ is defined on unoriented edges (or, equivalently, C is symmetric: $C(e) = C(-e)$). We call C(e) the conductance of e. For any vertex $v \in V$, we call $C_v := \sum_{e : e^- = v} C(e)$ the conductance of v. For infinite networks, we assume that $C_v < \infty$ for every $v \in V$.


3.2 Loop-erasure

Given a path in G, its loop-erasure is obtained by removing loops, as they are created, following the path. When loop-erasure is applied to a random walk path, we talk about Loop-Erased Random Walk (LERW).
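In code, chronological loop-erasure can be sketched as follows (the helper and the example path are our own):

```python
def loop_erase(path):
    """Chronological loop-erasure: scan the path; on revisiting a vertex,
    delete the loop just completed (keeping the first visit)."""
    out = []
    for v in path:
        if v in out:
            del out[out.index(v) + 1:]   # erase the loop, keep v's first visit
        else:
            out.append(v)
    return out

print(loop_erase([0, 1, 2, 1, 3, 2, 4]))  # -> [0, 1, 3, 2, 4]
```

Note that the result depends on the order of erasure: loops are removed as they close, not all at once.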

3.3 Wilson's method

Let (G, C) be a finite connected network. Pick any $r \in V$, called the root. We define a growing sequence of subtrees T(i), $i \ge 0$, of G. We let $T(0) := \{r\}$. Let $v_1, \ldots, v_{n-1}$ be an enumeration of $V \setminus \{r\}$. Suppose T(i) has been generated. Start a network random walk at $v_{i+1}$, and stop when it hits T(i). Let

$$T(i + 1) := T(i) \cup \mathrm{LE}(\text{path from } v_{i+1} \text{ to } T(i)).$$

The output of the algorithm is $T = T(n - 1)$.

Theorem 3.1. T is distributed proportional to weight.

Proof. To each $v \in V$, $v \ne r$, we assign an infinite stack of random oriented edges (arrows) $e_j^v$, $j = 1, 2, \ldots$, such that $(e_j^v)^- = v$ and $P[e_j^v = e] = C(e)/C_v$. All elements in all the stacks are independent. We colour the j-th element of each stack with colour j.

The arrows on top of the stacks define a random oriented graph with vertex set V. This may contain oriented cycles. If there are no oriented cycles, then, since every vertex $v \ne r$ has a unique outgoing edge, the graph is a tree directed towards the root r.

Consider the notion of cycle popping. If $C = e_1, \ldots, e_k$ is an oriented cycle on top of the stacks, by popping C we mean removing all the edges $e_j$, $j = 1, \ldots, k$, from their stacks, and lifting those stacks up by one level, so that new edges now sit on top of those stacks.

The collection of stacks provides a probability space on which the algorithm can be defined. Since the arrows at each vertex were chosen with the random walk transition probabilities, the random walk started at $v_1$ can follow the oriented edges. When a vertex is revisited for the first time, an oriented cycle has been found. If we pop this cycle from the stacks, then the revisited vertex receives a fresh, independent arrow, which can be followed to continue the random walk. It is easy to see that if every completed cycle is popped, then following the arrows on top of the current stacks is precisely the network random walk. Moreover, when r is hit, on top of the stacks we see the LERW from $v_1$ to r, as well as fresh arrows (independent of previously examined ones) on top of the stacks of $v \notin T(1)$. Similarly, the LERW from $v_2$ corresponds to following the arrows from $v_2$ and popping cycles as they

    10

  • 8/10/2019 The Uniform Spanning Tree and Related Models

    11/65

are encountered. When the algorithm is complete, it outputs the tree on top of the stacks.

The above shows that the algorithm defines a particular way of popping cycles until a tree is uncovered. Note that each popped cycle is coloured (each of its oriented edges has a colour), and a particular coloured cycle can be popped at most once. We now prove that we may in fact pop cycles in any order we wish, and regardless of what we do, the same coloured cycles will be popped, and the same tree uncovered.

Let $C_1, \ldots, C_m$ be a sequence of oriented coloured cycles that can be popped (in this order), with a tree t resulting. We know from the algorithm that such a finite sequence exists with probability 1. Let $D_1, D_2, \ldots$ be any other possible sequence of coloured oriented cycles that can be popped. We show that the D-sequence consists of the same coloured cycles as the C-sequence. We prove this by induction on m. If m = 0, then there are no cycles to be popped, hence the D-sequence is also empty. Assume now that $m \ge 1$, and that the statement is true for C-sequences of length less than m. Let $D_i$ be the first cycle in the D-sequence that is not disjoint from $C_1$. Hence $D_i$ and $C_1$ share a vertex $w_1$. Since $D_1, \ldots, D_{i-1}$ are disjoint from $C_1$, they do not contain $w_1$, and hence the colour of $w_1$ is the same in $C_1$ and $D_i$. It follows that $w_1$ has the same successor $w_2$ in $C_1$ and $D_i$. Using again that $w_2$ does not occur in $D_1, \ldots, D_{i-1}$, we see that $w_2$ has the same colour in $C_1$ and $D_i$, and so on. We get that $C_1$ and $D_i$ are the same coloured cycle. Popping $D_i = C_1$ commutes with popping $D_2, \ldots, D_{i-1}$, hence popping the D-sequence has the same effect as popping $D_i = C_1, D_2, D_3, \ldots, D_{i-1}, D_{i+1}, \ldots$. Now pop $C_1$ from the top of the stacks. By the induction hypothesis, $C_2, \ldots, C_m$ consists of the same coloured cycles as $D_2, \ldots, D_{i-1}, D_{i+1}, \ldots$, and it uncovers the same tree t. The claim follows.

We now show that the tree uncovered by cycle popping has probability proportional to weight. Let $C_1, \ldots, C_m$ be any sequence of coloured oriented cycles that can be popped in this order, and t any tree rooted at r. The probability that $C_1$ can be popped is the probability that the arrows in $C_1$ all point the right way, that is, $\prod_{e \in C_1} C(e)/C_{e^-}$. Using independence of the elements in the stacks, the conditional probability that $C_i$ can be popped, given that $C_1, \ldots, C_{i-1}$ can be popped, is $\prod_{e \in C_i} C(e)/C_{e^-}$. Finally, using again independence of the elements in the stacks, conditioned on all the cycles popped, the probability that the tree t is uncovered is $\prod_{e \in t} C(e)/C_{e^-}$. Hence we have

$$P[C_1, \ldots, C_m \text{ can be popped and as a result } t \text{ is uncovered}] = \prod_{i=1}^m \prod_{e \in C_i} \frac{C(e)}{C_{e^-}} \cdot \prod_{e \in t} \frac{C(e)}{C_{e^-}}.$$


The last factor is $\mathrm{const} \cdot \mathrm{weight}(t)$. Note that t varies independently of the cycles: given any sequence of cycles popped, the tree uncovered may be any tree rooted at r. This shows more than we need: even if we condition on the sequence of cycles popped, the tree has the claimed distribution.
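Putting the pieces together, Wilson's method is a random-walk-plus-loop-erasure loop. A sketch in Python (the 3x3 grid example and the function name are ours; unit conductances, so the network random walk is simple random walk):

```python
import random

def wilson_ust(adj, root, rng):
    """Sample a uniform spanning tree by Wilson's algorithm (unit conductances)."""
    in_tree = {root}
    parent = {}
    for start in adj:
        if start in in_tree:
            continue
        path = [start]                    # random walk until the current tree is hit
        while path[-1] not in in_tree:
            path.append(rng.choice(adj[path[-1]]))
        lerw = []                         # chronological loop-erasure of the walk
        for v in path:
            if v in lerw:
                del lerw[lerw.index(v) + 1:]
            else:
                lerw.append(v)
        for u, w in zip(lerw, lerw[1:]):  # attach the loop-erased branch
            parent[u] = w
            in_tree.add(u)
    return parent  # each non-root vertex points one step towards the root

rng = random.Random(2024)
grid = {(x, y): [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 3 and 0 <= y + dy < 3]
        for x in range(3) for y in range(3)}
tree = wilson_ust(grid, (0, 0), rng)
assert len(tree) == 8                     # 9 vertices, 8 tree edges
```

By the proof above, the order in which the starting vertices are processed does not affect the distribution of the output.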

Remark. The proof showed not only that the result of the algorithm is independent of the sequence $v_1, \ldots, v_{n-1}$, but also that we may, if we wish, pick $v_{i+1}$ depending on what happened in the algorithm up to that point. The proof also showed that the distribution of the number of random walk steps needed is independent of how $v_1, \ldots, v_{n-1}$ was chosen. The proof in fact establishes a probability space on which any choice results in the same number of steps. This is called a coupling of the different instances of the algorithm.

Remark. The same proof works for any Markov chain in place of a network random walk. That is, if G is the directed graph constructed from a Markov chain as in Section 2, the algorithm outputs a rooted tree (t, r) with probability proportional to $q(t, r) = \prod_{[v,w] \in t} p_{vw}$.

Remark. The expected number of Markov chain steps used by the algorithm can be found as follows. To find the expected number of times that a step out of u is used, we may assume, by the proof, that $v_1 = u$. Then we need to find $E[\#\,\text{visits to } u \text{ before } \tau_r]$, where $\tau_r = \inf\{n \ge 0 : X(n) = r\}$. For any irreducible Markov chain, this expectation equals $\pi(u)[E_u \tau_r + E_r \tau_u]$, where $\pi$ is the stationary distribution [2, Chapter 2]. Hence the expected running time of the algorithm (measured by the number of Markov chain steps) is

$$\sum_{u \ne r} \pi(u)\big[E_u \tau_r + E_r \tau_u\big].$$

Note that this is always at most $2\, E C$, where C is the cover time, and may be smaller.

Remark. Later we will see that variants of Wilson's algorithm on infinite graphs give useful information. As a first example, consider the square lattice $G = (\mathbb{Z}^2, \mathbb{E}^2)$. Simple random walk is recurrent, that is, for all $v, w \in \mathbb{Z}^2$,

$$P_v[X(n) = w \text{ for some } n \ge 1] = 1.$$

Pick $r \in \mathbb{Z}^2$, and let $v_1, v_2, \ldots$ list all the vertices of $\mathbb{Z}^2$. Since T(i) is hit with probability 1, the algorithm can be run, and results in a spanning tree of $(\mathbb{Z}^2, \mathbb{E}^2)$.


    4 Electric networks and spanning trees

In this section we describe the relation between electrical networks, spanning trees and random walk. To a great extent, we follow the exposition in [5]. The goal is to define the notion of current, and use it to describe the joint probability that edges $e_1, \ldots, e_k$ belong to a random spanning tree. Although the physical interpretations of resistance, current, etc. will not be needed, we note that a nice introduction to these and their connection with random walk can be found in [11].

    4.1 The gradient and divergence operators

Let (G, C) be a finite network. We denote by $\ell^2(V)$ the space of real-valued functions on V, with the inner product

$$(f, g)_C := \sum_{v \in V} C_v f(v) g(v), \tag{5}$$

and norm $\|f\|_C$. We denote by $\ell^2_-(E)$ the space of antisymmetric functions on oriented edges, that is, functions $\theta : E \to \mathbb{R}$ satisfying $\theta(-e) = -\theta(e)$ for all $e \in E$. We equip this space with the inner product

$$(\theta, \psi)_R := \frac{1}{2}\sum_{e \in E} R(e)\theta(e)\psi(e) = \sum_{e \in E_{1/2}} R(e)\theta(e)\psi(e), \tag{6}$$

where $R(e) := 1/C(e)$ is the resistance of the edge e, and $E_{1/2}$ contains each edge of G with exactly one orientation. The energy of $\theta$ is

$$\mathcal{E}(\theta) := (\theta, \theta)_R = \|\theta\|_R^2.$$

The gradient operator is $\nabla : \ell^2(V) \to \ell^2_-(E)$, defined by

$$(\nabla F)(e) := C(e)\big(F(e^-) - F(e^+)\big). \tag{7}$$

The divergence operator is $\mathrm{div} : \ell^2_-(E) \to \ell^2(V)$, defined by

$$(\mathrm{div}\,\theta)(v) := \frac{1}{C_v} \sum_{e : e^- = v} \theta(e). \tag{8}$$

$\nabla$ and $\mathrm{div}$ are adjoints of each other: $(\theta, \nabla F)_R = (\mathrm{div}\,\theta, F)_C$. To see this, write

$$(\theta, \nabla F)_R = \frac{1}{2}\sum_{e \in E} R(e)\,\theta(e)\,C(e)\big(F(e^-) - F(e^+)\big) = \frac{1}{2}\sum_{e \in E} \theta(e)\big(F(e^-) - F(e^+)\big).$$

Since $\theta(-e)F((-e)^-) = -\theta(e)F(e^+)$, this can be written as

$$\sum_{e \in E} \theta(e)F(e^-) = \sum_{e \in E}\sum_{v \in V} \mathbf{1}[e^- = v]\,F(v)\,\theta(e) = \sum_{v \in V} C_v F(v)\,\frac{1}{C_v}\sum_{e \in E} \mathbf{1}[e^- = v]\,\theta(e) = \sum_{v \in V} C_v F(v)\,(\mathrm{div}\,\theta)(v).$$
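The adjointness can be sanity-checked numerically on a tiny network, storing $\theta$ on one fixed orientation per edge and handling reversals by antisymmetry. A sketch (the triangle network, its conductances, and the random test data are our own choices):

```python
import random

# A triangle network with assorted conductances (an arbitrary example of ours).
edges = [((0, 1), 2.0), ((1, 2), 3.0), ((2, 0), 5.0)]  # (oriented representative, C(e))
V = [0, 1, 2]
C = {v: sum(c for (u, w), c in edges if v in (u, w)) for v in V}  # C_v

rng = random.Random(7)
F = {v: rng.uniform(-1, 1) for v in V}             # F in l^2(V)
theta = {e: rng.uniform(-1, 1) for e, _ in edges}  # theta on one orientation;
                                                   # theta(-e) = -theta(e) implicitly

def grad(F):
    # (grad F)(e) = C(e)(F(e-) - F(e+)), on the chosen orientations
    return {(u, w): c * (F[u] - F[w]) for (u, w), c in edges}

def div(theta):
    # (div theta)(v) = (1/C_v) * sum of theta over oriented edges with tail v
    out = {}
    for v in V:
        s = 0.0
        for (u, w), _ in edges:
            if u == v:
                s += theta[(u, w)]
            if w == v:
                s -= theta[(u, w)]     # the reversal has tail v
        out[v] = s / C[v]
    return out

R = {e: 1.0 / c for e, c in edges}
lhs = sum(R[e] * theta[e] * grad(F)[e] for e, _ in edges)  # (theta, grad F)_R
rhs = sum(C[v] * div(theta)[v] * F[v] for v in V)          # (div theta, F)_C
assert abs(lhs - rhs) < 1e-9
```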

To motivate what follows, imagine that the network (G, C) is an electric network, where edge e is a resistor with resistance R(e). Suppose that we hook up a battery between the two endpoints of the edge e, and suppose that a unit of current flows through the battery. Let $I^e(f)$ be the amount of current that flows along the edge f. How do we determine $I^e(f)$? We know that current is conserved at each vertex $v \ne e^-, e^+$; hence $(\mathrm{div}\, I^e)(v) = 0$ there. A unit of current comes in at $e^-$, and a unit of current is taken out at $e^+$, so

$$\mathrm{div}\, I^e = \frac{1}{C_{e^-}}\mathbf{1}_{e^-} - \frac{1}{C_{e^+}}\mathbf{1}_{e^+}. \tag{9}$$

This does not uniquely specify $I^e$; however, there is a physical principle called Thomson's principle, which states that $I^e$ has minimal energy among all flows whose divergence equals (9). Here we will adopt this characterization as the mathematical definition of the current, in (10) below.

    4.2 The gradient and cycle spaces

We define the unit flow along the edge e by $\chi^e \in \ell^2_-(E)$, $\chi^e := \mathbf{1}_e - \mathbf{1}_{-e}$, where $\mathbf{1}_f$ denotes the function that is 1 on the edge f and zero elsewhere. Note that this has the required divergence (9). Let Gr denote the space of gradients:

$$\mathrm{Gr} := \nabla \ell^2(V).$$

The flow $\nabla \mathbf{1}_v = \sum_{e : e^- = v} C(e)\,\chi^e$ is called the star at v. The space Gr is spanned by the stars. If $e_1, \ldots, e_k$ is an oriented cycle in G, then the flow $\sum_{i=1}^k \chi^{e_i} \in \ell^2_-(E)$ is called a cycle. We let

$$\mathrm{Cyc} := \text{the linear span of all cycles} \subset \ell^2_-(E).$$

Lemma 4.1. We have $\ell^2_-(E) = \mathrm{Gr} \oplus \mathrm{Cyc}$.

Proof. Stars are orthogonal to cycles. To see this, note that if v is not on the cycle $e_1, \ldots, e_k$, then the star at v and this cycle are supported on disjoint edges, and hence are orthogonal. When $v = e_i^+ = e_{i+1}^-$, then the inner product is $R(e_i)(-C(e_i)) + R(e_{i+1})C(e_{i+1}) = 0$ (note that if v occurs multiple times in the cycle, a similar calculation remains valid). Therefore, what we need to show is that if $\theta$ is orthogonal to Cyc, then it is a gradient. This goes by a well-known argument. Fix $o \in V$, and for any $v \in V$ let $e_1, \ldots, e_k$ be any path in G from v to o. Define $F(v) := \sum_j R(e_j)\theta(e_j)$. This definition is independent of the path chosen, since if $e_1', \ldots, e_l'$ is another path, then $\theta$ is orthogonal to the cycle $e_1, \ldots, e_k, -e_l', \ldots, -e_1'$. It is easy to see that $\nabla F = \theta$, and this completes the proof.

For a subspace Z of $\ell^2_-(E)$, let $P_Z$ be the orthogonal projection onto Z, and let $P_Z^\perp$ be the orthogonal projection onto the orthogonal complement of Z.

We are ready to define

$$I^e := P_{\mathrm{Gr}}\, \chi^e. \tag{10}$$

Since $I^e - \chi^e \perp \mathrm{Gr}$, we have, for any F, $0 = (I^e - \chi^e, \nabla F)_R = (\mathrm{div}\, I^e - \mathrm{div}\, \chi^e, F)_C$, which is equivalent to $\mathrm{div}\, I^e = \mathrm{div}\, \chi^e$. Hence $I^e$ indeed has smallest energy among flows with divergence (9).

    4.3 Connection with spanning trees

Let e and f be oriented edges of G. We define

$$\phi(e, f) = P[\text{the path from } e^- \text{ to } e^+ \text{ in the random spanning tree uses } f].$$

Start a network random walk at $e^-$, and stop when it hits $e^+$. Let $J^e(f)$ be the expected net number of times the edge f is used, that is,

$$J^e(f) := E\big[\#\,\text{times } f \text{ is used} - \#\,\text{times } -f \text{ is used}\big].$$

Theorem 4.1. We have

$$\phi(e, f) - \phi(e, -f) = J^e(f) = I^e(f).$$

In particular,

$$P[e \in T] = P_{e^-}[\text{first hit } e^+ \text{ via the edge } e] = I^e(e). \tag{11}$$

Remark. The equality of the spanning tree quantity and the current is due to Kirchhoff [19], and the equality of the random walk quantity and the current is due to Doyle and Snell [11].

Proof. By Wilson's algorithm, the path in the random spanning tree from $e^-$ to $e^+$ is a LERW from $e^-$ to $e^+$. Hence

$$\phi(e, f) - \phi(e, -f) = E\big[\#\,\text{times the LERW uses } f - \#\,\text{times the LERW uses } -f\big]. \tag{12}$$


A cycle is traversed an equal number of times in expectation in both directions, since if $e_1, \ldots, e_k$ is a cycle with $v = e_1^-$, then

$$P[e_1, \ldots, e_k \text{ are traversed} \mid X(n) = v] = \frac{C(e_1)}{C_{e_1^-}} \frac{C(e_2)}{C_{e_2^-}} \cdots \frac{C(e_k)}{C_{e_k^-}} = P[-e_k, \ldots, -e_1 \text{ are traversed} \mid X(n) = v]. \tag{13}$$

This implies that adding the loops of the random walk on the right-hand side of (12) does not change the expected net number of times f is used; hence $\phi(e, f) - \phi(e, -f) = J^e(f)$.

To prove the second equality, let

$$F(v) := E_{e^-}\big[\#\,\text{visits to } v \text{ up to } T_{e^+}\big].$$

Since for every $v \ne e^-, e^+$ any incoming step to v is balanced by an outgoing step from v, we have $(\mathrm{div}\, J^e)(v) = 0$. At $v = e^-$ there is one more outgoing step than incoming step, and at $v = e^+$ there is one incoming step and no outgoing step. Hence $\mathrm{div}\, J^e = (1/C_{e^-})\mathbf{1}_{e^-} - (1/C_{e^+})\mathbf{1}_{e^+} = \mathrm{div}\, I^e$. Hence $J^e - I^e \perp \mathrm{Gr}$, and it is enough to show that $J^e \in \mathrm{Gr}$. Let

$$\psi_v(f) := P_v[\text{first step of the walk uses } f] - P_v[\text{first step of the walk uses } -f],$$

so that $\psi_v = \frac{1}{C_v}\nabla \mathbf{1}_v \in \mathrm{Gr}$. Then it is easy to check that $J^e = \sum_v F(v)\,\psi_v \in \mathrm{Gr}$. It follows that $J^e = I^e$, and the proof is complete.

Exercise 4.1. Complete the argument regarding loops being traversed an equal number of times on average. Condition on the loop-erasure of the random walk path from $e^-$ to $e^+$, and decompose the random walk path into its loop-erasure $v_0 = e^-, v_1, \ldots, v_N = e^+$, and the oriented cycles $C_0, C_1, \ldots, C_{N-1}$, where $C_i$ is a cycle based at $v_i$. Further condition on the cycles without their orientation, and deduce from (13) that the two orientations are equally likely. Deduce that the expectation of the quantity defining $J^e(f)$ is equal to $\phi(e, f) - \phi(e, -f)$.

The matrix

$$Y(e, f) = I^e(f) = \frac{1}{R(f)}(I^e, \chi^f)_R = C(f)\,(P_{\mathrm{Gr}}\chi^e, \chi^f)_R$$

is called the transfer current matrix.
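Equality (11) can be tested numerically: compute $I^e$ by solving the Kirchhoff (Laplacian) equations exactly, and compare Y(e, e) with a brute-force enumeration of spanning trees. A sketch (the example network, with unit conductances, is our own):

```python
from fractions import Fraction
from itertools import combinations

# Example network with unit conductances (our own choice):
# the 4-cycle 0-1-2-3 plus the diagonal {0,2}; edges oriented tail -> head.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def current(e):
    """The unit current I^e: a unit of current enters at the tail of e and
    leaves at the head; vertex 3 is grounded to make the Laplacian invertible."""
    n = len(V)
    L = [[Fraction(0)] * (n - 1) for _ in range(n - 1)]
    for (u, w) in E:
        for a, b in ((u, w), (w, u)):
            if a < n - 1:
                L[a][a] += 1
                if b < n - 1:
                    L[a][b] -= 1
    rhs = [Fraction(0)] * (n - 1)
    if e[0] < n - 1:
        rhs[e[0]] += 1
    if e[1] < n - 1:
        rhs[e[1]] -= 1
    volt = solve(L, rhs) + [Fraction(0)]
    return {f: volt[f[0]] - volt[f[1]] for f in E}  # I^e(f) = C(f)(F(f-) - F(f+)), C = 1

Y = {e: current(e) for e in E}  # transfer currents Y(e, f) = I^e(f)

def is_tree(edges):
    parent = {v: v for v in V}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False
        parent[ru] = rw
    return True

trees = [t for t in combinations(E, len(V) - 1) if is_tree(t)]
for e in E:  # check (11): P[e in T] = I^e(e)
    assert Y[e][e] == Fraction(sum(e in t for t in trees), len(trees))
```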

    4.4 Contracting edges in a network

Definition 4.1. Let G = (V, E) be a graph, and let $F \subset E$. We denote by G/F the graph obtained by identifying, for every $f \in F$, the endpoints of f. We


identify the edges of G with those of G/F, where some edges in G have become loops in G/F. Hence we also have an identification of the space $\ell^2_-(E)$ for the two graphs.

Let us write $T_G$ for the random spanning tree of G.

Proposition 4.1. Assume that G is a finite connected network, and that no cycle can be formed of the edges in F. Then $T_G$ conditioned on $F \subset T_G$ has the same distribution as $T_{G/F} \cup F$.

Proof. For any $A \subset E$ with $A \cap F = \emptyset$, we have that $A \cup F$ contains a cycle of G if and only if A contains a cycle of G/F. Hence $A \cup F$ is a spanning tree of G if and only if A is a spanning tree of G/F. Since $\mathrm{weight}(A \cup F) = \prod_{e \in A} C(e) \prod_{f \in F} C(f)$, we get the statement.

The above proposition shows that if we condition on a certain set of edges F to be present in $T_G$, the conditional probabilities are given by the contracted network G/F. Hence we would like to see how the current $I^e$ changes when we contract edges in a network. This will turn out to be given by applying an orthogonal projection to $I^e$.

Recall that we identify edges in G and edges in G/F (where the edges in F became loops in G/F). This gives an identification of the spaces $\ell^2_-(E)$ for the two graphs.

Let $\widehat{\mathrm{Gr}}$ denote the space of gradients in G/F, and let $\widehat{\mathrm{Cyc}}$ denote the linear span of the cycles in G/F.

Lemma 4.2.

$$\widehat{\mathrm{Cyc}} = \mathrm{Cyc} + \langle F \rangle,$$

where $\langle F \rangle$ is short for the linear span of $\{\chi^f : f \in F\}$.

Proof. It is clear that cycles in G remain cycles in G/F, so Cyc is a subspace of $\widehat{\mathrm{Cyc}}$. Since edges in F become loops in G/F, it is also clear that $\langle F \rangle$ is a subspace of $\widehat{\mathrm{Cyc}}$. This shows that the right-hand side is contained in the left-hand side. Suppose now that $e_1, \ldots, e_k$ form a cycle in G/F. We show that we can insert edges from F into this sequence to get a cycle in G, which will show that $\widehat{\mathrm{Cyc}}$ is contained in the right-hand side. If $e_{i-1}^+ = e_i^-$ in the graph G, then there is no need to insert an edge between $e_{i-1}$ and $e_i$. If $e_{i-1}^+ = v_1 \ne v_2 = e_i^-$ in G, then $v_1$ and $v_2$ belong to the same connected component of F (since they are contracted to the same vertex in G/F). Hence we can insert edges $f_1, \ldots, f_{k(i)} \in F$ to get a path from $v_1$ to $v_2$.


Now write
$$\ell^2(E) = \mathrm{Gr} \oplus \mathrm{Cyc} = \widehat{\mathrm{Gr}} \oplus \widehat{\mathrm{Cyc}},$$
where $\mathrm{Cyc} \subseteq \widehat{\mathrm{Cyc}}$, and consequently $\widehat{\mathrm{Gr}} \subseteq \mathrm{Gr}$. We can write $\mathrm{Gr} = \widehat{\mathrm{Gr}} \oplus (\mathrm{Gr} \cap \widehat{\mathrm{Cyc}})$ and $\widehat{\mathrm{Cyc}} = (\mathrm{Gr} \cap \widehat{\mathrm{Cyc}}) \oplus \mathrm{Cyc}$. Hence
$$\ell^2(E) = \widehat{\mathrm{Gr}} \oplus (\mathrm{Gr} \cap \widehat{\mathrm{Cyc}}) \oplus \mathrm{Cyc}.$$
We show that the middle subspace equals $Z := P_{\mathrm{Gr}} \langle F \rangle$. For this note that
$$\mathrm{Gr} \cap \widehat{\mathrm{Cyc}} = P_{\mathrm{Gr}} \widehat{\mathrm{Cyc}} = P_{\mathrm{Gr}} \mathrm{Cyc} + P_{\mathrm{Gr}} \langle F \rangle = Z.$$
Hence
$$\ell^2(E) = \widehat{\mathrm{Gr}} \oplus Z \oplus \mathrm{Cyc}.$$
Let now $e$ be an edge that does not form a cycle together with edges from $F$ (so that it is not a loop in $G/F$). Then, writing $\widehat{I}^e$ for the current in $G/F$, we have
$$\widehat{I}^e = P_{\widehat{\mathrm{Gr}}} \chi^e = P_{Z^\perp} P_{\mathrm{Gr}} \chi^e = P_{Z^\perp} I^e. \qquad (14)$$

4.5 The transfer current theorem

The expression (14) allows us to prove the following beautiful theorem on the joint probability that edges $e_1, \dots, e_k$ belong to the random spanning tree. The theorem is due to [8]; the proof we present is from [5].

Theorem 4.2 (Transfer current theorem; Burton, Pemantle 1993). Let $G$ be a finite connected network. For distinct edges $e_1, \dots, e_k \in E$, we have
$$\mathbf{P}[e_1, \dots, e_k \in T_G] = \det[Y(e_i, e_j)]_{1 \le i,j \le k},$$
where $Y(e, f) = I^e(f)$.

Proof. The left hand side is 0 if a cycle can be formed from the edges $e_1, \dots, e_k$, and we need to show that in this case the determinant is also 0. Write the cycle as $\sum_j \alpha_j \chi^{e_j} \in \mathrm{Cyc}$ (where $\alpha_j \in \{-1, 0, 1\}$). Then the following linear combination of the columns of the matrix $Y$ vanishes:
$$\sum_j \alpha_j R(e_j) Y(e_i, e_j) = \sum_j \alpha_j R(e_j) I^{e_i}(e_j) = \sum_j \alpha_j (I^{e_i}, \chi^{e_j})_R = \Big( I^{e_i}, \sum_j \alpha_j \chi^{e_j} \Big)_R = \Big( P_{\mathrm{Gr}} \chi^{e_i}, \sum_j \alpha_j \chi^{e_j} \Big)_R = 0.$$
The last equality follows, since gradients are orthogonal to cycles. Therefore, the determinant is also 0.


Assume now that no cycle can be formed from the edges $e_1, \dots, e_k$. We write, using that $P_{\mathrm{Gr}} = P_{\mathrm{Gr}}^2$, and that $P_{\mathrm{Gr}}$ is self-adjoint:
$$Y(e, f) = I^e(f) = \frac{1}{R(f)} (I^e, \chi^f)_R = C(f) (I^e, \chi^f)_R = C(f) (P_{\mathrm{Gr}} \chi^e, \chi^f)_R = C(f) (P_{\mathrm{Gr}} \chi^e, P_{\mathrm{Gr}} \chi^f)_R = C(f) (I^e, I^f)_R. \qquad (15)$$
Hence
$$\det[Y(e_i, e_j)] = \prod_{i=1}^k C(e_i) \, \det Y_k, \qquad (16)$$
where $Y_k(e_i, e_j) = (I^{e_i}, I^{e_j})_R$. The matrix $Y_k$ has the following structure: we have $k$ vectors, $I^{e_1}, \dots, I^{e_k}$, and the $(i, j)$ element is the inner product of the $i$-th and $j$-th vectors. Such a matrix is called a Gram matrix. The determinant of a Gram matrix equals the squared volume of the parallelepiped spanned by its determining vectors. This is easy to see if the determining vectors are pairwise orthogonal, as then the matrix is diagonal with diagonal entries equal to the squared lengths of the determining vectors. When the vectors are not orthogonal, we can apply the Gram–Schmidt orthogonalization process to them, and it is not hard to check that this changes neither the determinant, nor the squared volume of the parallelepiped spanned by the vectors. Therefore, we can write (16) as
$$\prod_{i=1}^k C(e_i) \, \| P_{Z_i^\perp} I^{e_i} \|_R^2, \qquad (17)$$
where $Z_i = \mathrm{span}\{ I^{e_1}, \dots, I^{e_{i-1}} \} = P_{\mathrm{Gr}}(\mathrm{span}\{ \chi^{e_1}, \dots, \chi^{e_{i-1}} \})$.

Now we have
$$\mathbf{P}[e_1, \dots, e_k \in T_G] = \prod_{i=1}^k \mathbf{P}[e_i \in T_G \mid e_1, \dots, e_{i-1} \in T_G] \overset{\text{Prop.\ 4.1}}{=} \prod_{i=1}^k \mathbf{P}[e_i \in T_{G/\{e_1, \dots, e_{i-1}\}}] \overset{(11)}{=} \prod_{i=1}^k \widehat{I}^{e_i}(e_i) \overset{(15)}{=} \prod_{i=1}^k C(e_i) (\widehat{I}^{e_i}, \widehat{I}^{e_i})_R \overset{(14)}{=} \prod_{i=1}^k C(e_i) \| P_{Z_i^\perp} I^{e_i} \|_R^2.$$
The last expression is the same as (17), which completes the proof. $\square$
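The transfer current matrix is computable by plain linear algebra: with unit conductances, $Y(e,f)$ is a difference of entries of the pseudo-inverse of the graph Laplacian. The following sketch (our own function names; the graph $K_4$ is an arbitrary example) checks the theorem against brute-force enumeration of spanning trees:

```python
import itertools
import numpy as np

def transfer_current(n, edges):
    """Y(e, f) = I^e(f) for unit conductances, via the Laplacian pseudo-inverse."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    Lp = np.linalg.pinv(L)
    m = len(edges)
    Y = np.zeros((m, m))
    for i, (a, b) in enumerate(edges):        # unit current enters at a, exits at b
        for j, (x, y) in enumerate(edges):
            Y[i, j] = Lp[x, a] - Lp[x, b] - Lp[y, a] + Lp[y, b]
    return Y

def spanning_trees(n, edges):
    """All spanning trees, by brute force over (n-1)-edge subsets."""
    out = []
    for S in itertools.combinations(range(len(edges)), n - 1):
        comp = list(range(n))
        acyclic = True
        for i in S:
            a, b = edges[i]
            ca, cb = comp[a], comp[b]
            if ca == cb:              # adding this edge would close a cycle
                acyclic = False
                break
            comp = [cb if c == ca else c for c in comp]
        if acyclic:
            out.append(set(S))
    return out

n = 4
edges = [(a, b) for a in range(n) for b in range(a + 1, n)]   # K4
trees = spanning_trees(n, edges)                               # 16 trees
Y = transfer_current(n, edges)

for subset in [(0,), (0, 1), (0, 5)]:   # one edge; two sharing a vertex; two disjoint
    exact = np.linalg.det(Y[np.ix_(subset, subset)])
    freq = sum(1 for T in trees if set(subset) <= T) / len(trees)
    assert abs(exact - freq) < 1e-10
```

The determinant reproduces the exact counting probabilities; for a single edge of $K_4$ both sides equal $1/2$, the effective resistance across the edge.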


Remark. A $\{0,1\}$-valued process $\{X_\alpha\}_{\alpha \in I}$ is called determinantal if
$$\mathbf{P}[X_{\alpha_1} = \dots = X_{\alpha_k} = 1] = \det[K(\alpha_i, \alpha_j)]_{1 \le i,j \le k}$$
for some kernel $K$.

    4.6 Monotonicity properties of currents

We need some monotonicity properties that will be very important when we look at limits of spanning trees on infinite graphs.

Let $(G, C)$ be a finite network and $e \in E$. When we need to indicate that a current is computed in a network $H$, we write $I^e_H$.

Proposition 4.2. (a) If $G'$ is a subgraph of $G$ containing $e$, then $I^e_{G'}(e) \ge I^e_G(e)$.
(b) If $F \subseteq E$ and $F \cup \{e\}$ has no cycles containing $e$, then $I^e_{G/F}(e) \le I^e_G(e)$.

Proof. In both cases, the inequality results from the fact that one of the currents can be written as an orthogonal projection of the other, which reduces the norm.

(a) Write $G' = (V', E')$. Since $E' \subseteq E$, there is a natural embedding of $\ell^2(E')$ into $\ell^2(E)$, by setting $\theta(e) = 0$ for all $e \in E \setminus E'$. We have $\ell^2(E') = \mathrm{Gr}' \oplus \mathrm{Cyc}'$, where $\mathrm{Cyc}' \subseteq \mathrm{Cyc}$ (any cycle in $G'$ is also a cycle in $G$). We have $\chi^e \in \ell^2(E') \subseteq \ell^2(E)$. Since $I^e_{G'}$ is the projection of $\chi^e$ onto $\mathrm{Gr}'$, we can write $\chi^e = I^e_{G'} + f'$, with $f' \in \mathrm{Cyc}'$. Then
$$I^e_G = P_{\mathrm{Gr}} \chi^e = P_{\mathrm{Gr}} I^e_{G'} + P_{\mathrm{Gr}} f' = P_{\mathrm{Gr}} I^e_{G'}.$$
Hence
$$I^e_G(e) \overset{(15)}{=} C(e) (I^e_G, I^e_G)_R = C(e) \| I^e_G \|_R^2 = C(e) \| P_{\mathrm{Gr}} I^e_{G'} \|_R^2 \le C(e) \| I^e_{G'} \|_R^2 = I^e_{G'}(e).$$

(b) We have $I^e_{G/F} = P_{Z^\perp} I^e_G$. Hence, similarly to (a):
$$I^e_{G/F}(e) = C(e) \| I^e_{G/F} \|_R^2 = C(e) \| P_{Z^\perp} I^e_G \|_R^2 \le C(e) \| I^e_G \|_R^2 = I^e_G(e). \qquad \square$$

    Proposition 4.2 has the following consequence for random spanning trees.


Proposition 4.3. Let $G = (V, E)$ be a finite connected network, $F \subseteq E$.
(a) If $G'$ is a subgraph of $G$ containing $F$, then
$$\mathbf{P}[F \subseteq T_{G'}] \ge \mathbf{P}[F \subseteq T_G].$$
(b) If $e \ne f$ then $\mathbf{P}[f \in T_G \mid e \in T_G] \le \mathbf{P}[f \in T_G]$. More generally, if $F \cap F' = \emptyset$, then
$$\mathbf{P}[F' \subseteq T_{G/F}] \le \mathbf{P}[F' \subseteq T_G].$$

Proof. (a) We may assume that $F$ contains no cycles. We induct on $|F|$. If $F = \{e\}$, then by (11) and Proposition 4.2(a),
$$\mathbf{P}[e \in T_{G'}] = I^e_{G'}(e) \ge I^e_G(e) = \mathbf{P}[e \in T_G].$$
Induction step: $F = F_1 \cup \{e\}$. Then $G'/F_1$ is a subgraph of $G/F_1$. Therefore,
$$\mathbf{P}[F \subseteq T_{G'}] = \mathbf{P}[e \in T_{G'/F_1}] \, \mathbf{P}[F_1 \subseteq T_{G'}] \ge \mathbf{P}[e \in T_{G/F_1}] \, \mathbf{P}[F_1 \subseteq T_G] = \mathbf{P}[F \subseteq T_G].$$

(b) We have
$$\mathbf{P}[f \in T_G \mid e \in T_G] = \mathbf{P}[f \in T_{G/e}] = I^f_{G/e}(f) \le I^f_G(f) = \mathbf{P}[f \in T_G].$$
The general case follows by induction on $|F'|$. If $F' = \{e\}$, then
$$\mathbf{P}[e \in T_{G/F}] = I^e_{G/F}(e) \le I^e_G(e) = \mathbf{P}[e \in T_G].$$
For the induction step, write $F' = F'_1 \cup \{e\}$. Note that $G/F/F'_1$ is a contraction of $G/F'_1$. Therefore,
$$\mathbf{P}[F' \subseteq T_{G/F}] = \mathbf{P}[e \in T_{G/F/F'_1}] \, \mathbf{P}[F'_1 \subseteq T_{G/F}] \le \mathbf{P}[e \in T_{G/F'_1}] \, \mathbf{P}[F'_1 \subseteq T_G] = \mathbf{P}[F' \subseteq T_G]. \qquad \square$$
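Both parts of Proposition 4.3 can be checked directly on a small network by enumerating spanning trees. A minimal sketch with unit conductances (the graph $K_4$ and the edge choices are arbitrary examples):

```python
import itertools

def trees(n, edges):
    """All spanning trees of (range(n), edges), by brute force."""
    out = []
    for S in itertools.combinations(edges, n - 1):
        comp = list(range(n))
        ok = True
        for a, b in S:
            ca, cb = comp[a], comp[b]
            if ca == cb:              # would close a cycle
                ok = False
                break
            comp = [cb if c == ca else c for c in comp]
        if ok:
            out.append(S)
    return out

def p(edge_set, n, edges):
    """P[edge_set is contained in T] under the uniform spanning tree."""
    ts = trees(n, edges)
    return sum(1 for t in ts if all(e in t for e in edge_set)) / len(ts)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
sub = [e for e in K4 if e != (2, 3)]          # subgraph G': K4 minus one edge

e, f = (0, 1), (0, 2)
# (a) passing to a subgraph can only raise the probability of e:
assert p([e], 4, sub) >= p([e], 4, K4)
# (b) negative association: P[f in T | e in T] <= P[f in T]
assert p([e, f], 4, K4) / p([e], 4, K4) <= p([f], 4, K4) + 1e-12
```

On $K_4$, for example, $\mathbf{P}[f \in T \mid e \in T] = 3/8$ while $\mathbf{P}[f \in T] = 1/2$.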


    5 Random spanning forests of infinite graphs

Some properties of random spanning trees become especially transparent when we consider the limit of an infinite graph. As an analogy, think about simple random walk, and how the phenomenon of recurrence/transience is most naturally expressed as a property of an infinite walk (although it could be formulated as a limiting property of finite walks).

It is not obvious how to define a uniform spanning tree on an infinite graph, and we will encounter some surprises. The first is that there will be more than one natural way to define the limit, and the results can be different. The second is that in the context of infinite graphs, the natural objects will be spanning forests, rather than spanning trees.

To construct the limiting objects on infinite graphs, we exploit the monotonicity properties proved in the previous section. For this section, $G = (V, E)$ is an infinite connected network.

    5.1 Measurable space

We will work on the space $\Omega = \{0, 1\}^E$. This is a compact metric space in the product topology. The Borel $\sigma$-algebra is generated by the elementary cylinders, that is, sets of the form
$$A_{B,K} = \{ F \subseteq E : F \cap K = B \},$$
where $B \subseteq K$ are finite sets of edges. The event $A_{B,K}$ expresses that the edges in $B$ are present and the edges in $K \setminus B$ are not.

    5.2 Exhaustion

Consider $V_1 \subseteq V_2 \subseteq \dots \subseteq V$, such that $\bigcup_n V_n = V$, with each $V_n$ finite. Let $G_n = (V_n, E_n)$ be the subgraph induced by $V_n$. The sequence $G_n$ is called an exhaustion of $G$ by finite subgraphs.

    5.3 Free spanning forest

Let $\mu^F_n$ be the random spanning tree measure on $G_n$ (this measure can be realized on $\Omega$).

Since $G_n$ is a subgraph of $G_{n+1}$, Proposition 4.3 implies that for any fixed finite $B \subseteq E$ we have
$$\mu^F_n[B \subseteq T] \ge \mu^F_{n+1}[B \subseteq T].$$


Note that this makes sense for large enough $n$, as then $B \subseteq E_n$. Hence we can define
$$\mu^F[B \subseteq T] := \lim_{n \to \infty} \mu^F_n[B \subseteq T], \qquad (18)$$
where the limit exists by monotonicity.

Exercise 5.1. Show that if $B \subseteq K$ are fixed finite sets of edges, then
$$\mu^F[T \cap K = B] := \lim_{n \to \infty} \mu^F_n[T \cap K = B]$$
exists. Hint: use the inclusion–exclusion principle to reduce the statement to (18).

By the result of the exercise, $\mu^F$ is defined on all elementary cylinders. By Kolmogorov's extension theorem, $\mu^F$ has a unique extension to a measure on $\Omega$.

Exercise 5.2. Show that $\mu^F_n \Rightarrow \mathsf{FSF}$, in the sense of weak convergence of probability measures on $\Omega$.

The limit does not depend on the exhaustion: if $G'_n$ is another exhaustion, we can find
$$V_{n_1} \subseteq V'_{n'_1} \subseteq V_{n_2} \subseteq V'_{n'_2} \subseteq \dots,$$
and the limit along this third exhaustion has to coincide with both the limit along $G_n$ and the limit along $G'_n$.

The measure $\mu^F$ is called the (weighted) free spanning forest measure on $G$; we will denote it $\mathsf{FSF}$. We will write $\mathcal{F}$ for the set of edges present in an element of $\Omega$. Then $\mathsf{FSF}[\mathcal{F} \text{ contains a cycle}] = 0$, since any specific cycle being present is a cylinder event that has probability 0 under any $\mu^F_n$, and hence also under $\mathsf{FSF}$. It is obvious that $\mathsf{FSF}[\mathcal{F} \text{ is spanning}] = 1$. Hence $\mathsf{FSF}[\mathcal{F} \text{ is a spanning forest}] = 1$.
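On $\mathbb{Z}^2$ with unit conductances, the single-edge probabilities $\mu^F_n[e \in T]$ can be computed exactly for small boxes: by (11), $\mu^F_n[e \in T] = I^e(e)$, the effective resistance across $e$, which is a difference of entries of the Laplacian pseudo-inverse. The sketch below (box sizes are illustrative) exhibits the monotone decrease guaranteed by Proposition 4.3(a); the limit is the known value $1/2$ for an edge of $\mathbb{Z}^2$, which we only observe empirically here:

```python
import numpy as np

def edge_prob_free(n):
    """mu^F_n[e in T] for the edge e = {(0,0),(1,0)} of the box [-n,n]^2:
    equals the effective resistance across e (unit conductances)."""
    side = 2 * n + 1
    idx = {(x, y): (x + n) * side + (y + n)
           for x in range(-n, n + 1) for y in range(-n, n + 1)}
    L = np.zeros((len(idx), len(idx)))
    for (x, y), i in idx.items():
        for nb in ((x + 1, y), (x, y + 1)):   # each edge of the box once
            if nb in idx:
                j = idx[nb]
                L[i, i] += 1; L[j, j] += 1
                L[i, j] -= 1; L[j, i] -= 1
    Lp = np.linalg.pinv(L)
    a, b = idx[(0, 0)], idx[(1, 0)]
    return Lp[a, a] - 2 * Lp[a, b] + Lp[b, b]

probs = [edge_prob_free(n) for n in (1, 2, 3)]
# monotone decrease (Proposition 4.3(a)), staying above the limit 1/2:
assert probs[0] >= probs[1] >= probs[2] > 0.5
```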

    5.4 Wired spanning forest

The word "free" refers to the fact that in defining $\mu^F_n$, we disconnected $G_n$ from the rest of the graph $G$. There is another natural way of taking the limit, which is to force all connections outside of $G_n$ to occur.

Define the graph $G^W_n$ as the result of identifying all vertices in $V \setminus V_n$ to a single vertex $z_n$. This will result in infinitely many loop edges at $z_n$, which we omit, as they cannot be part of a spanning tree. Hence another description of the graph $G^W_n$ is that we add to the graph $G_n$ a new vertex $z_n$, and for each edge of $G$ connecting a vertex $v \in V_n$ with a vertex in $V \setminus V_n$, we place an edge between $v$ and $z_n$.


Let $\mu^W_n$ be the random spanning tree measure on $G^W_n$ (also realized on $\Omega$). Now $G^W_n$ is obtained by contracting some edges in $G^W_{n+1}$. Therefore, by Proposition 4.3, we have
$$\mu^W_n[B \subseteq T] \le \mu^W_{n+1}[B \subseteq T],$$
where this makes sense as soon as $B \subseteq E_n$. We again conclude that
$$\mu^W[B \subseteq T] := \lim_{n \to \infty} \mu^W_n[B \subseteq T]$$
exists, and similarly for $\mu^W[T \cap K = B]$. The limit again does not depend on the exhaustion. The measure defined this way is called the (weighted) wired spanning forest measure, and we call it $\mathsf{WSF}$. It is again concentrated on spanning forests of $G$.

The name "wired" refers to how the complement of $G_n$ has been short-circuited in the graph $G^W_n$.

    5.5 Examples

Let us pause a little to consider some simple examples of the constructions above.

Consider the $d$-dimensional integer lattice, that is, the graph with vertex set $\mathbb{Z}^d$, where $\{x, y\}$ is an edge if and only if $|x - y| = 1$. We set $C(e) = 1$ for all edges. Let $V_n = [-n, n]^d \cap \mathbb{Z}^d$, and let $G_n = (V_n, E_n)$ be the subgraph induced by $V_n$. We considered the random spanning tree measure (in this case this is the uniform spanning tree measure) $\mu^F_n$, and showed that it converges weakly to a measure $\mathsf{FSF}$. This was first proved by Pemantle [30]. The word "free" refers to the boundary condition we used by disconnecting $G_n$ from the rest of the lattice. We also considered a different boundary condition, called "wired", where we obtained the graph $G^W_n$ by adding a new vertex $z_n$ to $G_n$, and for every pair $v \in V_n$, $w \in V \setminus V_n$, $v \sim w$, we place an edge between $v$ and $z_n$. The random spanning tree measure on $G^W_n$ was $\mu^W_n$, and we showed it has a weak limit $\mathsf{WSF}$. We will see later that on $\mathbb{Z}^d$, and on many other graphs, we have $\mathsf{FSF} = \mathsf{WSF}$. That is, in the case of $\mathbb{Z}^d$, the difference in boundary conditions washes away as $n \to \infty$.

When are the two measures different? Here is a simple example where they are. Let $G = (V, E)$ be a 3-regular tree, $o$ a fixed vertex in $V$, and $V_n = \{ x \in V : \mathrm{dist}(o, x) \le n \}$. Since $G_n$ is a tree, its only spanning tree is itself, and hence $\mu^F_n$ concentrates on the single point $\{G_n\}$. It follows that $\mathsf{FSF}$ concentrates on the single point $\{G\}$. The limit is more interesting for the wired boundary condition. Let $a$ be a neighbour of $o$, and use Wilson's method to generate the random spanning tree on $G^W_n$, starting with the


vertices $o$ and $a$. Since a random walk started at $o$ has probability 2/3 to step further from $o$ and probability 1/3 to step closer to $o$, whenever not at $o$, we have
$$\mathbf{P}_o[\langle X_k \rangle \text{ hits } z_n \text{ before } a] \ge \delta > 0$$
uniformly in $n$, and similarly,
$$\mathbf{P}_a[\langle X_k \rangle \text{ hits } z_n \text{ before } o] \ge \delta > 0.$$
It follows that
$$\mathsf{WSF}[oa \notin \mathcal{F}] = \lim_{n \to \infty} \mu^W_n[oa \notin \mathcal{F}] \ge \delta^2 > 0.$$

Therefore, $\mathsf{WSF} \ne \mathsf{FSF}$. Later we will see that under $\mathsf{WSF}$, there are infinitely many trees a.s.

We will also see later that independent random walks on $\mathbb{Z}^d$ with $d \ge 5$ have a positive probability of never intersecting, and we will be able to conclude that in this case there are also infinitely many trees a.s.
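The constant $\delta$ in the tree example can be made explicit for the walk started at $a$: its distance from $o$ is a birth–death chain stepping away from $o$ with probability 2/3, so hitting probabilities follow from the gambler's-ruin formula with ratio $q/p = 1/2$. A small sketch in exact rational arithmetic (the walk started at $o$ and required to avoid $a$ needs a separate, similar computation that we omit):

```python
from fractions import Fraction

def hit_far_before_o(k, n, p=Fraction(2, 3)):
    """Birth-death chain on {0,...,n} with up-probability p away from 0.
    Returns P[hit n before 0 | start at k] (gambler's ruin formula)."""
    r = (1 - p) / p                      # r = q/p = 1/2 on the 3-regular tree
    return (1 - r ** k) / (1 - r ** n)

# Walk started at a (distance 1 from o): P[reach level n before o]
assert hit_far_before_o(1, 2) == Fraction(2, 3)
for n in (2, 5, 10, 20):
    assert hit_far_before_o(1, n) >= Fraction(1, 2)   # a uniform-in-n bound
# The escape probability tends to 1/2 as n grows:
assert hit_far_before_o(1, 50) - Fraction(1, 2) < Fraction(1, 10 ** 10)
```

So for the walk from $a$, $\delta = 1/2$ works uniformly in $n$, consistent with the display above.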

5.6 Wilson's method on transient networks

Assume that $(G, C)$ is an infinite network on which the network random walk is transient. The following variant of Wilson's method was introduced in [5].

We construct a random spanning forest of $G$. Let $F_0 := \emptyset$, and let $v_1, v_2, \dots$ be an enumeration of $V$. We inductively define $F_n$ as follows. Start a network random walk at $v_n$, and stop it if it hits $F_{n-1}$; otherwise run it indefinitely. Let $\mathcal{P}_n$ be the path of this walk. Due to transience, $\mathcal{P}_n$ visits any vertex finitely often. Hence $\mathrm{LE}(\mathcal{P}_n)$ is well-defined. Set $F_n := F_{n-1} \cup \mathrm{LE}(\mathcal{P}_n)$. Finally, let $\mathcal{F} := \bigcup_n F_n$. It is clear that $\mathcal{F}$ contains no cycles, and it spans $G$, so $\mathcal{F}$ is a random spanning forest of $G$. We call this process Wilson's method rooted at infinity.
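The loop-erasure and the inductive construction translate directly into code. Below is a sketch (our own function names; unit conductances, so the walk picks a uniform neighbour) of chronological loop-erasure and of Wilson's method rooted at a fixed vertex of a finite graph. The rooted-at-infinity variant differs only in that a walk that never hits the current forest is run forever, which transience makes well-defined:

```python
import random

def loop_erase(path):
    """Chronological loop-erasure LE(path): erase each loop when it closes."""
    le = []
    pos = {}                     # vertex -> its index in le
    for v in path:
        if v in pos:             # loop closed: cut back to the previous visit of v
            keep = pos[v] + 1
            for w in le[keep:]:
                del pos[w]
            le = le[:keep]
        else:
            pos[v] = len(le)
            le.append(v)
    return le

def wilson(nbrs, root, rng=random):
    """Wilson's method rooted at `root` on a finite connected graph.

    nbrs: dict vertex -> list of neighbours (unit conductances).
    Returns the edge set of a uniformly random spanning tree."""
    in_tree = {root}
    tree = set()
    for v in nbrs:
        if v in in_tree:
            continue
        path = [v]
        while path[-1] not in in_tree:        # walk until the current tree is hit
            path.append(rng.choice(nbrs[path[-1]]))
        branch = loop_erase(path)
        for a, b in zip(branch, branch[1:]):
            tree.add(frozenset((a, b)))
        in_tree.update(branch)
    return tree

assert loop_erase([0, 1, 2, 1, 3, 0, 4]) == [0, 4]
```

Running `wilson` on any finite connected graph returns a spanning tree; iterating over vertices in any order gives the same distribution, as in Theorem 3.1.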

Lemma 5.1 ([5]). The distribution of $\mathcal{F}$ does not depend on the chosen enumeration of $V$.

This lemma can be proved similarly to the finite case. Its statement also follows from Theorem 5.1 below; however, the proof below gives more: it shows that one can realize the method with any choice of enumeration on the same probability space in such a way that the same random forest is generated.

Proof. Consider stacks and cycle popping as in the finite case. Fix all the stacks. Let us call a sequence $D_1, D_2, \dots$ of coloured cycles legal, if (i) the cycles can be popped in this sequence; (ii) if at any stage an (uncoloured)


cycle is present, then it is popped at some later stage. We say that a legal sequence terminates, if any finite $F \subseteq E$ becomes void of cycles eventually. For a legal, terminating sequence, the end result of popping all the cycles $D_1, D_2, \dots$ is well-defined, and is a spanning forest.

A fixed choice of $v_1, v_2, \dots$ yields, almost surely, a legal, terminating sequence $C_1, C_2, \dots$ of coloured cycles. Fix the stacks so that this holds. Then we show that any other legal sequence $D_1, D_2, \dots$ is necessarily also terminating, and results in the same spanning forest. Let $D_{i_1}$ be the first cycle in the $D$-sequence that is not disjoint from $C_1$. As in the finite case, we see that $C_1 = D_{i_1}$ as coloured cycles. Popping $D_{i_1}$ commutes with popping $D_1, \dots, D_{i_1 - 1}$, hence $D' = D_{i_1}, D_1, D_2, \dots, D_{i_1 - 1}, D_{i_1 + 1}, \dots$ is also legal. Pop now $C_1 = D_{i_1}$ from the stacks. Repeating the argument, we see that there exists $i_2 \in \{1, 2, \dots\} \setminus \{i_1\}$, such that $C_2 = D_{i_2}$ as coloured cycles, and that moving $D_{i_2}$ to the beginning, we get a legal sequence. Inductively, we find $i_k \in \{1, 2, \dots\} \setminus \{i_1, \dots, i_{k-1}\}$, such that $C_k = D_{i_k}$ as coloured cycles. We show that $\{i_1, i_2, \dots\} = \{1, 2, \dots\}$. Consider $D_k$. The $C$-sequence guarantees that the uncoloured cycle $D_k$ is not present after popping $C_1, \dots, C_N$ for some $N$. Hence $k \in \{i_1, \dots, i_N\}$. We have thus proved that the $C$- and $D$-sequences consist of the same coloured cycles. It follows that $D_1, D_2, \dots$ is terminating, and results in the same spanning forest. The result follows. $\square$

    We now identify the distribution of the resulting spanning forest.

Theorem 5.1 ([5]). On any transient network, Wilson's method rooted at infinity yields a spanning forest with distribution $\mathsf{WSF}$.

Proof. If $\mathcal{P} = \langle x_k : k \ge 0 \rangle$ is a deterministic path that visits every vertex finitely often, then
$$\mathrm{LE}(\langle x_k : k \le K \rangle) \xrightarrow{K \to \infty} \mathrm{LE}(\mathcal{P}).$$
The meaning of convergence here is that for any $i$, the $i$-th step of the path on the left hand side is the same for all large enough $K$, and coincides with the $i$-th step on the right hand side. If $\mathcal{P}$ is a random walk path, then almost surely
$$\mathrm{LE}(\langle X_k : k \le K \rangle) \xrightarrow{K \to \infty} \mathrm{LE}(\mathcal{P}),$$
by transience.

Recall that $G^W_n$ is obtained from $G$ by identifying $V \setminus V_n$ to a single vertex $z_n$. Consider the network random walk on $G$ up to the hitting time of $V \setminus V_n$. This is identical in law to the network random walk on $G^W_n$ up to the hitting time of $z_n$, hence we can use the network random walk on $G$ for Wilson's algorithm on $G^W_n$ with root $z_n$.


We write $T(n)$ for the random spanning tree on $G^W_n$, and we write $\mathcal{F}$ for its limit, the wired spanning forest on $G$. Fix $e_1, \dots, e_M \in E$. Let $u_1, \dots, u_L$ contain all vertices that are endpoints of some $e_j$. Let $X_k(u_j)$ be the random walk started at $u_j$. Wilson's method on $G^W_n$, rooted at $z_n$, uses the stopping times $\tau^n_j$ that are the first time when the earlier part of the spanning tree is hit. Wilson's method on $G$, rooted at infinity, uses the stopping times $\tau_j$. Let
$$\mathcal{P}^n_j := \langle X_k(u_j) : k \le \tau^n_j \rangle, \qquad \mathcal{P}_j := \langle X_k(u_j) : k \le \tau_j \rangle,$$
$$\gamma^n_j := \mathrm{LE}(\mathcal{P}^n_j), \qquad \gamma_j := \mathrm{LE}(\mathcal{P}_j), \qquad F^n_j := F^n_{j-1} \cup \gamma^n_j.$$
We have
$$\mathbf{P}[e_1, \dots, e_M \in T(n)] = \mathbf{P}\Big[ e_1, \dots, e_M \in \bigcup_{j=1}^L \gamma^n_j \Big]. \qquad (19)$$
We claim that almost surely, $\tau^n_j \to \tau_j$ and $\gamma^n_j \to \gamma_j$, $j = 1, \dots, L$. This can be proved by induction on $j$.

When $j = 1$, $\tau_1 = \infty$. Here $\tau^n_1$ is the hitting time of $V \setminus V_n$, which goes to infinity as $n \to \infty$.

Assume now $j \ge 2$. Assume first the event $\{\tau_j = \infty\}$. Fix a large integer $N$. By the induction hypothesis, we have $\gamma^n_i \cap G_N = \gamma_i \cap G_N$ for all large enough $n$, $i < j$. (The statement is clear if $\gamma_i$ is a finite path. When $\gamma_i$ is infinite, the statement holds if $\mathcal{P}_i$ does not return to $G_N$ after time $\tau^n_i$.) Therefore, we have
$$F^n_{j-1} \cap G_N = \bigcup_{i < j} \gamma_i \cap G_N$$


    5.7 Automorphisms

By a graph automorphism of $G$ we mean a pair of bijections $\phi : V \to V$, $\phi : E \to E$ (for convenience, we denote them by the same letter), such that $e$ is an edge between $v_1$ and $v_2$ if and only if $\phi(e)$ is an edge between $\phi(v_1)$ and $\phi(v_2)$. By a network automorphism of $(G, C)$, we mean a graph automorphism of $G$ that in addition satisfies $C(\phi(e)) = C(e)$ for all $e \in E$.

A graph automorphism induces a map $\Phi : \Omega \to \Omega$ by letting $e \in \Phi(F)$ if and only if $\phi^{-1}(e) \in F$.

Proposition 5.1. $\mathsf{FSF}$ and $\mathsf{WSF}$ are invariant under any network automorphism.

Proof. If $G_n$ is any exhaustion, then $\phi^{-1}(G_n)$ is also an exhaustion. For any finite $B \subseteq E$, we have
$$\mathsf{WSF}[B \subseteq \Phi(\mathcal{F})] = \mathsf{WSF}[\phi^{-1}(B) \subseteq \mathcal{F}] = \lim_{n \to \infty} \mu^W_{\phi^{-1}(G_n)}[\phi^{-1}(B) \subseteq T] = \lim_{n \to \infty} \mu^W_{G_n}[B \subseteq T] = \mathsf{WSF}[B \subseteq \mathcal{F}]. \qquad \square$$

    5.8 Trees are infinite

Proposition 5.2. On any infinite network $(G, C)$, all components are infinite $\mathsf{WSF}$-a.s. and $\mathsf{FSF}$-a.s.

Proof. For any finite tree $t \subseteq G$, the event $\{ t \text{ is a component} \}$ is a cylinder event that has probability 0 with respect to $\mu^F_n$ for all large $n$ (since under $\mu^F_n$, $T$ is connected). Hence this event has probability 0 in the limit, and there are countably many such events. The proof for $\mathsf{WSF}$ is the same. $\square$

5.9 Wilson's method on recurrent networks / equality

Let $(G, C)$ be an infinite recurrent network. We can make sense of Wilson's method on $G$ if we place the root at a fixed vertex $r \in V$. Indeed, if $v_1, v_2, \dots$ is any enumeration of $V \setminus \{r\}$, then let $F_0 := \{r\}$, and let $\mathcal{P}_n$ be the path of the network random walk started at $v_n$, stopped when it hits $F_{n-1}$; this path is finite almost surely, by recurrence. Then let $F_n := F_{n-1} \cup \mathrm{LE}(\mathcal{P}_n)$, and $\mathcal{F} := \bigcup_n F_n$. It is clear that $\mathcal{F}$ is a spanning tree of $G$.

Proposition 5.3 ([5]). On any recurrent network, and for any enumeration, Wilson's method rooted at $r$ yields a tree with distribution $\mathsf{FSF} = \mathsf{WSF}$.


Proof. Let $B$ be a cylinder event, $G_n$ an exhaustion. Let $K_0$ be the set of endpoints of the edges on which $B$ depends, and let $K := \{ v_j : v_i \in K_0 \text{ for some } i \ge j \}$ be the initial segment of the enumeration up to the last vertex of $K_0$. Write
$$\partial_{\mathrm{int}} G_n = \{ x \in V_n : \exists y \in V \setminus V_n,\ x \sim y \}.$$
Run Wilson's method on $G^W_n$, rooted at $r$, to generate $T_{G^W_n}$ (note here that we use $r$, rather than $z_n$, as the root, which is the same, due to Theorem 3.1). Run Wilson's method on $G$, rooted at $r$, to generate $\mathcal{F}$. The random walks in the two constructions are indistinguishable until one of them hits $\partial_{\mathrm{int}} G_n$. Hence we can put the two constructions on the same probability space in such a way that they use the same random walks until $\partial_{\mathrm{int}} G_n$ is first hit. Let $C_1 := \{ \mathcal{F} \in B \}$, $C_2 := \{ T_{G^W_n} \in B \}$. Then
$$\big| \mathbf{P}[\mathcal{F} \in B] - \mu^W_n(B) \big| = \big| \mathbf{P}[C_1] - \mathbf{P}[C_2] \big| \le \mathbf{P}[C_1 \triangle C_2] \le \mathbf{P}[\text{some random walk started in } K \text{ hits } \partial_{\mathrm{int}} G_n] \le \sum_{v \in K} \mathbf{P}_v[\tau_{\partial_{\mathrm{int}} G_n} < \tau_r] \xrightarrow{n \to \infty} 0,$$
by recurrence. This shows that $\mathcal{F}$ has the distribution of $\mathsf{WSF}$. In exactly the same way, we obtain that $\mathcal{F}$ has the distribution of $\mathsf{FSF}$. $\square$

It follows from the above proposition that on $\mathbb{Z}^2$, $\mathsf{FSF} = \mathsf{WSF}$.

    5.10 Stochastic domination

In this section, we prove that $\mathsf{WSF}$ is always stochastically smaller than $\mathsf{FSF}$; see (20) below for what this means precisely. This is a very powerful comparison of the two measures, which sometimes allows us to conclude that the two measures are in fact the same.

Let $G_n$ be an exhaustion of $G$ by finite subnetworks. Since $G_n$ is a subgraph of $G^W_n$, we have, by Proposition 4.3(a),
$$\mu^F_n[e \in T] \ge \mu^W_n[e \in T]$$
for any edge $e \in E_n$. Letting $n \to \infty$, this implies that
$$\mathsf{FSF}[e \in \mathcal{F}] \ge \mathsf{WSF}[e \in \mathcal{F}], \qquad e \in E.$$
We now state a more general inequality.

Definition 5.1. An event $A \subseteq \Omega$ is called increasing (or upwardly closed) if $F_1 \in A$ and $F_2 \supseteq F_1$ imply that $F_2 \in A$.


Proof. We have $\{ e \in \mathcal{F}_1 \} \subseteq \{ e \in \mathcal{F}_2 \}$ $\nu$-a.s., where $\nu$ denotes the monotone coupling of $\mathsf{WSF}$ and $\mathsf{FSF}$. By assumption, we also have
$$\nu[e \in \mathcal{F}_1] = \mathsf{WSF}[e \in \mathcal{F}] = \mathsf{FSF}[e \in \mathcal{F}] = \nu[e \in \mathcal{F}_2].$$
Hence $\{ e \in \mathcal{F}_1 \} = \{ e \in \mathcal{F}_2 \}$ $\nu$-a.s., and $\mathcal{F}_1 = \mathcal{F}_2$ $\nu$-a.s. Note that without the monotone coupling, we could gain little from the equality of the one-dimensional marginals. $\square$

The following proposition is based on very much the same idea. It says that if the expected degree of vertex $v$ is the same in the free spanning forest as in the wired spanning forest, for each $v \in V$, then the free and wired spanning forests coincide.

Proposition 5.6 ([5]). If $\mathbf{E}[\deg_{\mathcal{F}}(v)]$ is the same under the measures $\mathsf{FSF}$ and $\mathsf{WSF}$ for all $v \in V$, then $\mathsf{FSF} = \mathsf{WSF}$.

Proof. Under the monotone coupling $\nu$, the set of edges incident on $v$ in $\mathcal{F}_1$ is a subset of the set of edges incident on $v$ in $\mathcal{F}_2$. Due to the assumption, the two sets of edges coincide $\nu$-a.s., and hence $\mathcal{F}_1 = \mathcal{F}_2$ $\nu$-a.s. $\square$


6 The number of trees on $\mathbb{Z}^d$

In this section we look at the question: is the random spanning forest on $\mathbb{Z}^d$ connected (that is, a single tree) or not? We first state a theorem that we will prove in greater generality in the next section. The theorem is implicit in [30], and is proved explicitly in [15].

Theorem 6.1. On $\mathbb{Z}^d$, we have $\mathsf{FSF} = \mathsf{WSF}$ for all $d \ge 1$.

In view of this theorem, we can speak simply of the Uniform Spanning Forest on $\mathbb{Z}^d$: there is no need to specify the boundary condition, and we call it uniform, since we use $C(e) \equiv 1$.

We will frequently use the notation $f \asymp g$ to denote that the positive quantities $f$ and $g$ (depending on some argument) are of the same order, that is, there exist constants $c_1, c_2 > 0$ such that $c_1 g \le f \le c_2 g$.

The main result of this section is the following theorem of Pemantle [30].

Theorem 6.2. The uniform spanning forest on $\mathbb{Z}^d$ is a.s. a single tree if $d \le 4$, and it has infinitely many components a.s. if $d \ge 5$. When $d \ge 5$ and $u \ne v \in \mathbb{Z}^d$, then
$$\mathbf{P}[u, v \text{ in the same component of } \mathcal{F}] \asymp |u - v|^{4-d}. \qquad (22)$$

In what follows, we will also use the notation $\langle u \rangle := 1 + |u|$, so that we do not have to make explicit exceptions for a negative power of 0 when $u = 0$. Note that the right hand side of (22) is comparable to $\langle u - v \rangle^{4-d}$.

Proof for $d = 1, 2$. The case $d = 1$ is easy to see. Since $G_n = [-n, n] \cap \mathbb{Z}$ is a tree, the $\mathsf{FSF}$ concentrates on the single point $\{\mathbb{Z}\}$. For $d = 2$, we saw in Proposition 5.3 that the forest is connected. $\square$

Henceforth we assume $d \ge 3$, so the random walk is transient, and due to Theorem 5.1 we can use Wilson's method rooted at infinity to generate the spanning forest. The proof we present is adapted from [27], [4] and [26], where substantially more general theorems are proved.

Fix $u \in \mathbb{Z}^d$, and let $\mathcal{P}^0 := \langle X_k : k \ge 0 \rangle$ and $\mathcal{P}^u := \langle Y_k : k \ge 0 \rangle$ be the paths of independent simple random walks started at $X_0 = 0$ and $Y_0 = u$, respectively. Using Wilson's method with an enumeration starting with $0, u$, we immediately obtain the following proposition.

Proposition 6.1. Vertices 0 and $u$ belong to the same component a.s. if and only if $\mathbf{P}[\mathrm{LE}(\mathcal{P}^0) \cap \mathcal{P}^u \ne \emptyset] = 1$.


We will first focus on the case $d \ge 5$.

Preliminaries. The Green function of the walk is defined by
$$G(x, y) := \sum_{n=0}^{\infty} p_n(x, y) = \mathbf{E}\Big[ \sum_{n=0}^{\infty} I[X_n = y] \,\Big|\, X_0 = x \Big] = \mathbf{E}[\#\text{ visits of } \langle X_n \rangle \text{ to } y \mid X_0 = x] = G(y, x).$$
Here $p_n(x, y)$ is the probability for simple random walk started at $x$ to be at $y$ after $n$ steps. The last equality follows from the symmetry of $p_n(x, y)$ (reversibility). Good estimates on $p_n(x, y)$ yield important information on $G(x, y)$. Such estimates are provided by the Local Central Limit Theorem. We state this below (although we will not need such a detailed estimate in what follows). We say that $x \in \mathbb{Z}^d$ is even if the sum of its coordinates is even.

Theorem (Local CLT; [22, Section 1.2]). If $x$ has the same parity as $n \ge 1$, then
$$p_n(0, x) = 2 \Big( \frac{d}{2 \pi n} \Big)^{d/2} e^{-\frac{d |x|^2}{2n}} + E(n, x),$$
where
$$E(n, x) = O\big( n^{-\frac{d}{2} - 1} \big) \wedge O\Big( \frac{n^{-d/2}}{|x|^2} \Big).$$

Summing over $n$, one can deduce (see [22, Theorem 1.5.4]) that there exists a constant $a_d > 0$, such that
$$G(x, y) \sim \frac{a_d}{|x - y|^{d-2}}, \qquad \text{as } |x - y| \to \infty.$$
In particular,
$$G(x, y) \asymp \langle x - y \rangle^{2-d}. \qquad (23)$$

We first consider how likely it is that $\mathcal{P}^0$ and $\mathcal{P}^u$ intersect. Consider
$$V := \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} I[X_n = Y_m] = \#\text{ intersections of } \mathcal{P}^0 \text{ and } \mathcal{P}^u.$$
We find
$$\mathbf{E}V = \sum_z \sum_n \sum_m \mathbf{P}[X_n = z = Y_m] = \sum_z \sum_n p_n(0, z) \sum_m p_m(u, z) = \sum_z G(0, z) G(u, z) \asymp \sum_z \langle z \rangle^{2-d} \langle u - z \rangle^{2-d}.$$


Lemma 6.1. If $d \ge 5$, we have
$$\sum_z \langle z \rangle^{2-d} \langle u - z \rangle^{2-d} \asymp \langle u \rangle^{4-d}. \qquad (24)$$

Proof. Suppose that $2^N \le \langle u \rangle < 2^{N+1}$. By symmetry, it is enough to consider the contribution from $|z| \le |u - z|$. Then, separating the terms with $n \le N + 1$ and $n > N + 1$, and using that there are order $(2^n)^d$ vertices with $2^n \le \langle z \rangle < 2^{n+1}$,


By separating the cases $n \le i$ and $i < n$, we can write
$$\sum_{n,i} \mathbf{P}[X_n = z, X_i = w] = \sum_{n=0}^{\infty} \sum_{i=n}^{\infty} \mathbf{P}[X_n = z] \mathbf{P}[X_i = w \mid X_n = z] + \sum_{i=0}^{\infty} \sum_{n=i+1}^{\infty} \mathbf{P}[X_i = w] \mathbf{P}[X_n = z \mid X_i = w] \le G(0, z) G(z, w) + G(0, w) G(w, z).$$
We similarly get
$$\sum_{m,j} \mathbf{P}[Y_m = z, Y_j = w] \le G(u, z) G(z, w) + G(u, w) G(w, z).$$
The product of the two sums gives four terms. Two and two of these are identical when summed over $z, w$, by symmetry, so we get
$$\mathbf{E}[V^2] \le 2 \sum_{z,w} G(0, z) G(u, z) G(z, w)^2 + 2 \sum_{z,w} G(0, z) G(z, w)^2 G(w, u) \asymp \sum_z \langle z \rangle^{2-d} \langle u - z \rangle^{2-d} \sum_w \langle z - w \rangle^{4-2d} + \sum_z \langle z \rangle^{2-d} \sum_w \langle z - w \rangle^{4-2d} \langle w - u \rangle^{2-d}. \qquad (26)$$
Since $4 - 2d = -d + (4 - d) < -d$, the sum over $w$ in the first term is bounded by a finite constant. The remaining sum over $z$, by Lemma 6.1, is bounded by $\langle u \rangle^{4-d}$. In the second term, the sum over $w$ gives $\langle z - u \rangle^{2-d}$; this can be seen by a decomposition similar to that used in the proof of Lemma 6.1, and we do not give the details. The remaining sum over $z$ can then be estimated using Lemma 6.1. Putting things together, we get $\mathbf{E}[V^2] \le C \langle u \rangle^{4-d}$.

By the Cauchy–Schwarz inequality,
$$\mathbf{E}[V^2] \, \mathbf{P}[V \ge 1] \ge \big( \mathbf{E}[V \, I[V \ge 1]] \big)^2 = (\mathbf{E}V)^2.$$
Hence
$$\mathbf{P}[\mathcal{P}^0 \cap \mathcal{P}^u \ne \emptyset] = \mathbf{P}[V \ge 1] \ge c \langle u \rangle^{4-d}.$$
We need more, to estimate $\mathbf{P}[\mathrm{LE}(\mathcal{P}^0) \cap \mathcal{P}^u \ne \emptyset]$. Here is the basic idea of how we handle the loop-erasure. Suppose that the paths $\mathcal{P}^0$ and $\mathcal{P}^u$ intersect at $X_n = z = Y_m$. Loop-erase $\mathcal{P}^0$ up to the intersection at time $n$, and call the loop-erasure $\gamma$. If the earliest intersection of $\gamma$ (as measured along $\gamma$) with $\langle Y_{m+k} : k \ge 0 \rangle$ comes no later than the earliest intersection with $\langle X_{n+k} : k \ge 0 \rangle$, then the intersection of $\gamma$ with $\langle Y_{m+k} : k \ge 0 \rangle$ stays after


the rest of $X$ is loop-erased. Given the intersection $X_n = z = Y_m$, this event will happen with conditional probability at least 1/2, which will yield the statement.
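The second-moment step $\mathbf{P}[V \ge 1] \ge (\mathbf{E}V)^2 / \mathbf{E}[V^2]$ used above is easy to probe by simulation. A sketch for $d = 5$ (walk length, trial count and the point $u$ are arbitrary; the walks are truncated, so the simulated $V$ undercounts the true intersection number, and the asserted inequality holds for the empirical distribution by the same Cauchy–Schwarz argument):

```python
import random

def walk(start, steps, rng, d=5):
    """Simple random walk path in Z^d as a list of tuples."""
    pos = list(start)
    path = [tuple(pos)]
    for _ in range(steps):
        ax = rng.randrange(d)
        pos[ax] += rng.choice((-1, 1))
        path.append(tuple(pos))
    return path

rng = random.Random(1)
u = (2, 0, 0, 0, 0)
samples = []
for _ in range(400):
    X = walk((0,) * 5, 400, rng)
    Y = walk(u, 400, rng)
    visits = {}
    for p in X:
        visits[p] = visits.get(p, 0) + 1
    V = sum(visits.get(q, 0) for q in Y)   # pairs (n, m) with X_n == Y_m
    samples.append(V)

EV = sum(samples) / len(samples)
EV2 = sum(s * s for s in samples) / len(samples)
p_hit = sum(1 for s in samples if s >= 1) / len(samples)
# Cauchy-Schwarz: P[V >= 1] >= (EV)^2 / E[V^2], also for the empirical law
assert EV2 == 0 or p_hit >= EV ** 2 / EV2 - 1e-9
```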

To write the argument precisely, we denote
$$\mathrm{LE}(\langle X_k : k \le n \rangle) =: \langle \gamma^n(i) : i \le K(n) \rangle,$$
$$\sigma(n) := \min\{ 0 \le i \le K(n) : \gamma^n(i) = X_k \text{ for some } k \ge n \},$$
$$\sigma'(m, n) := \min\{ 0 \le i \le K(n) : \gamma^n(i) = Y_k \text{ for some } k \ge m \}.$$
Let $I_{m,n} := I[X_n = Y_m] \, I[\sigma'(m, n) \le \sigma(n)]$. Given the event $\{ X_n = z = Y_m \}$, the paths $\langle X_{n+k} : k \ge 0 \rangle$ and $\langle Y_{m+k} : k \ge 0 \rangle$ are exchangeable. Therefore,
$$\mathbf{P}[\sigma'(m, n) \le \sigma(n) \mid X_n = z = Y_m] \ge 1/2. \qquad (27)$$

Put
$$W := \sum_n \sum_m I_{m,n},$$
and note that $W \ge 1$ implies that $\mathrm{LE}(\mathcal{P}^0)$ and $\mathcal{P}^u$ intersect. Then
$$\mathbf{E}W = \sum_n \sum_m \mathbf{E} I_{m,n} = \sum_z \sum_n \sum_m \mathbf{E}[I_{m,n} \mid X_n = z = Y_m] \, \mathbf{P}[X_n = z = Y_m] \ge \frac{1}{2} \mathbf{E}V.$$
Since $W \le V$, we also have $\mathbf{E}W^2 \le \mathbf{E}V^2$. Hence
$$\mathbf{P}[\mathrm{LE}(\mathcal{P}^0) \cap \mathcal{P}^u \ne \emptyset] \ge \mathbf{P}[W \ge 1] \ge \frac{(\mathbf{E}W)^2}{\mathbf{E}[W^2]} \ge \frac{(\mathbf{E}V)^2}{4 \, \mathbf{E}[V^2]} \ge c \langle u \rangle^{4-d}.$$

This concludes the proof of Theorem 6.2 in the case $d \ge 5$.

The quantitative estimate shows that the trees in the uniform spanning forest are 4-dimensional when $d \ge 5$: consider
$$V_n := |B(n) \cap \text{component of } 0| = \sum_{z : |z| \le n} I[z \text{ and } 0 \text{ are in the same component}].$$
Then $\mathbf{E}[V_n] \asymp \sum_{z : |z| \le n} \langle z \rangle^{4-d} \asymp n^4$. It is remarkable that the dimension of the trees is stable, and does not depend on $d$ for large $d$. See [4] for further interesting geometric properties of the uniform spanning forest.

We now deal with the case $d = 3, 4$. It is easy to see, using (23), that $\mathbf{E}V = \sum_z G(0, z) G(u, z) = \infty$ when $d = 3, 4$. The difficulty is in handling


the loop-erasure, and in showing that an intersection occurs with probability 1. It turns out to be helpful to prove more, and show that $\mathrm{LE}(\mathcal{P}^0)$ and $\mathcal{P}^u$ intersect infinitely often, with probability 1. To work with something finite, we define
$$V_N := \sum_{n=0}^{N} \sum_{m=0}^{N} I[X_n = Y_m], \qquad G_N(x, y) := \sum_{n=0}^{N} p_n(x, y) = G_N(y, x).$$

Lemma 6.2. Assume that $X_0 = 0 = Y_0$. Then
$$\frac{(\mathbf{E}V_N)^2}{\mathbf{E}[V_N^2]} \ge \frac{1}{4}. \qquad (28)$$

Proof. We have
$$\mathbf{E}V_N = \sum_z \sum_{n=0}^{N} \sum_{m=0}^{N} p_n(0, z) \, p_m(0, z) = \sum_z G_N(0, z)^2 =: b_N.$$
Similarly to the computation in (26), and using the Cauchy–Schwarz inequality for the second term arising, we get $\mathbf{E}V_N^2 \le 4 b_N^2$. $\square$

We will need the fact that the lower bound (28) still holds asymptotically when the walks start at arbitrary vertices $u, v \in \mathbb{Z}^d$. Write $\mathbf{P}_{u,v}$, $\mathbf{E}_{u,v}$ to denote that $X_0 = u$, $Y_0 = v$.

Lemma 6.3. Assume now $X_0 = u$ and $Y_0 = v$. Then we have
$$\liminf_{N \to \infty} \frac{(\mathbf{E}_{u,v} V_N)^2}{\mathbf{E}_{u,v}[V_N^2]} \ge \frac{1}{4}. \qquad (29)$$

Proof. Similarly to the computations in (26), and using Cauchy–Schwarz, we get $\mathbf{E}_{u,v} V_N^2 \le 4 b_N^2$. For the first moment, we write:
$$\mathbf{E}_{u,v} V_N = \sum_z \sum_{n=0}^{N} \sum_{m=0}^{N} p_n(u, z) \, p_m(v, z) = \sum_{n,m=0}^{N} p_{n+m}(u, v).$$
Similarly,
$$\mathbf{E}_{0,0} V_N = \sum_{n,m=0}^{N} p_{n+m}(0, 0). \qquad (30)$$
By the Local CLT, as $n + m \to \infty$ we have $p_{n+m}(u, v) \sim p_{n+m}(0, 0)$. Since the sum in (30) diverges as $N \to \infty$, we conclude that $\mathbf{E}_{u,v} V_N \sim \mathbf{E}_{0,0} V_N$ as $N \to \infty$. The claim follows. $\square$


The key to the proof is the following lemma, which says that infinitely many intersections will occur with a probability that is uniform over the starting points $u$ and $v$, and an arbitrary initial segment of the $X$-path.

Lemma 6.4. Fix a path $\langle x_j \rangle_{j=-\infty}^{-1}$, and set $X_j := x_j$ for $j \le -1$. Let $X_0 = u$, $Y_0 = v$. Then
$$\mathbf{P}\big[ \, | \mathrm{LE}(\langle X_n : n < \infty \rangle) \cap \langle Y_m : m \ge 0 \rangle | = \infty \, \big] \ge \frac{1}{16}.$$

Proof. Denote
$$\langle \gamma^n(i) : 0 \le i \le K(n) \rangle := \mathrm{LE}(\langle X_k : k \le n \rangle),$$
$$\sigma(n) := \min\{ 0 \le i \le K(n) : \gamma^n(i) = X_k \text{ for some } k \ge n \},$$
$$\sigma'(m, n) := \min\{ 0 \le i \le K(n) : \gamma^n(i) = Y_k \text{ for some } k \ge m \}.$$
Let $I_{m,n} = I[X_n = Y_m] \, I[\sigma'(m, n) \le \sigma(n)]$, and
$$W_N := \sum_{n=0}^{N} \sum_{m=0}^{N} I_{m,n}.$$

As in (27), we see that $\mathbf{E}W_N \ge \frac{1}{2} \mathbf{E}_{u,v} V_N$. Also, $\mathbf{E}W_N^2 \le \mathbf{E}_{u,v} V_N^2$. For $0 < \varepsilon < 1/4$, Cauchy–Schwarz gives
$$\mathbf{E}[W_N^2] \, \mathbf{P}[W_N \ge \varepsilon \mathbf{E}W_N] \ge \big( \mathbf{E}[W_N \, I[W_N \ge \varepsilon \mathbf{E}W_N]] \big)^2 \ge (\mathbf{E}W_N - \varepsilon \mathbf{E}W_N)^2 = (1 - \varepsilon)^2 (\mathbf{E}W_N)^2.$$
Hence, for large $N$,
$$\mathbf{P}[W_N \ge \varepsilon \mathbf{E}W_N] \ge (1 - \varepsilon)^2 \, \frac{(\mathbf{E}W_N)^2}{\mathbf{E}[W_N^2]} \ge \frac{(1 - \varepsilon)^2}{4} \, \frac{(\mathbf{E}_{u,v} V_N)^2}{\mathbf{E}_{u,v}[V_N^2]} \ge \frac{(1 - \varepsilon)^2}{4} \Big( \frac{1}{4} - \varepsilon \Big).$$
Since $W_N$ is monotone and $\mathbf{E}W_N \to \infty$, this implies that $W_N \to \infty$ with probability at least $\frac{(1 - \varepsilon)^2}{4} \big( \frac{1}{4} - \varepsilon \big)$. Letting $\varepsilon \to 0$, we get that $\mathbf{P}[W_N \to \infty] \ge 1/16$.

Finally, notice that on the event $\{ W_N \to \infty \}$, the intersection $\mathrm{LE}(\langle X_n : n < \infty \rangle) \cap \langle Y_m : m \ge 0 \rangle$ is infinite. This is because every intersection counted in $W_N$ is counted at most a finite number of times, by transience. $\square$

Proof of Theorem 6.2 when $d = 3, 4$. Let $E = \{|\mathrm{LE}(P_0) \cap P_u| = \infty\}$. By Lévy's 0–1 law [12, Section 4.5], we have
$$I[E] \stackrel{\text{a.s.}}{=} \lim_{n\to\infty} P_{0,u}[E \,|\, X_1, \dots, X_n, Y_1, \dots, Y_n]. \qquad (31)$$

By the Markov property, the conditional law of the continuation $\langle Y_{n+k} : k \ge 0 \rangle$ is a random walk started at $v = Y_n$, and the law of the continuation $\langle X_{n+k} : k \ge 0 \rangle$ is a random walk started at $u = X_n$. Noting that for any $n \ge 0$, $E$ is the same event as $\{|\mathrm{LE}(P_0) \cap \langle Y_{n+k} : k \ge 0 \rangle| = \infty\}$, we are in the setting of Lemma 6.4, and can conclude that the right hand side of (31) equals
$$\lim_{n\to\infty} P_{X_n, Y_n}[E \,|\, X_1, \dots, X_n] \ge \frac{1}{16}.$$
Therefore, $I[E] \ge 1/16$ a.s., and hence $P[E] = 1$.


    7 Average degrees and amenability

We start this section by proving Theorem 6.1, following [5]. It will turn out that the proof is applicable in much greater generality. The generalization is related to the notion of amenability, and we explore this concept in the rest of this section.

Proof of Theorem 6.1. Let $V_n = [-n, n]^d \cap \mathbb{Z}^d$, and let $G_n = (V_n, E_n)$ be the induced subgraph. First consider a fixed (deterministic) spanning forest $F$ of $\mathbb{Z}^d$, such that all components of $F$ are infinite. For $K \subset V$, we define the external edge boundary of $K$ as the set of edges between $K$ and $V \setminus K$:
$$\partial_E K := \{ e \in E : \text{one endpoint of } e \text{ is in } K, \text{ the other in } V \setminus K \}.$$
Let $k_n$ be the number of trees in $F \cap E_n$. Then $k_n \le |\partial_E V_n|$, since an infinite tree intersecting $E_n$ contains a boundary edge, and different tree components of $F \cap E_n$ cannot use the same boundary edge. We claim that (uniformly in $F$)
$$\lim_{n\to\infty} \frac{1}{|V_n|} \sum_{x \in V_n} \deg_F(x) = 2. \qquad (32)$$

To see this, we note that the sum of the degrees counts each edge in $F \cap E_n$ twice, and it counts each edge in $F \cap \partial_E V_n$ once. Therefore,
$$\sum_{x \in V_n} \deg_F(x) \le 2|F \cap E_n| + |\partial_E V_n| = 2(|V_n| - k_n) + |\partial_E V_n| \le 2|V_n| + |\partial_E V_n|.$$
At the equality sign, we used that a tree component of $F \cap E_n$ with $m$ vertices has $m - 1$ edges, and that $F$ is spanning. On the other hand, we have
$$\sum_{x \in V_n} \deg_F(x) \ge 2|F \cap E_n| = 2(|V_n| - k_n) \ge 2|V_n| - 2|\partial_E V_n|.$$
Hence
$$2 - 2\frac{|\partial_E V_n|}{|V_n|} \le \frac{1}{|V_n|} \sum_{x \in V_n} \deg_F(x) \le 2 + \frac{|\partial_E V_n|}{|V_n|}.$$
Since $|\partial_E V_n| = O(n^{d-1})$, we get (32).

Now take expectation in (32) with respect to either FSF or WSF. By the dominated convergence theorem, we have
$$\lim_{n\to\infty} \frac{1}{|V_n|} \sum_{x \in V_n} E[\deg_F(x)] = 2.$$


By translation invariance, Proposition 5.1, we see that $E[\deg_F(x)]$ does not depend on $x$, and in fact $E[\deg_F(x)] = 2$ for all $x \in \mathbb{Z}^d$. This shows that the expected degree of each vertex is the same under both FSF and WSF. By Proposition 5.6, we conclude that FSF = WSF.

The above proof used properties of $\mathbb{Z}^d$ in only two places: we used translation invariance, and we used that the sets $V_n$ have small boundary relative to their size. We now generalize these properties.

Definition 7.1. A graph or network is called transitive, if given $x, y \in V$, there is an automorphism $\varphi$ such that $\varphi(x) = y$.

By Proposition 5.1, for a transitive network, $E[\deg_F(x)]$ does not depend on $x$ for either FSF or WSF.

A large class of examples of transitive graphs is provided by Cayley graphs. Let $\Gamma$ be a countable group. We say that a set $S \subset \Gamma$ generates $\Gamma$, if the smallest subgroup containing $S$ is $\Gamma$. In what follows we will assume that $\Gamma$ is finitely generated, that is, a finite generating set $S$ exists. We will also assume that $S$ is symmetric, that is, whenever $s \in S$, we also have $s^{-1} \in S$. Then any $x \in \Gamma$ can be written as $x = s_1 s_2 \cdots s_n$, $s_j \in S$.

Definition 7.2. The (right-)Cayley graph of $(\Gamma, S)$ is the graph $G = (V, E)$ with $V = \Gamma$, and $E = \{ [x, y] : x = ys \text{ for some } s \in S \}$.

Left multiplication by any $\gamma \in \Gamma$ is an automorphism: if $\varphi(x) = \gamma x$, we have $x = ys$ if and only if $\gamma x = \gamma y s$. It follows that any Cayley graph is transitive, since given $x, y \in V$, left multiplication by $yx^{-1}$ maps $x$ to $yx^{-1}x = y$.

Examples of Cayley graphs:

1. $\mathbb{Z}^d$ with $S = \{\pm e_i : i = 1, \dots, d\}$, where the $e_i$ are the unit coordinate vectors.

2. The free group on two letters, with $S = \{a, b, a^{-1}, b^{-1}\}$. This group can be described as the set of all finite words formed of elements of $S$ with no occurrence of the strings $aa^{-1}$, $a^{-1}a$, $bb^{-1}$, $b^{-1}b$. Composition is by concatenation, and removal of any forbidden strings. Some thought reveals that the Cayley graph is a 4-regular tree.

3. The free product of three copies of $\mathbb{Z}_2$. This can be described as the set of all finite words in the letters $a, b, c$ such that no $aa$, $bb$, $cc$ occur. Multiplication is again by concatenation, and removal of forbidden strings. With $S = \{a, b, c\}$, the Cayley graph is a 3-regular tree.


4. The free product of $\mathbb{Z}_2$ and $\mathbb{Z}_3$. This can be described as the set of all words in the letters $a, b$, such that no $aa$ or $bbb$ occurs. With $S = \{a, b, bb\}$, the Cayley graph consists of triangles joined together with line segments in a tree-like fashion.
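The claim in Example 2, that the Cayley graph of the free group on two letters is a 4-regular tree, can be checked mechanically on small balls. The following Python sketch is my own illustration (the word representation and function names are assumptions, not from the notes): it encodes reduced words over $a, b$ and their inverses, and verifies that the sphere of radius $n$ around the identity has size $4 \cdot 3^{n-1}$, as it must in a 4-regular tree.

```python
def neighbors(word):
    """Cayley-graph neighbours of a reduced word in the free group <a, b>;
    capital letters stand for inverses (A = a^{-1}, B = b^{-1})."""
    inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    out = []
    for s in 'aAbB':
        if word and word[-1] == inv[s]:
            out.append(word[:-1])   # right-multiplying by s cancels the last letter
        else:
            out.append(word + s)
    return out

def sphere_sizes(depth):
    """Sizes of the spheres of radius 0..depth around the identity (empty word)."""
    sizes, seen, frontier = [], {''}, ['']
    for _ in range(depth + 1):
        sizes.append(len(frontier))
        nxt = []
        for w in frontier:
            for v in neighbors(w):
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return sizes
```

Since each reduced word has exactly four distinct neighbours and no word is reachable by two different geodesics, the breadth-first levels grow as 1, 4, 12, 36, ...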

    Now we come to the generalization of the property of small boundary tovolume ratio.

Definition 7.3. The edge-isoperimetric constant of a graph $G = (V, E)$ is defined by
$$\iota_E(G) := \inf\left\{ \frac{|\partial_E K|}{|K|} : K \subset V \text{ finite} \right\}.$$
We call the graph amenable, if $\iota_E(G) = 0$. We call the graph non-amenable, if $\iota_E(G) > 0$.

Hence amenability of a graph is equivalent to the existence of finite subsets $K_n$ such that $\lim_{n\to\infty} |\partial_E K_n|/|K_n| = 0$. In a non-amenable graph, each finite subset has a large edge boundary, in the sense that the size of the boundary is at least a constant fraction of the volume. In an amenable graph, there exist sets with small boundary.
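For the boxes $K_n = [-n, n]^2$ in $\mathbb{Z}^2$ the ratio $|\partial_E K_n|/|K_n|$ can be computed directly. Here is a minimal Python sketch (my own illustration; the function name is hypothetical) confirming that the ratio tends to 0, witnessing amenability of $\mathbb{Z}^2$:

```python
def boundary_ratio(n):
    """|∂_E K| / |K| for the box K = [-n, n]^2 in Z^2."""
    K = {(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)}
    boundary_edges = 0
    for (x, y) in K:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) not in K:
                boundary_edges += 1   # an edge from K to its complement
    return boundary_edges / len(K)
```

Here $|\partial_E K_n| = 4(2n+1)$ while $|K_n| = (2n+1)^2$, so the ratio decays like $4/(2n+1)$.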

Remark. A more general notion of amenability is introduced in [26, Chapter 6].

Exercise 7.1. Show that a $d$-regular tree is non-amenable if $d \ge 3$.

With the above notions, the same proof we had for $\mathbb{Z}^d$ yields the following theorem.

    Theorem 7.1. On a transitive, amenable network, FSF = WSF.

The notion of amenability was originally introduced in the context of groups, and in the rest of this section we will illustrate it from the point of view of Cayley graphs of groups. This will also explain the origin of the name.

We define the external vertex boundary of $K$ as
$$\partial_V K := \{ x \in V \setminus K : \exists\, y \in K \text{ such that } x \sim y \}.$$

Definition 7.4. The vertex-isoperimetric constant of a graph $G = (V, E)$ is defined by
$$\iota_V(G) := \inf\left\{ \frac{|\partial_V K|}{|K|} : K \subset V \text{ finite} \right\}.$$


Proposition 7.1. For any transitive graph, $\iota_E(G) > 0$ if and only if $\iota_V(G) > 0$.

Proof. Clear from the inequalities $|\partial_V K| \le |\partial_E K| \le D\, |\partial_V K|$, where $D$ is the degree of a vertex.

In particular, for Cayley graphs, amenability is equivalent to
$$\exists\, K_n \subset V \text{ finite such that } \lim_{n\to\infty} |\partial_V K_n| / |K_n| = 0. \qquad (33)$$

The origin of the name amenable is explained by the concept of an invariant mean on a group. Let $\ell^\infty(\Gamma)$ be the space of all bounded real functions on $\Gamma$. A linear map $\mu : \ell^\infty(\Gamma) \to \mathbb{R}$ is called a mean if $\mu(1) = 1$, and $\mu(f) \ge 0$ for $f \ge 0$. For $\gamma \in \Gamma$ and $f \in \ell^\infty(\Gamma)$, we define the function $R_\gamma f$ by $R_\gamma f(x) := f(x\gamma)$. The mean $\mu$ is called invariant, if $\mu(R_\gamma f) = \mu(f)$ for all $\gamma \in \Gamma$ and $f \in \ell^\infty(\Gamma)$. Hence an invariant mean is a way of averaging functions on $\Gamma$ in such a way that the average value is invariant under the transformations $R_\gamma$. As a play on words, we call $\Gamma$ amenable, if an invariant mean exists on $\Gamma$.

Let us see the connection of invariant means to the notion introduced for graphs. We remind the reader that we restrict the discussion to finitely generated groups, although amenability of groups applies in greater generality. If the Cayley graph $G$ is amenable, as witnessed by the sequence $K_n$, then we can consider the means
$$\mu_n(f) := \frac{1}{|K_n|} \sum_{x \in K_n} f(x).$$
We now show that these means are almost invariant. For $\gamma \in S$, we have
$$|\mu_n(f) - \mu_n(R_\gamma f)| = \frac{1}{|K_n|} \left| \sum_{x \in K_n} \big( f(x) - f(x\gamma) \big) \right|.$$
If $x_1 \gamma = x_2$ with $x_1, x_2 \in K_n$, then the terms $f(x_1\gamma)$ and $f(x_2)$ cancel in the sum. The terms that do not cancel are: $x_1 \in K_n$ such that $x_2 = x_1\gamma \notin K_n$, and $x_2 \in K_n$ such that $x_1 = x_2\gamma^{-1} \notin K_n$. Hence the number of terms that do not cancel is bounded by $2|\partial_V K_n|$, and therefore,
$$|\mu_n(f) - \mu_n(R_\gamma f)| \le 2\|f\|_\infty\, \frac{|\partial_V K_n|}{|K_n|} \longrightarrow 0 \quad \text{as } n \to \infty. \qquad (34)$$
Since any $\gamma \in \Gamma$ can be written as a product of elements of $S$, we conclude that (34) holds for all $\gamma \in \Gamma$.


In order to use (34) to extract an invariant mean, we need a few facts from functional analysis. The space $X = \ell^\infty(\Gamma)$ is a Banach space with the supremum norm, and means belong to the unit ball of the dual space $X^*$. The weak* topology on $X^*$ is given by specifying the following base: with $\mu \in X^*$, $f_1, \dots, f_k \in X$, and $\varepsilon > 0$, the sets
$$U(\mu, f_1, \dots, f_k, \varepsilon) := \{ \nu \in X^* : |\nu(f_j) - \mu(f_j)| < \varepsilon,\ j = 1, \dots, k \}$$
form a base. Since in our case $X$ is not separable, care should be taken, as this topology is not metrizable. Let $B^*$ denote the unit ball in $X^*$. By a theorem of Alaoglu [25, Theorem 12.3], $B^*$ is compact in the weak* topology. The weak* closures $A_N := \overline{\{\mu_n : n \ge N\}}$ have the finite intersection property, and hence the intersection $\bigcap_N A_N$ is non-empty. Let $\mu \in \bigcap_N A_N$. Then $\mu$ is a weak* cluster point of the sequence $\{\mu_n\}$, that is, given any neighbourhood $U$ of $\mu$, and any $N$, there exists $n \ge N$ such that $\mu_n \in U$. Now apply this to the neighbourhoods $U(\mu, f, R_\gamma f, \varepsilon)$ to conclude that $\mu(f) - \mu(R_\gamma f) = 0$. We have shown that the condition (33) implies that an invariant mean exists on $\Gamma$.

A remarkable theorem of Følner [13] implies that the converse is also true:

Theorem 7.2. If an invariant mean exists on $\Gamma$, then for any Cayley graph of $\Gamma$, (33) holds.

We do not prove this here.

We conclude this section with a few results on the connection between algebraic properties of groups and their amenability. Write
$$B(n) := \{ x \in \Gamma : x = s_1 \cdots s_k,\ k \le n,\ s_j \in S \};$$
this is the ball of radius $n$ centred at the identity in the Cayley graph.
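As a concrete instance of ball growth, for $\Gamma = \mathbb{Z}^2$ with $S = \{\pm e_1, \pm e_2\}$ the ball $B(n)$ is the set of lattice points with $|x| + |y| \le n$, which has exactly $2n^2 + 2n + 1$ elements. The following one-function Python check (my own illustration, not from the notes) confirms the polynomial, hence sub-exponential, growth:

```python
def ball_size(n):
    """|B(n)| in the Cayley graph of Z^2 with S = {±e1, ±e2}:
    the number of lattice points with |x| + |y| <= n."""
    return sum(1 for x in range(-n, n + 1)
                 for y in range(-n, n + 1)
                 if abs(x) + abs(y) <= n)
```

The exact count $2n^2 + 2n + 1$ follows by summing the sphere sizes $1, 4, 8, 12, \dots$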

Proposition 7.2. If $|B(n)|$ does not grow exponentially, then $\Gamma$ is amenable.

Proof. We have
$$|B(n+1)| = |B(n)| + |\partial_V B(n)| \ge (1 + \iota_V(G))\, |B(n)|.$$
Hence, if $\Gamma$ is non-amenable, then $\iota_V(G) > 0$, and $|B(n)|$ grows exponentially.

Proposition 7.3. An Abelian group is amenable.

Proof. We have $|B(n)| \le (2n+1)^{|S|}$; hence by Proposition 7.2, the group is amenable.


Proposition 7.4. If $\Gamma$ is amenable, and $H$ is a subgroup of $\Gamma$, then $H$ is also amenable.

Proof. Let $\mu$ be an invariant mean on $\Gamma$. We construct an invariant mean on $H$. Let $f \in \ell^\infty(H)$. The idea is to lift $f$ to a function on $\Gamma$, and use the invariant mean on $\Gamma$. In each left coset of $H$, we fix an element $x_0$, so that the coset takes the form $x_0 H$. We define $\tilde f(x_0 h) = f(h)$, $h \in H$. We put $\nu(f) := \mu(\tilde f)$. The requirements $\nu(1) = 1$ and positivity are immediate. To prove invariance, let $g \in H$. Chasing the definitions, we have
$$R_g \tilde f(x_0 h) = \tilde f(x_0 h g) = f(hg) = (R_g f)(h) = \widetilde{R_g f}(x_0 h),$$
hence $R_g \tilde f = \widetilde{R_g f}$. It follows that
$$\nu(R_g f) = \mu(\widetilde{R_g f}) = \mu(R_g \tilde f) = \mu(\tilde f) = \nu(f).$$

A consequence of Proposition 7.4 is that if $\Gamma$ contains a non-amenable subgroup, then it has to be non-amenable. For example, if $\Gamma$ contains a free group, then, by Exercise 7.1 and Theorem 7.2, $\Gamma$ is non-amenable.

Proposition 7.5. If $H$ is a normal subgroup of $\Gamma$, and $H$ and $\Gamma/H$ are amenable, then $\Gamma$ is amenable.

Corollary 7.1. Solvable groups are amenable.

The proof of Proposition 7.5 is based on ideas somewhat similar to the proof of Proposition 7.4, and we do not give it here. (We can average over cosets using the invariant mean on $H$, then average the averages using the invariant mean on $\Gamma/H$.)


    8 Currents on infinite networks

In this section we extend the definition of current to infinite networks. There will be two natural ways to do this, corresponding to the free and wired boundary conditions.

Let $(G, C)$ be an infinite network. The space $\ell^2(E)$ consists of antisymmetric functions $\theta$ on the edges such that
$$\mathcal{E}(\theta) = \|\theta\|_R^2 = \frac{1}{2} \sum_{e \in E} R(e)\, \theta(e)^2 < \infty.$$

Proof. Call $(n, m)$ a *-last intersection, if $X(n) = Y(m)$, and $X(n_1) \ne Y(m_1)$ for all $(n_1, m_1) \ne (n, m)$ with $n_1 \ge n$, $m_1 \ge m$. Note that in general, *-last intersections are not unique. Since $P[V < \infty] = 1$, there exists at least one *-last intersection a.s. This implies that
$$1 \le \sum_n \sum_m P[(n, m) \text{ is a *-last intersection}] = \sum_n \sum_m P[X(n) = Y(m)]\, h = h\, E V,$$
where the middle equality follows from the Markov property. This shows that $h \ge (E V)^{-1} > 0$.

We introduce the notation $\langle \hat X(i) : i \ge 0 \rangle := \mathrm{LE}(X(n) : n \ge 0)$. Let
$$\lambda(i) = \text{the time of the last visit to } \hat X(i) = \sup\{ n : X(n) = \hat X(i) \}.$$


Let $\rho(j) = i$ for $\lambda(i) \le j < \lambda(i+1)$.

Then it is clear that $\rho(\lambda(i)) = i$, and $\hat X(i) = X(\lambda(i))$. We will use the shorthand $X[k, l] = \langle X(n) : k \le n \le l \rangle$, and similarly $X(k, l]$, etc. Let
$$Z_n := \begin{cases} 1 & \text{if } \lambda(i) = n \text{ for some } i \ge 0; \\ 0 & \text{otherwise.} \end{cases}$$
Then we have
$$\rho(n) = \sum_{j=1}^n Z_j = \#\{\text{points remaining of } X[0, n] \text{ after loop-erasure}\}.$$
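The chronological loop-erasure used throughout can be written out explicitly. Below is a self-contained Python sketch (my own illustration; the function name is an assumption, not from the notes): it scans the path in order, and each time a site is revisited, erases the loop created since the previous visit. The length of the output is the number of points remaining after loop-erasure, i.e. the quantity counted by $\rho$.

```python
def loop_erase(path):
    """Chronological loop-erasure of a finite path (a list of hashable sites).

    Scan the path in order; whenever a site is revisited, the loop
    created since the previous visit to that site is erased."""
    le = []      # the loop-erased path built so far
    index = {}   # site -> its position in `le`
    for site in path:
        if site in index:
            # a loop closed at `site`: erase everything after its earlier visit
            cut = index[site] + 1
            for removed in le[cut:]:
                del index[removed]
            del le[cut:]
        else:
            index[site] = len(le)
            le.append(site)
    return le
```

For example, the path $0, 1, 2, 1, 3$ closes a loop at $1$, and its loop-erasure is $0, 1, 3$.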

It will be useful to extend $X$ to a two-sided walk:
$$X(n) := \begin{cases} X_n & 0 \le n < \infty; \\ Y_{-n} & -\infty < n \le 0. \end{cases}$$
Note that the increments $X(n+1) - X(n)$, $-\infty < n < \infty$, are i.i.d. We call $j$ loop-free if $X(-\infty, j] \cap X(j, \infty) = \emptyset$. Then
$$b := P[j \text{ loop-free}] = P[X(-\infty, j] \cap X(j, \infty) = \emptyset] \ge h > 0.$$

Lemma 9.2. If $d \ge 5$, then with probability 1, there are infinitely many positive and negative loop-free points.

Proof. Let $U :=$ the number of positive loop-free points. We call $j$ $n$-loop-free, if $X[j-n, j] \cap X(j, j+n] = \emptyset$. Then
$$P[j \text{ is } n\text{-loop-free}] =: b_n \downarrow b \quad \text{as } n \to \infty.$$
Let
$$V_{i,n} := \{ (2i-1)n \text{ is loop-free} \}, \qquad W_{i,n} := \{ (2i-1)n \text{ is } n\text{-loop-free} \}.$$
Then $W_{i,n}$, $i = 1, 2, \dots$ are independent. We have
$$P[U \ge k] \ge P\Big[ \sum_{i=1}^m I[V_{i,n}] \ge k \Big] \ge P\Big[ \sum_{i=1}^m I[W_{i,n}] \ge k \Big] - P\Big[ \sum_{i=1}^m I[W_{i,n} \setminus V_{i,n}] \ge 1 \Big].$$


Given $\varepsilon > 0$, the first term can be made at least $1 - \varepsilon$, by choosing $m$ large (uniformly in $n$). The absolute value of the second term is at most $m(b_n - b)$, which can be made less than $\varepsilon$ by choosing $n$ large. Hence $P[U \ge k] \ge 1 - 2\varepsilon$. Letting $\varepsilon \to 0$ and $k \to \infty$, we get $P[U = \infty] = 1$.

Theorem 9.1. If $d \ge 5$, there exists $a = a(d) > 0$, such that
$$\lim_{n\to\infty} \frac{\rho(n)}{n} = a \quad \text{a.s.}$$

Proof. Let $j_0 := \inf\{ j \ge 0 : j \text{ loop-free} \}$. Let the sequence of loop-free points be
$$\cdots < j_{-2} < j_{-1} < j_0 < j_1 < j_2 < \cdots$$
Erase loops on each piece $X[j_i, j_{i+1}]$ separately. Let
$$\hat Z_n := I[n\text{-th point is not erased}] = I[\mathrm{LE}(X[j_i, n]) \cap X(n, j_{i+1}] = \emptyset],$$
where $j_i \le n \le j_{i+1}$. The following observation is crucial: if $n \ge j_0$, then $\hat Z_n = Z_n$. This is due to the following. Since $j_0$ is loop-free, $\mathrm{LE}(X[0, j_0])$ does not influence loop-erasure of the continuation $X(j_0, \infty)$. Similarly, since $j_1$ is loop-free, $\mathrm{LE}(X[j_0, j_1])$ does not influence loop-erasure of the continuation $X(j_1, \infty)$, etc. This implies the claim.

Shifting the path $X$ so that $X(n)$ becomes the origin, we see that the sequence $\langle \hat Z_n : n \ge 0 \rangle$ is stationary. It is also ergodic, as a function of the i.i.d. process $\langle X(n+1) - X(n) : -\infty < n < \infty \rangle$. By the ergodic theorem, $\rho(n)/n \to a := E[\hat Z_0]$ almost surely, and $a \ge b > 0$, since $\hat Z_n = 1$ whenever $n$ is loop-free.

Since $\lambda(n) \to \infty$ a.s., it follows from Theorem 9.1 that almost surely,
$$a = \lim_{n\to\infty} \frac{\rho(\lambda(n))}{\lambda(n)} = \lim_{n\to\infty} \frac{n}{\lambda(n)}. \qquad (38)$$

This gives the time-rescaling necessary so that we can compare scaled loop-erased walk to scaled simple random walk. Let us write $\Rightarrow$ for weak convergence in the space $C[0, 1]$ of continuous functions on $[0, 1]$ with the supremum metric. Write $\{B(t)\}_{0 \le t \le 1}$ for a standard $d$-dimensional Brownian motion.


Theorem 9.2. Let $d \ge 5$. Put
$$W_n(t) := \sqrt{\frac{da}{n}}\, \hat X(\lfloor nt \rfloor)$$
(with linear interpolation in between). Then $W_n(t) \Rightarrow B(t)$.

Proof. Put
$$\widetilde W_n(t) := \sqrt{\frac{da}{n}}\, X(\lfloor nt/a \rfloor).$$

Then $\widetilde W_n(t) \Rightarrow B(t)$. From (38), we get
$$\sup_{0 \le t \le 1} \left| \frac{a\, \lambda(\lfloor nt \rfloor)}{n} - t \right| \to 0 \quad \text{in probability, as } n \to \infty.$$
We can rewrite $W_n(t) = \sqrt{da/n}\, X(\lambda(\lfloor nt \rfloor))$. Given $\varepsilon > 0$, choose $K \subset C[0, 1]$ compact so that $P[\{\widetilde W_n(t)\} \in K] \ge 1 - \varepsilon$, $n = 1, 2, \dots$ This can be done due to tightness of the sequence $\{\widetilde W_n(t)\}$. By the Arzelà–Ascoli theorem [20], compactness of $K$ implies that we can find $\delta > 0$, such that for all $f \in K$, $|f(t) - f(s)| \le \varepsilon$ if $|t - s| \le \delta$. Then
$$P\Big[ \sup_{0 \le t \le 1} |W_n(t) - \widetilde W_n(t)| \ge \varepsilon \Big] \le \varepsilon + P\Big[ \sup_{0 \le t \le 1} \Big| \sqrt{\tfrac{da}{n}} X(\lambda(\lfloor nt \rfloor)) - \sqrt{\tfrac{da}{n}} X(\lfloor nt/a \rfloor) \Big| \ge \varepsilon,\ \{\widetilde W_n(t)\} \in K \Big] \le \varepsilon + P\Big[ \sup_{0 \le t \le 1} \Big| \frac{a\, \lambda(\lfloor nt \rfloor)}{n} - t \Big| \ge \delta \Big] \le 2\varepsilon$$
for $n$ large. Hence $\{W_n(t) - \widetilde W_n(t)\} \to 0$ in probability in the space $C[0, 1]$, and therefore they have the same weak limit.

In $d = 4$, Lawler [22] proves the following. Let
$$a_n = P[n\text{-th point is not erased}] = P[\mathrm{LE}(X[0, n]) \cap X(n, \infty) = \emptyset].$$
Lawler shows that this sequence tends to zero only logarithmically, and
$$\rho(n)(n a_n)^{-1} \to 1 \quad \text{in probability}.$$
Then he proves that $W_n(t) = \sqrt{d a_n}\, \hat X(\lfloor nt \rfloor)/\sqrt{n} \Rightarrow B(t)$.

Regarding $d = 3$, Kozma has shown that a scaling limit exists [21] and is invariant under dilations and rotations.


9.2 LERW in dimension d = 2

In two dimensions, LERW has a deep connection to complex analytic functions. We informally describe this connection in this section. We identify $\mathbb{R}^2 \cong \mathbb{C}$. Let $U, V \subset \mathbb{C}$ be domains. Recall that $f : U \to V$ is called conformal, if it is analytic and one-to-one. The Riemann mapping theorem states that given any two simply connected domains $U, V \subset \mathbb{C}$ with $U, V \ne \mathbb{C}$, there exists a conformal mapping between them.

Locally around a point $z$, an analytic function $f$ has the form $f(w) = f(z) + f'(z)(w - z) + o(|w - z|)$. Hence $f$ is approximately the composition of a translation, a rotation and a dilation. The translation and rotation of a Brownian path is again a Brownian path, and the dilation of a Brownian path is again a Brownian path (with time rescaled). It turns out that for an analytic function $f$ and Brownian motion $B(t)$, $f(B(t))$ is a time-changed Brownian motion. The proof of this relies on stochastic integrals and Itô's formula. Recall that Itô's formula in one dimension says that for a sufficiently smooth function $f : \mathbb{R} \to \mathbb{R}$
$$f(B(t)) - f(B(0)) = \int_0^t f'(B(s))\, dB(s) + \frac{1}{2} \int_0^t f''(B(s))\, ds,$$

where the first term is a stochastic integral. For a function $f : \mathbb{R}^2 \to \mathbb{R}$, the formula takes the form
$$f(B(t)) - f(B(0)) = \int_0^t \nabla f(B(s)) \cdot dB(s) + \frac{1}{2} \int_0^t \Delta f(B(s))\, ds. \qquad (39)$$

Suppose now that $f : \mathbb{C} \to \mathbb{C}$ is analytic, and write $f = u + iv$. The Cauchy–Riemann equations $\partial_1 u = \partial_2 v$, $\partial_2 u = -\partial_1 v$ imply that $\Delta u = 0$ and $\Delta v = 0$, hence the terms with $\Delta u$ and $\Delta v$ vanish in (39). Write $B(t) = B_1(t) + iB_2(t)$ for a complex Brownian motion. Then formally applying Itô's formula we get:
$$f(B(t)) - f(B(0)) = \int_0^t f'(B(s)) \star dB(s), \qquad (40)$$
where $\star$ represents complex multiplication, and $dB(s) = dB_1(s) + i\, dB_2(s)$. The right hand side is continuous a.s., and has independent increments that are infinitesimally centred Gaussians. By properties of stochastic integrals, the quadratic variation of the right hand side is
$$\int_0^t |f'(B(s))|^2\, ds =: \sigma(t).$$
This suggests that $f(B(t))$ is a Brownian motion looked at at time $\sigma(t)$. For a proof of the following theorem, see [29].


Theorem 9.3. Let $f : U \to \mathbb{C}$ be analytic, $z \in U$, and let $B(t)$ be a planar Brownian motion started at $z$. Let $\tau_U := \inf\{ t \ge 0 : B(t) \notin U \}$. Then $f(B(t)) = \widetilde B(\sigma(t))$, $0 \le t < \tau_U$, for some planar Brownian motion $\widetilde B$ started at $f(z)$.

Heuristically, the above theorem suggests that the scaling limit of LERW in $d = 2$, if it exists, should also be conformally invariant, since a one-to-one transformation does not affect the loop structure. In fact, it is natural to ask if one can erase loops from Brownian motion to define the scaling limit. This does not quite work: there are loops on all scales, so one cannot define a chronological loop-erasure.

Let us see a bit more precisely what conformal invariance of LERW should mean. Suppose that $0 \in U \subset \mathbb{C}$ is a domain, and consider LERW on the lattice $U \cap \delta\mathbb{Z}^2$, from 0 to $\partial(U \cap \delta\mathbb{Z}^2)$. Saying that a scaling limit exists means that as $\delta \to 0$, the random path of the LERW converges in a suitable sense to some random path in $U$ from 0 to $\partial U$. If now $f : U \to V$ is a conformal map, where for simplicity $f(0) = 0$, then conformal invariance means that the limit we get from LERW on $V \cap \delta\mathbb{Z}^2$ from 0 to $\partial(V \cap \delta\mathbb{Z}^2)$ has the same distribution as the image of the random curve in $U$ under $f$.

The scaling limit process was discovered by Schramm [24]. It is most conveniently described in the setting $U = \{ z \in \mathbb{C} : |z| < 1 \}$, as a curve growing from $\partial U$ towards 0 (rather than from 0 to $\partial U$). Hence we consider the time reversal of the LERW. By a result of Lawler [23], the time reversal of LERW has the same distribution as the reverse loop-erasure of the random walk path. That is, for a random walk path $X[0, \tau_{U^c}]$ from 0 to $\partial U$, the reversal of $\mathrm{LE}(X[0, \tau_{U^c}])$ has the same distribution as the loop-erasure of $\langle X(\tau_{U^c}), X(\tau_{U^c} - 1), \dots, X(1), X(0) \rangle$. (Here $\tau_{U^c}$ is the exit time from $U$.) The following simple Markovian property is fundamental.

Lemma 9.3 (Markovian property). Let $\gamma = \gamma[0, \ell]$ denote the reverse loop-erasure of $X[0, \tau_{U^c}]$. Conditioned on $\gamma[0, k] = \langle \gamma_0, \dots, \gamma_k \rangle$, $\gamma[k, \ell]$ has the distribution of LERW from $\gamma(k)$ to 0 in the domain $U \setminus \{ \gamma_0, \dots, \gamma_k \}$.
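To make the lattice objects above concrete, here is a self-contained Python sketch (my own illustration; names are hypothetical, not from the notes) that samples one LERW path in a lattice disk: run simple random walk from the origin until it first exits the disk, then apply chronological loop-erasure to the stopped path.

```python
import random

def lerw_in_disk(radius, seed=0):
    """Sample LE(X[0, tau]) for simple random walk started at the origin,
    stopped on first exiting the disk {x^2 + y^2 < radius^2}."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    path = [(x, y)]
    while x * x + y * y < radius * radius:
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    # chronological loop-erasure of the stopped path
    le, index = [], {}
    for site in path:
        if site in index:
            cut = index[site] + 1
            for removed in le[cut:]:
                del index[removed]
            del le[cut:]
        else:
            index[site] = len(le)
            le.append(site)
    return le
```

The output is a self-avoiding nearest-neighbour path from the origin to the first exit point; its reversal is the object appearing in Lemma 9.3.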

The scaling limit curve has the following description; see [33]. Let $\gamma : [0, \infty) \to \overline U$ be a continuous, simple curve, with $\gamma(0, \infty) \subset U$. Assume that $\gamma(0) = 1$. Let $g_t : U \setminus \gamma[0, t] \to U$ be the unique conformal map that satisfies the normalization $g_t(0) = 0$ and $g_t'(0) > 0$. One can show that $g_t'(0)$ is strictly increasing and continuous. Hence one can choose the parametrization of $\gamma$ in such a way that $g_t'(0) = e^t$. Let
$$W(t) = \lim_{\substack{z \to \gamma(t) \\ z \in U \setminus \gamma[0, t]}} g_t(z),$$


which lies on the unit circle. Then $W : [0, \infty) \to \partial U$ is continuous, and satisfies the Loewner differential equation:
$$\frac{d}{dt} g_t(z) = -g_t(z)\, \frac{g_t(z) + W(t)}{g_t(z) - W(t)}, \qquad g_0(z) = z.$$

The path $\gamma$ can be uniquely recovered from $W$, using the Loewner equation. Crucially, the Markovian property, Lemma 9.3, implies that if the scaling limit exists and is conformally invariant, then $W(t)$ has stationary and independent increments. The scaling limit of LERW is obtained when $W(t) = \exp(i\sqrt{\kappa}\, B(t))$, with $\kappa = 2$, for a standard one-dimensional Brownian motion $B(t)$, and is called radial SLE$_2$. Other values of $\kappa > 0$ give radial SLE$_\kappa$, and they arise in the context of the random cluster measures of Exercise 1.1.
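The radial Loewner evolution can be explored numerically. The following Python sketch is my own illustration (function name and discretization scheme are assumptions, not from the notes): it Euler-discretizes the radial Loewner equation $dg/dt = -g\,(g + W)/(g - W)$ for a single tracked interior point, with the driving point $W(t) = \exp(i\sqrt{\kappa}\, B(t))$ on the unit circle.

```python
import cmath
import math

def radial_loewner_flow(z0, increments, dt, kappa=2.0):
    """Euler step for the radial Loewner equation
        dg/dt = -g (g + W) / (g - W),  g_0(z) = z,
    with driving function W(t) = exp(i sqrt(kappa) B(t)),
    tracked for the single interior point z0."""
    g = z0
    theta = 0.0  # argument of the driving point, sqrt(kappa) * B(t)
    for db in increments:
        theta += math.sqrt(kappa) * db
        w = cmath.exp(1j * theta)
        g = g + dt * (-g * (g + w) / (g - w))
        if abs(g) >= 1.0:
            break  # the tracked point has been absorbed by the growing hull
    return g
```

For instance, with zero Brownian increments the driving point stays at $W \equiv 1$, and an interior point such as $z_0 = -0.5$ flows monotonically outward in modulus, consistent with the expansion $g_t'(0) = e^t$.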

A precise theorem about the LERW is the following. Let $D$ be a domain in $\mathbb{Z}^2$ whose boundary consists of edges of $\mathbb{Z}^2$. Assume that $0 \in D$. The inner radius of $D$ is defined as $\inf\{ |z| : z \in D^c \}$. Let $\gamma$ be the path of the reverse loop-erasure of a random walk in $D$ from 0 to $\partial D$. View $\gamma$ as a continuous path. For $t \ge 0$, let $f_t : D \setminus \gamma[0, t] \to U$ be the unique conformal mapping such that $f_t(0) = 0$ and $f_t'(0) > 0$. Assume that $\gamma$ has been parametrized so that $f_t'(0)/f_0'(0) = e^t$. Let $W(t) = f_t(\gamma(t))$, and write $W(t) = \exp(i\vartheta(t))$ with $\vartheta$ continuous.

Theorem 9.4 (Lawler, Schramm, Werner; 2004 [24]). For every $T > 0$ and $\varepsilon > 0$, there exists $r_1 = r_1(\varepsilon, T)$, such that for all domains $D$ as above with inner radius at least $r_1$, there is a coupling of $\gamma$ with a standard one-dimensional Brownian motion $\{B(t)\}$ starting at a uniform point in $[0, 2\pi)$, such that
$$P\Big[ \sup_{0 \le t \le T} |\vartheta(t) - B(2t)| > \varepsilon \Big] < \varepsilon.$$

Thus the theorem says that the driving function of the LERW converges to $B(2t)$ (which has the same distribution as $\sqrt{2}\, B(t)$).


    10 The Abelian sandpile / Chip-firing game

    In this last section we look at the Abelian sandpile model, also