
CompSci 102 17.1

Today’s topics

• Instant Insanity

• Random Walks

CompSci 102 17.2

Instant Insanity

• Given four cubes, how can we stack the cubes so that, whether the cubes are viewed from the front, back, left, or right, one sees all four colors?

[Picture: the four Instant Insanity cubes]

CompSci 102 17.3

Cubes 1 & 2

CompSci 102 17.4

Cubes 3 & 4

CompSci 102 17.5

Creating graph formulation

• Create multigraph
  – Four vertices represent the four face colors
  – Connect two vertices with an edge when the corresponding colors appear on opposite faces of a cube
  – Label each edge with its cube

• Solving
  – Find subgraphs with certain properties
  – What properties?

CompSci 102 17.6

Summary

Graph-theoretic Formulation of Instant Insanity:

Find two edge-disjoint labeled factors in the graph of the Instant Insanity puzzle, one for left-right sides and one for front-back sides

Use the clockwise traversal procedure to determine the left-right and front-back arrangements of each cube.
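As a sketch of the construction above (not code from the lecture): assuming a hypothetical encoding in which each cube is listed as its three pairs of opposite face colors, building the labeled multigraph is just one edge per pair. The cube colorings below are made up for illustration.

```python
# Build the Instant Insanity multigraph: one vertex per face color and,
# for every cube, one labeled edge per pair of opposite faces.
# The cube colorings below are hypothetical, for illustration only.
R, G, B, W = "red", "green", "blue", "white"

cubes = [
    [(R, G), (B, W), (R, R)],   # cube 1: (front, back), (left, right), (top, bottom)
    [(G, G), (B, R), (W, B)],   # cube 2
    [(W, R), (G, B), (W, G)],   # cube 3
    [(B, W), (R, G), (G, W)],   # cube 4
]

def build_multigraph(cubes):
    """Return the multigraph as a list of labeled edges (color1, color2, cube label)."""
    edges = []
    for label, cube in enumerate(cubes, start=1):
        for c1, c2 in cube:              # colors on opposite faces share an edge
            edges.append((c1, c2, label))
    return edges

edges = build_multigraph(cubes)
print(len(edges))  # 4 cubes x 3 opposite-face pairs = 12 edges
```

Each cube contributes exactly three labeled edges, so solving means picking one edge per cube (per factor) with the degree conditions the slides ask about.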

CompSci 102 17.7

Try it out:

Find all Instant Insanity solutions to the game with the multigraph:

[Figure: a multigraph on the four color vertices, with edges labeled by cube numbers 1–4]

CompSci 102 17.8

Answer:

[Figure: the two edge-disjoint labeled factors of the multigraph]

CompSci 102 17.9

Getting back home

• Lost in a city, you want to get back to your hotel.
• How should you do this?
  – Depth-first search?
    • What resources does this algorithm require?
    • Why not breadth-first?
  – Walk randomly
    • What does this algorithm require?

CompSci 102 17.10

Random Walking

• Will it work?
  – Pr[ will reach home ] = ?

• When will I get home?
  – Given that there are n nodes and m edges
  – E[ time to visit all nodes ] ≤ 2m × (n – 1)
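The bound E[ time to visit all nodes ] ≤ 2m(n – 1) can be checked empirically. A minimal sketch, assuming a walk on an n-vertex cycle (so m = n); the function names here are ours, not from the lecture:

```python
import random

def cover_time(adj, start, rng):
    """Number of steps a random walk from `start` needs to visit every vertex."""
    seen = {start}
    v, steps = start, 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])   # step to a uniformly random neighbor
        seen.add(v)
        steps += 1
    return steps

# Walk on a cycle with n vertices: n nodes and m = n edges.
n = 8
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

rng = random.Random(0)
trials = 2000
avg = sum(cover_time(adj, 0, rng) for _ in range(trials)) / trials

m = n
print(avg <= 2 * m * (n - 1))  # the slide's bound: 2m(n - 1) = 112 here
```

For the cycle the true expected cover time is much smaller than the bound, which is what the theorem's worst-case ≤ allows.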

CompSci 102 17.11

Cover times

• Let us define a couple of useful things:
  – Cover time (from u):
  – C_u = E[ time to visit all vertices | start at u ]

• Cover time of the graph:
  – C(G) = max_u { C_u }

• Cover time theorem
  – C(G) ≤ 2m(n – 1)

– What is the max value of C(G) in terms of n?

CompSci 102 17.12

We will eventually get home

• Look at the first n steps.
  – There is a non-zero chance p_1 that we get home.

• Suppose we fail.
  – Then, wherever we are, there is a chance p_2 > 0 that we hit home in the next n steps from there.

• Probability of failing to reach home by time kn
  – = (1 – p_1)(1 – p_2) … (1 – p_k) → 0 as k → ∞

CompSci 102 17.13

In fact

Pr[ we don’t get home by 2k·C(G) steps ] ≤ (1/2)^k

Recall: C(G) = cover time of G ≤ 2m(n-1)

CompSci 102 17.14

An averaging argument

• Suppose I start at u.
  – E[ time to hit all vertices | start at u ] ≤ C(G)

• Hence,
  – Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½.

• Why? Else this average would be higher. (This is called Markov’s inequality.)

CompSci 102 17.15

Markov’s Inequality

• Random variable X ≥ 0 has expectation A = E[X].

• A = E[X] = E[X | X > 2A] Pr[X > 2A]
           + E[X | X ≤ 2A] Pr[X ≤ 2A]

• ≥ E[X | X > 2A] Pr[X > 2A]

• Also, E[X | X > 2A] > 2A

• So A ≥ 2A × Pr[X > 2A], hence ½ ≥ Pr[X > 2A].

• In general: Pr[ X exceeds k × expectation ] ≤ 1/k.
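Markov's inequality can be verified exactly by enumeration on any small nonnegative distribution. A sketch with a made-up four-point distribution:

```python
from fractions import Fraction

# A small, hypothetical nonnegative random variable given by its distribution.
dist = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 10: Fraction(1, 4)}

A = sum(x * p for x, p in dist.items())   # E[X] = (0 + 1 + 2 + 10) / 4 = 13/4

# Markov: Pr[ X > k*A ] <= 1/k, checked exactly for a few values of k.
for k in (2, 3, 4):
    tail = sum(p for x, p in dist.items() if x > k * A)
    assert tail <= Fraction(1, k)

print(A)  # 13/4
```

Using exact rationals makes the check a proof for this particular distribution rather than a floating-point approximation.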

CompSci 102 17.16

An averaging argument

• Suppose I start at u.
  – E[ time to hit all vertices | start at u ] ≤ C(G)

• Hence, by Markov’s inequality,
  – Pr[ time to hit all vertices > 2C(G) | start at u ] ≤ ½

• Suppose at time 2C(G) we are at some node v, with more nodes still to visit.
  – Pr[ haven’t hit all vertices in 2C(G) more time | start at v ] ≤ ½.

• Chance that you failed both times ≤ ¼!

CompSci 102 17.17

The power of independence

• It is like flipping a coin with tails probability q ≤ ½.

• The probability that you get k tails is q^k ≤ (1/2)^k
  (because the trials are independent!)

• Hence,
  – Pr[ haven’t hit everyone in time k × 2C(G) ] ≤ (1/2)^k

• Exponential in k!

CompSci 102 17.18

Hence, if we know that

Expected cover time C(G) ≤ 2m(n – 1)

then

Pr[ home by time 4km(n – 1) ] ≥ 1 – (1/2)^k

CompSci 102 17.19

Random walks on infinite graphs

• “A drunk man will find his way home, but a drunk bird may get lost forever.”

• – Shizuo Kakutani

CompSci 102 17.20

Random Walk on a line

• Flip an unbiased coin and go left/right.

• Let X_t be the position at time t.

• Pr[ X_t = i ]
  = Pr[ #heads – #tails = i ]
  = Pr[ #heads – (t – #heads) = i ]
  = Pr[ #heads = (t + i)/2 ]
  = C(t, (t + i)/2) / 2^t

CompSci 102 17.21

Unbiased Random Walk

• Pr[ X_2t = 0 ] = C(2t, t) / 2^(2t) = Θ(1/√t)
  (by Stirling’s approximation)

• Y_2t = indicator for (X_2t = 0)  ⇒  E[ Y_2t ] = Θ(1/√t)

• Z_2n = number of visits to origin in 2n steps.

• E[ Z_2n ] = E[ Σ_{t = 1…n} Y_2t ]
  = Θ(1/√1 + 1/√2 + … + 1/√n) = Θ(√n)
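The sum E[ Z_2n ] = Σ C(2t, t)/4^t can be computed exactly using the term recurrence c_t = c_{t-1}·(2t – 1)/(2t), avoiding huge binomials. Quadrupling n should then roughly double the result, matching the Θ(√n) claim. A sketch (function name is ours):

```python
def expected_visits(n):
    """E[Z_2n] = sum over t = 1..n of C(2t, t) / 4^t, via the term recurrence."""
    c = 1.0        # c_t = C(2t, t) / 4^t; c_0 = 1
    total = 0.0
    for t in range(1, n + 1):
        c *= (2 * t - 1) / (2 * t)    # c_t = c_{t-1} * (2t - 1) / (2t)
        total += c
    return total

# Theta(sqrt(n)): quadrupling n should roughly double the expected visit count.
ratio = expected_visits(400) / expected_visits(100)
print(round(ratio, 2))  # roughly 2
```

The first term checks out by hand: c_1 = C(2, 1)/4 = 1/2, exactly Pr[ X_2 = 0 ].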

CompSci 102 17.22

In n steps, you expect to return to the origin

Θ(√n) times!

CompSci 102 17.23

Simple Claim

• Recall: if we repeatedly flip a coin with bias p
  – E[ # of flips till heads ] = 1/p.

• Claim: If Pr[ not return to origin ] = p, then
  – E[ number of times at origin ] = 1/p.

• Proof: H = never return to origin. T = we do.
  – Hence returning to origin is like getting a tails.
  – E[ # of returns ] = E[ # tails before a head ] = 1/p – 1.

• (But we started at the origin too, so add 1: the total is 1/p.)
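The coin analogy in the proof can be simulated directly: every time we are at the origin, "flip" a coin that escapes forever with probability p. A sketch with a hypothetical escape probability p = 0.25:

```python
import random

# The proof's coin analogy: each time we are at the origin, "flip" a coin
# that comes up heads (never return) with probability p. The number of
# visits, counting the start, is then geometric with mean 1/p.
p = 0.25            # hypothetical escape probability, for illustration
rng = random.Random(1)

def visits(rng, p):
    count = 1                   # we start at the origin
    while rng.random() > p:     # tails: we return to the origin once more
        count += 1
    return count

trials = 50_000
avg = sum(visits(rng, p) for _ in range(trials)) / trials
print(avg)  # near 1/p = 4
```

The sample mean lands near 1/p = 4, i.e. 1/p – 1 = 3 expected returns plus the initial visit.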

CompSci 102 17.24

We will return…

• Claim: If Pr[ not return to origin ] = p, then
  E[ number of times at origin ] = 1/p.

• Theorem: Pr[ we return to origin ] = 1.

• Proof: Suppose not.

• Then p = Pr[ never return ] > 0.

• So E[ # times at origin ] = 1/p, a constant.

• But we showed that E[ Z_2n ] = Θ(√n) → ∞. Contradiction.

CompSci 102 17.25

How about a 2-d grid?

• Let us simplify our 2-d random walk:
• move in both the x-direction and y-direction at each step…

CompSci 102 17.30

Returning to the origin in the 2-d walk

• Returning to the origin in the grid requires both “line” random walks to return to their origins.

• Pr[ visit origin at time t ] = Θ(1/√t) × Θ(1/√t) = Θ(1/t)

• E[ # of visits to origin by time n ]
  = Θ(1/1 + 1/2 + 1/3 + … + 1/n) = Θ(log n)
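Since the two line walks are independent, Pr[ at origin at time 2t ] = c_t² with c_t = C(2t, t)/4^t as before, and the partial sums grow like (log n)/π. A sketch reusing the same term recurrence (function name is ours):

```python
import math

def expected_visits_2d(n):
    """E[# visits to the origin by time 2n] in the simplified 2-d walk:
    sum over t = 1..n of (C(2t, t) / 4^t)^2, since the two independent
    line walks must both be at 0."""
    c = 1.0
    total = 0.0
    for t in range(1, n + 1):
        c *= (2 * t - 1) / (2 * t)   # c_t = C(2t, t) / 4^t
        total += c * c
    return total

# Theta(log n): the increment from n = 100 to n = 1000 is about log(10)/pi,
# because c_t^2 is roughly 1/(pi*t).
diff = expected_visits_2d(1000) - expected_visits_2d(100)
print(abs(diff - math.log(10) / math.pi) < 0.05)  # True
```

The logarithmic growth is slow but still unbounded, which is what the recurrence argument on the next slide needs.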

CompSci 102 17.31

We will return (again!)…

• Claim: If Pr[ not return to origin ] = p, then
  E[ number of times at origin ] = 1/p.

• Theorem: Pr[ we return to origin ] = 1.

• Proof: Suppose not.

• Then p = Pr[ never return ] > 0.

• So E[ # times at origin ] = 1/p, a constant.

• But we showed that E[ Z_n ] = Θ(log n) → ∞. Contradiction.

CompSci 102 17.32

But in 3-d

• Pr[ visit origin at time t ] = Θ(1/√t)^3 = Θ(1/t^(3/2))

• lim_{n→∞} E[ # of visits by time n ] < K (a constant)

• Hence Pr[ never return to origin ] > 1/K.
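The same computation in 3-d shows the series Σ c_t³ converges, so the expected number of returns stays below a constant K. A sketch (function name is ours):

```python
def expected_visits_3d(n):
    """E[# visits to the origin by time 2n] in the simplified 3-d walk:
    sum over t = 1..n of (C(2t, t) / 4^t)^3, since all three independent
    line walks must be at 0."""
    c = 1.0
    total = 0.0
    for t in range(1, n + 1):
        c *= (2 * t - 1) / (2 * t)   # c_t = C(2t, t) / 4^t
        total += c ** 3              # c_t^3 is roughly 1/(pi*t)^(3/2)
    return total

# The series converges: extending the horizon barely moves the sum, so the
# expected number of returns is bounded by a constant K.
tail = expected_visits_3d(5000) - expected_visits_3d(1000)
print(tail < 0.01)  # True: the tail is tiny
```

A bounded expectation forces a positive escape probability, which is exactly the drunk-bird half of Kakutani's quote.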