
Randomness and Computation
or, “Randomized Algorithms”

Heng Guo

(Based on slides by M. Cryan)

RC (2019/20) – Lecture 10 – slide 1


Balls into Bins

m balls, n bins; balls are thrown uniformly at random and independently into bins (usually one at a time).

Magic bins with no upper limit on capacity.

Can be viewed as a random function [m] → [n].

Common model of random allocations and their effects on overall load, load balance, etc.

Many related questions:

How many balls do we need to cover all bins?

(Coupon collector, surjective mapping)

How many balls will lead to a collision?

(Birthday paradox, injective mapping)

What is the maximum load of each bin?

(Load balancing)
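The model is easy to simulate. Below is a minimal sketch (not from the slides): it throws m balls into n bins uniformly at random and reports the average maximum load, touching on the load-balancing question above.

```python
import random
from collections import Counter

def max_load(m: int, n: int, trials: int = 1000) -> float:
    """Average maximum bin load over repeated balls-into-bins experiments."""
    total = 0
    for _ in range(trials):
        loads = Counter(random.randrange(n) for _ in range(m))  # one uniform bin per ball
        total += max(loads.values())
    return total / trials

# With n = 100 bins, compare a few regimes of m (cf. the histograms on the next slide).
for m in (10, 100, 1000, 5000):
    print(m, max_load(m, 100))
```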

RC (2019/20) – Lecture 10 – slide 2


Balls into Bins — maximum load

Depending on m/n, there are a few different scenarios. Fix n = 100, vary m.

[Figure: empirical histograms of bin loads for n = 100 bins, one panel per m ∈ {10, 100, 1000, 5000}.]

RC (2019/20) – Lecture 10 – slide 3


Balls into Bins — maximum load

m = Ω(n log n): maximum load is Θ(m/n), namely of the same order as the average load.

m = n (same number of balls as bins): maximum load is Θ(ln(n)/ln ln(n)).

We have already shown that when m = n and n is sufficiently large, the maximum load is ≤ 3 ln(n)/ln ln(n) with probability at least 1 − 1/n.

Today: a matching Ω(ln(n)/ln ln(n)) lower bound.

RC (2019/20) – Lecture 10 – slide 4


Poisson random variable

Probability p_r of a specific bin having r balls:

p_r = \binom{m}{r} (1/n)^r (1 − 1/n)^{m−r}.

Note

p_r ∼ (e^{−m/n}/r!) · (m/n)^r,

where we consider r as a constant, m/n is a fixed constant, and n → ∞.

Definition (5.1)
A discrete Poisson random variable X with parameter µ is given by the following probability distribution on j = 0, 1, 2, …:

Pr[X = j] = e^{−µ} µ^j / j!.

RC (2019/20) – Lecture 10 – slide 5



Poisson as the limit of the Binomial Distribution

Theorem (5.5)
If X_n is a binomial random variable with parameters n and p = p(n) such that lim_{n→∞} np = µ is a constant (independent of n), then for any fixed k ∈ ℕ₀,

lim_{n→∞} Pr[X_n = k] = e^{−µ} µ^k / k!.
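The convergence is easy to see numerically. A small sketch (the parameter values µ = 2 and k = 3 are arbitrary illustrative choices):

```python
from math import comb, exp, factorial

def binom_pmf(n: int, p: float, k: int) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(mu: float, k: int) -> float:
    return exp(-mu) * mu**k / factorial(k)

# With p = mu/n, the binomial pmf approaches the Poisson pmf as n grows.
mu, k = 2.0, 3
for n in (10, 100, 1000, 10000):
    print(n, binom_pmf(n, mu / n, k), poisson_pmf(mu, k))
```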

RC (2019/20) – Lecture 10 – slide 6


Properties of Poisson

It is well-defined:

∑_{j=0}^{∞} e^{−µ} µ^j / j! = e^{−µ} ∑_{j=0}^{∞} µ^j / j! = 1.

E[X] = µ.

Var[X] = µ.

The sum of two independent Poisson r.v.s with parameters µ₁ and µ₂ is a Poisson r.v. with parameter µ₁ + µ₂.
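The sum property can be checked by convolving the two pmfs; a minimal sketch (the values of µ₁ and µ₂ are arbitrary):

```python
from math import exp, factorial

def poisson_pmf(mu: float, k: int) -> float:
    return exp(-mu) * mu**k / factorial(k)

mu1, mu2 = 1.5, 2.5
for k in range(6):
    # Pr[X1 + X2 = k] by convolution vs. the Poisson(mu1 + mu2) pmf directly.
    conv = sum(poisson_pmf(mu1, i) * poisson_pmf(mu2, k - i) for i in range(k + 1))
    print(k, conv, poisson_pmf(mu1 + mu2, k))
```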

RC (2019/20) – Lecture 10 – slide 7


Concentration of Poisson r.v.

Theorem (5.4)
Let X be a Poisson random variable with parameter µ.

If x = (1 + δ)µ for δ > 0, then

Pr[X ≥ x] ≤ e^{−µ}(eµ)^x / x^x = (e^δ / (1 + δ)^{1+δ})^µ;

If x = (1 − δ)µ for 0 < δ < 1, then

Pr[X ≤ x] ≤ e^{−µ}(eµ)^x / x^x = (e^{−δ} / (1 − δ)^{1−δ})^µ.
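For a feel of how tight the upper-tail bound is, here is a small numerical sketch (µ and δ are arbitrary illustrative values):

```python
from math import exp, factorial

def poisson_tail(mu: float, x: int, terms: int = 200) -> float:
    """Pr[X >= x] for X ~ Poisson(mu), by summing the pmf over a long truncated range."""
    term = exp(-mu) * mu**x / factorial(x)   # pmf at k = x
    total = 0.0
    for k in range(x, x + terms):
        total += term
        term *= mu / (k + 1)                 # pmf recursion: p(k+1) = p(k) * mu/(k+1)
    return total

mu, delta = 10.0, 0.5
x = int((1 + delta) * mu)
bound = (exp(delta) / (1 + delta) ** (1 + delta)) ** mu  # Theorem 5.4, upper tail
print(poisson_tail(mu, x), bound)  # the exact tail should sit below the bound
```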

RC (2019/20) – Lecture 10 – slide 8


Proof for Poisson concentration

Same argument as before:

Pr[X ≥ x] = Pr[e^{tX} ≥ e^{tx}] ≤ E[e^{tX}] / e^{tx}.

The claim follows from setting t = ln(x/µ) and the following:

E[e^{tX}] = e^{µ(e^t − 1)}.

(It was ≤ in the sum of independent Bernoulli case.)

RC (2019/20) – Lecture 10 – slide 9


Poisson moment generating function

E[e^{tX}] = ∑_{i=0}^{∞} e^{ti} · e^{−µ} µ^i / i!

= e^{−µ} ∑_{i=0}^{∞} (µe^t)^i / i!

= e^{−µ} e^{µe^t}

= e^{µ(e^t − 1)}.
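A quick sanity check of the closed form against a truncated version of the series (sketch only; µ and t are arbitrary):

```python
from math import exp

def mgf_truncated(mu: float, t: float, terms: int = 200) -> float:
    total, term = 0.0, exp(-mu)          # term = e^{-mu} (mu e^t)^i / i!, starting at i = 0
    for i in range(terms):
        total += term
        term *= mu * exp(t) / (i + 1)    # advance the series term from i to i + 1
    return total

mu, t = 3.0, 0.7
print(mgf_truncated(mu, t), exp(mu * (exp(t) - 1)))  # the two should agree closely
```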

RC (2019/20) – Lecture 10 – slide 10


Poisson modelling of balls-in-bins

Our balls-in-bins model has n bins, m balls (for variable m), and the balls are thrown into bins independently and uniformly at random.

Each bin load X_i^{(m)} behaves like a binomial r.v. B(m, 1/n).

Write X^{(m)} = (X_1^{(m)}, …, X_n^{(m)}) for the joint distribution. These X_i^{(m)}s are not independent.

For the “Poisson approximation” we take µ = m/n, and write Y_i^{(m)} to denote a Poisson r.v. with parameter µ = m/n.

Write Y^{(m)} = (Y_1^{(m)}, …, Y_n^{(m)}) to denote a joint distribution of Poisson r.v.s which are all independent.

Notice that the sum ∑_{i=1}^{n} Y_i^{(m)} is a Poisson r.v. with parameter n · (m/n) = m.
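The two models are easy to compare empirically. A sketch (not from the slides; the inverse-transform Poisson sampler is an assumption, chosen to keep the example standard-library only):

```python
import random
from collections import Counter
from math import exp

def poisson_sample(mu: float) -> int:
    """Inverse-transform sampler for Poisson(mu); fine for small mu."""
    u, k, p, cdf = random.random(), 0, exp(-mu), exp(-mu)
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

m, n = 1000, 100
exact = Counter(random.randrange(n) for _ in range(m))   # X^(m): loads sum to exactly m
poisson = [poisson_sample(m / n) for _ in range(n)]      # Y^(m): independent bins
print(max(exact.values()), max(poisson))
```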

RC (2019/20) – Lecture 10 – slide 11



Poisson approximation

Theorem (5.7)
Let f(x_1, …, x_n) be a non-negative function. Then

E[f(X_1^{(m)}, …, X_n^{(m)})] ≤ e√m · E[f(Y_1^{(m)}, …, Y_n^{(m)})].

To bound the probability of some event, take f as its indicator function.

Corollary (5.9)
Any event that takes place with probability p in the “Poisson case” takes place with probability at most pe√m in the exact balls-in-bins case.

Thus the Poisson approximation is good enough if p is sufficiently small.
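To see the orders of magnitude involved, a tiny worked sketch (n = m = 10⁶ is an arbitrary illustrative choice):

```python
from math import e, sqrt

n = m = 10**6
p_poisson = 1 / n**2                 # a bad-event probability established in the Poisson case
p_exact = e * sqrt(m) * p_poisson    # Corollary 5.9: bound for the exact balls-in-bins case
print(p_exact, 1 / n)                # the e*sqrt(m) factor still leaves us far below 1/n
```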

RC (2019/20) – Lecture 10 – slide 12



Justifying the Poisson approximation

E[f(Y^{(m)})] = ∑_{k=0}^{∞} E[f(Y^{(m)}) | ∑_{i=1}^{n} Y_i^{(m)} = k] · Pr[∑_{i=1}^{n} Y_i^{(m)} = k]

≥ E[f(Y^{(m)}) | ∑_{i=1}^{n} Y_i^{(m)} = m] · Pr[∑_{i=1}^{n} Y_i^{(m)} = m],

where dropping all terms except k = m is valid because f is non-negative.

The theorem follows from two facts:

1. Y^{(m)} conditioned on ∑_{i=1}^{n} Y_i^{(m)} = k is distributed exactly as X^{(k)};

2. Pr[∑_{i=1}^{n} Y_i^{(m)} = m] ≥ 1/(e√m).

RC (2019/20) – Lecture 10 – slide 13


Fact 1

The sum of Y^{(m)}, denoted by Y, is a Poisson r.v. with parameter n · (m/n) = m.

Conditioned on the sum, the probability that Y^{(m)} takes values k_1, …, k_n (where ∑_{i=1}^{n} k_i = k) is

(∏_{i=1}^{n} e^{−m/n} (m/n)^{k_i} / k_i!) / (e^{−m} m^k / k!) = k! / (n^k ∏_{i=1}^{n} k_i!) = \binom{k}{k_1, …, k_n} / n^k,

which is exactly the probability that X^{(k)} takes values k_1, …, k_n.
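Fact 1 can be verified exactly on a toy instance by enumeration; a sketch (n, m, k are arbitrary small values):

```python
from itertools import product
from math import exp, factorial, prod

n, m, k = 3, 6, 4            # n bins; each Y_i ~ Poisson(m/n); condition on the sum being k
mu = m / n

def pois(mu: float, j: int) -> float:
    return exp(-mu) * mu**j / factorial(j)

p_sum = pois(m, k)           # the sum of the n Poissons is Poisson(m)
for ks in product(range(k + 1), repeat=n):
    if sum(ks) == k:
        joint = prod(pois(mu, j) for j in ks)  # independent Poisson joint pmf
        multinomial = factorial(k) / (n**k * prod(factorial(j) for j in ks))
        print(ks, joint / p_sum, multinomial)  # the last two columns should match
```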

RC (2019/20) – Lecture 10 – slide 14


Fact 2

Stirling approximation:

m! ∼ √(2πm) · (m/e)^m.

The sum of Y^{(m)}, denoted by Y, is a Poisson r.v. with parameter n · (m/n) = m.

Pr[Y = m] = e^{−m} m^m / m! ∼ 1/√(2πm).

To make things rigorous (Lemma 5.8),

m! ≤ e√m · (m/e)^m.
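Both the asymptotic and the rigorous bound are easy to check numerically (sketch; the values of m are arbitrary):

```python
from math import e, exp, factorial, pi, sqrt

for m in (10, 50, 100):
    stirling_upper = e * sqrt(m) * (m / e) ** m   # Lemma 5.8 upper bound on m!
    prob = exp(-m) * m**m / factorial(m)          # Pr[Y = m] for Y ~ Poisson(m)
    print(m, factorial(m) <= stirling_upper, prob, 1 / sqrt(2 * pi * m))
```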

RC (2019/20) – Lecture 10 – slide 15


Poisson approximation

Theorem (5.7)
Let f(x_1, …, x_n) be a non-negative function. Then

E[f(X_1^{(m)}, …, X_n^{(m)})] ≤ e√m · E[f(Y_1^{(m)}, …, Y_n^{(m)})].

Corollary (5.9)
Any event that takes place with probability p in the “Poisson case” takes place with probability at most pe√m in the exact balls-in-bins case.

RC (2019/20) – Lecture 10 – slide 16


Lower bound for n “balls into bins”

Lemma (5.12)
Let n balls be thrown independently and uniformly at random into n bins. Then (for n sufficiently large) the maximum load is at least ln(n)/ln ln(n) with probability at least 1 − 1/n.

Proof.
For the Poisson variables, we have µ = n/n = 1. Let M := ln(n)/ln ln(n). For bin i,

Pr[Y_i^{(m)} ≥ M] ≥ Pr[Y_i^{(m)} = M] = 1^M e^{−1} / M! = 1/(eM!).

(Chernoff bounds give an upper bound here.)

In our Poisson model, the bins are independent, so the probability that no bin has load ≥ M (our bad event) is at most

(1 − 1/(eM!))^n ≤ e^{−n/(eM!)}.
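Plugging in a concrete n shows how small this bad-event probability already is (sketch; n = 1000 is an arbitrary choice, and we round M down to an integer):

```python
from math import e, exp, factorial, floor, log

n = 1000
M = floor(log(n) / log(log(n)))   # M = ln(n)/ln ln(n), rounded down
p_bin = 1 / (e * factorial(M))    # Pr[a Poisson(1) bin has load exactly M]
print(M, (1 - p_bin) ** n, exp(-n * p_bin))  # bad-event probability and its upper bound
```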

RC (2019/20) – Lecture 10 – slide 17



Proof of Lemma 5.12 cont’d.

To tolerate the e√n loss of the Poisson approximation, we want to show that

e^{−n/(eM!)} ≤ 1/n²
⇔ 2 ln n ≤ n/(eM!)
⇔ ln ln n + ln 2 ≤ ln n − ln M! − 1
⇔ ln M! ≤ ln n − ln ln n − C,

where C = 1 + ln 2 is a constant.

Recall Lemma 5.8: M! ≤ e√M (M/e)^M. Hence

ln M! ≤ M ln M − M + ln M
= (ln n/ln ln n)(ln ln n − ln ln ln n) − ln n/ln ln n + ln M
= ln n − ln n/ln ln n − (M ln ln ln n − ln M)
≤ ln n − ln n/ln ln n.

RC (2019/20) – Lecture 10 – slide 18



Proof of Lemma 5.12 cont’d.

We want to show

ln M! ≤ ln n − ln ln n − C,

where C = 1 + ln 2 is a constant.

We have

ln M! ≤ ln n − ln n/ln ln n ≤ ln n − ln ln n − C,

since ln ln n = o(ln n/ln ln n).

RC (2019/20) – Lecture 10 – slide 19



Proof of Lemma 5.12 cont’d.

We have shown that in the Poisson case,

Pr[∀i ∈ [n], Y_i^{(m)} < M] = ∏_{i=1}^{n} Pr[Y_i^{(m)} < M] ≤ (1 − 1/(eM!))^n ≤ 1/n²,

where M = ln n/ln ln n.

By the Poisson approximation (Corollary 5.9),

Pr[∀i ∈ [n], X_i^{(m)} < M] ≤ e√n · Pr[∀i ∈ [n], Y_i^{(m)} < M] ≤ e√n/n² < 1/n.
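The lemma is also easy to see empirically; a simulation sketch (n and the trial count are arbitrary choices):

```python
import random
from collections import Counter
from math import floor, log

def max_load(n: int) -> int:
    loads = Counter(random.randrange(n) for _ in range(n))  # n balls into n bins
    return max(loads.values())

n, trials = 100_000, 50
M = floor(log(n) / log(log(n)))
hits = sum(max_load(n) >= M for _ in range(trials))
print(f"max load >= {M} in {hits}/{trials} trials")  # expect (nearly) every trial to succeed
```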

RC (2019/20) – Lecture 10 – slide 20



References

Sections 5.1 and 5.2 of “Probability and Computing”.

Sections 5.3 and 5.4 have all the precise details of our Ω(ln(n)/ln ln(n)) result.

We will skip the rest of Chapter 5.

RC (2019/20) – Lecture 10 – slide 21