
Ryan O'Donnell

Carnegie Mellon University

f : {−1, 1}^n → {−1, 1} is “quasirandom” iff fixing O(1) input coords changes E[f(x)] by only o(1)

Examples

• Constants: f ≡ ±1

• Majority: f(x) = sgn(x_1 + ··· + x_n)

• Parity: f(x) = x_1 x_2 ··· x_n

• Random f (whp)

Non-Examples

• Dictators: f(x) = xi

• Juntas: f depending on only O(1) coords (non-constant)

“Fourier analysis”

Fact 1: every f : {−1, 1}^n → ℝ has a unique expansion f(x) = ∑_{S⊆[n]} f̂(S) x^S, where x^S := ∏_{i∈S} x_i.

Fact 2: “Plancherel” E[f(x)g(x)] = ∑_S f̂(S) ĝ(S).

Fact 3: ∑_S f̂(S)² = E[f(x)²] = 1 if f is {−1, 1}-valued.
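A concrete instance of these facts (a standard computation, added here for illustration): the 3-bit majority expands as

```latex
\mathrm{Maj}_3(x) \;=\; \tfrac12 x_1 + \tfrac12 x_2 + \tfrac12 x_3 - \tfrac12\, x_1 x_2 x_3,
\qquad
\sum_S \widehat{\mathrm{Maj}_3}(S)^2 \;=\; 4 \cdot \left(\tfrac12\right)^2 \;=\; 1,
```

consistent with Fact 3 since Maj_3 is {−1, 1}-valued.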

def: f : {−1, 1}^n → {−1, 1} is

(ϵ,δ)-quasirandom if |f̂(S)| ≤ ϵ for all S with 0 < |S| ≤ 1/δ:

“no large low-degree Fourier coeffs”

“indisting. from a constant by juntas”

“does not ‘suggest’ any coords in [n]”

ex.: f is (ϵ,δ)-quasirandom ⇒ every restriction of ≤ 1/δ coords changes E[f] by ≤ ϵ′ (some ϵ′ → 0 as ϵ → 0)

Structure vs. Quasirandomness

Dictators maximize degree-1 Fourier weight:

∑_i f̂({i})² ≤ ∑_S f̂(S)² = 1,

with equality iff f(x) = ±x_i for some i. (ex.)

Friedgut−Kalai−Naor ’02 Theorem:

∑_i f̂({i})² ≥ 1 − ϵ ⇒ f is O(ϵ)-close to some ±x_i

Say f is (ϵ,1)-qrand, i.e. all |f̂({i})| ≤ ϵ.

By Plancherel, ∑_i f̂({i})² = E[f(x) ℓ(x)], where ℓ(x) := ∑_i f̂({i}) x_i.

ℓ(x) acts like a Gaussian, because all its coefficients are ≤ ϵ. (Berry−Esseen Thm)

1 vs. 2/π Theorem: ∑_i f̂({i})² is…

= 1 if f = Dictator_i

≤ 2/π + o(1) if f is qrand
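As a sanity check on the 2/π constant: the degree-1 weight of Maj_n can be computed exactly from the standard formula f̂({i}) = C(n−1, (n−1)/2)/2^(n−1). A minimal sketch (function name is mine):

```python
from math import comb, pi

def w1_majority(n: int) -> float:
    """Degree-1 Fourier weight of Maj_n, n odd: each of the n
    coefficients equals C(n-1, (n-1)/2) / 2^(n-1)."""
    coeff = comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
    return n * coeff ** 2

# Dictators put all their Fourier weight at degree 1 (weight exactly 1);
# Majority's degree-1 weight tends to 2/pi ~ 0.6366 as n grows.
print(w1_majority(3))    # 0.75
print(w1_majority(101))  # ~0.64, approaching 2/pi
```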

Constraint Satisfaction Problems (CSPs)

V = {v1, v2, …, vn},  Σ : alphabet,  “type(s) of constrs.”

e.g.: Σ = {true, false}, 3-ary disjunctions → Max-3Sat

Constraint Satisfaction Problems (CSPs)

Max-3Sat instance I

Algorithmic task:

Find best

F : V → {true,false}

Constraint Satisfaction Problems (CSPs)

Σ = {0, 1}, 3-vbl linear equations (mod 2)

→ Max-3Lin(mod 2)

Constraint Satisfaction Problems (CSPs)

Max-3Lin(mod 2) instance I

Algorithmic task:

Find best

F : V → Σ

Constraint Satisfaction Problems (CSPs)

Σ = {−1,1}, non-equality constraints

→ Max-Cut

Constraint Satisfaction Problems (CSPs)

Max-Cut instance I (allow weights):

constraints with weights p1, p2, p3, p4, …, summing to 1

Algorithmic task: Find best F : V → {−1, 1}

def: Val(F) = weight of constrs. that F satisfies

Opt(I) = max_F { Val(F) }
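To make Val and Opt concrete, here is a brute-force evaluation on a made-up 3-vertex weighted Max-Cut instance (the instance and names are illustrative, not from the talk):

```python
from itertools import product

# Hypothetical instance on V = {0, 1, 2}: entries (u, v, weight),
# with constraint "F(u) != F(v)" and weights summing to 1.
constrs = [(0, 1, 0.5), (1, 2, 0.25), (0, 2, 0.25)]

def val(F):
    # Val(F) = total weight of the constraints that F satisfies.
    return sum(p for (u, v, p) in constrs if F[u] != F[v])

def opt():
    # Opt(I) = max over all 2^3 assignments F : V -> {-1, +1}.
    return max(val(F) for F in product([-1, 1], repeat=3))

print(opt())  # 0.75: put vertex 1 on one side, 0 and 2 on the other
```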

Even for Max-Cut, the task

“Find F achieving Opt(I)”

is NP-hard.

[Karp ’72] thm:

proof:

Formula-Sat φ  →  Max-Cut I  (efficient reduction)

φ satisfiable ⇒ Opt(I) = 5/6

φ unsatisfiable ⇒ Opt(I) < 5/6

Given Max-Cut inst. I with

Opt(I) = 5/6,

NP-hard to find F achieving

Val(F) ≥ 5/6 − 10−10.

“PCP Theorem”:

[AS’92, ALMSS’92]

(+ [PY’88])

[Goemans−Williamson’94]:

Efficient alg. guaranteeing Val(F) ≥ arccos(1 − 2·(5/6))/π ≈ .73 when Opt(I) ≥ 5/6.
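The ≈ .73 figure is the Goemans−Williamson guarantee arccos(1 − 2c)/π evaluated at c = 5/6; a quick numeric check (function name is mine):

```python
from math import acos, pi

def sdp_cut(c: float) -> float:
    # Goemans-Williamson guarantee for Max-Cut: when Opt(I) >= c,
    # SDP rounding achieves value arccos(1 - 2c) / pi.
    return acos(1 - 2 * c) / pi

print(round(sdp_cut(5 / 6), 4))  # 0.7323
```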

def: (c,s)-approximation algorithm:

If Opt(I) ≥ c, guarantees Val(F) ≥ s.

NP-hard: (5/6, 5/6 −10−10)-approx’ing Max-Cut.

in P: (5/6, .73)-approx’ing Max-Cut.

Unknown: (5/6, 3/4)-approx’ing Max-Cut?

But we’ll see, it’s “UG-hard”. [see Khot’02]

CSP approximation: Two algorithms

Random:

SDP (“semidefinite programming”):

Choose F : V → Σ at random.

Approximation quality is trivial to analyze.

Highly geometric algorithm.

Approximation quality is difficult to analyze.

For the CSP Max-Blah, define SDPBlah(c) to be

max s such that SDP is a (c, s)-approx.

([Raghavendra−Steurer’09] version)

CSP approximation: Two algorithms, for Max-Cut

Random: choose F at random. Each constraint (weight p1, p2, p3, p4, …) is satisfied w.p. 1/2, so E[Val(F)] = 1/2: a (c, 1/2)-approx. ∀ c.

SDP: a (c, SDPCut(c))-approx., where SDPCut(c) = arccos(1 − 2c)/π. [GW’94]
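The E[Val(F)] = 1/2 claim is easy to verify exactly by averaging over all assignments (toy instance and names are mine):

```python
from itertools import product

# Any weighted Max-Cut instance (weights sum to 1).
constrs = [(0, 1, 0.5), (1, 2, 0.25), (0, 2, 0.25)]

def val(F):
    return sum(p for (u, v, p) in constrs if F[u] != F[v])

# A uniformly random F cuts each edge with probability exactly 1/2,
# so the average of Val over all 2^3 assignments is 1/2.
avg = sum(val(F) for F in product([-1, 1], repeat=3)) / 2 ** 3
print(avg)  # 0.5
```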

thm: [Håstad’97]

For Max-3Lin(mod 2), for all η > 0,

(1 − η, 1/2 + η)-approx is NP-hard.

Sharp: Random alg. gives (c, 1/2)-approx ∀ c, and there is an efficient (1, 1)-approx alg. (ex.)

“Test ⇒ Hardness” Theorem

(Morally [Håstad’97], see also [BGS’95, KKMO’04])

thm: if ∃ “c vs. s Dict-vs.-QRand Test” on {−1,1}^n using Blah constrs, then (c−η, s+η)-approx’ing Max-Blah is UG-hard.

def: A (boolean) Dictator-vs.-Quasirandom Test

is a way of “spot-checking” boolean fcns,

distinguishing Dictators from QRand fcns.

Formally: it is a CSP where V = {−1, 1}^n.

constr1 ( f(−1,1,1,−1), f(1,1,−1,1), f(1,1,1,1) )   [weight p1]

constr2 ( f(−1,1,−1,1), f(1,1,1,−1), f(−1,1,1,1) )   [weight p2]

constr3 ( f(1,1,1,−1), f(−1,1,1,1), f(1,1,−1,1) )   [weight p3]

…

Completeness c,

meaning Val(f) ≥ c whenever f = Dicti.

Soundness s,

“meaning” Val(f) ≤ s + o(1) ∀ qrand f.

c vs. s Dictator-vs.-Quasirandom Test

1 vs. 2/π Theorem: ∑_i f̂({i})² is…

= 1 if f = Dictator_i

≤ 2/π + o(1) if f is qrand

Completeness c,

meaning Val(f) ≥ c whenever f = Dicti.

Soundness s,

“meaning” Val(f) ≤ s + o(1) ∀ qrand f.

e.g.: If there were a CSP on {−1,1}^n s.t. Val(f) = ∑_i f̂({i})², it would be a

“1 vs. 2/π Dict-vs.-Quasirandom Test”

c vs. s Dictator-vs.-Quasirandom Test

“Test ⇒ Hardness” Theorem

(Morally [Håstad’97], see also [BGS’95, KKMO’04])

thm: if ∃ “c vs. s Dict-vs.-QRand Test” on {−1,1}^n using Blah constrs, then (c−η, s+η)-approx’ing Max-Blah is UG-hard.

Comical sketch of Test ⇒ Hardness Thm

UG-problem G  →  Max-3Lin(mod 2) I  (efficient reduction)

(UG: a CSP w/ |Σ| = m large, bijective constrs)

Opt(G) ≥ 1 − o(1)

Opt(G) ≤ o(1)

F : V → [m], encoded as f_v : {−1,1}^m → {−1,1}

⇒ Opt(I) ≥ c − o(1)  (completeness)

⇒ Opt(I) ≤ s + o(1)  (soundness)

thm: [Håstad’97]

For Max-3Lin(mod 2), for all η > 0,

(1 − η, 1/2 + η)-approx is NP-hard.

Hardness of Max-3Lin(mod 2)

Need CSP over {−1,1}^n with “linear” constrs:

f(x)f(y)f(z) = ±1, i.e. f(x)+f(y)+f(z) = 0/1 (mod 2)

Want: Val(f) ≥ c := 1 − o(1) ∀ f = Dictator_i

Val(f) ≤ s := 1/2 + o(1) ∀ qrand f.

Then Test ⇒ Hardness Thm gives:

“(1−η, 1/2+η)-approx’ing Max-3Lin is UG-hard.”

Hardness of Max-3Lin(mod 2)

Test Idea 1: Pick x, y ~ {−1,1}^n unif, indep.

Define z := x∘y (i.e., z_i = x_i y_i).

Test constr. f(x)f(y)f(z) = 1.

Val(Dict_i) = Pr[x_i y_i z_i = 1] = 1. ✔

Val(Parity) = Pr[(∏ x_i)(∏ y_i)(∏ z_i) = 1] = 1. ✘

Bad, because Parity is quasirandom!

[BLR’90]

Hardness of Max-3Lin(mod 2)

Test Idea 2: Pick x, y ~ {−1,1}^n unif, indep. [Håstad’97]

Define z := x∘y∘μ, where each μ_i = −1 indep. w.p. δ/2 (noise).

Test constr. f(x)f(y)f(z) = 1.

Val(Dict_i) = Pr[x_i y_i z_i = 1] = Pr[μ_i = 1] = 1 − δ/2. ✔

Val(f) = Pr[f(x)f(y)f(z) = 1] = 1/2 + 1/2 ∑_S (1−δ)^|S| f̂(S)³

≤ 1/2 + o(1), if f is quasirandom. ✔ (ex.)

Final tweak: pick b ~ {−1,1} and test f(x)f(y)f(bz) = b instead, to rule out the quasirandom constant f ≡ 1.
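The Val(f) line above has the Fourier form Pr[f(x)f(y)f(z) = 1] = 1/2 + 1/2 ∑_S (1−δ)^{|S|} f̂(S)³ (with noise z = x∘y∘μ, each μ_i = −1 w.p. δ/2). A brute-force check at n = 3 (code and helper names mine, not from the talk):

```python
from itertools import product, combinations

n, delta = 3, 0.2
cube = list(product([-1, 1], repeat=n))
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

def chi(S, x):
    # The character x^S = prod_{i in S} x_i.
    p = 1
    for i in S:
        p *= x[i]
    return p

def fhat(f, S):
    # f_hat(S) = E[f(x) x^S].
    return sum(f(x) * chi(S, x) for x in cube) / len(cube)

def val_direct(f):
    # E over x, y uniform and noise mu (each mu_i = -1 w.p. delta/2)
    # of 1[f(x) f(y) f(x o y o mu) = 1].
    total = 0.0
    for x in cube:
        for y in cube:
            for mu in cube:
                p = 1.0
                for m in mu:
                    p *= delta / 2 if m == -1 else 1 - delta / 2
                z = tuple(a * b * m for a, b, m in zip(x, y, mu))
                total += p * (f(x) * f(y) * f(z) == 1)
    return total / len(cube) ** 2

def val_fourier(f):
    # 1/2 + 1/2 * sum_S (1 - delta)^|S| * f_hat(S)^3.
    return 0.5 + 0.5 * sum((1 - delta) ** len(S) * fhat(f, S) ** 3
                           for S in subsets)

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(round(val_direct(maj3), 6), round(val_fourier(maj3), 6))  # equal
```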

ex.

thm: [Khot−Kindler−Mossel−O.’04,

Mossel−O.−Oleszkiewicz’05]

For Max-Cut, for all η > 0,

(c, arccos(1−2c)/π + η)-approx

is UG-hard.

Sharp: SDPCut(c) = arccos(1−2c)/π, and [GW’94] showed the SDP achieves it.

“Test ⇒ Hardness” Theorem

(Morally [Håstad’97], see also [BGS’95, KKMO’04])

thm: if ∃ “c vs. s Dict-vs.-QRand Test” on {−1,1}^n using Blah constrs, then (c−η, s+η)-approx’ing Max-Blah is UG-hard.

Hardness of Max-Cut

Need CSP over {−1,1}^n with f(x) ≠ f(y) constrs. [KKMO’04]

Test Idea: Pick x ~ {−1,1}^n unif.

Define y by flipping each coord indep.: y_i = −x_i w.p. c, y_i = x_i w.p. 1 − c.

Test constr. f(x) ≠ f(y).

∀ i, Val(Dict_i) = Pr[x_i ≠ y_i] = c, by design.

Ex.: Val(Majority) = arccos(1−2c)/π + o(1)

Difficulty: Val(f) = Pr[f(x) ≠ f(y)] = 1/2 − 1/2 ∑_S (1−2c)^|S| f̂(S)², hard to bound for a general qrand f.

Conj: “Majority is the qrand fcn with largest Val.”
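The Difficulty line has a clean Fourier form: with y_i = −x_i independently w.p. c, Pr[f(x) ≠ f(y)] = 1/2 − 1/2 ∑_S (1−2c)^{|S|} f̂(S)². A brute-force check at n = 3 (code and names mine):

```python
from itertools import product, combinations

n, c = 3, 0.8
cube = list(product([-1, 1], repeat=n))
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

def fhat(f, S):
    # f_hat(S) = E[f(x) * prod_{i in S} x_i].
    total = 0
    for x in cube:
        p = f(x)
        for i in S:
            p *= x[i]
        total += p
    return total / len(cube)

def val_direct(f):
    # Pr[f(x) != f(y)], where each y_i = -x_i independently w.p. c.
    total = 0.0
    for x in cube:
        for flips in cube:  # flips_i = -1 means coordinate i is flipped
            p = 1.0
            for fl in flips:
                p *= c if fl == -1 else 1 - c
            y = tuple(a * fl for a, fl in zip(x, flips))
            total += p * (f(x) != f(y))
    return total / len(cube)

def val_fourier(f):
    # 1/2 - 1/2 * sum_S (1 - 2c)^|S| * f_hat(S)^2.
    return 0.5 - 0.5 * sum((1 - 2 * c) ** len(S) * fhat(f, S) ** 2
                           for S in subsets)

dict0 = lambda x: x[0]
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(round(val_direct(dict0), 4))   # 0.8  (= c, as designed)
print(round(val_fourier(maj3), 4))   # 0.752
```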

Conjecture proved in [MOO’05]. Sketch…

f : {−1, 1}^n → {−1, 1} is qrand; x ~ {−1, 1}^n, y is (2c−1)-correlated to −x. Maximize Pr[f(x) ≠ f(y)].

easy: the maximizing f is odd, f(−x) = −f(x). So equivalently: y is (2c−1)-correlated to +x, maximize Pr[f(x) = f(y)], and relax “odd” to E[f] = 0.

Generalize the “1 vs. 2/π Theorem”: the relevant sums act ≈ gaussian because all the f̂(S) are small, so replace the x_i by iid N(0,1). (“Invariance Principle”)

New problem: f is qrand, E[f] = 0; g ~ N(0, I_n), h is (2c−1)-correlated to g. Maximize Pr[f(g) = f(h)].

Luckily, [Borell’85] solved essentially this problem, via a symmetrization argument:

Maximizing f is the indicator of a halfspace. Maximum value is arccos(1−2c)/π.
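The value in Borell's Gaussian problem comes from the standard bivariate-normal orthant probability (Sheppard's formula): with h being ρ-correlated to g, ρ = 2c−1, a halfspace through the origin achieves

```latex
\Pr[\operatorname{sgn}(g_1) = \operatorname{sgn}(h_1)]
  \;=\; 1 - \frac{\arccos \rho}{\pi}
  \;=\; 1 - \frac{\arccos(2c-1)}{\pi}
  \;=\; \frac{\arccos(1-2c)}{\pi},
```

matching Val(Majority) and SDPCut(c).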

Recall SDP algorithm:

Given a CSP of type Φ, we defined SDPΦ(c) to be the max s such that the SDP is a (c, s)-approx.

[RS’09]: SDPΦ(c) = inf { Opt(I) : Relax(I) ≥ c }

Determining the I* achieving the inf: a hard geometric problem.

[Austrin’06]: Investigated Max-2Sat.

• SDP2Sat(c) partly analyzed in

[Lewin−Livnat−Zwick’02]

• Austrin: Designed Dict.-vs.-QRand Test

inspired by optimizer I* in [LLZ]

• Analyzed with Invariance Principle

→ converted to a Gaussian geom problem

• UG-hardness for 2Sat matching the SDP alg.

[Austrin’07]: Generic 2-ary CSP Φ, |Σ| = 2.

• Optimizers I* for SDPΦ(c) not understood.

• Austrin: Whatever the optimizer I* is, designed a Dict.-vs.-QRand Test “based on” I*.

• Analyzed with Invariance Principle.

• Thm: Assuming I* satisfies a certain condition (which it probably does), it’s a c−η vs. SDPΦ(c)+η Test.

⇒ “UG-hard to improve on SDP alg.”

What about |Σ| > 2, constraints on > 2 vbls?

[Mossel’07]: Souped up Invariance Principle,

allowed vector-valued random variables,

new techniques for analyzing tests on f : Σ^n → Σ, where |Σ| > 2.

[Raghavendra’08]: Let Φ be any CSP.

• Given the optimizer I* for SDPΦ(c), designed a Dict.-vs.-QRand Test based on I*.

• Analyzed via [Mossel’07]’s Invariance Princ.

Thm: UG-hard to improve on SDP, i.e., to (c−η, SDPΦ(c+η)+η)-approx Max-Φ.

• Mostly closes inapprox. of CSPs, except…

Except: Given CSP type Φ, what is SDPΦ(c)?

• [R’08]: Compute to ±ϵ in time doubly exponential in 1/ϵ.

• [O.−Wu’08]: For Max-Cut, c < .845,

compute to ±ϵ in time poly(1/ϵ).

• [Austrin−Mossel’08]: If Φ’s constraints “support a

pairwise-indep. distribution”,

then SDPΦ(c) = Rand Alg’s quality ∀ c.

• Φ = Bip.-Max-2Lin(mod 2): determine SDPΦ(c)

⇔ determine Grothendieck’s Constant KG

Conclusion

• Notion of quasirandom boolean functions is powerful for CSP inapproximability

• Assuming UG-hardness, Raghavendra mostly resolves the whole area... “in principle”

• What remains is a bunch of hard problems in Gaussian geometry!

A few open problems

• Bansal−Khot’s “Subcube Test Conjecture” (see Khot’s 2010 ICM survey)

• Aaronson’s Conjecture (see [Aaronson−Ambainis’09])

• Talagrand’s “Simple Exercise on Convolution” (reduce to Gaussian case via Invariance?)