Introduction
Correlation Distillation
Elchanan Mossel
April 13, 2015
Elchanan Mossel Correlation Distillation
Introduction Motivation
Executive Summary
To distill correlation you need to be stable.
Sometimes balls, sometimes cubes are more able.
But in Gaussian space - we don't know - you ask why?
It's because the optimal partition is not always a Y.
The Correlation Distillation Problem
Let X and Y be two random variables and µ a distribution on [q].
Goal: find f : Ω_X → [q], g : Ω_Y → [q] such that
f(X), g(Y) ∼ µ and P[f(X) = g(Y)] is maximized.
Can be formulated as a question about noise stability.
Or as a Shannon-theory problem: decoding randomness from a physical source.
Motivation 2: hardness of approximation (Håstad, Khot, etc.).
Motivation 3: robustness of voting (Kalai).
Motivation 4: communication complexity (Canonne-Guruswami-Meka-Sudan-14).
If there's time left - also something about tail spaces.
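For intuition, the problem is easy to state computationally. A minimal brute-force sketch (not from the talk; the joint distribution, alphabet sizes, and tolerance below are illustrative assumptions) that searches all labelings f, g whose outputs have law µ:

```python
import itertools
import numpy as np

def distill(joint, q, mu, tol=1e-9):
    """Brute-force the distillation problem on tiny finite alphabets:
    maximize P[f(X) = g(Y)] over all labelings whose outputs have law mu."""
    nx, ny = joint.shape
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    ok = lambda lab, marg: all(
        abs(sum(m for m, l in zip(marg, lab) if l == a) - mu[a]) < tol
        for a in range(q))
    best, best_fg = -1.0, None
    for f in itertools.product(range(q), repeat=nx):
        if not ok(f, px):
            continue
        for g in itertools.product(range(q), repeat=ny):
            if not ok(g, py):
                continue
            agree = sum(joint[i, j] for i in range(nx)
                        for j in range(ny) if f[i] == g[j])
            if agree > best:
                best, best_fg = agree, (f, g)
    return best, best_fg

# rho-correlated uniform bits: P[X = Y] = (1 + rho)/2.
rho = 0.6
d, o = (1 + rho) / 4, (1 - rho) / 4
best, (f, g) = distill(np.array([[d, o], [o, d]]), q=2, mu=[0.5, 0.5])
print(best, f, g)  # identity labelings: best = (1 + rho)/2
```

For two correlated uniform bits the search recovers the folklore answer: use the bit itself (up to relabeling).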
Correlation Distillation - known cases
If q = 2 and X, Y ∼ N(0, I_n) with E[XY^T] = ρI_n and ρ > 0:
Borell-85: the optimum is f = g = indicator of a half-space.
If q = 2, µ = 0.5(δ_{−1} + δ_1) and X, Y ∈ {−1, 1}^n with X and Y ρ-correlated:
Folklore: f = g = x_1 is optimal.
These are essentially the only cases known exactly.
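The Gaussian half-space value has a closed form worth recording (a standard Gaussian quadrant computation, not stated on the slide): for ρ-correlated standard Gaussians, the half-space pair achieves P[f(X) = g(Y)] = 1/2 + arcsin(ρ)/π. A quick Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_samples = 0.5, 400_000

# rho-correlated standard Gaussian pairs: Y = rho*X + sqrt(1-rho^2)*Z.
x = rng.standard_normal(n_samples)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_samples)

# f = g = indicator of the half-space {x > 0} (Borell's optimizer).
empirical = np.mean((x > 0) == (y > 0))
exact = 0.5 + np.arcsin(rho) / np.pi  # Gaussian quadrant probability
print(empirical, exact)  # both close to 0.667
```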
Correlation Distillation and Hypercontractivity
Let X, Y be ρ = (1 + ε)/2 correlated in {0, 1}^n.
Let µ be uniform on {0, 1}^k.
f = g = (x_1, . . . , x_k) =⇒ P[f(X) = g(Y)] = (1 − ε)^k ∼ e^{−εk}.
How tight is the cube partition?
Bogdanov-Mossel-12, by hypercontractivity:
P[f(X) = g(Y)] = Σ_z P[f(X) = g(Y) = z] ≤ 2^k ‖1(f = z)‖²_{1+ρ} = 2^{k(ρ−1)/(ρ+1)} = 2^{−kε/(1−ε)} ∼ 2^{−εk}.
BM12: P[f(X) = g(Y)] ≥ 0.1 (kε)^{−1/2} 2^{−kε/(1−ε)} for f = g = a partition of the cube into Hamming balls.
Ball partitions are better than cube partitions!
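The cube partition's agreement probability factors across coordinates. A small simulation sketch (the flip-probability noise model and parameters here are illustrative assumptions, chosen so the product formula is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
k, delta, n_samples = 4, 0.2, 300_000

# Illustrative noise model (an assumption for this sketch): Y is X with
# each bit flipped independently with probability delta, so the cube
# partition f = g = (x_1, ..., x_k) agrees iff no bit flips.
X = rng.integers(0, 2, size=(n_samples, k))
Y = X ^ (rng.random((n_samples, k)) < delta)
cube_agree = np.mean((X == Y).all(axis=1))
print(cube_agree, (1 - delta) ** k)  # both close to 0.41
```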
From binary to q-ary
Let X, Y be ρ = (1 − ε) correlated in [q]^n.
Let µ be uniform on [q]^k.
f = g = (x_1, . . . , x_k) =⇒ P[f(X) = g(Y)] = (1 − ε + ε/q)^k.
How tight is the cube partition?
Chan-Mossel-Neeman-13, using hypercontractivity:
P[f(X) = g(Y)] ≤ (1 − ε)^k (1 + δ(q))^k, where δ(q) → 0 as q → ∞.
So cube partitions are tight as q → ∞.
CMN-13: any construction based on Hamming balls satisfies
P[f(X) = g(Y)] ≤ q^{−ckε}, c > 0.
Cube partitions are better than ball partitions!
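Under the standard (1 − ε)-correlated model on [q] (copy each coordinate with probability 1 − ε, else resample it uniformly), the per-coordinate agreement is 1 − ε + ε/q, which gives the cube-partition value above. A Monte Carlo check of that formula:

```python
import numpy as np

rng = np.random.default_rng(2)
q, k, eps, n_samples = 5, 3, 0.3, 300_000

# Each coordinate of Y copies X with probability 1 - eps, else is
# resampled uniformly from [q] (so it may coincide by chance: +eps/q).
X = rng.integers(0, q, size=(n_samples, k))
resample = rng.random((n_samples, k)) < eps
Y = np.where(resample, rng.integers(0, q, size=(n_samples, k)), X)

# Cube partition f = g = first k coordinates: agree iff all match.
agree = np.mean((X == Y).all(axis=1))
print(agree, (1 - eps + eps / q) ** k)  # both close to 0.44
```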
Correlation Distillation in Gaussian Space
Theorem (Borell, Sudakov-Tsirelson '75)
In Gaussian space, the sets of a given measure that minimize Gaussian surface area are half-spaces.
Theorem (Borell '85)
In Gaussian space, the sets of a given measure that maximize noise stability are half-spaces.
Corollary
If µ is a measure on 2 points and X, Y ∼ N(0, I) are ρ > 0 correlated, then f = g = half-space is an optimal solution to correlation distillation.
The Euclidean Picture
Theorem (Archimedes (−200**), Schwarz (18**))
The body of a given measure with minimal surface area is a ball.
Theorem (Plateau, Boys (18**), Hutchings-Morgan-Ritoré-Ros (2002))
In the case of two bodies the answer is the double bubble.
The Ys?
Theorem (Corneli, Corwin, Hurder, Sesum, Xu, Adams, Davis, Lee, Visocchi, Hoffman '08)
For q = 3 and f : R^n → [3] with 0.332 ≤ P[f = a] ≤ 0.334 for all a, the shifted Y minimizes Gaussian surface area.
Theorem (Heilman '13)
If µ is uniform over [3] and n < n(ρ), then the standard Y is a solution to the correlation distillation problem.
Theorem (Heilman-Mossel-Neeman '14)
For every µ ≠ (1/3, 1/3, 1/3) on [3] and any ρ ∈ (0, 1), shifted Ys in Gaussian space are not a solution of the correlation distillation problem.
A Shifted Simplex
[Figure: a simplex partition into cones B1 + y, B2 + y, B3 + y, shifted by y away from the origin 0.]
And the balanced case?
Still don't know.
Borell =⇒ simplex partitions are optimal up to a constant factor for any q (KKMO-07).
Heilman-13: standard simplexes are most stable in R^n for bounded dimensions n ≤ n(ρ).
So far: no techniques / intuitions on what to do if that's the case.
Sketch of the proof that shifted Ys are not optimal
WLOG assume one arm is parallel, but not equal, to the y axis.
By a first-variation argument it suffices to show that T_ρ(1_{B1+y} − 1_{B2+y})(x) is not constant for x ∈ (B1 + y) ∩ (B2 + y).
Let f(t) := T_ρ(1_{B1+y} − 1_{B2+y}) restricted to the line separating B1 and B2. Then:
|lim_{t→+∞} f(t)| = 2γ_1[0, c], where c(ρ) ≠ 0.
lim_{t→−∞} f(t) = 0.
f(t) is a holomorphic function of t for all complex t.
Sketch of the proof
The last assertion is new in isoperimetric theory.
The first two assertions have the following picture in mind.
[Figure: B1 + y and B2 + y separated by the line L0, together with the limiting picture as t → ∞.]
Executive Summary
To distill correlation you need to be stable.
Sometimes balls, sometimes cubes are more able.
But in Gaussian space - we don't know - you ask why?
It's because the optimal partition is not always a Y.
Open Problems
Find better correlation distillation for:
Gaussian space, q ≥ 3.
{0, 1}^n → {0, 1}^k (improve polynomial factors).
[q]^n → [q]^k (get the right exponent for every q).
Other correlated variables.
When do there exist sets / small sets which are tight for hypercontractive / log-Sobolev inequalities?
Executive Summary - Part 2
Based on joint work with Steven Heilman and Krzysztof Oleszkiewicz.
The tale of the tail:
Tails diminish faster - so it seems.
Yet their influence may be dim.
And their boundaries almost unseen.
Boolean Functions and Tail Spaces
The Fourier expansion of f : {−1, 1}^n → R is:
f(x) = Σ_{S⊆[n]} f̂(S) x_S, where x_S = Π_{i∈S} x_i.
L^{>k} := {f : f̂(S) = 0 ∀ |S| ≤ k}.
L^{>k}_+ := {f : f̂(S) = 0 ∀ 0 < |S| ≤ k}.
Question: what information can be extracted from f ∈ L^{>k}?
Note in particular: f ∈ L^{>k} is stronger than "superconcentration".
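The Fourier coefficients in the expansion above can be computed by direct enumeration for small n. A minimal sketch (the example monomial is an illustrative assumption) confirming that a degree-3 monomial lies in the tail space L^{>2}:

```python
import itertools
import numpy as np

n = 4
pts = list(itertools.product((-1, 1), repeat=n))

def fourier_coeffs(f):
    """All Fourier coefficients f^(S) = E[f(x) x_S] of f: {-1,1}^n -> R,
    computed by brute force over the 2^n points."""
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            coeffs[S] = np.mean([f(x) * np.prod([x[i] for i in S] or [1])
                                 for x in pts])
    return coeffs

# The monomial x1*x2*x3 lies in L^{>2}: every coefficient with |S| <= 2
# vanishes, and all its Fourier mass sits on S = {1, 2, 3}.
coeffs = fourier_coeffs(lambda x: x[0] * x[1] * x[2])
print(coeffs[(0, 1, 2)])  # 1.0
```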
The Bonami-Beckner Operator and Contraction
P_t re-randomizes each coordinate with probability 1 − e^{−t}:
(P_t f)(x) := E[f(Y) | Y ∼_{e^{−t}} x] = Σ_S e^{−t|S|} f̂(S) x_S.
Clearly, ‖P_t f‖_2 ≤ e^{−tk} ‖f‖_2 for f ∈ L^{≥k}.
Mendel and Naor: what about other norms?
Motivation: the study of "super-expanders" (with respect to all convex spaces).
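Since P_t acts diagonally on the Fourier side, the L_2 contraction on the tail space is immediate to verify numerically. A brute-force sketch (the dimension, level k, time t, and random coefficients are illustrative assumptions):

```python
import itertools
import numpy as np

n, k, t = 4, 2, 0.7

# Characters x_S on {-1,1}^n, tabulated by enumeration.
pts = np.array(list(itertools.product((-1, 1), repeat=n)))
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
chars = np.array([[np.prod(x[list(S)]) if S else 1 for S in subsets] for x in pts])

# Random f in L^{>=k}: Fourier mass only on levels |S| >= k.
rng = np.random.default_rng(3)
fhat = np.array([rng.standard_normal() if len(S) >= k else 0.0 for S in subsets])
f = chars @ fhat

# P_t damps the level-|S| coefficient by e^{-t|S|}.
Ptf = chars @ (fhat * np.exp([-t * len(S) for S in subsets]))

norm2 = lambda g: np.sqrt(np.mean(g ** 2))
print(norm2(Ptf) <= np.exp(-t * k) * norm2(f))  # True
```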
Contraction in Tail Spaces
For f ∈ L^{≥k} and p > 1:
‖P_t f‖_p ≤ e^{−c(p) k min(t, t²)} ‖f‖_p, p ≥ 2 (Meyer, Mendel-Naor).
‖P_t f‖_p ≤ e^{−c(p) k t} ‖f‖_p, p > 1 (Conjecture: Mendel-Naor).
‖P_t f‖_p ≤ e^{−c(p) k t} ‖f‖_p, if f(x) ∈ {−1, 0, 1} for all x (HMO).
An Easy Proof
For f ∈ L^{≥k}:
‖P_t f‖_p ≤ e^{−c(p) k t} ‖f‖_p, p > 1, f ∈ {−1, 0, 1} (HMO).
For p ≥ 2 (using |P_t f| ≤ 1):
E[|P_t f|^p] ≤ E[|P_t f|²] ≤ e^{−2tk} E[f²] = e^{−2tk} E[|f|^p].
For 1 < p < 2, by the (1/(2 − p), 1/(p − 1)) Hölder inequality:
E[|P_t f|^p] = E[|P_t f|^{2−p} |P_t f|^{2p−2}] ≤ E[|P_t f|]^{2−p} E[|P_t f|²]^{p−1}
≤ E[|f|]^{2−p} e^{−2tk(p−1)} E[|f|²]^{p−1} = e^{−2tk(p−1)} E[|f|^p].
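The 1 < p < 2 bound can be sanity-checked on a concrete {−1, 1}-valued tail function. A sketch (the monomial and parameters are illustrative assumptions; for f = x1·x2 the heat flow is exact: P_t f = e^{−2t} x1·x2):

```python
import itertools
import numpy as np

n, k, t, p = 4, 2, 0.5, 1.5

# f = x1*x2 is {-1,1}-valued and lies in L^{>=2}, with P_t f = e^{-2t} f.
pts = np.array(list(itertools.product((-1, 1), repeat=n)))
f = pts[:, 0] * pts[:, 1]
Ptf = np.exp(-2 * t) * f

# Check E[|P_t f|^p] <= e^{-2tk(p-1)} E[|f|^p] for this example.
lhs = np.mean(np.abs(Ptf) ** p)
rhs = np.exp(-2 * t * k * (p - 1)) * np.mean(np.abs(f) ** p)
print(lhs <= rhs)  # True
```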
Contraction in the first (k = 1) tail space
For f ∈ L^{≥1}({−1, 1}^n) (i.e. E[f] = 0):
‖P_t f‖_p ≤ e^{−c(p) min(t, t²)} ‖f‖_p, p ≥ 2 (Meyer, Mendel-Naor).
‖P_t f‖_p ≤ e^{−c(p) t} ‖f‖_p, p > 1 (Conjecture: Mendel-Naor).
‖P_t f‖_p ≤ M(n) e^{−δ(n) t} ‖f‖_p, p > 1 (Hino).
‖P_t f‖_p ≤ e^{−c(p) t} ‖f‖_p, p > 1 (HMO).
A Harder Proof
For f ∈ L^{≥1}({−1, 1}^n) (i.e. E[f] = 0):
‖P_t f‖_p ≤ e^{−c(p) t} ‖f‖_p, p > 1 (HMO).
Proof based on a new type of Poincaré inequality when E[f] = 0:
E[|f|^{p−1} sgn(f) Lf] ≥ r(p) E[|f|^p], r(p) := (2p − 2)/(p² − 2p + 2), p > 1.
So (d/dt) ( e^{r(p)t} E[|P_t f|^p] ) equals
e^{r(p)t} ( r(p) E[|P_t f|^p] − p E[|P_t f|^{p−1} sgn(P_t f) L P_t f] ) ≤ 0.
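For p = 2 the Poincaré inequality above reduces to the classical spectral-gap inequality E[f·Lf] ≥ E[f²] for mean-zero f (since r(2) = 1). A brute-force check of that classical case (the dimension and random coefficients are illustrative assumptions):

```python
import itertools
import numpy as np

n, p = 3, 2.0

pts = np.array(list(itertools.product((-1, 1), repeat=n)))
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
chars = np.array([[np.prod(x[list(S)]) if S else 1 for S in subsets] for x in pts])

rng = np.random.default_rng(4)
fhat = rng.standard_normal(len(subsets))
fhat[0] = 0.0  # kill the empty-set coefficient: E[f] = 0
f = chars @ fhat
Lf = chars @ (fhat * np.array([len(S) for S in subsets]))  # L x_S = |S| x_S

r_p = (2 * p - 2) / (p ** 2 - 2 * p + 2)  # r(2) = 1
lhs = np.mean(np.abs(f) ** (p - 1) * np.sign(f) * Lf)
rhs = r_p * np.mean(np.abs(f) ** p)
print(lhs >= rhs)  # True
```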
Isoperimetry in Tail Spaces
Let L^{>k}_+ := {f : f̂(S) = 0 ∀ 0 < |S| ≤ k}.
Harper's isoperimetric inequality: for f : {−1, 1}^n → {0, 1}:
Σ_{i=1}^n I_i(f) ≥ (2/log 2) E[f] log(1/E[f]).
Note: Σ_{i=1}^n I_i(f) = Σ_S |S| f̂²(S).
Kalai: does there exist ω(k) → ∞ such that
Σ_{i=1}^n I_i(f) ≥ ω(k) E[f] log(1/E[f]) for f ∈ L^{>k}_+?
HMO: No.
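Subcube indicators show Harper's inequality is sharp: for f = AND of m bits the two sides agree exactly. A brute-force check (the values of n and m are illustrative assumptions):

```python
import itertools
import numpy as np

n, m = 5, 3  # f = AND of the first m of n bits (a subcube indicator)

pts = list(itertools.product((0, 1), repeat=n))
f = {x: int(all(x[:m])) for x in pts}

def influence(i):
    """I_i(f) = P[f changes when bit i is flipped]."""
    flip = lambda x: x[:i] + (1 - x[i],) + x[i + 1:]
    return np.mean([f[x] != f[flip(x)] for x in pts])

total = sum(influence(i) for i in range(n))        # = m * 2^{-(m-1)}
Ef = np.mean(list(f.values()))                     # = 2^{-m}
harper = (2 / np.log(2)) * Ef * np.log(1 / Ef)
print(total, harper)  # equal: subcubes are tight for Harper
```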
KKL in Tail Spaces
KKL Thm: for f : {0, 1}^n → {0, 1} it holds that max_i I_i(f) ≥ c Var[f] log n / n.
Recall: Σ_{i=1}^n I_i(f) = Σ_S |S| f̂²(S).
Kalai, Hatami: does there exist ω(k) → ∞ such that
max_i I_i(f) ≥ ω(k) Var[f] log n / n for f ∈ L^{>k}?
HMO: No.
On codes and tails
Work with {0, 1}^n = F_2^n. Given a linear code C ⊆ F_2^n, let w(C) := min{‖x‖_1 : 0 ≠ x ∈ C} and
C^⊥ := {y ∈ F_2^n : ⊕_{i=1}^n y_i x_i = 0 ∀ x ∈ C}.
MacWilliams identities: 1_C ∈ L^{>k}_+ iff w(C^⊥) > k.
Claim: there exist γ > 1 and functions g : {0, 1}^{γm} → {0, 1} with g ∈ L^{>m}_+ and 2^{−3m} ≤ P[g = 1] ≤ 2^{−m}.
Pf: let g be the indicator of the dual of a good code.
Note that Σ_{i=1}^{γm} I_i(g) ≤ γm P[g = 1] ≤ γ E[g] log(1/E[g]).
Tight for Harper's inequality up to constants.
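The code/tail-space connection is concrete: the Fourier support of the indicator 1_C of a linear code C is exactly the dual code C^⊥, which is the MacWilliams-type fact used above. A toy verification (the [4,2] code below, which happens to be self-dual, is an illustrative assumption):

```python
import itertools

n = 4
gens = [(1, 1, 0, 0), (0, 0, 1, 1)]  # generators of a toy [4,2] code

# Span of the generators over F_2.
C = set()
for bits in itertools.product((0, 1), repeat=len(gens)):
    word = (0,) * n
    for g, b in zip(gens, bits):
        if b:
            word = tuple(w ^ gi for w, gi in zip(word, g))
    C.add(word)

# Dual code: all y orthogonal (mod 2) to every codeword.
cube = list(itertools.product((0, 1), repeat=n))
dual = {y for y in cube
        if all(sum(a * b for a, b in zip(y, x)) % 2 == 0 for x in C)}

# Fourier coefficient of 1_C at frequency y (subset <-> 0/1 vector):
# hat(1_C)(y) = E[1_C(x) (-1)^{<x,y>}], nonzero exactly on the dual.
def fhat(y):
    return sum((x in C) * (-1) ** sum(a * b for a, b in zip(x, y))
               for x in cube) / 2 ** n

support = {y for y in cube if abs(fhat(y)) > 1e-12}
print(support == dual)  # True
```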
The Coding Tribes Function
KKL Thm: for f : {−1, 1}^n → {−1, 1} it holds that max_i I_i(f) ≥ c Var[f] log n / n.
Ben-Or and Linial's Tribes function f is tight for the Thm:
f(x) = g(x_1, . . . , x_r) ∨ . . . ∨ g(x_{(b−1)r+1}, . . . , x_{br}), br = n, r = Θ(log n),
g = AND_{log₂ n − log log n} =⇒ E[g] = Θ(log n / n) =⇒ Var(f) = Θ(1).
Then: max_i I_i(f) ≤ O(P[g = 1]) ≤ O(log n / n).
HMO: g := dual of a good code on O(log n) bits with E[g] = Θ(log n / n).
g ∈ L^{k = c log n}_+ =⇒ f ∈ L^{k = c log n}_+ by writing out the Fourier expression.
Tight for KKL.
More work: make the function balanced.
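The tribes influences have a clean closed form that explains the tightness: bit i is pivotal iff the rest of its tribe is all ones and no other tribe is all ones, so I_i = 2^{−(r−1)}(1 − 2^{−r})^{b−1}. A brute-force check on a tiny instance (the block sizes are illustrative assumptions):

```python
import itertools
import numpy as np

r, b = 2, 2          # tiny tribes: n = b*r = 4 bits, two tribes of size 2
n = b * r

# Ben-Or--Linial tribes: OR of ANDs over disjoint blocks.
def tribes(x):
    return int(any(all(x[j * r:(j + 1) * r]) for j in range(b)))

pts = list(itertools.product((0, 1), repeat=n))

def influence(i):
    flip = lambda x: x[:i] + (1 - x[i],) + x[i + 1:]
    return np.mean([tribes(x) != tribes(flip(x)) for x in pts])

# Pivotality: own tribe's other bits all ones, every other tribe not all ones.
exact = 2.0 ** (-(r - 1)) * (1 - 2.0 ** (-r)) ** (b - 1)
print(influence(0), exact)  # both 0.375
```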
Executive Summary - Part 2
The tale of the tail:
Tails diminish faster - so it seems.
Yet their influence may be dim.
And their boundaries almost unseen.
Questions??
Elchanan Mossel Correlation Distillation