
Local correctability of expander codes

Brett Hemenway Rafail Ostrovsky Mary Wootters

IAS

April 14, 2014

The point(s) of this talk

- Locally decodable codes are codes which admit sublinear-time decoding of small pieces of a message.
- Expander codes are a family of error correcting codes based on expander graphs.
- In this work, we show that (appropriately instantiated) expander codes are high-rate locally decodable codes.
- Only two families of codes were previously known in this regime [KSY '11, GKS '12].
- Expander codes (and the corresponding decoding algorithm and analysis) are very different from existing constructions!

Outline

1 Local correctability
  Definitions and notation
  Example: Reed-Muller codes
  Previous work and our contribution
2 Expander codes
3 Local correctability of expander codes
  Requirement for the inner code: smooth reconstruction
  Decoding algorithm
  Example instantiation: finite geometry codes
4 Conclusion


Error correcting codes

Alice sends a message x ∈ Σ^k, encoded as a codeword C(x) ∈ Σ^N.
The noisy channel delivers a corrupted codeword w ∈ Σ^N to Bob.
Bob's goal: recover the whole message x.


Locally decodable codes

Same setup: Alice sends C(x) ∈ Σ^N, and Bob receives a corrupted codeword w ∈ Σ^N.
Now Bob wants only a single message symbol x_i, and he makes only q queries to w.


Locally correctable codes

Same setup again, but now Bob wants a codeword symbol C(x)_i rather than a message symbol x_i, still making only q queries to w.

Locally correctable codes, sans stick figures

Definition. C is (q, δ, η)-locally correctable if for all i ∈ [N], for all x ∈ Σ^k, and for all w ∈ Σ^N with d(w, C(x)) ≤ δN,

    P[ Bob correctly guesses C(x)_i ] ≥ 1 − η.

Bob reads only q positions in the corrupted word w.

Local correctability vs. local decodability

When C is linear, local correctability implies local decodability: we may take the generator matrix G to be systematic, so that C(x) = Gx begins with an identity block and the message symbols of x appear directly among the symbols of C(x). Correcting those codeword positions decodes the message.

[Figure: C(x) = Gx with G in systematic form; the identity block at the top copies x into the codeword.]


Before we get too far: some notation

For a code C : Σ^k → Σ^N:

- The message length is k.
- The block length is N.
- The rate is k/N.
- The locality is q, the number of queries Bob makes.

Goal: large rate, small locality.


Example: Reed-Muller codes

- Message: a multivariate polynomial f ∈ F_q[z_1, . . . , z_m] of total degree d.
- Codeword: the evaluation of f at the points of F_q^m:

    C(f) = ( f(x) )_{x ∈ F_q^m}

For instance, with m = 2 and q = 2, Alice sends f(x, y) ↦ (f(0,0), f(0,1), f(1,0), f(1,1)).

Locally correcting Reed-Muller codes

The message is f ∈ F_q[z_1, . . . , z_m]; the codeword is (f(x))_{x ∈ F_q^m}.

- We want to correct C(f)_z = f(z).
- Choose a random line through z, with direction v, and consider the restriction g(t) = f(z + t·v) to that line.
- This is a univariate polynomial of degree at most d, and g(0) = f(z).
- Query all of the points on the line; interpolating g recovers g(0) = f(z), as in the sketch below.
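A minimal sketch of this corrector, under assumptions of my own (the talk fixes no representation): the field is a prime field F_p, and the codeword is stored as a dict mapping each point of F_p^m to its symbol.

```python
import random

def rm_correct(codeword, z, p):
    """Recover f(z) from an (uncorrupted) Reed-Muller codeword over F_p
    (p prime) by querying a random line through z and interpolating at
    t = 0. Works whenever deg(f) <= p - 2."""
    m = len(z)
    v = tuple(random.randrange(p) for _ in range(m))
    while not any(v):                       # the direction must be nonzero
        v = tuple(random.randrange(p) for _ in range(m))
    # Query the points z + t*v for t = 1, ..., p - 1.
    pts = [(t, codeword[tuple((zi + t * vi) % p for zi, vi in zip(z, v))])
           for t in range(1, p)]
    # Lagrange interpolation of g at t = 0; g(0) = f(z).
    total = 0
    for ti, yi in pts:
        num, den = 1, 1
        for tj, _ in pts:
            if tj != ti:
                num = num * (-tj) % p
                den = den * (ti - tj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total
```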


Resulting parameters

- Rate is (m+d choose m)/q^m (we needed d = O(q), so we can decode).
- Locality is q (the field size).

If we choose m constant, we get (see the check below):

- Rate is constant, but less than 1/2.
- Locality is N^{1/m} = N^ε.
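A quick arithmetic check of the "less than 1/2" claim, with numbers of my own choosing (m = 2, d = q/2):

```python
from math import comb

q, m = 256, 2
d = q // 2
print(comb(m + d, m) / q**m)   # ~0.128: constant, but well below 1/2
```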


Question:

Reed-Muller codes have locality N^ε and constant rate, but the rate is less than 1/2.

Are there locally decodable codes with locality N^ε and rate arbitrarily close to 1?


Previous work

Rate → 1 and locality N^ε:

- Multiplicity codes [Kopparty, Saraf, Yekhanin 2011]
- Lifted codes [Guo, Kopparty, Sudan 2012]
  These have decoders similar to RM: the queries form a good code.
- Expander codes [H., Ostrovsky, Wootters 2013]
  The decoder is similar in spirit to low-query decoders, but the queries will not form an error correcting code.

Another regime: the rate is bad (k = N / 2^{2^{O(√log N)}}), but the locality is 3:

- Matching vector codes [Yekhanin 2008, Efremenko 2009, ...]
  These decoders are different:
  - The queries cannot tolerate any errors.
  - There are so few queries that they are probably all correct.



Tanner codes [Tanner '81]

Given:

- a d-regular graph G with n vertices and N = nd/2 edges;
- an inner code C_0 with block length d over Σ;

we get a Tanner code C:

- C has block length N and alphabet Σ.
- Codewords are labelings of the edges of G.
- A labeling is in C if, at every vertex, the labels on the incident edges form a codeword of C_0 (see the membership test sketched below).
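A minimal sketch of that membership test, in my own formalization (the talk only states the condition). Note that the edges at each vertex must be taken in a fixed order, which matters for the rate, as discussed later:

```python
def is_tanner_codeword(labels, incident, inner_ok):
    """labels: dict mapping edge -> symbol.
    incident: dict mapping vertex -> its d incident edges, in a fixed order.
    inner_ok: predicate testing membership in the inner code C0."""
    return all(inner_ok([labels[e] for e in incident[v]]) for v in incident)
```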

Example [Tanner '81]: G is K_8, and C_0 is the [7, 4, 3] Hamming code.

N = (8 choose 2) = 28 and Σ = {0, 1}.

A codeword of C is a labeling of the edges of G: at each vertex, the seven incident edge labels form a codeword in the Hamming code.

[Figure: an edge 2-coloring of K_8, with red ↦ 0 and blue ↦ 1.]

(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1) ∈ C ⊂ {0, 1}^28


Encoding Tanner codes: encoding is easy!

1. Generate the parity-check matrix. This requires:
   - the edge-vertex incidence matrix of the graph;
   - the parity-check matrix of the inner code.
2. Calculate a basis for the kernel of the parity-check matrix.
3. This basis defines a generator matrix for the linear Tanner code.
4. Encoding is just multiplication by this generator matrix.

A sketch of steps 1-2 follows.
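The sketch below illustrates steps 1-2 over GF(2); it is my own illustration of the recipe, not code from the talk. Each vertex contributes the rows of the inner parity-check matrix H0, with coefficients placed on that vertex's incident edge positions:

```python
import numpy as np

def tanner_parity_check(incident, H0, N):
    """incident: dict vertex -> ordered list of d edge indices.
    H0: inner parity-check matrix as a list of 0/1 rows of length d.
    N: total number of edges."""
    rows = []
    for v, edges in incident.items():
        for h in H0:                        # one big row per inner check
            row = np.zeros(N, dtype=np.uint8)
            for coeff, e in zip(h, edges):  # spread the coefficients over
                row[e] = coeff              # the edges incident to v
            rows.append(row)
    return np.array(rows)

def kernel_basis_gf2(H):
    """Basis of {y : Hy = 0} over GF(2): the rows of a generator matrix."""
    H = H.copy() % 2
    m, n = H.shape
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if H[i, c]), None)
        if pr is None:
            continue
        H[[r, pr]] = H[[pr, r]]             # swap the pivot row up
        for i in range(m):
            if i != r and H[i, c]:
                H[i] ^= H[r]                # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        y = np.zeros(n, dtype=np.uint8)
        y[f] = 1
        for i, pcol in enumerate(pivots):
            y[pcol] = H[i, f]               # back-substitute over GF(2)
        basis.append(y)
    return basis
```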

Linearity: if the inner code C_0 is linear, so is the Tanner code C.

- C_0 = Ker(H_0) for some parity-check matrix H_0:

    x ∈ C_0  ⇔  H_0 x = 0

- So codewords of the Tanner code C are also defined by linear constraints. Writing y|_Γ(v) for the restriction of a labeling y to the edges incident to vertex v:

    y ∈ C  ⇔  for all v ∈ G,  H_0 (y|_Γ(v)) = 0


Example: edge-vertex incidence matrix of K_8

The matrix has 1 row for each vertex and 1 column for each edge. In lexicographic edge order, its 28 columns (each read top-to-bottom over the 8 vertices) are:

    11000000 10100000 10010000 10001000 10000100 10000010 10000001
    01100000 01010000 01001000 01000100 01000010 01000001
    00110000 00101000 00100100 00100010 00100001
    00011000 00010100 00010010 00010001
    00001100 00001010 00001001
    00000110 00000101
    00000011

- Columns have weight 2 (each edge hits two vertices).
- Rows have weight 7 (each vertex has degree seven).
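A sketch building this matrix and checking the two weight claims; my own illustration, with edges in the lexicographic order used above:

```python
from itertools import combinations
import numpy as np

n = 8
edges = list(combinations(range(n), 2))       # the 28 edges of K8
M = np.zeros((n, len(edges)), dtype=np.uint8)
for j, (u, v) in enumerate(edges):
    M[u, j] = M[v, j] = 1                     # each edge hits two vertices

assert all(M[:, j].sum() == 2 for j in range(len(edges)))  # column weight 2
assert all(M[i, :].sum() == n - 1 for i in range(n))       # row weight 7
```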

Example: parity-check matrix of a Tanner code (K_8 and the [7, 4, 3] Hamming code)

Parity-check matrix of the Hamming code:

    1 0 1 0 1 0 1
    0 1 1 0 0 1 1
    0 0 0 1 1 1 1

[Figure: animation building the Tanner code's parity-check matrix. Starting from the edge-vertex incidence matrix of K_8, each vertex (beginning with vertex 1) contributes three rows: the three Hamming parity checks, with their seven coefficients spread across the columns of that vertex's seven incident edges. Stacking 3 rows per vertex over all 8 vertices yields a 24 × 28 parity-check matrix for the Tanner code.]

If the inner code has good rate, so does the outer code (say that C_0 is linear):

- If C_0 has rate r_0, it satisfies (1 − r_0)d linear constraints.
- Each of the n vertices of G must satisfy these constraints.
- C is defined by at most n · (1 − r_0)d constraints.
- Length of C = N = # edges = nd/2.
- The rate of C is

    R = k/N ≥ (N − n · (1 − r_0)d) / N = 2r_0 − 1.
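Plugging in the running example (my own arithmetic check): the [7, 4, 3] Hamming inner code has r_0 = 4/7, so any Tanner code built from it on a 7-regular graph has rate at least 2 · (4/7) − 1 = 1/7, which the [49, 7, 17] code below meets exactly:

```python
r0 = 4 / 7                  # rate of the [7, 4, 3] Hamming code
print(2 * r0 - 1, 7 / 49)   # 0.1428... equals 0.1428...
```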


Better rate bounds?

- The lower bound R ≥ 2r_0 − 1 is independent of the ordering of edges around a vertex.
- Tanner already noticed that order matters. Let G be the complete bipartite graph with 7 vertices per side, and let C_0 be the [7, 4, 3] Hamming code. Then different "natural" orderings achieve a Tanner code with parameters:
  - [49, 16, 9]   (rate 16/49 ≈ .327)
  - [49, 12, 16]  (rate 12/49 ≈ .245)
  - [49, 7, 17]   (rate 7/49 ≈ .142; meets the lower bound of 2 · (4/7) − 1 = 1/7)

Expander codes

When the underlying graph is an expander graph, the Tanner code is an expander code.

- Expander codes admit very fast decoding algorithms [Sipser and Spielman 1996].
- Further improvements in [Sipser '96, Zemor '01, Barg and Zemor '02, '05, '06].


Main result

Given:

- a d-regular expander graph;
- an inner code of length d with smooth reconstruction.

Then:

- We will give a local-correcting procedure for this expander code.

Smooth reconstruction

Setup: c ∈ Σ^N is a codeword, and Bob wants c_i, making q queries.

Suppose that:

- Each of Bob's q queries is (close to) uniformly distributed (they don't need to be independent!).
- From the (uncorrupted) queries, he can always recover c_i.
- But! He doesn't need to tolerate any errors: if a query is corrupted, he may output some c̃_i ≠ c_i.

Then:

- We say that the code has a smooth reconstruction algorithm.


Smooth reconstruction, sans stick figures

Definition. A code C_0 ⊂ Σ^d has a q-query smooth reconstruction algorithm if, for all i ∈ [d] and for all codewords c ∈ C_0:

- Bob can always determine c_i from a set of queries c_{i_1}, . . . , c_{i_q};
- each index i_j is (close to) uniformly distributed in [d].

(An illustrative example follows.)
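An illustrative example of my own (not from the talk): the Hadamard code has a classic 2-query smooth reconstruction. Index the codeword by all y ∈ F_2^m with c_y = <x, y>; then c_j XOR c_{i XOR j} = c_i for any j, and for a uniformly random j each queried index is uniform on its own:

```python
import random

def hadamard_reconstruct(c, i, m):
    """c: list of 2**m bits with c[y] = <x, y> over F_2."""
    j = random.randrange(2 ** m)       # each of the two queried indices
    return c[j] ^ c[i ^ j]             # is uniform; their XOR equals c_i
```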


Decoding algorithm: main idea

[Figure: we want to correct the label on one edge. Repeatedly apply the inner code's smooth reconstruction, walking outward through the graph. For this diagram, q = 2.]

The expander walk as a tree

[Figure: unrolling the walk gives a q-ary tree of depth O(log n), rooted at the edge we want to correct (the inner code has q-query reconstruction).]

True statements:

- The symbols on the leaves determine the symbol on the root.
- There are q^{O(log n)} ≈ N^ε leaves.
- The leaves are (nearly) uniformly distributed in G.

Idea: query the leaves!

Problems:

- There are errors on the leaves.
- Errors on the leaves propagate.


Correcting the last layer

[Figure: the depth-O(log n) tree, extended by O(log n) further steps, with three kinds of edges:]

- edges to get us to uniform locations in the graph (not read);
- the edge we want to learn (not read);
- edges for error correction (read).


Why should this help?

False statement:

- Now the queries can tolerate a few errors.

True statements:

- A heavily corrupted path in the tree is basically the only thing that can go wrong.
- Because everything in sight is (nearly) uniform, it probably won't go wrong.


Decoding algorithm

Each leaf edge queries its symbol and thinks to itself; each second-level edge reads its symbol and thinks to itself; and so on up the tree.

The local corrector for q = 2 is XOR:

    (0, 0) ↦ 0    (0, 1) ↦ 1    (1, 0) ↦ 1    (1, 1) ↦ 0

[Figure: a binary tree whose leaves read 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 and whose second-level edges read 1 0 0 1 0 1 1 0.]

- A leaf that read 1 thinks: "If my correct value were 0, there would be some path below me with 1 error. If my correct value were 1, there would be some path below me with 0 errors."
- A second-level edge thinks: "If I were 0... ⇒ path with two errors... If I were 1... ⇒ or path with no errors." Another: "If I were 1... ⇒ path with one error... If I were 0... ⇒ or path with two errors."
- Propagating upward, the root concludes: "If my correct value were 0, there would be some path below me with Ω(log n) errors. If my correct value were 1, there would be some path below me with ≥ 7 errors." TRIUMPHANTLY RETURN 1!

This only fails if there exists a path that is heavily corrupted, and heavily corrupted paths occur with exponentially small probability. The sketch below formalizes the bookkeeping.
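A minimal sketch of that bookkeeping: my own formalization of the slide's reasoning, for q = 2 with the XOR corrector above, with every edge (leaf and internal) reading its own symbol. Each edge computes, for each hypothesis b about its correct value, the fewest errors some path below it would be forced to contain, and the root keeps the hypothesis that blames fewer errors.

```python
def path_cost(t):
    """t is ('leaf', read_bit) or ('node', read_bit, left, right).
    Returns {b: min, over labelings consistent with this edge having
    correct value b, of the max number of disagreements with the read
    bits along any single path below (and including) this edge}."""
    if t[0] == 'leaf':
        return {b: int(b != t[1]) for b in (0, 1)}
    _, read, left, right = t
    cl, cr = path_cost(left), path_cost(right)
    # Children values (b1, b2) are consistent with b iff b1 XOR b2 == b.
    return {b: int(b != read) +
               min(max(cl[b1], cr[b1 ^ b]) for b1 in (0, 1))
            for b in (0, 1)}

def correct_root(t):
    c = path_cost(t)
    return 0 if c[0] <= c[1] else 1

# The slide's example: an edge that read 1, with children reading 0 and 1.
# Hypothesis 0 forces a path with two errors; hypothesis 1 forces none.
print(correct_root(('node', 1, ('leaf', 0), ('leaf', 1))))  # -> 1
```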



One choice for inner code: based on affine geometry
(See [Assmus, Key '94, '98] for a nice overview.)

- Let L_1, . . . , L_t be the r-dimensional affine subspaces of F_q^m, and consider the code of length q^m with the t × q^m parity-check matrix H given by

    H_{i,x} = 1 if x ∈ L_i, and 0 if x ∉ L_i.

- Smooth reconstruction: to learn the coordinate indexed by x ∈ F_q^m,
  - pick a random r-flat L_i containing x;
  - query all of the points in L_i (the q^r nonzeros in row i).
- Observe: this is not a very good LCC! (A small instantiation is sketched below.)
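A sketch of this inner code and its smooth reconstruction, in my own instantiation: r = 1 (lines) over a prime field F_p, with the code taken over F_p so that each parity check says a line's symbols sum to 0:

```python
import itertools, random

p, m = 3, 2
points = list(itertools.product(range(p), repeat=m))
index = {x: i for i, x in enumerate(points)}
dirs = [v for v in itertools.product(range(p), repeat=m) if any(v)]

def line(x, v):
    """The affine line {x + t*v : t in F_p}, as a frozenset of points."""
    return frozenset(tuple((xi + t * vi) % p for xi, vi in zip(x, v))
                     for t in range(p))

flats = {line(x, v) for x in points for v in dirs}       # all 1-flats
H = [[1 if x in L else 0 for x in points] for L in flats]

def reconstruct(c, x):
    """Smooth reconstruction of c[x]: pick a random line through x and
    query its other points; on a codeword they sum to -c[x] mod p."""
    L = line(x, random.choice(dirs))
    return (-sum(c[index[y]] for y in L if y != x)) % p
```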


One good instantiation

Graph:

- a Ramanujan graph.

Inner code:

- a finite geometry code.

Results: for any α, ε > 0 and for infinitely many N, we get a code with block length N which

- has rate 1 − α;
- has locality (N/d)^ε;
- tolerates a constant error rate.


Summary

- When the inner code has smooth reconstruction, we give a local-decoding procedure for expander codes.
- This gives a new (and yet old!) family of linear locally correctable codes of rate approaching 1.

Open questions

- Can we use expander codes to achieve local correctability with lower query complexity?
- Can we use inner codes with rate < 1/2?

The end

Thanks!