
Network Coding: Mixin’ it up

Sidharth Jaggi

Michelle Effros

Michael Langberg

Tracey Ho

Philip Chou

Kamal Jain

Muriel Médard

Peter Sanders

Ludo Tolhuizen

Sebastian Egner

Network Coding

R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204-1216, 2000.

http://tesla.csl.uiuc.edu/~koetter/NWC/Bibliography.html: 131 papers as of last night (≈2 years)

NetCod Workshop, DIMACS working group, ISIT 2005 - 4+ sessions

Several patents, theses

“The core notion of network coding is to allow and encourage mixing of data at intermediate network nodes.”

(Network Coding homepage)

But . . . what is it?

Point-to-point flows

C = max(flow from s to t) = min over s-t cuts of (cut size)

Edge-disjoint paths P1, P2, …, PC

Min-cut Max-flow (Menger’s) Theorem [M27]

Ford-Fulkerson Algorithm [FF62]

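As a concrete illustration of the max-flow computation behind this theorem, here is a minimal sketch of Ford-Fulkerson with BFS augmenting paths (the Edmonds-Karp variant); the tiny unit-capacity graph, node names, and dict-of-dicts representation are illustrative choices, not from the slides.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Max s-t flow via repeated BFS augmenting paths (Edmonds-Karp, a
    polynomial-time refinement of Ford-Fulkerson)."""
    # capacity: dict of dicts, capacity[u][v] = capacity of directed edge u -> v
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)   # add reverse edges
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:                  # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                                   # no path left: flow = min cut
        path, v = [], t
        while parent[v] is not None:                      # walk back from t to s
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                                 # augment along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Unit-capacity example with two edge-disjoint s-t paths, so C = 2.
g = {'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}, 't': {}}
print(max_flow(g, 's', 't'))  # 2
```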

Multicasting

Webcasting

P2P networks

Sensor networks

(Figure: sources s1, …, s|S| and sinks t1, t2, …, t|T| connected through the network.)

Justifications revisited - I

(Figure: the butterfly network of [ACLY00]. Source s must deliver both bits b1 and b2 to sinks t1 and t2, but the middle unit-capacity link can carry only one of them, so routing alone cannot give both sinks rate 2. If that link carries the coded bit b1+b2 instead, t1 combines it with b1 and t2 combines it with b2, and both sinks recover (b1, b2).)
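The coding step in the figure above can be checked in a few lines; a minimal sketch over GF(2), where addition is XOR and the bit values are made up.

```python
# Butterfly network check over GF(2): the middle (bottleneck) link carries
# b1 XOR b2; each sink XORs it with the bit it already has.
b1, b2 = 1, 0

to_t1      = b1        # t1 also receives b1 directly
to_t2      = b2        # t2 also receives b2 directly
bottleneck = b1 ^ b2   # coded packet b1 + b2, multicast to both sinks

assert bottleneck ^ to_t1 == b2   # t1 recovers b2
assert bottleneck ^ to_t2 == b1   # t2 recovers b1
print("both sinks recover (b1, b2) =", (b1, b2))
```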

Throughput Gap Without Coding

(Figure: an example network with many sinks on which coding achieves rate h while routing achieves far less.)

Coding capacity = h; Routing capacity ≤ 2

Example due to Sanders et al. (collaborators)

Multicasting

Upper bound for multicast capacity C,

C ≤ min{Ci}

(Figure: source s multicasts through the network to sinks t1, t2, …, t|T|, whose min-cuts are C1, C2, …, C|T|.)

[ACLY00] - achievable!

[LYC02] - linear codes suffice!!

[KM01] - “finite field” linear codes suffice!!!

Multicasting

(b1 b2 … bm) ∈ {0,1}^m → α ∈ F(2^m)

Each outgoing symbol is a linear combination β1·α1 + β2·α2 + … + βk·αk of the incoming symbols α1, …, αk

F(2^m)-linear network [KM01]

Source: group together m bits into one symbol of F(2^m).

Every node: perform linear combinations over the finite field F(2^m).
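To make the “group m bits, combine over F(2^m)” recipe concrete, here is a minimal sketch of F(2^m) arithmetic; m = 8, the irreducible polynomial x^8 + x^4 + x^3 + x + 1, and the coefficient values are illustrative assumptions, not fixed by the slides.

```python
# Illustrative F(2^8) arithmetic: m = 8 and the irreducible polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B) are example choices, not fixed by the slides.
IRRED = 0x11B

def gf_add(a, b):
    return a ^ b                      # addition in F(2^m) is bitwise XOR

def gf_mul(a, b):
    """Multiply in F(2^8): shift-and-add, reducing modulo IRRED."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:                 # degree reached 8: reduce
            a ^= IRRED
        b >>= 1
    return result

def node_output(betas, alphas):
    """An interior node's output: beta_1*alpha_1 + ... + beta_k*alpha_k."""
    out = 0
    for beta, alpha in zip(betas, alphas):
        out = gf_add(out, gf_mul(beta, alpha))
    return out

# Group m = 8 bits into one symbol, then combine two incoming symbols.
alpha1 = int('10110010', 2)
alpha2 = int('01011100', 2)
print(node_output([3, 7], [alpha1, alpha2]))
```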

Multicasting

Upper bound for multicast capacity C,

C ≤ min{Ci}

(Figure: source s multicasts through the network to sinks t1, t2, …, t|T|, whose min-cuts are C1, C2, …, C|T|.)

[ACLY00] - achievable!

[LYC02] - linear codes suffice!!

[KM01] - “finite field” linear codes suffice!!!

[JCJ03],[SET03] - polynomial time code design!!!!

Thms: Deterministic Codes

For m ≥ log(|T|), there exists an F(2^m)-linear network code which can be designed in O(|E||T|C(C+|T|)) time.

[JCJ03],[SET03]

There exist networks for which the minimum required m ≈ 0.5 log(|T|).

[JCJ03],[LL03]

Justifications revisited - II

(Figure: source s, sinks t1 and t2.)

One link breaks

Robustness / Distributed design

Justifications revisited - II

(Figure: the butterfly network with finite-field arithmetic. The coded packets b1+b2 and b1+2b2 are linearly independent combinations, so both sinks can still solve for (b1, b2).)

Robustness / Distributed design

Thm: Random Robust Codes

(Figure: the original network, with source s, sinks t1, t2, …, t|T|, and min-cuts C1, C2, …, C|T|.)

C = min{Ci}

Thm: Random Robust Codes

(Figure: the faulty network, with reduced min-cuts C1', C2', …, C|T|'.)

C' = min{Ci'}

If the value of C' is known to s, the same code can achieve rate C'!

(interior nodes oblivious)

Thm: Random Robust Codes

For m sufficiently large and rate R < C:

Choose random [β] at each node.

Probability over [β] that the code works > 1 − |E||T|·2^(−m(C−R)+|V|)

[JCJ03] [HKMKE03]

(different notions of linearity)
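A minimal simulation of the random-coefficient idea on the butterfly network; the prime field F_257, the trial count, and the choice to randomize only the bottleneck node's coefficients are illustrative simplifications (the theorem itself is stated for F(2^m) codes over general networks).

```python
# Minimal simulation: random local coefficients on the butterfly network over
# the illustrative prime field F_257 (the slides use F(2^m)); only the
# bottleneck node's coefficients are randomized here, for brevity.
import random

q = 257

def random_trial():
    beta1, beta2 = random.randrange(q), random.randrange(q)
    # Bottleneck forwards beta1*b1 + beta2*b2 (mod q).
    # Sink t1 sees (b1, beta1*b1 + beta2*b2): transfer matrix [[1,0],[beta1,beta2]],
    # invertible iff beta2 != 0.
    t1_ok = beta2 != 0
    # Sink t2 sees (b2, beta1*b1 + beta2*b2): invertible iff beta1 != 0.
    t2_ok = beta1 != 0
    return t1_ok and t2_ok

trials = 10_000
print(sum(random_trial() for _ in range(trials)) / trials)  # ~ (1 - 1/q)^2, close to 1
```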

Decentralized design

(Figure: bit blocks b1 b2 … bm, b'1 b'2 … b'm, b''1 b''2 … b''m being combined.)

Much “sparser” linear operations (O(m) instead of O(m²)) [JCE06?]

Vs. prob of error - necessary evil?

Zero-error Decentralized Codes

No a priori network topology information available - information can only be percolated down links

Desired - zero-error code design

One additional resource - each node vi has a unique ID number i (GPS coordinates/IP address/…)

Need to use yet other types of linear codes [JHE06?]
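Purely as an illustration of how a unique ID could drive decentralized design (this is an assumed toy rule, not the [JHE06?] construction): each node could derive its local coefficients deterministically from its ID, e.g. as a Vandermonde-style row, without ever learning the topology.

```python
# Illustrative only: derive a node's local coding coefficients from its unique
# ID i as the Vandermonde-style row (1, i, i^2, ...) over a prime field.
q = 65537  # illustrative field size

def coefficients_from_id(node_id, num_inputs):
    return [pow(node_id, k, q) for k in range(num_inputs)]

print(coefficients_from_id(7, 3))   # [1, 7, 49]
print(coefficients_from_id(12, 3))  # [1, 12, 144]
```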

Inter-relationships between notions of linearity

(Table [JEHM04]: inter-relationships between notions of linearity, classifying codes as Global, Local with I/O ≠, or Local with I/O =, in the Multicast (M) and General (G) settings, for Algebraic (A), Block (B), and Convolutional (C) codes on acyclic (a) networks; some combinations do not exist, and Є marks an epsilon rate loss.)

Justifications revisited - III

(Figure: source s, sinks t1 and t2, with the adversary hidden inside the network.)

Security

Evil adversary hiding in the network: eavesdropping, injecting false information [JLHE05]

Greater throughput, robustness against random errors...

Aha! Network Coding!!!

(Figure: Xavier wants to talk to Yvonne; Zorba lurks unseen in between.)

Unicast

1. Code (X,Y,Z)
2. Message (X,Z)
3. Bad links (Z)
4. Coin (X)
5. Transmission (Y,Z)
6. Decode correctly (Y)

Eureka

(Figure: Xavier transmits to Yvonne across the network; Zorba controls some of the links.)

|E| directed unit-capacity links

Zorba (hidden to Xavier/Yvonne) controls |Z| links Z. p = |Z|/|E|.

Xavier and Yvonne share no resources (private key, randomness).

Zorba is computationally unbounded; Xavier and Yvonne can only perform “simple” computations.

Unicast

Zorba knows protocols and already knows almost all of Xavier’s message (except Xavier’s private coin tosses)

Goal: Transmit at “high” rate and w.h.p. decode correctly

Background

Noisy channel models (Shannon, …): Binary Symmetric Channel

(Plot: capacity C vs. noise parameter p for the BSC; C = 1 − H(p), reaching 0 at p = 0.5.)

Background

Noisy channel models (Shannon, …): Binary Symmetric Channel, Binary Erasure Channel

(Plot: capacity C vs. noise parameter p; for the BEC, C = 1 − p.)

Background

Adversarial channel models: “Limited-flip” adversary (Hamming, Gilbert-Varshamov, McEliece et al. …); shared randomness, private key, computationally bounded adversary…

(Plots: capacity C vs. noise parameter p for the adversarial channel models.)

Unicast - Results

(Plot: capacity C vs. noise parameter p, showing the line 1 − p and the threshold p = 0.5.)

Unicast - Results

(Plot: capacity C vs. noise parameter p; the region around p = 0.5 is marked with question marks.)

Unicast - Results

(Plot: capacity C, normalized by h, vs. p = |Z|/h.)

(Just for this talk, Zorba is causal.)

General Multicast Networks

(Figure: source S, receivers R1, …, R|T|, min-cut h, with adversary Z inside the network.)

Slightly more intricate proof


Unicast - Encoding

(Figure: encoding at Xavier. The message is arranged as an (|E|−|Z|) × n(1−ε) matrix X of symbols from Fq, with block length n. A Vandermonde matrix (an MDS code) expands the |E|−|Z| rows of X into |E| rows T1, …, T|E|, one per link. The remaining nε symbols of each row are a rate fudge-factor reserved for “easy to use consistency information”.)

Unicast - Encoding

(Figure: from each row Ti and a secret symbol r, Xavier computes a consistency symbol Di.)

Di = Ti(1)·1 + Ti(2)·r + … + Ti(n(1−ε))·r^(n(1−ε))

The tuple (r, D1, …, D|E|) is the consistency information carried in the nε slack symbols.

Unicast - Encoding

(Figure: the rows T1, …, T|E| and the tuple (r, D1, …, D|E|) are sent over the network; Yvonne receives possibly corrupted rows T1', …, T|E|' and tuples (r', D1', …, D|E|').)

Unicast - Transmission

Check: does Di = Ti(1)'·1 + Ti(2)'·r + … + Ti(n(1−ε))'·r^(n(1−ε))?  If so, accept Ti; else reject Ti.

Unicast - Quick Decoding

(Figure: Yvonne holds the received rows T1', …, T|E|' and the received tuples, some equal to (r, D1, …, D|E|) and some corrupted to (r', D1', …, D|E|').)

Choose the majority value of (r, D1, …, D|E|).

∑k (Ti(k) − Ti(k)')·r^k = 0

For a tampered row this is a nonzero polynomial in r of degree at most n over Fq, and the value of r is unknown to Zorba, so the probability that it is accepted is < n/q << 1.

Use accepted Tis to decode
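A minimal sketch of this accept/reject test; the field size, stream length, and the single-symbol corruption are illustrative choices.

```python
import random

q = 2**61 - 1        # illustrative large prime field; error probability ~ n/q

def hash_stream(T, r):
    """D = T(1)*1 + T(2)*r + T(3)*r^2 + ...  (mod q), as on the encoding slide."""
    D, rk = 0, 1
    for symbol in T:
        D = (D + symbol * rk) % q
        rk = (rk * r) % q
    return D

n = 1000
T = [random.randrange(q) for _ in range(n)]   # a row Ti as sent by Xavier
r = random.randrange(1, q)                    # secret evaluation point, unknown to Zorba
D = hash_stream(T, r)                         # consistency symbol Di

T_forged = list(T)
T_forged[17] = (T_forged[17] + 1) % q         # Zorba tampers with one symbol

print(hash_stream(T, r) == D)          # True:  the untouched row is accepted
print(hash_stream(T_forged, r) == D)   # False w.h.p. (< n/q): the forgery is rejected
```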


General Multicast Networks

(Plot: capacity C, normalized by h, vs. p = |Z|/h.)

General Multicast Networks

(Figure: source S, receivers R1, …, R|T|; the adversary's links are modeled as extra sources S'1, S'2, …, S'|Z|.)

Observation: can treat adversaries as new sources

General Multicast Networks

yi = Ti x

(Figure: source S sends x to receivers R1, …, R|T|; the adversarial sources S'1, …, S'|Z| have not yet injected anything.)

General Multicast Networks

(Figure: source S sends x; the adversarial sources S'1, …, S'|Z| inject packets ai.)

yi' = Ti x + Ti' ai

(x(1), x(2), …, x(n)) span an R-dimensional subspace X

w.h.p. over the network code design, TX and TAi do not intersect (robust codes…)

(ai(1), ai(2), …, ai(n)) span a |Z|-dimensional subspace Ai

w.h.p. over the x(i), (y(1), y(2), …, y(R+|Z|)) forms a basis for TX ⊕ TAi

But a basis for TX is already known, therefore a basis for TAi can be obtained
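A small numerical sketch of the rank/intersection claims above; real-valued Gaussian matrices stand in for the finite-field transfer matrices, all dimensions are illustrative, and this is only a sanity check of the subspace separation, not the full decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
E, R, Z, n = 12, 3, 2, 10     # links into the receiver, source rate, adversarial links, block length

T  = rng.standard_normal((E, R))   # transfer matrix from the source's packets (known)
Tp = rng.standard_normal((E, Z))   # transfer matrix from the adversary's packets (unknown)
X  = rng.standard_normal((R, n))   # source packets x(1), ..., x(n) as columns
A  = rng.standard_normal((Z, n))   # adversarial packets a(1), ..., a(n) as columns

Y = T @ X + Tp @ A                 # received packets: y = T x + T' a

# w.h.p. the two column spaces intersect only in {0}:
print(np.linalg.matrix_rank(np.hstack([T, Tp])))   # R + Z = 5

# w.h.p. the received packets span the full (R + Z)-dimensional sum space:
print(np.linalg.matrix_rank(Y))                    # R + Z = 5

# Knowing a basis for col(T), the receiver can strip out the source's
# contribution; what remains is a Z-dimensional space describing the
# adversary's interference.
Q, _ = np.linalg.qr(T)
residual = Y - Q @ (Q.T @ Y)
print(np.linalg.matrix_rank(residual, tol=1e-9))   # Z = 2
```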

Variations - Feedback

(Plot: capacity C vs. p.)

Variations – Know thy enemy

(Plots: capacity C vs. p.)

Variations – Omniscient but not Omnipresent

(Plot: capacity C vs. p, with p = 0.5 marked.)

Achievability: Gilbert-Varshamov, Algebraic Geometry Codes

Converse: Generalized MRRW bound

Variations – Random Noise

(Plot: capacity C vs. p, with a level CN marked.)

SEPARATION

Ignorant Zorba - Results

(Plot: capacity C vs. noise parameter p; rate 1 − 2p up to p = 0.5, with curves labeled Xs and Xp+Xs.)

Ignorant Zorba - Results

(Plot: capacity C vs. noise parameter p; rate 1 − 2p up to p = 0.5, with curves labeled Xs and Xp+Xs.)

a+b+c

a+2b+4c

a+3b+9c

MDS code
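The three transmitted combinations above are evaluations of a + b·x + c·x^2 at x = 1, 2, 3, i.e. rows of a Vandermonde matrix; a minimal sketch, with real arithmetic standing in for the finite field and made-up message values.

```python
import numpy as np

# Vandermonde rows (1, x, x^2) at x = 1, 2, 3 give exactly the combinations above.
points = np.array([1.0, 2.0, 3.0])
V = np.vander(points, 3, increasing=True)

def encode(msg):
    return V @ msg                        # -> [a+b+c, a+2b+4c, a+3b+9c]

a, b, c = 5.0, 7.0, 2.0                   # made-up message symbols
codeword = encode(np.array([a, b, c]))
print(codeword)                           # [14. 27. 44.]

recovered = np.linalg.solve(V, codeword)  # any 3 uncorrupted symbols suffice (MDS)
print(recovered)                          # [5. 7. 2.]
```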

Overview of results

Centralized design: deterministic

Decentralized design: randomized, deterministic

Complexity: lower bounds, sparse codes

Types of linearity - interrelationships

Adversaries

THE END