
Page 1: Binary Error Correcting Network Codes

Qiwen Wang, Sidharth Jaggi, Shuo-Yen Robert Li
Institute of Network Coding (INC), The Chinese University of Hong Kong
October 19, 2011
IEEE Information Theory Workshop 2011, Paraty, Brazil

Page 2: Outline

1. Motivation
2. Model
3. Main Results
4. Discussion
5. Conclusion

Page 3: Motivation: Challenges of NC over Noisy Networks

1. Varying noise level: (p-ε, p+ε)

Page 4: Motivation: Challenges of NC over Noisy Networks

1. Varying noise level
2. Errors propagate through mix-and-forward

Page 5: Motivation: Challenges of NC over Noisy Networks

1. Varying noise level
2. Errors propagate through mix-and-forward
3. Coding kernels unknown a priori

(Figure: an example network with nodes 1-9; internal nodes carry local coding kernels such as [f1,3, f1,5], [f2,4, f2,7], [f3,6, f4,6], [f6,8, f6,9].)

Page 6: Network Model & Code Model

(Figure: Alice transmits to Bob over a network with Mincut = C.)

Page 7: Network Model & Code Model

(Figure: Alice transmits to Bob over a network with Mincut = C; each of the C paths carries a packet of n symbols over $\mathbb{F}_{2^m}$.)

Page 8: Network Model & Code Model

(Figure: Alice's transmissions form a Cm × n binary matrix X and Bob's receptions form a Cm × n binary matrix Y; α and β mark Alice's encoder and Bob's decoder. Symbols are over $\mathbb{F}_{2^m}$.)

Page 9: Finite Field & Binary Field I

One packet: n symbols $S_1, S_2, \ldots, S_n$ over $\mathbb{F}_{2^m}$, i.e. the mn bits $S_{11}S_{12}\ldots S_{1m}\ S_{21}S_{22}\ldots S_{2m}\ \ldots\ S_{n1}S_{n2}\ldots S_{nm}$, viewed as an m × n binary matrix whose i-th column is $(S_{i1}, S_{i2}, \ldots, S_{im})^T$.
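As a concrete illustration of this packing (my own sketch, not code from the talk), here is a short Python snippet; representing each $\mathbb{F}_{2^m}$ symbol as an m-bit integer, the bit ordering, and the choice m = 3 are all assumptions of the sketch.

```python
# A minimal sketch: rewrite one packet of n symbols over F_{2^m} as the
# m x n binary matrix described above, column i holding the m bits of S_i.
# Assumption: each symbol is stored as an m-bit integer (LSB = first bit).

def packet_to_binary_matrix(symbols: list[int], m: int) -> list[list[int]]:
    """Return the m x n binary matrix for a packet of n symbols over F_{2^m}."""
    n = len(symbols)
    return [[(symbols[i] >> row) & 1 for i in range(n)] for row in range(m)]

if __name__ == "__main__":
    m = 3
    packet = [0b101, 0b011, 0b110, 0b001]   # n = 4 symbols over F_{2^3}
    for row in packet_to_binary_matrix(packet, m):
        print(row)
```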

Page 10: Finite Field & Binary Field II

A symbol $T \in \mathbb{F}_{2^m}$ also corresponds to an m × m binary matrix $[T_{ij}]$. Multiplication $TS$ over $\mathbb{F}_{2^m}$ then corresponds to multiplying this m × m binary matrix by the m-bit column vector representing S, with all arithmetic over the binary field.
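The following Python sketch (again my own illustration, not code from the talk) makes the correspondence concrete: it builds the m × m binary matrix of an element T and checks that applying it over the binary field agrees with multiplication in $\mathbb{F}_{2^m}$. The choice m = 3 with irreducible polynomial $x^3 + x + 1$ is an assumption.

```python
# A minimal sketch (assumption: m = 3, irreducible polynomial x^3 + x + 1;
# field elements are integers whose bits are coefficients in the monomial basis).

M = 3
IRRED = 0b1011  # x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^m) elements (carry-less multiply, then reduce)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):          # degree reached m: reduce modulo IRRED
            a ^= IRRED
    return result

def mul_matrix(t: int) -> list[list[int]]:
    """m x m binary matrix whose action on bit-columns equals multiplication by t."""
    cols = [gf_mul(t, 1 << j) for j in range(M)]   # column j is t * x^j
    return [[(cols[j] >> i) & 1 for j in range(M)] for i in range(M)]

def apply_matrix(mat: list[list[int]], s: int) -> int:
    """Multiply the binary matrix by the bit-column of s, over GF(2)."""
    bits_s = [(s >> i) & 1 for i in range(M)]
    out = 0
    for i in range(M):
        bit = sum(mat[i][j] & bits_s[j] for j in range(M)) % 2
        out |= bit << i
    return out

if __name__ == "__main__":
    T, S = 0b110, 0b101
    assert gf_mul(T, S) == apply_matrix(mul_matrix(T), S)
    print("TS over GF(2^m) matches the binary matrix-vector product.")
```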

Page 11: Transfer Matrix

Noiseless network: $Y = TX$, where X is the Cm × n binary matrix Alice transmits, T is the Cm × Cm binary transfer matrix, and Y is the Cm × n binary matrix Bob receives.

Page 12: Noise Model: Worst-case Bit-flip Error

(Figure: example bit streams on Link A and Link B, before and after bit flips; two alternative error patterns are shown.)

Errors can be arbitrarily distributed, with an upper bound of fraction p. The worst possible damage can happen to the received packets.

Page 13: Noise Model: Worst-case Bit-flip Error

(Figure: the Em × n binary error matrix Z.)

Worst-case bit-flip error matrix Z: no more than pEmn ones, arbitrarily distributed. E is the number of edges in the network.

Page 14: Noise Model: Worst-case Bit-flip Error

(Figure: the same Em × n error matrix Z; the m rows corresponding to Edge 1 hold the error bits on that edge.)

Page 15: Noise Model: Worst-case Bit-flip Error

(Figure: the same matrix Z again, now with the bit flips on Edge 1 marked.)

Page 16: Impulse Response Matrix

With noise, $Y = TX + \hat{T}Z$, where T is the Cm × Cm transfer matrix, $\hat{T}$ is the Cm × Em impulse response matrix, X is the Cm × n transmitted matrix, Z is the Em × n error matrix, and Y is the Cm × n received matrix.
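This relation is easy to simulate. The sketch below (an illustration, not the construction from the talk) draws random binary matrices of the stated dimensions, injects an error matrix Z with at most pEmn ones, and forms $Y = TX + \hat{T}Z$ over GF(2); the parameter values and the independent random choice of T and $\hat{T}$ are simplifying assumptions.

```python
# A minimal sketch of Y = T X + T_hat Z over GF(2) with the dimensions from the
# slide. All matrices are random placeholders; in a real network T and T_hat are
# induced by the (random linear) coding kernels, not drawn independently.
import numpy as np

rng = np.random.default_rng(0)
C, E, m, n, p = 3, 5, 4, 16, 0.05   # assumed toy parameters

X = rng.integers(0, 2, size=(C * m, n))          # Alice's Cm x n transmission
T = rng.integers(0, 2, size=(C * m, C * m))      # Cm x Cm transfer matrix
T_hat = rng.integers(0, 2, size=(C * m, E * m))  # Cm x Em impulse response matrix

# Worst-case bit-flip error matrix Z: at most pEmn ones, placed arbitrarily.
Z = np.zeros((E * m, n), dtype=int)
budget = int(p * E * m * n)
flips = rng.choice(E * m * n, size=budget, replace=False)
Z[np.unravel_index(flips, Z.shape)] = 1

Y = (T @ X + T_hat @ Z) % 2                      # received Cm x n matrix, over GF(2)
print("Y has shape", Y.shape, "and", int(Z.sum()), "bit flips were injected")
```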

Page 17: Transform Metric

$Y = TX + \hat{T}Z$, hence $Y - TX = \hat{T}Z$: each column of the difference between the received matrix and TX is a GF(2) sum of columns of $\hat{T}$.

Page 18: Transform Metric

$d_{\hat{T}}(TX, Y) = \sum_{i=1}^{n} d_i$

• $d_i$ is the minimum number of columns of $\hat{T}$ that need to be added to $TX^{(i)}$ (the i-th column of TX) to obtain $Y^{(i)}$ (the i-th column of Y).
• Claim: $d_{\hat{T}}(TX, Y)$ is a distance metric.
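For intuition about the definition, the sketch below computes each $d_i$ by brute force on a toy instance: the smallest set of columns of $\hat{T}$ whose GF(2) sum carries $TX^{(i)}$ to $Y^{(i)}$. The exhaustive search and the toy matrices are assumptions for illustration only; this is not the decoding algorithm of the talk.

```python
# Brute-force evaluation of the per-column distance d_i from the slide:
# the minimum number of columns of T_hat whose XOR equals Y_i - TX_i over GF(2).
from itertools import combinations
import numpy as np

def column_distance(t_hat: np.ndarray, tx_col: np.ndarray, y_col: np.ndarray) -> int:
    """Smallest k such that some k columns of t_hat sum (mod 2) to y_col - tx_col."""
    target = (y_col - tx_col) % 2
    num_cols = t_hat.shape[1]
    for k in range(num_cols + 1):
        for subset in combinations(range(num_cols), k):
            total = t_hat[:, list(subset)].sum(axis=1) % 2 if subset else np.zeros_like(target)
            if np.array_equal(total, target):
                return k
    return num_cols + 1  # unreachable target (possible if t_hat has low rank)

def transform_metric(t_hat: np.ndarray, tx: np.ndarray, y: np.ndarray) -> int:
    """d_{T_hat}(TX, Y) = sum of the per-column distances d_i."""
    return sum(column_distance(t_hat, tx[:, i], y[:, i]) for i in range(tx.shape[1]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t_hat = rng.integers(0, 2, size=(4, 6))   # toy Cm x Em impulse response matrix
    tx = rng.integers(0, 2, size=(4, 5))      # toy Cm x n matrix TX
    z = np.zeros((6, 5), dtype=int)
    z[2, 1] = z[0, 3] = 1                     # two injected bit flips
    y = (tx + t_hat @ z) % 2
    print("d =", transform_metric(t_hat, tx, y))
```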

Page 19: Hamming-type Upper Bound

Theorem 1: For all p less than C/(2Em), an upper bound on the achievable rate of any code over the worst-case binary-error network is

$1 - \frac{E}{C}H(p)$

Page 20: Hamming-type Upper Bound

Proof (sketch)
• The total number of Cm × n binary matrices (the volume of the big square) is $2^{Cmn}$.
• Lower-bound the volume of the balls $B_{\hat{T}}(TX, pEmn) = \{Y \mid d_{\hat{T}}(TX, Y) \le pEmn\}$.
• Consider those Z's where every column has pEm ones in it; distinct such Z result in distinct $\hat{T}Z$.
• The number of distinct $\hat{T}Z$ is therefore at least $\binom{Em}{pEm}^n \approx 2^{H(p)Emn}$ (up to factors that do not affect the asymptotic rate).
• An upper bound on the size of any codebook is then roughly $2^{Cmn}/2^{H(p)Emn} = 2^{Cmn\left(1 - \frac{E}{C}H(p)\right)}$.
• Asymptotically in n, the Hamming-type upper bound is $R_{\mathrm{Hamming}} \le 1 - \frac{E}{C}H(p)$.
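As a numeric sanity check on this counting (my own illustration, with assumed toy parameters chosen so that pEm is an integer), the sketch below compares the rate implied by the exact ceiling $2^{Cmn}/\binom{Em}{pEm}^n$ with the asymptotic expression $1 - \frac{E}{C}H(p)$.

```python
# Numeric illustration of the Hamming-type counting argument; the parameters are
# toy assumptions (chosen so that pEm is an integer), not values from the talk.
from math import comb, log2

C, E, m, n = 4, 8, 8, 200        # mincut, number of edges, bits per symbol, block length
p = 1 / 32                       # worst-case bit-flip fraction, so pEm = 2

def H(x: float) -> float:
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

ball_exponent = n * log2(comb(E * m, int(p * E * m)))      # log2 of the ball-volume lower bound
rate_counting = (C * m * n - ball_exponent) / (C * m * n)  # counting bound on the rate
rate_asymptotic = 1 - (E / C) * H(p)                       # the slide's asymptotic expression

# The finite-Em counting bound is looser; it approaches the asymptotic one as Em grows.
print(f"counting bound on rate:   {rate_counting:.4f}")
print(f"asymptotic upper bound:   {rate_asymptotic:.4f}")
```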

Page 21: Coherent/Non-coherent NC

Coherent NC: the receiver knows the internal coding coefficients, hence knows T and $\hat{T}$.

However, the random linear coding coefficients are usually chosen on the fly.

Non-coherent NC: the coding coefficients, hence T and $\hat{T}$, are unknown in advance; a more realistic setting.

Page 25: GV-type Lower Bound

Theorem 2: Coherent GV-type network codes achieve a rate of at least $1 - \frac{E}{C}H(2p)$.

Theorem 3: Non-coherent GV-type network codes achieve a rate of at least $1 - \frac{E}{C}H(2p)$.
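To see the factor-of-two gap between the upper and lower bounds, here is a small Python sketch (my own illustration; the values of C, E and m are assumptions) that evaluates $1 - \frac{E}{C}H(p)$ and $1 - \frac{E}{C}H(2p)$ over the range of p allowed by Theorem 1.

```python
# Evaluate the Hamming-type upper bound and GV-type lower bound from the talk
# for a toy network (C, E, m are assumed values, not from the slides).
from math import log2

def H(x: float) -> float:
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

C, E, m = 4, 8, 8                     # mincut, number of edges, bits per symbol
p_max = C / (2 * E * m)               # range of p for which the bounds are stated

print(" p        GV lower    Hamming upper")
for k in range(1, 6):
    p = k * p_max / 6
    gv = 1 - (E / C) * H(2 * p)
    hamming = 1 - (E / C) * H(p)
    print(f"{p:.4f}    {gv:.4f}      {hamming:.4f}")
```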

Page 26: GV-type Lower Bound

Proof of Thm 2 (sketch)
• Need an upper bound on the volume of $B_{\hat{T}}(TX, 2pEmn)$ instead of the lower bound on the volume of $B_{\hat{T}}(TX, pEmn)$ as in Thm 1 (sphere packing vs. covering).
• Different Y, or equivalently different $\hat{T}Z$, can be bounded above by the number of different Z, which equals $\sum_{i=0}^{2pEmn}\binom{Emn}{i}$.
• The summation can be bounded from above by $(2pEmn+1)\binom{Emn}{2pEmn} \le (2pEmn+1)\,2^{H(2p)Emn}$.
• A lower bound on the size of the codebook is $\frac{2^{Cmn}}{(2pEmn+1)\,2^{H(2p)Emn}} = 2^{Cmn\left(1 - \frac{E}{C}H(2p) - \frac{\log(2pEmn+1)}{Cmn}\right)}$.
• Asymptotically in n, the rate of coherent GV-type NC is $R_{\mathrm{GV\text{-}coherent}} \ge 1 - \frac{E}{C}H(2p)$.

(Figure: codewords TX(1), TX(2), TX(3), each surrounded by a ball of radius 2pEmn.)
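Mirroring the earlier check, but now on the covering side (again with assumed toy parameters, chosen so that 2pEmn is an integer), this sketch turns the guaranteed codebook size $2^{Cmn}\big/\sum_{i=0}^{2pEmn}\binom{Emn}{i}$ into a rate and compares it with $1 - \frac{E}{C}H(2p)$.

```python
# Numeric illustration of the GV-type (covering) counting argument; the
# parameters are toy assumptions chosen so that 2pEmn is an integer.
from math import comb, log2

C, E, m, n = 4, 8, 8, 200        # mincut, number of edges, bits per symbol, block length
p = 1 / 64                       # worst-case bit-flip fraction, so 2pEmn = 400

def H(x: float) -> float:
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

radius = int(2 * p * E * m * n)                             # 2pEmn
ball_exponent = log2(sum(comb(E * m * n, i) for i in range(radius + 1)))
rate_counting = (C * m * n - ball_exponent) / (C * m * n)   # guaranteed rate from counting
rate_asymptotic = 1 - (E / C) * H(2 * p)                    # the slide's asymptotic expression

print(f"counting lower bound on rate: {rate_counting:.4f}")
print(f"asymptotic lower bound:       {rate_asymptotic:.4f}")
```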

Page 27: GV-type Lower Bound

Proof of Thm 3 (sketch)
• Crucial difference with the proof of Thm 2: the process of choosing codewords.
• Consider all possible values of $\hat{T}$, at most $2^{CEm^2}$ of them (one for each possible Cm × Em binary matrix), and hence of T, since T comprises a specific subset of C columns of $\hat{T}$.
• The number of potential codewords that can be chosen in the codebook is at least $\frac{2^{Cmn}}{2^{CEm^2}\,(2pEmn+1)\,2^{H(2p)Emn}}$, which equals $2^{Cmn\left(1 - \frac{E}{C}H(2p) - \frac{\log(2pEmn+1)}{Cmn} - \frac{Em}{n}\right)}$.
• Asymptotically in n, this leads to the same rate of $1 - \frac{E}{C}H(2p)$ as coherent NC in Theorem 2.

Page 28: Scale of Parameters

Claim: For all p less than $\min\left(\frac{C}{2Em}, \frac{1}{2m}\right)$, the Hamming-type and GV-type bounds hold.

Proof
• Theorem 1 (the Hamming-type upper bound) requires that $pEm < \frac{C}{2}$, i.e. $p < \frac{C}{2Em}$.
• For the GV-type bounds in Thm 2 and Thm 3 to give non-negative rates, $\frac{E}{C}H(2p) \le 1$ is needed.
• When p is small, $\frac{E}{C}H(2p) \approx 2p\,\frac{E}{C}\log\frac{1}{2p}$, and both conditions hold throughout the claimed range of p.

Page 29: Conclusion

• Worst-case bit-flip error model
• Hamming-type upper bound: $1 - \frac{E}{C}H(p)$
• Coherent/non-coherent GV-type lower bound: $1 - \frac{E}{C}H(2p)$
• GV-type codes: end-to-end nature; complexity polynomial in block length

Page 30: Future Direction

• Efficient coding schemes
• Other binary noise models
• Combine link-by-link codes with our end-to-end codes

Page 31:

Thank you!