
Modern Communications
Chapter 4. Channel Coding

Husheng Li

Min Kao Department of Electrical Engineering and Computer Science
University of Tennessee, Knoxville

Fall, 2016


Outline

Basics
Block Codes
Convolutional Codes
Modern Channel Code


Why Coding

1 Use redundancy to enhance robustness: if an information bit is impaired (by noise, fading, interference, etc.), it can still be recovered from the redundant bits – error-correcting codes.

2 We can also use channel codes to detect transmission errors without correcting them; the data can then be retransmitted.

3 Used in wireless communications, storage systems (hard disks), and so on, coding is THE ART of communications.

4 A simple example: repetition codes (see the sketch below).
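A minimal Python sketch of a rate-1/3 repetition code, for illustration only (the function names are ours):

```python
# Rate-1/3 repetition code: send each bit three times, decode by majority vote.

def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    # Majority vote over each group of three received bits.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

coded = rep3_encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
coded[1] ^= 1                    # a single bit error per block is corrected
print(rep3_decode(coded))        # [1, 0, 1]
```

The price of this robustness is rate: three channel uses per information bit.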


Some Terminology

Codeword: some number of channel uses to represent some information bits.

Codebook: the ensemble of codewords.

Codeword length: number of channel uses.

Encoding: mapping from information bits to codewords.

Decoding: mapping from received signal to information bits.


Requirements of A Good Coding Scheme

Low bit/block error rate.

Low-complexity encoder and decoder (random coding has optimal performance, but the decoding procedure has to look up a huge codebook).

Reasonable codeword length (otherwise the delay is too long).

No performance floor.


Block Codes and Convolutional Codes

In block codes, a block of k information digits is encoded by a codeword of n digits (n ≥ k). The encodings of different blocks are independent.

In convolutional codes, the coded sequence of n digits depends not only on the current k data digits but also on the previous N − 1 blocks of data digits.


Hamming Distance

Codeword weight: number of nonzero bits.

We can consider each codeword as a point in the space.

Hamming distance dij: the number of coded bits in which two codewords ci and cj differ (the weight of the codeword ci − cj).

What does a larger Hamming distance mean?
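A short sketch of both definitions, with binary codewords as 0/1 lists (the example values are ours):

```python
def hamming_weight(c):
    # Number of nonzero bits in a codeword.
    return sum(c)

def hamming_distance(ci, cj):
    # Number of positions in which the two codewords differ;
    # for binary codes this equals the weight of ci XOR cj.
    return sum(a != b for a, b in zip(ci, cj))

print(hamming_distance([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # 2
```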


Minimum Distance

Minimum distance (also the minimum weight of the nonzero codewords):

dmin = min_{i ≠ j} dij,

which characterizes how close the codewords are to each other (the closer, the worse the performance).

Singleton bound:

dmin ≤ n − k + 1.

If equality holds, the code is a maximum distance separable (MDS) code. A codeword in an MDS code is uniquely determined by any k of its elements (why?).
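For a linear code, dmin equals the minimum weight over the nonzero codewords, so for small k it can be found by brute force. A sketch using a toy (5,2) generator matrix chosen purely for illustration:

```python
from itertools import product
import numpy as np

def min_distance(G):
    # Enumerate every nonzero message u, form the codeword uG over GF(2),
    # and take the minimum weight.
    k, _ = G.shape
    weights = [int(np.mod(np.array(u) @ G, 2).sum())
               for u in product([0, 1], repeat=k) if any(u)]
    return min(weights)

G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])
print(min_distance(G))   # 3, below the Singleton bound n - k + 1 = 4
```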


Capability of Error Detection and Correction

Error detection: at most dmin − 1 error bits can be detected (but may not be corrected).

Error correction: at most t = ⌊(dmin − 1)/2⌋ error bits can be corrected.

Hamming bound:

2^(n−k) ≥ Σ_{j=0}^{t} C(n, j).
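A quick numeric check of the Hamming bound; the (7,4) Hamming code (t = 1), which meets the bound with equality, is used as the example:

```python
from math import comb

def satisfies_hamming_bound(n, k, t):
    # 2^(n-k) syndromes must cover all error patterns of weight <= t.
    return 2 ** (n - k) >= sum(comb(n, j) for j in range(t + 1))

print(satisfies_hamming_bound(7, 4, 1))            # True
print(2 ** 3, sum(comb(7, j) for j in range(2)))   # 8 8: equality (perfect code)
```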


Some Examples


Outline

Basics
Block Codes
Convolutional Codes
Modern Channel Code


Generator Matrix

1 Information bits: k-dimensional row vector space; codewords: n-dimensional row vector space.

2 Generator matrix: the mapping from the information bit space to the codeword space, namely a k × n matrix.

3 Systematic code: the generator matrix contains a k × k identity submatrix (the information bits appear in k locations of the codeword).

4 Encoding of information bit vector u:

c = uG
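A sketch of the encoding map over GF(2) with numpy, reusing the (7,4) generator matrix given on the Encoder slide below:

```python
import numpy as np

def encode(u, G):
    # c = uG over GF(2): matrix product followed by reduction mod 2.
    return np.mod(u @ G, 2)

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0, 1, 0]])
u = np.array([1, 0, 1, 1])
print(encode(u, G))   # [1 0 1 1 1 0 1]; systematic: the first 4 bits are u
```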


Parity Check Matrix

1 The parity check matrix H ((n − k) × n) of a generator matrix G satisfies

GH^T = 0.

2 If s = c + e is received (called the senseword), where e is the error vector, multiplying by H^T yields

sH^T = uGH^T + eH^T = eH^T,

which is called the syndrome of s.

3 If the syndrome is nonzero, we can declare a transmission error. By establishing a mapping from syndromes to error vectors (from an (n − k)-vector to an n-vector), we can correct some error bits (see the sketch below).
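A sketch of the syndrome computation. The H below is the parity check matrix H = [P^T I] implied by the systematic generator G = [I P] on the Encoder slide that follows; this pairing is our reconstruction, shown for illustration:

```python
import numpy as np

def syndrome(s, H):
    # sH^T = eH^T over GF(2): depends only on the error pattern.
    return np.mod(s @ H.T, 2)

H = np.array([[1, 1, 0, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 0, 1]])
c = np.array([1, 0, 1, 1, 1, 0, 1])        # a valid codeword: cH^T = 0
e = np.array([0, 0, 1, 0, 0, 0, 0])        # single error in position 3
print(syndrome(c, H))                      # [0 0 0]
print(syndrome(np.mod(c + e, 2), H))       # [0 0 1]: the 3rd column of H
```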


Encoder

1 Generator matrix:

G = [ 1 0 0 0 1 1 0
      0 1 0 0 1 0 1
      0 0 1 0 0 0 1
      0 0 0 1 0 1 0 ]    (1)


Cyclic Code

1 Cyclic code: any cyclic shift of any codeword is anothercodeword.

2 Polynomial representation (z is a shift operator), c(z) = g(z)u(z):

message: u(z) = Σ_{i=0}^{k−1} u_i z^i

generator: g(z) = Σ_{i=0}^{n−k} g_i z^i

codeword: c(z) = Σ_{i=0}^{n−1} c_i z^i
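Since c(z) = g(z)u(z), encoding is polynomial multiplication over GF(2), i.e., binary convolution of the coefficient sequences. A sketch using g(z) = 1 + z + z³, a standard generator of the cyclic (7,4) Hamming code:

```python
import numpy as np

def gf2_polymul(u, g):
    # Coefficient convolution followed by reduction mod 2.
    return np.mod(np.convolve(u, g), 2)

g = [1, 1, 0, 1]            # g(z) = 1 + z + z^3, lowest degree first
u = [1, 0, 1, 1]            # u(z) = 1 + z^2 + z^3
print(gf2_polymul(u, g))    # [1 1 1 1 1 1 1]: here, the all-ones codeword
```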


Hard Decision Decoding (HDD)

1 The maximum likelihood detection of codewords is based on a Hamming-distance decoding metric: choose the codeword closest to the demodulated bits (see the sketch below).
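A minimal sketch of such a decoder over an explicit codebook (the toy (5,2) codebook from the minimum-distance example is reused):

```python
def hdd_decode(r, codebook):
    # Pick the codeword at minimum Hamming distance from the hard bits r.
    return min(codebook, key=lambda c: sum(a != b for a, b in zip(r, c)))

codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]
print(hdd_decode((1, 0, 1, 0, 0), codebook))   # (1, 0, 1, 1, 0): one bit fixed
```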


Probability of HDD

1 The error probability is upper bounded by

Pe ≤ Σ_{j=t+1}^{n} C(n, j) p^j (1 − p)^(n−j),

where p is the demodulation error probability of a single bit.

2 The lower bound is given by

Pe ≥ Σ_{j=t+1}^{dmin} C(dmin, j) p^j (1 − p)^(dmin−j).
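A quick numeric evaluation of the upper bound; the (7,4) Hamming code with t = 1 is an illustrative choice:

```python
from math import comb

def pe_upper_bound(n, t, p):
    # Decoding can fail only if more than t of the n coded bits are wrong.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

print(pe_upper_bound(7, 1, 1e-2))   # ~2.0e-3, versus p = 1e-2 per raw bit
```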


Soft Decoding

1 Choose the codeword with the maximum a posteriori probability (for equiprobable codewords, this is the maximum likelihood codeword):

C* = arg max_C p(R|C),

where R is the observation.

2 For AWGN,

p(R|C) ∝ Π_{t=1}^{n} exp( −(Rt − √Ec (2Ct − 1))² / (2σn²) ).

3 It is difficult to do an exhaustive search over all codewords (see the sketch below).
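For AWGN, maximizing p(R|C) is equivalent to minimizing the squared Euclidean distance between R and the BPSK-modulated codeword √Ec (2C − 1). A sketch with exhaustive search over an explicit toy codebook, so only feasible for tiny codes (values are illustrative):

```python
import numpy as np

def soft_ml_decode(R, codebook, Ec=1.0):
    # Minimize ||R - sqrt(Ec)(2C - 1)||^2 over all codewords C.
    return min(codebook,
               key=lambda C: float(np.sum((R - np.sqrt(Ec) * (2 * np.array(C) - 1)) ** 2)))

codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]
R = np.array([0.9, -0.2, 1.1, 0.1, -0.8])   # noisy BPSK observations
print(soft_ml_decode(R, codebook))           # (1, 0, 1, 1, 0)
```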


Outline

Basics
Block Codes
Convolutional Codes
Modern Channel Code


History of Convolutional Code

1 The convolutional code was first proposed by Elias (1954), developed by Wozencraft (1957), and rediscovered by Hagelbarger (1959).

2 Viterbi proposed his celebrated algorithm for hard decoding of convolutional codes (essentially the same as Bellman's dynamic programming) in 1967.

3 The soft decision algorithm (the BCJR algorithm) was proposed by Bahl, Cocke, Jelinek and Raviv in 1974 (essentially the same as the forward-backward algorithm in HMMs).

4 Convolutional codes are widely used in ... for their highly efficient encoding and decoding algorithms.


Diagram Representation

1 Constituents: memory, output operators.

2 Constraint length ν: length of the memory.

3 Rate p/q: q output bits when p information bits are input.


Trellis Representation

1 There are 2^ν states. Possible state transitions are labeled.

2 For each combination of input information bit and state, the outputs are labeled in the trellis.


Polynomial Representation

1 If one information bit is input at a time, use p (the number of output operators) polynomials {gi}, i = 1, ..., p, to represent the code, where x is the delay operator (like the z-transform). E.g., g0(x) = x² + x + 1 and g1(x) = x² + 1 (see the encoder sketch after this list).

2 For multiple inputs, we can use a polynomial matrix. For example,

G(x) = [ x  1  0
         1  x² 1 ]
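A sketch of a rate-1/2 feedforward encoder using the example polynomials g0(x) = x² + x + 1 and g1(x) = x² + 1 above; the bit-ordering convention (newest input bit in the LSB) is our implementation choice:

```python
def conv_encode(bits, g0=0b111, g1=0b101, K=3):
    # Shift each input bit into a K-bit window; each output bit is the
    # parity of the window positions tapped by the generator polynomial.
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g0).count("1") % 2,
                bin(state & g1).count("1") % 2]
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```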


Hard Decoding

1 The demodulation output is binary, 0 and 1 (hard).

2 The Viterbi algorithm searches for the path in the trellis with minimum discrepancy compared with the demodulation output.

3 For the t-th demodulation output, each node in the trellis chooses the node at the (t − 1)-th output having the least accumulated discrepancy, inherits its path, and carries the accumulated discrepancy forward (see the sketch below).
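A compact sketch of this decoder, matched to the conv_encode sketch above (one survivor path per state; ties broken arbitrarily):

```python
def viterbi_decode(received, g0=0b111, g1=0b101, K=3):
    parity = lambda x: bin(x).count("1") % 2
    metric, paths = {0: 0}, {0: []}          # encoder starts in state 0
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        best = {}                            # next state -> (metric, path)
        for s, m in metric.items():
            for b in (0, 1):
                full = (s << 1) | b          # K-bit window incl. new bit
                out = [parity(full & g0), parity(full & g1)]
                d = m + sum(x != y for x, y in zip(out, r))
                ns = full & ((1 << (K - 1)) - 1)
                if ns not in best or d < best[ns][0]:
                    best[ns] = (d, paths[s] + [b])
        metric = {s: v[0] for s, v in best.items()}
        paths = {s: v[1] for s, v in best.items()}
    return paths[min(metric, key=metric.get)]  # best terminal state's path

coded = conv_encode([1, 0, 1, 1])
coded[3] ^= 1                                  # inject one channel error
print(viterbi_decode(coded))                   # [1, 0, 1, 1] recovered
```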


Soft Decoding: BCJR Algorithm

1 Forward pass:

αt(m) = Σ_{m′} αt−1(m′) γt(m′, m),

with initial condition α0(0) = 1 and α0(m) = 0 for m ≠ 0.

2 Backward pass:

βt(m) = Σ_{m′} βt+1(m′) γt+1(m, m′),

with initial condition βn(0) = 1 and βn(m) = 0 for m ≠ 0.

3 Compute the joint probability, from which we can compare a posteriori probabilities:

λt(m) = αt(m) βt(m).
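A compact sketch of the two passes, assuming the branch metrics γ have already been computed from the channel observations and stored as a T × M × M array (gamma[t][m', m] for the step from time t to t + 1, terminated trellis):

```python
import numpy as np

def forward_backward(gamma):
    T, M, _ = gamma.shape
    alpha = np.zeros((T + 1, M)); alpha[0, 0] = 1.0   # start in state 0
    beta = np.zeros((T + 1, M));  beta[T, 0] = 1.0    # terminate in state 0
    for t in range(1, T + 1):          # forward: alpha_t = alpha_{t-1} gamma_t
        alpha[t] = alpha[t - 1] @ gamma[t - 1]
    for t in range(T - 1, -1, -1):     # backward: beta_t = gamma_{t+1} beta_{t+1}
        beta[t] = gamma[t] @ beta[t + 1]
    return alpha, beta, alpha * beta   # lambda_t(m) = alpha_t(m) beta_t(m)
```

In practice the recursions are normalized at each step to avoid numerical underflow.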


Comparison Between Hard and Soft Decoding

1 Soft decoding utilizes more information (hard decisions throw away the reliability of ambiguous demodulation outputs). Therefore, a soft decoder achieves better performance.

2 For a convolutional code with constraint length 3 and rate 1/3, soft decoding has a 2-3 dB power gain (to achieve the same decoding error probability, soft decoding needs 2-3 dB less power than hard decoding).

3 Soft decoding has a higher computational cost than hard decoding.


State Diagram


Transfer Function

1 State equations:

Xc = D³ Xa + D Xb,  Xb = D Xc + D Xd,
Xd = D² Xc + D² Xd,  Xe = D² Xb.

2 Transfer function:

T(D) = D⁶ / (1 − 2D²) = D⁶ + 2D⁸ + 4D¹⁰ + ...

The minimum distance is 6; there are two paths with Hamming distance 8 (see the check below).
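The expansion can be checked symbolically (sympy assumed available):

```python
from sympy import symbols, series

D = symbols("D")
T = D**6 / (1 - 2 * D**2)
print(series(T, D, 0, 12))   # D**6 + 2*D**8 + 4*D**10 + O(D**12)
```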


Concatenated Codes


Homework 5

Due Date: Oct. 28th, 2016

Problem 1: Consider a (3,1) linear block code where each codeword consists of 3 data bits and 1 parity bit. (a) Find all codewords in this code. (b) Find the minimum distance dmin of the code.

Problem 2: Consider a (7,4) code with generator matrix

G = [ 0 1 0 1 1 0 0
      1 0 1 0 1 0 0
      0 1 1 0 0 1 0
      1 1 0 0 0 0 1 ]

(a) Find all codewords of the code; (b) What is dmin? (c) Find the parity check matrix of the code; (d) Find the syndrome of the received vector R = [1101011].


Problem 3: Sketch the trellis diagram and the state diagram for the above convolutional encoder. What is the memory length? What is the coding rate?

Problem 4: Suppose that the input message bits are 1100010. What are the output coded bits? Suppose that transmission errors occur in the 4th coded bit and the last coded bit. Use the Viterbi algorithm for the decoding procedure.


Outline

Basics
Block Codes
Convolutional Codes
Modern Channel Code


Storm in Coding Theory: Turbo Code

1 Before 1993, practical codes only achieved performance several dB away from the Shannon capacity.

2 At ICC 1993, Berrou, Glavieux, and Thitimajshima published their incredible paper: "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes".

3 "Their simulation curves claimed unbelievable performance, way beyond what was thought possible." "The thing that blew everyone away about turbo codes is not just that they get so close to Shannon capacity but that they're so easy." — McEliece

4 "They said we must have made an error in our simulations." — Berrou


Rediscovery of LDPC

1 Gallager devised LDPC codes in his PhD thesis in 1960. They were still impractical at that time.

2 Researchers (MacKay, Richardson, Urbanke) rediscovered LDPC codes in the late 1990s.

3 LDPC codes beat turbo codes and are now very close to the Shannon limit (0.0045 dB away, Chung 2001).

4 "A piece of 21st-century coding that happened to fall in the 20th century." — Forney

5 "We're close enough to the Shannon limit that from now on, the improvements will only be incremental." — Tom Richardson


Turbo Encoder

1 A turbo encoder has several component encoders (e.g., convolutional codes) and an interleaver (the magic part!).


Interleaver

1 An interleaver is used to permute the information bit sequence. It brings randomness to the encoder (similar to Shannon's random coding); a minimal sketch follows.
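A minimal sketch of a pseudo-random interleaver and its inverse (seeded for reproducibility; illustration only):

```python
import random

def make_interleaver(n, seed=0):
    # A pseudo-random permutation of positions; the deinterleaver is its inverse.
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

perm, inv = make_interleaver(8)
bits = [1, 1, 0, 1, 0, 0, 1, 0]
interleaved = [bits[p] for p in perm]
print([interleaved[i] for i in inv] == bits)   # True: order restored
```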


Turbo Decoder

1 Decoding is done in an iterative (turbo) way.

2 The soft output (e.g., the a posteriori probability) of one decoder is used as the a priori probability of the other decoder.

3 Such a turbo principle can be applied in many other fields: turbo equalization, turbo multiuser detection, and so on.


LDPC Codes

1 A low-density parity-check code is a linear code with a sparse parity check matrix H. It can be represented by a Tanner graph.

2 Parity check matrix:

H = [ 1 1 1 1 0 0
      0 0 1 1 0 1
      1 0 0 1 1 0 ]
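The Tanner graph can be read directly off H: one check node per row, one variable node per column, and an edge wherever the entry is 1. A small sketch using the H above:

```python
import numpy as np

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

for c, row in enumerate(H):
    # Neighbors of check node c are the variable nodes its row touches.
    print(f"check {c} <-> variables {np.flatnonzero(row).tolist()}")
# check 0 <-> variables [0, 1, 2, 3]
# check 1 <-> variables [2, 3, 5]
# check 2 <-> variables [0, 3, 4]
```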


LDPC Decoder

1 Message passing is also called belief propagation.

2 Step 1: at each check node, the incoming messages from neighboring variable nodes are combined and passed back to the neighboring variable nodes.

3 Step 2: at each variable node, the incoming messages from neighboring check nodes are combined (incorporating the channel observations) and passed back to the neighboring check nodes.

4 Repeat the above two steps (a simplified hard-decision sketch follows).
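Full belief propagation is beyond a slide, but a hard-decision bit-flipping decoder, a much-simplified relative of BP, shows the same iterative exchange between check and variable nodes. A sketch using the H from the previous slide and an assumed valid codeword:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    # Repeatedly flip the bit that participates in the most failed checks.
    r = r.copy()
    for _ in range(max_iter):
        syn = H @ r % 2              # which parity checks fail
        if not syn.any():
            return r                 # all checks satisfied: a codeword
        fails = H.T @ syn            # failed-check count for each bit
        r[np.argmax(fails)] ^= 1
    return r

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
c = np.array([0, 1, 1, 0, 0, 1])     # satisfies Hc = 0 (mod 2)
r = c.copy(); r[2] ^= 1              # one bit flipped by the channel
print(bit_flip_decode(H, r))         # [0 1 1 0 0 1]: c recovered
```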