INF244 Lecture 16 (Universitetet i Bergen)

Mandatory exercise
● Mandatory exercise for INF 244
● Deadline: November 7
● The assignment is to implement an encoder/decoder system in Matlab using the Communications Blockset. The system must simulate communication over an AWGN channel using either of these codes:
  • Block code
  • Convolutional code
  • PCCC
  • LDPC
● You are free to implement any of these coding techniques, as long as the requirements below are fulfilled:
  • Information length k = 1024
  • Block length n = 3072
  • Eb/N0 = 1.25 dB
● We will test your answers on our own computer and evaluate them based on bit error rate (BER) versus CPU time usage per frame T according to the following formula:
  p = T ⋅ BER
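A minimal sketch of how this metric can be measured, with encode_frame, awgn_channel, and decode_frame as hypothetical placeholders for whatever system you actually build:

% Sketch: measuring p = T * BER (encode_frame, awgn_channel and
% decode_frame are hypothetical stand-ins for your own system).
k = 1024; nFrames = 100; nErrors = 0;
tStart = tic;
for f = 1:nFrames
    u    = randi([0 1], 1, k);       % random information bits
    x    = encode_frame(u);          % hypothetical encoder, n = 3072
    y    = awgn_channel(x, 1.25);    % hypothetical AWGN channel, Eb/N0 = 1.25 dB
    uhat = decode_frame(y);          % hypothetical decoder
    nErrors = nErrors + sum(u ~= uhat);
end
T   = toc(tStart) / nFrames;         % CPU time per frame
BER = nErrors / (k * nFrames);       % bit error rate
p   = T * BER                        % evaluation metric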


Mandatory exercise: how to create and run simulations in Matlab from scratch
● Run Matlab from a command window
● Type simulink in Matlab's command window
● Choose File -> New -> Model in Simulink's main window
● Create the model by dragging and dropping elements into it

How to finish this exercise starting from a demo
● Run Matlab from a command window
● Type sccc_sim in Matlab's command window. A ready-made demo of an SCCC opens
● Study the demo closely and modify it to your needs


Design of turbo codes
• Turbo codes can be designed for performance at
  • Low SNR: errors due to imperfectness of the decoding algorithm
  • High SNR: errors that an ML decoder would also make
• Design choices: constituent codes and interleaver


Design of turbo codes for high SNR
• Goal:
  • Maximize the minimum distance d
  • Minimize the number A_d of codewords of weight d
  • In general, design the code for a thin weight spectrum
• Use recursive component encoders!
• Simple (but flawed!) approach: concentrate on input-weight 2 sequences
  • The flaw: this applies if the interleavers are chosen at random, but it is possible (and even easy) to avoid the problem


Single error events of input-weight 2
• Consider an (n,1,ν) systematic feedback encoder
• The input sequence (1,0,0,0,...) will produce a cycle in the state diagram with input-weight zero that starts in state S_1 = (1,0,...,0) and arrives in state S_{2^(ν-1)} = (0,...,0,1) after at most 2^ν - 2 steps, and then returns to state S_1 in one step
• Let z_c be the parity weight generated by this cycle
• Also, note that a 1 input from state S_{2^(ν-1)} terminates the encoder
• When the n-1 numerator polynomials are monic and of degree ν, the zero-input branch connecting state S_{2^(ν-1)} to state S_1 has zero parity weight
• If z_t is the total parity weight of the two input-weight 1 branches that connect the all-zero state to the input-weight zero cycle, then the total parity weight of any input-weight 2 single error event is m·z_c + z_t, where m is the number of input-weight zero cycles in the error event
• The minimum parity weight with input-weight 2 is z_min = z_c + z_t


Approximation of the BER for large K
● This is an approximation for large K for recursive PCCCs with (n,1,ν) systematic feedback constituent encoders
● Interleaver gain is present
● Maximize d_eff = 2 + 2·z_min (the effective minimum free distance)
● Maximize z_min and thus z_c, which means that the cycle length should be as large as possible
● Choose a primitive feedback polynomial!
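A commonly quoted form of this large-K approximation for weight-2-dominated PCCCs is

$$ P_b \approx \frac{C}{K}\, Q\!\left(\sqrt{2\,R\,d_{\mathrm{eff}}\,\frac{E_b}{N_0}}\right), \qquad d_{\mathrm{eff}} = 2 + 2 z_{\min}, $$

where R is the code rate, the 1/K factor is the interleaver gain mentioned above, and C is a small multiplicity constant whose exact value depends on the interleaver ensemble (and is left unspecified here).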


Example: Input-weight 2
• Assume a primitive feedback polynomial (here with ν = 2, so that 2^ν - 1 = 3)
• Input-weight 2 vectors that will take the encoder out of and back to the initial state:
  • (1,0,0,1) corresponds to parity weight z_min = z_c + z_t = 2 + 2 = 4
  • (1,0,0,0,0,0,1) corresponds to parity weight z_min + z_c = 4 + 2 = 6
  • In general, a 1 followed by 3m-1 0s followed by a 1
  • Even more generally, a 1 followed by (2^ν - 1)m - 1 0s followed by a 1
• d_eff = 2 + 2·z_min = 2 + 2·4 = 10
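These parity weights are easy to check numerically. The sketch below assumes the constituent encoder G(D) = [1, (1+D^2)/(1+D+D^2)], a ν = 2 encoder consistent with the numbers above (the specific generator is an assumption; the slide does not name one):

% Sketch: parity weight of an assumed RSC encoder with feedback
% d(D) = 1+D+D^2 and numerator e(D) = 1+D^2.
function w = parity_weight(u)
    s = [0 0];                           % register state [a(k-1) a(k-2)]
    w = 0;
    for k = 1:length(u)
        a = mod(u(k) + s(1) + s(2), 2);  % feedback taps from d(D) = 1+D+D^2
        p = mod(a + s(2), 2);            % parity taps from e(D) = 1+D^2
        w = w + p;
        s = [a s(1)];                    % shift the register
    end
end

% parity_weight([1 0 0 1])        returns 4  (= z_c + z_t)
% parity_weight([1 0 0 0 0 0 1])  returns 6  (= z_min + z_c)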


Theorem on z_min (n = 2)
• Theorem: Let G(D) = [1, e(D)/d(D)], where the denominator polynomial d(D) is primitive of degree ν. Then z_min = 2^(ν-1) + 2s, where s = 1 if e(D) has constant coefficient 1 and degree ν, and s = 0 otherwise
• Proof:
  • d(D) is the generator polynomial of a cyclic (2^ν - 1, 2^ν - 1 - ν) Hamming code
  • q(D) = (D^(2^ν - 1) + 1) / d(D) is the generator polynomial of a cyclic (2^ν - 1, ν) maximum-length code of minimum distance 2^(ν-1)
  • deg e(D) < ν: e(D)q(D) is a non-zero codeword in the maximum-length code, and so it has weight 2^(ν-1)
  • deg e(D) = ν and NOT monic: e(D)q(D) = D^r v(D)q(D), where v(D) is monic and of degree < ν, from which it follows that the parity weight is 2^(ν-1), since v(D)q(D) is a non-zero codeword in the maximum-length code


Theorem on z_min (n = 2)
• Proof (cont.):
  • deg e(D) = ν and monic: e(D) = D·D^(ν-1) + e^(2)(D). c_1(D) = D^(ν-1)q(D) and c_2(D) = e^(2)(D)q(D) are both codewords in the maximum-length code, and so they have weight 2^(ν-1). c_1(D) = D^(ν-1) + c_{1,ν}D^ν + ... + c_{1,2^ν-3}D^(2^ν-3) + D^(2^ν-2), and c_2(D) has a non-zero constant term. D·c_1(D) = [cyclic shift of c_1(D)] + D^(2^ν-1) + 1, so e(D)q(D) = D·c_1(D) + c_2(D) = [cyclic shift of c_1(D)] + D^(2^ν-1) + 1 + c_2(D) = [a codeword with constant coefficient 0] + 1 + D^(2^ν-1). Thus, the parity weight is 2^(ν-1) + 2


The general case
• Theorem: Let G(D) = [1, e_1(D)/d(D), ..., e_{n-1}(D)/d(D)], where the denominator polynomial d(D) is primitive of degree ν. Then z_min = (n-1)·2^(ν-1) + 2s ≤ (n-1)(2^(ν-1) + 2), where s is the number of numerator polynomials that are monic and of degree ν
• Proof:
  • The proof is the same as for the special case of n = 2
• The upper bound on z_min also holds for non-primitive feedback polynomials
• Thus, choose constituent codes with primitive feedback polynomials and monic numerator polynomials of degree ν to maximize z_min and consequently the effective minimum free distance
• When searching for constituent codes, optimize by first maximizing d_w and then minimizing A_{w,d_w}, successively, for increasing values of w, where d_w is the minimum weight of single error events produced by input-weight w sequences and A_{w,d_w} is the corresponding multiplicity


Convolutional recursive encoders for PCCC codes
[Two tables of recommended recursive convolutional constituent encoders with "Max" distance columns; the generator polynomials and table structure are not recoverable here]

Choice of component codes
• The listed codes may not have the best free distance, but they have a better mapping (compared to "optimum" convolutional codes) of input weights to output weights
• The overall turbo code performance also depends on the actual interleaver used


Choice of interleaver
• Interleavers that avoid the problem with weight-2 inputs*
  • If |i-j| = (2^ν - 1)m, then |π(i)-π(j)| ≠ (2^ν - 1)l (for l+m small)
• S-random interleavers, to achieve spreading (see the sketch after this list)
  • If |i-j| ≤ S, then |π(i)-π(j)| ≥ S
• Interleavers specialized to accommodate the actual encoders*
  • Maintain a list of "critical" sets of positions, which are the positions of information symbols of low-weight words
  • Do not map one critical set into another
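A minimal Matlab sketch of the usual S-random construction: candidates are drawn from a random permutation and rejected when they violate the spread condition; restarting on a dead end is one simple strategy among several.

% Sketch: S-random interleaver search for length K and spread S.
% Each candidate is placed only if it is at distance >= S from the
% values already placed within the last S positions.
function perm = s_random(K, S)
    done = false;
    while ~done
        cand = randperm(K);
        perm = zeros(1, K);
        done = true;
        for i = 1:K
            placed = false;
            for j = 1:numel(cand)
                lo = max(1, i - S);
                if all(abs(perm(lo:i-1) - cand(j)) >= S)
                    perm(i) = cand(j);
                    cand(j) = [];
                    placed = true;
                    break;
                end
            end
            if ~placed            % dead end: restart with a new shuffle
                done = false;
                break;
            end
        end
    end
end

In practice the search converges quickly for S up to roughly sqrt(K/2).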


Design of turbo codes for low SNR
• The foregoing discussion assumes that the decoding is close to maximum likelihood decoding. This is not the case for very low SNRs
• Goal for low SNR:
  • Optimize the interchange of information between the constituent decoders
  • Analyze this interchange by using density evolution or EXIT charts


EXtrinsic Information Transfer chart
• Approach: A SISO block produces more accurate information about the transmitted information at its output than what is available at its input
• The amount of information can be precisely quantified using information theory
  • The entropy H(X) of a discrete random variable X is H(X) = -∑_x P(X=x) log P(X=x). It is a measure of uncertainty
  • The conditional entropy is H(X|Y) = ∑_y P(Y=y) H(X|Y=y)
  • The mutual information is I(X;Y) = H(X) - H(X|Y)
• For a specified SNR, compute
  • I_a = I(u_l; L_a(u_l))
  • I_e = I(u_l; L_e(u_l))
• EXIT curve: I_e as a function of I_a for a given SISO algorithm
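For ±1 bits u with consistent L-values L (i.e., L is the true log-likelihood ratio of u), the mutual information can be estimated directly from simulation samples via the identity I(U;L) = 1 - E[log2(1 + e^(-uL))]. A Matlab sketch, with the consistency requirement as the key assumption:

% Sketch: sample-based estimate of I(u; L), u in {+1,-1}, L consistent LLRs.
function I = mutual_info_est(u, L)
    I = 1 - mean(log2(1 + exp(-u .* L)));
end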


The Gaussian approximation
• The a priori input L-values are modelled as independent Gaussian random variables with variance σ_a^2 and mean µ_a = σ_a^2/2, where the sign of µ_a depends on the transmitted bit
• The Gaussian approximation is based on two observations:
  • The input channel L-values are independent Gaussian random variables with variance 2L_c and mean L_c
  • Extensive simulations for a constituent decoder with very large block lengths support this claim
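Generating such a priori inputs is a couple of lines in Matlab; here the block length and σ_a are illustrative values, and the estimator sketched earlier gives the corresponding a priori information:

% Sketch: consistent Gaussian a priori L-values with mean (sigma_a^2/2)*u.
K = 1024; sigma_a = 2;                   % illustrative values
u  = 1 - 2*randi([0 1], 1, K);           % transmitted bits mapped to +/-1
La = (sigma_a^2/2)*u + sigma_a*randn(1, K);
Ia = mutual_info_est(u, La)              % a priori information I_a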


The Gaussian approximation (cont.)
• Next, histograms of the conditional pdf's of the extrinsic a posteriori L-values are determined by simulating the BCJR algorithm with independent Gaussian distributed input a priori L-values for a particular encoder, value of K, and channel SNR
• Then, the mutual information I(u_l; L_e(u_l)) between u_l and its extrinsic a posteriori L-value is determined using
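the standard expression from the EXIT-chart literature, with p_E(ξ|u) denoting the conditional pdf estimated from the histograms:

$$ I_e = \frac{1}{2} \sum_{u = \pm 1} \int_{-\infty}^{\infty} p_E(\xi \mid u)\, \log_2 \frac{2\, p_E(\xi \mid u)}{p_E(\xi \mid +1) + p_E(\xi \mid -1)} \, d\xi $$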


EXIT curves
Obtained by simulations (but much simpler than turbo code simulations)


EXIT charts
Next, plot the EXIT curve for one SNR, together with its mirror image. These curves represent the EXIT curves of the two constituent decoders
Open tunnel: decoding will proceed to convergence


EXIT charts
EXIT curves for another SNR
Closed tunnel: decoding will get stuck


SNR threshold
• SNR threshold: the smallest SNR with an open EXIT chart tunnel
• Defines the start of the waterfall region
Once the tunnel opens, non-convergence becomes a small problem


EXIT chart
• A property of the constituent encoder
• Can be used to find good constituent encoders for low SNRs
• In general, simple is good (flatter EXIT curve)
• Can also be used for codes with different constituent encoders. In this case the constituent encoders can be fitted to each other's EXIT curve, providing a lower SNR threshold
• It is assumed that the interleavers are very long, so that the Gaussian approximation applies


Iterative decoding example (K=4)
[A sequence of five slides stepping through the decoding iterations; figures not reproduced here]

The effect of many iterations
[Figure not reproduced here]


Iterative decoding: Stopping criteria
• Fixed number of iterations
• Hard decisions
  • If the hard decisions of the two extrinsic value vectors coincide, then assume that convergence has been achieved
• Cross-entropy
• Outer error-detecting (or error-correcting) code


Cross-entropy stopping criteria
Let P(u) and Q(u) be two joint probability distributions that factor as P(u) = P(u_0)·...·P(u_{K-1}) and Q(u) = Q(u_0)·...·Q(u_{K-1}), respectively
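With this factorization, the cross-entropy (the Kullback-Leibler divergence between P and Q) splits into a sum of per-bit terms:

$$ D(P \| Q) = E_P\!\left[ \log \frac{P(\mathbf{u})}{Q(\mathbf{u})} \right] = \sum_{l=0}^{K-1} \sum_{u_l} P(u_l) \log \frac{P(u_l)}{Q(u_l)} $$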


Cross-entropy stopping criteria (cont.)
[Derivation of the approximate cross-entropy T(i); equations not reproduced here]


Cross-entropy stopping criteria (cont.)
● Two assumptions are used to approximate the cross-entropy:
  1) The hard decisions based on the signs of the a posteriori L-values of both decoders do not change once decoding has converged
  2) Once decoding has converged, the magnitudes of the a posteriori L-values are large


Cross-entropy stopping criteria (cont.)
T(i) can be computed after each iteration. Experience with simulations has shown that once convergence is achieved, T(i) drops by a factor of 10^-2 to 10^-4 compared with its initial value
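This suggests a simple stopping rule; a Matlab sketch, with the 10^-3 factor as an assumed middle-of-the-road threshold:

% Sketch: cross-entropy stopping rule inside the iteration loop.
% T(i) is the approximate cross-entropy after iteration i; the
% threshold factor 1e-3 is an assumption within the 1e-2..1e-4 range.
if T(i) < 1e-3 * T(1)
    break;                  % convergence assumed: stop iterating
end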


Iterative decoding: Some observations
• Parallel implementations: The constituent decoders can work in parallel
• The final decision can be taken from the a posteriori values of either constituent decoder, or from their average
• The decoder may sometimes, depending on the SNR and on the occurrence of structural faults in the interleaver, oscillate between correct and incorrect decisions
• Max-log-MAP can be shown to be equivalent to SOVA
• Max-log-MAP is simpler to implement than log-MAP, but suffers a penalty of about 0.5 dB
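The difference between the two is only the correction term in the max* operation, shown here as a Matlab sketch:

% Sketch: log-MAP uses the exact max* operation log(e^x + e^y);
% Max-log-MAP drops the correction term, costing roughly 0.5 dB.
maxstar = @(x, y) max(x, y) + log1p(exp(-abs(x - y)));  % log-MAP
maxlog  = @(x, y) max(x, y);                            % Max-log-MAP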
