
LOW DENSITY PARITY CHECK CODES

PRESENTATION OVERVIEW

Digital Communication System
Why Coding?
Shannon’s Coding Theorems
Error Correction Codes
LDPC Codes
Summary

DIGITAL COMMUNICATION SYSTEM

Information Source

Source Encoder

Channel Encoder

Modulator

Channel

Demodulator

Channel Decoder

Source Decoder

Data Sink

(Block-diagram annotations: data rates rb, rc, rs at the encoder outputs; source coding examples: JPEG, MPEG, etc.; channel coding examples: RS code, Turbo code, LDPC; modulation examples: QPSK, QAM, BPSK, etc.)

CHANNEL CODING

Channel encoding: the addition of redundant symbols that allow data errors to be corrected.

Modulation: conversion of symbols to a waveform for transmission.

Demodulation: conversion of the waveform back to symbols, usually one at a time.

Decoding: using the redundant symbols to correct errors.

WHAT IS CODING?

Coding is the conversion of information to another form for some purpose.

Source coding: the purpose is to remove redundancy from the information (e.g. ZIP, JPEG, MPEG2).

Channel coding: the purpose is to protect the information against channel noise.

WHY CHANNEL CODING?

There is a trade-off between bandwidth, energy, and complexity. Coding provides a means of patterning signals so as to reduce their energy or bandwidth consumption for a given error performance.

CHANNELS

The Binary Symmetric Channel (BSC)
The Binary Erasure Channel (BEC)

HOW TO EVALUATE CODE PERFORMANCE?

Need to consider the code rate (R), SNR (Eb/No), and bit error rate (BER).
Coding gain is the saving in Eb/No required to achieve a given BER with coding versus without coding.
Generally, the lower the code rate, the higher the coding gain.
Better codes provide better coding gains.
Better codes are usually more complicated and have higher decoding complexity.

SHANNON’S CODING THEOREMS

• If C is the channel capacity and a code has rate R > C, then the probability of decoding error is bounded away from 0. (In other words, at any rate R > C, reliable communication is not possible.)

• Conversely, the theorem tells us the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise.

SHANNON’S CODING THEOREMS

STATEMENT OF THE THEOREM

C = B log2(1 + S/N)

where
C is the channel capacity in bits per second,
B is the bandwidth of the channel in hertz,
S is the average received signal power over the bandwidth,
N is the average noise or interference power over the bandwidth, and
S/N is the signal-to-noise ratio (SNR), or carrier-to-noise ratio (CNR), of the communication signal.
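As a quick numerical check of the capacity formula, here is a minimal sketch; the 3 kHz bandwidth and 30 dB SNR figures are illustrative assumptions, not from the slides.

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 3 kHz channel at 30 dB SNR (S/N = 1000).
C = capacity_bps(3000, 1000)  # roughly 29.9 kbit/s
```

Note that capacity grows only logarithmically with SNR but linearly with bandwidth.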

COMMON ERROR CORRECTION CODES

Convolutional Codes
Block Codes (e.g. Reed-Solomon Code)
Trellis-Coded Modulation (TCM)
Concatenated Codes
Low-Density Parity-Check Codes

THE ERROR-CONTROL PARADIGM

Noisy channels give rise to data errors in transmission or storage systems.

Powerful error-control coding (ECC) schemes are needed: linear or non-linear.

Linear EC codes: generated through a simple generator matrix or parity-check matrix.

Binary information vector u (length k)

Code vector (codeword) x (length n)

Key property: the minimum distance of the code, d_min, the smallest separation between two codewords.

Rate of the code: R = k/n

G = | 1 0 0 0 1 0 1 |        H = | 1 1 1 0 1 0 0 |
    | 0 1 0 0 1 1 0 |            | 0 1 1 1 0 1 0 |
    | 0 0 1 0 1 1 1 |            | 1 0 1 1 0 0 1 |
    | 0 0 0 1 0 1 1 |

Binary linear codes: u = (u1, u2, u3, u4), x = uG, HxT = 0, minimum distance d_min.
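A small sketch verifying that the (7,4) generator and parity-check matrices shown in this section are consistent: every x = uG satisfies HxT = 0, with all arithmetic mod 2.

```python
from itertools import product

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 0, 1, 1]]

H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def encode(u, G):
    """x = uG over GF(2)."""
    n = len(G[0])
    return [sum(u[k] * G[k][j] for k in range(len(u))) % 2 for j in range(n)]

def syndrome(H, x):
    """HxT over GF(2); the zero vector iff x is a codeword."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

# Every information vector u maps to a codeword with zero syndrome.
for u in product([0, 1], repeat=4):
    assert syndrome(H, encode(list(u), G)) == [0, 0, 0]
```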

HISTORY OF LDPC CODES

Low-Density Parity-Check codes: a class of linear block codes.

Invented by Robert Gallager in his 1960 MIT Ph.D. dissertation.
Ignored for a long time due to:
the high computational complexity required at the time,
the introduction of Reed-Solomon codes and Turbo codes,
the fact that concatenated RS and convolutional codes were considered perfectly suitable for error-control coding.
Rediscovered by MacKay (1996) and Richardson/Urbanke (1998).

FEATURES OF LDPC CODES

Approach Shannon capacity: for example, 0.3 dB from the Shannon limit with an irregular LDPC code of length 1 million (Richardson, 1999). A closer design from (Chung, 2001) is 0.04 dB away from capacity (block size of 10^7 bits, 1000 iterations).
Good block-error-correcting performance.
Reduced decoding complexity.
Suitable for parallel implementation.

FEATURES OF LDPC CODES

(BER performance curve; plot not reproduced.)

LOW-DENSITY PARITY-CHECK CODES

Regular LDPC codes: all rows contain the same number of ones, and all columns contain the same number of ones.

Irregular LDPC codes: the number of ones per row or per column is not constant.

Regular LDPC codes are the primary concern in this presentation.

LINEAR BLOCK CODES

A linear code can be described by a generator matrix G or a parity-check matrix H.

An (N, K) block encoder accepts a K-bit input and produces an N-bit codeword.

c = xG and cHT = 0, where c = codeword, x = information, G = generator matrix, H = parity-check matrix.

PROPERTIES OF LDPC CODES

LDPC codes are defined by a sparse parity-check matrix: the parity-check matrix H used for decoding is sparse, with very few 1's in each row and column, and the minimum distance is expected to be large.

Regular LDPC codes: H is m x n, where k = n - m information bits are encoded into n-bit codewords. H contains exactly Wc 1's per column and exactly Wr = Wc(n/m) 1's per row, where Wc << m. This definition implies that Wr << n. Wc >= 3 is necessary for good codes.

If the number of 1's per column or row is not constant, the code is an irregular LDPC code. Irregular LDPC codes usually outperform regular LDPC codes.

A SIMPLE LDPC CODE

Parity-check matrix: Wc = 3, n = 10, m = 5, so Wr = 3 * (10/5) = 6.

G can be found from H by Gaussian elimination.
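The numbers above (Wc = 3, n = 10, m = 5, Wr = 6) can be realised concretely. The construction below is a hypothetical illustration, not from the slides: take every 3-subset of the 5 rows as a column support, giving C(5,3) = 10 columns with each row appearing in C(4,2) = 6 of them.

```python
from itertools import combinations

# Illustrative construction of a (Wc, Wr) = (3, 6) regular 5 x 10 matrix:
# each column's support is one of the C(5,3) = 10 three-subsets of the rows.
m, n, Wc = 5, 10, 3
supports = list(combinations(range(m), Wc))
H = [[1 if i in s else 0 for s in supports] for i in range(m)]

Wr = Wc * n // m                                   # 3 * (10/5) = 6
assert all(sum(row) == Wr for row in H)            # exactly 6 ones per row
assert all(sum(H[i][j] for i in range(m)) == Wc    # exactly 3 ones per column
           for j in range(n))
```

Note this toy matrix is dense relative to its size; real LDPC codes use much larger n so that Wc << m and Wr << n actually hold.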

SIMPLE REGULAR LDPC CODES

We can represent an LDPC code in two ways: as a matrix or as a Tanner graph.

REPRESENTING REGULAR LDPC CODES

A regular (dv, dc) LDPC code of length N:
dv : the number of 1's in each column of H
dc : the number of 1's in each row of H

If the dimension of H is M x N, counting the 1's in H by rows and by columns gives M dc = N dv.

Rate: R = 1 - dv/dc = 1 - M/N

BIPARTITE GRAPH REPRESENTATION

N variable nodes vi, i = 1, ..., N
- circular shape
- each connected to dv check nodes

M check nodes cj, j = 1, ..., M
- rectangular shape
- each connected to dc variable nodes

Edge e = {v, c}

ENCODING OF LDPC CODES

General encoding of systematic linear block codes raises issues with LDPC codes: the size of G is very large, and G is not generally sparse.

Example: a (10000, 5000) LDPC code. P is 5000 x 5000. If we assume that the density of 1's in P is 0.5, there are 12.5 x 10^6 1's in P, so 12.5 x 10^6 addition operations are required to encode one codeword.

An alternative approach to simplified encoding is to design the LDPC code via algebraic or geometric methods. Such "structured" codes can be encoded with shift registers.
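The operation count in the example above can be sanity-checked in a couple of lines (one XOR per 1 in P per encoded codeword):

```python
# (10000, 5000) code: the dense submatrix P of the systematic G is 5000 x 5000.
k, n_minus_k, density = 5000, 5000, 0.5
ones_in_P = density * k * n_minus_k   # expected number of 1's in P
additions_per_codeword = ones_in_P    # one GF(2) addition (XOR) per 1 in P
assert additions_per_codeword == 12.5e6
```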

DECODING OF LDPC CODES

General decoding of linear block codes: c is a valid codeword if and only if cHT = 0. For the binary symmetric channel (BSC), the received word is c with an error vector e added; the decoder needs to find e and flip the corresponding bits. Such decoding algorithms are usually based on linear algebra.

Graph-based algorithms:
Sum-product algorithm for general graph-based codes
MAP algorithm for trellis graph-based codes
Message-passing algorithm for bipartite graph-based codes
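As a concrete instance of "find e and flip the corresponding bits": for a single BSC error, the syndrome HrT equals the column of H at the error position. A minimal sketch using the (7,4) parity-check matrix from earlier; matching the syndrome against columns works here because all columns of this H are distinct and nonzero.

```python
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def syndrome(H, r):
    """s = HrT over GF(2); nonzero s exposes the error pattern e."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

c = [0, 0, 0, 0, 0, 0, 0]   # the all-zero word is a codeword of any linear code
r = c[:]
r[2] ^= 1                   # BSC flips bit 2: e = (0,0,1,0,0,0,0)
s = syndrome(H, r)          # equals column 2 of H
# For a single error, flip the bit whose column of H matches the syndrome.
for j in range(7):
    if [H[i][j] for i in range(3)] == s:
        r[j] ^= 1
assert syndrome(H, r) == [0, 0, 0] and r == c
```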

We have code bits x = {x1, x2, x3, ..., xn} and observations y = {y1, y2, y3, ..., yn}, with joint probability
f(x, y) = f(x1, ..., xn, y1, ..., yn) = f(y1, ..., yn | x1, ..., xn) f(x1, ..., xn).
For a discrete memoryless channel this factorises as

f(x, y) = f(x) * product over i of f(yi | xi)
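The memoryless factorisation means the likelihood f(y|x) is just a product of per-symbol terms. A sketch for the BSC with crossover probability p (the specific numbers are illustrative):

```python
import math

def bsc_likelihood(x, y, p):
    """f(y|x) for a memoryless BSC: per-symbol factors multiply."""
    out = 1.0
    for xi, yi in zip(x, y):
        out *= p if xi != yi else 1 - p
    return out

# One flip in three symbols at crossover p = 0.1:
L = bsc_likelihood([0, 0, 0], [0, 1, 0], 0.1)   # 0.1 * 0.9 * 0.9
assert math.isclose(L, 0.081)
```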

(Factor-graph figure: variable nodes X1, ..., X7 with observations Y1, ..., Y7, connected to the check nodes H1, H2, H3 given by the rows of the parity-check matrix H.)

(Bipartite-graph figure: variable nodes X1, ..., XN with observations Y1, ..., YN on one side, check nodes H1, H2, H3, ..., Hm on the other.)

dv = weight (degree) of the variable nodes.

dc = weight (degree) of the check nodes.

µ1(xi), ..., µdv-1(xi) are the incoming messages at variable node xi, together with the observation message yi(xi). The outgoing message is the product of all incoming messages:

µout(xi) = yi(xi) * product over j = 1, ..., dv-1 of µj(xi)

(Figure: variable node Xi receiving µ1(Xi), ..., µdv-1(Xi) from its checks and yi(Xi) from the observation Yi, emitting µout(Xi).)

µ1(x1), ..., µdc-1(xdc-1) are the incoming messages at a check node whose attached variables are x1, ..., xdc-1, with output edge xout. The outgoing message sums over all configurations of the inputs that satisfy the check:

µout(xout) = sum over x1, ..., xdc-1 of I(x1 + x2 + ... + xdc-1 + xout = 0) * product over j of µj(xj)

where the indicator I(.) = 1 if the check is satisfied and 0 otherwise.

(Figure: check node with inputs X1, ..., Xdc-1 and output edge Xout.)

MESSAGE PASSING SCHEDULES

Iterative decoding procedure:
1. In what order do we start the procedure?
2. In what order do we pass the messages?
3. How do we know when to stop?

Initially we receive yi and the channel LLR Lc. Execute the sum-product algorithm at each variable node and pass messages to all attached checks. Then execute the sum-product algorithm at each parity check and pass messages to all attached variables. Go back to the start until a stopping condition holds:
1. a valid codeword is found, or
2. the maximum number of iterations is reached.
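The schedule above can be sketched end to end. This is a minimal sum-product decoder in the LLR domain, using the variable-node and check-node update rules from the following slides; the (7,4) parity-check matrix, the sign convention (positive LLR favours bit 0), and the test LLRs are illustrative assumptions.

```python
import math

# Illustrative parity-check matrix: the (7,4) code used earlier in the deck.
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def decode(channel_llr, H, max_iter=20):
    """Sum-product decoding with a flooding schedule; positive LLR means bit 0."""
    m, n = len(H), len(H[0])
    rows = [[j for j in range(n) if H[i][j]] for i in range(m)]  # vars of check i
    cols = [[i for i in range(m) if H[i][j]] for j in range(n)]  # checks of var j
    c2v = {(i, j): 0.0 for i in range(m) for j in rows[i]}       # check->var msgs
    for _ in range(max_iter):
        # Variable-node update: channel LLR plus all OTHER incoming check msgs.
        v2c = {(i, j): channel_llr[j] + sum(c2v[(k, j)] for k in cols[j] if k != i)
               for i in range(m) for j in rows[i]}
        # Check-node update: the tanh rule over all OTHER incoming variable msgs.
        for i in range(m):
            for j in rows[i]:
                prod = 1.0
                for jj in rows[i]:
                    if jj != j:
                        prod *= math.tanh(v2c[(i, jj)] / 2.0)
                c2v[(i, j)] = 2.0 * math.atanh(prod)
        # Tentative hard decision; stop early if a valid codeword is found.
        total = [channel_llr[j] + sum(c2v[(i, j)] for i in cols[j]) for j in range(n)]
        x = [0 if L >= 0 else 1 for L in total]
        if all(sum(H[i][j] * x[j] for j in rows[i]) % 2 == 0 for i in range(m)):
            return x
    return x

# All-zero codeword sent; bit 0 received unreliably (weak negative LLR).
llr = [-1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
assert decode(llr, H) == [0] * 7
```

The two stopping rules from the slide appear directly: the syndrome test inside the loop (valid codeword found) and the `max_iter` bound.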

SUM PRODUCT ALGORITHM AT VARIABLE NODE

(Figure: variable node Xi with incoming check LLRs L1, ..., Ldv-1 and the channel LLR Lc from observation Yi.)

In the LLR domain the variable-node update becomes a simple sum:

Lout = Lc + sum over j = 1, ..., dv-1 of Lj
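The variable-node rule is one line of code; a minimal sketch (the example LLR values are illustrative):

```python
def variable_update(Lc, incoming):
    """Variable-node rule in the LLR domain: Lout = Lc + sum of incoming LLRs."""
    return Lc + sum(incoming)

# Channel LLR 1.0 combined with two check-to-variable messages:
assert variable_update(1.0, [0.5, -0.25]) == 1.25
```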

SUM PRODUCT ALGORITHM AT CHECK NODE

(Figure: check node with incoming LLRs L1, ..., Ldc-1 and outgoing LLR Lout.)

Lout = 2 tanh^-1 ( product over j = 1, ..., dc-1 of tanh(Lj / 2) )
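The tanh rule above, as a minimal sketch. Agreeing inputs reinforce the sign, a disagreeing input flips it, and the output magnitude never exceeds that of the least reliable input.

```python
import math

def check_update(incoming):
    """Check-node tanh rule: Lout = 2 * atanh( prod_j tanh(Lj / 2) )."""
    prod = 1.0
    for L in incoming:
        prod *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(prod)

assert check_update([2.0, 2.0]) > 0          # agreement keeps the sign
assert check_update([2.0, -2.0]) < 0         # one disagreement flips it
assert abs(check_update([2.0, 0.5])) < 0.5   # weakest input bounds the output
```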

HOW DOES STANDARD MESSAGE PASSING ALGORITHM WORK?

(Figure: a Tanner graph with variable nodes on top and check nodes below; messages flow along the edges over successive iterations until the error bits, marked "?", are resolved.)

ADVANTAGES OF LDPC BLOCK CODES

LDPC codes are suited to implementations that make heavy use of parallelism; consequently, error-correcting codes with very long code lengths are feasible.

No tail bits are required for block coding, freeing additional bits for data transmission.

LDPC codes have excellent BER performance under AWGN, and are especially useful for large block lengths N.

SUMMARY

Low-density parity-check codes have been studied extensively in recent years, and great progress has been made in the understanding and design of iterative coding systems.

The iterative decoding approach is already used in turbo codes, but the structure of LDPC codes gives even better results.

In many cases they allow a higher code rate and also a lower error floor.

The main disadvantages are that encoders are somewhat more complex and that the code length has to be rather long to yield good results.

RESEARCH OPPORTUNITY

LDPC code design
Efficient implementation on FPGAs
LDPC applications for next-generation communication systems (Wireless, OFDM, ADSL)

REFERENCES

http://www.csee.wvu.edu/wcrl/ldpc.htm
R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
A. Shokrollahi, "LDPC Codes: An Introduction," Digital Fountain, Inc.
T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, Feb. 2001.

THANK YOU!
