Page 1

06 Dec 04

TURBO CODES

Michelle Stoll

Page 2

A Milestone in ECCs

• Based on convolutional codes:

– multiple encoders used serially to create a codeword

– defined as triple (n, k, m)

• n encoded bits are generated for every k data bits received, where m is the number of memory registers used
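As an illustration, an (n, k, m) = (2, 1, 2) convolutional encoder produces two output bits per data bit from two memory registers. This is a minimal sketch; the generator polynomials 7 and 5 (octal) are a common textbook choice, not taken from these slides.

```python
def conv_encode(bits, g1=0b111, g2=0b101, m=2):
    """(n, k, m) = (2, 1, 2) convolutional encoder: n = 2 encoded bits
    out for every k = 1 data bit in, using m = 2 memory registers.
    Generators g1, g2 are the common (7, 5) octal pair (an assumed
    example, not specified in the slides)."""
    state = 0                     # contents of the m memory registers
    out = []
    for b in bits:
        window = (b << m) | state                    # input bit + memory
        out.append(bin(window & g1).count("1") % 2)  # parity via g1
        out.append(bin(window & g2).count("1") % 2)  # parity via g2
        state = window >> 1                          # shift the registers
    return out
```

For input 1 0 1 starting from the all-zero state this yields the output pairs 11, 10, 00.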

Page 3

Enhancements

• Added features include:

– concatenated recursive systematic encoders

– pseudo-random interleavers

– soft input/soft output (SISO) iterative decoding

Page 4

Accolades

• TCs nearly achieve Shannon’s channel capacity limit – first to get within 0.7 dB

• Do not require high transmission power to deliver low bit error rate

• Considered the most powerful class of ECCs to date

Page 5

Sidebar: Shannon Limit

• Defines the fundamental transmission capacity of a communication channel

• Claude Shannon from Bell Labs proved mathematically that totally random sets of codewords could achieve channel capacity, theoretically permitting error-free transmission

Page 6

Shannon Limit, con’t

• Use of random sets of codewords is not a practical solution

– channel capacity can only be attained when k data bits mapped to n code symbols approach infinity

• Cost of a code, in terms of computation required to decode it, increases closer to the Shannon limit

• Coding paradox: find good codewords that deliver BERs close to the Shannon limit, but are not overly complex

– ECCs addressing both have been elusive for years

– until the advent of TCs, the best codes were more than 2 dB from the Shannon limit

“All codes are good, except the ones we can think of.”– Folk theorem

Page 7

Performance Bounds

The performance floor is in the vicinity of a BER of 10^-5

Page 8

Turbo Code History

• Claude Berrou, Alain Glavieux, and Punya Thitimajshima presented their paper “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes” in 1993

• Their results were received with great skepticism

– in fact, the paper was initially rejected

– independent researchers later verified their simulated BER performance

Page 9

Anatomy: Encoder

• Two encoders, parallel concatenation of codes

– can use the same clock, decreasing delay

• Data blocks of length n bits are sent to each encoder

– encoder 1 receives bits as-is, encodes the parity bits y1, and concatenates them with original data bits

– encoder 2 receives pre-shuffled bit string from interleaver, encodes the parity bits y2

– multiplexer receives a string of size 3n of parity bits and original data bits from encoder 1, and parity bits from encoder 2

Page 10

Turbo Encoder Schematic

Example: original data = 01101

Encoder 1 creates parity bits 10110 and appends the original 01101

Encoder 2 receives the pre-shuffled bit string and creates parity bits 11100

Multiplexer receives 011011011011100
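The data flow above can be sketched as below. The RSC encoder here uses feedback 1+D+D^2 and feedforward 1+D^2 — an assumed toy choice of polynomials, so its parity bits differ from the slide’s example values — but the structure (systematic bits, then y1, then y2, multiplexed into a 3n-bit string) matches.

```python
def rsc_parity(bits):
    """Parity stream of a rate-1/2 recursive systematic convolutional
    (RSC) encoder with feedback 1+D+D^2 and feedforward 1+D^2
    (an assumed toy choice, not taken from the slides)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # recursive feedback bit
        parity.append(a ^ s2)     # feedforward taps 1 + D^2
        s1, s2 = a, s1            # shift the registers
    return parity

def turbo_encode(bits, perm):
    """Parallel concatenation: encoder 1 sees the bits as-is,
    encoder 2 sees the interleaved bits; the multiplexer emits
    systematic bits + y1 + y2 (3n bits for n data bits)."""
    y1 = rsc_parity(bits)                      # encoder 1 parity
    y2 = rsc_parity([bits[i] for i in perm])   # encoder 2 parity
    return bits + y1 + y2                      # multiplexed codeword
```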

Page 11

Non-Uniform Interleaver

• Irregular permutation map used to produce a pseudo-random interleaver – no block interleaving

• Nonuniform interleaving assures a maximum scattering of data, introducing quasi-random behavior in the code

– recall Shannon’s observation

• Operates between modular encoders to permute all poor input sequences (low-weight CWs) into good input sequences producing large-weight output CWs
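A minimal pseudo-random permutation map and its inverse might look like this. This is a sketch using a seeded shuffle; real turbo interleavers use carefully designed non-uniform permutations rather than a plain PRNG.

```python
import random

def make_interleaver(n, seed=1):
    """Pseudo-random permutation map of size n (a seeded shuffle is
    an assumed stand-in for a designed non-uniform permutation)."""
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(bits, perm):
    return [bits[i] for i in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for j, i in enumerate(perm):
        out[i] = bits[j]          # undo the permutation
    return out
```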

Page 12

Anatomy: Decoder

• Decoder is the most complex aspect of turbo codes

– it also imposes the greatest latency in the process, as it is serial and iterative

• Two constituent decoders are trying to solve the same problem from different perspectives

• Decoders make soft decisions about data integrity, passing the extrinsic bit reliability information back and forth

– Hence the name ‘turbo’ in reference to a turbo engine

Page 13

Decoder, con’t

• Inspects analog signal level of the received bits

– then turns the signal into integers which lend confidence to what the value should actually be

• Next, examines parity bits and assigns bit reliabilities for each bit

• Bit reliabilities are expressed as log likelihood ratios that vary between a positive and negative bound

– in practice, this bound is quite large, between -127 and +127

– the closer the LLR is to one side, the greater the confidence assigned one way or the other.

Page 14

Decoder, con’t: Log Likelihood Ratio (LLR)

• The log likelihood ratio for a data bit d, in terms of the probability Pr {d = 1}, is expressed as:

• What is passed from one decoder to the other are bit reliabilities

– each decoder’s computations with respect to the estimation of d, without taking its own input into account

• The input related to d is thus a single shared piece of information

L(d) = ln [ Pr {d = 1 | received sequence} / (1 − Pr {d = 1 | received sequence}) ]
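The LLR definition translates directly into code. The two conversions below are a small sketch; the clipping to the bounded range mentioned earlier (e.g. −127 to +127) would be applied on top of this.

```python
import math

def llr(p1):
    """L(d) = ln( Pr{d=1} / (1 - Pr{d=1}) ), per the slide's formula."""
    return math.log(p1 / (1.0 - p1))

def prob_from_llr(L):
    """Invert the LLR back to Pr{d = 1}."""
    return 1.0 / (1.0 + math.exp(-L))
```

An LLR of 0 means no confidence either way; large positive values indicate high confidence that d = 1, large negative values that d = 0.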

Page 15

Decoder, con’t

• Decoder modules dec1 and dec2 receive input

• dec1 passes its bit reliability estimate through the interleaver to dec2

– if dec1 successful, it would’ve passed few or no errors to dec2

• Decoder module dec2 processes its input as well as the bit reliability from dec1

– refines the confidence estimate, then passes it to the de-interleaver

• This completes the first iteration.

• If no further refinements are needed (i.e. acceptable confidence) the data is decoded and passed to the upper layer

– Otherwise, the output is passed back to dec1 for another iteration
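The iteration described above can be sketched as a control loop. The constituent SISO decoder is left as a passed-in function (`siso`, a hypothetical stand-in — a real implementation would use BCJR/MAP decoding), so this shows only the interleave/de-interleave exchange of extrinsic information:

```python
def turbo_decode(sys_llr, par1, par2, perm, siso, iters=10):
    """Iterative exchange between dec1 and dec2. siso(systematic,
    parity, a_priori) is a hypothetical constituent SISO decoder
    returning extrinsic LLRs; par1/par2 are the two parity streams."""
    n = len(sys_llr)
    ext2 = [0.0] * n                              # a priori info for dec1
    for _ in range(iters):
        ext1 = siso(sys_llr, par1, ext2)          # dec1's extrinsic output
        sys_i = [sys_llr[i] for i in perm]        # interleave systematic
        ext1_i = [ext1[i] for i in perm]          # interleave dec1's output
        ext2_i = siso(sys_i, par2, ext1_i)        # dec2 refines the estimate
        ext2 = [0.0] * n
        for j, i in enumerate(perm):              # de-interleave for dec1
            ext2[i] = ext2_i[j]
    total = [s + e for s, e in zip(sys_llr, ext2)]
    return [1 if L > 0 else 0 for L in total]     # final hard decision
```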

Page 16

Turbo Decoder Schematic

Page 17

Decoding Drawbacks

• To achieve near-optimum results, a relatively large number of decoding iterations is required (on the order of 10 to 20)

• This increases computational complexity and output delay

– one way to mitigate delay is to use a stop rule

• Select some pre-determined number of iterations to perform

• if convergence is detected before the number is reached, stop
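One simple stop rule is to halt when two successive hard-decision outputs agree. This is a sketch; `one_iteration` is a hypothetical function that runs a single decoding pass and returns the current hard decisions.

```python
def decode_with_stop(one_iteration, max_iters=20):
    """Run up to max_iters decoding passes, stopping early if the
    hard decisions stop changing (a simple convergence check)."""
    prev = None
    for i in range(1, max_iters + 1):
        bits = one_iteration()
        if bits == prev:              # converged before the limit
            return bits, i
        prev = bits
    return prev, max_iters
```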

Page 18

Puncturing

• Another way to address latency is through code puncturing

• Puncturing changes the code rate, k/n, without changing any of the code’s other attributes

• instead of transmitting certain redundant values in the codeword, these values are simply not transmitted

• e.g. a Rate 1/2 code can be increased to a Rate 2/3 code by dropping every other output bit from the parity stream
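The rate-1/2 to rate-2/3 example amounts to dropping every other parity bit, which can be sketched as:

```python
def puncture(parity, pattern=(1, 0)):
    """Keep parity bits where the puncturing pattern is 1. With
    pattern (1, 0) every other parity bit is dropped, raising a
    rate-1/2 code (k data + k parity bits) to rate 2/3
    (k data + k/2 parity bits)."""
    return [b for i, b in enumerate(parity) if pattern[i % len(pattern)]]
```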

Page 19

Complexity

• Because the decoder is comprised of two constituent decoders, it is twice as complex as a conventional decoder when performing a single iteration

– two iterations require twice the computation, rendering it four times as complex as a conventional decoder

Page 20

Latency

• Latency on the decoding side is the biggest drawback of Turbo Codes

• Decoding performance is influenced by three broad factors: interleaver size, number of iterations, and the choice of decoding algorithm

– these can be manipulated, with consequences

Page 21

Ongoing Research

• Turbo coding is responsible for a renaissance in coding research

• Turbo codes and turbo code hybrids are being applied to numerous problems

– Multipath propagation

– Low-density parity check (LDPC)

– Software implementation!

• Turbo decoding in software at 300 kbits/second using 10 iterations per frame; with a stopping rule in place, the speed can be doubled or tripled

Page 22

Turbo Codes in Practice

• Turbo codes have made steady inroads into a variety of practical applications

– deep space

– mobile radio

– digital video

– long-haul terrestrial wireless

– satellite communications

• Not practical for real-time voice

Page 23

More Information

• Excellent high-level overview:

Guizzo, Erico. “Closing in on the Perfect Code”, IEEE Spectrum, March 2004.

• Very informative four-part series on various aspects of TCs:

Gumas, Charles Constantine. “Turbo codes rev up error-correcting performance.” (part I in the series) EE Times Network at http://archive.chipcenter.com/dsp/DSP000419F1.html

• The paper that started it all:

Berrou, Glavieux, and Thitimajshima. “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes.” Ecole Superieure des Telecommunications de Bretagne, France. 1993.

Complete bibliography soon available on my CS522 page