
A Survey of Advanced FEC Systems


Page 1: A Survey of Advanced FEC Systems


www.intel.com/labs

A Survey of Advanced FEC Systems

Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory / Radio Communications Laboratory
July 29, 2004

With a lot of material from Bo Xia, CTL/RCL

Page 2: A Survey of Advanced FEC Systems


Communication and Interconnect Technology Lab

Outline

What is Forward Error Correction?
The Shannon Capacity formula and what it means
A simple Coding Tutorial
A Brief History of FEC
Modern Approaches to Advanced FEC
  Concatenated Codes
  Turbo Codes
  Turbo Product Codes
  Low Density Parity Check Codes

Page 3: A Survey of Advanced FEC Systems


Information Theory Refresher

The Shannon Capacity Equation:

    C = W log2(1 + P / N)

where C is the channel capacity (bps), W is the channel bandwidth (Hz), P is the transmit power, and N is the noise power. The formula shows the two fundamental ways to increase data rate: more bandwidth, or more signal power relative to the noise.

C is the highest data rate that can be transmitted error-free under the specified conditions of W, P, and N. It is assumed that P is the only signal in the memoryless channel and N is AWGN.
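As a quick numeric sketch, the capacity formula can be evaluated directly (the function name and the example bandwidth/SNR values are illustrative, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    """AWGN channel capacity in bits/s: C = W * log2(1 + P/N)."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Example: a 1 MHz channel at 20 dB SNR (P/N = 100).
c = shannon_capacity(1e6, 100.0, 1.0)
print(c)  # ~6.66 Mbit/s
```

Doubling W doubles C, while the SNR term only grows logarithmically, which is why bandwidth is so valuable.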

Page 4: A Survey of Advanced FEC Systems


A simple example

A system transmits messages of two bits each through a channel that corrupts each bit with probability Pe.

    Tx Data = { 00, 01, 10, 11 }    Rx Data = { 00, 01, 10, 11 }

The problem is that it is impossible to tell at the receiver whether the two-bit symbol received was the symbol transmitted, or whether it was corrupted by the channel.

    Tx Data = 01    Rx Data = 00

In this case a single bit error has corrupted the received symbol, but it is still a valid symbol in the list of possible symbols. The most fundamental coding trick is simply to expand the number of bits transmitted so that the receiver can determine the most likely transmitted symbol just by finding the valid codeword with the minimum Hamming distance to the received symbol.

Page 5: A Survey of Advanced FEC Systems


Continuing the Simple Example

A one-to-one mapping of symbol to codeword is produced:

    Symbol : Codeword
      00   :   0010
      01   :   0101
      10   :   1001
      11   :   1110

The result is a systematic block code with Code Rate R = ½ and a minimum Hamming distance between codewords of dmin = 2.

A single-bit error can be detected at the receiver by finding the codeword with the closest Hamming distance, and the most likely transmitted symbol is the one associated with the closest codeword. (Strictly, guaranteed correction of every single-bit error requires dmin >= 3; with dmin = 2 some single-bit error patterns land equidistant from two codewords.)

This capability comes at the expense of transmitting more bits, usually referred to as parity, overhead, or redundancy bits.
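Minimum-distance decoding of this R = ½ code can be sketched in a few lines. The codeword table is taken from the slide; the helper names are illustrative:

```python
# Codebook from the slide: symbol -> codeword.
CODEBOOK = {"00": "0010", "01": "0101", "10": "1001", "11": "1110"}

def hamming_distance(a, b):
    """Count the bit positions where the two words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the symbol whose codeword is closest to the received word."""
    return min(CODEBOOK, key=lambda s: hamming_distance(CODEBOOK[s], received))

print(decode("0101"))  # error-free: -> "01"
print(decode("0111"))  # one bit error in "0101": still -> "01"
```

The second call shows a single-bit error being pulled back to the nearest valid codeword.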

Page 6: A Survey of Advanced FEC Systems


Coding Gain

The difference in performance between an uncoded and a coded system, considering the additional overhead required by the code, is called the Coding Gain. In order to normalize the power required to transmit a single bit of information (not a coded bit), Eb/No is used as a common metric, where Eb is the energy per information bit, and No is the noise power in a unit-Hertz bandwidth.

[Figure: uncoded symbols, each of period Tb, shown alongside coded symbols at R = ½ sent at twice the rate within the same period.]

The uncoded symbols require a certain amount of energy to transmit, in this case over period Tb. The coded symbols at R = ½ can be transmitted within the same period if the transmission rate is doubled. Using No instead of N normalizes the noise considering the differing signal bandwidths.
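The normalization described above can be made concrete with the standard conversion between symbol energy and information-bit energy, Eb/No = Es/No - 10*log10(R * bits_per_symbol). This formula is common background rather than from the slide, and the function name is illustrative:

```python
import math

def ebno_db(esno_db, code_rate, bits_per_symbol):
    """Energy per information bit: Eb/No = Es/No - 10*log10(R * bits/symbol)."""
    return esno_db - 10 * math.log10(code_rate * bits_per_symbol)

# QPSK carries 2 bits/symbol; with a rate-1/2 code that is one information
# bit per symbol, so Eb/No equals Es/No.
print(ebno_db(5.0, 0.5, 2))   # -> 5.0
# At R = 3/4 each symbol carries 1.5 information bits, shifting Eb/No down.
print(ebno_db(5.0, 0.75, 2))  # ~3.24 dB
```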

Page 7: A Survey of Advanced FEC Systems


Coding Gain and Distance to Channel Capacity Example

[Figure: BER (= Pe) vs. Eb/No (dB), from 1e-7 up to 0.1, for uncoded QPSK (the "Matched-Filter Bound" performance), Vit-RS at R = 3/4, and Turbo Codes at R = 3/4 w/RS and R = 9/10 w/RS. Capacity limits are marked near 1.62 dB (R = 3/4) and 3.2 dB (R = 9/10); coding gains of ~5.95 dB and ~6.35 dB and distances to capacity of d = ~2.58 dB and d = ~1.4 dB are indicated.]

These curves compare the performance of two Turbo Codes with a concatenated Viterbi-RS system. The TC with R = 9/10 appears to be inferior to the R = ¾ Vit-RS system, but is actually operating closer to capacity.

Page 8: A Survey of Advanced FEC Systems


FEC Historical Pedigree: 1950s-1970s

Shannon's paper - 1948
Hamming defines basic binary codes
BCH codes proposed
Reed and Solomon define ECC technique
Gallager's thesis on LDPCs
Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding
Forney suggests concatenated codes
Viterbi's paper on decoding convolutional codes
Early practical implementations of RS codes for tape and disk drives

Page 9: A Survey of Advanced FEC Systems


FEC Historical Pedigree II: 1980s-2000s

Ungerboeck's TCM paper - 1982
RS codes appear in CD players
First integrated Viterbi decoders (late 1980s)
TCM heavily adopted into standards
Berrou's Turbo Code paper - 1993
Turbo Codes adopted into standards (DVB-RCS, 3GPP, etc.)
Renewed interest in LDPCs due to TC research
LDPC beats Turbo Codes for DVB-S2 standard - 2003

Page 10: A Survey of Advanced FEC Systems


Block Codes

[Figure: a codeword composed of a Data Field followed by Parity.]

Systematic Block Code: if the codeword is constructed by appending redundancy to the payload Data Field, it is called a "systematic" code.

The "parity" portion can be actual parity bits, or generated by some other means, like a polynomial function or a generator matrix. The decoding algorithms differ greatly.

The Code Rate, R, can be adjusted by shortening the data field (using zero padding) or by "puncturing" the parity field.

Generally, a block code is any code defined with a finite codeword length.

Examples of block codes: BCH, Hamming, Reed-Solomon, Turbo Codes, Turbo Product Codes, LDPCs. Essentially all iteratively-decoded codes are block codes.
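Generator-matrix encoding of a systematic block code can be sketched as follows, using the (7,4) Hamming code as a stand-in example (the slide names Hamming codes but does not give this particular matrix, so the P sub-matrix here is an assumed standard choice):

```python
# Systematic encoding with G = [I | P]: codeword = data | parity.
# P is a parity sub-matrix for the (7,4) Hamming code, one row per data bit.
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

def encode(data_bits):
    """Parity bit j is the mod-2 sum of data bits selected by column j of P."""
    parity = [sum(d & P[i][j] for i, d in enumerate(data_bits)) % 2
              for j in range(3)]
    return list(data_bits) + parity

print(encode([1, 0, 1, 1]))  # -> [1, 0, 1, 1, 0, 1, 0]
```

The data field appears unchanged in the first four positions, which is exactly what "systematic" means.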

Page 11: A Survey of Advanced FEC Systems


Convolutional Codes

Convolutional codes are generated using a shift register to apply a polynomial to a stream of data. The resulting code can be systematic if the data is transmitted in addition to the redundancy, but it often isn't.

This is the convolutional encoder for the p = 133/171 polynomial that is in very wide use. This code has a constraint length of K = 7. Some low-data-rate systems use K = 9 for a more powerful code.

This code is naturally R = ½, but deleting selected output bits, or "puncturing" the code, can be done to increase the code rate.

Convolutional codes are typically decoded using the Viterbi algorithm, which increases in complexity exponentially with the constraint length. Alternatively a sequential decoding algorithm can be used, which requires a much longer constraint length for similar performance.

Diagram from [1]
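A minimal sketch of the rate-1/2, K = 7 encoder with the 133/171 (octal) generators named above. Bit-ordering conventions vary between implementations, so this is one valid realization rather than a bit-exact match for any particular standard:

```python
K = 7
G = (0o133, 0o171)  # generator polynomials (octal), as on the slide

def conv_encode(bits):
    reg = 0  # K-bit shift register state
    out = []
    for b in bits:
        reg = (reg >> 1) | (b << (K - 1))  # newest bit shifts in at the top
        for g in G:
            # Each output bit is the parity (mod-2 sum) of the tapped stages.
            out.append(bin(reg & g).count("1") % 2)
    return out

coded = conv_encode([1, 0, 1, 1, 0, 0, 0])
print(len(coded))  # rate 1/2: two output bits per input bit -> 14
```

Puncturing would simply delete selected positions of `out` according to a fixed pattern to raise the rate.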

Page 12: A Survey of Advanced FEC Systems


Convolutional Codes - II

This is the code-trellis, or state diagram, of a K = 2 convolutional code. Each end node represents a code state, and the branches represent codewords selected when a one or a zero is shifted into the encoder.

The correcting power of the code comes from the sparseness of the trellis. Since not all transitions from any one state to any other state are allowed, a state-estimating decoder that looks at the data sequence can estimate the input data bits from the state relationships.

The Viterbi decoder is a Maximum Likelihood Sequence Estimator that estimates the encoder state using the sequence of transmitted codewords.

This provides a powerful decoding strategy, but when it makes a mistake it can lose track of the sequence and generate a stream of errors until it reestablishes code lock.

Diagrams from [1]

Page 13: A Survey of Advanced FEC Systems


Concatenated Codes

A very common and effective code is the concatenation of an inner convolutional code with an outer block code, typically a Reed-Solomon code. The convolutional code is well-suited for channels with random errors, and the Reed-Solomon code is well suited to correct the bursty output errors common with a Viterbi decoder. An interleaver can be used to spread the Viterbi output error bursts across multiple RS codewords.

[Figure: Data -> RS Encoder (outer code) -> Interleaver -> Conv. Encoder (inner code) -> Channel -> Viterbi Decoder -> De-Interleaver -> RS Decoder -> Data.]
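The burst-spreading role of the interleaver can be sketched with a simple block interleaver (write by rows, read by columns); the 4x6 dimensions here are illustrative:

```python
ROWS, COLS = 4, 6  # illustrative interleaver dimensions

def interleave(symbols):
    """Write row-by-row, read column-by-column."""
    return [symbols[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(symbols):
    """Inverse permutation: write column-by-column, read row-by-row."""
    return [symbols[c * ROWS + r] for r in range(ROWS) for c in range(COLS)]

data = list(range(24))
assert deinterleave(interleave(data)) == data  # round trip is lossless

# A 4-symbol burst hits consecutive positions on the channel...
tx = interleave(data)
rx = tx[:8] + ["X"] * 4 + tx[12:]
# ...but lands in 4 different rows (i.e., 4 different RS codewords) after
# de-interleaving, so each codeword sees only one error from the burst.
spread = deinterleave(rx)
print([i // COLS for i, s in enumerate(spread) if s == "X"])  # -> [0, 1, 2, 3]
```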

Page 14: A Survey of Advanced FEC Systems


Concatenating Convolutional Codes: parallel and serial

[Figure, parallel concatenation: Data -> CC Encoder1 and, via an Interleaver, -> CC Encoder2 -> Channel; the receiver uses two Viterbi/APP Decoders with an Interleaver/De-Interleaver between them and a Combiner to produce the data estimate.]

Serial Concatenation:

[Figure, serial concatenation: Data -> CC Encoder1 -> Interleaver -> CC Encoder2 -> Channel -> Viterbi/APP Decoder -> De-Interleaver -> Viterbi/APP Decoder -> Data.]

Page 15: A Survey of Advanced FEC Systems


Iterative Decoding of CCCs

[Figure: Rx Data feeds two Viterbi/APP Decoders that exchange information through an Interleaver and De-Interleaver to produce the data estimate.]

Turbo Codes add coding diversity by encoding the same data twice through concatenation. Soft-output decoders are used, which can provide reliability update information about the data estimates to each other, to be used during a subsequent decoding pass.

The two decoders, each working on a different codeword, can "iterate" and continue to pass reliability update information to each other in order to improve the probability of converging on the correct solution. Once some stopping criterion has been met, the final data estimate is provided for use.

These Turbo Codes provided the first known means of achieving decoding performance close to the theoretical Shannon capacity.

Page 16: A Survey of Advanced FEC Systems


MAP/APP Decoders: Maximum A Posteriori / A Posteriori Probability

- Two names for the same thing
- Basically runs the Viterbi algorithm across the data sequence in both directions; ~doubles complexity
- Becomes a bit estimator instead of a sequence estimator
- Optimal for Convolutional Turbo Codes; needs two passes of MAP/APP per iteration
- Essentially 4x computational complexity over a single-pass Viterbi
- Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a suboptimal simplification compromise
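As background not on the slide itself: log-domain MAP/APP recursions are commonly built on the max* ("Jacobian logarithm") operation, and dropping its correction term yields the max-log-MAP simplification in the same spirit as SOVA. A minimal sketch:

```python
import math

def max_star(a, b):
    """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

print(max_star(1.0, 1.0))   # -> 1 + ln(2) ~ 1.693
print(max_star(5.0, -5.0))  # ~5.0: correction term vanishes for far-apart inputs
```

When the two arguments are far apart, max* collapses to a plain max, which is why the max-log approximation costs so little accuracy at high SNR.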

Page 17: A Survey of Advanced FEC Systems


Turbo Code Performance

Page 18: A Survey of Advanced FEC Systems


Turbo Code Performance II

[Figure: BER vs. Eb/No (dB), from 1e-8 up to 0.1, for Uncoded, Vit-RS at R = 1/2, 3/4, 7/8, and Turbo Codes at R = 1/2, 3/4, 7/8.]

The performance curves shown here were end-to-end measured performance in practical modems. The black lines are a PCCC Turbo Code, and the blue lines are for a concatenated Viterbi-RS decoder. The vertical dashed lines show QPSK capacity for R = ¾ (near 1.629 dB) and R = 7/8 (near 2.864 dB). The capacity for QPSK at R = ½ is 0.2 dB.

The TC system clearly operates much closer to capacity. Much of the observed distance to capacity is due to implementation loss in the modem.

Page 19: A Survey of Advanced FEC Systems


Tricky Turbo Codes

Repeat-Accumulate codes use simple repetition followed by a differential encoder (the accumulator). This enables iterative decoding with extremely simple codes. These types of codes work well in erasure channels.

[Figure: Data -> 1:2 Repeat Section (R = 1/2 outer code) -> Interleaver -> Accumulate Section, a differential encoder built from a delay element D and an adder (R = 1 inner code).]

Since the differential encoder has R = 1, the final code rate is determined by the amount of repetition used.
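The whole R = 1/2 Repeat-Accumulate encoder described above fits in a few lines; the random interleaver permutation here is an illustrative choice:

```python
import random

def ra_encode(bits, perm):
    """1:2 repeat -> interleave -> accumulate (differential encoder)."""
    repeated = [b for b in bits for _ in range(2)]  # 1:2 repeat section
    interleaved = [repeated[p] for p in perm]       # interleaver
    out, acc = [], 0
    for b in interleaved:                           # accumulate section:
        acc ^= b                                    # running mod-2 sum
        out.append(acc)
    return out

data = [1, 0, 1, 1]
perm = list(range(2 * len(data)))
random.seed(0)
random.shuffle(perm)
print(ra_encode(data, perm))  # 8 coded bits for 4 data bits (R = 1/2)
```

The accumulator is rate 1 (one output bit per input bit), so the overall rate is set entirely by the repetition factor, as the slide notes.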

Page 20: A Survey of Advanced FEC Systems


Turbo Product Codes

[Figure: a 2-dimensional data field with horizontal Hamming codes across the rows and vertical Hamming codes down the columns, with parity appended in each dimension.]

The so-called "product codes" are codes created on the independent dimensions of a matrix. A common implementation arranges the data in a 2-dimensional array, and then applies a Hamming code to each row and column as shown. The decoder then iterates between decoding the horizontal and vertical codes.

Since the constituent codes are Hamming codes, which can be decoded simply, the decoder complexity is much less than Turbo Codes. The performance is close to capacity for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs have enjoyed commercial success in streaming satellite applications.
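The two-dimensional idea can be sketched with single-parity-check rows and columns instead of Hamming codes (a deliberate simplification of the slide's construction): a failing row check crossed with a failing column check locates a single bit error.

```python
def parity(bits):
    return sum(bits) % 2

def locate_single_error(matrix):
    """Rows and columns each carry even parity; return (row, col) of a
    single bit error, or None if every check is satisfied."""
    bad_rows = [r for r, row in enumerate(matrix) if parity(row) != 0]
    bad_cols = [c for c in range(len(matrix[0]))
                if parity([row[c] for row in matrix]) != 0]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]
    return None

# 3x3 data field plus appended row/column parity = 4x4 array, all even.
code = [[1, 0, 1, 0],
        [0, 1, 1, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0]]
code[1][2] ^= 1  # inject one bit error
print(locate_single_error(code))  # -> (1, 2)
```

With Hamming constituent codes each row/column can correct an error on its own, and iterating between the two dimensions resolves heavier error patterns.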

Page 21: A Survey of Advanced FEC Systems


Low Density Parity Check Codes

- Iterative decoding of simple parity check codes
- First developed by Gallager, with iterative decoding, in 1962!
- Published examples of good performance with short blocks: Kou, Lin, Fossorier, Trans. IT, Nov. 2001
- Near-capacity performance with long blocks. Very near: Chung, et al, "On the design of low-density parity-check codes within 0.0045dB of the Shannon limit", IEEE Comm. Lett., Feb. 2001
- Complexity issues, especially in the encoder
- Implementation challenges: encoder, decoder memory

Page 22: A Survey of Advanced FEC Systems


LDPC Bipartite Graph

[Figure: an example bipartite graph for an irregular LDPC code, with variable nodes (codeword bits) connected by edges to check nodes.]

Page 23: A Survey of Advanced FEC Systems


Iteration Processing

1st half iteration: at the check nodes (one per parity bit), compute the α's, β's, and r's for each edge, where the q's are the incoming variable-to-check messages:

    α(i+1) = max*(α(i), q(i))
    β(i)   = max*(β(i+1), q(i))
    r(i)   = max*(α(i), β(i+1))

2nd half iteration: at the variable nodes (one per code bit), compute mV and the q's for each edge, where mV0 is the channel input for the bit:

    mV   = mV0 + Σ r(i)
    q(i) = mV - r(i)
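The two-half-iteration schedule above can be sketched with the common min-sum approximation in place of the max*-based check update, on a small parity-check matrix; the (7,4) Hamming H used here is an illustrative stand-in for a genuinely low-density matrix:

```python
# Min-sum message-passing decoder sketch. LLR convention: positive = bit 0.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def decode(llr, max_iters=10):
    m, n = len(H), len(llr)
    edges = [(i, j) for i in range(m) for j in range(n) if H[i][j]]
    q = {e: llr[e[1]] for e in edges}  # variable -> check messages
    for _ in range(max_iters):
        # 1st half: check-node update produces the r messages (min-sum).
        r = {}
        for i, j in edges:
            others = [q[(i, k)] for k in range(n) if H[i][k] and k != j]
            sign = 1
            for v in others:
                sign *= 1 if v >= 0 else -1
            r[(i, j)] = sign * min(abs(v) for v in others)
        # 2nd half: variable-node update, mV = mV0 + sum of incoming r's,
        # and extrinsic messages q(i) = mV - r(i).
        mv = [llr[j] + sum(r[(i, j)] for i in range(m) if H[i][j])
              for j in range(n)]
        for i, j in edges:
            q[(i, j)] = mv[j] - r[(i, j)]
        hard = [0 if v >= 0 else 1 for v in mv]
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0
               for i in range(m)):
            return hard  # all parity checks satisfied
    return hard

# All-zero codeword sent; the channel LLR for bit 0 has the wrong sign.
print(decode([-1.5, 2, 2, 2, 2, 2, 2]))  # corrected back to all zeros
```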

Page 24: A Survey of Advanced FEC Systems


LDPC Performance Example

Figure is from [2].

LDPC performance can be very close to capacity. The closest performance to the theoretical limit ever demonstrated was with an LDPC, within 0.0045dB of capacity.

The code shown here is a high-rate code and is operating within a few tenths of a dB of capacity.

Turbo Codes tend to work best at low code rates and not so well at high code rates. LDPCs work very well at both high and low code rates.

Page 25: A Survey of Advanced FEC Systems


Current State-of-the-Art

Block Codes
- Reed-Solomon widely used in CD-ROM and communications standards; fundamental building block of basic ECC

Convolutional Codes
- K = 7 CC is very widely adopted across many communications standards
- K = 9 appears in some limited low-rate applications (cellular telephones)
- Often concatenated with RS for streaming applications (satellite, cable, DTV)

Turbo Codes
- Limited use due to complexity and latency: cellular and DVB-RCS
- TPCs used in satellite applications for reduced complexity

LDPCs
- Recently adopted in DVB-S2 and ADSL; being considered in 802.11n and 802.16e
- Complexity concerns, especially memory; expect broader consideration

Page 26: A Survey of Advanced FEC Systems


Cited References

[1] http://www.andrew.cmu.edu/user/taon/Viterbi.html
[2] Kou, Lin, Fossorier, "Low-Density Parity-Check Codes Based on Finite Geometries: A Rediscovery and New Results", IEEE Trans. on Information Theory, Vol. 47, No. 7, p. 2711, November 2001

Page 27: A Survey of Advanced FEC Systems


Partial Reference List

TCM
- G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals", IEEE Trans. IT, Vol. IT-28, No. 1, January 1982

BICM
- G. Caire, G. Taricco, and E. Biglieri, "Bit-Interleaved Coded Modulation", IEEE Trans. on IT, May 1998

LDPC
- W. Ryan, "An Introduction to Low Density Parity Check Codes", UCLA Short Course Notes, April 2001
- Kou, Lin, Fossorier, "Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and New Results", IEEE Transactions on Information Theory, Vol. 47, No. 7, November 2001
- R. Gallager, "Low-density parity-check codes", IRE Trans. IT, Jan. 1962
- Chung, et al, "On the design of low-density parity-check codes within 0.0045dB of the Shannon limit", IEEE Comm. Lett., Feb. 2001
- J. Hou, P. Siegel, and L. Milstein, "Performance Analysis and Code Optimization for Low Density Parity-Check Codes on Rayleigh Fading Channels", IEEE JSAC, Vol. 19, No. 5, May 2001
- L. Van der Perre, S. Thoen, P. Vandenameele, B. Gyselinckx, and M. Engels, "Adaptive loading strategy for a high speed OFDM-based WLAN", Globecom 98
- Numerous articles on recent developments in LDPCs, IEEE Trans. on IT, Feb. 2001