
MODERN CODING THEORY – FIGURES


Modern Coding Theory – Figures

BY

T. RICHARDSON AND R. URBANKE

Cambridge University Press


Modern Coding Theory – Figures

Copyright ©2008 by T. Richardson and R. Urbanke

All rights reserved

Library of Congress Catalog Card Number: 00–00000

isbn 0-000-00000-0


LIST OF FIGURES

0.1 Basic point-to-point communications problem.
0.2 Basic point-to-point communications problem in view of the source-channel separation theorem.
0.3 BSC(є).
0.4 Upper and lower bound on δ(r).
0.5 Transmission over the BSC(є).
0.6 Error exponent of block codes (solid line) and of convolutional codes (dashed line) for the BSC(є = 0.11).
0.7 Venn diagram representation of C.
0.8 Encoding corresponding to (x1,x2,x3,x4) = (0,1,0,1). The number of ones contained in each circle must be even. By applying one such constraint at a time the initially unknown components x5, x6, and x7 can be determined. The resulting codeword is (x1,x2,x3,x4,x5,x6,x7) = (0,1,0,1,0,1,0).
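The circle constraints in this encoding example are easy to check mechanically. Below is a minimal sketch; the three parity sets are an assumed assignment consistent with the caption's stated result, not necessarily the exact labeling of the figure.

```python
# Sketch of the Venn-diagram encoding for the [7,4,3] Hamming code.
# Each "circle" lists the indices (0-based) of its information bits
# followed by the index of its parity bit. This assignment reproduces
# the caption's example but is an assumption about the figure's layout.
CIRCLES = [
    (0, 1, 3, 4),  # circle with x1, x2, x4 and parity x5
    (0, 2, 3, 5),  # circle with x1, x3, x4 and parity x6
    (1, 2, 3, 6),  # circle with x2, x3, x4 and parity x7
]

def encode(info):
    """Extend (x1, x2, x3, x4) so every circle holds an even number of ones."""
    x = list(info) + [0, 0, 0]
    for circle in CIRCLES:
        *data, parity = circle
        x[parity] = sum(x[i] for i in data) % 2
    return tuple(x)

print(encode((0, 1, 0, 1)))  # (0, 1, 0, 1, 0, 1, 0), as in the caption
```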

0.9 Decoding corresponding to the received message (0,?,?,1,0,?,0). First we recover x2 = 1 using the constraint implied by the top circle. Next we determine x3 = 0 by resolving the constraint given by the left circle. Finally, using the last constraint, we recover x6 = 1.

0.10 Decoding corresponding to the received message (?,?,0,?,0,1,0). The local decoding fails since none of the three parity-check equations by themselves can resolve any ambiguity.

0.11 E_{LDPC(n x^3, (n/2) x^6)}[P_B^BP(G,є)] as a function of є for n = 2^i, i ∈ [10].

0.12 Left: Factor graph of f given in Example 2.2. Right: Factor graph for the code membership function defined in Example 2.5.

0.13 Generic factorization and the particular instance.
0.14 Generic factorization of g_k and the particular instance.
0.15 Marginalization of the function f from Example 2.2 via message passing. Message passing starts at the leaf nodes. A node that has received messages from all its children processes the messages and forwards the result to its parent node. Bold edges indicate edges along which messages have already been sent.

0.16 Message-passing rules. The top row shows the initialization of the messages at the leaf nodes. The middle row corresponds to the processing rules at the variable and function nodes, respectively. The bottom row explains the final marginalization step.


0.17 Factor graph for the MAP decoding of our running example.
0.18 Left: Standard FG in which each variable node has degree at most 2. Right: Equivalent FSFG. The variables in the FSFG are associated with the edges in the FG.

0.19 Representation of a variable node of degree K as an FG (left) and the equivalent representation as an FSFG (right).

0.20 Standard FG and the corresponding FSFG for the MAP decoding problem of our running example.

0.21 Left: Mapping z = m(x, y). Right: Quantizer y = q(x).
0.22 Binary erasure channel with parameter є.
0.23 For є ≤ δ, the BEC(δ) is degraded with respect to the BEC(є).
0.24 Left: Tanner graph of H given in (3.9). Right: Tanner graph of the [7,4,3] Hamming code corresponding to the parity-check matrix on page 15. This graph is discussed in Example 3.11.

0.25 Function Ψ(y).
0.26 Message-passing decoding of the [7,4,3] Hamming code with the received word y = (0,?,?,1,0,?,0). The vector x̂ denotes the current estimate of the transmitted word x. A 0 message is indicated as a thin line, a 1 message as a thick line, and a ? message is drawn as a dashed line. The four rows correspond to iterations 0 to 3. After the first iteration we recover x2 = 1, after the second x3 = 0, and after the third we know that x6 = 1. The recovered codeword is x̂ = (0,1,0,1,0,1,0).
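The iterative recovery described in this caption amounts to repeatedly solving any parity check with exactly one erased participant. A minimal sketch follows; the check sets are an assumed parity-check assignment for the [7,4,3] Hamming code that reproduces the stated recovery order (x2, then x3, then x6), and need not match the figure exactly.

```python
# Erasure message-passing sketch for the [7,4,3] Hamming code.
# CHECKS is an assumed set of parity checks (0-based positions whose
# bits must sum to 0 mod 2), chosen to reproduce the caption's
# recovery order; the book's figure may label checks differently.
CHECKS = [(0, 1, 3, 4), (0, 2, 3, 5), (1, 2, 3, 6)]

def decode(received):
    """received: tuple of 0, 1, or None (erasure). Resolve what is resolvable."""
    x = list(received)
    progress = True
    while progress:
        progress = False
        for check in CHECKS:
            erased = [i for i in check if x[i] is None]
            if len(erased) == 1:  # exactly one unknown: the check determines it
                i = erased[0]
                x[i] = sum(x[j] for j in check if j != i) % 2
                progress = True
    return tuple(x)

# y = (0, ?, ?, 1, 0, ?, 0): recovers x2 = 1, then x3 = 0, then x6 = 1.
print(decode((0, None, None, 1, 0, None, 0)))  # (0, 1, 0, 1, 0, 1, 0)
```

For the received word of Figure 0.10, (?,?,0,?,0,1,0), every check sees at least two erasures, so the same routine makes no progress, matching the caption.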

0.27 Bit erasure probability of 10 random samples from LDPC(512 x^3, 256 x^6) = LDPC(512, x^2, x^5).

0.28 Computation graph of height 2 (two iterations) for bit x1 and the code C(H) for H given in (3.9). The computation graph of height 2 (two iterations) for edge e is the subtree consisting of edge e, variable node x1, and the two subtrees rooted in check nodes c5 and c9.

0.29 Elements of C1(n, λ(x) = x, ρ(x) = x^2) together with their probabilities P{T ∈ C1(n, x, x^2)} and the conditional probability of error P_b^BP(T,є). Thick lines indicate double edges.

0.30 Examples of basic trees, L(5) and R(7).
0.31 Twelve elements of T1(λ, ρ) together with their probabilities.
0.32 Left: Element of C1(T) for a given tree T ∈ T2. Right: Minimal such element, i.e., an element of C1^min(T). Every check node has either no connected variables of value 1 or exactly two such neighbors. Black and gray circles indicate variables with associated values of 1 or 0, respectively.


0.33 Left: Graphical determination of the threshold for (λ, ρ) = (x^2, x^5). There is one critical point, x^BP ≈ 0.2606 (black dot). Right: Graphical determination of the threshold for the optimized degree distribution described in Example 3.63. There are two critical points, x^BP_{1,2} ≈ 0.1493, 0.3571 (two black dots).

0.34 Left: Graphical determination of the threshold for (λ(x) = x^2, ρ(x) = x^5). The function v_є^{-1}(x) = (x/є)^{1/2} is shown as a dashed line for є = 0.35, є = є^BP ≈ 0.42944, and є = 0.5. The function c(x) = 1 − (1 − x)^5 is shown as a solid line. Right: Evolution of the decoding process for є = 0.35. The initial fraction of erasure messages emitted by the variable nodes is x = 0.35. After half an iteration (at the output of the check nodes) this fraction has evolved to c(x = 0.35) ≈ 0.88397. After one full iteration, i.e., at the output of the variable nodes, we see an erasure fraction of x = v_є(0.88397), i.e., x is the solution to the equation 0.88397 = v_є^{-1}(x). This process continues in the same fashion for each subsequent iteration, corresponding graphically to a staircase function which is bounded below by c(x) and bounded above by v_є^{-1}(x).
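The graphical staircase in this caption is equivalent to iterating the BEC density evolution recursion x_{ℓ+1} = є·λ(1 − ρ(1 − x_ℓ)). A small sketch, assuming the degree pair (λ(x) = x^2, ρ(x) = x^5) from the caption, recovers the threshold є^BP ≈ 0.4294 by bisection:

```python
# Density evolution for the BEC with lambda(x) = x^2, rho(x) = x^5:
#     x_{l+1} = eps * (1 - (1 - x_l)**5)**2
# The BP threshold is the largest eps for which the recursion,
# started at x_0 = eps, converges to zero.

def converges(eps, iters=5000, tol=1e-9):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

def bp_threshold(lo=0.3, hi=0.6, steps=40):
    for _ in range(steps):  # bisection on the convergence property
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(bp_threshold(), 4))  # approximately 0.4294, matching the caption
```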

0.35 EXIT function of the [3,1,3] repetition code, the [6,5,2] parity-check code, and the [7,4,3] Hamming code.

0.36 E_{LDPC(n,x^2,x^5)}[P_b^BP(G,є,ℓ = ∞)] as a function of є for n = 2^i, i = 6,...,20. Also shown is the limit E_{LDPC(∞,x^2,x^5)}[P_b^BP(G,є,ℓ = ∞)], which is discussed in Problem 3.17 (thick curve).

0.37 Peeling decoder applied to the [7,4,3] Hamming code with the received word y = (0,?,?,1,0,?,0). The vector x̂ indicates the current estimate of the decoder of the transmitted codeword x. After three decoding steps the peeling decoder has successfully recovered the codeword.

0.38 Evolution of the residual degrees R_j(y), j = 0,...,6, as a function of the parameter y for the (3,6)-regular degree distribution. The channel parameter is є = є^BP ≈ 0.4294. The curve corresponding to nodes of degree 1 is shown as a thick line.

0.39 Left: BP EXIT function h^BP(є); Right: Corresponding EXIT function h(є) constructed according to Theorem 3.120.

0.40 Left: EBP EXIT curve of the (l = 3, r = 6)-regular ensemble. Note that the curve goes "outside the box" and tends to infinity. Right: According to Lemma 3.128 the gray area is equal to 1 − r(l,r) = l/r = 1/2.


0.41 Left: Because of Theorem 3.120 and Lemma 3.128, at the MAP threshold є^MAP the two dark gray areas are in balance. Middle: The dark gray area is proportional to the total number of variables which the M decoder introduces. Right: The dark gray area is proportional to the total number of equations which are produced during the decoding process and which are used to resolve variables.

0.42 M decoder applied to a (l = 3, r = 6)-regular code of length n = 30.

0.43 Comparison of the number of unresolved variables for the Maxwell decoder applied to the LDPC(n, x^{l−1}, x^{r−1}) ensembles as predicted by Lemma 3.134 with samples for n = 10,000. The asymptotic curves are shown as solid lines, whereas the sample values are printed as dashed lines. The parameters are є = 0.50 (left), є = є^MAP ≈ 0.48815 (middle), and є = 0.46 (right). The parameter є = 0.46 is not covered by Lemma 3.134. Nevertheless, up to the point where the predicted curve dips below zero the experimental data agrees well.

0.44 The subset of variable nodes S = {7, 11, 16} is a stopping set.

0.45 E_{LDPC(n x^3, (n/2) x^6)}[P_B^BP(G,є)] as a function of є for n = 2^i, i ∈ [10].

0.46 P(χ = 0, s_min, є) for the ensemble LDPC(n x^3, (n/2) x^6), where n = 500, 1000, 2000. The dashed curves correspond to the case s_min = 1, whereas the solid curves correspond to the case where s_min was chosen to be 12, 22, and 40, respectively. In each of these cases the expected number of stopping sets of size smaller than s_min is less than 1.

0.47 P(χ = 1, s_min = 12, ℓ, є) for the ensemble LDPC(500 x^3, 250 x^6) and the first 10 iterations (solid curves). Also shown are the corresponding curves of the asymptotic density evolution for the first 10 iterations (dashed curves).

0.48 Probability distribution of the iteration number for the LDPC(n x^3, (n/2) x^6) ensemble, lengths n = 400 (top curve), 600 (middle curve), and 800 (bottom curve), and є = 0.3. The typical number of iterations is around 5, but, e.g., for n = 400, 50 iterations are required with a probability of roughly 10^{-10}.

0.49 Derivation of the recursion for A(v,t,s) for LDPC(Λ(x) = n x^l, P(x) = (n l/r) x^r) and an unbounded number of iterations.


0.50 Scaling of E_{LDPC(n,x^2,x^5)}[P_B(G,є)] for transmission over the BEC(є) and BP decoding. The threshold for this combination is є^BP ≈ 0.4294. The blocklengths/expurgation parameters are n/s = 1024/24, 2048/43, 4096/82, and 8192/147, respectively. The solid curves represent the exact ensemble averages. The dotted curves are computed according to the basic scaling law stated in Theorem 3.151. The dashed curves are computed according to the refined scaling law stated in Conjecture 3.152. The scaling parameters are α = 0.56036 and β/Ω = 0.6169; see Table 3.154.

0.51 Scaling of E_{LDPC(n, λ = (1/6)x + (5/6)x^3, ρ = x^5)}[P_B(G,є)] for transmission over the BEC(є) and BP decoding. The threshold for this combination is є^BP ≈ 0.482803. The blocklengths/expurgation parameters are n/s = 350/14, 700/23, and 1225/35. The solid curves represent the simulated ensemble averages. The dashed curves are computed according to the refined scaling law stated in Conjecture 3.152 with scaling parameters α = 0.5791 and β/Ω = 0.6887. The two curves are almost on top of each other and are hard to distinguish.

0.52 Evolution of n(1 − r)R_1 as a function of the size of the residual graph for several instances of the ensemble LDPC(n, λ(x) = x^2, ρ(x) = x^5) for n = 2048 (left) and n = 8192 (right). The transmission is over the BEC(є = 0.415).

0.53 Growth rate G(ω) for the (3,6) (dashed line) as well as the (2,4) (solid line) ensemble.

0.54 BAWGNC(σ).

0.55 ln coth(|x|/2).

0.56 The L-density a_{BAWGNC(σ)}(y), the D-density a_{BAWGNC(σ)}(y), as well as the corresponding G-density a_{BAWGNC(σ)}(1, y) for σ = 5/4.

0.57 Left: Capacity of the BAWGNC (solid line) and the AWGNC (dashed line) in bits per channel use as a function of E_N/σ^2. Also shown are the asymptotic expansions (dotted) for large and small values of E_N/σ^2 discussed in Problem 4.12. Right: The achievable (white) region for the BAWGNC and r = 1/2 as a function of (Eb/N0)dB.

0.58 Comparison of the kernels |d|a_{BEC(h)}(·) (dashed line) with |d|a_{BSC(h)}(·) (dotted line) and |d|a_{BAWGNC(h)}(·) (solid line) at channel entropy h = 0.1 (left), h = 0.5 (middle), and h = 0.9 (right).


0.59 Performance of Gallager's algorithm A for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10,...,20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. For most simulation points the 95% confidence intervals (see Problem 4.37) are smaller than the dot size. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws (see Section 4.13).

0.60 Performance of the decoder with erasures for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10,...,20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws (see Section 4.13).

0.61 Performance of the BP decoder for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10,...,20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws.

0.62 Evolution of a_ℓ (densities of messages emitted by variable nodes) and b_{ℓ+1} (densities of messages emitted from check nodes) for ℓ = 0, 5, 10, 50, and 140 for the BAWGNC(σ = 0.93) and the code given in Example 4.100. The densities "move to the right," indicating that the error probability decreases as a function of the number of iterations.

0.63 Evolution of P^Gal_{T_ℓ(x^2,x^5)}(є) as a function of ℓ for various values of є. For є = 0.03875, 0.039375, 0.0394531, and 0.039462 the error probability converges to zero, whereas for є = 0.039465, 0.0394922, 0.0395313, and 0.0396875 the error probability converges to a non-zero value. For є ≈ 0.03946365 the error probability stays constant. We conclude that є^Gal(3,6) ≈ 0.03946365. Note that for є > є^Gal(3,6), P^Gal_{T_ℓ(x^2,x^5)}(є) is an increasing function of ℓ, whereas below this threshold it is a decreasing function. In either case, P^Gal_{T_ℓ(x^2,x^5)}(є) is monotone, as guaranteed by Lemma 4.104.
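The threshold є^Gal(3,6) ≈ 0.0394 quoted in this caption can be approximated from the one-dimensional density evolution recursion for Gallager's algorithm A. The recursion below is the standard form for an (l,r)-regular ensemble over the BSC, written from memory rather than taken from the source, so treat its exact shape as an assumption.

```python
# Density evolution for Gallager's algorithm A, (3,6)-regular ensemble,
# transmission over the BSC(eps). One-dimensional recursion for the
# message error probability x_l (standard textbook form, assumed):
#   s       = (1 - 2x)**(r-1)            with r = 6
#   p_plus  = ((1 + s) / 2) ** (l-1)     with l = 3
#   p_minus = ((1 - s) / 2) ** (l-1)
#   x_{l+1} = eps * (1 - p_plus) + (1 - eps) * p_minus

def converges(eps, iters=20000, tol=1e-12):
    x = eps
    for _ in range(iters):
        s = (1.0 - 2.0 * x) ** 5
        p_plus = ((1.0 + s) / 2.0) ** 2
        p_minus = ((1.0 - s) / 2.0) ** 2
        x = eps * (1.0 - p_plus) + (1.0 - eps) * p_minus
        if x < tol:
            return True
    return False

def gallager_a_threshold(lo=0.01, hi=0.10, steps=40):
    for _ in range(steps):  # bisect on "does the recursion drive x to 0?"
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return 0.5 * (lo + hi)

print(round(gallager_a_threshold(), 5))  # close to the quoted 0.03946365
```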


0.64 Evolution of P^BP_{T_ℓ(x^2,x^5)}(σ) as a function of the number of iterations ℓ for various values of σ. For σ = 0.878, 0.879, 0.8795, 0.8798, and 0.88 the error probability converges to zero, whereas for σ = 0.9, 1, 1.2, and 2 the error probability converges to a non-zero value. We see that σ^BP(3,6) ≈ 0.881. Note that, as predicted by Lemma 4.107, P^BP_{T_ℓ(x^2,x^5)}(σ) is a non-increasing function in ℓ.

0.65 Left: f(є,x) − x as a function of x for the (3,3)-regular ensemble and є = є^Gal ≈ 0.22305. Right: f(є,x) − x as a function of x for the (3,6)-regular ensemble and є = 0.037, є = є^Gal ≈ 0.0394, and є = 0.042.

0.66 Progress per iteration (change of error probability) of density evolution for the (3,6)-ensemble and the BAWGNC(σ) channel with σ ≈ 0.881 as a function of the bit error probability. In formulae: we plot E(a_ℓ) − E(a_{ℓ−1}) as a function of P_b = E(a_{BAWGNC(σ)} ⊛ L(ρ(a_{ℓ−1}))), where L(x) = x^3 and ρ(x) = x^5. For cosmetic reasons this discrete set of points was interpolated to form a smooth curve. The initial error probability is equal to Q(1/0.881) ≈ 0.12817. At the fixed point the progress is zero. The associated fixed-point densities are a (emitted at the variable nodes) and b (emitted at the check nodes).

0.67 EXIT function of the [3,1,3] repetition code and the [6,5,2] parity-check code for the BEC (solid curve), the BSC (dashed curve), and also the BAWGNC (dotted curve).

0.68 EXIT function of the (3,6)-regular ensemble on the BAWGN channel. In the left-hand graph the parameter is h ≈ 0.3765 (σ ≈ 0.816), whereas in the right-hand graph we chose h ≈ 0.427 (σ = 0.878).

0.69 Left: For v ∈ [0, 1/2] the function h_2(h_2^{-1}(u)(1 − 2v) + v) is non-decreasing and convex-∩ in u, u ∈ [0,1]. Right: Universal bound applied to the (3,6)-regular ensemble.

0.70 EXIT (solid) and GEXIT (dashed) function of the [n,1,n] repetition code and the [n,n−1,2] parity-check code assuming that transmission takes place over the BSC(h) (left) or the BAWGNC(h) (right), n ∈ {2,3,4,5,6}.

0.71 BP GEXIT curve for several regular LDPC ensembles for the BSC (left) and the BAWGNC (right).

0.72 Left: BP GEXIT function g^BP(h) for the (3,6)-regular ensemble; Right: Corresponding upper bound on the GEXIT function g(h) constructed according to Theorem 4.172.


0.73 Scaling of E_{LDPC(n,x^2,x^5)}[P_B(G,h)] for transmission over the BAWGNC(h) and a quantized version of belief-propagation decoding implemented in hardware. The threshold for this combination is (Eb/N0)dB ≈ 1.19658. The blocklengths n are n = 1000, 2000, 4000, 8000, 16,000, and 32,000, respectively. The solid curves represent the simulated ensemble averages. The dashed curves are computed according to the scaling law of Conjecture 4.176 with scaling parameters α = 0.8694 and β = 5.884. These parameters were fitted to the empirical data.

0.74 Region of convergence for the all-one weight sequence (indicated in gray).
0.75 L-densities a^KSI_{BRAYFC(σ)} (solid curve) and a^USI_{BRAYFC(σ)} (dashed curve) for σ = 1.

0.76 Upper bound on λ′(0)ρ′(1), i.e., 1/B, for the KSI case (solid curve), computed according to (5.1), and for the USI case (dashed curve).

0.77 Capacities C^KSI_{BRAYFC(σ)} (solid curve) and C^USI_{BRAYFC(σ)} (dashed curve) as a function of σ, measured in bits.

0.78 E_{LDPC(n,λ,ρ)}[P_b(G, Eb/N0)] for the optimized ensemble stated in Example 5.6 and transmission over the BRAYF(Eb/N0) with KSI and belief-propagation decoding. As stated in Example 5.6, the threshold for this combination is σ^BP_KSI ≈ 0.8028, which corresponds to (Eb/N0)dB ≈ 1.90785. The blocklengths/expurgation parameters are n/s = 8192/10, 16384/10, and 32768/10, respectively.

0.79 Z channel with parameter є.
0.80 Comparison of C_{ZC}(є) (solid curve) with I_{α=1/2}(X;Y) (dashed curve), both measured in bits.
0.81 FSFG corresponding to (5.13).
0.82 Gilbert-Elliott channel with two states.
0.83 L-densities of density evolution at iteration 1, 2, 4, and 10. The left pictures show the densities of the messages which are passed from the code toward the part of the FSFG which estimates the channel state. The right-hand side shows the density of the messages which are the estimates of the channel state and which are passed to the part of the FSFG corresponding to the code.

0.84 Two specific maps ψ for the 4-PAM constellation.
0.85 Transition probabilities p_{Y|X[1]}(y | x[1]) for σ ≈ 0.342607 as a function of x[1] = 0/1 (solid/dashed). The two cases correspond to the two maps ψ shown in Figure 0.84.


0.86 Multilevel decoding scheme. The two decoding parts correspond to the two parts of (5.25).

0.87 BICM decoding scheme. The two decoding parts correspond to I(X[1];Y) and I(X[2];Y), respectively.

0.88 BAWGNMA channel with two users.
0.89 Capacity region for σ ≈ 0.778. The dominant face D (thick diagonal line) is the set of rate tuples of the capacity region of maximal sum rate.
0.90 FSFG corresponding to decoding on the BAWGNMA channel.

0.91 Four standard signal constellations: 2-PAM (top left), 4-QAM (top right), 8-PSK (bottom left), and 16-QAM (bottom right). In all cases it is assumed that the prior on S is uniform. The signal constellations are scaled so that the average energy per dimension is E.

0.92 Multiple-access binary adder channel.
0.93 Binary systematic recursive convolutional encoder of memory m = 2 and rate one-half defined by G = 7/5. The two square boxes are delay elements. The 7 corresponds to 1 + D + D^2. These are the coefficients of the "forward" branch (the top branch of the filter), with 1 corresponding to the leftmost coefficient. In a similar manner, 5 corresponds to 1 + D^2, which represents the coefficients of the "feedback" branch. Again, the leftmost coefficient corresponds to 1.
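The octal convention in this caption translates directly into a two-element shift register. Below is a sketch of the corresponding rate one-half recursive systematic encoder; the realization is a standard one assumed here, not copied from the figure's wiring.

```python
# Rate one-half recursive systematic convolutional encoder, G = 7/5:
# forward polynomial 1 + D + D^2 (octal 7), feedback 1 + D^2 (octal 5).
# Two delay elements s1, s2; this is an assumed standard realization.

def encode_rsc(u):
    s1 = s2 = 0  # contents of the two delay elements
    systematic, parity = [], []
    for bit in u:
        a = bit ^ s2     # feedback 1 + D^2: tap on the second delay element
        p = a ^ s1 ^ s2  # forward 1 + D + D^2
        systematic.append(bit)
        parity.append(p)
        s1, s2 = a, s1   # shift the register
    return systematic, parity

# A single 1 at the input excites the feedback loop, so the parity
# response is periodic (infinite impulse response): [1, 1, 0, 1, 0, 1, ...]
xs, xp = encode_rsc([1, 0, 0, 0, 0, 0])
print(xs, xp)
```

The infinite impulse response produced by the feedback branch is exactly what makes recursive encoders useful inside turbo codes.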

0.94 FSFG for the MAP decoding of C(G,n).
0.95 Trellis section for the case G = 7/5. There are four states. A dashed/solid line indicates that x^s_i = 0/1 and thin/thick lines indicate that x^p_i = 0/1.
0.96 BCJR algorithm applied to the code C(G = 7/5, n = 5) assuming transmission takes place over the BSC(є = 1/4). The received word is (y^s, y^p) = (1001000, 1111100). The top figure shows the trellis with branch labels corresponding to the received sequence. We have not included the prior p(x^s_i), since it is uniform. The middle and bottom figures show the α- and the β-recursion, respectively. On the very bottom, the estimated sequence is shown.

0.97 Performance of the rate one-half code C(G = 21/37, n = 2^16) over the BAWGNC under optimal bit-wise decoding (BCJR, solid line). Note that (Eb/N0)dB = 10 log10(1/(2rσ^2)). Also shown is the performance under optimal block-wise decoding (Viterbi, dashed line). The two curves overlap almost entirely. Although the performance under the Viterbi algorithm is strictly worse, the difference is negligible.


0.98 Viterbi algorithm applied to the code C(G = 7/5, n = 5) assuming transmission takes place over the BSC(є = 1/4). The received word is (y^s, y^p) = (1001000, 1111100). The top figure shows the trellis with branch labels corresponding to −log10 p(y^s_i | x^s_i) p(y^p_i | x^p_i). Since we have a uniform prior we can take out the constant p(x^s_i). These branch labels are easily derived from Figure 0.96 by applying the function −log10. The bottom figure shows the workings of the Viterbi algorithm. On the very bottom the estimated sequence is shown.

0.99 Encoder for C(G = 21/37, n, π = (π1, π2)), where π1 is the identity permutation.

0.100 Encoder for C(G^o = 21/37, G^i = 21/37, n, π).
0.101 FSFG for the optimum bit-wise decoding of an element of P(G,n).
0.102 E_{P(G=21/37, n, r=1/2)}[P_b(C, Eb/N0)] for an alternating puncturing pattern (identical on both branches), n = 2^11,...,2^16, 50 iterations, and transmission over the BAWGNC(Eb/N0). The arrow indicates the position of the threshold (Eb/N0)^BP_dB ≈ 0.537 (σ^BP ≈ 0.94), which we compute in Section 6.5. The dashed curves are analytic approximations of the error floor discussed in Lemma 6.52.

0.103 Computation graph corresponding to windowed (w = 1) iterative decoding of a parallel concatenated code for two iterations. The black factor nodes indicate the end of the decoding windows and represent the prior which we impose on the boundary states.

0.104 Definition of the maps c = Γ^s_G(a,b) and d = Γ^p_G(a,b). We are given a bi-infinite trellis defined by a rational function G(D). Associated with all systematic variables are iid samples from a density a, whereas the parity bits experience the channel b. The resulting densities of the outgoing messages are denoted by c and d, respectively.

0.105 Evolution of c_ℓ for ℓ = 1,...,25 for the ensemble P(G = 21/37, r = 1/2), an alternating puncturing pattern of the parity bits, and transmission over the BAWGNC(σ). In the left picture σ = 0.93 (Eb/N0 ≈ 0.63 dB). For this parameter the densities keep moving "to the right" toward ∆_∞. In the right picture σ = 0.95 (Eb/N0 ≈ 0.446 dB). For this parameter the densities converge to a fixed-point density.

0.106 EXIT chart method for the ensemble P(G = 21/37, r = 1/2) with alternating puncturing on the BAWGN channel. In the left-hand picture the parameter is σ = 0.93, whereas in the right-hand picture we chose σ = 0.941.


0.107 BP GEXIT curve for the ensemble P(G = 7/5, r = 1/3) assuming that transmission takes place over the BAWGNC(h). The BP and the MAP thresholds coincide and both thresholds are given by the stability condition. We have h^{MAP/BP} ≈ 0.559 (σ^{MAP/BP} ≈ 1.073).

0.108 Exponent (1/n) log2(a_{w,n}) of the regular weight distribution of the code C(G = 7/5, n) as a function of the normalized weight w/n for n = 64, 128, and 256 (dashed curves). Also shown is the asymptotic limit (solid line).

0.109 Exponent (1/n) log2(p_{w,n}) as a function of the normalized weight w/n for the ensemble P(G = 7/5, n, r = 1/3) and n = 64, 128, and 256. The normalization of the weight is with respect to n, not the blocklength.

0.110 Bipartite graph corresponding to the parameters n = 19, d = 2, ∆ = 3, and π = (12,1,4,18,17,4,3,19,5,13,2,6,16,7,15,14,9,8,10). Double edges are indicated by thick lines. The cycle of length 4, formed by (starting on the left) S6 S6 S4 S2, is shown as dashed lines.

0.111 EXIT chart for transmission over the BEC(h ≈ 0.6481) for an asymmetric (big-numerator) parallel concatenated ensemble.

0.112 Alternative view of an encoder for a standard parallel concatenated code.
0.113 Alternative view of the FSFG of a standard parallel concatenated code.
0.114 FSFG of an irregular parallel concatenated turbo code.
0.115 Binary feed-forward convolutional encoder of memory m = 2 and rate one-half defined by (p(D) = 1 + D + D^2, q(D) = 1 + D^2).
0.116 FSFG for the optimal bit-wise decoding of S(G^o, G^i, n, r).
0.117 Tanner graph of a standard irregular LDPC code.
0.118 Encoder for an RA code. Each systematic bit is repeated l times; the resulting vector is permuted and fed into a filter with response 1/(1+D) (accumulate).

0.119 Tanner graph of an RA code with l = 3. . . . . . . . . . . . . . . . . . . 1190.120 Tanner graph corresponding to an IRA code. . . . . . . . . . . . . . . . 1200.121 Tanner graph of an ARA code. . . . . . . . . . . . . . . . . . . . . . . . . 1210.122 Tanner graph of an irregular LDGM code. . . . . . . . . . . . . . . . . . 1220.123 Tanner graph of a simple LDGM code. . . . . . . . . . . . . . . . . . . . 1230.124 Tanner graph of an MN code. . . . . . . . . . . . . . . . . . . . . . . . . . 1240.125 Base graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250.126 Le : m copies of base graph with m = 5. Right: Li ed graph resulting

from applying permutations to the edge clusters. . . . . . . . . . . . . . 1260.127 Ω-network for 8 elements. It has log2(8) = 3 stages, each consisting of

a perfect shue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


0.128 Left: Base graph with a multiple edge between variable node 1 and check node 1. Right: Lifted graph. . . . 128

0.129 FSFG of a simple code over F4 and its associated parity-check matrix H. The primitive polynomial generating F4 is p(z) = 1 + z + z^2. . . . 129

0.130 FSFG of a simple code over F4 and its associated parity-check matrix H. The primitive polynomial generating F4 is p(z) = 1 + z + z^2. . . . 130

0.131 Left: Performance of the (2,3)-regular ensemble over F_{2^m}, m = 1, 2, 3, 4, of binary length 4320 over the BAWGNC(σ). Right: EXIT curves for the (2,3)-regular ensembles over F_{2^m} for m = 1, 2, 3, 4, 5, 6, and transmission over the BEC(є). . . . 131

0.132 EXIT chart for the LDGM ensemble with λ(x) = (1/2)x^4 + (1/2)x^5 and ρ(x) = (2/5)x + (1/5)x^2 + (2/5)x^8 and transmission over the BEC(є = 0.35). . . . 132
0.133 Value of α as a function of γ for l = 2, 3, 4, 5 and r = 6. . . . 133
0.134 H in upper triangular form. . . . 134
0.135 H in approximate upper triangular form. . . . 135
0.136 Greedy algorithm to perform approximate upper triangulation. . . . 136
0.137 Tanner graph corresponding to H0. . . . 137
0.138 Tanner graph after splitting of node 1. . . . 138
0.139 Tanner graph after one round of dual erasure decoding. . . . 139
0.140 1 − z − ρ(1 − λ(z)). . . . 140
0.141 Element chosen uniformly at random from LDPC(1024, λ, ρ), with (λ, ρ) as described in Example A.19, after the application of the greedy algorithm. For the particular experiment we get g = 1. The non-zero elements in the last row (in the gap) are drawn larger to make them more visible. . . . 141

0.142 Element chosen uniformly at random from LDPC(2048, x^2, x^5) after the application of the greedy algorithm. The result is g = 39. . . . 142

0.143 Evolution of the differential equation for the (3,6)-regular ensemble. For u ≈ 0.0247856 we have λ′(0)r = 1, L2 ≈ 0.2585, L3 ≈ 0.6895, and g ≈ 0.01709. . . . 143

0.144 Left: Example K(x), δ = 0.125. Right: Logarithm (base 10) of K(x). . . . 144
0.145 Exponent G(r = 1/2, ω) of the weight distribution of typical elements of G(n, k = n/2) as a function of the normalized weight ω. For w/n ∈ (δGV, 1 − δGV) the number of codewords of weight w in a typical element of G(n, k) is 2^{n(G(r, w/n) + o(1))}. . . . 145


0.146 Left: Graph G from the ensemble LDPC(10, x^2, x^5); Middle: Graph H from the ensemble G7(G, 7) (note that the labels of the sockets are not shown – these labels should be inferred from the order of the connections in the middle figure); the first 7 edges that H has in common with G are drawn in bold; Right: the associated graph ϕ7,30(H). The two dashed lines correspond to the two edges whose endpoints are switched. . . . 146

0.147 Left: Two |D|-distributions |A| (thick line) and |B| (thin line). Right: Since ∫_z^1 |A|(x) dx ≤ ∫_z^1 |B|(x) dx we know that |A| ≺ |B|. . . . 147

0.148 Definition of q on a (zB, zA) pair. . . . 148

Figure 0.1: Basic point-to-point communications problem.

Figure 0.2: Basic point-to-point communications problem in view of the source-channel separation theorem.

Figure 0.3: BSC(є).

Figure 0.4: Upper and lower bound on δ(r) (the curves are labeled GV and Elias).
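The two curves of this figure can be reproduced numerically. The sketch below assumes the standard forms of the bounds — the Gilbert–Varshamov lower bound δGV(r) = h⁻¹(1 − r) and the Elias upper bound δ(r) ≤ 2λ(1 − λ) with h(λ) = 1 − r, where h is the binary entropy function; the function names are ours.

```python
import math

def h2(x):
    """Binary entropy function h2(x) in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h2_inv(y, tol=1e-12):
    """Inverse of h2 on [0, 1/2], where h2 is increasing (bisection)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def delta_gv(r):
    # Gilbert-Varshamov lower bound on delta(r)
    return h2_inv(1 - r)

def delta_elias(r):
    # Elias upper bound: delta <= 2*lam*(1-lam) with h2(lam) = 1 - r
    lam = h2_inv(1 - r)
    return 2 * lam * (1 - lam)

for r in (0.25, 0.5, 0.75):
    print(r, round(delta_gv(r), 4), round(delta_elias(r), 4))
```

For r = 1/2 this gives δGV ≈ 0.110, matching the familiar Gilbert–Varshamov distance at rate one-half.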

Figure 0.5: Transmission over the BSC(є).

Figure 0.6: Error exponent of block codes (solid line) and of convolutional codes (dashed line) for the BSC(є = 0.11).

Figure 0.7: Venn diagram representation of C.

Figure 0.8: Encoding corresponding to (x1, x2, x3, x4) = (0,1,0,1). The number of ones contained in each circle must be even. By applying one such constraint at a time the initially unknown components x5, x6, and x7 can be determined. The resulting codeword is (x1, x2, x3, x4, x5, x6, x7) = (0,1,0,1,0,1,0).
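The constraint-at-a-time procedure described in the caption can be sketched in code. The parity-check matrix H below is an assumption: one choice consistent with the Venn diagram and with the codeword (0,1,0,1,0,1,0) given in the caption.

```python
# Hypothetical parity-check matrix of the [7,4,3] Hamming code, one row
# per circle of the Venn diagram (indices are 0-based: column i is x_{i+1}).
H = [
    [1, 1, 0, 1, 1, 0, 0],  # x1 + x2 + x4 + x5 = 0
    [1, 0, 1, 1, 0, 1, 0],  # x1 + x3 + x4 + x6 = 0
    [0, 1, 1, 1, 0, 0, 1],  # x2 + x3 + x4 + x7 = 0
]

def fill_erasures(H, x):
    """Repeatedly use any check with exactly one unknown bit to solve it."""
    x = list(x)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i, h in enumerate(row) if h and x[i] is None]
            if len(unknown) == 1:
                known_sum = sum(x[i] for i, h in enumerate(row)
                                if h and x[i] is not None) % 2
                x[unknown[0]] = known_sum  # parity in the circle must be even
                progress = True
    return x

# Encoding: the systematic bits are known, the parity bits are "erased".
word = [0, 1, 0, 1, None, None, None]
print(fill_erasures(H, word))  # -> [0, 1, 0, 1, 0, 1, 0]
```

The same routine also performs the erasure decoding of the next figure: simply mark the erased received positions as None.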

Figure 0.9: Decoding corresponding to the received message (0, ?, ?, 1, 0, ?, 0). First we recover x2 = 1 using the constraint implied by the top circle. Next we determine x3 = 0 by resolving the constraint given by the left circle. Finally, using the last constraint, we recover x6 = 1.

Figure 0.10: Decoding corresponding to the received message (?, ?, 0, ?, 0, 1, 0). The local decoding fails since none of the three parity-check equations by themselves can resolve any ambiguity.

Figure 0.11: E_{LDPC(n x^3, (n/2) x^6)}[P_B^BP(G, є)] as a function of є for n = 2^i, i ∈ [10]. The BP threshold є^BP ≈ 0.4294 is marked.

Figure 0.12: Left: Factor graph of f given in Example 2.2. Right: Factor graph for the code membership function defined in Example 2.5.

Figure 0.13: Generic factorization and the particular instance.

Figure 0.14: Generic factorization of gk and the particular instance.

Figure 0.15: Marginalization of function f from Example 2.2 via message passing. Message passing starts at the leaf nodes. A node that has received messages from all its children processes the messages and forwards the result to its parent node. Bold edges indicate edges along which messages have already been sent.

Figure 0.16: Message-passing rules. The top row shows the initialization of the messages at the leaf nodes: a function leaf f sends µ(x) = f(x), and a variable leaf sends µ(x) = 1. The middle row corresponds to the processing rules at the variable and function nodes, respectively: a variable node sends µ(x) = ∏_{k=1}^K µ_k(x), and a function node sends µ(x) = Σ_{x1,…,xJ} f(x, x1, …, xJ) ∏_{j=1}^J µ_j(xj). The bottom row explains the final marginalization step, which forms the product ∏_{k=1}^{K+1} µ_k(x) of all incoming messages.
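The three rules of this figure suffice to compute exact marginals on any tree-shaped factor graph. The sketch below is a minimal recursive implementation; the example factorization f(x1, x2) = g1(x1) g2(x1, x2) and all names are hypothetical.

```python
from itertools import product

def var_to_factor(graph, v, f_parent, x):
    # variable-node rule: product of all incoming factor->variable
    # messages (equals 1 at a leaf variable, matching the top row)
    out = 1.0
    for f in graph['neighbors'][v]:
        if f != f_parent:
            out *= factor_to_var(graph, f, v, x)
    return out

def factor_to_var(graph, f, v_parent, x):
    # function-node rule: sum over the other arguments of f, weighting
    # each term by the incoming variable->factor messages
    others = [u for u in graph['neighbors'][f] if u != v_parent]
    total = 0.0
    for assignment in product(*(graph['domain'][u] for u in others)):
        args = dict(zip(others, assignment))
        args[v_parent] = x
        val = graph['tables'][f](args)
        for u in others:
            val *= var_to_factor(graph, u, f, args[u])
        total += val
    return total

def marginal(graph, v):
    # final step: multiply all incoming messages and normalize
    unnorm = {x: var_to_factor(graph, v, None, x) for x in graph['domain'][v]}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# A tiny hypothetical tree: f(x1, x2) = g1(x1) * g2(x1, x2).
graph = {
    'neighbors': {'x1': ['g1', 'g2'], 'x2': ['g2'],
                  'g1': ['x1'], 'g2': ['x1', 'x2']},
    'domain': {'x1': (0, 1), 'x2': (0, 1)},
    'tables': {'g1': lambda a: 0.9 if a['x1'] == 0 else 0.1,
               'g2': lambda a: 1.0 if a['x1'] == a['x2'] else 0.2},
}
print(marginal(graph, 'x2'))
```

Because the graph is a tree, the recursion terminates and returns the exact marginal, identical to brute-force summation of the factorization.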

Figure 0.17: Factor graph for the MAP decoding of our running example.

Figure 0.18: Left: Standard FG in which each variable node has degree at most 2. Right: Equivalent FSFG. The variables in the FSFG are associated with the edges in the FG.

Figure 0.19: Representation of a variable node of degree K as an FG (left) and the equivalent representation as an FSFG (right).

Figure 0.20: Standard FG and the corresponding FSFG for the MAP decoding problem of our running example.

Figure 0.21: Left: Mapping z = m(x, y). Right: Quantizer y = q(x).

Figure 0.22: Binary erasure channel with parameter є.

Figure 0.23: For є ≤ δ, the BEC(δ) is degraded with respect to the BEC(є).

Figure 0.24: Left: Tanner graph of H given in (3.9). Right: Tanner graph of the [7,4,3] Hamming code corresponding to the parity-check matrix on page 15. This graph is discussed in Example 3.11.

Figure 0.25: Function Ψ(y).

Figure 0.26: Message-passing decoding of the [7,4,3] Hamming code with the received word y = (0, ?, ?, 1, 0, ?, 0). The vector x̂ denotes the current estimate of the transmitted word x. A 0 message is indicated as a thin line, a 1 message as a thick line, and a ? message as a dashed line. The four rows correspond to iterations 0 to 3. After the first iteration we recover x2 = 1, after the second x3 = 0, and after the third we know that x6 = 1. The recovered codeword is x = (0,1,0,1,0,1,0).
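The iteration-by-iteration behavior described in the caption can be reproduced with a parallel (flooding) schedule, in which every message is computed from the estimates of the previous iteration. The parity-check matrix H below is an assumption consistent with the decoding example.

```python
# Hypothetical [7,4,3] Hamming parity-check matrix (0-based columns).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bp_erasure_decode(H, y, max_iter=10):
    """Erasure message passing: a check-to-variable message is known
    only when all other incoming variable-to-check messages are known."""
    est = list(y)  # None marks an erasure ('?')
    for _ in range(max_iter):
        new_est = list(est)
        for row in H:
            members = [i for i, h in enumerate(row) if h]
            for v in members:
                others = [est[u] for u in members if u != v]
                if est[v] is None and None not in others:
                    # the check resolves v to restore even parity
                    new_est[v] = sum(others) % 2
        if new_est == est:
            break  # fixed point: either decoded or stuck
        est = new_est
    return est

y = [0, None, None, 1, 0, None, 0]
print(bp_erasure_decode(H, y))  # -> [0, 1, 0, 1, 0, 1, 0]
```

With this H and schedule the erasures fall in the order x2, x3, x6, one per iteration, mirroring the three rows of progress in the figure.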

Figure 0.27: Bit erasure probability of 10 random samples from LDPC(512 x^3, 256 x^6) = LDPC(512, x^2, x^5).

Figure 0.28: Computation graph of height 2 (two iterations) for bit x1 and the code C(H) for H given in (3.9). The computation graph of height 2 (two iterations) for edge e is the subtree consisting of edge e, variable node x1, and the two subtrees rooted in check nodes c5 and c9.

Figure 0.29: Elements of C1(n, λ(x) = x, ρ(x) = x^2) together with their probabilities P{T ∈ C1(n, x, x^2)} and the conditional probability of error, P_b^BP(T, є). Thick lines indicate double edges.

Figure 0.30: Examples of basic trees, L(5) and R(7).

Figure 0.31: Twelve elements of T1(λ, ρ) together with their probabilities.

Figure 0.32: Left: Element of C1(T) for a given tree T ∈ T2. Right: Minimal such element, i.e., an element of C1_min(T). Every check node has either no connected variables of value 1 or exactly two such neighbors. Black and gray circles indicate variables with associated values of 1 or 0, respectively.

Figure 0.33: Left: Graphical determination of the threshold for (λ, ρ) = (x^2, x^5). There is one critical point, x^BP ≈ 0.2606 (black dot). Right: Graphical determination of the threshold for the optimized degree distribution described in Example 3.63. There are two critical points, x^BP_{1,2} ≈ 0.1493, 0.3571 (two black dots).

Figure 0.34: Left: Graphical determination of the threshold for (λ(x) = x^2, ρ(x) = x^5). The function v_є^{-1}(x) = (x/є)^{1/2} is shown as a dashed line for є = 0.35, є = є^BP ≈ 0.42944, and є = 0.5. The function c(x) = 1 − (1 − x)^5 is shown as a solid line. Right: Evolution of the decoding process for є = 0.35. The initial fraction of erasure messages emitted by the variable nodes is x = 0.35. After half an iteration (at the output of the check nodes) this fraction has evolved to c(x = 0.35) ≈ 0.88397. After one full iteration, i.e., at the output of the variable nodes, we see an erasure fraction of x = v_є(0.88397), i.e., x is the solution to the equation 0.88397 = v_є^{-1}(x). This process continues in the same fashion for each subsequent iteration, corresponding graphically to a staircase function which is bounded below by c(x) and bounded above by v_є^{-1}(x).
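The staircase construction in the caption is the density-evolution recursion x_{ℓ+1} = є λ(1 − ρ(1 − x_ℓ)). A sketch for the (λ(x) = x^2, ρ(x) = x^5) ensemble follows, including a bisection search for the BP threshold; the iteration counts and tolerances are our own choices.

```python
def de_fixed_point(eps, n_iter=100_000, tol=1e-12):
    """Iterate x <- eps * lambda(1 - rho(1 - x)) for the (3,6) ensemble."""
    x = eps  # initial erasure fraction emitted by the variable nodes
    for _ in range(n_iter):
        x_new = eps * (1 - (1 - x) ** 5) ** 2  # one full iteration
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

def bp_threshold(lo=0.0, hi=1.0, tol=1e-5):
    """Largest eps for which the recursion converges to (essentially) 0."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if de_fixed_point(mid) < 1e-6:  # decoding succeeds at mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(de_fixed_point(0.35))      # essentially 0: decoding succeeds
print(round(bp_threshold(), 4))  # close to the quoted 0.4294
```

Below the threshold the recursion squeezes through the narrow tunnel between the two curves of the figure; above it, the iteration stalls at the nonzero fixed point where the curves touch.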

Figure 0.35: EXIT function of the [3,1,3] repetition code, the [6,5,2] parity-check code, and the [7,4,3] Hamming code.

Figure 0.36: E_{LDPC(n, x^2, x^5)}[P_b^BP(G, є, ℓ = ∞)] as a function of є for n = 2^i, i = 6, …, 20. Also shown is the limit E_{LDPC(∞, x^2, x^5)}[P_b^BP(G, є, ℓ = ∞)], which is discussed in Problem 3.17 (thick curve).

Figure 0.37: Peeling decoder applied to the [7,4,3] Hamming code with the received word y = (0, ?, ?, 1, 0, ?, 0). The vector x̂ indicates the current estimate of the decoder of the transmitted codeword x. After three decoding steps the peeling decoder has successfully recovered the codeword. (Panels: initial graph and received word; initialization (i): forward; initialization (ii): accumulate and delete; steps 1.(i)–1.(iii); after step 2; after step 3.)
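A minimal sketch of the peeling decoder: the residual graph is represented by one accumulated parity value plus the set of still-erased variables per check, and each step resolves a check of residual degree 1 and deletes its variable from the graph. The parity-check matrix H is an assumed [7,4,3] Hamming matrix consistent with the example.

```python
# Hypothetical [7,4,3] Hamming parity-check matrix (0-based columns).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def peel(H, y):
    est = list(y)  # None marks an erasure
    # initialization: fold the known bits into each check's parity
    residual = []
    for row in H:
        erased = {i for i, h in enumerate(row) if h and y[i] is None}
        parity = sum(y[i] for i, h in enumerate(row)
                     if h and y[i] is not None) % 2
        residual.append((parity, erased))
    while True:
        deg1 = next((k for k, (_, e) in enumerate(residual)
                     if len(e) == 1), None)
        if deg1 is None:
            break  # success if no erasures remain, failure otherwise
        parity, erased = residual[deg1]
        v = erased.pop()
        est[v] = parity  # the lone erased bit must restore even parity
        # delete v from the rest of the residual graph
        for k, (p, e) in enumerate(residual):
            if v in e:
                e.discard(v)
                residual[k] = (p ^ est[v], e)
    return est

print(peel(H, [0, None, None, 1, 0, None, 0]))  # -> [0, 1, 0, 1, 0, 1, 0]
```

Unlike the flooding decoder, the peeling decoder touches each edge of the residual graph at most once, which is what makes the differential-equation analysis of the residual degrees (next figure) tractable.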

Figure 0.38: Evolution of the residual degrees Rj(y), j = 0, …, 6, as a function of the parameter y for the (3,6)-regular degree distribution. The channel parameter is є = є^BP ≈ 0.4294. The curve corresponding to nodes of degree 1 is shown as a thick line.

Figure 0.39: Left: BP EXIT function h^BP(є). Right: Corresponding EXIT function h(є) constructed according to Theorem 3.120.

Figure 0.40: Left: EBP EXIT curve of the (l = 3, r = 6)-regular ensemble. Note that the curve goes “outside the box” and tends to infinity. Right: According to Lemma 3.128 the gray area is equal to 1 − r(l, r) = l/r = 1/2.

Figure 0.41: Left: Because of Theorem 3.120 and Lemma 3.128, at the MAP threshold є^MAP the two dark gray areas are in balance. Middle: The dark gray area is proportional to the total number of variables which the M decoder introduces. Right: The dark gray area is proportional to the total number of equations which are produced during the decoding process and which are used to resolve variables.

Figure 0.42: M decoder applied to a (l = 3, r = 6)-regular code of length n = 30. The panels show: (i) decoding bit 1 from check 1; (ii) decoding bit 10 from check 5; (iii) decoding bit 11 from check 8; (iv) introducing variable v2; (v) introducing variable v6; (vi) decoding bit 28 (v2 + v6) from check 6; (vii) decoding bit 19 (v6) from check 14; (viii) introducing variable v12; (ix) decoding bit 30 (v6 + v12 = v12) from checks 11 and 15, giving v6 = 0; (x) decoding bit 24 (v12) from check 3; (xi) decoding bit 23 (v2 + v12) from check 4; (xii) decoding bit 21 (v2 + v12) from check 7; (xiii) decoding bit 29 (v12 = v2 + v12) from checks 2 and 9, giving v2 = 0; (xiv) decoding bit 26 (v12 = 0) from checks 10, 12, and 13, giving v12 = 0.

Figure 0.43: Comparison of the number of unresolved variables for the Maxwell decoder applied to the LDPC(n, x^{l−1}, x^{r−1}) ensembles as predicted by Lemma 3.134 with samples for n = 10,000. The asymptotic curves are shown as solid lines, whereas the sample values are printed as dashed lines. The parameters are є = 0.50 (left), є = є^MAP ≈ 0.48815 (middle), and є = 0.46 (right). The parameter є = 0.46 is not covered by Lemma 3.134. Nevertheless, up to the point where the predicted curve dips below zero the experimental data agrees well.

Figure 0.44: The subset of variable nodes S = {7, 11, 16} is a stopping set.

Figure 0.45: E_{LDPC(n x^3, (n/2) x^6)}[P_B^BP(G, є)] as a function of є for n = 2^i, i ∈ [10]. The BP threshold є^BP ≈ 0.4294 is marked.

Figure 0.46: P(χ = 0, s_min, є) for the ensemble LDPC(n x^3, (n/2) x^6), where n = 500, 1000, 2000. The dashed curves correspond to the case s_min = 1, whereas the solid curves correspond to the case where s_min was chosen to be 12, 22, and 40, respectively. In each of these cases the expected number of stopping sets of size smaller than s_min is less than 1.

Figure 0.47: P(χ = 1, s_min = 12, ℓ, є) for the ensemble LDPC(500 x^3, 250 x^6) and the first 10 iterations (solid curves). Also shown are the corresponding curves of the asymptotic density evolution for the first 10 iterations (dashed curves).

Figure 0.48: Probability distribution of the iteration number for the LDPC(n x^3, (n/2) x^6) ensemble, lengths n = 400 (top curve), 600 (middle curve), and 800 (bottom curve) and є = 0.3. The typical number of iterations is around 5, but, e.g., for n = 400, 50 iterations are required with a probability of roughly 10^{−10}.

Figure 0.49: Derivation of the recursion for A(v, t, s) for LDPC(Λ(x) = n x^l, P(x) = (nl/r) x^r) and an unbounded number of iterations.

Figure 0.50: Scaling of E_{LDPC(n, x^2, x^5)}[P_B(G, є)] for transmission over the BEC(є) and BP decoding. The threshold for this combination is є^BP ≈ 0.4294. The blocklength/expurgation parameters are n/s = 1024/24, 2048/43, 4096/82, and 8192/147, respectively. The solid curves represent the exact ensemble averages. The dotted curves are computed according to the basic scaling law stated in Theorem 3.151. The dashed curves are computed according to the refined scaling law stated in Conjecture 3.152. The scaling parameters are α = 0.56036 and β/Ω = 0.6169; see Table 3.154.

Figure 0.51: Scaling of E_{LDPC(n, λ = (1/6)x + (5/6)x^3, ρ = x^5)}[P_B(G, є)] for transmission over the BEC(є) and BP decoding. The threshold for this combination is є^BP ≈ 0.482803. The blocklength/expurgation parameters are n/s = 350/14, 700/23, and 1225/35. The solid curves represent the simulated ensemble averages. The dashed curves are computed according to the refined scaling law stated in Conjecture 3.152 with scaling parameters α = 0.5791 and β/Ω = 0.6887. The two curves are almost on top of each other and are hard to distinguish.



Figure 0.52: Evolution of n(1 − r)R₁ as a function of the size of the residual graph for several instances of the ensemble LDPC(n, λ(x) = x², ρ(x) = x⁵) for n = 2048 (left) and n = 8192 (right). The transmission is over the BEC(є = 0.415).


(Plot: G(ω) versus ω; annotations: ω ≈ 0.0227 and the line ω log₂ 3.)

Figure 0.53: Growth rate G(ω) for the (3,6) (dashed line) as well as the (2,4) (solid line) ensemble.


Figure 0.54: BAWGNC(σ): Yt = Xt + Zt, with Zt ∼ N(0, σ²).


Figure 0.55: ln coth(|x|/2) as a function of x.
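A small side observation (not from the figure itself, but a well-known property of this kernel): on (0, ∞) the map f(x) = ln coth(x/2) is its own inverse, so applying it twice recovers the input. The sketch below evaluates the function and checks the involution:

```python
from math import log, tanh

def ln_coth_half(x):
    # f(x) = ln coth(|x|/2), the function plotted in Figure 0.55;
    # coth(y) = 1/tanh(y).
    return log(1.0 / tanh(abs(x) / 2.0))

# f is an involution on (0, infinity): f(f(x)) = x.
# It is also strictly decreasing there, e.g. f(0.1) > f(1.0).
```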



Figure 0.56: The L-density aBAWGNC(σ)(y), the D-density aBAWGNC(σ)(y), as well as the corresponding G-density aBAWGNC(σ)(1, y), for σ = 5/4.


Figure 0.57: Left: Capacity of the BAWGNC (solid line) and the AWGNC (dashed line) in bits per channel use as a function of EN/σ². Also shown are the asymptotic expansions (dotted) for large and small values of EN/σ² discussed in Problem 4.12. Right: The achievable (white) region for the BAWGNC and r = 1/2 as a function of (Eb/N0)dB.



Figure 0.58: Comparison of the kernels |d|aBEC(h)(·) (dashed line) with |d|aBSC(h)(·) (dotted line) and |d|aBAWGNC(h)(·) (solid line) at channel entropy h = 0.1 (left), h = 0.5 (middle), and h = 0.9 (right).



Figure 0.59: Performance of Gallager's algorithm A for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10, …, 20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. For most simulation points the 95% confidence intervals (see Problem 4.37) are smaller than the dot size. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws (see Section 4.13).



Figure 0.60: Performance of the decoder with erasures for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10, …, 20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws (see Section 4.13).



Figure 0.61: Performance of the BP decoder for the (3,6)-regular ensemble when transmission takes place over the BSC. The blocklengths are n = 2^i, i = 10, …, 20. The left-hand graph shows the block error probability, whereas the right-hand graph concerns the bit error probability. The dots correspond to simulations. The lines correspond to the analytic approximation of the waterfall curves based on scaling laws.



Figure 0.62: Evolution of aℓ (densities of messages emitted by variable nodes) and bℓ+1 (densities of messages emitted from check nodes) for ℓ = 0, 5, 10, 50, and 140 for the BAWGNC(σ = 0.93) and the code given in Example 4.100. The densities "move to the right," indicating that the error probability decreases as a function of the number of iterations.



Figure 0.63: Evolution of PGal_Tℓ(x²,x⁵)(є) as a function of ℓ for various values of є. For є = 0.03875, 0.039375, 0.0394531, and 0.039462 the error probability converges to zero, whereas for є = 0.039465, 0.0394922, 0.0395313, and 0.0396875 the error probability converges to a non-zero value. For є ≈ 0.03946365 the error probability stays constant. We conclude that єGal(3,6) ≈ 0.03946365. Note that for є > єGal(3,6), PGal_Tℓ(x²,x⁵)(є) is an increasing function of ℓ, whereas below this threshold it is a decreasing function. In either case, PGal_Tℓ(x²,x⁵)(є) is monotone, as guaranteed by Lemma 4.104.
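The two regimes described in the caption can be reproduced with the standard density-evolution recursion for Gallager's algorithm A on the BSC. The sketch below uses the usual one-dimensional recursion for an (l, r)-regular ensemble, with x the probability that a variable-to-check message is in error:

```python
def gallager_a_step(x, eps, l=3, r=6):
    # One density-evolution iteration for Gallager's algorithm A, BSC(eps).
    y = (1.0 - (1.0 - 2.0 * x) ** (r - 1)) / 2.0  # check-to-variable error prob.
    # A variable node sends an erroneous message if its received value is
    # wrong and not all l-1 incoming messages are correct, or if it is
    # right and all l-1 incoming messages are wrong.
    return eps * (1.0 - (1.0 - y) ** (l - 1)) + (1.0 - eps) * y ** (l - 1)

def de_limit(eps, iters=3000):
    # Iterate density evolution starting from x_0 = eps.
    x = eps
    for _ in range(iters):
        x = gallager_a_step(x, eps)
    return x
```

Running de_limit just below and just above єGal(3,6) ≈ 0.03946365 reproduces the convergent and non-convergent cases listed in the caption.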



Figure 0.64: Evolution of PBP_Tℓ(x²,x⁵)(σ) as a function of the number of iterations ℓ for various values of σ. For σ = 0.878, 0.879, 0.8795, 0.8798, and 0.88 the error probability converges to zero, whereas for σ = 0.9, 1, 1.2, and 2 the error probability converges to a non-zero value. We see that σBP(3,6) ≈ 0.881. Note that, as predicted by Lemma 4.107, PBP_Tℓ(x²,x⁵)(σ) is a non-increasing function in ℓ.


Figure 0.65: Left: f(є, x) − x as a function of x for the (3,3)-regular ensemble and є = єGal ≈ 0.22305. Right: f(є, x) − x as a function of x for the (3,6)-regular ensemble and є = 0.037, є = єGal ≈ 0.0394, and є = 0.042.



Figure 0.66: Progress per iteration (change of error probability) of density evolution for the (3,6)-ensemble and the BAWGNC(σ) channel with σ ≈ 0.881 as a function of the bit error probability. In formulae: we plot E(aℓ) − E(aℓ−1) as a function of Pb = E(aBAWGNC(σ) ⊛ L(ρ(aℓ−1))), where L(x) = x³ and ρ(x) = x⁵. For cosmetic reasons this discrete set of points was interpolated to form a smooth curve. The initial error probability is equal to Q(1/0.881) ≈ 0.12817. At the fixed point the progress is zero. The associated fixed point densities are a (emitted at the variable nodes) and b (emitted at the check nodes).
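The quoted initial error probability is easy to verify. Assuming BPSK transmission of the all-one codeword over the BAWGNC(σ), the channel output is 1 + Z with Z ∼ N(0, σ²), so a bit is initially in error when the output is negative, i.e., with probability Q(1/σ):

```python
from math import erfc, sqrt

def q_func(x):
    # Q(x) = P(Z > x) for a standard normal Z.
    return 0.5 * erfc(x / sqrt(2.0))

sigma = 0.881
p_init = q_func(1.0 / sigma)  # initial bit error probability, about 0.12817
```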



Figure 0.67: EXIT function of the [3,1,3] repetition code and the [6,5,2] parity-check code for the BEC (solid curve), the BSC (dashed curve), and also theBAWGNC (dotted curve).



Figure 0.68: EXIT function of the (3,6)-regular ensemble on the BAWGN channel. In the left-hand graph the parameter is h ≈ 0.3765 (σ ≈ 0.816), whereas in the right-hand graph we chose h ≈ 0.427 (σ = 0.878).



Figure 0.69: Left: For v ∈ [0, 1/2] the function h₂(h₂⁻¹(u)(1 − 2v) + v) is non-decreasing and convex-∩ in u, u ∈ [0, 1]. Right: Universal bound applied to the (3,6)-regular ensemble.



Figure 0.70: EXIT (solid) and GEXIT (dashed) function of the [n, 1, n] repetition code and the [n, n−1, 2] parity-check code assuming that transmission takes place over the BSC(h) (left) or the BAWGNC(h) (right), n ∈ {2, 3, 4, 5, 6}.



Figure 0.71: BP GEXIT curve for several regular LDPC ensembles for the BSC (left) and the BAWGNC (right).



Figure 0.72: Left: BP GEXIT function gBP(h) for the (3,6)-regular ensemble. Right: Corresponding upper bound on the GEXIT function g(h) constructed according to Theorem 4.172.



Figure 0.73: Scaling of E_LDPC(n, x², x⁵)[PB(G, h)] for transmission over the BAWGNC(h) and a quantized version of belief propagation decoding implemented in hardware. The threshold for this combination is (Eb/N0)dB ≈ 1.19658. The blocklengths n are n = 1000, 2000, 4000, 8000, 16,000, and 32,000, respectively. The solid curves represent the simulated ensemble averages. The dashed curves are computed according to the scaling law of Conjecture 4.176 with scaling parameters α = 0.8694 and β = 5.884. These parameters were fitted to the empirical data.



Figure 0.74: Region of convergence for the all-one weight sequence (indicated in gray).



Figure 0.75: L-densities aKSIBRAYFC(σ) (solid curve) and aUSIBRAYFC(σ) (dashed curve) for σ = 1.



Figure 0.76: Upper bound on λ′(0)ρ′(1), i.e., 1/B, for the KSI case (solid curve), computed according to (5.1), and for the USI case (dashed curve).



Figure 0.77: Capacities CKSIBRAYFC(σ) (solid curve) and CUSIBRAYFC(σ) (dashed curve) as a function of σ, measured in bits.



Figure 0.78: E_LDPC(n, λ, ρ)[Pb(G, Eb/N0)] for the optimized ensemble stated in Example 5.6 and transmission over the BRAYF(Eb/N0) with KSI and belief-propagation decoding. As stated in Example 5.6, the threshold for this combination is σBP_KSI ≈ 0.8028, which corresponds to (Eb/N0)dB ≈ 1.90785. The blocklengths/expurgation parameters n/s are n = 8192/10, 16384/10, and 32768/10, respectively.


Figure 0.79: Z channel with parameter є.



Figure 0.80: Comparison of CZC(є) (solid curve) with Iα=1/2(X; Y) (dashed curve), both measured in bits.
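Both curves can be reproduced numerically. The sketch below assumes one common parameterization of the Z channel (one input received noiselessly, the other flipped with probability є — an assumption; the book's orientation is shown in Figure 0.79); α is the prior placed on the noisy input, and the capacity is found by brute-force search over α:

```python
from math import log2

def h2(p):
    # Binary entropy function (bits).
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def mi_z(alpha, eps):
    # I(X; Y) = H(Y) - H(Y|X) for the assumed Z-channel orientation:
    # the noisy input (prior alpha) is flipped with probability eps.
    return h2(alpha * (1.0 - eps)) - alpha * h2(eps)

def cap_z(eps, steps=20000):
    # Capacity via brute-force maximization over the input prior.
    return max(mi_z(i / steps, eps) for i in range(steps + 1))
```

The dashed curve in the figure corresponds to mi_z(0.5, є); the gap between it and cap_z(є) is exactly what the comparison illustrates.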



Figure 0.81: FSFG corresponding to (5.13).



Figure 0.82: Gilbert-Elliott channel with two states.



Figure 0.83: L-densities of density evolution at iterations 1, 2, 4, and 10. The left-hand pictures show the densities of the messages which are passed from the code toward the part of the FSFG which estimates the channel state. The right-hand side shows the density of the messages which are the estimates of the channel state and which are passed to the part of the FSFG corresponding to the code.


(Signal points −3/√5, −1/√5, 1/√5, 3/√5, labeled 00, 10, 11, 01 under the first map and 00, 10, 01, 11 under the second.)

Figure 0.84: Two specific maps ψ for the 4-PAM constellation.



Figure 0.85: Transition probabilities pY|X[1](y | x[1]) for σ ≈ 0.342607 as a function of x[1] = 0/1 (solid/dashed). The two cases correspond to the two maps ψ shown in Figure 0.84.



Figure 0.86: Multilevel decoding scheme. The two decoding parts correspond to the two parts of (5.25).



Figure 0.87: BICM decoding scheme. The two decoding parts correspond to I(X[1]; Y) and I(X[2]; Y), respectively.



Figure 0.88: BAWGNMA channel with two users: Yt = X[1]t + X[2]t + Zt, with Zt ∼ N(0, σ²).



Figure 0.89: Capacity region for σ ≈ 0.778. The dominant face D (thick diagonal line) is the set of rate tuples of the capacity region of maximal sum rate.



Figure 0.90: FSFG corresponding to decoding on the BAWGNMA channel.


2-PAM: S = {±√E}. 4-QAM: S = {(±√E, ±√E)}. 8-PSK: S = {√(2E)(cos(iπ/4), sin(iπ/4)), i = 0, …, 7}. 16-QAM: S = {√(E/5)(i, j), i, j ∈ {−3, −1, 1, 3}}.

Figure 0.91: Four standard signal constellations: 2-PAM (top left), 4-QAM (top right), 8-PSK (bottom left), and 16-QAM (bottom right). In all cases it is assumed that the prior on S is uniform. The signal constellations are scaled so that the average energy per dimension is E.
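The stated normalization can be checked directly: with the scalings given in the figure, each constellation has average energy exactly E per real dimension under a uniform prior. A small check, taking E = 1:

```python
from math import cos, sin, sqrt, pi

E = 1.0
pam2 = [(s * sqrt(E),) for s in (-1, 1)]
qam4 = [(a * sqrt(E), b * sqrt(E)) for a in (-1, 1) for b in (-1, 1)]
psk8 = [(sqrt(2 * E) * cos(i * pi / 4), sqrt(2 * E) * sin(i * pi / 4))
        for i in range(8)]
qam16 = [(i * sqrt(E / 5), j * sqrt(E / 5))
         for i in (-3, -1, 1, 3) for j in (-3, -1, 1, 3)]

def energy_per_dim(points):
    # Average energy per real dimension, uniform prior on the points.
    dims = len(points[0])
    return sum(sum(c * c for c in p) for p in points) / (len(points) * dims)
```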



Figure 0.92: Multiple-access binary adder channel.



Figure 0.93: Binary systematic recursive convolutional encoder of memory m = 2 and rate one-half defined by G = 7/5. The two square boxes are delay elements. The 7 corresponds to 1 + D + D². These are the coefficients of the "forward" branch (the top branch of the filter), with 1 corresponding to the leftmost coefficient. In a similar manner, 5 corresponds to 1 + D², which represents the coefficients of the "feedback" branch. Again, the leftmost coefficient corresponds to 1.
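The encoder can be sketched in a few lines. The realization below is one standard implementation of this rate one-half recursive systematic encoder (the exact register layout is an assumption, but the transfer function is the G = 7/5 of the caption: feedback 1 + D², forward 1 + D + D²):

```python
def rsc_encode_7_5(bits):
    # Recursive systematic convolutional encoder, G = 7/5 (octal), m = 2.
    s1 = s2 = 0                    # the two delay elements
    systematic, parity = [], []
    for u in bits:
        w = u ^ s2                 # feedback taps: 5 (octal) = 1 + D^2
        p = w ^ s1 ^ s2            # forward taps:  7 (octal) = 1 + D + D^2
        systematic.append(u)
        parity.append(p)
        s1, s2 = w, s1             # shift the register
    return systematic, parity
```

A quick sanity check: the parity response to a single 1 is the power series of (1 + D + D²)/(1 + D²) over GF(2), i.e., 1 + D + D³ + D⁵ + …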



Figure 0.94: FSFG for the MAP decoding of C(G,n).



Figure 0.95: Trellis section for the case G = 7/5. There are four states. A dashed/solid line indicates that xsi = 0/1, and thin/thick lines indicate that xpi = 0/1.


(Trellis branch labels and the α- and β-recursion values; estimated sequence: (1,1) (0,1) (1,1) (1,1) (0,1) (0,0) (0,0).)

Figure 0.96: BCJR algorithm applied to the code C(G = 7/5, n = 5) assuming transmission takes place over the BSC(є = 1/4). The received word is equal to (ys, yp) = (1001000, 1111100). The top figure shows the trellis with branch labels corresponding to the received sequence. We have not included the prior p(xsi), since it is uniform. The middle and bottom figures show the α- and the β-recursion, respectively. On the very bottom, the estimated sequence is shown.



Figure 0.97: Performance of the rate one-half code C(G = 21/37, n = 2^16) over the BAWGNC under optimal bit-wise decoding (BCJR, solid line). Note that (Eb/N0)dB = 10 log10(1/(2rσ²)). Also shown is the performance under optimal block-wise decoding (Viterbi, dashed line). The two curves overlap almost entirely. Although the performance under the Viterbi algorithm is strictly worse, the difference is negligible.
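The conversion quoted in the caption is easy to apply. A small helper, matching the stated relation for a rate-r code on the BAWGNC(σ):

```python
from math import log10

def ebn0_db(sigma, r=0.5):
    # (Eb/N0)_dB = 10 * log10(1 / (2 * r * sigma^2)).
    return 10.0 * log10(1.0 / (2.0 * r * sigma * sigma))
```

For example, at rate 1/2 a noise level of σ = 0.94 corresponds to about 0.537 dB, the threshold quoted in Figure 0.102.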


(Trellis branch metrics and cumulative path metrics; estimated sequence: (1,1) (0,1) (1,1) (1,1) (0,1) (0,1) (0,0).)

Figure 0.98: Viterbi algorithm applied to the code C(G = 7/5, n = 5) assuming transmission takes place over the BSC(є = 1/4). The received word is (ys, yp) = (1001000, 1111100). The top figure shows the trellis with branch labels corresponding to − log10 p(ysi | xsi)p(ypi | xpi). Since we have a uniform prior we can take out the constant p(xsi). These branch labels are easily derived from Figure 0.96 by applying the function − log10. The bottom figure shows the workings of the Viterbi algorithm. On the very bottom the estimated sequence is shown.
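The branch labels are straightforward to compute: on the BSC(є = 1/4) each trellis branch carries two bits (systematic and parity), so the metric depends only on how many of them agree with the received pair. A sketch:

```python
from math import log10

def branch_metric(ys, yp, xs, xp, eps=0.25):
    # -log10 of the branch likelihood on the BSC(eps) for one trellis step
    # (systematic bit ys/xs and parity bit yp/xp).
    agree = (ys == xs) + (yp == xp)
    return -log10((1.0 - eps) ** agree * eps ** (2 - agree))
```

For є = 1/4 the three possible values are −log10(9/16) ≈ 0.25, −log10(3/16) ≈ 0.727, and −log10(1/16) ≈ 1.204, exactly the labels appearing in the figure.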



Figure 0.99: Encoder for C(G = 21/37, n, π = (π1, π2)), where π1 is the identity permutation.


Figure 0.100: Encoder for C(Go = 21/37, Gi = 21/37, n, π).



Figure 0.101: FSFG for the optimum bit-wise decoding of an element of P(G,n).



Figure 0.102: E_P(G=21/37, n, r=1/2)[Pb(C, Eb/N0)] for an alternating puncturing pattern (identical on both branches), n = 2^11, …, 2^16, 50 iterations, and transmission over the BAWGNC(Eb/N0). The arrow indicates the position of the threshold (Eb/N0)BP_dB ≈ 0.537 (σBP ≈ 0.94), which we compute in Section 6.5. The dashed curves are analytic approximations of the error floor discussed in Lemma 6.52.


Figure 0.103: Computation graph corresponding to windowed (w = 1) iterative decoding of a parallel concatenated code for two iterations. The black factor nodes indicate the end of the decoding windows and represent the prior which we impose on the boundary states.



Figure 0.104: Definition of the maps c = ΓsG(a, b) and d = ΓpG(a, b). We are given a bi-infinite trellis defined by a rational function G(D). Associated with all systematic variables are iid samples from a density a, whereas the parity bits experience the channel b. The resulting densities of the outgoing messages are denoted by c and d, respectively.



Figure 0.105: Evolution of cℓ for ℓ = 1, …, 25 for the ensemble P(G = 21/37, r = 1/2), an alternating puncturing pattern of the parity bits, and transmission over the BAWGNC(σ). In the left picture σ = 0.93 (Eb/N0 ≈ 0.63 dB). For this parameter the densities keep moving "to the right" toward ∆∞. In the right picture σ = 0.95 (Eb/N0 ≈ 0.446 dB). For this parameter the densities converge to a fixed point density.



Figure 0.106: EXIT chart method for the ensemble P(G = 21/37, r = 1/2) with alternating puncturing on the BAWGN channel. In the left-hand picture the parameter is σ = 0.93, whereas in the right-hand picture we chose σ = 0.941.



Figure 0.107: BP GEXIT curve for the ensemble P(G = 7/5, r = 1/3) assuming that transmission takes place over the BAWGNC(h). The BP and the MAP thresholds coincide and both are given by the stability condition. We have hMAP/BP ≈ 0.559 (σMAP/BP ≈ 1.073).


Figure 0.108: Exponent (1/n) log2(aw,n) of the regular weight distribution of the code C(G = 7/5, n) as a function of the normalized weight w/n for n = 64, 128, and 256 (dashed curves). Also shown is the asymptotic limit (solid line).


Figure 0.109: Exponent (1/n) log2(pw,n) as a function of the normalized weight w/n for the ensemble P(G = 7/5, n, r = 1/3) and n = 64, 128, and 256. The normalization of the weight is with respect to n, not the blocklength.


S1 = {2, 4, 6}, S2 = {8, 10, 12}, S3 = {14, 16, 18}, S4 = {1, 3, 5}, S5 = {7, 9, 11}, S6 = {13, 15, 17, 19}

Figure 0.110: Bipartite graph corresponding to the parameters n = 19, d = 2, Δ = 3, and π = (12, 1, 4, 18, 17, 4, 3, 19, 5, 13, 2, 6, 16, 7, 15, 14, 9, 8, 10). Double edges are indicated by thick lines. The cycle of length 4, formed by (starting on the left) S6 – S6 – S4 – S2, is shown as dashed lines.


Figure 0.111: EXIT chart for transmission over the BEC(h ≈ 0.6481) for an asymmetric (big-numerator) parallel concatenated ensemble. [Axes h run from 0.0 to 1.0; the threshold hBP ≈ 0.6481 is marked.]


[Diagram blocks: repeat, interleaver π, and G(D); signals xs and xp.]

Figure 0.112: Alternative view of an encoder for a standard parallel concatenated code.


Figure 0.113: Alternative view of the FSFG of a standard parallel concatenated code.


[Diagram: variable nodes grouped by degree 2, degree 3, ..., degree dlmax, connected through the interleaver π.]

Figure 0.114: FSFG of an irregular parallel concatenated turbo code.


[Diagram: input xs feeds a shift register with two delay elements D; outputs xp and xq.]

Figure 0.115: Binary feed-forward convolutional encoder of memory m = 2 and rate one-half defined by (p(D) = 1 + D + D², q(D) = 1 + D²).
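The encoder of Figure 0.115 is small enough to state directly in code. The following is an illustrative sketch (the function name is ours, not the book's): the input stream xs drives a two-element shift register, and the two outputs are the F2 convolutions with p(D) = 1 + D + D² and q(D) = 1 + D².

```python
# Sketch of the encoder of Figure 0.115 (function name is ours): the input
# stream xs drives a two-element shift register; the outputs are the F2
# convolutions with p(D) = 1 + D + D^2 and q(D) = 1 + D^2.

def conv_encode(xs):
    """Return the two output streams (xp, xq) for the input bits xs."""
    s1 = s2 = 0                 # contents of the two delay elements D
    xp, xq = [], []
    for b in xs:
        xp.append(b ^ s1 ^ s2)  # tap pattern of p(D) = 1 + D + D^2
        xq.append(b ^ s2)       # tap pattern of q(D) = 1 + D^2
        s1, s2 = b, s1          # shift
    return xp, xq
```

Feeding in a single 1 followed by zeros returns the impulse responses (1, 1, 1, 0, ...) and (1, 0, 1, 0, ...), i.e. the coefficient sequences of p(D) and q(D).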


Figure 0.116: FSFG for the optimal bit-wise decoding of S(Go, Gi, n, r).


[Diagram: check nodes grouped by degree 2, degree 3, ..., degree rmax; variable nodes grouped by degree 2, degree 3, ..., degree lmax; both sides connected through the permutation π.]

Figure 0.117: Tanner graph of a standard irregular LDPC code.


[Diagram: xs passes through repeat, permute (π), and accumulate (1/(1 + D)) to produce xp.]

Figure 0.118: Encoder for an RA code. Each systematic bit is repeated l times; the resulting vector is permuted and fed into a filter with response 1/(1 + D) (accumulate).
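The repeat–permute–accumulate chain in the caption can be sketched as follows (an illustrative transcription, with the repetition factor l and the permutation passed in as parameters; names are ours):

```python
# Sketch of the RA encoder of Figure 0.118 (names are ours; the repetition
# factor l and the permutation are passed in): repeat, permute, accumulate.

def ra_encode(xs, l, perm):
    """xs: systematic bits; perm: a permutation of range(len(xs) * l)."""
    repeated = [b for b in xs for _ in range(l)]   # repeat each bit l times
    permuted = [repeated[perm[i]] for i in range(len(repeated))]
    xp, acc = [], 0
    for b in permuted:    # filter 1/(1 + D): running XOR of the inputs
        acc ^= b
        xp.append(acc)
    return xp
```

With the identity permutation, xp is simply the running parity of the repeated systematic bits.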


[Diagram: systematic part and parity part, connected through the permutation π.]

Figure 0.119: Tanner graph of an RA code with l = 3.


[Diagram: variable nodes grouped by degree 2, degree 3, ..., degree lmax, connected through the permutation π.]

Figure 0.120: Tanner graph corresponding to an IRA code.


Figure 0.121: Tanner graph of an ARA code.


[Diagram: information word (punctured) and codeword, connected through the permutation π.]

Figure 0.122: Tanner graph of an irregular LDGM code.


Information bits u1, u2, u3, u4 and generator equations: x1 = u1 + u4, x2 = u1, x3 = u3, x4 = u1 + u2 + u3, x5 = u3, x6 = u2 + u4, x7 = u3 + u4.

Figure 0.123: Tanner graph of a simple LDGM code.
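Since the figure lists the generator equations explicitly, they can be transcribed directly (a sketch; the function name is ours):

```python
# Direct transcription (function name is ours) of the generator equations
# listed in Figure 0.123, with all sums taken over F2.

def ldgm_encode(u1, u2, u3, u4):
    return [u1 ^ u4,       # x1
            u1,            # x2
            u3,            # x3
            u1 ^ u2 ^ u3,  # x4
            u3,            # x5
            u2 ^ u4,       # x6
            u3 ^ u4]       # x7
```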


[Diagram: information word (punctured) and codeword, with a permutation of the LDPC part and a permutation of the LDGM part.]

Figure 0.124: Tanner graph of an MN code.


[Diagram: nodes labeled 1–3 on one side and 1–4 on the other.]

Figure 0.125: Base graph.


[Left panel: m copies of the base graph, with one edge cluster highlighted. Right panel: lifted base graph = m-cover.]

Figure 0.126: Left: m copies of base graph with m = 5. Right: Lifted graph resulting from applying permutations to the edge clusters.
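The lifting operation described in the caption (replace each base edge by a cluster of m edges and apply a permutation within each cluster) can be sketched as follows; the edge list and permutations below are illustrative, not the book's example:

```python
# Sketch of the lifting step of Figure 0.126 (edge list and permutations
# are illustrative, not the book's example): every base edge becomes a
# cluster of m edges, permuted within the cluster.

def lift(base_edges, perms, m):
    """base_edges: (variable, check) pairs; perms: one permutation of
    range(m) per base edge. Returns the edge list of the m-cover."""
    lifted = []
    for (v, c), sigma in zip(base_edges, perms):
        for i in range(m):
            # copy i of variable v connects to copy sigma[i] of check c
            lifted.append(((v, i), (c, sigma[i])))
    return lifted
```

With identity permutations this yields m disjoint copies of the base graph (the left panel); non-trivial permutations give the lifted graph (the right panel).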


[Diagram: network connecting inputs 0–7 to outputs 0–7.]

Figure 0.127: Ω-network for 8 elements. It has log2(8) = 3 stages, each consisting of a perfect shuffle.
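One stage of the Ω-network is a perfect shuffle. Under the standard indexing (an assumption about the figure's wiring, not stated in the caption), position i is routed to 2i mod (n − 1), with position n − 1 fixed:

```python
# Sketch of one stage of the Omega-network of Figure 0.127, assuming the
# standard perfect-shuffle wiring: position i is routed to 2*i mod (n - 1),
# with the last position n - 1 fixed.

def perfect_shuffle(n):
    """Destination of each of the n positions under one shuffle stage."""
    return [(2 * i) % (n - 1) if i < n - 1 else i for i in range(n)]
```

For n = 8 the wiring of the three stages composes to the identity, since 2³ ≡ 1 (mod 7), matching the log2(8) = 3 stages of the figure.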


[Left panel: base graph with one multiple edge. Right panel: lifted base graph.]

Figure 0.128: Left: Base graph with a multiple edge between variable node 1 and check node 1. Right: Lifted graph.


Variables x1 = (x11, x12), x2 = (x21, x22), x3 = (x31, x32), x4 = (x41, x42), with check equations H1,1 x1 + H1,2 x2 = 0, H2,3 x3 + H2,4 x4 = 0, and H3,1 x1 + H3,2 x2 + H3,4 x4 = 0. The parity-check matrix is

H = [ H1,1  H1,2  0     0
      0     0     H2,3  H2,4
      H3,1  H3,2  0     H3,4 ]
  = [ 1     1+z   0     0
      0     0     z     z
      1+z   z     0     1 ]

Figure 0.129: FSFG of a simple code over F4 and its associated parity-check matrix H. The primitive polynomial generating F4 is p(z) = 1 + z + z².


Binary variables x11, x12, x21, x22, x31, x32, x41, x42, with check equations
x11 + x21 + x22 = 0
x12 + x21 = 0
x32 + x42 = 0
x31 + x32 + x41 + x42 = 0
x11 + x12 + x22 + x41 = 0
x11 + x21 + x22 + x42 = 0
The parity-check matrix is

H = [ 1 0 1 1 0 0 0 0
      0 1 1 0 0 0 0 0
      0 0 0 0 0 1 0 1
      0 0 0 0 1 1 1 1
      1 1 0 1 0 0 1 0
      1 0 1 1 0 0 0 1 ]

Figure 0.130: FSFG of a simple code over F4 and its associated parity-check matrix H. The primitive polynomial generating F4 is p(z) = 1 + z + z².
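The binary matrix here is the image of the F4 matrix of Figure 0.129 under the map that replaces each F4 entry by its 2×2 multiplication matrix over F2. A sketch verifying this (the variable names are ours):

```python
# Sketch (variable names are ours): verify that the binary H of Figure 0.130
# is the image of the F4 matrix of Figure 0.129 under the companion-matrix
# map for p(z) = 1 + z + z^2. Writing a + b*z as the coordinate pair (a, b),
# multiplication by z sends (a, b) to (b, a + b).

I2 = [[1, 0], [0, 1]]   # the F4 element 1
C  = [[0, 1], [1, 1]]   # the F4 element z
IC = [[1, 1], [1, 0]]   # the F4 element 1 + z  (= I2 + C over F2)
Z2 = [[0, 0], [0, 0]]   # the F4 element 0

# F4 parity-check matrix of Figure 0.129, block by block:
# [[1, 1+z, 0, 0], [0, 0, z, z], [1+z, z, 0, 1]]
H4 = [[I2, IC, Z2, Z2], [Z2, Z2, C, C], [IC, C, Z2, I2]]

# Expand every 2x2 block to obtain the 6x8 binary parity-check matrix.
H2 = [[H4[r][c][i][j] for c in range(4) for j in range(2)]
      for r in range(3) for i in range(2)]
```

Expanding the blocks reproduces exactly the six binary rows listed above.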


[Left plot: Pb (10⁻⁵ to 10⁻²) versus (Eb/N0) in dB (−4.0 to 0.0), one curve per m = 1, 2, 3, 4. Right plot: EXIT curves on axes h from 0.0 to 1.0, with єSha = 2/3 marked.]

Figure 0.131: Left: Performance of the (2,3)-regular ensemble over F2^m, m = 1, 2, 3, 4, of binary length 4320 over the BAWGNC(σ). Right: EXIT curves for the (2,3)-regular ensembles over F2^m for m = 1, 2, 3, 4, 5, 6, and transmission over the BEC(є).


Figure 0.132: EXIT chart for the LDGM ensemble with λ(x) = (1/2)x⁴ + (1/2)x⁵ and ρ(x) = (2/5)x + (1/5)x² + (2/5)x⁸ and transmission over the BEC(є = 0.35). [Axes run from 0.0 to 1.0.]


Figure 0.133: Value of α as a function of γ for l = 2, 3, 4, 5 and r = 6. [Axes: γ from 0.1 to 0.5, α from 0.2 to 0.8; the outer curves are labeled l = 2 and l = 5.]


[Diagram: matrix of height n − k; the first n − k columns carry a unit diagonal with zeros below it, followed by k further columns.]

Figure 0.134: H in upper triangular form.


[Diagram: matrix split into blocks A, B (top part) and E, C, D (bottom g rows, the gap); column widths n − k − g, g, and k; the top part of height n − k − g carries a unit diagonal with zeros below it.]

Figure 0.135: H in approximate upper triangular form.


Initialize: Set H0 = H and t = g = 0. Go to Continue.

Continue: If t = n − k − g, then stop and output Ht. Otherwise, if the minimum residual degree is 1, go to Extend; else go to Choose.

Extend: Choose uniformly at random a column c of residual degree 1 in Ht. Let r be the row (in the range [t + 1, n − k − g]) of Ht that contains the (residual) non-zero entry in column c. Swap column c with column t + 1 and row r with row t + 1. (This places the non-zero element at position (t + 1, t + 1), extending the diagonal by 1.) Call the resulting matrix Ht+1. Increase t by 1 and go to Continue.

Choose: Choose uniformly at random a column c in Ht with minimum positive residual degree; call this degree d. Let r1, r2, ..., rd denote the rows of Ht in the range [t + 1, n − k − g] which contain the d residual non-zero entries in column c. Swap column c with column t + 1. Swap row r1 with row t + 1 and move rows r2, r3, ..., rd to the bottom of the matrix. Call the resulting matrix Ht+1. Increase t by 1 and increase g by d − 1. Go to Continue.

Figure 0.136: Greedy algorithm to perform approximate upper triangulation.
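The procedure can be transcribed into code. The sketch below replaces the algorithm's uniformly random choices with deterministic "first eligible" picks so that it is reproducible, and handles Extend as the d = 1 case of Choose; H is a 0/1 list of lists with n − k rows:

```python
# Sketch of the greedy triangulation of Figure 0.136. The algorithm's
# uniformly random choices are replaced by deterministic "first eligible"
# picks, and Extend is handled as the d = 1 case of Choose. H is a 0/1
# list of lists with n - k rows; the function mutates and returns it.

def greedy_triangulate(H):
    """Bring H into approximate upper triangular form; return (H, gap g)."""
    m, n = len(H), len(H[0])
    t, g = 0, 0
    while t < m - g:
        # residual degree of column c: ones among the active rows t .. m-g-1
        deg = {c: sum(H[r][c] for r in range(t, m - g)) for c in range(t, n)}
        positive = {c: d for c, d in deg.items() if d > 0}
        if not positive:
            break  # no column with a residual non-zero entry
        d = min(positive.values())                     # minimum residual degree
        c = min(cc for cc, dd in positive.items() if dd == d)
        rows = [r for r in range(t, m - g) if H[r][c]]
        for row in H:                                  # swap column c with column t
            row[c], row[t] = row[t], row[c]
        H[rows[0]], H[t] = H[t], H[rows[0]]            # swap row r1 with row t
        for r in sorted(rows[1:], reverse=True):       # empty when d = 1 (Extend)
            H.append(H.pop(r))                         # move surplus rows to the gap
        g += d - 1
        t += 1
    return H, g
```

On a matrix whose columns all have residual degree 1 at every step the gap stays 0; each degree-d Choose step grows the gap by d − 1, as in the figure.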


[Diagram: variable nodes 1–12 and check nodes 1–6.]

Figure 0.137: Tanner graph corresponding to H0.


[Diagram: variable nodes 1a, 1b, 1c, 2–12 and check nodes 1–6.]

Figure 0.138: Tanner graph after splitting of node 1.


[Diagram: variable nodes 1a, 1b, 1c, 2–12 and check nodes 1–6.]

Figure 0.139: Tanner graph after one round of dual erasure decoding.


Figure 0.140: 1 − z − ρ(1 − λ(z)). [Plot: z from 0.25 to 1.0, values from 0.0 to 0.2.]


Figure 0.141: Element chosen uniformly at random from LDPC(1024, λ, ρ), with (λ, ρ) as described in Example A.19, after the application of the greedy algorithm. For the particular experiment we get g = 1. The non-zero elements in the last row (in the gap) are drawn larger to make them more visible.


Figure 0.142: Element chosen uniformly at random from LDPC(2048, x², x⁵) after the application of the greedy algorithm. The result is g = 39.


[Plot: curves λ′(0)r, L2, L3, and 100g versus u, with u from 0.00 to 0.020 and values from 0.25 to 1.50.]

Figure 0.143: Evolution of the differential equation for the (3,6)-regular ensemble. For u ≈ 0.0247856 we have λ′(0)r = 1, L2 ≈ 0.2585, L3 ≈ 0.6895, and g ≈ 0.01709.


[Left plot: x from −2.0 to 2.0, values from −0.4 to 0.8. Right plot: x from −7 to 7, values from −16 to 4.]

Figure 0.144: Left: Example K(x), δ = 0.125. Right: Logarithm (base 10) of K(x).


[Plot: G(r = 1/2, ω) versus ω from 0 to 1.0, with ωGV ≈ 0.11 marked.]

Figure 0.145: Exponent G(r = 1/2, ω) of the weight distribution of typical elements of G(n, k = n/2) as a function of the normalized weight ω. For w/n ∈ (δGV, 1 − δGV) the number of codewords of weight w in a typical element of G(n, k) is 2^{n(G(r, w/n) + o(1))}.


[Diagram: three graphs, each with variable nodes 1–10 and check nodes 1–5.]

Figure 0.146: Left: Graph G from the ensemble LDPC(10, x², x⁵); Middle: Graph H from the ensemble G7(G, 7) (note that the labels of the sockets are not shown – these labels should be inferred from the order of the connections in the middle figure); the first 7 edges that H has in common with G are drawn in bold; Right: the associated graph ϕ7,30(H). The two dashed lines correspond to the two edges whose endpoints are switched.


Figure 0.147: Left: Two |D|-distributions |A| (thick line) and |B| (thin line), plotted on axes from 0.0 to 1.0 with the points zA and zB marked. Right: Since ∫_z^1 |A|(x) dx ≤ ∫_z^1 |B|(x) dx we know that |A| ≺ |B|.


[Diagram: channel parameters є1, є2 and transition probabilities 1 − є1, 1 − є2, acting on the values zA and zB.]

Figure 0.148: Definition of q on the (zB, zA) pair.