

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 4, APRIL 2012 2303

Windowed Decoding of Protograph-Based LDPC Convolutional Codes Over Erasure Channels

Aravind R. Iyengar, Student Member, IEEE, Marco Papaleo, Member, IEEE, Paul H. Siegel, Fellow, IEEE, Jack Keil Wolf, Life Fellow, IEEE, Alessandro Vanelli-Coralli, and Giovanni E. Corazza, Senior Member, IEEE

Abstract—We consider a windowed decoding scheme for LDPC convolutional codes that is based on the belief-propagation (BP) algorithm. We discuss the advantages of this decoding scheme and identify certain characteristics of LDPC convolutional code ensembles that exhibit good performance with the windowed decoder. We will consider the performance of these ensembles and codes over erasure channels with and without memory. We show that the structure of LDPC convolutional code ensembles is suitable to obtain performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and windowed decoding. However, the same structure imposes limitations on the performance over erasure channels with memory.

Index Terms—Belief propagation, convolutional codes, decoding thresholds, erasure channels, iterative decoding, low-density parity-check codes, stopping sets, windowed decoding.

I. INTRODUCTION

LOW-DENSITY parity-check (LDPC) codes, although introduced in the early 1960s [4], were established as state-of-the-art codes only in the late 1990s with the application of statistical inference techniques [5] to graphical models representing these codes [6], [7]. The promising results from LDPC block codes encouraged the development of convolutional codes defined by sparse parity-check matrices. LDPC convolutional codes (LDPC-CC) were first introduced in [8]. Ensembles of LDPC-CC have several attractive characteristics, such as thresholds approaching capacity with belief-propagation (BP) decoding [9], and BP thresholds close to the maximum a posteriori (MAP) thresholds of random ensembles with the same degree distribution [10]. Whereas irregular LDPC block codes have also been shown to have BP thresholds close to capacity [11], the advantage with convolutional counterparts is that good performance is achieved by relatively simple regular ensembles. Also, the construction of finite-length codes from LDPC-CC ensembles can be readily optimized to ensure desirable properties, e.g., large girths and fewer cycles, using well-known techniques of LDPC code design. Most of these attractive features of LDPC-CC are pronounced when the blocklengths are large. However, BP decoding for these long codes might be computationally impractical. By implementing a windowed decoder, one can get around this problem.

In this paper, a windowed decoding scheme brought to the attention of the authors by Liva [12] is considered. This scheme exploits the convolutional structure of the parity-check matrix of the LDPC-CC to decode nonterminated codes, while maintaining many of the key advantages of iterative decoding schemes like the BP decoder, especially the low complexity and superior performance. Note that although similar decoding schemes were proposed in [13] and [14], the aim in these papers was not to reduce the decoding latency or complexity. When used to decode terminated (block) LDPC-CC, the windowed decoder provides a simple, yet efficient way to trade off decoding performance for reduced latency. Moreover, the proposed scheme provides the flexibility to set and change the decoding latency on the fly. This proves to be an extremely useful feature when the scheme is used to decode codes over upper layers of the internet protocol.

Our contributions in this paper are to study the requirements of LDPC-CC ensembles for good performance over erasure channels with windowed decoding (WD). We are interested in identifying characteristics of ensembles that present a good performance-latency trade-off. Further, we seek to find such ensembles that are able to withstand not just random erasures but also long bursts of erasures. We reiterate that we will be interested in designing ensembles that have the aforementioned properties, rather than designing codes themselves. Although the channels considered here are erasure channels, we note that the WD scheme can be used when the transmission happens over any channel.

This paper is organized as follows. Section II introduces LDPC-CC and the notation and terminology that will be used throughout the paper. In Section III we describe the decoding algorithms that will be considered. Along with a brief description of the belief-propagation algorithm, we will introduce the windowed decoding scheme that is based on BP. Possible variants of the scheme will also be discussed. Section IV deals with the performance of LDPC-CC on the binary erasure channel. Starting with a short recapitulation of known results for BP decoding, we will discuss the asymptotic analysis of the WD scheme in detail. Finite-length analysis will include performance evaluation using simulations that reinforce the observations made in the analysis. For erasure channels with memory, we analyze LDPC-CC ensembles both in the asymptotic setting and for finite lengths in Section V. We also include simulations illustrating the good performance of codes derived from the designed protographs over the Gilbert-Elliott channel. Finally, we summarize our findings in Section VI.

Manuscript received October 21, 2010; revised October 25, 2011; accepted October 27, 2011. Date of publication November 23, 2011; date of current version March 13, 2012. The work of A. R. Iyengar is supported by the National Science Foundation under Grant CCF-0829865. The material in this paper was presented in part at the 2010 IEEE Information Theory Workshop, Cairo, Egypt [1]; the 2010 IEEE International Communications Conference [2]; and at the 2010 International Symposium on Turbo Codes and Iterative Information Processing [3].

A. R. Iyengar and P. H. Siegel are with the Department of Electrical and Computer Engineering and the Center for Magnetic Recording Research, University of California, San Diego, La Jolla, CA 92093 USA (e-mail: [email protected]; [email protected]).

M. Papaleo is with Qualcomm Inc., San Diego, CA USA (e-mail: [email protected]).

J. K. Wolf, deceased, was with the Department of Electrical and Computer Engineering and the Center for Magnetic Recording Research, University of California, San Diego, La Jolla, CA 92093 USA.

A. Vanelli-Coralli and G. E. Corazza are with the University of Bologna, DEIS-ARCES, 2-40136 Bologna, Italy (e-mail: [email protected]; [email protected]).

Communicated by I. Sason, Associate Editor for Coding Theory.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIT.2011.2177439

0018-9448/$26.00 © 2011 IEEE

II. LDPC CONVOLUTIONAL CODES

In the following, we will define LDPC-CC, give a construction starting from protographs, and discuss various ways of specifying ensembles of these codes.

A. Definition

A rate $R = b/c$ binary, time-varying LDPC-CC is defined as the set of semi-infinite binary row vectors $\mathbf{v}$ satisfying $\mathbf{v}\mathbf{H}^{T} = \mathbf{0}$, where $\mathbf{H}$ is the parity-check matrix

$$
\mathbf{H} = \begin{bmatrix}
H_0(0) & & & & \\
H_1(1) & H_0(1) & & & \\
\vdots & \vdots & \ddots & & \\
H_{m_s}(m_s) & H_{m_s-1}(m_s) & \cdots & H_0(m_s) & \\
& H_{m_s}(m_s+1) & \cdots & H_1(m_s+1) & H_0(m_s+1) \\
& & \ddots & & \ddots
\end{bmatrix} \qquad (1)
$$

and $\mathbf{0}$ is the semi-infinite all-zero row vector. The elements $H_i(t)$, $i = 0, 1, \ldots, m_s$, in (1) are binary matrices of size $(c-b) \times c$ that satisfy [15]
• $H_i(t) = \mathbf{0}$ for $i < 0$ and $i > m_s$, $\forall\, t$;
• $\exists\, t$ such that $H_{m_s}(t) \neq \mathbf{0}$;
• $H_0(t)$ has full rank $\forall\, t$.

The parameter $m_s$ is called the memory of the code and $\nu_s = (m_s + 1)c$ is referred to as the constraint length. The first two conditions above guarantee that the code has memory $m_s$ and the third condition ensures that the parity-check matrix is full rank. In order to get sparse graph codes, the Hamming weight of each column of $\mathbf{H}$ must be very low, i.e., much smaller than the constraint length $\nu_s$.

Based on the matrices $H_i(t)$, LDPC-CC can be classified as follows [8]. An LDPC-CC is said to be periodic with period $T$ if $H_i(t) = H_i(t + T)$ for all $i$ and $t$, and for some $T > 1$. When $T = 1$, the LDPC-CC is said to be time-invariant, in which case the time dependence can be dropped from the notation, i.e., $H_i(t) = H_i$, $i = 0, 1, \ldots, m_s$. If neither of these conditions holds, it is said to be time-varying.

Terminated LDPC-CC have a finite parity-check matrix

$$
\mathbf{H}_{[0,L-1]} = \begin{bmatrix}
H_0(0) & & \\
\vdots & \ddots & \\
H_{m_s}(m_s) & & H_0(L-1) \\
& \ddots & \vdots \\
& & H_{m_s}(L-1+m_s)
\end{bmatrix}
$$

where we say that the convolutional code has been terminated after $L$ instants. Such a code is said to be $(J,K)$-regular if $\mathbf{H}_{[0,L-1]}$ has exactly $J$ 1's in every column and $K$ 1's in every row, excluding the first and the last $m_s(c-b)$ rows, i.e., ignoring the terminated portion of the code. It follows that for a given rate, the parity-check matrix can be made sparse by increasing $m_s$ or $c$ or both, leading to different code constructions [16]. In this paper, we will consider LDPC-CC characterized by large $c$ and small $m_s$. As in [9], we will focus on $(J,K)$-regular LDPC-CC which can be constructed from a protograph.

B. Protograph-Based LDPC-CC

A protograph [17] is a relatively small bipartite graph from which a larger graph can be obtained by a copy-and-permute procedure: the protograph is copied $N$ times, and then the edges of the individual replicas are permuted among the $N$ replicas to obtain a single, large bipartite graph referred to as the derived graph. We will refer to $N$ as the expansion factor. $N$ is also referred to as the lifting factor in the literature [11].

Suppose the protograph possesses $n_v$ variable nodes (VNs) and $n_c$ check nodes (CNs), with degrees $d_{v_j}$, $j = 1, \ldots, n_v$, and $d_{c_i}$, $i = 1, \ldots, n_c$, respectively. Then the derived graph will consist of $N n_v$ VNs and $N n_c$ CNs. The nodes of the protograph are labeled so that if the VN $v_j$ is connected to the CN $c_i$ in the protograph, then $v_j$ in a replica can only connect to one of the $N$ replicated $c_i$'s.

Protographs can be represented by means of an $n_c \times n_v$ biadjacency matrix $\mathbf{B}$, called the base matrix of the protograph, where the entry $B_{ij}$ represents the number of edges between CN $c_i$ and VN $v_j$ (a nonnegative integer, since parallel edges are permitted). The degrees of the VNs (CNs, respectively) of the protograph are then equal to the sums of the corresponding columns (rows, respectively) of $\mathbf{B}$. A $(J,K)$-regular protograph-based code is then one with a base matrix where all VNs have degree $J$ and all CNs, excluding those in the terminated portion of the code, have degree $K$.

In terms of the base matrix, the copy-and-permute operation is equivalent to replacing each entry $B_{ij}$ in the base matrix with the sum of $B_{ij}$ distinct size-$N$ permutation matrices. This replacement is done ensuring that the degrees are maintained, e.g., a 2 in the matrix $\mathbf{B}$ is replaced by a matrix $P_1 + P_2$, where $P_1$ and $P_2$ are two permutation matrices of size $N$ chosen to ensure that each row and column of $P_1 + P_2$ has two ones. The resulting matrix after the above transformation for each element of $\mathbf{B}$, which is the biadjacency matrix of the derived graph, corresponds to the parity-check matrix of the code. The derived graph therefore is nothing but the Tanner graph corresponding to the parity-check matrix of the code.

For different values of the expansion factor $N$, different blocklengths of the derived Tanner graph can be achieved, keeping the original graph structure imposed by the protograph. We can hence think of protographs as defining code ensembles that are themselves subsets of random LDPC code ensembles. We will henceforth refer to a protograph and the ensemble it represents interchangeably. This means that the density evolution analysis for the ensemble of codes represented by the protograph can be performed within the protograph. Furthermore, the structure imposed by a protograph on the derived graph can be exploited to design fast decoders and efficient encoders. Protographs give the code designer a refined control on the derived graph edge connections, facilitating good code design.

Analogous to LDPC block codes, LDPC-CC can also be derived by a protograph expansion. As for block codes, the parity-check matrices of these convolutional codes are composed of blocks of size-$N$ square matrices. We now give two constructions of $(J,K)$-regular LDPC-CC ensembles.

1) Classical Construction: We briefly describe the construction introduced in [18]. For convenience, we will refer to this construction as the classical construction of $(J,K)$-regular LDPC-CC ensembles. Let $a$ be the greatest common divisor (gcd) of $J$ and $K$. Then there exist positive integers $J'$ and $K'$ such that $J = aJ'$, $K = aK'$, and $\gcd(J', K') = 1$. Assuming we terminate the convolutional code after $L$ instants, we obtain a block code, described by the base matrix

$$
\mathbf{B}_{[0,L-1]} = \begin{bmatrix}
B_0 & & \\
\vdots & \ddots & \\
B_{m_s} & \cdots & B_0 \\
& \ddots & \vdots \\
& & B_{m_s}
\end{bmatrix}
$$

where $m_s = a - 1$ is the memory of the LDPC-CC and $B_0, B_1, \ldots, B_{m_s}$ are $J' \times K'$ submatrices that are all identical and have all entries equal to 1. Note that an LDPC-CC constructed from the protograph with base matrix $\mathbf{B}_{[0,L-1]}$ could be time-varying or not depending on the expansion of the protograph into the parity-check matrix.

The protograph of the terminated code has $LK'$ VNs and $(L + m_s)J'$ CNs. The rate of the LDPC-CC is therefore

$$
R_L = 1 - \frac{(L + m_s)J'}{LK'} = R - \frac{m_s J'}{LK'} \qquad (2)
$$

where $R = 1 - J'/K' = 1 - J/K$ is the rate of the nonterminated code. Note that $R_L < R$, with $\lim_{L \to \infty} R_L = R$, and the LDPC-CC has a regular degree distribution [9] when $L \to \infty$. We will assume that the parameters satisfy $J < K$ and $L > m_s J'/(K' - J')$ so that the rates $R$ and $R_L$ of the nonterminated and terminated codes, respectively, are in the proper range.

The classical construction was proposed in [18] and it produces protographs for some $(J,K)$-regular LDPC-CC ensembles. However, not all $(J,K)$-regular LDPC-CC can be constructed, e.g., $m_s = a - 1$ becomes zero if $J$ and $K$ are relatively prime and consequently the resulting code has no memory. In [9], the authors addressed this problem by proposing a construction rule based on edge spreading. We denote an ensemble of $(J,K)$-regular LDPC-CC constructed as described here with the subscript $c$, for "classical" construction.

2) Modified Construction: We propose a modified construction that is similar to the classical construction except that we do not require that $m_s = a - 1$, i.e., the memory of the LDPC-CC is independent of its degree distribution. We further disregard the requirement that the matrices $B_i$ are identical and have only ones, i.e., parallel edges in the protograph are allowed. However, the sizes of the submatrices $B_0, B_1, \ldots, B_{m_s}$ will still be $J' \times K'$. We will denote a $(J,K)$-regular LDPC-CC ensemble constructed in this manner with the subscript $m$, for "modified" construction. Note that the rate of the ensemble is still given by (2). Further, the independence of the code memory and the degree distribution allows us to construct LDPC-CC even when $J$ and $K$ are coprime. This is illustrated in the following example.

Example 1: Let $J$ and $K$ be relatively prime. Clearly, a classical construction of this ensemble is not possible. However, with the modified construction, we can set the memory $m_s > 0$ and define the ensemble by an appropriate choice of the submatrices $B_0, \ldots, B_{m_s}$, with design rate $R_L$ given by (2) for a termination length $L$. Note that the choice of submatrices satisfying these constraints is in general not unique.

The above example brings out the similarity between the proposed modified construction and the technique of edge spreading employed in [9], wherein the edges of the protograph of the underlying block-code ensemble are "spread" between the matrices $B_0$ and $B_1$ (or, more generally, between $B_0, B_1, \ldots, B_{m_s}$) to obtain a $(J,K)$-regular LDPC-CC ensemble with memory $m_s$. The advantage of the modified construction is thus clear: it gives us more degrees of freedom to design


the protographs in comparison with the classical construction. In particular, the ensemble specified by the classical construction is contained in the set of ensembles allowed by the modified construction, meaning that the best performing modified ensemble (with memory the same as that of the classical ensemble) is at least as good as the classical ensemble. Note that in [9], there was no indication as to how edges are to be spread between matrices. With windowed decoding, we will shortly show that different protographs (edge spreadings) have different performances. We will also identify certain design criteria for efficient modified constructions that suit windowed decoding.
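As a concrete check on the classical construction and the rate formula (2), the sketch below builds the terminated $(3,6)$ base matrix ($a = 3$, $J' = 1$, $K' = 2$, $m_s = 2$), verifies its degree structure, and lifts it by the copy-and-permute operation with random permutations. The names and the random lifting are our own illustrative choices, not code from the paper.

```python
import random

# Classical construction of the (3, 6)-regular terminated base matrix:
# a = gcd(3, 6) = 3, J' = 1, K' = 2, memory m_s = a - 1 = 2, and all
# component submatrices B_i equal to the 1 x 2 all-ones matrix.
random.seed(7)
J, K = 3, 6
a = 3
Jp, Kp, ms = J // a, K // a, a - 1       # J' = 1, K' = 2, m_s = 2
L = 20                                   # termination length

rows, cols = (L + ms) * Jp, L * Kp
B = [[0] * cols for _ in range(rows)]
for t in range(L):                       # block column t holds B_0..B_ms
    for i in range(ms + 1):
        B[(t + i) * Jp][t * Kp] = 1
        B[(t + i) * Jp][t * Kp + 1] = 1

# Degree checks: every VN has degree J; CNs away from the termination
# have degree K (the first and last m_s rows are lighter).
assert all(sum(B[r][c] for r in range(rows)) == J for c in range(cols))
assert all(sum(B[r]) == K for r in range(ms, rows - ms))

# Rate of the terminated code, eq. (2): R_L = 1 - (L + m_s)J'/(L K').
R_L = 1 - (L + ms) * Jp / (L * Kp)
assert abs(R_L - (0.5 - ms * Jp / (L * Kp))) < 1e-12
print("R_L =", R_L)                      # approaches R = 1/2 as L grows

# Copy-and-permute: replace each entry B_rc by the sum of B_rc distinct
# size-N permutation matrices (entries here are 0/1, so one permutation).
N = 4
def perm():
    idx = list(range(N)); random.shuffle(idx)
    return [[1 if j == idx[i] else 0 for j in range(N)] for i in range(N)]

H = [[0] * (cols * N) for _ in range(rows * N)]
for r in range(rows):
    for c in range(cols):
        if B[r][c]:
            P = perm()
            for i in range(N):
                for j in range(N):
                    H[r * N + i][c * N + j] = P[i][j]

# The derived graph preserves the protograph degrees.
assert all(sum(H[x][y] for x in range(rows * N)) == J
           for y in range(cols * N))
```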

C. Polynomial Representation of LDPC-CC Ensembles

We have thus far specified LDPC-CC ensembles by giving the parameter $m_s$ and the matrices $B_0, B_1, \ldots, B_{m_s}$. An alternative specification of terminated protograph-based LDPC-CC ensembles using polynomials is useful in establishing certain properties of $(J,K)$-regular ensembles and is described below.

Instead of specifying the $m_s + 1$ matrices of size $J' \times K'$, we can specify the $K'$ columns of the matrix

$$
\mathbf{B} = \begin{bmatrix} B_0 \\ B_1 \\ \vdots \\ B_{m_s} \end{bmatrix}
$$

using a polynomial of degree no more than $(m_s + 1)J' - 1$ for each column. The polynomial of the $j$th column,

$$
p_j(x) = \sum_{i=0}^{(m_s+1)J'-1} b_{i,j}\, x^i \qquad (3)
$$

is defined so that the coefficient $b_{i,j}$ of $x^i$ is the $(i+1, j)$ entry of $\mathbf{B}$ for all $i$ and $j$. Therefore, an equivalent way of specifying the LDPC-CC ensemble is by giving $m_s$ and the set of polynomials $\{p_j(x)\}$. With this notation, the $(kK' + l)$th column of $\mathbf{B}_{[0,L-1]}$ is specified by the polynomial $x^{kJ'} p_l(x)$, where $j = kK' + l$ for unique $k$ and $l \in \{1, \ldots, K'\}$. We can hence use "the column index" and "the column polynomial" interchangeably. Further, to define $(J,K)$-regular ensembles, we will need the constraints

$$
p_j(1) = J, \qquad j = 1, \ldots, K'
$$

and

$$
\sum_{j=1}^{K'} p_j^{(k)}(1) = K, \qquad k = 0, 1, \ldots, J' - 1
$$

where $p_j^{(k)}(x)$ is the polynomial of degree no larger than $m_s$ obtained from $p_j(x)$ by collecting the coefficients of terms with degrees $i$ where $i = mJ' + k$ for some integer $m$, i.e.,

$$
p_j^{(k)}(x) = \sum_{m=0}^{m_s} b_{mJ'+k,\, j}\, x^{m}. \qquad (4)
$$

We will refer to these polynomials as the modulo polynomials. Let us denote the set of polynomials defining an LDPC-CC ensemble as $\{p_j(x)\}$, $j = 1, \ldots, K'$, and the modulo polynomials as $p_j^{(k)}(x)$, $j = 1, \ldots, K'$, $k = 0, \ldots, J' - 1$. Later in the paper, we will say "the summation of polynomials $x^{kJ'} p_i(x)$ and $x^{lJ'} p_j(x)$" to mean the collection of the $(kK' + i)$th and the $(lK' + j)$th columns of $\mathbf{B}_{[0,L-1]}$. The following example illustrates the notation.

Example 2: For $(3,6)$-regular codes, we have $J' = 1$ and $K' = 2$, i.e., the component base matrices are 1 x 2 matrices. With the first column of the protograph, we associate a polynomial $p_1(x) = b_{0,1} + b_{1,1}x + \cdots + b_{m_s,1}x^{m_s}$ of degree at most $m_s$. Similarly, with the second column we associate a polynomial $p_2(x) = b_{0,2} + b_{1,2}x + \cdots + b_{m_s,2}x^{m_s}$, also of degree at most $m_s$. Then, the $(2k+1)$th column of $\mathbf{B}_{[0,L-1]}$ can be associated with the polynomial $x^k p_1(x)$, and the $(2k+2)$th column with the polynomial $x^k p_2(x)$. As noted earlier, we will use the polynomial of a column and its index interchangeably, e.g., when we say "choosing the polynomial $x^k p_1(x)$," we mean that we choose the $(2k+1)$th column of $\mathbf{B}_{[0,L-1]}$. Similarly, by "summations of polynomials $x^k p_1(x)$ and $x^l p_2(x)$," we mean the collection of the corresponding columns of $\mathbf{B}_{[0,L-1]}$. In order to define $(3,6)$-regular ensembles, we will further have the constraint $p_1^{(0)}(1) + p_2^{(0)}(1) = 6$. In this case, since $J' = 1$, this is the same as the previous constraint, because $p_j^{(0)}(x) = p_j(x)$ and $p_1(1) + p_2(1) = 6$ already follows from $p_j(1) = 3$.

We define the minimum degree of a polynomial $p(x)$ as the least exponent of $x$ with a positive coefficient and denote it as $\deg_{\min}(p)$. Clearly, $0 \le \deg_{\min}(p) \le \deg(p)$. Let us define a partial ordering of polynomials with nonnegative integer coefficients as follows. We write $p_1(x) \preceq p_2(x)$ if $\deg_{\min}(p_1) \ge \deg_{\min}(p_2)$, $\deg(p_1) \le \deg(p_2)$, and the coefficients of $p_1(x)$ are no larger than the corresponding ones of $p_2(x)$. The ordering satisfies the following properties over polynomials with nonnegative integer coefficients: if $p_1(x) \preceq p_2(x)$ and $q_1(x) \preceq q_2(x)$, then

$$
p_1(x) + q_1(x) \preceq p_2(x) + q_2(x), \qquad p_1(x)\, q_1(x) \preceq p_2(x)\, q_2(x).
$$

We define the boundary polynomial $\partial p(x)$ of a polynomial $p(x)$ to be $\partial p(x) = x^{d_0} + x^{d_1}$, where $d_0 = \deg_{\min}(p)$ and $d_1 = \deg(p)$. Note that when $d_0 = d_1$, we define $\partial p(x) = x^{d_1}$. We have, for any polynomial $p(x)$, $\partial p(x) \preceq p(x)$.

III. DECODING ALGORITHMS

LDPC-CC are characterized by a very large constraint length $\nu_s$. Since the Viterbi decoder has a complexity that scales exponentially in the constraint length, it is impractical for this kind of code. However, the sparsity of the parity-check matrix can be exploited and an iterative message passing algorithm can be adopted for decoding. We consider two specific iterative decoders here: a conventional belief-propagation decoder [6], [19] and a variant called a windowed decoder.


Fig. 1. Illustration of windowed decoding (WD) with a window of size $W$ for an LDPC-CC at the fourth decoding instant. This window configuration consists of the corresponding rows of the parity-check matrix and all the columns involved in these equations: this comprises the red (vertically hatched) and the blue (hatched) edges shown within the matrix. Note that the symbols shown in green (back-hatched) above the parity-check matrix have all been processed. The targeted symbols are shown in blue (hatched) above the parity-check matrix, and the symbols that are yet to be decoded are shown in gray above the parity-check matrix.

A. Belief-Propagation (BP)

For terminated LDPC-CC, decoding can be performed as in the case of an LDPC block code, meaning that each frame carrying a codeword obtained through the termination can be decoded with the sum-product algorithm (SPA) [19].

Note that since the BP decoder can start decoding only after the entire codeword is received, the total decoding latency is given by the sum of the time taken to receive the entire codeword and the time needed to decode the codeword. In many practical applications this latency is large and undesirable. Moreover, for nonterminated LDPC-CC, a BP decoder cannot be employed.

B. Windowed Decoding (WD)

The convolutional structure of the code imposes a constraint on the VNs connected to the same parity-check equations: two VNs of the protograph that are at least $m_s + 1$ columns apart cannot be involved in the same parity-check equation. This characteristic can be exploited in order to perform continuous decoding of the received stream through a "window" that slides along the bit sequence. Moreover, this structure allows for the possibility of parallelizing the iterations of the message passing decoder through several processors working in different regions of the Tanner graph. A pipeline decoder based on this idea was proposed in [8]. In this paper we consider a windowed decoder to decode terminated codes with reduced latency. Note that whereas a similar sliding window decoder was used to bound the performance of BP decoding in [14], we are interested in evaluating the performance of the windowed decoder from a perspective of reducing the decoding complexity and latency.

Consider a terminated $(J,K)$-regular parity-check matrix $\mathbf{H}_{[0,L-1]}$ built from a base matrix $\mathbf{B}_{[0,L-1]}$. The windowed decoder works on subprotographs of the code, and the window size $W$ is defined as the number of sets of CNs of the protograph considered within each window. In the parity-check matrix, the window thus consists of the corresponding sets of rows of $\mathbf{H}_{[0,L-1]}$ and all columns that are involved in the check equations corresponding to these rows. We will henceforth refer to the size of the window only in terms of the protograph, with the corresponding size in the parity-check matrix implied. The window size ranges between $m_s + 1$ and $L$ because each VN in the protograph is involved in at most $m_s + 1$ sets of check equations; and, although there are a total of $L + m_s$ sets of CNs in $\mathbf{B}_{[0,L-1]}$, the decoder can perform BP when all the VN symbols are received, i.e., when $W = L$. Apart from the window size, the decoder also has a (typically small) target erasure probability as a parameter.¹ The aim of the WD is to reduce the erasure probability of every symbol in the codeword to a value no larger than the target.

At the first decoding instant, the decoder performs belief-propagation over the edges within the window with the aim of decoding all of the first symbols in the window, called the targeted symbols. The window slides down one block of rows and right one block of columns in $\mathbf{H}_{[0,L-1]}$ after at least the desired fraction of the targeted symbols are recovered (or, in general, after a maximum number of belief-propagation iterations have been performed), and continues decoding at the new position at the next decoding time instant. We refer to the set of edges included in the window at any particular decoding time instant as the window configuration. In the terminated portion of the code, the window configuration will have fewer edges than other configurations within the code. Since the WD aims to recover only the targeted symbols within each window configuration, the entire codeword is recovered in $L$ decoding time instants. Fig. 1 shows a schematic representation of the WD.

The decoding latency of the targeted symbols with WD is therefore given by the sum of the time taken to receive all the symbols required to decode the targeted symbols and the time taken to decode the targeted symbols. These two quantities are related to their BP counterparts as follows:

¹We will see shortly that setting the target erasure probability to zero is not necessarily the most efficient use of the WD scheme.


since at most symbols are to be received to processthe targeted symbols. The relation between andis given by

since the complexity of BP decoding scales linearly in block-length and the WD uses BP decoding over symbols ineach window configuration. We assume that the number of iter-ations of message passing performed is fixed to be the same forthe BP decoder and the WD. Thus, in latency-limited scenarios,we can use the WD to obtain a latency reduction of

The smallest latency supported by the code-decoder system istherefore at most a fraction that of the BPdecoder. As pointed out earlier, the only choice for nontermi-nated codes is to use some sort of a windowed decoder. Forthe sequence of ensembles indexed by , with the choice ofthe proposed WD with a fixed finite window size , the de-coding latency vanishes as . We will typically be inter-ested in small values of where large gains in decoding la-tencies are achievable. Since the decoding latency increases asincreases, the trade-off between decoding performance and

latency can be studied by analyzing the performance of the WD for the entire range of window sizes.

Latency Flexibility: Although reduced latency is an important characteristic of WD, what is perhaps more useful practically is the flexibility to alter the latency with suitable changes in the code performance. The latency can be controlled by varying the window size as required. If a large latency can be handled, the window can be kept large, ensuring good code performance, and if a

small latency is required, the window can be made small while paying a price in code performance. (We will see shortly that the performance of WD is monotonic in the window size.) One possible variant of WD is a decoding scheme which

starts with the smallest possible window size and the size is increased whenever the targeted symbols cannot be decoded, i.e., the target erasure probability cannot be met within the fixed maximum number of iterations. Other schemes, where the window size is either increased or decreased based on the performance of the last few window configurations, are also possible.
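The adaptive variant above can be sketched in a few lines. Here `try_decode(t, w)` is a hypothetical stand-in for running BP on window configuration `t` with window size `w` and reporting whether the targeted symbols met the target erasure probability; the function name and interface are our assumptions, not the paper's.

```python
def adaptive_window_decode(try_decode, num_configs, w_min, w_max):
    """Decode num_configs window configurations, growing the window
    whenever the targeted symbols cannot be decoded to the target
    erasure probability. Returns the window size used per config."""
    sizes = []
    w = w_min
    for t in range(num_configs):
        # enlarge the window and retry until success or the cap is hit
        while not try_decode(t, w) and w < w_max:
            w += 1
        sizes.append(w)
    return sizes

# toy stand-in: decoding succeeds once the window is large enough
sizes = adaptive_window_decode(lambda t, w: w >= 3, 4, 2, 6)
# sizes == [3, 3, 3, 3]
```

A shrink-on-success rule could be added symmetrically for the schemes that also decrease the window size.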

IV. MEMORYLESS ERASURE CHANNELS

In this section, we confine our attention to the performance of the LDPC-CC when the transmission occurs over a memoryless erasure channel, i.e., a binary erasure channel (BEC) parameterized by the channel erasure rate.

A. Asymptotic Analysis

We consider the performance of the LDPC-CC in terms of the average performance of the codes belonging to ensembles defined by protographs in the limit of infinite blocklengths and in the limit of infinite iterations of the decoder. As in the case of LDPC block codes, the ensemble average performance is a good estimate of the performance of a code in the ensemble with high probability. We will therefore concentrate on the erasure rate thresholds [11] of the code ensembles as a performance metric in our search for good LDPC-CC ensembles.

1) BP: The asymptotic analysis of LDPC block codes with

the BP decoder over the BEC has been well studied [20]–[23]. For LDPC-CC based on protographs, the BP decoding thresholds can be numerically estimated using the Protograph-EXIT (P-EXIT) analysis [24]. This method is similar to the standard EXIT analysis in that it tracks the mutual information between the message on an edge and the bit value corresponding to the VN on which the edge is incident, while maintaining the graph structure dictated by the protograph.2

The processing at a CN of degree $d_c$ results in an updating of the mutual information on the edge $e$ as

$$I_{E,c}(e) = \prod_{e' \neq e} I_{A,v}(e') \qquad (5)$$

and the corresponding update at a VN of degree $d_v$ gives

$$I_{E,v}(e) = 1 - (1 - I_{ch}) \prod_{e' \neq e} \bigl(1 - I_{A,c}(e')\bigr) \qquad (6)$$

where $I_{ch} = 1 - \varepsilon$ is the mutual information obtained from the channel. Note that the edge multiplicities are included in the above check and variable node computations. The a posteriori mutual information at a VN is found using

$$I_{APP} = 1 - (1 - I_{ch}) \prod_{e'} \bigl(1 - I_{A,c}(e')\bigr) = 1 - \bigl(1 - I_{E,v}(e)\bigr)\bigl(1 - I_{A,c}(e)\bigr)$$

where the second equality follows from (6). The decoder is said to be successful when the a posteriori mutual information at all the VNs of the protograph converges to 1 as the number of iterations of message passing goes to infinity. The BP threshold

of the ensemble described by the protograph with a given base matrix is defined as the supremum of all erasure rates for which the decoder is successful.
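The P-EXIT recursion and the threshold definition above can be prototyped directly from a base matrix. The sketch below is a minimal numerical version under our own simplifications: a fixed iteration count and convergence tolerance stand in for the limit of infinite iterations, and each base-matrix entry is treated as that many parallel edges.

```python
def p_exit_success(B, eps, iters=500, tol=1e-6):
    """Run the P-EXIT recursion for base matrix B over a BEC(eps);
    return True if the a posteriori mutual information at every VN
    converges to 1 (within tol)."""
    m, n = len(B), len(B[0])
    # treat entry B[i][j] = t as t parallel edges between CN i and VN j
    edges = [(i, j, k) for i in range(m) for j in range(n)
             for k in range(B[i][j])]
    Ivc = {e: 0.0 for e in edges}   # VN -> CN mutual information
    Icv = {e: 0.0 for e in edges}   # CN -> VN mutual information
    Ich = 1.0 - eps                 # mutual information from the channel
    for _ in range(iters):
        for (i, j, k) in edges:     # check-node update, cf. (5)
            p = 1.0
            for e2 in edges:
                if e2[0] == i and e2 != (i, j, k):
                    p *= Ivc[e2]
            Icv[(i, j, k)] = p
        for (i, j, k) in edges:     # variable-node update, cf. (6)
            q = 1.0 - Ich
            for e2 in edges:
                if e2[1] == j and e2 != (i, j, k):
                    q *= 1.0 - Icv[e2]
            Ivc[(i, j, k)] = 1.0 - q
    for j in range(n):              # a posteriori MI at each VN
        q = 1.0 - Ich
        for e2 in edges:
            if e2[1] == j:
                q *= 1.0 - Icv[e2]
        if 1.0 - q < 1.0 - tol:
            return False
    return True

def bp_threshold(B, steps=20):
    """Bisect on the channel erasure rate to estimate the BP threshold."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if p_exit_success(B, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For the base matrix `[[3, 3]]`, which expands to the (3,6)-regular ensemble, the bisection lands near the known BEC threshold of roughly 0.43 (slightly below it, since finitely many iterations are used near the critical point).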

Example 3: The protograph has a BP threshold of approximately 0.4294. Note that all the CNs in the protograph are of degree 6 while all the VNs are of degree 3. This BP threshold is expected because the protograph corresponds to the $(3,6)$-regular LDPC block code ensemble. The following protograph

has a higher BP threshold for sufficiently large termination lengths. Note that, as before, all VNs are of degree 3 and all the CNs, except the ones in the terminated portion of the code, are of degree 6.


2We will use the phrase “mutual information on an edge” to mean the mutual information between the message on the edge and the bit corresponding to the adjacent VN.


This is the ensemble constructed in [18]. In terms of the notation introduced, this ensemble can be expressed in two equivalent forms.

The above example illustrates the strength of protographs—they allow us to choose structures within an ensemble defined by a pair of degree distributions that may perform better than the ensemble average. In fact, the BP performance of regular LDPC-CC ensembles has been related to the maximum a posteriori (MAP) decoder performance of the corresponding unstructured ensemble [10].

2) WD: We now analyze the performance of the WD described in Section III-B in the limit of infinite blocklengths and the limit of infinite iterations of belief propagation within each window.

Remark: In the limit of infinite blocklength, each term in the base protograph is replaced by a permutation matrix of infinite size to obtain the parity-check matrix, and therefore the latency of any window size is infinite, apparently defeating the purpose of WD. Our interest in the asymptotic performance, however, is justified, as it allows us to establish lower bounds on the probability of failure of the windowed decoder to recover the symbols of the finite-length code. In practice, it is to be expected that the gap between the performance of a finite-length code with WD and the asymptotic performance of the ensemble to which the code belongs increases as the window size reduces, owing to the reduction in the blocklength of the subcode defined by the window.

The asymptotic analysis for WD is very similar to that of the BP decoder, owing to the fact that the part of the code within a window is itself a protograph-based code. However, the main distinction in this case is the definition of decoding success. In the case of BP decoding, the decoding is considered a success only when, for any symbol in the codeword, the probability of failing to recover the symbol goes to 0 (or, equivalently, the a posteriori mutual information goes to 1) as the number of rounds of message passing goes to infinity. On the other hand, the decoding within a window is successful as long as the probability of failing to recover the targeted symbols becomes smaller than a predecided small value. The decoder performance therefore depends on two parameters: the window size and the target erasure probability.

We define the threshold of a window configuration to be the supremum of the channel erasure rates for which the WD succeeds in retrieving the targeted symbols of the window with the requisite probability, given that each of the targeted symbols corresponding to the previous window configurations is known with the same probability. Fig. 2 illustrates the threshold of a window configuration. The windowed threshold is then defined as the supremum of channel erasure rates for which the windowed decoder can decode each symbol in the codeword with the requisite probability.

We assume that between decoding time instants, no information apart from the targeted symbols is carried forward, i.e., when a particular window configuration has been decoded, all the present processing information apart from the decoded targeted symbols themselves is discarded. With this assumption,

Fig. 2. Illustration of the threshold of a window configuration. The targeted symbols of the previous window configurations are known with the requisite probability. The targeted symbols within the window are highlighted with a solid blue bar on top of the window. The symbols within the blue (hatched) region in the window are initially known with the requisite probability. The task of the decoder is to perform BP within this window until the erasure probability of the targeted symbols is smaller than the target. The window is then slid to the next configuration.

it is clear that the windowed threshold of a protograph-based LDPC-CC ensemble is given by the minimum of the thresholds of its window configurations. For the classical and modified constructions of LDPC-CC described in Section II-B, all window configurations are similar except the ones at the terminated portion of the code. Since the window configurations at the terminated portions can only perform better, the windowed threshold is determined by the threshold of a window configuration not in the terminated portion of the code. Note that the performance of WD when the information from processing the previous window configurations is made use of in successive window configurations, e.g., when symbols other than the targeted symbols that were decoded previously are also retained, can only be better than what we obtain here.

We now state a monotonicity property of the WD, the proof of which is relegated to Appendix I.

Proposition 1 (Monotonicity of WD Performance in the Window Size): For any ensemble, the threshold of a window configuration is nondecreasing in the window size.

It follows immediately from the definition of the windowed threshold that the same monotonicity holds for the windowed threshold itself.

Furthermore, from the continuity of the density evolution equations (6) and (5), we have that in the limit we decode not only the targeted symbols within the window but all the remaining symbols also. Since the symbols in the right end of the window are the “worst protected” ones within the window (in the sense that these are the symbols for which the least number of constraints are used to decode), we expect the windowed thresholds to be dictated mostly by the behavior of the submatrix within the window under BP. In the following, when the base matrix of the protograph corresponding to an ensemble is unambiguous, we will write the two notations for the windowed threshold

interchangeably.

We next turn to some properties of LDPC-CC ensembles with good performance under WD. We start with an example that illustrates the stark change in performance a small difference in the structure of the protograph can produce.


Example 4: Consider WD with the ensemble in Example 3 with a window of size 3. The corresponding protograph defining the first window configuration is

and we find that the threshold of this configuration is 0. This is seen readily by observing that there are VNs of degree 1 that are connected to the same CNs. In fact, from this reasoning, we see that the windowed threshold vanishes for this ensemble.

As an alternative, we consider the modified construction of

Section II-B2 to obtain a modified ensemble. This ensemble has a BP threshold

for large termination lengths which is quite close to that of the classical ensemble. WD with a window of size 3 for this ensemble has the first window configuration

which has a nonzero threshold, i.e., we can theoretically get close to 68.3% of the BP threshold with a small fraction of the latency of the BP decoder. Note that this improvement in threshold has been obtained while also increasing the rate of the ensemble relative to the classical construction.

The above example illustrates the tremendous advantage obtained by using the modified ensembles for WD even under the severe requirement of a zero target erasure probability. The following is a good rule of thumb for constructing LDPC-CC ensembles that have good performance with WD.

Design Rule 1: For such ensembles, choose the polynomials so that no degree-1 VNs arise within a window.

The above design rule says that it is better to avoid degree-1 VNs within a window. Note that none of the classical ensembles satisfy this design rule. We now illustrate the performance of LDPC-CC ensembles with WD when we allow a nonzero target erasure probability.

Example 5: We compare three LDPC-CC ensembles. The first is the classical LDPC-CC ensemble. The second and the third are LDPC-CC ensembles constructed as described in Section II-B2, each defined by its own set of polynomials.

We first observe that all three ensembles have the same asymptotic degree distribution, i.e., all are $(3,6)$-regular LDPC-CC ensembles in the limit. Two of the ensembles share the same memory, while the third has a smaller memory. Therefore, for a fixed termination length, the first two have the same rate while the third has a higher rate. Another

Fig. 3. Windowed threshold as a function of the window size for the ensembles of Example 5. The rates of two of the ensembles are 0.49 whereas that of the third is 0.495; the corresponding Shannon limits are therefore 0.51 and 0.505, respectively. The thresholds for the two values of the target erasure probability almost coincide for two of the ensembles.

consequence of a smaller memory is that the third ensemble can be decoded with a smaller window. Further note that whereas two of the ensembles satisfy Design Rule 1, the remaining one does not. For a window of size 3, the subprotographs for two of the ensembles are as shown in Example 4, and that for the remaining ensemble is as shown below

In Fig. 3, we show the windowed thresholds plotted against the window size for the three ensembles, for two fixed values of the target erasure probability.

A few observations are in order. The monotonicity of the windowed threshold in the window size, as proven in Proposition 1, is evident. The windowed thresholds are fairly close to the maximum windowed threshold even for small window sizes. The windowed thresholds are robust to changes in the target erasure probability, i.e., the thresholds are almost the same (the points overlap in the figure) for the two values considered. Further, the windowed thresholds are fairly close to the BP thresholds for large window sizes. We will see next that this last observation is not always true.

Effect of Termination: The better BP performance of the convolutional ensemble in comparison with that of the regular block code ensemble (cf. Example 3) is because of the termination of the parity-check matrix of the convolutional codes. More precisely, the low-degree CNs at the terminated portion of the protograph are more robust to erasures, and their erasure-correcting power is cascaded through the rest of the protograph to give a better threshold for the convolutional ensemble in comparison with that for the corresponding unstructured ensemble [16]. From the definition of the WD, we can see that the subprotograph within a window does not have the lower-degree checks if previous targeted symbols are not decoded. Therefore, we would expect a deterioration in the performance.


TABLE I. WD thresholds for the ensembles of Example 6.

TABLE II. WD thresholds for the ensembles of Example 6.

Furthermore, Design Rule 1 increases the degrees of the CNs in the terminated portion. Therefore, the effect of different terminations on the WD performance is of interest.

Example 6: Tables I and II illustrate the WD thresholds for ensembles that satisfy Design Rule 1 except in the cases indicated. These ensembles are defined by the polynomials

The ensembles are terminated so that the rate is fixed. The worst threshold with WD (corresponding to the least window size), the largest threshold with WD, and the BP threshold are denoted in the tables. The increase in the gap between the WD thresholds and the BP threshold with increasing edge multiplicity illustrates the loss due to edge multiplicities (“weaker” termination). This is because the terminations at the beginning and at the end of the code are different, i.e., the CN degrees in the terminated portion at the beginning of the code increase with the edge multiplicity, whereas those at the end of the code are 2, a constant. Thus, much of the code performance is determined by the “stronger” (smaller check-degree) termination, the one at the end of the code. This is also seen from the fact that the gap shrinks only slowly with the window size, meaning that the termination at the beginning of the code is weak and increasing the window size helps little. Note that one of the ensembles in Table II is in fact the classical ensemble, and that its largest WD threshold is larger than the corresponding BP threshold. This is possible since WD only demands that the erasure probability of the targeted symbols is reduced to the target value. In contrast, BP demands that the erasure probability of all the symbols is reduced to 0.

From the above discussion, we can infer the following as another design rule.

Design Rule 2: For such ensembles, keep the termination at the beginning of the code strong, preferably stronger

TABLE III. Windowed thresholds.

than the one at the end of the code. That is, use polynomials such that each of the check-degree sums in the initial termination is kept as small as possible.

Targeted Symbols: We have thus far considered only the first VNs in the subprotograph contained within the window to be the targeted symbols. However, as an alternative way to trade off performance for reduced latency, it is possible to consider other VNs also as targeted symbols. In this case, the window would be shifted beyond all the targeted symbols after processing each window configuration. For a window of a given size, we denote accordingly the windowed threshold when the targeted symbols are the first several VNs of the window.

Example 7: Consider two ensembles defined by their respective sets of polynomials, together with two further ensembles constructed similarly but with memory 1 and 2, respectively. Table III gives the windowed thresholds with varying numbers of targeted symbols for a window of size 4.

One might expect the windowed threshold to be higher for an ensemble whose BP threshold is higher. This is not quite right, as the thresholds in Table III show. This can again be explained as the effect of stronger termination in one ensemble in comparison with the other. This is also evident in the larger thresholds for the ensemble with the same memory but stronger termination. Also, keeping the same termination and increasing the memory improves the performance, as is exemplified by the larger thresholds of the higher-memory ensemble.

The windowed thresholds quantify the unequal erasure protection of different VNs in the subprotograph within the window. Furthermore, it is clear that for good performance, it is advantageous to keep fewer targeted symbols within a window.

B. Finite-Length Performance Evaluation

The finite-length performance of LDPC codes under iterative message-passing decoding over the BEC is dependent on the number and the size of stopping sets present in the parity-check matrix of the code [23], [25]. Thus, the performance of the


Fig. 4. SER performance for BP and Windowed Decoding over BEC.

codes varies based on the parity-check matrix used to represent the code and, consequently, the performance of iterative decoding can be made to approach that of ML decoding by adding redundant rows to the parity-check matrix (see, e.g., [26]). However, since we are exploiting the structure of the parity-check matrix of the convolutional code, we will not be interested in changing the parity-check matrix by adding redundant rows, as this destroys the convolutional structure. The ensemble stopping set size distribution for some protograph-based LDPC codes was evaluated in [27], where it was shown that a minimum stopping set size that grows linearly in blocklength is important for the good performance of codes with short blocklengths. This analysis is similar to the analysis of the minimum distance growth rate of LDPC-CC ensembles—see [28] and references therein. It is worthwhile to note that although the minimum stopping set size grows linearly for protograph codes expanded using random permutation matrices, the same is not true for codes expanded using circulant permutation matrices [29]. In the following, we will evaluate the finite-length performance of codes constructed from the ensembles of Example 5 with BP and WD through Monte Carlo simulations. WD was considered with only the first symbols of each window as the targeted symbols.

In Figs. 4 and 5, the symbol error rate (SER) and the codeword error rate (CER) performance are depicted for two codes from the ensembles defined in Example 5. The codes used were those constructed by Liva [12] by expanding the protographs using circulant matrices (and sums of circulant matrices) and techniques of progressive edge growth (PEG) [30] and approximate cycle extrinsic message degree (ACE) [31] to avoid small cycles in the Tanner graphs of the codes. The girth of both codes was 12. The construction parameters were chosen so that the two codes have comparable blocklengths. The BP thresholds for the two ensembles were 0.4883 and 0.4882, respectively. As is clear from Figs. 4 and 5, the code satisfying the proposed design rules outperforms the other for small window sizes, confirming the effectiveness of the design rules for windowed decoding. For larger window sizes, there is no marked difference in the performance

Fig. 5. CER performance for BP and Windowed Decoding over BEC. Also shown is the (Singleton) lower bound, labeled SB.

of the two codes. It was also observed that for small lifting factors, the performance of codes constructed through circulant permutation matrices was better than that of codes constructed through random permutation matrices. This difference in performance diminished for larger lifting factors.

We include in Fig. 5, for comparison, a lower bound on the CER. The Singleton bound represents the performance achievable by an idealized binary MDS code. This bound for the BEC can be expressed as

$$P_{SB} = \sum_{e=n-k+1}^{n} \binom{n}{e} \varepsilon^{e} (1-\varepsilon)^{n-e}$$

Note that by the idealized binary MDS code, we mean a binary linear code that achieves the Singleton bound with equality. This code does not exist for all values of $n$ and $k$.
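The Singleton lower bound is a binomial tail: an idealized $(n,k)$ MDS code fails exactly when more than $n-k$ of the $n$ code symbols are erased. A minimal sketch (the function name is ours):

```python
from math import comb

def singleton_bound_cer(n, k, eps):
    """Lower bound on the codeword erasure rate over a BEC(eps):
    an idealized (n, k) MDS code fails iff more than n - k of the
    n code symbols are erased."""
    return sum(comb(n, e) * eps**e * (1 - eps)**(n - e)
               for e in range(n - k + 1, n + 1))
```

For example, a hypothetical (3, 1) MDS code at erasure rate 0.5 fails only when all three symbols are erased, giving a bound of 0.125.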

V. ERASURE CHANNELS WITH MEMORY

We now consider the performance of LDPC-CC ensembles and codes over erasure channels with memory. We consider the familiar two-state Gilbert-Elliott channel (GEC) [32], [33] as a model of an erasure channel with memory. In this model, the channel is either in a “good” state $G$, where we assume the erasure probability is 0, or in an “erasure” state $E$, in which the erasure probability is 1. The state process of the channel is a first-order Markov process with the transition probabilities $b = \Pr(G \to E)$ and $g = \Pr(E \to G)$. With these parameters, we can easily deduce [34] that the average erasure rate $\varepsilon$ and the average burst length $L_B$ are given by

$$\varepsilon = \frac{b}{b+g}, \qquad L_B = \frac{1}{g}$$

We will consider the GEC to be parameterized by the pair $(\varepsilon, L_B)$. Note that there is a one-to-one correspondence between the two pairs $(b, g)$ and $(\varepsilon, L_B)$.
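The erasure-rate/burst-length parameterization inverts to $g = 1/L_B$ and $b = g\varepsilon/(1-\varepsilon)$, which makes the channel straightforward to simulate. A sketch under these standard relations (the function name and seeding are our choices):

```python
import random

def gec_erasures(n, eps, avg_burst, seed=0):
    """Generate n uses of a two-state Gilbert-Elliott erasure channel
    (1 = erased, 0 = received). The pair (eps, avg_burst) is mapped to
    transition probabilities g = P(E->G) = 1/avg_burst and
    b = P(G->E) = g * eps / (1 - eps)."""
    rng = random.Random(seed)
    g = 1.0 / avg_burst
    b = g * eps / (1.0 - eps)
    state = 1 if rng.random() < eps else 0  # draw start from stationarity
    out = []
    for _ in range(n):
        out.append(state)
        if state == 0:                       # good state: may enter a burst
            state = 1 if rng.random() < b else 0
        else:                                # erasure state: may recover
            state = 0 if rng.random() < g else 1
    return out
```

Over a long run the empirical erasure rate concentrates around `eps` and the mean burst length around `avg_burst`, consistent with the stationary-distribution relations above.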


Discussion: The channel capacity of a correlated binary erasure channel with an average erasure rate of $\varepsilon$ is given as $1-\varepsilon$, which is the same as that of the memoryless channel, provided the channel is ergodic. Therefore, one can obtain good performance on a correlated erasure channel through the use of a capacity-achieving code for the memoryless channel with an interleaver to randomize the erasures [27], [35]. This is equivalent to permuting the columns of the parity-check matrix of the original code. We are not interested in this approach since such permutations destroy the convolutional structure of the code and, as a result, we are unable to use the WD for such a scheme.

Construction of LDPC block codes for bursty erasure channels has been well studied. The performance metric of a code over a bursty erasure channel is related to the maximum resolvable erasure burst length (MBL) [35], which, as the name suggests, is the maximal length of a single solid erasure burst that can be decoded by a BP decoder. Methods of optimizing codes for such channels therefore focus on permuting columns of parity-check matrices to maximize the MBL, e.g., [36]–[41]. Instead of permuting columns of the parity-check matrix, in order to maintain the convolutional structure of the code, we will consider designing ensembles that maximize the MBL.

A. Asymptotic Analysis

1) BP: As noted earlier, the performance of LDPC-CC ensembles depends on stopping sets. The structure of protographs imposes constraints on the code that limit the stopping set sizes and locations, as will be shown shortly.

Let us define a protograph stopping set to be a subset of the VNs of the protograph whose neighboring CNs are each connected at least twice to the subset. These can also be described in terms of the set of polynomials defining the protograph. We define the size of the stopping set as the cardinality of the subset, and we call the least number of consecutive columns of the base matrix that contain the stopping set the span of the stopping set. Let us further denote the size of the smallest protograph stopping set, and the minimum number of consecutive columns of the protograph that contain a protograph stopping set, i.e., the minimal span. When the protograph under consideration is clear from the context, we will drop it from the notation. The minimal span of a stopping set is of interest because we can give simple bounds for the MBL based on it. Note that the stopping set of minimal size and the stopping set of minimal span are not necessarily the same set of VNs. However, the minimal span is always at least the minimal size.

The following example clarifies the notation.

Example 8: Consider the base matrices corresponding to the protographs of the ensembles of Example 5. For the first two ensembles, the first two columns of the base matrix form a protograph stopping set. This is clear from the highlighted columns below


Therefore, this stopping set has size 2 and span 2. Since no single column forms a protograph stopping set, the minimal size and the minimal span are both exactly 2.

For the third ensemble, the highlighted columns of the base matrix in the following matrix form a protograph stopping set.


Thus, this stopping set has span 4. As no single column of the base matrix is a protograph stopping set and no three consecutive columns of the base matrix contain a protograph stopping set, it is clear that the minimal span equals 4.

In these cases, it so happened that the stopping set with the minimal size and the stopping set with the minimal span were the same.

Our aim in the following will be to obtain bounds for the maximal minimal span over ensembles with a given memory, and to design protographs that achieve minimal spans close to this optimal value.

The analysis of the minimal span of stopping sets for unstructured LDPC ensembles was performed in [42]. However, the structure of the protograph-based LDPC-CC allows us to obtain the maximal minimal span much more easily for some ensembles.

We start by observing that if one of the VNs in the protograph is connected multiple times to all its neighboring CNs, then it forms a protograph stopping set by itself. In order to obtain a larger minimum span of stopping sets, it is desirable to avoid this case, and we include this as one of our design criteria.

Design Rule 3: For such an ensemble, choose the polynomials such that every VN has at least one neighboring CN to which it is connected exactly once.


Using the polynomial representation of LDPC-CC ensembles is helpful in this case since we can easily track stopping sets as those subsets that have polynomials whose nonzero coefficients are all larger than 1. From this fact, we can prove the following.

Proposition 2 (Upper Bound on the Minimal Span): For protographs of a given memory defined by their polynomials, the minimal span can be upper bounded in terms of the memory and the polynomial coefficients.

We give the proof in Appendix II. We see from the above that a necessary condition for achieving the largest such span is the first of the four possible cases listed in the bound, which we include as another design criterion.

Design Rule 4: For ensembles with a given memory, choose the polynomials to satisfy the first case of the bound in Proposition 2.

Corollary 3 (Optimal Protographs): For protographs of a given memory satisfying the above conditions, the upper bound of Proposition 2 is achieved.

The proof is given in Appendix III. Note that one ensemble in Example 5 achieves the optimal minimal span, as was observed in Example 8. It also satisfies Design Rules 1, 3, and 4. We bring to the reader's attention here that constructions other than the one given in the proof of the above corollary that achieve the optimal minimal span are also possible. These constructions allow us to design ensembles for a wide range of required minimal spans. We quickly see that a drawback of the convolutional structure is that if the memory is increased to obtain a larger minimal span, the code rate decreases linearly for a fixed termination length. We give without proof the following upper bound for

more general composite ensembles as well, since it follows from Proposition 2.

Proposition 4 (Upper Bound for Composite Protographs): For protographs defined by several sets of polynomials, we have

where the quantity on the right is the upper bound given in (7) at the bottom of the page for the minimal span of stopping sets confined within the columns corresponding to pairs of the defining polynomials, with the notation defined analogously to that of Proposition 2.

Discussion: By looking at the stopping sets confined within columns corresponding to two polynomials only, we can use Proposition 2 to upper bound the span of these stopping sets. The minimal such span over all possible choices of the two columns therefore gives an upper bound on the minimal span of the protograph. From (7), this yields a bound similar to the result in Proposition 2. This bound is, however, loose in general. For terminated codes, we can give an upper bound for the minimal span that is tighter in some cases.

Corollary 5 (Span Bound for Terminated Protographs): For protographs terminated after a finite number of instants, the minimal span admits an upper bound that follows from the Singleton bound.

Proof: The claim follows from the Singleton bound for the protograph, together with the condition required for a positive code rate in (2).

Note that for some protographs, this is tighter than the earlier bound, which, in the worst case, is larger by a constant factor. However, since we are interested mainly in ensembles with large termination lengths, this bound might be looser than the one in Proposition 4 for such ensembles.

Example 9: Consider the ensemble of a given memory defined by its polynomials. It can be shown, by an argument similar to the one used to prove Corollary 3, that the minimal span of the protograph of this ensemble meets the bound in Proposition 4 exactly. Thus, in this case the minimal span is roughly only a fraction of the (loose) upper bound suggested in the discussion of Proposition 4.

The constructed protographs are thus optimal in the sense of maximizing the minimal span of stopping sets.

(7)


They also satisfy Design Rules 1 and 3. Although Proposition 4 gave a tight bound for the minimal span in this case, it is loose in general.

We can show that protographs built from several sets of polynomials have minimal spans at least as large as the corresponding spans of their component protographs.

Proposition 6: The minimal span of the composite ensemble is at least the minimal span of its component ensembles.

Proof: The equality is trivial in the degenerate case. Otherwise, one way of constructing the composite ensembles with a given memory is to let each set of modulo polynomials themselves define component ensembles of smaller memory. The result then follows by noting that a stopping set for the composite polynomials has to be a stopping set for every set of component polynomials.

The construction proposed above often allows us to strictly increase the minimal span of the composite ensemble in comparison with the component ensemble, as illustrated by the following example.

Example 10: Consider the construction of an ensemble with memory 3. Since the component protographs of Example 9 achieve the optimal minimal span, we define the modulo polynomials to be the optimal construction that achieves this minimal span. Then, by a suitable choice of the composite polynomials, we can show that the minimal span strictly increases.

Note that the protograph so defined has no degree-1 VNs associated with the first component matrix. In fact, the constructed ensemble has a windowed threshold fairly close to the Shannon limit, even with the smallest possible window size. Table IV lists the windowed thresholds of this ensemble with different numbers of targeted symbols within the smallest window.

2) WD: The asymptotic analysis for WD is essentially the same as that for BP. We will consider WD with only the first symbols within each window as the targeted symbols. We are now interested in the subprotograph stopping sets that include one or more of the targeted symbols within a window, and in the minimal span of such stopping sets. Since stopping sets of the protograph of the LDPC-CC are also stopping sets of the subprotograph within a window, and since such stopping sets can be chosen to include some targeted symbols within the window, the minimal span within a window is at most the minimal span of the full protograph. In fact, equality holds for sufficiently large windows,

TABLE IV

since in this case the first columns are completelycontained in the window. Further, we have

This is true because a stopping set for window size involving targeted symbols is not necessarily a stopping set for window size , whereas a stopping set for window size is definitely a stopping set for window size .
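Operationally, the windowed decoder studied here can be sketched as an erasure peeling decoder restricted to a sliding window. The sketch below is ours, under assumed conventions: `c` columns per time instant, windows of `W` column blocks, and only checks whose support lies entirely inside the current window are usable; the toy matrix is purely illustrative, not one of the paper's constructions.

```python
import numpy as np

def windowed_peel(H, erased, W, c):
    """Sliding-window peeling sketch. H is the parity-check matrix of a
    terminated LDPC-CC with c columns per time instant (assumed block
    structure). Each window spans W column blocks; only checks whose
    support lies entirely inside the current window are used, and the
    window then slides by one block. Returns the unresolved erasures."""
    erased = set(erased)
    n_blocks = H.shape[1] // c
    for t in range(n_blocks):
        lo, hi = t * c, min((t + W) * c, H.shape[1])
        rows = [r for r in H
                if r.any() and lo <= np.flatnonzero(r).min()
                and np.flatnonzero(r).max() < hi]
        changed = True
        while changed:
            changed = False
            for r in rows:
                touched = [j for j in np.flatnonzero(r) if j in erased]
                if len(touched) == 1:      # check determines one unknown
                    erased.discard(touched[0])
                    changed = True
    return erased

# Hypothetical memory-1 band matrix with c = 1 column per time instant.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(windowed_peel(H, [1, 2], W=2, c=1))        # → set()
print(windowed_peel(H, [0, 1, 2, 3], W=2, c=1))  # → {0, 1, 2, 3} (stopping set)
```

A nonempty return value is exactly a stopping set of the subprotographs seen by the windows, which is why the spans of such sets govern WD performance.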

Remark: When the first symbols within a window are the targeted symbols, we have for

where denotes the minimal span of stopping sets of the subprotograph within the window of size involving at least one of the targeted symbols, and . Consequently, we have

The definition of can be extended to accommodate , as in the case of windowed thresholds. In particular, we have , where the last inequality is from the Singleton bound.

Example 11: Consider the ensemble defined in Example 5. With a window of size , we have , with the corresponding stopping set highlighted below

and with a window size , we have , and the corresponding stopping sets and are as follows

Note that for window size 3, whereas the minimal span of a stopping set involving VN is 2, that of a stopping set involving is 4. However, for window size 4, the stopping set involving with minimal span, denoted , and that involving , , each have a span of 4, although their cardinalities are 2 and 3, respectively. We have in this case . Notice that .
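For small matrices, the minimal span of a stopping set can be found by brute force. The generic sketch below is ours (exponential in the subset size, so for illustration only); it works on any binary parity-check matrix, and the band matrix shown is hypothetical.

```python
from itertools import combinations

import numpy as np

def min_stopping_span(H, max_size=4):
    """Brute-force minimal span over all stopping sets of size <= max_size.
    A stopping set is a nonempty column subset in which no row of H has
    exactly one nonzero entry; its span counts the consecutive columns
    from the first to the last column involved."""
    best = None
    for k in range(1, max_size + 1):
        for cols in combinations(range(H.shape[1]), k):
            sub = H[:, list(cols)]
            if all(row.sum() != 1 for row in sub):  # no check sees exactly one
                span = cols[-1] - cols[0] + 1
                best = span if best is None else min(best, span)
    return best

# Hypothetical band matrix; its only stopping set of size <= 4 uses all columns.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(min_stopping_span(H))  # → 4
```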

B. Finite-Length Analysis

1) BP: We now show the relation between the parameters and . We shall assume in the following that , i.e., every column of the protograph has at least one of the entries equal to 1. We will consider the expansion of the protographs by a factor to obtain codes.

Proposition 7: For any regular LDPC-CC,.

Proof: Clearly, the set of the columns of the parity-check matrix corresponding to the consecutive columns of that contain the protograph stopping set with minimal span must contain a stopping set of the parity-check matrix. Therefore, if all symbols corresponding to these columns are erased, they cannot be retrieved.

Corollary 8: A terminated LDPC-CC with can never achieve the MBL of an MDS code.

Proof: From the Singleton bound, we have , assuming that the parity-check matrix is full-rank. From Proposition 7 we have

where the second equality follows from the discussion in Example 9. Since we require for a nonnegative code rate in (2),

which shows that the MBL of an MDS code can never be achieved.

Remark: Although the idealized binary MDS code does not exist, there are codes that achieve MDS performance when used over a channel that introduces a single burst of erasures in a codeword. For example, the code with a parity-check matrix has an MBL of . Despite the discouraging result from Corollary 8, we can guarantee an MBL that linearly increases with as follows.

Proposition 9: For any regular LDPC-CC,.

Proof: From the definition of , it is clear that if one of the two extreme columns is completely known, all other symbols can be recovered, for otherwise the remaining columns within the span of the stopping set would have to contain another protograph stopping set, violating the minimality of the stopping set span. (The two extreme columns are pivots of the stopping set [41].) The largest solid burst that is guaranteed to have at least one of the extreme columns completely known is of length . Therefore, .
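The recovery argument in the proof, peeling outward from a known extreme column (pivot), can be illustrated with a generic erasure peeling decoder. This is a standard BP-over-erasures sketch; the toy matrix is ours, not one of the paper's constructions.

```python
import numpy as np

def peel(H, erased):
    """Erasure peeling: repeatedly solve any check (row of H) with exactly
    one erased neighbor. Returns the erasures left unresolved; an empty
    set means full recovery, a nonempty one is a stopping set."""
    erased = set(erased)
    changed = True
    while changed and erased:
        changed = False
        for row in H:
            touched = [j for j in np.flatnonzero(row) if j in erased]
            if len(touched) == 1:
                erased.discard(touched[0])
                changed = True
    return erased

# Hypothetical toy matrix. The burst {1, 2} leaves the extreme column 0
# known, so peeling proceeds from that pivot and recovers everything;
# erasing all columns hits a stopping set and peeling stalls.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(peel(H, [1, 2]))        # → set()
print(peel(H, [0, 1, 2, 3]))  # → {0, 1, 2, 3}
```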

Example 12: For the ensemble with memory in Example 9, we have

from Propositions 7 and 9. Thus, we can construct codes with MBL proportional to .

2) WD: The MBL for WD can be bounded as in the case of BP, based on . Assuming that the window size is , the targeted symbols are the first symbols within the window, and the polynomials defining the ensemble are chosen to satisfy Design Rule 3, we have . Propositions 7 and 9 in this case imply that

C. Numerical Results

The MBL for codes and (the same codes used in Section IV-B) was computed using an exhaustive search algorithm, by feeding the decoder with a solid burst of erasures and testing all possible locations of the burst. The MBL for the codes we considered was 1023 and 1751 for codes and , respectively. Note that for code , the MBL , i.e., code achieves the upper bound from Proposition 7. More importantly, the maximum possible was achievable while maintaining good performance over the BEC with the BP decoder. However, the MBL for code , , is much smaller than the corresponding bound from Proposition 7. In this case, although other code constructions with up to 2045 were possible, a trade-off between the BEC performance and MBL was observed, i.e., the code that achieved was found to be much worse over the BEC than both codes and considered here. Such a trade-off has also been observed by others, e.g., [40]. This could be because the codes that achieve large are often those that have a very regular structure in their parity-check matrices. Nevertheless, our code design does give a large increase in MBL ( ) when compared with the corresponding codes constructed from ensembles, without any decrease in code rate (same ). The MBL achieved as a fraction of the maximum possible MBL was roughly 9.1% and 15.5% for codes and , respectively.

In Figs. 6–8, we show the CER performance obtained for codes and over GEC channels with , 50, and 100, respectively, and . As can be seen from the figures, for , code always outperforms code , while for there is no such gain when . However, for and for BP decoding, code slightly outperforms . Note that the code outperforms for small when the average burst length for large window sizes and for BP decoding. This can be explained because in this regime, the probability of a burst is small but the average burst length is large. Therefore, when a burst occurs, it is likely to resemble a single burst in a codeword, and in this case we know that the code is stronger than . Also note the significant gap between the BP decoder performance and the Singleton bound, suggesting that unlike some moderate-length LDPC block codes with ML decoding [38], LDPC-CC are far from achieving MDS performance with BP or windowed decoding.

Fig. 6. CER performance on GEC with with Singleton bound (SB).

Fig. 7. CER performance on GEC with with Singleton bound (SB).

Fig. 8. CER performance on GEC with with Singleton bound (SB).
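The exhaustive MBL search described above (a solid burst of every length at every position, fed to the decoder) can be sketched generically as follows; `peel` is a plain erasure peeling decoder and the toy matrix is ours, not one of the paper's codes.

```python
import numpy as np

def peel(H, erased):
    """Plain erasure peeling decoder: solve checks with one unknown."""
    erased = set(erased)
    changed = True
    while changed and erased:
        changed = False
        for row in H:
            touched = [j for j in np.flatnonzero(row) if j in erased]
            if len(touched) == 1:
                erased.discard(touched[0])
                changed = True
    return erased

def max_burst_length(H):
    """Exhaustive MBL search: the largest L such that a solid burst of L
    erasures is fully recovered at every possible position."""
    n = H.shape[1]
    L = 0
    while L < n:
        if all(not peel(H, range(s, s + L + 1)) for s in range(n - L)):
            L += 1
        else:
            break
    return L

# Hypothetical toy matrix; real searches run over the full code length.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(max_burst_length(H))  # → 3
```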
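The channels in Figs. 6–8 are two-state Gilbert-Elliott erasure channels. A minimal simulator sketch is given below; the parameter names (`p_gb`, `p_bg`, `eps_g`, `eps_b`) are ours, and the paper's exact parameterization is not reproduced here.

```python
import random

def gec_erasures(n, p_gb, p_bg, eps_g=0.0, eps_b=1.0, seed=0):
    """Two-state Gilbert-Elliott erasure channel sketch: erase a symbol
    with probability eps_g in the Good state and eps_b in the Bad state;
    p_gb and p_bg are the Good->Bad and Bad->Good transition
    probabilities, so the average Bad-state sojourn (the average burst
    length when eps_b = 1) is roughly 1/p_bg."""
    rng = random.Random(seed)
    state, erasures = 'G', []
    for _ in range(n):
        erasures.append(rng.random() < (eps_g if state == 'G' else eps_b))
        if state == 'G':
            if rng.random() < p_gb:
                state = 'B'
        elif rng.random() < p_bg:
            state = 'G'
    return erasures

# Average burst length ~10; stationary erasure rate ~ p_gb / (p_gb + p_bg).
pattern = gec_erasures(20000, p_gb=0.01, p_bg=0.1, seed=1)
print(sum(pattern) / len(pattern))
```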

Fig. 9. Subprotographs of window sizes and . The edges connected to targeted symbols from previous window configurations are shown in a darker shade of gray.

VI. CONCLUSIONS

We studied the performance of a windowed decoding scheme for LDPC convolutional codes over erasure channels. We showed that this scheme, when used to decode terminated LDPC-CC, provides an efficient way to trade off decoding performance for reduced latency. Through asymptotic performance analysis, several design rules were suggested to avoid bad structures within protographs and, in turn, to ensure good thresholds. For erasure channels with memory, the asymptotic performance analysis led to design rules for protographs that ensure large stopping set spans. Examples of LDPC-CC ensembles that satisfy design rules for the BEC as well as erasure channels with memory were provided. Finite-length codes belonging to the constructed ensembles were simulated, and the validity of the design rules as markers of good performance was verified. The windowed decoding scheme can be used to decode LDPC-CC over other channels that introduce errors and erasures, although in this case error propagation due to wrong decoding within a window will have to be carefully dealt with.

For erasure channels, while close-to-optimal performance (in the sense of approaching capacity) was achievable for the BEC, we showed that the structure of LDPC-CC imposed constraints that bounded the performance over erasure channels with memory strictly away from the optimal performance (in the sense of approaching MDS performance). Nevertheless, the simple structure and good performance of these codes, as well as the latency flexibility and low complexity of the decoding algorithm, are attractive characteristics for practical systems.

APPENDIX I
PROOF OF PROPOSITION 1

Consider the window configuration for window sizes and shown in Fig. 9. We are interested in a window configuration that is not at the terminated portion of the code. Call the Tanner graphs of these windows and , respectively, where and are the sets of VNs and CNs, respectively, and are the sets of edges. Clearly, , and . Any VN in that is connected to some variable in has to be connected via some CN in . The edges between these CNs and VNs in are shown hatched in Fig. 9. Consider the computation trees for the a posteriori message at a targeted symbol in and that for the same symbol in . Call them and , respectively. Then we have .


We now state two lemmas that will be used subsequently. The proofs of these lemmas are straightforward and are omitted.

Lemma 10 (Monotonicity of ): The CN operation in (5) is monotonic in its arguments, i.e.,

where the two-argument function .

Lemma 11 (Monotonicity of ): The VN operation in (6) is monotonic in its arguments, i.e.,

where . The operational significance of the above lemmas is the following: if we can upper (lower) bound the mutual information on some incoming edge of a CN or a VN, and use the bound to compute the outgoing mutual information from that node, we get an upper (lower) bound on the actual outgoing mutual information. Thus, by bounding the mutual information on some edges of a computation tree and repeatedly applying Lemmas 10 and 11, one can obtain bounds on the a posteriori mutual information at the root of the tree.

We start by augmenting , creating another computation tree that has the same structure as . In particular, includes the additional edges corresponding to the hatched region. In and , we denote the set of these edges by and , respectively. In , we assign zero mutual information to each edge in .

Now, let , and be the a posteriori mutual information at the roots of the trees , and , respectively. Then it is clear that , since the messages on edges in are effectively erasures and zero out the contributions from the checks in .

On the other hand, if we denote by the mutual information associated with an edge , and by the mutual information associated with the corresponding edge in , we know that , so that . Hence, we have from Lemmas 10 and 11 that . Since , it follows that , as desired.
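The monotonicity that Lemmas 10 and 11 assert can be made concrete for the BEC using the standard density-evolution updates in erasure-probability form. This formulation is ours (the lemmas in the text are stated for mutual information, which for the BEC is equivalent up to the orientation of the ordering): increasing any input never decreases the output of either update.

```python
def cn_update(xs):
    """Check-node update in erasure-probability form for the BEC: the
    outgoing message is erased unless all incoming messages are known."""
    known = 1.0
    for x in xs:
        known *= 1.0 - x
    return 1.0 - known

def vn_update(eps, xs):
    """Variable-node update: the outgoing message is erased only if the
    channel observation and all incoming messages are erased."""
    p = eps
    for x in xs:
        p *= x
    return p

# Monotonicity in action: raising one input erasure probability can only
# raise the output erasure probability of either node operation.
assert cn_update([0.2, 0.5]) <= cn_update([0.3, 0.5])
assert vn_update(0.4, [0.2, 0.5]) <= vn_update(0.4, [0.3, 0.5])
```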

APPENDIX II
PROOF OF PROPOSITION 2

From the definitions made in the statement of Proposition 2, we have . We assume in order to satisfy Design Rule 3. Since the code has memory , we have and . Consider the subset of columns of corresponding to the polynomial

where

and

.

We claim that this is a protograph stopping set. To see this, consider the columns corresponding to the above subset with and as the column polynomials defining . We have

when . Similarly, it can be verified that has all coefficients equal to 2 in all other cases also. Clearly, can only differ from in having larger coefficients. Therefore, also has all coefficients greater than 1. This shows that the chosen subset of columns forms a protograph stopping set. Based on the parameters , we can count the number of columns included in the span of this stopping set and therefore give upper bounds on , as claimed.

APPENDIX III
PROOF OF COROLLARY 3

Consider the protograph of the ensemble given by and . Let the polynomial represent an arbitrary subset (chosen from the nonempty subsets) of the first columns of , for any choice of polynomials and with coefficients in and maximal degrees and , respectively, i.e.,

where and not all s are zeros. When , let . Clearly, is a monic polynomial of degree . When and , let . Then, is a monic polynomial of degree . Since in both these cases is a monic polynomial, there is at least one coefficient equaling 1. Thus, . Finally, notice that


with all coefficients strictly larger than 1. Note that corresponds to the first column of the protograph and to the column. Thus, we have . Since we have

from Proposition 2, we conclude that .

ACKNOWLEDGMENT

The authors are very grateful to the anonymous reviewers for their comments and suggestions for improving the presentation of the paper. The authors would also like to thank Gianluigi Liva, who suggested the trade-off of decoding performance for reduced latency through the use of a windowed decoder, and Rüdiger Urbanke for pointing out an error in an earlier version of the paper.

REFERENCES

[1] M. Papaleo, A. R. Iyengar, P. H. Siegel, J. K. Wolf, and G. Corazza, “Windowed erasure decoding of LDPC convolutional codes,” in Proc. IEEE Inf. Theory Workshop, Cairo, Egypt, Jan. 2010, pp. 78–82.

[2] A. R. Iyengar, M. Papaleo, G. Liva, P. H. Siegel, J. K. Wolf, and G. E. Corazza, “Protograph-based LDPC convolutional codes for correlated erasure channels,” in Proc. IEEE Int. Conf. Commun., Cape Town, South Africa, May 2010, pp. 1–6.

[3] G. E. Corazza, A. R. Iyengar, M. Papaleo, P. H. Siegel, A. Vanelli-Coralli, and J. K. Wolf, “Latency constrained protograph-based LDPC-CC,” in Proc. 6th Int. Symp. Turbo Codes Iterative Inf. Process., Brest, France, Sep. 2010, pp. 6–10.

[4] R. G. Gallager, Low Density Parity Check Codes. Cambridge, MA: MIT Press, 1963.

[5] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco, CA: Morgan Kaufmann, 1988.

[6] N. Wiberg, “Codes and decoding on general graphs,” Ph.D. dissertation, Linköping University, Linköping, Sweden, 1996.

[7] D. MacKay and R. Neal, “Near Shannon limit performance of low density parity check codes,” Electron. Lett., vol. 33, no. 6, pp. 457–458, Mar. 1997.

[8] A. J. Felstrom and K. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix,” IEEE Trans. Inf. Theory, vol. 45, no. 6, pp. 2181–2191, Sep. 1999.

[9] M. Lentmaier, G. P. Fettweis, K. S. Zigangirov, and D. J. Costello, “Approaching capacity with asymptotically regular LDPC codes,” in Proc. Inf. Theory Appl., San Diego, CA, 2009.

[10] S. Kudekar, T. Richardson, and R. L. Urbanke, “Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC,” CoRR, vol. abs/1001.1826, 2010.

[11] T. Richardson and R. Urbanke, Modern Coding Theory. New York: Cambridge Univ. Press, 2008.

[12] G. Liva, private communication, 2008.

[13] M. Lentmaier, A. Sridharan, K. S. Zigangirov, and D. J. Costello, “Terminated LDPC convolutional codes with thresholds close to capacity,” in Proc. IEEE Int. Symp. Inf. Theory, Sep. 4–9, 2005, pp. 1372–1376.

[14] M. Lentmaier, A. Sridharan, D. J. Costello, and K. S. Zigangirov, “Iterative decoding threshold analysis for LDPC convolutional codes,” IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 5274–5289, Oct. 2010.

[15] A. Pusane, A. J. Feltstrom, A. Sridharan, M. Lentmaier, K. Zigangirov, and D. J. Costello, “Implementation aspects of LDPC convolutional codes,” IEEE Trans. Commun., vol. 56, no. 7, pp. 1060–1069, Jul. 2008.

[16] A. Sridharan, “Design and analysis of LDPC convolutional codes,” Ph.D. dissertation, Univ. Notre Dame, Notre Dame, IN, 2005.

[17] J. Thorpe, “Low-density parity-check (LDPC) codes constructed from protographs,” JPL INP, Tech. Rep., 2003.

[18] A. Sridharan, D. V. Truhachev, M. Lentmaier, D. J. Costello, and K. S. Zigangirov, “Distance bounds for an ensemble of LDPC convolutional codes,” IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4537–4555, Dec. 2007.

[19] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.

[20] M. Luby, M. Mitzenmacher, M. Shokrollahi, and D. Spielman, “Efficient erasure correcting codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 569–584, Feb. 2001.

[21] M. Luby, M. Mitzenmacher, M. Shokrollahi, and D. Spielman, “Improved low-density parity-check codes using irregular graphs,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 585–598, Feb. 2001.

[22] T. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.

[23] T. Richardson and R. Urbanke, “Efficient encoding of low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 638–656, Feb. 2001.

[24] G. Liva and M. Chiani, “Protograph LDPC codes design based on EXIT analysis,” in Proc. IEEE Globecom, Washington, DC, Nov. 27–30, 2007, pp. 3250–3254.

[25] C. Di, D. Proietti, I. Telatar, T. Richardson, and R. Urbanke, “Finite-length analysis of low-density parity-check codes on the binary erasure channel,” IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1570–1579, Jun. 2002.

[26] J. Han and P. H. Siegel, “Improved upper bounds on stopping redundancy,” IEEE Trans. Inf. Theory, vol. 53, no. 1, pp. 90–104, Jan. 2007.

[27] D. Divsalar, S. Dolinar, and C. Jones, “Protograph LDPC codes over burst erasure channels,” JPL INP, Tech. Rep., 2006.

[28] M. Lentmaier, D. G. M. Mitchell, G. P. Fettweis, and D. J. Costello, “Asymptotically regular LDPC codes with linear distance growth and thresholds close to capacity,” in Proc. Inf. Theory Appl., San Diego, CA, 2010.

[29] B. K. Butler and P. H. Siegel, “On distance properties of quasi-cyclic protograph-based LDPC codes,” in Proc. IEEE Int. Symp. Inf. Theory, Austin, TX, Jun. 13–18, 2010, pp. 809–813.

[30] X.-Y. Hu, E. Eleftheriou, and D. Arnold, “Regular and irregular progressive edge-growth Tanner graphs,” IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 386–398, Jan. 2005.

[31] T. Tian, C. Jones, J. Villasenor, and R. Wesel, “Selective avoidance of cycles in irregular LDPC code construction,” IEEE Trans. Commun., vol. 52, no. 8, pp. 1242–1247, Aug. 2004.

[32] E. Gilbert, “Capacity of a burst-noise channel,” Bell Syst. Tech. J., vol. 39, pp. 1253–1265, Sep. 1960.

[33] E. Elliott, “Estimates of error rates for codes on burst-noise channels,” Bell Syst. Tech. J., vol. 42, pp. 1977–1997, Sep. 1963.

[34] L. Wilhelmsson and L. B. Milstein, “On the effect of imperfect interleaving for the Gilbert-Elliott channel,” IEEE Trans. Commun., vol. 47, no. 5, pp. 681–688, May 1999.

[35] M. Yang and W. E. Ryan, “Design of LDPC codes for two-state fading channel models,” in Proc. 5th Int. Symp. Wireless Pers. Multimedia Commun., Oct. 2002, vol. 3, pp. 986–990.

[36] S. J. Johnson and T. Pollock, “LDPC codes for the classic bursty channel,” in Proc. IEEE Int. Symp. Inf. Theory, Aug. 2004, pp. 184–189.

[37] E. Paolini and M. Chiani, “Improved low-density parity-check codes for burst erasure channels,” in Proc. IEEE Int. Conf. Commun., Istanbul, Turkey, Jun. 2006, pp. 1183–1188.

[38] G. Liva, B. Matuz, Z. Katona, E. Paolini, and M. Chiani, “On construction of moderate-length LDPC codes over correlated erasure channels,” in Proc. IEEE Int. Conf. Commun., Dresden, Germany, Jun. 2009, pp. 1–5.

[39] G. Sridharan, A. Kumarasubramanian, A. Thangaraj, and S. Bhashyam, “Optimizing burst erasure correction of LDPC codes by interleaving,” in Proc. IEEE Int. Symp. Inf. Theory, Jul. 2008, pp. 1143–1147.

[40] S. J. Johnson, “Burst erasure correcting LDPC codes,” IEEE Trans. Commun., vol. 57, no. 3, pp. 641–652, Mar. 2009.

[41] E. Paolini and M. Chiani, “Construction of near-optimum burst erasure correcting low-density parity-check codes,” IEEE Trans. Commun., vol. 57, no. 5, pp. 1320–1328, May 2009.

[42] T. Wadayama, “Ensemble analysis on minimum span of stopping sets,” in Proc. Inf. Theory Appl., San Diego, CA, 2006.

Aravind R. Iyengar (S’09) was born in Mysore, India, in 1985. He received his B.Tech degree in Electrical Engineering from the Indian Institute of Technology Madras, Chennai, in 2007, and his M.S. degree in Electrical Engineering from the University of California, San Diego, La Jolla, in 2009. He is presently a doctoral student in the Electrical and Computer Engineering department at UCSD and is affiliated with the Center for Magnetic Recording Research. In 2006, he was a visiting student intern at the École Nationale Supérieure de l’Electronique et de ses Applications (ENSEA), Cergy, France. He was a visiting doctoral student at the Communication Theory Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2010. His research interests are in the areas of information and coding theory, and in wireless communications.


Marco Papaleo (S’07–M’10) was born in Soverato, Italy, in 1981. He received his B.S. and M.Sc. degrees (summa cum laude) in Telecommunication Engineering from the University of Bologna, Italy, in 2003 and 2006, respectively. In 2010, he received his Ph.D. in Information Technology from the Advanced Research Center on Electronic Systems for Information and Communication Technologies “Ercole De Castro” (ARCES) at the University of Bologna. In 2006, he was a visiting affiliate student at University College London (UCL). In summer 2008, he was a visiting Ph.D. student at the Institute of Communications and Navigation of the German Aerospace Center (DLR) in Munich, Germany, where he was involved in the design and analysis of LDPC convolutional codes. In 2009, he was a visiting graduate student at the Center for Magnetic Recording Research of the University of California, San Diego (UCSD). In January 2011, he joined the Corporate Research and Development department at Qualcomm Inc., San Diego, CA, where he is currently involved in the design of multimode mobile station modems. His research activities are mainly focused on next-generation wireless telecommunication systems, both terrestrial and satellite networks. In particular, his interests include design and performance evaluation of error control coding (with emphasis on packet level coding).

Paul H. Siegel (M’82–SM’90–F’97) received the S.B. and Ph.D. degrees in mathematics from the Massachusetts Institute of Technology (MIT), Cambridge, in 1975 and 1979, respectively. He held a Chaim Weizmann Postdoctoral Fellowship at the Courant Institute, New York University. He was with the IBM Research Division in San Jose, CA, from 1980 to 1995. He joined the faculty at the University of California, San Diego in July 1995, where he is currently Professor of Electrical and Computer Engineering in the Jacobs School of Engineering. He is affiliated with the Center for Magnetic Recording Research, where he holds an endowed chair and served as Director from 2000 to 2011. His primary research interests lie in the areas of information theory and communications, particularly coding and modulation techniques, with applications to digital data storage and transmission.

Prof. Siegel was a member of the Board of Governors of the IEEE Information Theory Society from 1991 to 1996 and was re-elected for a 3-year term in 2009. He served as Co-Guest Editor of the May 1991 Special Issue on “Coding for Storage Devices” of the IEEE TRANSACTIONS ON INFORMATION THEORY. He served the same Transactions as Associate Editor for Coding Techniques from 1992 to 1995, and as Editor-in-Chief from July 2001 to July 2004. He was also Co-Guest Editor of the May/September 2001 two-part issue on “The Turbo Principle: From Theory to Practice” of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS.

Prof. Siegel was co-recipient, with R. Karabed, of the 1992 IEEE Information Theory Society Paper Award and shared the 1993 IEEE Communications Society Leonard G. Abraham Prize Paper Award with B. H. Marcus and J. K. Wolf. With J. B. Soriaga and H. D. Pfister, he received the 2007 Best Paper Award in Signal Processing and Coding for Data Storage from the Data Storage Technical Committee of the IEEE Communications Society. He holds several patents in the area of coding and detection, and was named a Master Inventor at IBM Research in 1994. He is an IEEE Fellow and a member of the National Academy of Engineering.

Jack Keil Wolf (S’54–M’60–F’73–LF’97) received the B.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1956, and the M.S.E., M.A., and Ph.D. degrees from Princeton University, Princeton, NJ, in 1957, 1958, and 1960, respectively. He was the Stephen O. Rice Professor of Electrical and Computer Engineering and a member of the Center for Magnetic Recording Research at the University of California-San Diego, La Jolla. He was a member of the Electrical Engineering Department at New York University from 1963 to 1965, and the Polytechnic Institute of Brooklyn from 1965 to 1973. He was Chairman of the Department of Electrical and Computer Engineering at the University of Massachusetts, Boston, from 1973 to 1975, and he was Professor there from 1973 to 1984. From 1984 to 2011, he was a Professor of Electrical and Computer Engineering and a member of the Center for Magnetic Recording Research at the University of California-San Diego. He also held a part-time appointment at Qualcomm, Inc., San Diego. From 1971 to 1972, he was an NSF Senior Postdoctoral Fellow, and from 1979 to 1980, he held a Guggenheim Fellowship. His most recent research interest was in signal processing for storage systems.

Dr. Wolf was elected to the National Academy of Engineering in 1993. He was the recipient of the 1990 E. H. Armstrong Achievement Award of the IEEE Communications Society and was co-recipient with D. Slepian of the 1975 IEEE Information Theory Group Paper Award for the paper “Noiseless coding for correlated information sources.” He shared the 1993 IEEE Communications Society Leonard G. Abraham Prize Paper Award with B. Marcus and P. H. Siegel for the paper “Finite-state modulation codes for data storage.” He served on the Board of Governors of the IEEE Information Theory Group from 1970 to 1976 and from 1980 to 1986. Dr. Wolf was President of the IEEE Information Theory Group in 1974. He was International Chairman of Committee C of URSI from 1980 to 1983. He was the recipient of the 1998 IEEE Koji Kobayashi Computers and Communications Award, “for fundamental contributions to multiuser communications and applications of coding theory to magnetic data storage devices.” In May 2000, he received a UCSD Distinguished Teaching Award. In 2004, Professor Wolf received the IEEE Richard W. Hamming Medal for “fundamental contributions to the theory and practice of information transmission and storage.” In 2005, he was elected a Fellow of the American Academy of Arts and Sciences, and in 2010 he was elected a member of the National Academy of Sciences. He was co-recipient with I. M. Jacobs of the 2011 Marconi Society Fellowship and Prize. Prof. Wolf passed away on May 12, 2011.

Alessandro Vanelli-Coralli received the Dr. Ing. degree (cum laude) in Electronics Engineering and the Ph.D. in Electronics and Computer Science from the University of Bologna, Italy, in 1991 and 1996, respectively. In 1996, he joined the Department of Electronics, Computer Science and Systems (DEIS) at the University of Bologna, where he is currently an Assistant Professor. Since 2001, he has also been a Research Associate of the Advanced Research Center for Electronic Systems (ARCES) of the University of Bologna. During 2003 and 2005, he was a Visiting Scientist at Qualcomm Inc. (San Diego, CA), working in the Corporate R&D Department on Mobile Communication Systems. Dr. Vanelli-Coralli participates in national and international research projects on satellite mobile communication systems. He is the Co-Leader of the R&D group of the Integral SatCom Initiative (ISI) technology platform, and Scientific Responsible for several European Space Agency and European Commission funded projects. His research interests are in the area of wireless communication systems, digital transmission techniques, and digital signal processing. Dr. Vanelli-Coralli has been appointed a member of the Editorial Board of the Wiley InterScience Journal on Satellite Communications and Networks and has been guest co-Editor for several special issues of international scientific journals. He has served on the organization committees of scientific conferences and has been General Co-Chairman of the 5th IEEE ASMS Conference and 11th IEEE SPSC Workshop, Technical Chairman of the IEEE ASMS 2008 Conference, and Technical Vice-Chairman of the IEEE ISSSTA 2008 Conference. Dr. Vanelli-Coralli has coauthored more than 130 papers and scientific conference contributions and is co-recipient of several Best Paper Awards. He is an IEEE Senior Member.

Giovanni E. Corazza (M’91–SM’09) is a Full Professor at the University of Bologna, Head of the Department of Electronics, Computer Science and Systems (DEIS), and leader for Wireless Communications inside the Advanced Research Centre for Electronic Systems (ARCES). He is the founder of the Marconi Institute for Creativity (2011). He was Chairman of the School for Telecommunications in the years 2000–2003, Chairman of the Advanced Satellite Mobile Systems Task Force (ASMS-TF), and Founder and Chairman of the Integral Satcom Initiative (ISI), a European Technology Platform devoted to Satellite Communications. Since 1997, he has been Editor for Communication Theory and Spread Spectrum for the IEEE TRANSACTIONS ON COMMUNICATIONS. He is the author of more than 200 papers, and received the Marconi International Fellowship Young Scientist Award in 1995, the IEEE 2009 Satellite Communications Distinguished Service Award, the 2002 IEEE VTS Best System Paper Award, and the Best Paper Award at IEEE ISSSTA’98, IEEE ICT 2001, and ISWCS 2005. He has been General Chairman of the IEEE ISSSTA 2008, ASMS 2004, ASMS 2006, ASMS 2008, and ASMS 2010 Conferences. His research interests are in wireless and satellite communications, creative thinking, estimation and synchronization, spread spectrum and multicarrier transmission, upper layer coding, navigation, and positioning.