SOFT-DECISION DECODING OF LINEAR BLOCK CODES USING PREPROCESSING AND DIVERSIFICATION

Yingquan Wu and Christoforos Hadjicostis

Abstract—Order-w reprocessing is a suboptimal soft-decision decoding approach for binary linear block codes in which up to w bits are systematically flipped on the so-called most reliable (information) basis (MRB). At each iteration a candidate codeword is recoded and, in the end, the most likely codeword is picked from the candidate list. In this paper, we first incorporate two preprocessing rules into order-w reprocessing: i) a test error pattern with Hamming weight up to w + 1 is discarded without further computing its actual likelihood if it results in more than θ bit errors within the MRB and the t most reliable bits of the redundancy part; ii) a test error pattern with Hamming weight w + 2 is discarded if it results in more than 1 bit error among the τ most reliable bits of the redundancy part, where θ, t, and τ are parameters to be determined. We also show that, with an appropriate choice of parameters, the proposed order-w reprocessing with preprocessing requires complexity comparable to order-w reprocessing but asymptotically achieves the performance of order-(w + 2) reprocessing. To complement the MRB, we also employ iterative recoding on a second basis for practical SNRs and systematically extend this approach to a multi-basis order-w reprocessing scheme for high SNRs. We show that the proposed multi-basis scheme significantly enlarges the error-correction radius, a commonly used measure of performance at high SNRs, over the original (single-basis) order-w reprocessing. As a by-product, we also precisely characterize the asymptotic performance of the well-known Chase and GMD decoding algorithms. Our simulation results show that the proposed algorithm successfully decodes the (192, 96) Reed-Solomon concatenated code and the (256, 147) extended BCH code in a near-optimal manner (within 0.01 dB at a block-error rate of 10^−5) with affordable computational cost.

Index Terms—Soft-decision decoding, most reliable basis, multi-basis, preprocessing, asymptotic performance, binary linear block codes.

I. INTRODUCTION

Maximum-likelihood (ML) soft-decision decoding provides substantial performance gain in comparison to bounded-distance (hard-decision) decoding. However, ML soft-decision decoding has been shown to be NP-hard, whereas polynomial-complexity bounded-distance decoding algorithms exist for many known codes. The problem of finding computationally efficient and practically implementable soft-decision decoding algorithms has been investigated extensively and remains an open and challenging problem, particularly for long block codes. In [8], Forney provided for the first time a suboptimal soft-decision decoding approach that utilizes the concept of generalized minimum distance (GMD) decoding based on channel erasure information. In [5], Chase presented another suboptimal algorithm which uses channel reliability information to search for codewords by successively applying bounded-distance decoding to candidate test error patterns corresponding to certain least reliable bit positions. At practical signal-to-noise ratios (SNRs), the methods in [8], [5] provide mild gains at manageable computational costs; however, these methods rely on an efficient hard-decision bounded-distance decoder, which may not be available for many codes.

An active research direction for the soft-decision decoding of general binary linear block codes is iterative recoding based on the so-called most reliable basis (MRB), which can be obtained by applying a greedy search algorithm to the received word sorted in decreasing reliability order. At each iteration of iterative recoding, a test error pattern (TEP) is added to the information message associated with the MRB and then a candidate codeword is recoded using the systematic generator matrix associated with the MRB. This was first suggested by Dorsch in [7], whose approach was based on flipping TEPs that are iterated in ascending reliability order. In [9], Fossorier and Lin proposed a simple order-w reprocessing scheme that systematically flips up to w bits over the MRB. This method was shown to be asymptotically optimal when w ≥ min{⌈d_min/4⌉, K}, where K denotes the information dimension and d_min denotes the minimum Hamming distance of the code. In [13], Fossorier presented a multi-basis algorithm that does not require a reprocessing order larger than (w − 1) in its main phase and that is able to correct any error pattern of at most w errors in the K + t most reliable positions (the parameter t satisfies 0 < t ≤ min{K, N − K}, where N denotes the code length and K the information dimension). In [22], Wu and Hadjicostis proved that Dorsch’s proposed


order of flipping patterns in [7] is optimal in the sense of minimizing the list error probability under a constraint on the maximum number of recoding operations. The authors also devised a decoding algorithm in which flipping patterns are iterated in natural order and each recoding operation requires at most N − K Boolean operations. In addition, the average complexity of this algorithm is substantially reduced by employing a suboptimal probabilistic skipping rule which uses the likelihood of the tentatively most likely codeword to dynamically avoid the recoding of unpromising TEPs. Recently, Valembois and Fossorier applied the so-called “box and match” technique to order-w reprocessing and were able to correct 2w bit errors in an extended MRB composed of the MRB information bits and a certain number of the most reliable bits in the redundancy part [20]. This approach was shown to successfully decode the (192, 96) concatenated Reed-Solomon code, which had never been done before; it requires, however, a large amount of memory. This breakthrough is achieved by aiming at directly maximizing the error-correction capability in an extended MRB (i.e., the MRB plus some of the most reliable bits in the redundancy part) as opposed to the conventional approach of maximizing the error-correction capability in the MRB [7], [9], [22].

In this paper we develop two techniques to improve iterative

recoding algorithms that are based on the MRB. These techniques are universally applicable, but our analysis focuses on order-w reprocessing due to its simplicity. The first technique, called preprocessing, filters out (unpromising) TEPs whose corresponding codewords have more than a pre-determined number of bit errors within an extended MRB that contains the MRB and a certain number of the most reliable bits in the redundancy part. We consider two preprocessing rules. The first rule copes with TEPs whose (Hamming) weights are less than w + 2 and discards the ones for which the weight of the partial coset composed of the MRB and the t most reliable bits in the redundancy part is greater than a pre-determined threshold θ. The second rule copes with TEPs which have weight w + 2 and discards those that result in more than one bit error among the τ most reliable bits in the redundancy part. We show that by appropriately choosing the parameters t, θ, and τ, the proposed scheme has complexity comparable to that of order-w reprocessing while achieving asymptotically the performance of order-(w + 2) reprocessing. The second technique complements the MRB with a second basis that is essentially formed by replacing the δ least reliable bits in the MRB with the δ most reliable bits in the redundancy part, and applies the proposed order reprocessing algorithm with preprocessing to each basis. We also propose a systematic scheme that uses multiple bases in the high-SNR regime and show that the use of multiple bases can significantly enlarge the error-correction radius, a commonly employed indicator of decoding performance at high SNRs.

The paper is organized as follows. In Section II we provide

background on soft-decision decoding and iterative recoding. We present the order-w reprocessing algorithm with preprocessing in Section III and extend the algorithm to the scenario of multiple bases in Section IV. Simulation results are presented in Section V and pertinent remarks are made in Section VI.

II. PRELIMINARIES

Let C(N, K) be a binary linear block code of length N and dimension K (and redundancy length R ≜ N − K) that is used for error control over the additive white Gaussian noise (AWGN) channel, under binary phase-shift-keying (BPSK) signaling of unit energy. More specifically, a bipolar version of a codeword c = [c_1 c_2 . . . c_N], i.e., [(−1)^{c_1} (−1)^{c_2} . . . (−1)^{c_N}], is used for transmission. At the output of the demodulator, the received unquantized word r = [r_1 r_2 . . . r_N] takes the form r_i = (−1)^{c_i} + n_i, i = 1, 2, . . . , N, where the n_i are independent and identically distributed Gaussian random variables with zero mean and variance N_0/2. We adopt the common assumption that codewords are equally probable, i.e., Pr(c) = 2^{−K} for any c ∈ C. Under this model, the ith-bit log-likelihood ratio, which is formally defined as

λ_i ≜ ln [Pr(c_i = 0 | r_i) / Pr(c_i = 1 | r_i)],

can be simplified to λ_i = 4 r_i / N_0, i.e., the received symbol r_i is simply a scaled log-likelihood ratio. Bit hard-decision sets y_i to 0 if r_i > 0 and to 1 otherwise; the reliability of the corresponding bit decision, which is defined as a scaled version of the magnitude of the log-likelihood ratio, is conveniently represented by α_i = |r_i|.

Suppose we are given a binary block code C(N, K) with

generator matrix G. Upon receiving r (in the form explained above), one can construct a new systematic generator matrix associated with the most reliable (information) basis (MRB) via a transformation of the original generator matrix G; with a slight abuse of notation, we still denote the reordered systematic matrix by G. The following greedy search algorithm can be used to obtain the MRB [9].

Greedy Algorithm for Obtaining the MRB

1) Sort the bit indices in decreasing order of reliability as i_1, i_2, . . . , i_N (a permutation of 1, 2, . . . , N).
2) Set the index set to be empty.
3) For l = 1, 2, . . ., until the index set becomes full, do: check whether the i_l-th column vector of G is independent of the column vectors of G that are associated with the index set. If so, add i_l to the index set; otherwise discard i_l.
4) Permute (column-wise) the original matrix G in accordance with the index set and then systematize the resulting matrix.
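As an illustration, the greedy search above can be sketched in Python (a sketch under our own naming; Step 4's column permutation and systematization are left to the caller):

```python
import numpy as np

def most_reliable_basis(G, alpha):
    """Sketch of the greedy MRB search (names are ours, not the paper's):
    scan the columns of G in decreasing reliability order and keep each
    column that is linearly independent, over GF(2), of those already kept."""
    K, N = G.shape
    order = np.argsort(-np.asarray(alpha))   # i1, i2, ..., iN (step 1)
    pivots = {}                              # pivot row -> reduced kept column
    basis = []                               # the index set (step 2)
    for i in order:                          # step 3
        col = G[:, i].copy() % 2
        for row, vec in pivots.items():      # reduce against kept columns
            if col[row]:
                col ^= vec
        nz = np.flatnonzero(col)
        if nz.size:                          # independent over GF(2): keep i
            pivots[nz[0]] = col
            basis.append(int(i))
        if len(basis) == K:                  # index set is full
            break
    return basis
```

The incremental GF(2) reduction avoids recomputing a full matrix rank for every candidate column.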

Note that when K/N > 1/2, the above procedure (Steps 3 and 4 in particular) can be made computationally more efficient by operating on the parity-check matrix instead of the generator matrix [9]. Note also that the re-ordered reliabilities in the MRB and in the redundancy part are each in decreasing order, i.e.,

α_1 ≥ α_2 ≥ . . . ≥ α_K and α_{K+1} ≥ α_{K+2} ≥ . . . ≥ α_N. (1)

In the sequel, all quantities (G, y, α) refer to the ordering associated with the MRB; subscript “1” stands for the index set {1, 2, 3, . . . , K} associated with the basis and subscript “2” stands for the index set {K + 1, K + 2, . . . , N} associated with the redundancy part. Since the matrix G is in systematic form, we will also write it as G = [I_K P], where I_K denotes the K × K identity matrix and P reflects the dependency of the (unreliable) redundancy bits on the (reliable) information bits.


Having obtained the reordered G, y, and α associated with the MRB, one can immediately generate the first candidate codeword by recoding the information bits y_1:

c_0 ≜ y_1 G = [y_1  y_1 P]. (2)

Thus, if no errors occur in the MRB, then the above recoding operation successfully retrieves the transmitted codeword. When only a few errors are present in the MRB, one can develop search strategies that iteratively flip bits of y_1 (the information bits corresponding to the MRB), each time recoding the resulting information vector into a codeword (using G) and evaluating its likelihood. The action of flipping bits in y_1 is equivalent to adding to y_1 a binary test error pattern (TEP) e ≜ [e_1 e_2 . . . e_K] and performing a recoding operation with respect to the TEP e as

c_e ≜ (y_1 ⊕ e) G = [y_1 ⊕ e  (y_1 ⊕ e) P], (3)

where ⊕ denotes the bit-wise XOR operator and y_1 denotes the information part corresponding to the MRB.

The weighted Hamming distance (WHD) with respect to a TEP e, denoted by D(α, e), is defined as the sum of the reliabilities of the entries in which the recoded codeword c_e differs from y, i.e.,

D(α, e) ≜ Σ_{1 ≤ i ≤ N : c_{e,i} ≠ y_i} α_i. (4)

It is well known that maximum-likelihood (ML) soft-decision decoding is equivalent to searching for the TEP that minimizes the WHD D(α, ·). It can be verified that when α_i = 1, i = 1, 2, . . . , N, the above formula evaluates to the Hamming distance between c_e and y.

Let e_2 = [e_{2,1} e_{2,2} . . . e_{2,R}] denote the binary error vector in the redundancy part that corresponds to the TEP e in the MRB, i.e.,

e_2 ≜ y_2 ⊕ (y_1 ⊕ e) P. (5)

The WHD D(α, e) can be naturally decomposed into two parts, the information WHD and the redundancy WHD, defined respectively as follows:

D_1(α, e) ≜ Σ_{1 ≤ i ≤ K : e_i = 1} α_i,   D_2(α, e_2) ≜ Σ_{1 ≤ i ≤ R : e_{2,i} = 1} α_{K+i}. (6)
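For concreteness, Eqs. (3)-(6) can be exercised with a short Python sketch (names are ours; y, alpha, and P are assumed already permuted into MRB order):

```python
import numpy as np

def recode_and_whd(y, alpha, P, e):
    """Illustration of Eqs. (3)-(6): recode the TEP e into a candidate
    codeword and evaluate its WHD as information part plus redundancy part."""
    K = P.shape[0]
    y1, y2 = y[:K], y[K:]
    info = y1 ^ e                                 # flip the MRB bits named by e
    ce = np.concatenate([info, info @ P % 2])     # Eq. (3): ce = (y1 + e)G
    e2 = y2 ^ ce[K:]                              # Eq. (5): redundancy error vector
    D1 = alpha[:K][e == 1].sum()                  # information WHD, Eq. (6)
    D2 = alpha[K:][e2 == 1].sum()                 # redundancy WHD, Eq. (6)
    # sanity check: the split of Eq. (6) reproduces the direct sum of Eq. (4)
    assert np.isclose(D1 + D2, alpha[ce != y].sum())
    return ce, D1 + D2
```

The internal assertion verifies that the decomposition D = D_1 + D_2 agrees with the direct definition (4).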

In [7], TEPs are ordered in ascending information WHD D_1(α, ·), which is shown in [22] to be optimal in the sense of minimizing the list error probability when choosing only a subset among the 2^K TEPs. This ordering, however, depends on the reliability values and thus requires complex dynamic sorting for each decoding. A simpler TEP ordering is order reprocessing, which sorts TEPs primarily by increasing Hamming weight and secondarily by dictionary order [9], [14], as shown in Table I. This ordering is independent of the reliability values and is thus fixed, given the reliability order in (1).

TABLE I
ORDER-w REPROCESSING OF TEPS

Weight 1      →  Weight 2      →  . . .  →  Weight w
00. . . 0001     00. . . 0011     . . .     00. . . 0 11. . . 1   (w trailing ones)
00. . . 0010     00. . . 0101     . . .     00. . . 010 1. . . 1  (w−1 trailing ones)
00. . . 0100     00. . . 0110     . . .     00. . . 0110 1. . . 1 (w−2 trailing ones)
    ...              ...          . . .         ...
1000. . . 00     1100. . . 00     . . .     11. . . 1 0. . . 00   (w leading ones)
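The ordering of Table I can be reproduced with a small Python generator (a sketch with our own naming; we read position 0 as the rightmost bit of the patterns above):

```python
from itertools import combinations

def order_w_teps(K, w):
    """Enumerate TEPs over the K MRB positions in the order of Table I
    (a sketch): the all-zero TEP first, then weights 1, 2, ..., w; within
    each weight, the dictionary order of Table I, i.e., the patterns read
    as binary numbers in ascending order."""
    yield (0,) * K                            # TEP e = 0 is recoded first
    for weight in range(1, w + 1):
        # colex order over the position sets reproduces Table I's columns
        for ones in sorted(combinations(range(K), weight), key=lambda s: s[::-1]):
            e = [0] * K
            for p in ones:
                e[K - 1 - p] = 1              # position 0 = rightmost bit
            yield tuple(e)
```

Note that within a weight class the patterns come out in ascending numeric value, matching the rows of Table I.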

III. ORDER REPROCESSING WITH PREPROCESSING

In this section we present a soft-decision decoding algorithm that is based on iterative recoding in light of the MRB. Recall that order-w reprocessing systematically recodes TEPs with weights up to w in reprocessing order, i.e., first the TEP e = 0, next all TEPs with weight 1 in dictionary order, next all TEPs with weight 2 in dictionary order, . . ., and finally all TEPs with weight w in dictionary order [9]. An improved implementation of order-w reprocessing is given in [22]. We present two preprocessing rules which can be embedded into any iterative recoding algorithm (possibly with slight modifications). We also study the asymptotic behavior of the proposed algorithm as N_0 goes to zero.

A. First preprocessing rule

We now consider a thresholding rule on the extended MRB, which is composed of the MRB and the t most reliable bits in the redundancy part. Following the order in (1), the extended MRB can be conveniently denoted by [1, K + t]. This preprocessing rule dictates that a codeword c_e is discarded without further consideration if

d_{[1, K+t]}(c_e, y) > θ, (7)

where d_{[a, b]}(c, y) denotes the Hamming distance between c and y within the bit positions [a, b]. We next show that this preprocessing rule can be efficiently enforced if TEPs and their resulting codeword information are generated in terms of sub-TEPs (STEPs), as described below.

Consider a decomposition of a TEP e into a unit vector

whose only 1 is positioned at the smallest 1-position in e (i.e., the most reliable nonzero bit in e) and a STEP which corresponds to the remainder of e, i.e.,

e = e^sub + u_{μ(e)}, (8)

where μ(e) ≜ min{i : e_i = 1}. Note that since e^sub_{μ(e)} = 0, e^sub + u_{μ(e)} is equivalent to e^sub ⊕ u_{μ(e)}; in the sequel, e^sub + u_i always implies e^sub_i = 0. As can be seen, a nonzero STEP e^sub always satisfies μ(e^sub) > 1. A STEP e^sub is used to generate μ(e^sub) − 1 TEPs (where μ(e^sub) is the minimum index among the nonzero positions of e^sub). More specifically, the STEP e^sub generates the TEPs in the set

S(e^sub) ≜ {e^sub + u_1, e^sub + u_2, . . . , e^sub + u_{μ(e^sub)−1}}.


We further note that any two distinct STEPs generate disjoint sets of TEPs, i.e., S(e^sub_1) ∩ S(e^sub_2) = ∅. This is so because a TEP is uniquely decomposed as e = e^sub + u_{μ(e)}. This leads us to conclude that all STEPs with weight i − 1 generate all TEPs with weight i; furthermore,

{e : w(e) = i} = ⋃_{e^sub : w(e^sub) = i−1} S(e^sub). (9)

Eq. (9) indicates that the C(K, i) TEPs with weight i are generated by C(K−1, i−1) STEPs (recall that e^sub_1 = 0, thus e^sub has essentially K − 1 degrees of freedom). Thus, on average, a STEP with weight i − 1 produces K/i TEPs.

Note that the information of e^sub is only computed once

for all μ(e^sub) − 1 TEPs in S(e^sub); thus, we can calculate the distance d_{[1, K+t]}(c_e, y) through

d_{[1, K+t]}(c_{e^sub+u_i}, y) = w(e^sub + u_i) + d_{[K+1, K+t]}(c_{e^sub+u_i}, y)
                              = w(e^sub) + 1 + d_{[1, t]}(e^sub_2, p_i) (10)

for i = μ(e^sub) − 1, . . . , 2, 1, where e^sub_2 is the redundancy error vector as defined in (5). Therefore, given w(e^sub) + 1 and (e^sub_2)_{[1, t]}, a preprocessing operation can be efficiently implemented with t XOR operations and a weight look-up table. Since a STEP produces up to K TEPs, the complexity of preprocessing associated with a STEP is bounded by Kt XORs. In fact, if order-w reprocessing is enforced for STEPs, then the average complexity of preprocessing associated with a STEP is

t [Σ_{i=1}^{w+1} C(K, i)] / [Σ_{i=1}^{w} C(K−1, i)] ≈ t C(K, w+1) / C(K−1, w) = Kt/(w+1)

XOR operations.

We next show how to efficiently compute the WHD D(α, e)

in light of the STEP information. If D_1(α, e^sub) (defined in (6)) and e^sub_2 (defined in (5)) are known, D(α, e) can be computed efficiently as follows:

e_2 = e^sub_2 ⊕ p_{μ(e)}, (11)

D(α, e) = D_1(α, e) + D_2(α, e_2) = D_1(α, e^sub) + α_{μ(e)} + D_2(α, e_2), (12)

where p_i denotes the i-th row of P and D_2(α, e_2) is the redundancy WHD defined in (6). Since we only need to explicitly construct c_{e^o} = [y_1 ⊕ e^o  y_2 ⊕ e^o_2] for the best candidate codeword (to output in the final stage), it suffices to compute e^sub_2 and D_1(α, e^sub) for a given STEP, so that e_2 and D(α, e) can be computed through (11) and (12), respectively.

We remark that the task proposed in [13], i.e., to correct any error pattern of at most w + 1 errors in the extended MRB [1, K + p] by requiring a reprocessing order no larger than w, can be straightforwardly implemented by the proposed order-w STEP reprocessing with the preprocessing rule that we propose in this section (with θ = w + 1 and t = p). Moreover, this (trivial) implementation saves the trouble of generating the multiple bases required in [13].

B. Second preprocessing rule

To simplify our presentation, we again assume that order-w reprocessing for STEPs is enforced, i.e., TEPs of order (w+1) are generated and processed. In this subsection, we are interested in processing TEPs with weight w + 2 using STEPs with weight w. As can be readily seen, if we apply the same strategy as in the preceding subsection, each STEP on average generates K(K − w − 1)/((w + 1)(w + 2)) TEPs (note that the C(K, w+2) TEPs are coupled with C(K−1, w) STEPs). To this end, we propose a slightly easier task so as to avoid such a heavy computational burden; in particular, we are interested in finding TEPs e with weight w + 2 satisfying

w_{[1, τ]}(e_2) ≤ 1, (13)

where τ is a parameter to be determined. The motivation for choosing the threshold to be 1 is to substantially reduce the searching complexity (this is addressed subsequently), while at the same time retaining performance in an asymptotic sense (this issue is addressed in the last part of this section).

Suppose we pre-calculate all L = C(K−w, 2) combinations of any two p_i, i = 1, 2, . . . , K − w (recall G = [I P] and P = [p_1 p_2 . . . p_K]^T), and (arbitrarily) index them by q_1, q_2, . . . , q_L. We can uniquely decompose any TEP e of weight w + 2 into a vector with two 1-entries corresponding to the two most reliable 1-entries of e and a residual vector which is a STEP of weight w. Note that the vector e_2 that corresponds to e with w(e) = w + 2 can be decomposed into some q_i and some e^sub_2 that corresponds to e^sub with w(e^sub) = w. Thus, for a STEP e^sub of weight w, it suffices to find all q_i's satisfying

d_{[1, τ]}(q_i, e^sub_2) ≤ 1. (14)

We next present two different approaches to implement the above task. The first approach is convenient for software implementation and stores the information regarding q_i, 1 ≤ i ≤ L, in an array, naturally indexed by (q_i)_{[1, τ]}, i = 1, 2, . . . , L. Let u_i denote the τ-dimensional unit vector whose only “1” is positioned at the ith entry. Once the redundancy error vector e^sub_2 is given, the τ + 1 patterns (e^sub_2)_{[1, τ]}, (e^sub_2)_{[1, τ]} ⊕ u_τ, (e^sub_2)_{[1, τ]} ⊕ u_{τ−1}, . . . , (e^sub_2)_{[1, τ]} ⊕ u_1 are checked one by one: when a pattern is in the array, it is recoded into codeword(s) (note that an index of the array may be empty or may contain one or more q_i's). This method is called the “box and match” algorithm, and a detailed description of it can be found in [20].

The second approach is convenient for hardware implementation and stores (q_1)_{[1, τ]}, (q_2)_{[1, τ]}, . . . , (q_L)_{[1, τ]} in a τ-level binary tree, with the root set to void and each path from the root to a leaf (with path length τ) representing a prototype (q_i)_{[1, τ]}. The information q_i is stored in the end node of the path (i.e., a leaf of the tree). Given this data structure, a matching operation (i.e., checking whether an input vector is a prototype pattern) can be accomplished with at most τ XOR operations by searching from the root to a leaf. Thus, the task imposed by (14) can be efficiently implemented by τ + 1 match operations, one for each of (e^sub_2)_{[1, τ]}, (e^sub_2)_{[1, τ]} ⊕ u_1, (e^sub_2)_{[1, τ]} ⊕ u_2, . . . , (e^sub_2)_{[1, τ]} ⊕ u_τ. It can readily be seen that the second preprocessing rule requires at most τ(τ + 1) binary operations, and this complexity is comparable to the complexity of the first preprocessing rule.

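The first, array-based approach can be sketched in Python (a sketch with our own names; the q_i's are boxed in a dictionary keyed by their first τ bits, and Eq. (14) is answered with τ + 1 look-ups):

```python
from itertools import combinations

def build_box(P_rows, tau):
    """Box step (sketch): index every pairwise sum q = p_i xor p_j
    by its first tau bits."""
    box = {}
    for i, j in combinations(range(len(P_rows)), 2):
        q = tuple(a ^ b for a, b in zip(P_rows[i], P_rows[j]))
        box.setdefault(q[:tau], []).append((i, j))
    return box

def match_pairs(box, esub2, tau):
    """Match step (sketch): find all q with d_[1,tau](q, esub2) <= 1
    (Eq. (14)) by probing esub2's prefix and its tau single-bit flips."""
    prefix = list(esub2[:tau])
    hits = list(box.get(tuple(prefix), []))   # exact prefix match
    for b in range(tau):
        prefix[b] ^= 1                        # flip bit b: distance-1 match
        hits += box.get(tuple(prefix), [])
        prefix[b] ^= 1
    return hits
```

Each probe is a constant-time dictionary look-up, so the per-STEP cost is τ + 1 look-ups, in line with the complexity estimate above.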
C. Asymptotic performance analysis

In this section, we analyze the asymptotic performance of the proposed order-w reprocessing with preprocessing as N_0 goes to zero. To simplify the analysis, we make the following assumptions and approximations: (i) The K most reliable indices are linearly independent and compose the MRB. In fact, for random codes the probability that the K + s most reliable indices contain a basis is at least 1 − 2^{−s} (see [17], p. 455). Thus, for relatively large K and N, the MRB and the K most reliable bits are almost equally reliable. This assumption will be used to simplify the analysis of the list (decoding) error probability, which is defined as the probability that a decoder fails to retrieve the transmitted codeword as one of the candidate codewords (so that the decoder outputs an erroneous codeword under any (logical) decision rule). (ii) All redundancy error vectors e_2 are uniformly distributed over [1, max(t, τ)], i.e., Pr((e_2)_{[1, max(t, τ)]}) = 2^{−max(t, τ)}. This assumption will be used to evaluate the number of required recodings, which comprise the dominant computational burden.

Our analysis will focus on the list error probability

as N_0 goes to zero. For a given decoding algorithm, the probability of a list decoding error P_list and the probability of a decoding error P_e satisfy

max{P_ML, P_list} ≤ P_e ≤ P_list + P_ML, (15)

where P_ML denotes the error probability of ML decoding. It has been shown in [18] that as N_0 → 0 the asymptotic performance of ML decoding takes the form

P_ML = [A_{d_min}/(2√(π d_min)) + o(1)] N_0^{1/2} e^{−d_min/N_0}, (16)

where o(1) denotes a negligible term that diminishes as N_0 goes to 0, d_min denotes the minimum Hamming distance of the code C(N, K), and A_{d_min} denotes the number of codewords with Hamming weight d_min. An alternative interpretation of (16) is

lim_{N_0 → 0} P_ML / (N_0^{1/2} e^{−d_min/N_0}) = A_{d_min}/(2√(π d_min)).

The following result is proved in the Appendix.

Theorem 1: As N_0 → 0, the probability of more than w bit errors among the N − S most reliable bits of a received word is

P_{S,w} = [C^(1)_{S,w} + o(1)] N_0^{(S+w′)/2} e^{−(S+w′)/N_0},        if S ≤ w′,
P_{S,w} = [C^(2)_{S,w} + o(1)] N_0^{(S+w)/2} e^{−4Sw′/((S+w′)N_0)},   if S > w′, (17)

where w′ ≜ w + 1 and the constants C^(1)_{S,w} and C^(2)_{S,w} are given by

C^(1)_{S,w} ≜ C(N, S+w′) [Σ_{i=w′}^{S+w′} C(S+w′, i)] 2^{−(S+w′)} π^{−(S+w′)/2}, (18)

C^(2)_{S,w} ≜ C(N, w′) C(N−w′, S) π^{−(S+w)/2} 4^{−S−w} S^{−w} (w′)^{−S} (S+w′)^{S+w′}. (19)

Remarks A: As shown in the proof in the Appendix, when S ≤ w′ the dominant error events are those that involve between w′ and S + w′ bit errors, with exactly w′ errors among the N − S most reliable bits. On the other hand, when S > w′ the dominant error events are those with w′ bit errors occurring among the N − S most reliable bits and with the S least reliable bits being error-free. The above theorem precisely characterizes the asymptotic list-decoding performance of the Chase-2 algorithm [5] and of the generalized minimum distance (GMD) decoding algorithm [8]; a looser bound is given in [18].

We next determine the asymptotic behavior of order-w

reprocessing in light of assumption (i) and Theorem 1. Note that S in Theorem 1 is replaced by the redundancy length R ≜ N − K in this context and that the ML error exponent is d_min, which is a fraction of R. Since we are only interested in choosing w such that the list error exponent is at most as large as the ML error exponent (i.e., d_min), w must be less than R. Using the above observations, the asymptotic performance of order-w reprocessing can be closely approximated by ignoring the (possible) linear dependency.

Corollary 1: Assuming that the K most reliable bits are linearly independent, the list decoding error probability of order-w reprocessing takes, as N_0 → 0, the form

P_{R,w} = [C^(2)_{R,w} + o(1)] N_0^{(R+w)/2} e^{−4Rw′/((R+w′)N_0)}, (20)

where R ≜ N − K and w′ ≜ w + 1.

Remarks B: Note that R d_min/(R + d_min/4) ≈ d_min, due to the fact that d_min/4 ≪ R. Moreover, we have lim_{N_0→0} P_{R,w}/P_ML = 0 when w satisfies 4Rw′/(R + w′) ≥ d_min. Thus, roughly speaking, by choosing w ≈ ⌈d_min/4⌉, order-w reprocessing asymptotically achieves ML decoding as N_0 → 0 (in the sense that lim_{N_0→0} P_e(⌈d_min/4⌉)/P_ML = 1, where P_e(w) denotes the error probability of order-w reprocessing).
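The choice of w in Remarks B can be double-checked by substituting w′ = d_min/4 into the list error exponent of (20):

```latex
% substituting w' = d_min/4 into the list error exponent of (20)
\frac{4Rw'}{R+w'}\bigg|_{w' = d_{\min}/4}
  = \frac{4R\,(d_{\min}/4)}{R + d_{\min}/4}
  = \frac{R\,d_{\min}}{R + d_{\min}/4}
  \;\approx\; d_{\min}
  \qquad \text{since } d_{\min}/4 \ll R,
```

so the list error exponent of order-⌈d_min/4⌉ reprocessing matches the ML error exponent d_min, as claimed.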

We next determine the decision region of the preprocessing parameters so that preprocessing error events are not the dominant events when compared to the error events of the original order-(w+2) reprocessing at high SNRs. It suffices to show that the list error exponents of the two preprocessing rules are larger than that of order-(w+2) reprocessing. In order for order-(w+2) reprocessing to make sense, we assume that at least order-3 reprocessing is required to achieve asymptotically optimal decoding. Hence, we assume that the redundancy length satisfies R ≫ 1.

We first introduce two notions. (i) The error-correction radius of a soft-decision decoding algorithm A is defined as the largest value, denoted by D′_A, such that A decodes correctly the transmitted codeword c_tr whenever the received sequence r is within Euclidean distance D′_A of the bipolar version of c_tr. (ii) The list error-correction radius (LECR) D_A of a decoding algorithm A is defined as the largest value such that A properly lists c_tr as one of the candidate codewords whenever the received vector r is within Euclidean distance D_A of the bipolar version of c_tr. It is well known that D′_ML = √d_min, and that the two error-correction radii satisfy


(cf. [5])

D′_A = min{ √dmin, D_A }.   (21)

The squared LECR is a lower bound on the list error exponent, which is mathematically defined as lim_{N_0→0} (−N_0 ln P_list). In the sequel, we concentrate on analyzing the LECR D using the approach in [10]. We first study the LECR of a decoder that corrects up to w bit errors among the N − S most reliable bits (this decoder has been precisely characterized by Theorem 1; here we take a closer look at the relation between the LECR and the list error exponent). Without loss of generality, we assume that the all-zero codeword 0 is transmitted. We note that the worst-case error event resulting in a listing error occurs when w + 1 bit errors occupy the w + 1 least reliable positions among the N − S most reliable bits, i.e., positions [N − S − w, N − S]. Note that the LECR is determined by the worst-case error events rather than by the average error events. Thus, the LECR is equal to the minimum Euclidean distance between [1 1 … 1] and a vector r ∈ R^N such that

r_1 ≥ r_2 ≥ … ≥ r_{N−S−w−1} ≥ −r_{N−S−w} ≥ … ≥ −r_{N−S} ≥ r_{N−S+1} ≥ … ≥ r_N ≥ 0.

The minimum Euclidean distance is determined by the following optimization [10]:

Minimize D²(x) = Σ_{i=1}^{w+1} x_i² + S(2 − x_{w+1})²,
subject to x_1 ≥ x_2 ≥ … ≥ x_{w+1} ≥ 1.   (22)

We rewrite the expression of D²(x) as

D²(x) = Σ_{i=1}^{w} (x_i² − x_{w+1}²) + (S + w + 1)( x_{w+1} − 2S/(S + w + 1) )² + 4S(w + 1)/(S + w + 1).

It is easily seen that D²(x) is minimized to

D² = S + w + 1,                 if S < w + 1,
D² = 4S(w + 1)/(S + w + 1),     otherwise,   (23)

by choosing

x_1 = x_2 = … = x_{w+1} = max{ 1, 2S/(S + w + 1) }.   (24)
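The closed form (23)-(24) can be sanity-checked numerically. The sketch below (ours) exploits the fact that the optimum has all coordinates equal, which reduces (22) to a scan over a single scalar x ≥ 1:

```python
def d2_closed_form(S, w):
    """Eq. (23): S + w + 1 when S < w + 1, else 4S(w+1)/(S+w+1)."""
    return S + w + 1 if S < w + 1 else 4 * S * (w + 1) / (S + w + 1)

def d2_scan(S, w, steps=100000):
    """Scalar scan of (22) with x_1 = ... = x_{w+1} = x over x in [1, 4) (cf. (24))."""
    return min(
        (w + 1) * x * x + S * (2 - x) ** 2
        for x in (1 + 3 * k / steps for k in range(steps))
    )

for S, w in [(3, 5), (20, 2), (7, 7), (50, 1)]:
    assert abs(d2_scan(S, w) - d2_closed_form(S, w)) < 1e-6
```

The scan agrees with (23) in both regimes: S < w + 1 (minimizer pinned at x = 1) and S ≥ w + 1 (interior minimizer 2S/(S + w + 1)).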

Remarks C: The squared LECR (23) turns out to be precisely the list error exponent identified in Theorem 1. The above methodology also verifies the dominant error events described in Remarks A.

Following (23), the LECR of order-w reprocessing (ignoring linear dependencies) satisfies

D² = 4Rw′/(R + w′)   (25)

and the LECR of the first preprocessing rule, denoted by D_1st, satisfies

D²_1st = 4(R − t)θ/(R − t + θ),   if t + θ < R,   (26)

where t and θ are parameters to be determined. Therefore, to achieve only marginal degradation relative to order-(w+2) reprocessing at high SNRs, it suffices to require

4(R − t)θ/(R − t + θ) > 4R(w + 3)/(R + w + 3)  and  t + θ < R.   (27)

We proceed to discuss the LECR of the second preprocessing rule, denoted by D_2nd. Note that the worst-case event is that the w + 2 bit errors occur at the K-th, (K − 1)-st, …, (K − w − 1)-st most reliable positions and two bit errors occur at the (K + τ)-th and (K + τ − 1)-st most reliable positions. Following the methodology in [10], D_2nd is determined by the following optimization problem:

Minimize D²(x) = Σ_{i=1}^{w+2} x_i² + (τ − 2)(2 − x_{w+2})² + Σ_{i=w+3}^{w+4} x_i² + (R − τ)(2 − x_{w+4})²,
subject to x_1 ≥ x_2 ≥ … ≥ x_{w+4} ≥ 1.   (28)

By inspection, D²(x) is minimized by choosing x_1 = x_2 = … = x_{w+2} and x_{w+3} = x_{w+4}. We thus reformulate the above optimization problem as

Minimize D²(x) = (w + 2)x_1² + (τ − 2)(2 − x_1)² + 2x_2² + (R − τ)(2 − x_2)²
             = (w + τ)( x_1 − 2(τ − 2)/(w + τ) )² + (R − τ + 2)( x_2 − 2(R − τ)/(R − τ + 2) )² + const,
subject to x_1 ≥ x_2 ≥ 1.

If

2(τ − 2)/(w + τ) < 2(R − τ)/(R − τ + 2),   (29)

then D²(x) is minimized when x_1 = x_2 = x and, furthermore,

D²(x) = (R + w + 2)( x − 2(R − 2)/(R + w + 2) )² + 4(R − 2)(w + 4)/(R + w + 2).

Note that w ≤ dmin/4 < R/4, and consequently 2(R − 2)/(R + w + 2) > 1 (recall that R ≫ 1). Thus, D_2nd is determined by

D²_2nd = 4(R − 2)(w + 4)/(R + w + 2)   (30)

provided that (29) holds. The underlying conditions w < R/4 and R ≫ 1 imply

4(R − 2)(w + 4)/(R + w + 2) > 4R(w + 3)/(R + w + 3)  ⟺  (w + 3)(w + 4) < R(R − 2)/2,   (31)

where 4R(w + 3)/(R + w + 3) denotes the list error exponent of the original order-(w+2) reprocessing. Therefore, if the requirements in (27) and (29) are satisfied, the proposed order-w reprocessing with preprocessing performs asymptotically identically to the original order-(w+2) reprocessing as N_0 → 0.

The above analysis has established the performance criteria

for the proposed order-w reprocessing with preprocessing; we now proceed to account for computational criteria. Since the dominant computational weight is carried by iterative recoding, we focus on analyzing the number of candidate codewords (or, equivalently, the number of recodings). Based on assumption (ii), i.e., the assumption that redundancy error vectors e_2 are uniformly distributed over [1, t], the proposed order-w reprocessing with the first preprocessing rule requires the following mean number of recoding operations:

Σ_{i=0}^{w+1} C(K, i) 2^{−t} Σ_{j=0}^{θ−i} C(t, j).   (32)

If we choose

t = R/2  and  θ = 2(w + 3),   (33)

then (27) is satisfied and the number of recodings in (32) is much smaller than the number Σ_{i=0}^{w} C(K, i) required by order-w reprocessing


(here we do not take into account extremely high rate codes). Therefore, we can safely set (1/2) Σ_{i=0}^{w} C(K, i) as the bound on the

maximum number of recodings.

For the second preprocessing rule, the mean number of additional recodings is C(K, w + 2) 2^{−τ}(τ + 1), based on assumption (ii) that redundancy error vectors e_2 are uniformly distributed over [1, τ]. Thus, it suffices to choose

τ = 3 log_2(K/w)   (34)

to accommodate (29) while achieving C(K, w + 2) 2^{−τ}(τ + 1) ≪ Σ_{i=0}^{w} C(K, i) (again, we do not take into account extremely high rate codes). Note that we can again safely set (1/2) Σ_{i=0}^{w} C(K, i) as the bound on the maximum number of recodings.

We summarize the above analysis in the following theorem.

Theorem 2: By choosing t, θ and τ as

t = R/2,  θ = 2(w + 3),  τ = 3 log_2(K/w),   (35)

the proposed order-w reprocessing with preprocessing generates fewer candidate codewords than the original order-w reprocessing, while its decoding error probability is asymptotically (as N_0 → 0) the same as that of order-(w+2) reprocessing; more specifically,

lim_{N_0→0} P_e^{(2)}(w) / P_e^{(0)}(w + 2) = 1,   (36)

where P_e^{(2)}(w) denotes the decoding error probability of the proposed order-w reprocessing with preprocessing, and P_e^{(0)}(w) denotes the decoding error probability of the original order-w reprocessing.

Remarks D: The choice of parameters in the above theorem is not necessarily optimal, i.e., it does not necessarily minimize the number of candidate codewords subject to the constraints (27) and (29). The proposed scheme is best suited to high rate codes, say K/N ≥ 1/2.
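To make the computational criteria concrete, the following sketch (ours; it assumes the binomial form of (32) and the parameter choices of (35)) tallies the mean number of recodings admitted by each rule for example code parameters and checks that both stay below the recoding budget of plain order-w reprocessing:

```python
from math import comb, log2

def rule1_mean_recodings(K, R, w):
    """Mean recodings under rule 1, eq. (32), with t = R/2 and theta = 2(w+3) from (35)."""
    t, theta = R // 2, 2 * (w + 3)
    return sum(
        comb(K, i) * sum(comb(t, j) for j in range(theta - i + 1)) / 2 ** t
        for i in range(w + 2)          # TEP weights 0 .. w+1
    )

def rule2_mean_extra(K, w):
    """Mean additional weight-(w+2) recodings under rule 2: C(K, w+2) 2^{-tau} (tau+1)."""
    tau = 3 * log2(K / w)              # eq. (35)
    return comb(K, w + 2) * (tau + 1) / 2 ** tau

# Example (ours): the (256, 147) extended BCH code, K = 147, R = 109, with w = 4.
K, R, w = 147, 109, 4
budget = sum(comb(K, i) for i in range(w + 1))   # recodings of plain order-w reprocessing
assert rule1_mean_recodings(K, R, w) < budget / 2
assert rule2_mean_extra(K, w) < budget
```

For these parameters the rule-1 mean is a few hundred recodings, far below the roughly 1.9×10^7 test error patterns of plain order-4 reprocessing, consistent with Theorem 2.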

IV. MULTI-BASIS ORDER REPROCESSING

A. Multi-basis scheme

A general multi-basis order reprocessing scheme was presented in [11]. In this section we investigate a special multi-basis order reprocessing scheme for high rate codes, i.e., K/N ≥ 1/2. Consider the scenario in Figure 1, which depicts the distribution of errors after bit hard-decision is applied and the bits are reordered in terms of the MRB and the corresponding redundancy part following the order in (1). In the low SNR regime a certain number, say β (where β ≪ R := N − K), of the least reliable bits in the MRB is likely to be almost as reliable as the β most reliable bits in the redundancy part (recall that the reliabilities are sorted in decreasing order in both the MRB and the corresponding redundancy part). In the example of Figure 1, order-5 reprocessing must be employed to correct the errors if the original order-w reprocessing is applied. However, if the interchange of the β = 7 bits between the two underlined parts results in another basis (i.e., if the columns u_1, …, u_{K−β}, p_1, …, p_β are linearly independent), then we can correct the errors by employing only order-3 reprocessing

[Figure 1 shows the hard-decision error pattern y ⊕ c_tr after reordering: a binary vector whose ones mark the error positions, with the β least reliable MRB positions and the β most reliable redundancy positions underlined.]

Fig. 1. An example of errors occurring after the application of bit hard-decision: the number of errors among the least reliable bits of the MRB is more than the number of errors among an equal number of the most reliable bits of the redundancy part.

on this second basis. This motivates us to construct a second basis by rearranging the bit positions in the order

u_1, …, u_{K−β}, p_1, …, p_R, u_{K−β+1}, …, u_K   (37)

(recall that G = [I_K P]) and then applying the greedy algorithm (as described in Section II).

Let Ḡ, ȳ, ᾱ be the permutations of G, y, α corresponding to the second basis B_2. Some properties of this second basis are indicated in the following proposition.

Proposition 1: (i) If β < dmin, then the second basis does not involve indices corresponding to {α_i}_{i=K−β+1}^{K};

(ii) if [u_1 … u_{K−β} p_1 … p_R] is full rank, then the bit reliabilities ᾱ_1 ≥ ᾱ_2 ≥ … ≥ ᾱ_K associated with the second basis are the most reliable, in the sense that

ᾱ_i ≥ α̃_i,  i = 1, 2, …, K,

where α̃_1 ≥ α̃_2 ≥ … ≥ α̃_K denote the reliabilities associated with any other basis contained in {u_1, …, u_{K−β}, p_1, …, p_R}.

Proof: (i) The first property is due to the fact that at most R − dmin + 1 linear dependencies can occur during the construction of the MRB (otherwise a nonzero codeword of (Hamming) weight less than dmin could be constructed) [14].

(ii) We prove the second property by contradiction. Let {ḡ_i}_{i=1}^{K} and {g̃_i}_{i=1}^{K} be the columns of the generator matrix corresponding to the obtained basis and to the other basis, respectively. Suppose l is the largest index such that ᾱ_l < α̃_l. We observe that g̃_1, g̃_2, …, g̃_l must all be linearly dependent on ḡ_1, ḡ_2, …, ḡ_{l−1}; otherwise, if some g̃_i (1 ≤ i ≤ l) were not, then g̃_i would have been included in the obtained basis by the greedy search procedure of Section II. Consequently,

Span{g̃_1, g̃_2, …, g̃_l} ⊆ Span{ḡ_1, ḡ_2, …, ḡ_{l−1}},

which is obviously false, since g̃_1, g̃_2, …, g̃_l are linearly independent. This concludes the proof of part (ii). □

Remarks E: An alternative way to generate a second basis is to apply the greedy search through the following ordering:

p_1, …, p_β, u_1, …, u_K.   (38)

Note that [p_1 … p_β u_1 … u_K] is full rank and thus the greedy search is guaranteed to produce a basis. This ordering has been used in [13]. Our simulations show that, when two-basis order-w reprocessing is exploited, the proposed ordering


(37) results in (slightly) superior performance to the ordering (38); however, a theoretical justification appears particularly complex and remains open.

We note that in the high SNR regime the bit information in the band [K + 1, K + β] is almost as reliable as that of the entire MRB [1, K], rather than that of [K − β + 1, K] as in the low SNR regime. To this end, we extend the above two-basis strategy in the following manner (for simplicity, we assume that K is divisible by β, say K = aβ): we generate a additional bases B_i, i = 1, 2, …, a, by applying the greedy search to the following sequential orders, respectively:

(1): u_1, …, u_{K−β}, p_1, …, p_R, u_{K−β+1}, …, u_K
(2): u_1, …, u_{K−2β}, u_{K−β+1}, …, u_K, p_1, …, p_R, u_{K−2β+1}, …, u_{K−β}
(3): u_1, …, u_{K−3β}, u_{K−2β+1}, …, u_K, p_1, …, p_R, u_{K−3β+1}, …, u_{K−2β}
 ⋮
(a−1): u_1, …, u_β, u_{2β+1}, …, u_K, p_1, …, p_R, u_{β+1}, …, u_{2β}
(a): u_{β+1}, …, u_K, p_1, …, p_R, u_1, …, u_β

Note that in the above orderings, regardless of i, the first K − β vectors are already diagonalized. Thus, each greedy search only takes place over the last β vector choices, and consequently the construction of all a additional bases has complexity equal to that of the construction of the MRB.
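The orderings above can be generated mechanically. The sketch below (ours) indexes the u-columns as 0..K−1 and the p-columns as K..K+R−1 and emits the a seed orderings (1)..(a) as column permutations:

```python
def basis_orderings(K, R, beta):
    """Column orderings (1)..(a) used to seed the greedy search, where K = a*beta.
    u-columns are indices 0..K-1 and p-columns are K..K+R-1 (G = [I_K P])."""
    assert K % beta == 0
    a = K // beta
    u, p = list(range(K)), list(range(K, K + R))
    orders = []
    for i in range(1, a + 1):
        head = u[: K - i * beta]                     # u_1 .. u_{K - i*beta}
        mid = u[K - (i - 1) * beta:]                 # u_{K - (i-1)*beta + 1} .. u_K
        tail = u[K - i * beta: K - (i - 1) * beta]   # the beta displaced u-columns
        orders.append(head + mid + p + tail)
    return orders

orders = basis_orderings(K=12, R=6, beta=3)
assert all(sorted(o) == list(range(18)) for o in orders)  # each order is a permutation
assert orders[0][:9] == list(range(9))                    # ordering (1): u_1..u_{K-beta} first
```

For each i the first K − β entries are unit-vector columns, so only the final β greedy choices require elimination work, matching the complexity claim above.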

B. Asymptotic performance analysis

In this subsection we analyze the LECR of the proposed multi-basis scheme in the high SNR regime. To simplify the analysis, we assume that N, K, w are sufficiently large, so that we can reasonably ignore (i) divisibility issues, so that

K = aβ,  w = af,

where β is the number of MRB extension bits (i.e., the most reliable bits from the redundancy part of the MRB), and (ii) linear dependency issues (cf. [17], p. 455), so that the MRB is exactly composed of the K most reliable bits and the i-th basis B_i is exactly composed of

u_1, …, u_{K−iβ}, u_{K−(i−1)β+1}, …, u_K, p_1, …, p_β

(recall that G = [I_K P]).

We observe that the proposed multi-basis order-w reprocessing scheme fails (to list the transmitted codeword as one of the candidate codewords) only if at least w + 1 bit errors occur in all of the a + 1 bases. In an alternative understanding, a list error occurs when there exist at least w + 1 errors over each of the a + 1 combinations of a out of the a + 1 regions indexed by [iβ + 1, (i + 1)β], i = 0, 1, 2, …, a. This indicates that at least f + 1 errors occur in at least two regions out of the a + 1 regions (otherwise, there would be a regions, each of which contains at most f errors, and thus the basis composed of these a regions would contain at most w errors). Apart from the one region that contains at least f + 1 errors, the remaining a regions contain at least w + 1 errors. Consequently, there are at least w + f + 2 errors over all a + 1 regions. Thus, the worst-case error event is determined by f bit errors in the least reliable positions in each of the regions [iβ + 1, (i + 1)β], i = 0, 1, …, a − 2, and f + 1 bit errors in the least reliable positions in each of the regions [iβ + 1, (i + 1)β], i = a − 1, a. To simplify the analysis, we consider a lower-bound case where f errors occur in each of the regions [iβ + 1, (i + 1)β], i = 0, 1, …, a. Thus, following the methodology in [10], the LECR is determined by the optimization

Minimize D²(x) = Σ_{i=1}^{w+f} x_i² + Σ_{j=1}^{a} (β − f)(2 − x_{jf})² + (R − β)(2 − x_{w+f})²,
subject to x_1 ≥ x_2 ≥ … ≥ x_{w+f} ≥ 1,   (39)

where the subscript "jf" stands for the product of j and f. By inspection, we determine that the optimization problem

Minimize D_1²(x) = Σ_{i=1}^{w} x_i² + Σ_{j=1}^{a} (β − f)(2 − x_{jf})²,
subject to x_1 ≥ x_2 ≥ … ≥ x_w ≥ 1,

is solved with

x_1 = x_2 = … = x_w = 2(β − f)/β = 2(K − w)/K

and results in

D_1² = 4(K − w)w/K.

Note that by our assumptions

2(K − w)/K ≥ 2(K − dmin/4)/K ≥ 2(K − R/4)/K ≥ 2(K − K/4)/K > 1.

Similarly, the optimization problem

Minimize D_2²(x) = Σ_{i=w+1}^{w+f} x_i² + (R − β)(2 − x_{w+f})²,
subject to x_{w+1} ≥ x_{w+2} ≥ … ≥ x_{w+f} ≥ 1,

is solved with

x_{w+1} = x_{w+2} = … = x_{w+f} = max{ 1, 2(R − β)/(R − β + f) }

and results in

D_2² = R − β + f,               if 2(R − β)/(R − β + f) < 1,
D_2² = 4f(R − β)/(R − β + f),   otherwise.   (40)

We are interested in maximizing the radius D² defined in (39) among all choices of 0 ≤ β ≤ R, given N, K, w (note that the parameters a, f are functions of β). To this end, we first consider how to maximize D_2². For notational simplicity, let λ := w/K. We then simplify the condition for the first case of (40) to

2(R − β)/(R − β + f) ≤ 1  ⟺  β ≥ R/(1 + λ).

Consequently, we obtain

max_{β ≥ R/(1+λ)} {D_2²(β)} = (R − β + f) |_{β = R/(1+λ)} = 2Rλ/(1 + λ).   (41)

Note that the expression for the second case in (40) is a concave function of β and thus has a unique maximal point. To find it, we take the derivative with respect to β and set it to zero:

0 = (1 − λ)(β² − Rβ) − (2β − R)((1 − λ)β − R) = −β²(1 − λ) + 2βR − R².

Solving, we obtain

β = R/(1 + √λ),


which consequently results in the maximum radius

max_{β ≤ R/(1+λ)} {D_2²(β)} = 4f(R − β)/(R − β + f) |_{β = R/(1+√λ)} = 4λR/(1 + √λ)².   (42)

Combining (41) and (42), D_2² achieves the maximum value

max_{β > 0} {D_2²(β)} = max{ 2Rλ/(1 + λ), 4λR/(1 + √λ)² } = 4λR/(1 + √λ)²,

with the maximum attained at β = R/(1 + √λ).

We next consider the combination of D_1² and D_2². The case when

2(R − β)/(R − β + f) ≤ 2(β − f)/β

requires, in conjunction with f = (w/K)β, that

β ≥ KR/(2K − w) = R/(2 − λ).   (43)

Note that D² = D_1² + D_2². Since D_1² is independent of the choice of β and the maximal point β = R/(1 + √λ) is within the legitimate range defined by (43), D²(R/(1 + √λ)) reaches the maximum value

max_{β ≥ R/(2−λ)} {D²(β)} = D²( R/(1 + √λ) ) = 4λR/(1 + √λ)² + 4Kλ(1 − λ).   (44)

On the other hand, when β ≤ R/(2 − λ), D_1² and D_2² are combined into a single degree of freedom; that is, D² defined in (39) is solved with

x_1 = x_2 = … = x_{w+f} = 2(N − w − β)/(N − β + f)

and has minimum value

D²(β) = 4(N − w − β)(w + f)/(N − β + f) = 4λ(N − λK − β)(K + β)/(N − (1 − λ)β).   (45)

Note that (45) is a concave function of β and thus its derivative vanishes at the unique maximal point.

(45)Note that (45) is a convex function and thus has derivativeequal to zero at the unique maximal point. To obtain themaximal point, we solve

0 = (N $ 'K $ #)(K + #)(1 $ ') + (N $ (1 $ ')#)(N $ 'K $ #$ (K + #))= #2(1 $ ') $ 2#N +

%N2 $ 2'KN $ '(1 $ ')K2

&

and conclude that the maximal point is

# =N

1 +-'$

0'K. (46)

In order to show that the expression (46) lies beyond the range [0, R/(2 − λ)], we proceed to establish a tight upper bound on λ in light of the following established bound on the minimum distance (cf. [3], p. 394).

Proposition 2: For binary codes of fixed rate K/N, the ratio dmin/N asymptotically satisfies

dmin/N ≤ 2ρ(1 − ρ)   (47)

for any ρ satisfying ρ log_2(1/ρ) + (1 − ρ) log_2(1/(1 − ρ)) = 1 − K/N.

Since K/N ≥ 0.5, we obtain ρ(1 − ρ) < 0.1 and dmin < 0.2N. Thus

λ = w/K ≤ dmin/(4K) < N/(20K) ≤ 0.1.

Clearly, for λ in the above range, we have

R/(2 − λ) < N/(1 + √λ) − √λ K.

Thus, (45) is monotonic within the region [0, R/(2 − λ)] and its maximum is achieved at the boundary point β = R/(2 − λ), i.e.,

max_{β ≤ R/(2−λ)} {D²(β)} ≤ D²( R/(2 − λ) ) = 4Kλ(1 − λ) + 4Rλ(1 − λ)/(2 − λ).   (48)

Finally, the overall maximum radius is obtained by choosing the larger of the values in (44) and (48). Note that D²(β) is a continuous function (though it cannot be expressed by a single formula); thus, the value D²(R/(2 − λ)) achieved over the decision region [0, R/(2 − λ)] equals the value achieved over the decision region [R/(2 − λ), R/(1 + λ)]. Hence (44) is larger than (48). The following theorem summarizes our analysis.

Theorem 3: The squared LECR D² of the proposed multi-basis scheme is determined by the optimization problem defined in (39) and has the following solution:

D²(β) = 4Kλ(1 − λ) + R − (1 − λ)β,                 if β > R/(1 + λ),
D²(β) = 4Kλ(1 − λ) + 4λβ(R − β)/(R − (1 − λ)β),   if R/(1 + λ) ≥ β > R/(2 − λ),
D²(β) = 4λ(N − λK − β)(K + β)/(N − (1 − λ)β),     otherwise,   (49)

where R := N − K and λ := w/K. Moreover, D²(β) has a unique maximal point and achieves the maximum value

max_β {D²(β)} = 4λR/(1 + √λ)² + 4Kλ(1 − λ)   (50)

when

β = R/(1 + √λ).   (51)
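The piecewise formula (49) and the maximizer (50)-(51) of Theorem 3 can be checked directly. The sketch below (ours) implements (49) and verifies that the value at β = R/(1 + √λ) matches (50) and dominates a coarse scan over other choices of β:

```python
import math

def d2(beta, N, K, w):
    """Squared LECR of the multi-basis scheme, piecewise formula (49)."""
    R, lam = N - K, w / K
    base = 4 * K * lam * (1 - lam)
    if beta > R / (1 + lam):
        return base + R - (1 - lam) * beta
    if beta > R / (2 - lam):
        return base + 4 * lam * beta * (R - beta) / (R - (1 - lam) * beta)
    return 4 * lam * (N - lam * K - beta) * (K + beta) / (N - (1 - lam) * beta)

# Example from the text: the (1024, 512, 112) concatenated code with w = 18.
N, K, w = 1024, 512, 18
R, lam = N - K, w / K
beta_star = R / (1 + math.sqrt(lam))                                   # eq. (51)
peak = 4 * lam * R / (1 + math.sqrt(lam)) ** 2 + 4 * K * lam * (1 - lam)  # eq. (50)
assert abs(d2(beta_star, N, K, w) - peak) < 1e-9
assert all(d2(b, N, K, w) <= peak + 1e-9 for b in range(1, R + 1))
```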

Consider soft-decision decoding of the (1024, 512, 112) concatenated code. This code is obtained by concatenating a (128, 73, 56) Reed-Solomon code with the (8, 7, 2) binary parity code and inserting one additional information bit [4]. In order to achieve bounded-distance soft-decision decoding, i.e., D′ = √dmin, we need

4(1024 − 512)(w + 1)/(1024 − 512 + w + 1) ≥ 112

for order-w reprocessing. Solving, we obtain w ≥ 29, which implies that order-29 reprocessing is necessary (and also sufficient) to achieve asymptotically optimal decoding. On the other hand, for the multi-basis scheme, we have

4 × 512λ/(1 + √λ)² + 4 × 512λ(1 − λ) ≥ 112,   with λ = w/512.

Solving, we obtain w ≥ 18. Therefore, multi-basis order-18 reprocessing suffices to achieve the maximum decoding radius. As can easily be seen, the resulting computational savings over single-basis order reprocessing can be tremendous. Figure 2 depicts the squared LECR D² as a function of β for fixed w = 18.
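The two thresholds above follow from simple arithmetic; the sketch below (ours) reproduces the single-basis value w = 29 and confirms that w = 18 already meets the bounded-distance target for the multi-basis radius (50):

```python
import math

def single_basis_radius2(R, w):
    """Squared LECR 4R(w+1)/(R+w+1) of single-basis order-w reprocessing."""
    return 4 * R * (w + 1) / (R + w + 1)

def multi_basis_radius2(K, R, w):
    """Maximum squared LECR of the multi-basis scheme, eq. (50)."""
    lam = w / K
    return 4 * lam * R / (1 + math.sqrt(lam)) ** 2 + 4 * K * lam * (1 - lam)

K = R = 512
dmin = 112
w_single = next(w for w in range(R) if single_basis_radius2(R, w) >= dmin)
assert w_single == 29                              # order-29 needed for a single basis
assert multi_basis_radius2(K, R, 18) >= dmin       # order-18 suffices with multiple bases
```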


[Figure 2: plot of the squared LECR D² (0 to about 140) versus β (0 to 600).]

Fig. 2. The squared LECR of the (1024, 512, 112) concatenated code as a function of β for fixed w = 18.

V. SIMULATIONS

We first briefly comment on the appropriateness of the second assumption used for the analysis of the preprocessing rules in Section III.C, i.e., the assumption that a redundancy error vector e_2 follows a uniform distribution over [1, t] (or, equivalently, Pr((e_2)_{[1,t]}) = 2^{−t}). Essentially, our interest lies in the mean number of recodings rather than in the exact distribution, where the mean number of recodings for order-w reprocessing using the first preprocessing rule was determined in (32). We simulate order-5 reprocessing on the (192, 96) concatenated RS code with θ = 9, t = 20, and order-4 reprocessing on the (256, 147) extended BCH code with θ = 11, t = 28, at various signal-to-noise ratios. Table II shows results averaged over 5000 samples. As can be seen, the estimated number of TEPs obtained from (32) is close to the simulated averages. Thus, (32) is a good approximation of the mean number of recodings and the uniform distribution is a plausible assumption.

Code/SNR     Estimate    0.5 dB      1.0 dB      1.5 dB      2.0 dB      2.5 dB      3.0 dB
(192, 96)    4.39×10^5   7.01×10^5   7.15×10^5   7.25×10^5   7.39×10^5   7.52×10^5   7.62×10^5
(256, 147)   1.27×10^5   1.32×10^5   1.32×10^5   1.32×10^5   1.32×10^5   1.32×10^5   1.32×10^5

TABLE II. Comparison between the estimated and simulated average numbers of recodings at various SNRs.

We proceed to consider the soft-decision decoding of the (256, 147, 30) extended BCH code over the AWGN channel. Note that for this code there exists an efficient algebraic bounded-distance decoder (cf. [3]) that successfully corrects up to 14 bit errors; thus, the Chase-2 algorithm is applicable. This algorithm systematically flips the 15 least reliable bits, each time applying algebraic decoding to correct up to 14

[Figure 3: block-error-rate (10^0 down to 10^−6) versus SNR (0 to 3.5 dB) for hard-decision decoding, the Chase-2 algorithm, the original order-3, order-4, and order-5 reprocessing, the proposed 2-B order-3 and 2-B order-4 reprocessing, and the ML lower bound.]

Fig. 3. Performance comparison in decoding the (256, 147, 30) extended BCH code over the AWGN channel.

bit errors among the remaining bits. Clearly, the Chase-2 algorithm requires 2^15 = 32768 algebraic decodings. Its performance in terms of block-error-rate is plotted in Figure 3 and can be seen to be far worse than ML performance, although it is well known that this algorithm is asymptotically optimal in the high SNR regime. If, on the other hand, we want to see how many least reliable bits need to be systematically flipped in order to achieve near-optimal performance at a block-error-rate of 10^−5, we can compute the list decoding error probability through a two-fold integration (as addressed in the Appendix) and require the value of this probability to be roughly a quarter of the ML lower bound value (the ML lower bound is obtained by tracking the fraction of cases where the codeword picked by the proposed two-basis (2-B) order-4 reprocessing turns out to be more likely than the transmitted codeword). Through numerical computation, this performance qualification is achieved by choosing the number of flipped bits to be 60; i.e., in order to achieve near-optimal performance at a block-error-rate of 10^−5, the computational cost of the Chase-2 algorithm would be on the order of 2^60 ≈ 10^18 algebraic decodings.

For comparison purposes, we simulated the proposed 2-B order-w (w = 3, 4) reprocessing with the number of STEPs set to Σ_{i=0}^{w} C(K, i) (i.e., 5.3×10^5 and 1.9×10^7 for w = 3 and 4, respectively). When w = 3, the other parameters are chosen to be θ = 12, t = 32, τ = 19, β = 27. Figure 3 shows that the proposed 2-B order-3 reprocessing outperforms the original order-5 reprocessing. The performance data for the latter algorithm is computed through the two-fold integration


SNR (dB)           0.5      1.0      1.5      2.0      2.5      3.0
STEPs (max)        530,000  530,000  530,000  530,000  530,000  530,000
STEPs (mean)       433,303  298,859  135,249  37,915   5,955    543
Codewords (max)    140,477  139,307  139,693  140,651  143,690  141,432
Codewords (mean)   87,590   66,300   32,733   10,166   1,878    244

TABLE III. Statistics of the computational cost of the proposed 2-B order-3 reprocessing in decoding the (256, 147, 30) extended BCH code.

SNR (dB)       0.0   0.5   1.0   1.5   2.0   2.5   3.0
2-B order-4    1000  1000  1000  1000  1000  1000  500
Lower bound    995   991   978   951   927   858   370

TABLE IV. Recorded numbers of decoding errors and ML lower-bound decoding errors obtained by simulating the proposed 2-B order-4 reprocessing for the (256, 147, 30) extended BCH code.

determined by the probability of the event that more than 5 bit errors occur among the K most reliable bits. This is clearly a lower bound due to two facts: (i) the MRB is usually less reliable than the set of the K most reliable bits; (ii) what we compute is the list error probability, which is smaller than the actual decoding error probability. Note that the plotted curve for the original order-w reprocessing does not start at 0 dB, due to the fact that the list error probability represents the decoding error probability only if it is much larger than the ML error probability (in practice, the latter is replaced by an ML lower bound probability), as indicated by (15). Indeed, the proposed 2-B order-3 reprocessing provides substantial practical value: as illustrated in Table III, it performs close to ML decoding while requiring a very reasonable computational cost.

When w = 4, the other parameters are chosen to be θ = 12, t = 32, τ = 19, β = 30. Figure 3 shows that the proposed 2-B order-4 reprocessing practically achieves ML decoding, as further demonstrated by Table IV. Table V shows statistics of the computational cost. Tables III and V together indicate that, in practice, the parameters of our modified reprocessing algorithms can be chosen so that the number of codewords is much smaller than the number of STEPs.

It is demonstrated in [20] that the "Box and Match" method successfully decodes the (192, 96, 28) concatenated Reed-Solomon code. For comparison purposes, we also simulate this code with the proposed scheme. Assume that the maximum number of STEPs is set to 10^6. We accordingly set θ = 12, t = 32, τ = 19, β = 25. The block-error-rate as a function of SNR is plotted in Figure 4. The proposed scheme performs so close to the ML lower bound that the gap between them is invisible. The exact values are shown in Table VI and the statistics of the computational cost are listed in Table VII. As can be seen, the maximum number of candidate codewords for the proposed scheme is less than that of order-4 reprocessing (Σ_{i=0}^{4} C(96, i) = 3.5×10^6), while the proposed scheme clearly outperforms order-5 reprocessing, as shown in Figure 4. Our simulation results also indicate

SNR (dB)           0.5        1.0        1.5        2.0
STEPs (max)        1.90×10^7  1.90×10^7  1.90×10^7  1.90×10^7
STEPs (mean)       1.11×10^7  5.54×10^6  1.54×10^6  3.10×10^5
Codewords (max)    3.56×10^6  3.54×10^6  3.55×10^6  3.47×10^6
Codewords (mean)   1.99×10^6  1.03×10^6  3.38×10^5  65,415

TABLE V. Statistics of the computational cost of the proposed 2-B order-4 reprocessing in decoding the (256, 147, 30) extended BCH code.

[Figure 4: block-error-rate (10^0 down to 10^−6) versus SNR (0 to 3.5 dB) for uncoded transmission, the original order-4 and order-5 reprocessing, the proposed two-basis order-4 reprocessing, and the ML lower bound.]

Fig. 4. Performance comparison for decoding the (192, 96, 28) concatenated Reed-Solomon code over the AWGN channel.

that the proposed scheme evidently outperforms the "Box and Match" method, which searches over a maximum of 4.5×10^6 candidate codewords (in addition to requiring a large memory to store Σ_{i=0}^{4} C(96, i) = 3.5×10^6 codewords in 2^21 ≈ 2×10^6 boxes) to achieve near-optimal performance [20].

VI. CONCLUSIONS AND DISCUSSIONS

In this paper, we have investigated the soft-decision decoding of binary linear block codes. We incorporated two preprocessing rules into order-w reprocessing. The first rule discards a test error pattern (TEP) if the weight of its partial coset, spanning the most reliable (information) basis (MRB) and the t most reliable bits of the redundancy part, is above a predetermined threshold θ. The second preprocessing rule discards TEPs of weight w + 2 that exhibit more than one bit error over the τ most reliable bits of the redundancy part. We showed that by appropriately choosing the parameters


SNR (dB)       0.0   0.5   1.0   1.5   2.0   2.5   3.0
2-B order-4    1000  1000  1000  1000  1000  1000  1000
Lower bound    994   988   985   973   970   948   915

TABLE VI. Recorded numbers of decoding errors and ML lower-bound decoding errors obtained by simulating the proposed 2-B order-4 reprocessing for the (192, 96, 28) concatenated Reed-Solomon code.

SNR (dB)           0.0      0.5      1.0      1.5      2.0      2.5      3.0
STEPs (max)        10^6     10^6     10^6     10^6     10^6     10^6     10^6
STEPs (mean)       684,995  460,568  234,071  81,280   18,861   2,757    293
Codewords (max)    567,191  575,345  654,822  577,198  741,274  795,554  679,882
Codewords (mean)   122,925  86,129   46,880   18,308   5,087    1,046    211

TABLE VII. Statistics of the computational cost of the proposed 2-B order-4 reprocessing in decoding the (192, 96, 28) concatenated Reed-Solomon code.

t, θ and τ, the proposed order-w reprocessing with preprocessing requires complexity comparable to that of order-w reprocessing but asymptotically achieves the performance of the original order-(w+2) reprocessing.

We employed iterative recoding on a second basis to compensate for the inefficiency of the MRB at practical SNRs, and extended this approach systematically to a multi-basis order-w reprocessing scheme for the high SNR regime. We showed that the use of multiple bases significantly enlarges the list error-correction radius, a commonly used performance measure at high SNRs. Our extensive simulations show that at practical SNRs the proposed algorithm is more efficient than the original order-w reprocessing algorithm.

Before closing, we comment on the applicability of the methodology of iterative recoding. Iterative recoding has been shown in this paper to be superior (in terms of achieving near-ML performance at practical SNRs) to the methodology of iterative bounded-distance decoding (for the algebraic codes of interest). The major concern regarding iterative recoding approaches, however, lies in the fact that they are not amenable to VLSI design, due to the tremendous hardware complexity associated with the construction of the MRB, including the sorting of the generator matrix (alternatively, the parity check matrix) and the on-the-fly binary Gaussian elimination process (for some discussion of the complexity of VLSI implementations of binary Gaussian elimination, refer to [6], [15]). Thus, it is imperative that future work investigate the complexity of VLSI implementations of iterative recoding approaches and/or the possible existence of special classes of codes for which iterative recoding may be more attractive. Further study of the complexity of software implementations of iterative recoding approaches is also important, because it would make them even more attractive for off-line applications, such as deep-space communications.

VII. APPENDIX: PROOF OF THEOREM 1

We will use the notation o(1) to represent a term that diminishes as N_0 → 0, i.e., lim_{N_0→0} o(1) = 0, and O(1) to mean lim_{N_0→0} O(1) = a, where a denotes some nonzero constant. We first prove a series of lemmas. Lemmas 1–3 are basic calculus results that can also be found in [2], but we include them here for completeness.

Lemma 1: For $x > 0$,
$$\int_x^{\infty} e^{-u^2/N_0}\,du = \left(\frac{1}{2x} + o(1)\right) N_0\, e^{-x^2/N_0} \tag{52}$$
as $N_0 \to 0$.

Lemma 2: For $0 < x < y$,
$$\int_x^{y} e^{-u^2/N_0}\,du = \left(\frac{1}{2x} + o(1)\right) N_0\, e^{-x^2/N_0} \tag{53}$$
as $N_0 \to 0$.

Lemma 3: Let $f(\cdot)$ be smooth and bounded in $[x, y]$ ($x > 0$) and independent of $N_0$. Furthermore, assume $f(x) \neq 0$. Then,
$$\int_x^{y} f(u)\, e^{-u^2/N_0}\,du = \left(\frac{f(x)}{2x} + o(1)\right) N_0\, e^{-x^2/N_0} \tag{54}$$
as $N_0 \to 0$.
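Lemmas 1–3 are endpoint Laplace-type approximations and can be spot-checked numerically. The sketch below is an editorial illustration (not part of the original proof): it evaluates the left side of (52) exactly via the complementary error function and compares it with the right side for decreasing $N_0$.

```python
import math

def lhs(x, n0):
    # Exact value of the integral in (52):
    # int_x^inf e^{-u^2/N0} du = sqrt(N0) * (sqrt(pi)/2) * erfc(x / sqrt(N0))
    return math.sqrt(n0) * (math.sqrt(math.pi) / 2.0) * math.erfc(x / math.sqrt(n0))

def rhs(x, n0):
    # Leading-order approximation (N0 / (2x)) * e^{-x^2/N0} from Lemma 1
    return n0 / (2.0 * x) * math.exp(-x * x / n0)

x = 1.0
ratios = [lhs(x, n0) / rhs(x, n0) for n0 in (0.2, 0.1, 0.05, 0.02)]
# The ratio approaches 1 as N0 -> 0, consistent with the o(1) term in (52).
```

The same scaling argument (substituting $u \to u\sqrt{c}$) justifies the variants of these lemmas with $e^{-cu^2/N_0}$ used later in the proof.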

The subsequent lemmas carry out this line of analysis for the following class of functions:
$$q_{n,w,k} \triangleq \int_1^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-(w+1)x^2/N_0} \left(\int_1^{x} e^{-v^2/N_0}\,dv\right)^{k} dx, \tag{55}$$
where $n > 0$ and $w, k \geq 0$. In the applications of interest, $n$, $w$, $k$ are integers, while the following lemmas are valid for any real numbers.

The following lemma is an extension of the work in [19], where $w = 0$ is studied.

Lemma 4: If $n > w' \triangleq w + 1$, then
$$q_{n,w,0} = \left[\pi^{1/2}\, 4^{-n-w}\, n^{-w}\, w'^{-n}\, (n+w')^{n+w-1/2} + o(1)\right] N_0^{n+1/2}\, e^{-\frac{4nw'}{(n+w')N_0}} \tag{56}$$
as $N_0 \to 0$.


Proof: Note that there exists a small positive number $\epsilon$ such that
$$\min\{w' + n(1-\epsilon)^2,\; w'(2-\epsilon)^2\} > \frac{4nw'}{n+w'}, \tag{57}$$
$$1 + \epsilon < \frac{2n}{n+w'} < 2 - \epsilon. \tag{58}$$
In fact, the above two inequalities hold true for $\epsilon = 0$ and thus for sufficiently small positive numbers. We decompose the outer integration interval $[1, \infty)$ in (55) into three disjoint intervals: $[1, 1+\epsilon)$, $[1+\epsilon, 2-\epsilon)$, and $[2-\epsilon, \infty)$, and treat

each integration separately. We have for the first part
$$\int_1^{1+\epsilon} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx \leq \int_1^{1+\epsilon} \left[(x-1)\,e^{-(1-\epsilon)^2/N_0}\right]^{n} 2^{-w}\, e^{-w'x^2/N_0}\,dx$$
$$\leq 2^{-w}\, \epsilon^{n}\, e^{-n(1-\epsilon)^2/N_0} \int_1^{\infty} e^{-w'x^2/N_0}\,dx = O(1) \cdot N_0\, e^{-\left(w'+n(1-\epsilon)^2\right)/N_0}, \tag{59}$$

and for the third part
$$\int_{2-\epsilon}^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx \leq \int_{2-\epsilon}^{\infty} \left(\int_{-\infty}^{\infty} e^{-u^2/N_0}\,du\right)^{n} \big(2(2-\epsilon)\big)^{-w}\, e^{-w'x^2/N_0}\,dx$$
$$= \big(2(2-\epsilon)\big)^{-w} (\pi N_0)^{n/2} \int_{2-\epsilon}^{\infty} e^{-w'x^2/N_0}\,dx = O(1) \cdot N_0^{1+n/2}\, e^{-w'(2-\epsilon)^2/N_0}. \tag{60}$$

We next show that the dominant term of the integration (55) is contained in the second part, i.e., the integration over $[1+\epsilon, 2-\epsilon]$. We proceed to determine this dominant term. Plugging in (53), we obtain
$$\int_{1+\epsilon}^{2-\epsilon} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx = (1+o(1)) \int_{1+\epsilon}^{2-\epsilon} \left[\frac{1}{2(2-x)}\, N_0\, e^{-(2-x)^2/N_0}\right]^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx$$
$$= \left(2^{-n-w} + o(1)\right) N_0^{n}\, e^{-\frac{4nw'}{(n+w')N_0}} \int_{1+\epsilon}^{2-\epsilon} x^{-w}\, (2-x)^{-n}\, e^{-(n+w')\left(x - \frac{2n}{n+w'}\right)^2/N_0}\,dx. \tag{61}$$
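The exponent rearrangement used in the last equality of (61) is a completing-the-square identity; as an editorial aid, we record it explicitly:
$$n(2-x)^2 + w'x^2 = (n+w')\left(x - \frac{2n}{n+w'}\right)^2 + \frac{4nw'}{n+w'},$$
which follows by expanding the right-hand side: $(n+w')x^2 - 4nx + \frac{4n^2 + 4nw'}{n+w'} = (n+w')x^2 - 4nx + 4n = n(2-x)^2 + w'x^2$.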

We further decompose the interval $[1+\epsilon, 2-\epsilon)$ into three disjoint intervals:
$$[1+\epsilon,\, 2-\epsilon) = \Big[1+\epsilon,\, \tfrac{2n}{n+w'} - \delta\Big) \cup \Big[\tfrac{2n}{n+w'} - \delta,\, \tfrac{2n}{n+w'} + \delta\Big) \cup \Big[\tfrac{2n}{n+w'} + \delta,\, 2-\epsilon\Big),$$
where $\delta$ is a number satisfying
$$0 < \delta < \min\Big\{\tfrac{2n}{n+w'} - (1+\epsilon),\; 2-\epsilon - \tfrac{2n}{n+w'}\Big\}.$$

By treating each integration separately,
$$0 \leq \int_{1+\epsilon}^{\frac{2n}{n+w'}-\delta} x^{-w}(2-x)^{-n}\, e^{-(n+w')\left(x-\frac{2n}{n+w'}\right)^2/N_0}\,dx \leq (1+\epsilon)^{-w}\, \epsilon^{-n} \int_{\delta}^{\infty} e^{-(n+w')x^2/N_0}\,dx, \tag{62}$$
$$0 \leq \int_{\frac{2n}{n+w'}+\delta}^{2-\epsilon} x^{-w}(2-x)^{-n}\, e^{-(n+w')\left(x-\frac{2n}{n+w'}\right)^2/N_0}\,dx \leq \epsilon^{-n}\, (1+\epsilon)^{-w} \int_{\delta}^{\infty} e^{-(n+w')x^2/N_0}\,dx, \tag{63}$$
$$\Big(\tfrac{2n}{n+w'}+\delta\Big)^{-w}\Big(\tfrac{2w'}{n+w'}+\delta\Big)^{-n} \int_{-\delta}^{\delta} e^{-(n+w')x^2/N_0}\,dx \leq \int_{\frac{2n}{n+w'}-\delta}^{\frac{2n}{n+w'}+\delta} x^{-w}(2-x)^{-n}\, e^{-(n+w')\left(x-\frac{2n}{n+w'}\right)^2/N_0}\,dx \leq \Big(\tfrac{2n}{n+w'}-\delta\Big)^{-w}\Big(\tfrac{2w'}{n+w'}-\delta\Big)^{-n} \int_{-\delta}^{\delta} e^{-(n+w')x^2/N_0}\,dx. \tag{64}$$

Note that, as $N_0 \to 0$,
$$\int_{\delta}^{\infty} e^{-(n+w')x^2/N_0}\,dx = \frac{1+o(1)}{2\delta} \cdot \frac{N_0}{n+w'}\, e^{-(n+w')\delta^2/N_0} = o(1)\, N_0^{1/2}.$$
Thus, we have
$$\int_{-\delta}^{\delta} e^{-(n+w')x^2/N_0}\,dx = 2\left(\int_0^{\infty} e^{-(n+w')x^2/N_0}\,dx - \int_{\delta}^{\infty} e^{-(n+w')x^2/N_0}\,dx\right) = 2\left[\frac{\sqrt{\pi}}{2}\left(\frac{1}{n+w'}\right)^{1/2} - o(1)\right] N_0^{1/2} = \left[\left(\frac{\pi}{n+w'}\right)^{1/2} + o(1)\right] N_0^{1/2} \tag{65}$$

as $N_0 \to 0$. It follows from (62)–(65) and the arbitrariness of $\delta$ that
$$\int_{1+\epsilon}^{2-\epsilon} x^{-w}(2-x)^{-n}\, e^{-(n+w')\left(x-\frac{2n}{n+w'}\right)^2/N_0}\,dx = \left[\pi^{1/2}\, 2^{-n-w}\, n^{-w}\, w'^{-n}\, (n+w')^{n+w-1/2} + o(1)\right] N_0^{1/2}. \tag{66}$$

Combining (61) and (66) results in
$$\int_{1+\epsilon}^{2-\epsilon} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx = \left[\pi^{1/2}\, 4^{-n-w}\, n^{-w}\, w'^{-n}\, (n+w')^{n+w-1/2} + o(1)\right] N_0^{n+1/2}\, e^{-\frac{4nw'}{(n+w')N_0}} \tag{67}$$
as $N_0 \to 0$.

Condition (57) indicates that (59) and (60) are negligible terms compared to (67), which concludes the lemma. ∎


The following lemma is an extension of Lemma 4. The proof is analogous to the proof of Lemma 4 and is omitted.

Lemma 5: If $n > w' \triangleq w+1$ and $k > 0$, then
$$q_{n,w,k} = \left[\pi^{1/2}\, 2^{-(2n+2w+k)}\, n^{-w}\, w'^{-n}\, (n+w')^{n+w-1/2} + o(1)\right] N_0^{n+k+1/2}\, e^{-\left(k + \frac{4nw'}{n+w'}\right)/N_0} \tag{68}$$
as $N_0 \to 0$. ∎

The following lemma accounts for the opposite case of Lemma 4, i.e., the case where $n \leq w'$.

Lemma 6: If $n \leq w' \triangleq w+1$, then
$$q_{n,w,0} = \left(\frac{2^{-(n+w)}}{n+w'} + o(1)\right) N_0^{n+1}\, e^{-(n+w')/N_0} \tag{69}$$
as $N_0 \to 0$.

Proof: We first show that the integration over $[1.5, \infty)$ is a negligible term. Note that, as $N_0 \to 0$,
$$\int_{1.5}^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx \leq \int_{1.5}^{\infty} \left(\int_{-\infty}^{\infty} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx$$
$$= (\pi N_0)^{n/2} \int_{1.5}^{\infty} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx = (\pi N_0)^{n/2} \left[\frac{3^{-w'}}{w'} + o(1)\right] N_0\, e^{-2.25\,w'/N_0} = o(1) \cdot e^{-(n+w')/N_0}, \tag{70}$$
where the second-to-last equality is due to Lemma 3, and the last one is due to the condition $n \leq w'$.

We now turn to the remaining term of $q_{n,w,0}$, which integrates over $[1, 1.5]$. Note that Lemma 2 is not applicable to the inner integration $\int_{2-x}^{1} e^{-u^2/N_0}\,du$, since $2-x$ can be arbitrarily close to 1. To this end, we first derive the asymptotic formula for the sub-term which integrates over $[1+\delta, 1.5]$, where $\delta < 0.5$ is an arbitrary positive number, and then take

the limit as $\delta \to 0$. We have
$$\int_{1+\delta}^{1.5} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx = \int_{1+\delta}^{1.5} \left[\big(2(2-x)\big)^{-n} + o(1)\right] N_0^{n}\, e^{-n(2-x)^2/N_0}\, (2x)^{-w}\, e^{-w'x^2/N_0}\,dx$$
$$= N_0^{n}\, e^{-\frac{4nw'}{(n+w')N_0}} \int_{1+\delta}^{1.5} \left[\big(2(2-x)\big)^{-n} + o(1)\right] (2x)^{-w}\, e^{-(n+w')\left(x-\frac{2n}{n+w'}\right)^2/N_0}\,dx$$
$$= N_0^{n}\, e^{-\frac{4nw'}{(n+w')N_0}} \left[\frac{2^{-(n+w)}}{n+w'}\, (1-\delta)^{-n}(1+\delta)^{-w} + o(1)\right] N_0\, e^{-(n+w')\left(1+\delta-\frac{2n}{n+w'}\right)^2/N_0}$$
$$= \left[\frac{2^{-(n+w)}}{n+w'}\, (1-\delta)^{-n}(1+\delta)^{-w} + o(1)\right] N_0^{n+1}\, e^{-\frac{(n+w') + 2(w'-n)\delta + (n+w')\delta^2}{N_0}} \tag{71}$$

as $N_0 \to 0$, where the first equality is due to Lemma 2 and the third equality is due to Lemma 3. If we define
$$\Phi(\delta, N_0) \triangleq \frac{\displaystyle\int_{1+\delta}^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} (2x)^{-w}\, e^{-w'x^2/N_0}\,dx}{\dfrac{2^{-(n+w)}}{n+w'}\, (1-\delta)^{-n}(1+\delta)^{-w}\, N_0^{n+1}\, e^{-\frac{(n+w') + 2(w'-n)\delta + (n+w')\delta^2}{N_0}}}, \tag{72}$$
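The exponent appearing in (71) and (72) is obtained by evaluating the completed square at the left endpoint $x = 1+\delta$; explicitly, since $1 - \frac{2n}{n+w'} = \frac{w'-n}{n+w'}$,
$$\frac{4nw'}{n+w'} + (n+w')\left(1+\delta-\frac{2n}{n+w'}\right)^2 = \frac{4nw' + (w'-n)^2}{n+w'} + 2(w'-n)\delta + (n+w')\delta^2 = (n+w') + 2(w'-n)\delta + (n+w')\delta^2,$$
using the identity $4nw' + (w'-n)^2 = (n+w')^2$.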

then (70) and (71) imply
$$\lim_{N_0 \to 0} \Phi(\delta, N_0) = 1, \qquad 0 < \delta < 0.5, \tag{73}$$
which further implies
$$\lim_{\delta \to 0}\, \lim_{N_0 \to 0} \Phi(\delta, N_0) = 1.$$
Note that $\Phi(\delta, N_0)$ by definition (72) is continuous within $[0, a] \times (0, b]$, where $a, b < 0.5$. If we define
$$\Phi(\delta, 0) = 1, \qquad 0 \leq \delta < 0.5,$$
then, as indicated by (73), $\Phi(\delta, N_0)$ is continuous within the closed region $[0, a] \times [0, b]$, where $a, b < 0.5$, and thus is uniformly continuous within the neighborhood $[0, a] \times [0, b]$. This property enables us to exchange the two limits on $\Phi$, i.e.,
$$\lim_{N_0 \to 0}\, \lim_{\delta \to 0} \Phi(\delta, N_0) = \lim_{\delta \to 0}\, \lim_{N_0 \to 0} \Phi(\delta, N_0) = 1.$$
We thus conclude the lemma. ∎

The following lemma is an extension of Lemma 6. The proof is analogous to that of Lemma 6 and is omitted.

Lemma 7: If $n \leq w' \triangleq w+1$ and $k > 0$, then
$$q_{n,w,k} = \left(\frac{2^{-(n+k+w)}}{n+w'} + o(1)\right) N_0^{n+k+1}\, e^{-(n+k+w')/N_0} \tag{74}$$
as $N_0 \to 0$. ∎


We first establish an explicit formula for the probability that more than $w$ bit errors occurred among the $N-S$ most reliable bits, following the developments in [1], [12]. Let $f_\alpha^c(x)$ and $f_\alpha^e(x)$ be the density functions associated with a reliability value $\alpha_i$, for a correct hard decision and an erroneous hard decision at position $i$, respectively. It follows that
$$f_\alpha^e(x) = \frac{q(x+1)}{Q(1)}\, u(x), \qquad f_\alpha^c(x) = \frac{q(x-1)}{1-Q(1)}\, u(x),$$
where $u(x)$ is the standard step function and
$$q(x) = (\pi N_0)^{-1/2} \exp(-x^2/N_0), \qquad Q(x) = \int_x^{\infty} q(z)\,dz.$$
The corresponding cumulative distribution functions are given respectively by
$$F_\alpha^e(x) = \frac{Q(1) - Q(x+1)}{Q(1)}\, u(x), \qquad F_\alpha^c(x) = \frac{1 - Q(1) - Q(x-1)}{1-Q(1)}\, u(x).$$

For $1 \leq j \leq i$, let $\beta_j(i)$ represent the $j$-th ordered reliability value among the $i$ hard-decision errors in a received sequence $\mathbf{r}$, so that
$$\beta_1(i) \geq \beta_2(i) \geq \ldots \geq \beta_i(i).$$
Similarly, the remaining $N-i$ reliability values, corresponding to correct hard decisions, are also reordered in decreasing order, denoted by
$$\gamma_1(N-i) \geq \gamma_2(N-i) \geq \ldots \geq \gamma_{N-i}(N-i).$$
The density functions of $\beta_j(i)$ and $\gamma_l(N-i)$ are then given by
$$f_{\beta_j(i)}(x) = \frac{i!}{(j-1)!\,(i-j)!} \left[1 - F_\alpha^e(x)\right]^{j-1} f_\alpha^e(x)\, F_\alpha^e(x)^{i-j},$$
$$f_{\gamma_l(N-i)}(x) = \frac{(N-i)!}{(l-1)!\,(N-i-l)!} \left[1 - F_\alpha^c(x)\right]^{l-1} f_\alpha^c(x)\, F_\alpha^c(x)^{N-i-l},$$
respectively.

Consequently, the probability of the event $\beta_j(i) \geq \gamma_l(N-i)$ is given by
$$\Pr\big(\beta_j(i) \geq \gamma_l(N-i)\big) = \int_0^{\infty} f_{\beta_j(i)}(x) \int_0^{x} f_{\gamma_l(N-i)}(z)\,dz\,dx, \tag{75}$$
and the probability that more than $w$ bit errors occur among the $N-S$ most reliable bits is determined by
$$P_{S,w} = \sum_{i=w'}^{S+w} \lambda_i \Pr\big(\beta_{w'}(i) \geq \gamma_{N-S-w}(N-i)\big) + \sum_{i=S+w'}^{N} \lambda_i, \tag{76}$$
where
$$w' \triangleq w+1, \qquad \lambda_i \triangleq \binom{N}{i} Q(1)^{i} \left[1 - Q(1)\right]^{N-i}.$$

The above formula (76) was derived in [1], [12]. We proceed to explore its asymptotic behavior as $N_0 \to 0$. In light of Lemma 1, we have
$$Q(1) = \left(\frac{1}{2\sqrt{\pi}} + o(1)\right) N_0^{1/2}\, e^{-1/N_0} \tag{77}$$
as $N_0 \to 0$. Thus,
$$\lambda_i = \binom{N}{i} \left[\left(\frac{1}{2\sqrt{\pi}} + o(1)\right) N_0^{1/2}\, e^{-1/N_0}\right]^{i} \big(1 - o(1)\big)^{N-i} = \left[\binom{N}{i} 2^{-i}\, \pi^{-i/2} + o(1)\right] N_0^{i/2}\, e^{-i/N_0}. \tag{78}$$

Note that the inner integration in (75) can be explicitly expressed as
$$\int_0^{x} f_{\gamma_{N-S-w}(N-i)}(u)\,du = \int_0^{x} \frac{(N-i)!}{(N-S-w')!\,(S+w-i)!} \left[1 - F_\alpha^c(u)\right]^{N-S-w'} f_\alpha^c(u)\, F_\alpha^c(u)^{S+w-i}\,du$$
$$= \int_0^{x} \frac{(N-i)!}{(N-S-w')!\,(S+w-i)!} \left[1 - F_\alpha^c(u)\right]^{N-S-w'} F_\alpha^c(u)^{S+w-i}\,dF_\alpha^c(u)$$
$$= \frac{(N-i)!}{(N-S-w')!\,(S+w-i)!} \sum_{k=0}^{N-S-w'} \frac{(-1)^k}{S+w'-i+k} \binom{N-S-w'}{k} F_\alpha^c(x)^{S+w'-i+k}$$
$$= \binom{N-i}{N-S-w'} \sum_{k=0}^{N-S-w'} \frac{(-1)^k\,(S+w'-i)}{S+w'-i+k} \binom{N-S-w'}{k} \left(\frac{\int_{-1}^{x-1} q(u)\,du}{1-Q(1)}\right)^{S+w'-i+k}. \tag{79}$$

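The finite sum in (79) comes from expanding $[1-F]^{N-S-w'}$ by the binomial theorem and integrating termwise. The sketch below is an editorial aid (not from the original text): it checks that $\int_0^F (1-t)^a t^b\,dt$ equals the expansion sum, where $a$ and $b$ stand for $N-S-w'$ and $S+w-i$, respectively, with illustrative values.

```python
import math

def expansion_sum(a, b, big_f):
    # Termwise integration of (1-t)^a t^b over [0, F]:
    # sum_k (-1)^k * C(a, k) * F^(b+1+k) / (b+1+k)
    return sum((-1) ** k * math.comb(a, k) * big_f ** (b + 1 + k) / (b + 1 + k)
               for k in range(a + 1))

def riemann(a, b, big_f, steps=200000):
    # Midpoint-rule approximation of int_0^F (1-t)^a t^b dt, for comparison
    h = big_f / steps
    return sum((1 - (j + 0.5) * h) ** a * ((j + 0.5) * h) ** b
               for j in range(steps)) * h

a, b, big_f = 5, 3, 0.7  # illustrative values of N-S-w', S+w-i, and F
# The two evaluations agree to within the quadrature error.
```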

Note also that
$$\lambda_i\, f_{\beta_{w'}(i)}(x) = \binom{N}{i} Q(1)^{i} \left[1-Q(1)\right]^{N-i} \left[\frac{Q(x+1)}{Q(1)}\right]^{w} \frac{q(x+1)}{Q(1)} \left[\frac{Q(1)-Q(x+1)}{Q(1)}\right]^{i-w'}$$
$$= \binom{N}{i} \left[1-Q(1)\right]^{N-i} Q(x+1)^{w}\, q(x+1) \left[Q(1)-Q(x+1)\right]^{i-w'}$$
$$= \binom{N}{i} \left[1-Q(1)\right]^{N-i} \left(\int_{x+1}^{\infty} q(v)\,dv\right)^{w} q(x+1) \left(\int_{1}^{x+1} q(z)\,dz\right)^{i-w'}. \tag{80}$$

Substituting (79) and (80) into (75) yields
$$\lambda_i \Pr\big(\beta_{w'}(i) \geq \gamma_{N-S-w}(N-i)\big) = \binom{N}{i}\binom{N-i}{N-S-w'} \left[1-Q(1)\right]^{N-i} \sum_{k=0}^{N-S-w'} \frac{(-1)^k\,(S+w'-i)}{S+w'-i+k} \binom{N-S-w'}{k}\, p_{i,k}$$
$$= \left[\binom{N}{i}\binom{N-i}{N-S-w'} + o(1)\right] \sum_{k=0}^{N-S-w'} \frac{(-1)^k\,(S+w'-i)}{S+w'-i+k} \binom{N-S-w'}{k}\, p_{i,k}, \tag{81}$$
where the last equality is due to (77) and
$$p_{i,k} \triangleq \int_0^{\infty} \left(\int_{-1}^{x-1} q(u)\,du\right)^{S+w'-i+k} \left(\int_{x+1}^{\infty} q(v)\,dv\right)^{w} q(x+1) \left(\int_1^{x+1} q(z)\,dz\right)^{i-w'} dx.$$

We next simplify $p_{i,k}$ as $N_0 \to 0$, utilizing Lemma 1:
$$p_{i,k} = (\pi N_0)^{-(n+i)/2} \int_1^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} \left(\int_x^{\infty} e^{-v^2/N_0}\,dv\right)^{w} e^{-x^2/N_0} \left(\int_1^{x} e^{-z^2/N_0}\,dz\right)^{i-w'} dx$$
$$= (\pi N_0)^{-(n+i)/2} \int_1^{\infty} \left(\int_{2-x}^{1} e^{-u^2/N_0}\,du\right)^{n} \left(\frac{1+o(1)}{2x}\, N_0\, e^{-x^2/N_0}\right)^{w} e^{-x^2/N_0} \left(\int_1^{x} e^{-z^2/N_0}\,dz\right)^{i-w'} dx$$
$$= \left[\pi^{-(n+i)/2} + o(1)\right] N_0^{\,w-(n+i)/2}\, q_{n,w,i-w'}, \tag{82}$$
where $n = S+w'-i+k$.

Case 1: $S \leq w'$.
When $i = w'$ and $k = 0$, Lemma 6 immediately indicates
$$p_{w',0} = \left(\pi^{-(S+w')/2}\, \frac{2^{-(S+w)}}{S+w'} + o(1)\right) N_0^{(S+w')/2}\, e^{-(S+w')/N_0}. \tag{83}$$

We next show that for the case $i = w'$, $k > 0$,
$$p_{w',k} = o(1) \cdot p_{w',0}. \tag{84}$$
When $i = w'$ and $0 < k \leq w'-S$, Lemma 6 indicates that the exponent (in $e$) of $p_{w',k}$ is $-\frac{S+k+w'}{N_0}$. Thus, all corresponding terms $p_{w',k}$ ($0 < k \leq w'-S$) are negligible compared to $p_{w',0}$. When $i = w'$ and $k > w'-S$, Lemma 4 indicates that the exponent of $p_{w',k}$ is $-\frac{4(S+k)w'}{(S+k+w')N_0}$. Note that, since $S+k > w'$,
$$\frac{4(S+k)w'}{S+k+w'} > \frac{4(S+k)w'}{2(S+k)} = 2w' \geq S+w'.$$
Thus, the terms $p_{w',k}$ with $k > w'-S$ are again negligible compared to $p_{w',0}$.

We next tackle the case $i > w'$. It suffices to consider only $i \leq S+w'$. This is because, when $i > S+w'$,
$$\lambda_i = O(1) \cdot N_0^{i/2}\, e^{-i/N_0} = o(1) \cdot N_0^{(S+w')/2}\, e^{-(S+w')/N_0} = o(1) \cdot p_{w',0}. \tag{85}$$
When $i = S+w'$,
$$\lambda_{S+w'} = \left[\binom{N}{S+w'} 2^{-(S+w')}\, \pi^{-(S+w')/2} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0}. \tag{86}$$

When $w' < i < S+w'$, Lemma 7 indicates that
$$p_{i,0} = \left(\pi^{-(S+w')/2}\, \frac{2^{-(S+w)}}{S+w'} + o(1)\right) N_0^{(S+w')/2}\, e^{-(S+w')/N_0}. \tag{87}$$
Similar to the preceding argument for the case where $i = w'$ and $k > 0$,
$$p_{i,k} = o(1) \cdot p_{i,0}, \qquad i > w'. \tag{88}$$

Combining (81), (83), (84), (87), and (88), we obtain
$$\lambda_i \Pr\big(\beta_{w'}(i) \geq \gamma_{N-S-w}(N-i)\big) = \left[\binom{N}{i}\binom{N-i}{N-S-w'}\, \pi^{-(S+w')/2}\, \frac{2^{-(S+w)}}{S+w'} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0}$$
$$= \left[\binom{N}{S+w'}\binom{S+w'}{i}\, \pi^{-(S+w')/2}\, \frac{2^{-(S+w)}}{S+w'} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0} \tag{89}$$

for $w' \leq i < S+w'$. Combining (76), (85), (86), and (89), we arrive at the conclusion
$$P_{S,w} = \sum_{i=w'}^{S+w} \left[\binom{N}{S+w'}\binom{S+w'}{i}\, \pi^{-(S+w')/2}\, \frac{2^{-(S+w)}}{S+w'} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0} + \left[\binom{N}{S+w'} 2^{-(S+w')}\, \pi^{-(S+w')/2} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0}$$
$$= \left[\binom{N}{S+w'} \sum_{i=w'}^{S+w'} \binom{S+w'}{i}\, 2^{-(S+w')}\, \pi^{-(S+w')/2} + o(1)\right] N_0^{(S+w')/2}\, e^{-(S+w')/N_0}$$
as $N_0 \to 0$.
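The rewriting of the binomial products in (89) uses the identity $\binom{N}{i}\binom{N-i}{N-S-w'} = \binom{N}{S+w'}\binom{S+w'}{i}$: both sides equal $\frac{N!}{i!\,(N-S-w')!\,(S+w'-i)!}$. A quick numerical confirmation (an editorial aid with illustrative values only):

```python
from math import comb

def left(n_tot, i, s, wp):
    # choose i positions, then N-S-w' of the remaining N-i
    return comb(n_tot, i) * comb(n_tot - i, n_tot - s - wp)

def right(n_tot, i, s, wp):
    # equivalently: set aside S+w' positions, then choose i among them
    return comb(n_tot, s + wp) * comb(s + wp, i)

n_tot, s, wp = 24, 4, 3  # illustrative N, S, w'
checks = [left(n_tot, i, s, wp) == right(n_tot, i, s, wp)
          for i in range(wp, s + wp + 1)]  # the range w' <= i <= S+w' used in (89)
```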

Case 2: $S > w'$.
Lemmas 5 and 7 indicate that
$$p_{i,k} = o(1) \cdot p_{i,0}, \qquad k > 0. \tag{90}$$


Hence, we focus on the case $k = 0$. Lemma 4 reveals that, when $i = w'$ and $k = 0$,
$$p_{w',0} = \left[\pi^{-(S+w)/2}\, 4^{-S-w}\, S^{-w}\, w'^{-S}\, (S+w')^{S+w-1/2} + o(1)\right] N_0^{(S+w)/2}\, e^{-\frac{4Sw'}{(S+w')N_0}}. \tag{91}$$

When $w' < i < S$, Lemma 5 is invoked regarding $p_{i,0}$. Thus its (dominant) exponent coefficient is expressed by
$$g(i) = i - w' + \frac{4(S-i+w')\,w'}{S-i+2w'}, \qquad w' \leq i \leq S.$$
We proceed to characterize it. Taking the derivative, we obtain
$$g'(i) = 1 - \frac{4w'^2}{(S-i+2w')^2}.$$
Clearly, $g(i)$ is a concave function and achieves its minimum value at the boundary points $i = w'$ or $i = S$. Furthermore, the condition $S > w'$ indicates $g(w') < g(S)$. Thus, $g(i)$ achieves the minimum value $\frac{4Sw'}{S+w'}$ at $i = w'$, i.e.,
$$p_{i,0} = o(1) \cdot p_{w',0}, \qquad w' < i < S. \tag{92}$$

When $S \leq i < S+w'$, Lemma 7 is applicable to $p_{i,0}$. Thus, its dominant exponent coefficient is expressed by
$$\big([S-(i-w')] + w'\big) + (i-w') = S+w' > \frac{4Sw'}{S+w'},$$
which indicates that
$$p_{i,0} = o(1) \cdot p_{w',0}, \qquad S \leq i < S+w'. \tag{93}$$

When $i \geq S+w'$, we have similarly
$$\lambda_i = \left[\binom{N}{i} 2^{-i}\, \pi^{-i/2} + o(1)\right] N_0^{i/2}\, e^{-i/N_0} = o(1) \cdot p_{w',0}. \tag{94}$$

Summarizing over (76), (81), (90)–(94), we conclude the second part of the theorem:
$$P_{S,w} = (1+o(1)) \binom{N}{w'}\binom{N-w'}{S}\, p_{w',0} = \left[\binom{N}{w'}\binom{N-w'}{S}\, \pi^{-\frac{S+w}{2}}\, 4^{-(S+w)}\, S^{-w}\, w'^{-S}\, (S+w')^{S+w-\frac{1}{2}} + o(1)\right] N_0^{\frac{S+w}{2}}\, e^{-\frac{4Sw'}{(S+w')N_0}}.$$

REFERENCES

[1] D. Agrawal and A. Vardy, "Generalized minimum distance decoding in Euclidean-space: Performance analysis," IEEE Trans. Inform. Theory, vol. 46, pp. 60–83, Jan. 2000.
[2] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers. New York: McGraw-Hill, 1978.
[3] R. E. Blahut, Algebraic Codes for Data Transmission, Cambridge University Press, Cambridge, UK, 2002.
[4] E. L. Blokh and V. V. Zyablov, "Coding of generalized concatenated codes," Probl. Inform. Transm., vol. 10, pp. 218–222, 1974.
[5] D. Chase, "A class of algorithms for decoding block codes with channel measurement information," IEEE Trans. Inform. Theory, vol. 18, pp. 170–181, Jan. 1972.
[6] M. Cosnard, J. Duprat, and Y. Robert, "Systolic triangularization over finite fields," J. Parallel and Distributed Computing, vol. 9, no. 3, pp. 252–260, 1990.
[7] B. G. Dorsch, "A decoding algorithm for binary block codes and J-ary output channels," IEEE Trans. Inform. Theory, vol. 20, pp. 391–394, May 1974.
[8] G. D. Forney, Jr., "Generalized minimum distance decoding," IEEE Trans. Inform. Theory, vol. 12, pp. 125–131, Apr. 1966.
[9] M. Fossorier and S. Lin, "Soft-decision decoding of linear block codes based on ordered statistics," IEEE Trans. Inform. Theory, vol. 41, pp. 1379–1396, Sept. 1995.
[10] M. Fossorier and S. Lin, "A unified method for evaluating the error-correction radius of reliability-based soft-decision algorithms for linear block codes," IEEE Trans. Inform. Theory, vol. 44, pp. 691–700, Mar. 1998.
[11] M. Fossorier and S. Lin, "Reliability-based information set decoding of binary linear block codes," IEICE Trans. Fundamentals, vol. E82-A, pp. 2034–2042, Oct. 1999.
[12] M. Fossorier and S. Lin, "Error performance analysis for reliability-based decoding algorithms," IEEE Trans. Inform. Theory, vol. 48, pp. 287–293, Jan. 2002.
[13] M. Fossorier, "Reliability-based soft-decision decoding with iterative information set reduction," IEEE Trans. Inform. Theory, vol. 48, pp. 3101–3106, Dec. 2002.
[14] D. Gazelle and J. Snyders, "Reliability-based code-search algorithms for maximum-likelihood decoding of block codes," IEEE Trans. Inform. Theory, vol. 43, pp. 239–249, Jan. 1997.
[15] C. K. Koc and S. N. Arachchige, "A fast algorithm for Gaussian elimination over GF(2) and its implementation on the GAPP," J. Parallel and Distributed Computing, vol. 13, no. 1, pp. 118–122, 1991.
[16] A. Leon-Garcia, Probability and Random Processes for Electrical Engineering, Addison-Wesley, 2nd Ed., 1994.
[17] R. Lidl and H. Niederreiter, Finite Fields, Cambridge University Press, 1983.
[18] Y. Tang, T. Fujiwara, and T. Kasami, "Asymptotic optimality of the GMD and Chase decoding algorithms," IEEE Trans. Inform. Theory, vol. 48, pp. 2401–2405, Aug. 2002.
[19] Y. Tang, S. Ling, and F.-W. Fu, "On the reliability-order-based decoding algorithms for binary linear block codes," IEEE Trans. Inform. Theory, vol. 52, pp. 328–336, Jan. 2006.
[20] A. Valembois and M. Fossorier, "Box and match techniques applied to soft-decision decoding," IEEE Trans. Inform. Theory, vol. 50, pp. 796–810, May 2004.
[21] Y. Wu and C. N. Hadjicostis, "Soft-decision decoding of linear block codes using ordered G-space encodings," Proceedings of IEEE Globecom, vol. 2, pp. 921–925, San Antonio, TX, 2001.
[22] Y. Wu and C. N. Hadjicostis, "On soft-decision decoding of linear block codes using the most reliable basis," accepted by IEEE Trans. Inform. Theory.