
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 48, NO. 12, DECEMBER 2000

Majority-Logic-Like Vector Symbol Decoding with Alternative Symbol Value Lists

John J. Metzner, Life Senior Member, IEEE

Abstract—Majority-logic-like decoding is an outer concatenated code decoding technique using the structure of a binary majority logic code. It is shown that it is easy to adapt such a technique to handle the case where the decoder is given an ordered list of two or more prospective candidates for each inner code symbol. Large reductions in failure probability can be achieved. Simulation results are shown for both block and convolutional codes. Punctured convolutional codes allow a convenient flexibility of rate while retaining high decoding power. For example, a (856, 500) terminated convolutional code with an average of 180 random first-choice symbol errors can correct all the errors in a simple manner about 97% of the time, with the aid of second-choice values. A (856, 500) maximum-distance block code could correct only up to 178 errors based on guaranteed correction capability and would be extremely complex.

Index Terms—Block codes, convolutional codes, concatenated codes, majority logic.

I. INTRODUCTION

CONCATENATED codes [1] correct errors in two stages. The inner code or coded modulation makes decisions about multibit "vector" symbols. The most popular outer code is the Reed–Solomon code [2]. Reed–Solomon decoders operate on symbols as field elements of GF($2^r$). In [3] and [4], an outer code general decoding technique was described for vector symbol block codes, where check symbols are computed as modulo-2 sums of $r$-bit vectors, with no large field operations. In [5] and [6], a simpler version was described for codes with the structure of a majority logic block code. Performance curves were given in [5] and [6] using a decoding technique, denoted MLL for "majority-logic-like."

MLL and vector symbol decoding are most effective when symbols that are in error usually contain numerous bit errors. This may occur due to burst noise or failure of an inner error-correcting code. If necessary, an effect of almost random residual bit errors may be created intentionally [3] by performing an invertible transform of each symbol vector prior to transmission, using different transforms for different symbol positions. An inverse transform of an individual symbol converts the residual errors into a pseudorandom bit-error pattern prior to outer code decoding.

Paper approved by C. Schlegel, the Editor for Coding Theory and Techniques of the IEEE Communications Society. Manuscript received October 11, 1999; revised February 17, 2000 and May 15, 2000.

The author is with the Department of Computer Science and Engineering, Pennsylvania State University, University Park, PA 16802 USA (e-mail: [email protected]).

Publisher Item Identifier S 0090-6778(00)10905-5.

General vector symbol decoding can be applied to convolutional codes [7]. Also, the principle of majority logic binary decoding can be applied to binary convolutional codes [8], [9], so MLL convolutional code decoding is also possible.

A multibit symbol decoder could give an informative small list of alternative values, ordered by likelihood. The following are some examples.

1) For some codes and decoders, there is a small possibility that one wrong code word will be closer to the received sequence than the correct one, but only a much smaller possibility that there are two wrong code words closer. This will be demonstrated in Section VII.

2) Trellis decoders [10], [11] record likelihoods of various paths and can note the two or more most likely paths [12].

3) Receiver diversity with distributed decision combining [13], [14] and retransmission automatic-repeat-request (ARQ) provide two or more decisions for the same symbol.

List decoding [12], [15]–[18] uses trial error detection to reveal if a correct word is in a list. However, here there are two choices for each symbol, and error detection is only over a large block of symbols. It is impractical to try all combinations, but in vector symbol and MLL decoding, most correct alternate choices reveal themselves almost automatically. This paper describes the method in MLL block and convolutional code decoding. A related method for general vector symbol decoding is described in [19] and [20].

II. REVIEW OF THE ORIGINAL MLL TECHNIQUE FOR BLOCK CODES

A. One-Step Majority-Logic Decoding for Binary Block Codes [8]

The codes have the property that there are $J$ sets of parity equations where a given bit position is checked in all $J$ equations, while the other bit positions appear in at most one equation each. They form a binary $J \times n$ matrix $H$, where $n$ is the block length. Decision on a bit position is based on agreement with a clear majority of the $J$ equation sums. Cyclic block codes are used because of the symmetry property that if $J$ equations are found for one bit position, cyclic shifts give the same equations for the other bit positions.
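As a minimal sketch of this binary decision rule, assuming the $J$ orthogonal check sums for the bit under test have already been formed (names are illustrative):

```python
def majority_logic_bit_decision(received_bit, check_sums):
    """One-step majority-logic decision: flip the test bit when a clear
    majority of its J orthogonal parity-check sums are nonzero (fail)."""
    J = len(check_sums)
    failures = sum(1 for s in check_sums if s != 0)
    return received_bit ^ 1 if failures > J // 2 else received_bit
```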


B. MLL Decoding [5]

The symbols are $r$-bit binary vectors, but the same binary matrix $H$ is used as for binary majority logic decoding.

Let $Y$ be an $n \times r$ matrix, where each row is a received vector symbol. Define a syndrome matrix

$$S = HY. \qquad (1)$$

The rows of $S$ are the vector sums corresponding to the $J$ equations. Let $s_i$ be the $i$th row of $S$.

1) Decoding Rule: Compute $S$. If none of the rows of $S$ are zero, and one same value appears in two or more of the rows, assume this replicated value is the error vector and correct the test symbol. Otherwise make no correction. Continue at least until every symbol has been tested; perform two or more passes through the symbols as allowed or warranted. The justification is that if a value $e$ appears in two or more rows and no equations involving the test symbol sum to zero, it is likely that the test symbol has an error of value $e$.

Sometimes modifications can be made in the decoding rule for better performance, depending on the symbol bit size. An example of this will be discussed later.

A missed or false correction often will be corrected on a second pass through the symbols, due to other symbols having been corrected. After the corrections, error detection, by including additional parity somewhere in the data, can ensure the correction was done right.
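The decoding rule reduces to a few lines of code. The following is a sketch, not the paper's implementation: $r$-bit symbols are held as Python integers so that the modulo-2 vector sum is XOR, and each of the $J$ equations containing the test position is given as a list of symbol positions.

```python
from collections import Counter

def mll_test(symbols, test_pos, equations):
    """Apply the MLL rule at one test position. `equations` lists, for
    each of the J checks containing test_pos, the positions it sums."""
    sums = [0] * len(equations)
    for k, eq in enumerate(equations):
        for pos in eq:
            sums[k] ^= symbols[pos]      # mod-2 vector sum as integer XOR
    if any(s == 0 for s in sums):
        return False                     # some equation checks: no correction
    (value, count), = Counter(sums).most_common(1)
    if count >= 2:                       # a replicated nonzero value
        symbols[test_pos] ^= value       # presume it is the test symbol's error
        return True
    return False
```

A full pass applies `mll_test` to every symbol position; a second pass helps because earlier corrections expose new replications.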

III. CONVOLUTIONAL CODE DECODING

Convolutional code syndrome decoding also can be described in terms of a parity matrix $H$ which operates on the received sequence [8], although $H$ is semi-infinite unless the code is terminated. A class of self-orthogonal majority logic convolutional codes has been developed in [9] and is described in [8, Table 13.2]. We will illustrate the case of rate 1/2, memory 17 from this table. For this code $J = 6$. Following the notation in [8], $e^{(0)}_i$ represents the error in the position-$i$ data symbol, and $e^{(1)}_i$ represents the error in the position-$i$ check symbol. The six equations involving data symbol $e^{(0)}_0$ are shown in (2) below.

For $i < 17$, the components with negative subscripts are zero; this corresponds to the memory of the convolutional code starting as 17 zeros.
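The six sums of (2) can be written out explicitly. As a sketch, assuming the tap positions $\{0, 2, 7, 13, 16, 17\}$ commonly quoted for the $J = 6$, memory-17, rate-1/2 self-orthogonal code (an assumption; [8, Table 13.2] is the authoritative listing), the check sums orthogonal on $e^{(0)}_0$ are

$$\begin{aligned}
s_0 &= e_0^{(0)} + e_0^{(1)}\\
s_2 &= e_0^{(0)} + e_2^{(0)} + e_2^{(1)}\\
s_7 &= e_0^{(0)} + e_5^{(0)} + e_7^{(0)} + e_7^{(1)}\\
s_{13} &= e_0^{(0)} + e_6^{(0)} + e_{11}^{(0)} + e_{13}^{(0)} + e_{13}^{(1)}\\
s_{16} &= e_0^{(0)} + e_3^{(0)} + e_9^{(0)} + e_{14}^{(0)} + e_{16}^{(0)} + e_{16}^{(1)}\\
s_{17} &= e_0^{(0)} + e_1^{(0)} + e_4^{(0)} + e_{10}^{(0)} + e_{15}^{(0)} + e_{17}^{(0)} + e_{17}^{(1)}
\end{aligned} \qquad (2)$$

Each sum contains $e^{(0)}_0$, and no other error component appears in more than one sum, which is the self-orthogonality property the decoder relies on; for vector symbols, all additions are modulo-2 (bitwise XOR) sums of $r$-bit vectors.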

MLL decoding of a data symbol is similar to the rules previously described. One difference is that only the data symbols are being tested.

IV. USE OF ALTERNATIVE CHOICE INFORMATION

The procedure will be described based on the assumption that there are two choices for each vector symbol: a maximum-likelihood first choice and a second most likely choice. However, the procedure extends directly to where some symbols have only one choice and/or some have more than two choices. Fig. 1 illustrates the procedure for a block code.

In a given symbol position, let $F$ represent the first-choice $r$-bit symbol value and let $S$ represent the second-choice $r$-bit symbol value. There is a circulating register with two components in each position: $F$ is stored in a lower-row position and $F \oplus S$ in the corresponding upper-row position. Note that if $S$ is correct, $F \oplus S$ represents the true error value. If no second-choice value is available in some position, the upper-row position can simply be set to zero.

For each test position, five vector sums of first choices, sum($i$), $i = 1, \ldots, 5$, are formed, corresponding to the 1's positions in each of the five rows of matrix $H$. For example, sum(1) consists of the positions checked by the first row, together with the test position, which appears in all rows. In each position, two kinds of comparisons can be done.

A. Compare Sums with Second-Choice Entries

Consider sum(1), which is the vector mod-2 sum of the first choices in the first row's positions. If sum(1) is nonzero, it can be compared to the five $F \oplus S$ values that correspond to the symbol positions in that equation. If a match is found, that $F \oplus S$ is added to the lower entry, the upper entry is made 0, and the error in most cases has been corrected. The same is done for sums 2, 3, 4, and 5.
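A sketch of this comparison in the integer-XOR representation used earlier (names illustrative): the lower row holds the first choices $F$, the upper row holds the differences $F \oplus S$, so adopting a matching second choice is a single XOR into the lower entry.

```python
def second_choice_match(lower, upper, eq_positions, eq_sum):
    """Test A for one equation. `eq_sum` is the XOR of the first-choice
    (lower-row) symbols in the equation; `upper[pos]` holds F xor S.
    On a match, adopt the second choice and zero the upper entry."""
    if eq_sum == 0:
        return False
    for pos in eq_positions:
        if upper[pos] != 0 and upper[pos] == eq_sum:
            lower[pos] ^= upper[pos]   # lower entry becomes the second choice S
            upper[pos] = 0             # upper entry made 0, as in the text
            return True
    return False
```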

B. Search for Duplicate Sums

Recall that the test position appears in all the equations. The five sums can be compared to see if there is exactly one case of two or more nonzero identical values and no case of zero sum. If so, this identical value is presumed to be the error value in the test position and is used to correct the symbol. This is the same as the test in [5] and [6] for the case where there are no second choices.

Fig. 1. MLL decoding circuit, using second-choice information, for a (21, 11) vector symbol code.

If there is a third choice $T$, $F \oplus T$ could be made a third entry for a symbol, and there is the option (possibly in a later pass) of also looking for a match with $F \oplus T$ if the $F \oplus S$ matches fail for that position.

The same method can also be applied to a terminated convolutional code.

V. THE “DECLARED CORRECT” IDEA

When a checking equation adds to zero, there is a high probability that all symbols in the equation are error-free. Such symbols could be declared error-free. This could allow the decoder to skip testing them in a later pass. Since errors adding to zero may have a significant, though small, chance of occurring, a safer rule would be to declare a symbol correct only if it is in at least two error-free equations. The declaration can be made by associating one extra bit with each symbol, initially 0, and set to 1 after a declaration. As a further aid to decoding ability, if no replication is found, but all symbols except the test symbol in one equation have been declared correct, the test symbol can be corrected by adding the equation sum. This event can be discovered by ANDing the included nontest-symbol declared bits while adding the symbol values. If the AND result is "1," then the nonzero equation sum is presumably due only to the test symbol error. The simulations in [5] and [6] did not make use of the "declared correct" concept. Also, the simulations in this paper use the concept only for convolutional codes, in a somewhat different way to be described later.
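A sketch of this bookkeeping (illustrative names; the two-equation threshold follows the safer rule above, and the second routine is the AND-of-declared-bits correction):

```python
def update_declarations(sums, equations, declared, min_clear=2):
    """Declare a symbol correct once it appears in at least `min_clear`
    equations whose sums are zero."""
    clear_count = {}
    for s, eq in zip(sums, equations):
        if s == 0:
            for pos in eq:
                clear_count[pos] = clear_count.get(pos, 0) + 1
    for pos, c in clear_count.items():
        if c >= min_clear:
            declared[pos] = True

def correct_by_declaration(symbols, declared, test_pos, eq, eq_sum):
    """If every nontest symbol in the equation is declared correct, the
    nonzero sum is presumably the test symbol's error: add it in."""
    if eq_sum != 0 and all(declared[p] for p in eq if p != test_pos):
        symbols[test_pos] ^= eq_sum
        declared[test_pos] = True
        return True
    return False
```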

The declared concept is useful in other ways. In distributed decision decoding, high confidence "declared" decisions can be passed to other locations, where they could be placed in the first- or second-choice category. In iterative decoding with the inner code, if the inner code is convolutional, declared correct symbols could be passed to the inner code. In selective repeat ARQ, repeat could be requested only for nondeclared symbols.

VI. FINE-TUNING THE DECODER

When the symbol vector size is not large, it was found that the order of the tests can have a significant effect on the results. This is because the type 1 test involves a fairly large number of somewhat independent matches, each of which has about a $2^{-r}$ chance of accidental match. For the case of convolutional codes, a great improvement was obtained by holding off doing the type 1 test until after the first pass.

The criteria for the duplicate test also can be fine-tuned. In prior work [6], it was found best to search for a duplicate only if there was no zero sum in the test equations. However, this can fail if there are two identical errors in the same equation, and no other errors. In this case, even though there would be $J - 1$ identical nonzero sums, there would be one zero sum, so the correction would not be made; but simple majority logic could correct this. A moderate improvement was made by making the correction when the number of nonzero identical copies was at least two more than the number of zero sums, a compromise between majority logic and MLL.
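In code, the compromise criterion is a comparison between the multiplicity of the most replicated nonzero sum and the count of zero sums; a minimal sketch:

```python
from collections import Counter

def compromise_duplicate_test(sums):
    """Return the presumed error value under the majority/MLL compromise:
    correct only when the most common nonzero sum value appears at least
    two more times than there are zero sums; otherwise return None."""
    zeros = sum(1 for s in sums if s == 0)
    nonzero = [s for s in sums if s != 0]
    if not nonzero:
        return None
    value, count = Counter(nonzero).most_common(1)[0]
    return value if count >= zeros + 2 else None
```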


VII. ASSIGNMENT OF SECOND-CHOICE PROBABILITIES

A key question is how to assign the conditional probability $P_2$ that the second choice is wrong given that the first choice is wrong. This will depend on the application and where the choices come from. There are the following two cases.

Case 1) Prior to outer code decoding, a list is prepared for each $r$-bit symbol, consisting of one or more most likely candidates, in order of likelihood. That list is based solely on received signal(s) for that symbol. If there is a repetition, as in ARQ or diversity reception, predecision combining is used to compile the list for that symbol.

Case 2) Two or more signals are received for each symbol, such as in diversity reception, but there is no presymbol-decision combining. For each separate signal reception, just the one most likely symbol is recorded, possibly with a confidence estimate. The symbol list is made up of the separate decisions for a particular symbol, ordered by any confidence estimate. The separate decisions can be the same in this case, whereas in Case 1) this is not possible.

To get a quantitative estimate for Case 1), consider the $r$-bit symbol as a random code waveform. For fixed signal-to-noise ratio (SNR), the received waveform will usually be close to some distance $d$ from the transmitted waveform. Let $N = 2^r$, the size of the transmitted waveform set. Say $d$ actually is the distance from the correct waveform. Let $q$ = probability, independently, that any particular other code waveform is within a sphere of radius $d$ from the received waveform, $P(j)$ = probability $j$ other code waveforms are within a sphere of radius $d$ from the received waveform, $P_1$ = probability the first choice is incorrect, and $P_2$ = probability the second choice is incorrect, given the first choice is incorrect.

$$P_2 = \Pr(\text{two or more inside sphere} \mid \text{one or more inside sphere}) = \frac{\Pr(\text{two or more inside sphere})}{\Pr(\text{one or more inside sphere})} \qquad (3)$$

$$P(j) = \binom{N-1}{j} q^j (1-q)^{N-1-j} \qquad (4)$$

Now

$$P_2 = \frac{[P(2) + P(3) + \cdots + P(N-1)]}{[P(1) + P(2) + \cdots + P(N-1)]}. \qquad (5)$$

From (4)

$$\frac{P(2)}{P(1)} = \frac{(N-2)\,q}{2(1-q)} \qquad (6)$$

$$\frac{P(3)}{P(2)} = \frac{(N-3)\,q}{3(1-q)}, \qquad \text{etc.} \qquad (7)$$

Equation (7) shows that each of the second through last terms in the numerator bracket of (5) is smaller than a corresponding term in the denominator bracket, and there is also an extra denominator bracket term. Thus

$$P_2 < \frac{(N-2)\,q}{2(1-q)}. \qquad (8)$$

But $P_1 = 1 - (1-q)^{N-1} \approx (N-1)q$ for small $(N-1)q$, and $N - 2 < N - 1$, so

$$P_2 \lesssim \frac{P_1}{2(1-q)}. \qquad (9)$$

Normally, $q \ll 1$, so the second-choice conditional-error probability is less than the first-choice error probability by a factor between 1/2 and 1. But where distance $d$ exceeds what even a perfect decoder could correct, both $P_1$ and $P_2$ would be close to 1.
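A quick numerical check of this conclusion, computing $P_1$ and $P_2$ exactly from the binomial model of (4) (the values of $N$ and $q$ below are arbitrary illustrations, not taken from the paper):

```python
from math import comb

def p1_p2(N, q):
    """Exact P1 and P2 under the sphere model: P(j) is the chance that j
    of the N-1 wrong waveforms fall inside the radius-d sphere."""
    P = [comb(N - 1, j) * q**j * (1 - q) ** (N - 1 - j) for j in range(N)]
    p1 = sum(P[1:])          # at least one wrong waveform inside the sphere
    p2 = sum(P[2:]) / p1     # at least two inside, given at least one
    return p1, p2

# Illustrative only: 8-bit symbols, small per-waveform capture probability.
p1, p2 = p1_p2(N=2 ** 8, q=1e-4)
print(p1, p2, p2 / p1)       # the ratio falls between 1/2 and 1 for small q
```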

In Case 2), suppose there are two receptions with independent noise. Symbol decisions are made independently for the two receptions, and these are the two choices. Suppose by agreement a particular one of the two receptions is deemed first choice for all symbols (possibly because that reception is known to have come in at a higher SNR). Since the two separate sets of decisions are made independently, the conditional probability $P_2$ is actually the same as the unconditional symbol-error probability of the second-choice reception. When the second choice comes from the reception at the lower SNR, $P_2 > P_1$, whereas in Case 1), $P_2 < P_1$. Or, if the two receptions are at the same SNR, $P_2 = P_1$.

VIII. SIMULATION FOR BLOCK CODES

In the simulations, error positions are selected at random according to the symbol-error probability. A selected error position is given a randomly selected pattern of zeros and ones, excluding the all-zeros pattern. This effect could be created by the invertible symbol transform method in [3].
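This error model is easy to reproduce; a minimal sketch, with $r$-bit symbols stored as integers (names illustrative):

```python
import random

def inject_errors(symbols, symbol_error_prob, r, rng=random):
    """Hit each position independently with probability symbol_error_prob,
    XORing in a random nonzero r-bit pattern (all-zeros excluded)."""
    for i in range(len(symbols)):
        if rng.random() < symbol_error_prob:
            symbols[i] ^= rng.randrange(1, 2 ** r)
    return symbols
```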

Fig. 2 shows block decoding error probability for MLL decoding of the (21, 11) code described in Fig. 1. Shown for reference is a maximum-distance nonbinary (21, 11) code, which can be derived from a punctured or shortened Reed–Solomon code [2], [8]. With no second choice, the error probability with 16-bit vectors is lower than for a maximum-distance code. Use of second-choice information shows further substantial improvement. Cases 1) and 2) are shown for both 8- and 16-bit symbols. Performance improves with increasing vector size because of the reduced effect of false matches. For this code, 16 bits is close to the large vector performance limit in the range of symbol-error probabilities shown. Note that, with 8-bit vectors, Case 2), the independent decision case with $P_2 = P_1$, is actually better than Case 1) with $P_2$ given by (9), even though $P_2$ is greater in the Case 2) example. The reverse is true at 16 bits. This is because in Case 1) the second choice is always wrong if the first choice is right. Thus there are more chances for a wrong second choice to make a false match in Case 1). At 8 bits, these false matches are far more common than at 16 bits.

Fig. 2. Block decoding failure probability versus predecoding symbol-error probability for a (21, 11) code, with second choice, Cases 1) and 2), and without second choice. Also shown is the effect of vector bit size change for Cases 1) and 2).

Fig. 3. Block decoding failure probability versus predecoding symbol-error probability comparisons for a (73, 45) code with 16-bit symbols.

Fig. 3 shows performance for a (73, 45) majority logic code structure [8] with 16-bit symbols. Again, even without second choice the failure probability is smaller than with a maximum-distance code correcting up to the guaranteed correction capability. Also shown is the square of the no-second-choice failure probability. This is to be compared with the $P_2 = P_1$, independent Case 2) curve. For Case 2), an alternative would be two separate block decodings for the two receptions, with success if at least one of the two decodings checks. Probability of failure would then be given by the squared curve. At $P_1$ below about 0.14, the squared curve becomes lower than the second-choice method. Another option if both decoders fail is for one to send its post-outer-code decoding decisions (or perhaps just the ones it has "declared" correct) to the other. This can help because a failed decoder usually has reduced the number of symbol errors, and thus can submit more reliable decisions to the other than it could have before outer code decoding. With this option, the failure probability would be well below the square-of-no-second-choice curve. This effect would be more dramatic in the convolutional code case, because failed decodings average a much greater percentage reduction in data symbol errors.

IX. SIMULATION FOR CONVOLUTIONAL CODES

Fig. 4. Whole block decoding failure probability versus predecoding symbol-error probability, comparing choice, no choice, and Cases 1) and 2) for the memory 17, (1017, 500) terminated convolutional code.

Fig. 5. Whole block decoding failure probability versus predecoding symbol-error probability, showing effect of vector symbol size and number of passes, Case 1), for a memory 17, (1017, 500) terminated convolutional code. Also, the effect of correcting one remaining data symbol error is shown for Case 1).

The simulations used a termination after 500 data symbols, which for a rate-1/2 convolutional code with memory length $m$ corresponds to a $(1000 + m, 500)$ block code. Cases of memory 17 and memory 35 were chosen using self-orthogonal codes from [8, Table 13.2]. Also simulated was a (856, 500) code obtained by puncturing every third check symbol from the (1035, 500) terminated convolutional code. Results plotted are as follows.

1) The probability that the decoder fails to decode all the symbol errors in the block.
2) The post-decoding data symbol-error probability.
3) The probability there is more than one uncorrected data symbol error after decoding.

The latter case is of interest because many of the decoding failures resulted in only one data symbol in the block left uncorrected. These cases could be corrected with a simple single-symbol error correcting code incorporated in the data.

There is another option with second-choice information which is included in the simulation, sketched in code below. Assume "declared correct" information on symbols has been recorded. If a pass corrects no errors but the maximum number of passes has not been reached, ordinarily the decoder would halt, since another pass with the same tests would do nothing. Instead, do the following operation. Look only at the data symbols that have not been declared correct, and substitute second-choice values for all the first-choice check symbols involving each such data symbol. Then perform one more "last try pass" on the symbols. This was suggested by observation that an isolated data symbol error was left when all but at most one of the symbols that check that erroneous data symbol were in error. Replacing those check symbols with second choices would replace at most one right value and is likely to reduce the number of check errors and allow correction of the remaining data error(s).
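A sketch of the last-try substitution, continuing the lower/upper register representation used earlier (illustrative names; `checks_involving` maps a data position to the positions of the check symbols that involve it):

```python
def last_try_substitution(lower, upper, declared, data_positions,
                          checks_involving):
    """Before one final pass: for every data symbol not declared correct,
    replace the first-choice values of its involved check symbols by the
    second choices (lower ^= upper adopts S; upper is then zeroed)."""
    for d in data_positions:
        if declared[d]:
            continue
        for c in checks_involving[d]:
            if upper[c] != 0:          # a second choice exists for this check
                lower[c] ^= upper[c]   # substitute the second-choice value
                upper[c] = 0
```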

Fig. 4 shows the whole block failure probability for the (1017, 500) terminated memory 17 convolutional code, using 16-bit symbols. Both Case 1), where $P_2$ is given by (9), and Case 2), where the receptions are independent and $P_2 = P_1$, are illustrated. Also shown is the square of the no-second-choice failure probability. This is better than the Case 2) curve for values of $P_1$ less than about 0.125.

Fig. 5 shows the sensitivity to bit size, number of passes, and the ability to reduce to at most one remaining data symbol error, for a terminated (1017, 500) code, Case 1). Four passes is about as good as six passes when the symbol-error probability is below about 0.18. Also shown is the probability that there is greater than one remaining error. Note that in the 12-bit case, at a symbol-error probability of 0.14 or less, the more-than-one curve is more than two orders of magnitude lower than the more-than-zero curve, which means that in more than 99% of the decoding failures only one remaining data symbol is left in error.

Fig. 6. Post-decoding symbol-error probability versus predecoding symbol-error probability, for a memory 17, (1017, 500) terminated convolutional code. No second choice, second choice with Cases 1) and 2), and different vector bit sizes are included.

Fig. 7. Whole block decoding failure probability versus predecoding symbol-error probability, for a memory 35, (1035, 500) terminated convolutional code. Also, the effect of correcting one remaining data symbol error is shown.

Fig. 6 shows the post-decoding symbol-error probability for the (1017, 500) code. These values are far lower than the block decoding failure probability because even failed decodings usually correct most of the symbol errors. This is true even in the no-second-choice case. Looking at the no-second-choice, error-fraction-among-failed curve, we see that the post-decoding symbol-error probability in failed decodings is reduced to about 0.01. This suggests a powerful alternative in the independent receptions case. If the two are separately decoded and both fail, the partially corrected data can be fed from one decoder to the other to allow additional passes with this new, more reliable second-choice information. The process has similarities to turbo code operations [21].

Better error correction can be achieved using a larger constraint length. Fig. 7 shows the (1035, 500) block decoding failures with 20-bit symbols.

Fig. 8 shows the post-decoding symbol-error probability after six passes with the constraint 35 code. For the case with second choice, curves are shown for 20-bit symbols (which approximates the large vector case) and one case of 12-bit symbols. At a predecoding symbol-error probability of about 0.32, there are initially an average of 331 first-choice errors/block (160 data errors), reduced to an average of less than 5 data symbol errors/block, with 12-bit or 20-bit symbols. For comparison, a maximum-distance (1035, 500) block code has guaranteed correction only up to 267 errors. At low symbol-error probabilities, the 20-bit curve is much lower than the 12-bit curve due to excessive false matches at 12 bits.

A. Punctured Convolutional Codes

It is a simple matter to modify the decoder to allow for puncturing. Basically, punctured symbols are just like erasures. Equations that contain a punctured check symbol are simply ignored in the match/duplicate searches.
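In the decoder this amounts to a one-line filter over the check equations; a sketch:

```python
def active_equations(equations, punctured):
    """Treat punctured positions as erasures: drop any equation containing
    one, and use the survivors unchanged in the match/duplicate tests."""
    return [eq for eq in equations
            if not any(pos in punctured for pos in eq)]
```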


Fig. 8. Post-decoding symbol-error probability versus predecoding symbol-error probability for a memory 35, (1035, 500) terminated convolutional code. Six passes and 20-bit symbols are assumed.

Fig. 9. Whole block failure and post-decoding symbol probabilities versus predecoding symbol-error probability, with Case 1) choice decoding for a memory 35, (856, 500) punctured and terminated convolutional code. Curves include 16- and 20-bit vector sizes. For the 20-bit size, the effect of correcting one remaining data symbol error is shown.

Fig. 9 shows a case where every third check symbol was punctured from the (1035, 500) code, yielding a (856, 500) block code. There are only 356 check symbols, yet at a predecoding symbol-error probability of about 0.21, an average of 180 first-choice errors, decoding with 20-bit symbol vectors and $P_2$ given by (9) is completely successful almost 97% of the time and leaves at most one data error about 99% of the time. A maximum-distance (856, 500) block code correcting up to 178 errors would be successful less than half the time and would be much more complex.

X. CONCLUSIONS

The ability to make use of second-choice decisions in a simple manner adds greatly to the power of MLL decoding. One-step majority logic block codes have somewhat limited length and rate choices. Punctured convolutional codes allow more flexible rate choices. The one example given was just an arbitrary puncturing pattern, yet gave good results. Convolutional code memories greater than 35 can be used for still greater decoding power with a slightly greater than linear increase in delay and complexity.

Maximum-distance codes have better erasure correcting capability than any other possible code. However, the ability to conveniently use multiple-choice decision information appears to more than compensate for this erasure-correcting advantage, using either block or convolutional codes. MLL performance, however, is deteriorated by small symbol bit size. If 16-bit symbols are needed for good performance and a Reed–Solomon code could use 8-bit symbols, the 16-bit symbols could be broken into two 8-bit parts and two Reed–Solomon codes interleaved. (A scheme in [22] interleaves 40-bit symbols with five 8-bit Reed–Solomon codes.) The error probability among 8-bit units will be smaller than among 16-bit units. Depending on how the inner coding–decoding is done and the error statistics, the difference between 8-bit and 16-bit error probabilities may or may not be significant. If the inner code failures were a random pattern of bit errors, the probability that in a wrong 16-bit decision one particular 8-bit half would be correct is about $2^{-8}$, but for some inner coder–decoders it could be significant. The 16-bit average-error probability cannot exceed twice the average 8-bit error probability. In the block code examples, if (9) applies, the MLL Case 1) two-choice block decoding failures are below the maximum-distance code even at twice the predecoding symbol-error probability.

REFERENCES

[1] G. D. Forney Jr., Concatenated Codes. Cambridge, MA: MIT Press, 1966.
[2] I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," J. Soc. Ind. Appl. Math., vol. 8, pp. 300–304, June 1960.
[3] J. J. Metzner and E. J. Kapturowski, "A general decoding technique applicable to replicated file disagreement location and concatenated code decoding," IEEE Trans. Inform. Theory, vol. 36, pp. 911–917, July 1990.
[4] K. T. Oh and J. J. Metzner, "Performance of a general decoding technique over the class of randomly chosen parity check codes," IEEE Trans. Inform. Theory, vol. 40, pp. 160–166, Jan. 1994.
[5] J. J. Metzner, "Majority-logic-like decoding of vector symbols," IEEE Trans. Commun., vol. 44, pp. 1227–1230, Oct. 1996.
[6] J. J. Metzner, "Simulation of majority-logic-like vector symbol decoding for block codes," in Proc. 1996 Princeton Conf. Information Systems and Sciences, Mar. 1996, pp. 567–571.
[7] Y. S. Seo, "A new decoding technique for convolutional codes," Ph.D. dissertation, Pennsylvania State Univ., University Park, May 1991.
[8] S. Lin and D. J. Costello Jr., Error Control Coding. Englewood Cliffs, NJ: Prentice-Hall, 1983, ch. 7.
[9] J. L. Massey, Threshold Decoding. Cambridge, MA: MIT Press, 1963.
[10] G. D. Forney Jr. and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Trans. Inform. Theory, vol. 44, pp. 2384–2415, Oct. 1998.
[11] Special Issue on Bandwidth and Power-Efficient Coded Modulation I and II, IEEE J. Select. Areas Commun., vol. 7, Aug. and Dec. 1989.
[12] N. Seshadri and C.-E. W. Sundberg, "List Viterbi decoding algorithms with applications," IEEE Trans. Commun., vol. 42, pp. 313–323, Feb./Mar./Apr. 1994.
[13] Y. Chau and J.-T. Sun, "Diversity with distributed decisions combining for direct-sequence CDMA in a shadowed Rician-fading land-mobile satellite channel," IEEE Trans. Veh. Technol., vol. 45, pp. 237–247, May 1996.
[14] E. Geraniotis and Y. A. Chau, "Robust data fusion for multisensor detection systems," IEEE Trans. Inform. Theory, vol. 36, pp. 1265–1279, Nov. 1990.
[15] J. Snyders, "Reduced lists of error patterns for maximum likelihood soft decoding," IEEE Trans. Inform. Theory, vol. 37, pp. 1194–1200, July 1991.
[16] M. P. C. Fossorier and S. Lin, "Soft-decision decoding of linear block codes based on ordered statistics," IEEE Trans. Inform. Theory, vol. 41, pp. 1379–1385, Sept. 1995.
[17] D. Gazelle and J. Snyders, "Reliability-based code-search algorithms for maximum-likelihood decoding of block codes," IEEE Trans. Inform. Theory, vol. 43, pp. 239–249, Jan. 1997.
[18] M. A. Shokrollahi and H. Wasserman, "List decoding of algebraic-geometric codes," IEEE Trans. Inform. Theory, vol. 45, pp. 432–437, Mar. 1999.
[19] J. J. Metzner, "Vector symbol decoding with list inner symbol decisions and dependent errors," IEEE Trans. Inform. Theory, submitted for publication.
[20] J. J. Metzner, "Vector symbol decoding with list inner symbol decisions," in Proc. ISIT 2000, p. 481.
[21] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. ICC'93, May 1993, pp. 1064–1070.
[22] T. Kasami, T. Takata, K. Yamashita, T. Fujiwara, and S. Lin, "On bit error probability of a concatenated coding scheme," IEEE Trans. Commun., vol. 45, pp. 536–543, May 1997.

John J. Metzner (M'55–SM'79–LS'96) was born in New York, NY, in 1932. He received the B.E.E., M.E.E., and Eng.Sc.D. degrees from New York University in 1953, 1954, and 1958, respectively. He was a Link Aviation, National Science Foundation, and David Sarnoff Fellow during his graduate studies.

He was a Research Scientist at New York University from 1958 to 1967, and a member of the faculty at the Electrical Engineering Departments of New York University and Polytechnic University from 1967 to 1974. He held faculty appointments at Wayne State University, Detroit, MI, from 1974 to 1980, and at Oakland University, Oakland, MI, from 1980 to 1986. He served a year as Acting Dean of the School of Engineering and Computer Science at Oakland University from 1985 to 1986. Since 1986, he has been a Professor of Computer Engineering, with appointments in both the Department of Electrical Engineering and the Department of Computer Science and Engineering. He also served two years as Acting Director of the Computer Engineering Program at Pennsylvania State University. In research, he has devised various ARQ protocols for reliable and efficient data communication, including two selective repeat strategies. Other work has included incremental redundancy and memory retransmission techniques, methods for efficient comparison of remote replicated data files, efficient acknowledgment protocols for slotted ring networks, improved broadcast retransmission protocols, methods for improved utilization of ALOHA and spread-spectrum multiple access, and techniques for simpler and more effective error correction. He is the author of the book Reliable Data Communications (New York: Academic Press, 1998).