
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. IT-32, NO. 2, MARCH 1986 181

Error-Correcting Codes for Byte-Organized Memory Systems

CHIN-LONG CHEN, SENIOR MEMBER, IEEE

Abstract—Techniques are presented for the construction of error-correcting codes for semiconductor memory subsystems that are organized in a multibit-per-chip manner. These codes are capable of correcting all single-byte errors and detecting all double-byte errors, where a byte represents the number of bits that are fed from the same chip to the same codeword.

I. INTRODUCTION

ERROR-CORRECTING codes (ECC's) are applied to semiconductor memory subsystems to increase reliability, reduce service costs, and maintain data integrity. In particular, the class of single-error-correcting and double-error-detecting (SEC-DED) codes has been successfully applied to many computer memory subsystems [1]-[6]. These codes have become an integral part of the memory design for medium and large systems throughout the computer industry.

The error-control effectiveness of an ECC depends on how the memory chips are organized with respect to the ECC. In the case of SEC-DED codes, the 1-b/chip organization is the most effective design. In this organization, each bit of a codeword is stored in a different chip; thus any type of failure in a chip can corrupt, at most, one bit of the codeword. As long as the errors do not line up in the same codeword, multiple errors in the memory are correctable.

As the trend in chip design continues toward higher and higher density, it becomes more difficult to design a 1-b/chip type of memory organization because of the system granularity problem. For example, the system capacity has to be at least 4 Mbytes if 1-Mb chips are used to design a memory with a 32-b data path.

In a b-b/chip memory, a chip failure may result in one to b bit errors, depending on the failure type: cell, word-line, bit-line, partial-chip, or total-chip. With a maintenance strategy that allows correctable errors in the memory to accumulate, SEC-DED codes are not suitable for a b-b/chip memory. Since multiple bit errors are not correctable by an SEC-DED code, the uncorrectable-error (UE) rates will be high if the distribution of chip failure types is skewed to those types that result in multiple bit errors.

Manuscript received November 27, 1984; revised August 21, 1985. This paper was presented at the IEEE International Symposium on Information Theory, Brighton, England, June 1985.

The author is with the IBM Corporation, P.O. Box 390, Department D18, Building 707, Poughkeepsie, NY 12602.

IEEE Log Number 8406617.


A more serious problem is the loss of data integrity due to the miscorrection of some multiple errors. The class of single-byte error-correcting (SBC) codes may be used instead, where a byte represents b bits from the same chip. This will reduce the UE rates. However, the data integrity problem remains because errors generated by a double chip failure may not be correctable.

Single-byte error-correcting and double-byte error-detecting (SBC-DBD) codes are suitable candidates for a b-b/chip memory. With an SBC-DBD code, errors generated by a single chip failure are always correctable, and errors generated by a double chip failure are always detectable. Thus the UE rates can be kept low, and the data integrity can be maintained.

Techniques exist for the construction of efficient SBC codes [7]-[13]. However, the construction of SBC-DBD codes has not been investigated extensively. Although some results have been reported [14]-[16], the codes constructed have not been proved to be optimum in general.

A b-bit byte is a b-bit pattern and can be considered an element of the finite field GF(q) of q elements, where q = 2^b. It is well known that if the minimum distance of a linear code over GF(q) is equal to or greater than four, then the code is capable of correcting all single-byte errors and also capable of detecting all double-byte errors. Thus a necessary condition for an SBC-DBD code is that the minimum distance of the code be greater than or equal to four. In this paper, a linear block code over GF(q) is called an SBC-DBD code if the minimum distance of the code is greater than or equal to four.
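The byte arithmetic needed throughout is ordinary arithmetic in GF(2^b): addition is the bitwise exclusive-or of the two b-bit patterns, and multiplication is polynomial multiplication over GF(2) reduced modulo a primitive polynomial of degree b. A minimal sketch, assuming b = 3 and the primitive polynomial x^3 + x + 1 (any primitive polynomial of degree b would serve):

```python
# Bytes as elements of GF(2^b): addition is bitwise XOR, multiplication is
# polynomial multiplication over GF(2) reduced modulo a primitive polynomial.
# b = 3 and the polynomial x^3 + x + 1 are assumptions made for this sketch.

B = 3                # bits per byte
POLY = 0b1011        # x^3 + x + 1, primitive over GF(2)

def gf_add(x, y):
    """Field addition: bitwise XOR of the two byte patterns."""
    return x ^ y

def gf_mul(x, y):
    """Field multiplication, reducing modulo POLY whenever degree b is reached."""
    result = 0
    while y:
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
        if x & (1 << B):
            x ^= POLY
    return result

# The element alpha = 0b010 (the polynomial x) is primitive: its powers run
# through every nonzero byte exactly once.
powers, e = [], 1
for _ in range(2 ** B - 1):
    powers.append(e)
    e = gf_mul(e, 0b010)
assert sorted(powers) == list(range(1, 2 ** B))
print([format(p, "03b") for p in powers])      # 001 010 100 011 110 111 101
```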

It is well known that for q = 2, a code with a minimum distance of four can always be constructed by adding an overall parity check to a code with a minimum distance of three [5], [8], [9]. However, for q greater than two an SBC-DBD code may not be obtained by adding an overall parity check to a code with a minimum distance of three.

In this paper, we review some known techniques for the construction of SBC-DBD codes over GF(q) where q > 2. We also present construction techniques for more efficient SBC-DBD codes. In Section II, we review the construction of the extended Reed-Solomon codes with three check bytes. We also present construction methods for SBC-DBD codes based on cyclic codes. In Section III, we present an iterative construction method for SBC-DBD codes. In Section IV, we present the parameters of some of the codes constructed from Sections II and III. Finally, we make some concluding comments in Section V.



II. CYCLIC CODES

A cyclic code over GF(q) of length n is an ideal generated by a generator polynomial g(x) in the algebra of polynomials modulo x^n − 1 [9]. The number of check bytes r of the code is equal to the degree of g(x). Let α be a primitive element of GF(q^m), and let q^m − 1 = n·n0. Then β = α^(n0) is a primitive root of x^n − 1, and the roots of g(x) can be expressed as powers of β. If β^i is a root of g(x), then g(x) contains as a factor the minimum polynomial of β^i [9], which contains as roots the distinct elements of β^(i·q^j), j = 1, 2, ..., m. Let g(x) be the least common multiple of the minimum polynomials of β1, β2, ..., βs. The code can be defined as the null space of the rows of the following parity-check matrix H [9]:

        | 1   β1   β1^2   ...   β1^(n-1) |
    H = | 1   β2   β2^2   ...   β2^(n-1) |                    (1)
        | .   .    .            .        |
        | 1   βs   βs^2   ...   βs^(n-1) |

Each element of H in (1) is a power of β and can be expressed as an m-component column vector over GF(q). Each element of GF(q) can be expressed as a binary b × b matrix. The zero element is expressed as the b × b all-zeros matrix. The one element is expressed as the b × b identity matrix. The nonzero elements are expressed as powers of the binary b × b companion matrix [9] of a primitive polynomial used to construct GF(q). Thus H can be expressed as a binary matrix of nb columns.
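As an illustration of this binary representation, the sketch below (again assuming b = 3 and the primitive polynomial x^3 + x + 1) builds the b × b companion matrix C and checks that its powers represent the 2^b − 1 nonzero field elements:

```python
# Represent GF(2^b) elements by binary b x b matrices: zero -> all-zeros
# matrix, one -> identity, and the nonzero elements -> powers of the companion
# matrix C of a primitive polynomial.  b = 3 and x^3 + x + 1 are assumed here.

B = 3
LOW_COEFFS = [1, 1, 0]        # coefficients of 1, x, x^2 in x^3 + x + 1

def companion():
    """Companion matrix C of the primitive polynomial, acting as 'multiply by alpha'."""
    C = [[0] * B for _ in range(B)]
    for i in range(1, B):
        C[i][i - 1] = 1                  # alpha * x^(i-1) = x^i
    for i in range(B):
        C[i][B - 1] = LOW_COEFFS[i]      # alpha * x^(B-1) = low-order part of the polynomial
    return C

def mat_mul(A, M):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] & M[k][j] for k in range(B)) % 2 for j in range(B)]
            for i in range(B)]

identity = [[int(i == j) for j in range(B)] for i in range(B)]
C = companion()

power, reps = identity, []
for _ in range(2 ** B - 1):
    reps.append(power)                   # C^0, C^1, ..., C^(2^b - 2)
    power = mat_mul(power, C)
assert power == identity                 # C has order 2^b - 1, as expected
assert len({tuple(map(tuple, M)) for M in reps}) == 2 ** B - 1   # all distinct
```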

An important question in cyclic codes is the relation between the roots of the generator polynomial and the minimum distance of a code. The Bose-Chaudhuri-Hocquenghem (BCH) bound is a lower bound on the minimum distance of a cyclic code [5], [8], [9]. A generalized BCH bound has been presented in [17]. The following theorem is a special case of the generalized BCH bound.

Theorem 1: Let β be a primitive root of x^n − 1, and let {β^(a0), β^(a0+a1), β^(a0+a2), β^(a0+a1+a2)} be a subset of the roots of the generator polynomial of a cyclic code of length n over GF(q). If a1 and a2 are relatively prime to n, then the code is an SBC-DBD code.
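The hypothesis of Theorem 1 is easy to check mechanically. The sketch below verifies the two gcd conditions for a candidate exponent pattern; the particular values n = 63, a0 = 0, a1 = 1, a2 = 8 are assumptions chosen only to exercise the check:

```python
from math import gcd

def theorem1_exponents(n, a0, a1, a2):
    """Exponents of the four roots required by Theorem 1, after checking
    that a1 and a2 are relatively prime to n."""
    if gcd(a1, n) != 1 or gcd(a2, n) != 1:
        raise ValueError("a1 and a2 must be relatively prime to n")
    return sorted({a0 % n, (a0 + a1) % n, (a0 + a2) % n, (a0 + a1 + a2) % n})

print(theorem1_exponents(63, 0, 1, 8))    # [0, 1, 8, 9]
```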

Cyclic SBC-DBD codes can be constructed from the BCH bound or the generalized BCH bound of Theorem 1. In many cases, the codes can be extended by up to three bytes [10], [11]. In particular, SBC-DBD codes can be obtained from the following construction methods.

Construction 1A (Extended Reed-Solomon Codes [7]-[11]): Let n = q + 2, r = 3, and let β be a primitive root of x^(n-3) − 1. The parity-check matrix is specified by

        | 1   1       1        ...   1            1  0  0 |
    H = | 1   β       β^2      ...   β^(n-4)      0  1  0 |
        | 1   β^(-1)  β^(-2)   ...   β^(-(n-4))   0  0  1 |

The first q − 1 columns of H specify a cyclic code with a generator polynomial that contains 1, β, and β^(-1) as a subset of roots.
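For small q the SBC-DBD property of Construction 1A can be confirmed by exhaustive search. The sketch below is a brute-force check assuming q = 4 (so n = 6, r = 3) and the primitive polynomial x^2 + x + 1 for GF(4); it enumerates the null space of H and verifies that every nonzero codeword has weight at least four. Construction 1B can be checked the same way over GF(q^2).

```python
# Brute-force check of Construction 1A for q = 4 (n = q + 2 = 6, r = 3).
# GF(4) is realised from x^2 + x + 1 with elements encoded as 0..3; the
# polynomial and encoding are assumptions made for this sketch only.

from itertools import product

B, POLY = 2, 0b111                 # GF(4) via x^2 + x + 1

def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & (1 << B):
            x ^= POLY
    return r

beta = 2                            # primitive root of x^(n-3) - 1 = x^3 - 1
beta2 = gf_mul(beta, beta)          # beta^2 = beta^(-1), since beta^3 = 1

H = [
    [1, 1,     1,     1, 0, 0],     # all-ones row,        extended by (1, 0, 0)
    [1, beta,  beta2, 0, 1, 0],     # 1, beta, beta^2,     extended by (0, 1, 0)
    [1, beta2, beta,  0, 0, 1],     # 1, beta^-1, beta^-2, extended by (0, 0, 1)
]

def syndrome(v):
    """H * v over GF(4); field addition is XOR."""
    s = []
    for row in H:
        acc = 0
        for h, c in zip(row, v):
            acc ^= gf_mul(h, c)
        s.append(acc)
    return s

codeword_weights = [sum(1 for c in v if c)
                    for v in product(range(4), repeat=6)
                    if any(v) and syndrome(v) == [0, 0, 0]]
assert len(codeword_weights) == 4 ** 3 - 1       # 63 nonzero codewords
assert min(codeword_weights) >= 4                # minimum distance at least 4
print("q = 4 code of Construction 1A: minimum weight", min(codeword_weights))
```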

Construction 1B (Extended BCH Codes [8]-[11]): Let n = q + 2, r = 3, and let β be a primitive root of x^(n-1) − 1. The parity-check matrix is specified by

    H = | 1   1   1     ...   1         1 |
        | 1   β   β^2   ...   β^(n-2)   0 |

The first q + 1 columns of H specify a cyclic code with a generator polynomial that contains 1, β, and β^(-1) as a subset of roots.

Remarks: The SBC-DBD codes obtained from Con- structions 1A and 1B are optimum in that the codelength is the longest possible for a given q and r. However, there is only one codelength for a given q. The codelength may not be long enough to accommodate the data bits for a particu- lar application. In this case, other techniques must be used for the construction of SBC-DBD codes. In general, known SBC-DBD codes have not been proved to be optimum for r greater than three.

Construction 2 [16]: Let m be even, n = q^m + 1, r = 2m, and let β be a primitive root of x^n − 1. The parity-check matrix is specified by

    H = | 1   β   β^2   ...   β^(n-1) |.

The code is a cyclic code with a generator polynomial that contains β, β^q, β^(-1), and β^(-q) as a subset of roots.

Construction 3: Let n = q^(2m) + 1, r = 3m + 1, let β be a primitive root of x^(n-2) − 1, and let β1 = β^(q^m + 1). The parity-check matrix is specified by

        | 1   1    1      ...   1          1  0 |
    H = | 1   β    β^2    ...   β^(n-3)    0  0 |
        | 1   β1   β1^2   ...   β1^(n-3)   0  1 |

The first n − 2 columns of H specify a cyclic code with a generator polynomial that contains 1, β, β^(q^m), and β^(q^m + 1) as a subset of roots.

Remarks: An SBC-DBD code over GF(q) with n = q^2 + 1 and r = 4 can be constructed from Constructions 2 and 3. For r > 4, Construction 3 yields a longer code than Construction 2.
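The comparison in the remark is a matter of simple arithmetic; a short sketch, assuming q = 4 as an example, tabulates (r, n) for the two constructions:

```python
# Tabulate (r, n) for Constructions 2 and 3 over GF(q); q = 4 is an assumed
# example.  Both reach n = q^2 + 1 at r = 4, while for larger r Construction 3
# attains a given length with fewer check bytes.

q = 4
for m in (2, 4):                       # Construction 2 requires m even
    print("Construction 2: m =", m, " r =", 2 * m, " n =", q ** m + 1)
for m in (1, 2):
    print("Construction 3: m =", m, " r =", 3 * m + 1, " n =", q ** (2 * m) + 1)
```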

III. ITERATIVE CONSTRUCTION

The following two theorems have been reported in [15].

Theorem 2: Let H1 and H2 be the parity-check matrices of two SBC-DBD codes over GF(q) of lengths n1 and n2 with r1 and r2 check bytes, respectively. Assume that the ith column of H1 is of the form

    | I    |
    | X(i) |,

where I is a b × b identity matrix and X(i) is a binary matrix with b columns and (r1)b − b rows. Let the jth column of H2 be Y(j), and let T(i, j) be a column of a matrix H with

    T(i, j) = | X(i) |,     i = 1, 2, ..., n1;  j = 1, 2, ..., n2.
              | Y(j) |

Then the code with H as the parity-check matrix is an SBC-DBD code of length n = (n1)(n2) with (r1) + (r2) − 1 check bytes.
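As a concrete check of Theorem 2, the sketch below works at b = 1, where a byte is a single bit and an SBC-DBD code is simply a distance-4 (SEC-DED) code. It takes H1 = H2 to be a parity-check matrix of the [8,4,4] extended Hamming code whose columns all begin with a 1 (the 1 × 1 identity block), forms the 64 columns T(i, j), and verifies that any three of them are linearly independent; the particular choice of H1 is an assumption made for the sketch.

```python
# Sketch of the Theorem 2 construction at b = 1 (bytes are single bits, so
# SBC-DBD reduces to SEC-DED).  H1 = H2 is a parity-check matrix of the
# [8,4,4] extended Hamming code whose every column has the form (I; X(i))
# with I the 1 x 1 identity; this choice is an assumption for the sketch.

from itertools import combinations

def h_column(j):
    """Column j of H1 (and H2): a leading 1 followed by the 3-bit binary of j."""
    return (1, (j >> 2) & 1, (j >> 1) & 1, j & 1)

n1 = n2 = 8                                  # r1 = r2 = 4 check bits each
X = [h_column(i)[1:] for i in range(n1)]     # X(i): column of H1 without the identity part
Y = [h_column(j) for j in range(n2)]         # Y(j): full column of H2

# Columns T(i, j) = (X(i); Y(j)) of the combined matrix H:
# 3 + 4 = r1 + r2 - 1 = 7 binary rows and n1 * n2 = 64 columns.
cols = [sum(bit << k for k, bit in enumerate(X[i] + Y[j]))
        for i in range(n1) for j in range(n2)]
col_set = set(cols)

# Minimum distance >= 4 over GF(2): no zero column, no repeated column, and
# no three distinct columns XOR-ing to zero.
assert 0 not in col_set
assert len(col_set) == len(cols)
assert all((a ^ b) not in col_set for a, b in combinations(cols, 2))
print("combined code: length", len(cols), "with 7 check bits, distance >= 4")
```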

Example: Let b = 3, q = 8. A (10, 7) SBC-DBD code over GF(8) can be constructed from Construction 1B. Since the parity-check matrix contains an all-ones row vector, two of these codes can be used to construct a (100, 95) SBC-DBD code.

Theorem 3: Let H1 be the parity-check matrix of an SBC-DBD code with length n and with r check bytes. Let X(i) be a column of H1, and let H be the matrix whose columns T(i) are defined by

    T(i) = | 0    |     for i = 1, 2, ..., n,
           | X(i) |

and

    T(i) = | 1        |     for i = (n + 1), (n + 2), ..., 2n.
           | X(i − n) |

The code with H as the parity-check matrix is an SBC-DBD code with length 2n and with (r + 1) check bytes.

Theorems 2 and 3 provide iterative methods of constructing long codes from short codes. The construction of Theorem 2 requires that one of the component codes contain an all-ones vector over GF(q) in the null space. This requirement can be removed. The following construction is an extension of Theorem 2 with a relaxed condition on a component code. The validity of the construction is proved in the Appendix.

Construction 4: Let C1 and C2 be (n1, n1 − r1) and (n2, n2 − r2) SBC-DBD codes over GF(q) with H1 and H2, respectively, as parity-check matrices. Assume that H1 contains a row vector with one zero and (n1) − 1 ones. A column T(i) of H1 can be expressed as

    | 0    |                         | 1    |
    | X(1) |    for i = 1, and as    | X(i) |    for i = 2, 3, ..., n1.

Assume also that X(j) ≠ X(1)e for j ≠ 1, e ≠ 0, and e ∈ GF(q). Similarly, assume that H2 contains a row vector with one zero and (n2) − 1 ones. A column of H2 is expressed as

    | 0    |                         | 1    |
    | Y(1) |    for j = 1, and as    | Y(j) |    for j = 2, 3, ..., n2.

Assume also that Y(j) ≠ Y(1)e for j ≠ 1, e ≠ 0, and e ∈ GF(q). Define T(0, i, j) and T(1, i, j) as the following matrices:

    T(0, i, j) = | 0        |     i = 1, 2, ..., (n1) − 1,
                 | X(i + 1) |
                 | Y(1)     |

and

    T(1, i, j) = | T(i)     |     i = 1, 2, ..., n1;  j = 1, 2, ..., (n2) − 1.
                 | Y(j + 1) |

Let H be the matrix that contains as columns all possible T(0, i, j) and T(1, i, j). The code with H as the parity-check matrix is an SBC-DBD code over GF(q) with codelength n = (n1)(n2) − 1 and number of check bytes r = (r1) + (r2) − 1.

Example: An SBC-DBD code from Construction 3 has n = q^(2m) + 1 and r = 3m + 1. The null space contains a vector with one zero and (n − 1) ones. According to Construction 4, two such codes can be used to construct an SBC-DBD code of length n^2 − 1 with 6m + 1 check bytes.

IV. SOME CODE PARAMETERS

The construction methods presented in the previous sections yield some SBC-DBD codes that are more efficient than other known codes. We will discuss some of these codes in this section. In particular, we will present the most efficient codes for r less than or equal to seven. Table I shows the parameters of the most efficient SBC-DBD codes known to the author.

TABLE I
CODE PARAMETERS FOR SBC-DBD CODES

                                          Codelength (q = 2^h)
                                       h = 2    h = 3    h = 4     h = 5
    r = 3   n = 2^h + 2                    6       10       18        34
    r = 4   n = 2^(2h) + 1                17       65      257      1025
    r = 5   n = 2(2^(2h) + 1)             41*     133*     514      2050
    r = 6   n = (2^h + 2)(2^(2h) + 1)    102      650     4626     34850
    r = 7   n = (2^(2h) + 1)^2 − 1       288     4224    66048   1050624

    *Computer-generated code.

For r = 3, optimum codes are obtained from Constructions 1A and 1B. These codes can accommodate up to b(q − 1) data bits. For r = 4, the most efficient codes can be constructed from Construction 2 or Construction 3. The codelengths of these codes are equal to q^2 + 1.

For r = 5, codes with length 2(q^2 + 1) can be constructed from codes with r = 4 and length q^2 + 1 by applying Theorem 3. Intuitively, one would think that longer codes should be constructible. However, aside from a computer search, there is no other construction method that yields longer codes. For q = 4, a code of length 41 was reported in [18]. For q = 8, the author has found a code of length 133. Both codes are generated from a computer search.

For r = 6, codes with n = (q + 2)(q^2 + 1) can be constructed. From Construction 1B, the null space of the extended Reed-Solomon codes with three check bytes contains a vector of all ones. It can be shown [15] that the null space of the codes from Construction 1A also contains a vector of all ones. Thus, applying Theorem 2 to a (q + 2, q − 1) extended Reed-Solomon code and a (q^2 + 1, q^2 − 3) SBC-DBD code, an SBC-DBD code with r = 6 and n = (q + 2)(q^2 + 1) can be constructed.

For r = 7, codes with n = (q^2 + 1)^2 − 1 can be constructed. From Construction 3, a (q^2 + 1, q^2 − 3) SBC-DBD code can be obtained. The null space of the code contains a vector with one zero and q^2 ones. Applying Construction 4, an SBC-DBD code of length (q^2 + 1)^2 − 1 with r = 7 can be constructed.
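The codelength column of Table I follows directly from the formulas quoted above; a short sketch reproducing it for q = 2^h, h = 2, ..., 5 (the starred r = 5 entries 41 and 133 are the computer-search results and are entered by hand):

```python
# Reproduce the codelength column of Table I from the formulas in the text,
# for q = 2^h, h = 2, ..., 5.  The starred r = 5 entries (41 and 133) come
# from the computer searches mentioned above and are entered by hand.

def lengths_for(h):
    q = 2 ** h
    return {
        3: q + 2,                      # Constructions 1A / 1B
        4: q ** 2 + 1,                 # Constructions 2 / 3
        5: 2 * (q ** 2 + 1),           # Theorem 3 applied to an r = 4 code
        6: (q + 2) * (q ** 2 + 1),     # Theorem 2 applied to r = 3 and r = 4 codes
        7: (q ** 2 + 1) ** 2 - 1,      # Construction 4 applied to two r = 4 codes
    }

computer_found = {(5, 2): 41, (5, 3): 133}      # the starred entries

for h in (2, 3, 4, 5):
    row = lengths_for(h)
    for r in sorted(row):
        n = computer_found.get((r, h), row[r])
        star = "*" if (r, h) in computer_found else ""
        print(f"h={h}  r={r}  n={n}{star}")
```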

V. CONCLUSION

In this paper, we have discussed the need for error-correcting codes that are able to correct single-byte errors and detect double-byte errors. These SBC-DBD codes are suitable for a computer memory organized in a multibit-per-chip manner. We also have presented techniques for the construction of SBC-DBD codes. Combining these construction techniques, we have obtained some SBC-DBD codes that are more efficient than previously known codes. A table of the best SBC-DBD codes known has been presented.

A sufficient condition for a code to have a minimum distance of four is that the syndrome of no error, the syndromes of all single errors, and the syndromes of double errors, one of which is at the first position, are all distinct. From this condition, we can derive an upper bound on the codelength of an (n, n − r) SBC-DBD code, namely 1 + (q − 1)n + (n − 1)(q − 1)^2 ≤ q^r. All known SBC-DBD codes for r > 3 fail to meet the upper bound. The bound may not be tight enough, and the author also believes that there is plenty of room for improvement in the construction of SBC-DBD codes.
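The bound is easy to evaluate numerically. The sketch below computes the largest n satisfying the inequality for a given q and r and places it next to the q = 8 column of Table I; only for r = 3 is the bound attained.

```python
# Evaluate the upper bound  1 + (q - 1)n + (n - 1)(q - 1)^2 <= q^r  and compare
# it with the q = 8 (h = 3) column of Table I.

def n_upper_bound(q, r):
    """Largest n satisfying 1 + (q - 1)n + (n - 1)(q - 1)^2 <= q^r."""
    n = 1
    while 1 + (q - 1) * (n + 1) + n * (q - 1) ** 2 <= q ** r:
        n += 1
    return n

q = 8
known = {3: 10, 4: 65, 5: 133, 6: 650, 7: 4224}     # Table I, h = 3 column
for r, n_known in known.items():
    print(f"r={r}: known n = {n_known}, bound n <= {n_upper_bound(q, r)}")
```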

The generation of check bits for an SBC-DBD code is trivial when the parity-check matrix is expressed in a binary systematic form. Each check bit is obtained from an exclusive-or operation on the data bits at the positions of the ones in the corresponding row vector of the parity-check matrix. Exactly the same operation is required as in the generation of check bits for an SEC-DED code. Algorithms for error correction and error detection for SBC-DBD codes have been discussed in [6] and [15]. In general, more circuits are required than for SEC-DED codes.
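In other words, encoding amounts to a set of parity (exclusive-or) trees, one per row of the binary parity-check matrix. A minimal sketch with a made-up 3 × 4 binary matrix P, so that H = [P | I]; the matrix is an assumption for illustration only, not one of the SBC-DBD matrices constructed above:

```python
# Sketch of check-bit generation from a binary parity-check matrix in
# systematic form H = [P | I]: check bit i is the XOR of the data bits at the
# positions where row i of P contains a one.  P below is a small made-up
# example, not a matrix from the paper.

P = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
]

def encode(data_bits):
    """Append one check bit per row of P, each an XOR of selected data bits."""
    checks = []
    for row in P:
        parity = 0
        for coeff, bit in zip(row, data_bits):
            parity ^= coeff & bit
        checks.append(parity)
    return list(data_bits) + checks

def syndrome(codeword):
    """Recompute the checks and XOR them with the stored check bits."""
    k = len(P[0])
    data, checks = codeword[:k], codeword[k:]
    return [c ^ r for c, r in zip(encode(data)[k:], checks)]

word = encode([1, 0, 1, 1])
assert syndrome(word) == [0, 0, 0]          # error-free word gives a zero syndrome
word[2] ^= 1                                # flip one data bit
print(syndrome(word))                       # nonzero syndrome flags the error
```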

APPENDIX

Theorem 4: The code constructed from Construction 4 with parameters n = (n1)(n2) − 1 and r = (r1) + (r2) − 1 is an SBC-DBD code.

Proof: We have only to prove that any three columns of H are linearly independent over GF(q). We will show this by considering different cases for the distribution of three columns of H. Let h1, h2, and h3 be a set of three columns of H. We will first assume that a set of coefficients e1, e2, and e3 over GF(q) exists that satisfies the dependence relation (h1)(e1) + (h2)(e2) + (h3)(e3) = 0. We will then arrive at a contradiction to the assumptions.

Case 1, three columns in T(0, i, j): The first row of H1 is all ones at positions 2, 3, ..., n1, and the first row of H2 has a zero at the first position. Thus, from Theorem 2, any set of three columns is linearly independent over GF(q).

Case 2, three columns in T(1, i, j): The first row of H2 is all ones at positions 2, 3, ..., n2. From Theorem 2, any set of three columns is linearly independent over GF(q).

Case 3, one column from T(0, i, j) and two columns from T(1, i, j): The three columns are of the form

    | 0          x(i1)        x(i2)      |
    | X(i + 1)   X(i1)        X(i2)      |
    | Y(1)       Y(j1 + 1)    Y(j2 + 1)  |,

where x(i1) and x(i2) are either zero or one. First, let us assume that x(i1) = 0 and x(i2) = 1. Then i1 = 1. If the three columns are linearly dependent, then e3 = 0 from the first row of the three-column matrix. From the second row, we have X(i + 1) = X(i1)e for some nonzero element e of GF(q), which violates an assumption of H1. Next, let us assume that x(i1) = x(i2) = 1. From the first row, we have e2 = e3. If j1 = j2, then e1 = 0 from the third row. From the second row, we have i1 = i2, which means that the last two columns are identical. If j1 ≠ j2, then the first and the third rows form a submatrix of H2, which implies that the three columns are linearly independent over GF(q). Finally, let us assume that x(i1) = x(i2) = 0. Then i1 = i2 = 1, because there is only one column in H1 that contains a zero as the first element. The three-column matrix becomes

    | X(i + 1)   X(1)         X(1)       |
    | Y(1)       Y(j1 + 1)    Y(j2 + 1)  |.

We will treat this in Case 5.

Case 4, two columns from T(0, i, j) and one column from T(1, i, j): The three columns are of the form

    | 0           0            x         |
    | X(i1 + 1)   X(i2 + 1)    X(i)      |
    | Y(1)        Y(1)         Y(j + 1)  |,

where x is either zero or one. If x = 1 and the three columns are linearly dependent, then e3 = 0 from the first row. From the second row, we have i1 = i2, which implies that the first two columns are identical. If x = 0, then i = 1. The matrix reduces to Case 5.

Case 5: Consider the matrix

    | X(i + 1)   X(1)         X(1)       |
    | Y(1)       Y(j1 + 1)    Y(j2 + 1)  |.

If X(i + 1) = 0, the linear dependence of the columns is the same as that of the columns of the matrix

    | 0      1            1          |
    | Y(1)   Y(j1 + 1)    Y(j2 + 1)  |.

However, this matrix forms a submatrix of H2. Thus the columns are linearly independent. If X(i + 1) ≠ 0, we have X(i + 1)e1 = X(1)(e2 + e3) and Y(1)e1 = Y(j1 + 1)e2 + Y(j2 + 1)e3. If e2 = e3, then e1 = 0 and j1 = j2. Thus the last two columns are identical. If e2 ≠ e3, then X(i + 1) = X(1)e for some nonzero element e of GF(q), which contradicts an assumption of H1.  Q.E.D.


REFERENCES

[1] R. W. Hamming, "Error detecting and error correcting codes," Bell Syst. Tech. J., vol. 29, pp. 147-160, Apr. 1950.
[2] M. Y. Hsiao, "A class of optimal minimum odd-weight-column SEC-DED codes," IBM J. Res. Develop., vol. 14, pp. 395-401, July 1970.
[3] M. Y. Hsiao, W. C. Carter, J. W. Thomas, and W. R. Stringfellow, "Reliability, availability and serviceability of IBM computer systems: A quarter century of progress," IBM J. Res. Develop., vol. 25, pp. 453-465, Sep. 1981.
[4] D. P. Siewiorek and R. S. Swarz, The Theory and Practice of Reliable System Design. Bedford, MA: Digital Press, Digital Equipment Corp., 1982.
[5] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[6] C. L. Chen and M. Y. Hsiao, "Error-correcting codes for semiconductor memory applications: A state-of-the-art review," IBM J. Res. Develop., vol. 28, pp. 124-134, Mar. 1984.
[7] I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," J. Soc. Ind. Appl. Math., vol. 8, pp. 300-304, June 1960.
[8] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.
[9] W. W. Peterson and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed. Cambridge, MA: MIT Press, 1972.
[10] T. Kasami, S. Lin, and W. W. Peterson, "Some results on cyclic codes which are invariant under the affine group and their applications," Inform. Contr., vol. 11, pp. 475-496, Nov. 1967.
[11] J. K. Wolf, "Adding two information symbols to certain nonbinary BCH codes and some applications," Bell Syst. Tech. J., vol. 48, pp. 2405-2424, 1969.
[12] D. C. Bossen, "b-adjacent error correction," IBM J. Res. Develop., vol. 14, pp. 402-408, July 1970.
[13] S. J. Hong and A. M. Patel, "A general class of maximal codes for computer applications," IEEE Trans. Comput., vol. C-21, pp. 1322-1331, Dec. 1972.
[14] T. T. Dao, "SEC-DED nonbinary code for fault-tolerant byte-organized memory implemented with quaternary logic," IEEE Trans. Comput., vol. C-30, pp. 662-666, Sep. 1981.
[15] S. Kaneda and E. Fujiwara, "Single byte error correcting double byte error detecting codes for memory systems," IEEE Trans. Comput., vol. C-31, pp. 596-602, July 1982.
[16] C. L. Chen, "Byte-oriented error-correcting codes for semiconductor memory systems," in Proc. 14th Int. Conf. Fault-Tolerant Computing, June 1984, pp. 84-87.
[17] C. R. P. Hartmann and K. K. Tzeng, "Generalization of the BCH bound," Inform. Contr., vol. 20, pp. 489-498, 1972.
[18] H. Itoh and M. Nakamichi, "SbEC-DbED codes derived from experiments on a computer for semiconductor memory systems," Trans. IECE Japan, vol. E66, pp. 741-748, Aug. 1983.