

Decoding Algebraic-Geometric Codes up to the Designed Minimum Distance

Gui-Liang Feng and T.R.N. Rao, Fellow, IEEE

Abstract- A simple decoding procedure for algebraic-geometric codes C_Ω(D, G) is presented. This decoding procedure is a generalization of Peterson's decoding procedure for the BCH codes. It can be used to correct any ⌊(d* − 1)/2⌋ or fewer errors with complexity O(n³), where d* is the designed minimum distance of the algebraic-geometric code and n is the code length.

Index Terms- Error-correcting codes, algebraic-geometric codes, decoding procedure, correcting ⌊(d* − 1)/2⌋ errors.

I. INTRODUCTION

THE MOST important development in the theory of error-correcting codes in recent years is the introduction of methods from algebraic geometry to construct linear codes. These so-called algebraic-geometric codes were introduced by Goppa. In 1982, Tsfasman, Vlăduţ, and Zink [1] obtained an extremely exciting result: the existence of a sequence of codes that exceeds the Gilbert-Varshamov bound [2]. For this paper, they received the IEEE Information Theory Group Paper Award for 1983. Since then, many papers dealing with algebraic-geometric codes have followed [3]-[10].

Good code constructions are very important. Moreover, it is desirable and important to derive simple decoding procedures which can correct as many errors as possible. Justesen et al. [11] first presented a decoding procedure for codes from nonsingular plane algebraic curves. This decoding procedure can only correct ⌊(d* − g − 1)/2⌋ or fewer errors, where d* is the designed minimum distance of the code and g is the genus of the curve involved in the construction. Skorobogatov and Vlăduţ [12] generalized their ideas and gave a decoding procedure which can correct any ⌊(d* − g − 1)/2⌋ or fewer errors for codes from arbitrary algebraic curves. In their paper, Skorobogatov and Vlăduţ also presented a modified algorithm, correcting more errors, but in general not up to the designed minimum distance. Using profound results from algebraic geometry, Pellikaan [13] gave a decoding procedure which decodes up to ⌊(d* − 1)/2⌋ errors. However, his decoding procedure is very complex and is not completely effective. Recently, Justesen et al. [14] improved on their original decoding procedure in several ways and gave a new decoding procedure for codes from arbitrary regular plane curves, which can decode up to ⌊(d* − g/2 − 1)/2⌋ errors.

Manuscript received November 5, 1991; revised May 20, 1992. This work was supported in part by the Office of Naval Research under Grant N00014-91-J-1067. This work was presented in part at the 9th International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, New Orleans, LA, October 1991.

The authors are with the Center for Advanced Computer Studies, University of Southwestern Louisiana, Lafayette, LA 70504.

IEEE Log Number 9203437.

In this paper, we present a fairly simple decoding procedure capable of decoding up to ⌊(d* − 1)/2⌋ errors. The improvement is obtained by using a form of majority scheme to find the unknown syndromes in the well-known algorithm. The procedure can be implemented easily in hardware or software.

The paper is organized as follows. In the next section, for easy reference, we include a fundamental iterative algorithm (FIA), which is very similar to Gaussian elimination and can be used to easily derive the Berlekamp-Massey algorithm and the generalized Berlekamp-Massey algorithm [16]. We then modify the FIA and give some related properties, which will be used in other sections. In Section III, a new decoding procedure for algebraic-geometric codes C_Ω(D, G) with G = mQ is presented. To make this decoding procedure easy to understand, an example is shown in Section IV. Finally, some conclusions are given in Section V.

II. FUNDAMENTAL ITERATIVE ALGORITHM

In this section, the fundamental iterative algorithm (FIA) [16] is modified. This modified algorithm is our main algorithm for decoding algebraic-geometric codes up to the designed minimum distance. To a certain extent it is similar to the Berlekamp-Massey algorithm, which is the main algorithm for decoding BCH codes up to the designed minimum distance. For easy reference, the FIA is described briefly in the following. This algorithm is for finding the smallest initial set of dependent columns in a matrix over any field F. That is, let

A = [ a_{1,1}  a_{1,2}  ...  a_{1,N}
      a_{2,1}  a_{2,2}  ...  a_{2,N}
      ...
      a_{M,1}  a_{M,2}  ...  a_{M,N} ]

be such a matrix; the problem is to find the smallest l and c_1, ..., c_l such that

a_{i,l+1} + c_1 · a_{i,l} + ... + c_l · a_{i,1} = 0, for i = 1, 2, ..., M.   (2.1)

For each column j, let C^{(i-1,j)}(x) = Σ_h c_h^{(i-1,j)} x^h, where c_0^{(i-1,j)} = 1, be the polynomial with the property that

[C^{(i-1,j)}(x) · a^{(h)}(x)]_j = 0, for h = 1, 2, ..., i − 1,   (2.2)


where a^{(h)}(x) = 1 + a_{h,1}x + a_{h,2}x² + ... + a_{h,N}x^N and [f(x)]_j denotes the coefficient of x^j in f(x). Accordingly, the initial polynomial for column j is designated C^{(0,j)}(x), with C^{(0,1)}(x) = 1 as the initial polynomial for the first column. Let

d_{i,j} = [C^{(i-1,j)}(x) · a^{(i)}(x)]_j   (2.3)

be the discrepancy at row i and column j. For some column j, if d_{i,j} = 0 for i = 1, 2, ..., r − 1, then we have C^{(0,j)}(x) = C^{(1,j)}(x) = ... = C^{(r-1,j)}(x). Suppose d_{r,j} ≠ 0 and there is no earlier column u, 1 ≤ u < j, whose final discrepancy lies in row r. Then we define the final polynomial of column j, C^{(j)}(x) = C^{(r-1,j)}(x), at the location of a_{r,j} and place a cross "×" there to indicate that the discrepancy is nonzero (we call it the final discrepancy of column j; the discrepancies at the locations of a_{i,j} for i < r are all zero), and then move on to the next column. We start examining the top element of the next column with C^{(0,j+1)}(x) = C^{(j)}(x) = C^{(r-1,j)}(x). Otherwise, there is a column u, 1 ≤ u < j, with final polynomial C^{(u)}(x) = C^{(r-1,u)}(x) and final discrepancy d_{r,u} in row r. Then C^{(r,j)}(x) can be obtained from the following equation:

C^{(r,j)}(x) = C^{(r-1,j)}(x) − (d_{r,j}/d_{r,u}) · C^{(u)}(x) · x^{j-u}.   (2.4)

The following can easily be seen.

Lemma 1: There is an "×" in column j if and only if column j is linearly independent of its previous columns.

Thus, beginning with the first column, we examine the elements in successive columns of the matrix A, one by one, from the top row towards the bottom row. If the problem has a solution for A, then C^{(M,l+1)}(x) is a solution. From [16], it is easily seen that if d_{i,j} = 0, then the subvector of the top i components of column j is a linear combination of the subvectors of the top i components of its previous j − 1 columns. Hereinafter, it is said that column j is a partial linear combination of its previous columns with the top i components. The linear-combination coefficients are the coefficients of C^{(i,j)}(x).

Thus, we have the following iterative algorithm, which is called the fundamental iterative algorithm (FIA). Two storage tables, D and C, are set up for this algorithm. Table D stores the discrepancies d_{r,j} (the final discrepancy of column j), while Table C stores the corresponding polynomials C^{(j)}(x) (the final polynomial of column j).

Fundamental Iterative Algorithm:

Step 1) Empty Tables D and C; 1 → j, 1 → r, 1 → C^{(j)}(x).

Step 2) Compute d_{r,j} = [C^{(j)}(x) · a^{(r)}(x)]_j.

Step 3) IF d_{r,j} = 0, THEN
  a) IF r = M, THEN j − 1 → l, C^{(j)}(x) → C(x), STOP; we have a solution.
  b) Otherwise, r + 1 → r, and return to Step 2).

Step 4) IF d_{r,j} ≠ 0, THEN
  a) IF there exists a d_{r,u} ∈ D for some 1 ≤ u < j, THEN
     C^{(j)}(x) − (d_{r,j}/d_{r,u}) · C^{(u)}(x) · x^{j-u} → C^{(j)}(x),
     and return to Step 3).
  b) Otherwise, d_{r,j} is stored in D, C^{(j)}(x) is stored in C, and an "×" is marked at row r and column j; C^{(j)}(x) → C^{(j+1)}(x). IF j < N, THEN j + 1 → j, 1 → r, and return to Step 2). Otherwise STOP, as the problem has no solution.
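The FIA above translates almost directly into code. The following is a minimal sketch over the rationals (using fractions.Fraction; any field with exact arithmetic would do). The function name fia and the data layout are illustrative choices, not notation from the paper.

```python
from fractions import Fraction

def fia(A):
    """Fundamental iterative algorithm (sketch).

    Finds the smallest l such that column l (0-based) of A is a linear
    combination of columns 0..l-1, i.e. coefficients c_1, ..., c_l with
    A[i][l] + c_1*A[i][l-1] + ... + c_l*A[i][0] == 0 for every row i.
    Returns (l, [c_1, ..., c_l]) or None if no initial dependent column exists.
    """
    M, N = len(A), len(A[0])
    stored = {}           # Table D: row r -> (final discrepancy, final polynomial, column)
    C = [Fraction(1)]     # current polynomial C^(j)(x); C[h] multiplies column j-h
    j = 0
    while j < N:
        r = 0
        while r < M:
            # Step 2: discrepancy d_{r,j} = [C^(j)(x) a^(r)(x)]_j
            d = sum((C[h] * A[r][j - h] for h in range(min(len(C), j + 1))),
                    Fraction(0))
            if d == 0:
                r += 1                          # Step 3b
            elif r in stored:                   # Step 4a: cancel the discrepancy
                d_u, C_u, u = stored[r]
                shift = j - u
                C.extend([Fraction(0)] * (len(C_u) + shift - len(C)))
                for h, cu in enumerate(C_u):
                    C[h + shift] -= (d / d_u) * cu
            else:                               # Step 4b: mark an "x" at (r, j)
                stored[r] = (d, list(C), j)
                break                           # move on to the next column
        else:
            return j, C[1:]                     # Step 3a: column j is dependent
        j += 1
    return None                                 # every column is independent

# Example: the third column equals 2*(first) - (second), so the call returns
# l = 2 with coefficients c_1 = 1, c_2 = -2.
print(fia([[1, 2, 0],
           [3, 6, 0],
           [5, 7, 3]]))      # -> (2, [Fraction(1, 1), Fraction(-2, 1)])
```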

Now let us consider another related problem. Let S′ be a matrix of the following form, where the entries marked @ and # are unknown and the other entries are known:

[Matrix S′: an array (9 × 11 in the illustration) whose entries s_{i,j} are known above and to the left of a staircase boundary; each row and each column contains at most one unknown marked @ on the boundary, and every entry beyond the boundary is marked #.]

In each column j, if s_{i,j} is an @, then the values s_{u,j} are known for u < i and the s_{v,j} are all # for v > i; if there is no @ and s_{i,j} is the first #, then the entries above s_{i,j} are known and the entries below s_{i,j} are all #. Similarly, for each row i, if s_{i,j} is an @, then the values s_{i,u} are known for u < j and the s_{i,v} are all # for v > j; if there is no @ and s_{i,j} is the first #, then the entries to the left of s_{i,j} are known and the entries to its right are all #.

Suppose column j is a partial linear combination of its previous columns with the top i − 1 components and s_{i,j} is an @; we want to know whether there is a unique value of @ at (i, j) such that column j is a partial linear combination of its previous columns with the top i components. If there is, how is this unique value determined? The key step in decoding some linear error-correcting codes can be reduced to this problem


(see the next section): finding a partial linear combination with the top R components for a given integer R (if there is one), and finding all @ which are uniquely determined by the above requirement. In order to solve this problem, we modify the FIA as follows:

2') The algorithm stops when C^{(R,j)}(x) or C^{(N)}(x) is obtained.

2") Once C ( i - l > j ) ( x ) is obtained, where s;,j is an @ or an #, then the algorithm defines the final polynomial of column j , C( j ) ( x ) = d i - ' > j ) ( x ) , and moves to the top element of the next column, namely, s l , j + l .

Thus, after applying the modified FIA, the matrix D = (d_{i,j}), called the discrepancy matrix, is obtained. The discrepancy matrix looks like the following:

[Discrepancy matrix D for the matrix S′ above: 0 entries where discrepancies vanish, "×" marks at the final discrepancies, and the @ and # entries carried over from S′.]

From this matrix, it is easy to see the following.

2a) If there is no "×" in column j, then column j is a partial linear combination of its previous columns with the top i − 1 components, where i is the smallest integer such that an @ or # is at (i, j). For the above matrix, i = 6, j = 6; i = 4, j = 7; and so on.

2b) Suppose s_{i,j} is an @. This @ can be uniquely determined such that column j is a partial linear combination of its previous columns with the top i components, if and only if there is no "×" at the locations s_{i,u} for 1 ≤ u < j. For example, the @ at (4, 7) can be uniquely determined.

2c) If the value of the @ at (i, j) is uniquely determined as in 2b), then we have

Σ_{h=0}^{j−1} c_h^{(i−1,j)} · s_{i,j−h} = 0,

where c_0^{(i−1,j)} = 1; that is,

s_{i,j} = −Σ_{h=1}^{j−1} c_h^{(i−1,j)} · s_{i,j−h}.

Now we formally modify the FIA to solve this problem. We need two more tables, E and F, which store these uniquely determined values of @ and their corresponding C^{(i,j)}(x), respectively. Let s^{(r)}(x) = 1 + s_{r,1}x + s_{r,2}x² + ... + s_{r,N}x^N.

A Modified Fundamental Iterative Algorithm (MFIA):

Step 1) Empty Tables D, C, E, and F; 1 → j, 1 → r, and 1 → C^{(j)}(x).

Step 2) Compute d_{r,j} = [C^{(j)}(x) · s^{(r)}(x)]_j.

Step 3) IF d_{r,j} = 0, THEN
  a) IF r = R, OR IF j = N and s_{r+1,N} is an @ or a #, THEN STOP (according to (2'));
  b) IF s_{r+1,j} is an @, r < R, and j < N, check whether there is no d_{r+1,u} ∈ D for 1 ≤ u < j; when this is true, calculate −Σ_{h=1}^{j−1} c_h^{(j)} · s_{r+1,j−h}; this value and C^{(j)}(x) are stored in E and F, respectively. Then C^{(j)}(x) → C^{(j+1)}(x), j + 1 → j, 1 → r, and return to Step 2) (according to (2'') and 2a)-2c));
  c) IF s_{r+1,j} is a #, r < R, and j < N, then C^{(j)}(x) → C^{(j+1)}(x), j + 1 → j, 1 → r, and return to Step 2) (according to (2''));
  d) Otherwise, r + 1 → r and return to Step 2).

Step 4) IF d_{r,j} ≠ 0, THEN
  a) IF there exists a d_{r,u} ∈ D for some 1 ≤ u < j, THEN
     C^{(j)}(x) − (d_{r,j}/d_{r,u}) · C^{(u)}(x) · x^{j-u} → C^{(j)}(x),
     and return to Step 3);
  b) Otherwise, d_{r,j} is stored in D, C^{(j)}(x) is stored in C, and an "×" is marked at row r and column j; C^{(j)}(x) → C^{(j+1)}(x), j + 1 → j, 1 → r, and return to Step 2).

When the algorithm stops, either a partial linear combination of column j with the top R components has been found, or all values of @ which are uniquely determined by partial linear combinations have been found. Obviously, the complexity of this algorithm is O(MN²).
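The only genuinely new computation in the MFIA is the candidate value of Step 3b), i.e., the formula of property 2c). A minimal sketch over the rationals follows (names are illustrative; over GF(2^m) the leading minus sign is immaterial):

```python
def candidate_value(C, row, j):
    """Unique candidate for the unknown @ at (r+1, j), per Step 3b):
        s_{r+1,j} = - sum_{h=1}^{j-1} c_h * s_{r+1, j-h},
    where C[h] = c_h (with C[0] == 1) are the coefficients of C^(j)(x)
    and row[k-1] holds the known entry s_{r+1,k}.  Column index j is 1-based."""
    return -sum(C[h] * row[j - h - 1] for h in range(1, min(len(C), j)))

# Example: with C^(j)(x) = 1 + 2x - x^2 and known entries s_{r+1,1} = 3,
# s_{r+1,2} = 5, the candidate for the @ in column 3 is -(2*5 + (-1)*3) = -7.
print(candidate_value([1, 2, -1], [3, 5], 3))   # -> -7
```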

III. A NEW DECODING PROCEDURE

In this section, we consider the general algebraic-geometric codes C_Ω(D, G) from X, where X is a nonsingular projective curve over F_q, D is the divisor P_1 + ... + P_n, G = m·Q, and Q ≠ P_i for 1 ≤ i ≤ n. The linear code C_Ω(D, G) of length n over F_q is the image of the linear map α*: Ω(G − D) → F_q^n defined by

α*(ω) := (Res_{P_1}(ω), Res_{P_2}(ω), ..., Res_{P_n}(ω)).

From [15], the parameters of the code are

d* = m − 2g + 2,   n − k = m − g + 1,   t = ⌊(d* − 1)/2⌋ = ⌊(m + 1)/2⌋ − g.
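For the example worked out in Section IV (m = 23, g = 6, n = 64), these formulas give the values used there:

```latex
\[
  d^* = m - 2g + 2 = 23 - 12 + 2 = 13, \qquad
  t = \left\lfloor \frac{d^* - 1}{2} \right\rfloor
    = \left\lfloor \frac{m + 1}{2} \right\rfloor - g = 12 - 6 = 6 .
\]
```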

We may assume that t > 0, or m > 2g. Recall that a number o_i is a nongap for Q if L(o_i Q) ≠ L((o_i − 1)Q). Then a function φ_{o_i} exists with pole order o_i at Q. As is well known, the nongaps satisfy the following.

Lemma 2: o_0 = 0,

0 < o_1 < o_2 < ... < o_{g−1} < 2g,

and o_i = i + g, for i = g, g + 1, ..., m − g.
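As an illustration of Lemma 2, the nongaps for the Hermitian-curve example of Section IV (where x and y have pole orders 4 and 5 at Q, and g = 6) can be enumerated directly. The check below is a sketch with names of my own choosing:

```python
# Nongaps at Q for the Hermitian example of Section IV: the pole orders of the
# monomials x^a y^b, where x and y have pole orders 4 and 5.
LIMIT = 24
nongaps = sorted({4 * a + 5 * b
                  for a in range(LIMIT) for b in range(LIMIT)
                  if 4 * a + 5 * b <= LIMIT})
gaps = [v for v in range(1, LIMIT + 1) if v not in nongaps]
print(nongaps)  # [0, 4, 5, 8, 9, 10, 12, 13, 14, ...]  (o_i = i + g for i >= g)
print(gaps)     # [1, 2, 3, 6, 7, 11]  -- exactly g = 6 gaps, all below 2g = 12
```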


The decoding procedure works with the (m − g + 1) × (m − g + 1) syndrome matrix S = (s_{i,j}), whose entry s_{i,j} is the syndrome associated with the pole order o_{i−1} + o_{j−1}:

S = [ s_{1,1}        s_{1,2}        s_{1,3}        ...  s_{1,m−g}        s_{1,m−g+1}
      s_{2,1}        s_{2,2}        s_{2,3}        ...  s_{2,m−g}        s_{2,m−g+1}
      ...
      s_{m−g,1}      s_{m−g,2}      s_{m−g,3}      ...  s_{m−g,m−g}      s_{m−g,m−g+1}
      s_{m−g+1,1}    s_{m−g+1,2}    s_{m−g+1,3}    ...  s_{m−g+1,m−g}    s_{m−g+1,m−g+1} ]

Suppose that the values of s_{i,j} in S are all known; then we have the following theorem.

Theorem 1: If column j of S is a partial linear combination of its previous columns with the top ⌊(m + 1)/2⌋ components, and if the coefficients are a_i for 1 ≤ i ≤ j − 1, then column j is linearly dependent on its previous columns, and the error rational points are the roots of

f_j − Σ_{i=1}^{j−1} a_i · f_i = 0.   (3.3)

Proof: See [12, Theorem 1].  □

Unfortunately, the values of s_{m+1}, ..., s_{m+g} are unknown; that is, the values of s_{i,j} for o_{i−1} + o_{j−1} = m + 1, m + 2, ..., m + g are unknown, so (3.3) may not be found from the matrix S. Thus, the key problem in decoding algebraic-geometric codes is finding the real values of s_i for i = m + 1, m + 2, ..., m + g and then (3.3). This problem can be solved by the MFIA developed in Section II. In order to find the values of s_i for i = m + 1, m + 2, ..., m + g, our decoding procedure finds them iteratively, i.e., first s_{m+1}, then s_{m+2}, and so on, until s_{m+g} is found. In the following, we describe how to find s_{m+w} if we know s_i for i = 0, o_1, ..., m, ..., m + w − 1, where 1 ≤ w ≤ g.

Suppose s_{m+1}, ..., s_{m+w−1} are found and s_{m+w} is still unknown, where 1 ≤ w ≤ g. Now we want to find s_{m+w}. Let S* be the rewritten S, where s_p is substituted for s_{i,j} (o_{i−1} + o_{j−1} = p) for p ≤ m + w − 1, @ is substituted for s_{u,v} (o_{u−1} + o_{v−1} = m + w), i.e., for s_{m+w}, and # is substituted for s_{k,h} (o_{k−1} + o_{h−1} > m + w), i.e., for s_q with q > m + w. Obviously, S* satisfies the requirements on S′ in Section II. We will use the MFIA to find the value of @, i.e., the value of s_{m+w}.
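A small sketch of the construction of S* just described (the function and variable names are mine, not the paper's): given the nongap sequence o_0, ..., o_{m−g} and the index m + w currently being sought, each entry is classified as known, '@', or '#'.

```python
def star_pattern(nongaps, m, w):
    """Symbolic pattern of S*: entry (i, j) corresponds to the syndrome of
    pole order nongaps[i] + nongaps[j] (0-based indices here)."""
    size = len(nongaps)                       # = m - g + 1
    def mark(i, j):
        p = nongaps[i] + nongaps[j]
        if p <= m + w - 1:
            return 'known'                    # s_p is already available
        return '@' if p == m + w else '#'     # '@' stands for s_{m+w}
    return [[mark(i, j) for j in range(size)] for i in range(size)]

# For the example of Section IV (m = 23, w = 1) this marks the entries with
# o_{i-1} + o_{j-1} = 24 as '@' and everything of higher pole order as '#'.
```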

Before describing how to find the value of s_{m+w}, we first calculate the number of @, i.e., the number of occurrences of s_{m+w} in S*. The next lemma is useful.

Lemma 3: Let A_w be the number of nongaps in [1, w − 1] for 1 ≤ w ≤ g; then

A_w ≤ ⌊(w − 1)/2⌋.   (3.4)

Proof: Obviously, it is true for w = 1. For 2 ≤ w ≤ g, we consider the following two cases.

Case 1) If w is a gap, then we consider two subcases.

a) w is odd. If s ∈ [1, w − 1], then w − s ∈ [1, w − 1] and s ≠ w − s; s and w − s are not both nongaps. Thus, (3.4) is true.

b) w is even. w/2 ∈ [1, w − 1] cannot be a nongap. If s ∈ [1, w − 1] and s ≠ w/2, then w − s ∈ [1, w − 1], s ≠ w − s, and s and w − s are not both nongaps. Thus, A_w ≤ (w − 2)/2 and (3.4) is true.

Case 2) If w is a nongap, then from Lemma 2 and 2 ≤ w ≤ g, we can assume that w, w + 1, ..., w + p − 1 are nongaps and w + p is a gap, where 1 ≤ p < g. From the previous proof, A_{w+p} ≤ ⌊(w + p − 1)/2⌋. On the other hand, A_{w+p} = A_w + p, so we have A_w + p ≤ ⌊(w + p − 1)/2⌋, and (3.4) is true.  □

Thus, for the number of @, we have the following theorem.

Theorem 2: The number of @ in S* is at least m − 2g, that is, at least d* − 2.

Proof: For each 1 ≤ j ≤ m − g + 1, from (3.1) there is an @ in column j, if and only if

m + w − o_{j−1} ∈ {o_0, o_1, ..., o_{m−g}}.   (3.5)

Equivalently, there is no @ in column j, if and only if

m + w − o_{j−1} ∉ {o_0, o_1, ..., o_{m−g}}.   (3.6)

Let us calculate the number of j which satisfy (3.6). We consider the following two cases.

a) If m + w − o_{j−1} > m, i.e., o_{j−1} < w, then (3.6) is satisfied. From Lemma 3, the number of such j is 1 + A_w (j = 1 satisfies this condition).

b) If m + w − o_{j−1} ≤ m, i.e., o_{j−1} ≥ w, then the number of the values m + w − o_{j−1} which are gaps is B_w, the number of gaps in [w, 2g − 1].

From this discussion, the number of columns in which there is no @ is 1 + A_w + B_w. Since the number of nongaps in [1, w − 1] is A_w, the number of gaps in [1, w − 1] is w − 1 − A_w. From Lemma 2, we have

w − 1 − A_w + B_w = g.   (3.7)

Thus, from Lemma 3, we have

1 + A_w + B_w = 1 + A_w + g − w + 1 + A_w ≤ 1 + g;

namely, the number of @ is at least (m − g + 1) − (g + 1) = m − 2g.  □

Now we are going to explain how to use the MFIA to find the value of s_{m+w}. Some useful facts are introduced as follows.

Lemma 4: If (A) there is a unique value of this @ such that column j of S* is a partial linear combination of its previous columns with the top i components, and (B) column j of S is linearly dependent on its previous columns, then the unique value must be equal to s_{m+w}.

Conversely, from Lemma 1 and Lemma 4, we have the following.

Lemma 5: If (A) is true and (C) the unique value is not equal to s_{m+w}, then column j of S is linearly independent of its previous columns, and when the FIA is applied to S, there is an "×" at (i, j).

For convenience, if (A) is true, then the unique value is called a candidate value for s_{m+w}, or simply, a candidate. Thus, in order to determine s_{m+w}, our first objective is to find all candidates. In Section II, we have developed the MFIA to find all candidates. If there is an "×" at (i, j), we say this "×" affects the @ in row i and the @ in column j. When the MFIA is applied to S*, we can see the following properties.

a) If there is an "×" in column j, then column j of S is linearly independent of its previous columns.

b) If an @ is at (i, j), then it can be uniquely determined by a partial linear combination of its previous columns with the top i components, if and only if no "×" affects it.

c) An "×" at (1, 1) does not affect any @. An "×" in the first row and not in the first column affects at most one @. An "×" in the first column and not in the first row affects at most one @. An "×" neither in the first row nor in the first column affects at most two @.

Now we have the following theorem.

Theorem 3: Applying the MFIA to the matrix S*, there is at least one @ which can be uniquely determined by a partial linear combination; namely, there is at least one candidate value for s_{m+w} calculated by Step 3b).

Proof: Let t = ⌊(d* − 1)/2⌋ and let v ≤ t be the number of errors. There are at most v linearly independent columns in X as well as in S. Suppose there are p "×" in the discrepancy matrix D after applying the MFIA to S*. From Property a), p ≤ v ≤ t. We consider the following three cases.

1) If an "×" is at (1, 1), then it does not affect any @, and the other p − 1 "×" affect at most 2(p − 1) @ from Property c). Thus, at most 2p − 2 @ are affected.

2) If a "×" in the first column is not in the first row, then it affects at most one @ from Property c). But there must be another "×" in the first row; this "×" affects at most one @, too (if the received vector has v errors for 1 ≤ v ≤ t, then the known syndromes are not all zero and there must be one "×" in the first row and one in the first column, respectively). Thus, the other p − 2 "×" can affect at most 2(p − 2) @. Hence, at most 2p − 2 @ in total are affected.

3) If p = 1, from the discussion in 2), this "×" must be at (1, 1). Thus, no @ is affected, from Property c); that is, 2p − 2 (in this case, 2p − 2 = 0) @ are affected.

From Theorem 2, the number of @ is at least d* − 2, so at least d* − 2 − (2p − 2) ≥ 1 @ is not affected, and it is a candidate from Property b).  □

Generally speaking, after applying the MFIA to S* there is often more than one candidate, and they may not all be correct. Fortunately, s_{m+w} can easily be determined from all the candidates by the following theorem.

Theorem 4: After applying the MFIA to S*, the number of correct candidates is greater than the number of incorrect candidates.

Proof: Suppose that there are p "×" in the discrepancy matrix D after applying the MFIA to S*. From the proof of Theorem 3, we can obtain at least d* − 2 − (2p − 2) candidates. If a candidate is equal to s_{m+w}, it is called a correct candidate; otherwise it is called an incorrect candidate. Now we calculate the number of incorrect candidates.

From Property b), the columns of S which correspond to those columns of S* containing an "×" are linearly independent of their previous columns. The number of these columns is p. Since the total number of linearly independent columns of S is v or less (because the rank of X is v or less), and from Lemma 5, the number of incorrect candidates is at most v − p. Therefore, at least d* − 2 − 2(p − 1) − (v − p) = d* − v − p candidates are correct, and the number of correct candidates is greater than the number of incorrect candidates (d* − v − p > v − p). The proof is completed.  □

Once s_{m+w} is found, S* is modified: the correct value of s_{m+w} is substituted for @, and @ is substituted for s_{m+w+1}. The number of occurrences of s_{m+w+1}, that is, of new @, is again d* − 2 or more. If (3.3) has not yet been found, then for the same reason and in the same way, s_{m+w+1} can be found.

However, it is easily seen that applying the MFIA to the new S* is equivalent to modifying the discrepancy matrix D as follows: substitute the value of s_{m+w} for all @ that are not candidates and eliminate the discrepancy at such a place if possible, that is, if there is no "×" in its column and there is one "×" in its row; substitute 0 for each @ that is a correct candidate and "×" for each @ that is an incorrect candidate (by Lemma 4 and Lemma 5); and place @ at the positions (i, j) at which s_{m+w+1} is located. The complexity of modifying the discrepancy matrix D is O((m − g + 1)²). Thus, in our decoding procedure the construction of S* has complexity O((m − g + 1)²n). The complexity of the algorithm for determining the value of s_{m+1} is O((m − g + 1)³). To find s_{m+2}, s_{m+3}, ..., s_{m+g}, at most g − 1 modifications of D are required (for some error patterns, (3.3) can be found early), which has complexity O((g − 1)(m − g + 1)²). Therefore, to find all s_i for i = m + 1, ..., m + g, and (3.3), the complexity is O((m − g + 1)²n).
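Theorem 4 is what makes the unknown syndrome recoverable in practice: the correct value is simply the one proposed by the majority of the candidates. A minimal sketch (names illustrative):

```python
from collections import Counter

def choose_syndrome(candidates):
    """Pick s_{m+w} as the most frequent candidate value; Theorem 4 guarantees
    that correct candidates outnumber all incorrect ones combined."""
    value, _count = Counter(candidates).most_common(1)[0]
    return value

# In the example of Section IV, the candidates for s_24 are four copies of
# alpha^8 and three copies of alpha^14, so the chosen value is alpha^8.
```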

Our decoding procedure can be outlined as follows.

1) Calculate the syndromes s_i from the received vector for i = o_0, o_1, ..., o_{m−g}.
2) Determine s_i for i = m + 1, ..., m + g and (3.3).
3) Find the roots of (3.3) among the rational points.
4) Solve a system of linear equations and obtain the error locations and error magnitudes at the same time.

Obviously, this decoding procedure is a generalization of Peterson's decoding procedure for the BCH codes. It is easily seen that the complexity of this decoding procedure is O(n³), where m ≈ n.
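The outline translates into the following skeleton; the four callables are stand-ins for the steps described above (hypothetical helpers, not an API defined in the paper), so only the control flow is meant literally.

```python
def decode(received, known_syndromes, determine_locator, rational_points, solve_magnitudes):
    """Skeleton of the outlined procedure; the four callables stand for the
    steps described in the text (hypothetical helpers, not the paper's API)."""
    s = known_syndromes(received)                       # 1) syndromes s_i from the received vector
    locator, s = determine_locator(s)                   # 2) unknown syndromes via MFIA + voting, giving (3.3)
    positions = [k for k, P in enumerate(rational_points)
                 if locator(P) == 0]                    # 3) roots of (3.3) among the rational points
    magnitudes = solve_magnitudes(received, positions)  # 4) error magnitudes from the parity checks
    return positions, magnitudes
```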

IV. AN EXAMPLE

In this section we show this simple decoding procedure through an example. Let us consider the plane curve defined by the equation f(X, Y, Z) = X⁵ + Y⁴Z + YZ⁴ = 0 over GF(2⁴). This curve is called the Hermitian curve. With Q = (0 : 1 : 0), x = X/Z has pole order 4 and y = Y/Z has pole order 5. From [15] we know that the Hermitian curve has genus g = 6 and 65 rational points in GF(2⁴). Define the code with length 64, i.e., D is the sum of all finite rational points, and G = 23Q. The linear code C_Ω(D, G) of length 64 over GF(2⁴) is the image of the linear map α*: Ω(G − D) → GF(2⁴)⁶⁴ defined by α*(ω) := (Res_{P_1}(ω), ..., Res_{P_64}(ω)).

From the Riemann-Roch theorem, the designed minimum distance is d* = deg(G) − 2g + 2 = 13. Any six or fewer errors can be corrected. Since deg(G) − g + 1 = 18, let f_1, f_2, ..., f_18

be a basis of L(G). Then,

H = [ f_1(P_1)    f_1(P_2)    ...  f_1(P_64)
      f_2(P_1)    f_2(P_2)    ...  f_2(P_64)
      ...
      f_18(P_1)   f_18(P_2)   ...  f_18(P_64) ]   (4.1)

is a parity check matrix for C_Ω(D, G). A basis of L(G) is

f_1 = 1 (0);        f_2 = x (4);        f_3 = y (5);
f_4 = x² (8);       f_5 = xy (9);       f_6 = y² (10);
f_7 = x³ (12);      f_8 = x²y (13);     f_9 = xy² (14);
f_10 = y³ (15);     f_11 = x⁴ (16);     f_12 = x³y (17);
f_13 = x²y² (18);   f_14 = xy³ (19);    f_15 = y⁴ (20);
f_16 = x⁴y (21);    f_17 = x³y² (22);   f_18 = x²y³ (23);   (4.2)

where the number in parentheses indicates the pole order of the corresponding function at Q.
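The claim that the curve has 65 rational points (64 affine points plus Q) is easy to check by brute force. The sketch below builds GF(2⁴) with the primitive polynomial a⁴ + a + 1 (my choice of representation; any primitive polynomial gives the same count) and counts the affine solutions of x⁵ + y⁴ + y = 0.

```python
def gf16_mul(a, b):
    """Multiply two GF(2^4) elements written as 4-bit polynomials over GF(2),
    reducing modulo the primitive polynomial a^4 + a + 1 (an assumed choice)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13          # a^4 = a + 1
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf16_mul(r, a)
    return r

# Affine points of the Hermitian curve x^5 + y^4 + y = 0 over GF(2^4)
# (addition in characteristic 2 is XOR).
points = [(x, y) for x in range(16) for y in range(16)
          if gf16_pow(x, 5) ^ gf16_pow(y, 4) ^ y == 0]
print(len(points))   # 64 -- together with Q = (0 : 1 : 0), 65 rational points
```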

For convenience, let φ_0 = f_1, φ_4 = f_2, φ_5 = f_3, φ_8 = f_4, φ_9 = f_5, φ_10 = f_6, and φ_i = f_{i−5} for i = 12, 13, ..., 23. Thus,

ord_Q(φ_i) = −i, for i = 0, 4, 5, 8, 9, 10, 12, 13, ..., 23.

For each i, there may be many φ such that ord_Q(φ) = −i. But it is certain that these φ must be linear combinations of φ_0, φ_4, φ_5, ..., φ_i. For example, ord_Q(x⁵) = −20. Since X⁵ + Y⁴Z + YZ⁴ = 0, we have

x⁵ = y⁴ + y,   x⁶ = xy⁴ + xy,

and so on.

Let φ_20 = y⁴ and φ′_20 = x⁵, φ_24 = xy⁴ and φ′_24 = x⁶, and so on. Then, from the previous equations, we have

φ′_20 = φ_20 + φ_5,   φ′_24 = φ_24 + φ_9.   (4.3)

Let u = (u_1, u_2, ..., u_64) be a received word, e = (e_1, e_2, ..., e_64) be an error vector, and c = (c_1, c_2, ..., c_64) be a codeword. Thus, we have u = e + c. Then we define the syndromes

s_i := (u, α*φ_i) = Σ_{j=1}^{64} u_j φ_i(P_j) = Σ_{j=1}^{64} e_j φ_i(P_j),  for i = 0, 4, 5, 8, 9, 10, 12, 13, ..., 23,   (4.4)

s′_i := (u, α*φ′_i) = Σ_{j=1}^{64} u_j φ′_i(P_j) = Σ_{j=1}^{64} e_j φ′_i(P_j),  for i = 20.

Let

s_i = Σ_{j=1}^{64} e_j φ_i(P_j)  and  s′_i = Σ_{j=1}^{64} e_j φ′_i(P_j),  for i ≥ 24,   (4.5)


where φ_i and φ′_i have pole order −i at Q. Certainly, s_i and s′_i for i ≥ 24 are unknown. From (4.3)-(4.5), we know that

s′_20 = s_20 + s_5,   s′_24 = s_24 + s_9.

Suppose the received word is

u = (α¹², α⁴, α⁷, α⁸, α⁹, α⁹, 0, 0, ..., 0),

that is, u_1 = α¹², u_2 = α⁴, u_3 = α⁷, u_4 = α⁸, u_5 = α⁹, u_6 = α⁹, and the other components are 0. Let P_1 = (1 : 1 : α), P_2 = (1 : 1 : α²), P_3 = (1 : 1 : α⁴), P_4 = (1 : 1 : α⁸), P_5 = (0 : 0 : 1), and P_6 = (1 : α : 1). From (4.2) and (4.4), we have s_0 = α, s_4 = α¹⁴, s_5 = α², s_8 = α¹¹, s_9 = α⁴, and so on for the remaining known syndromes.

In order to find s_24 (and s′_24), we construct S*, in which @ expresses s_24 or s′_24 and # expresses the other unknown syndromes. In this example, S* is shown below, first with its entries written symbolically as syndromes and then with the known numerical values filled in.

[The matrix S* for this example: first in symbolic form, with entries s_i, @ (standing for s_24 or s′_24), and #; then with the known numerical values α^k substituted.]


Applying the MFIA to S*, we obtain the discrepancy matrix shown in the first matrix below, where the candidate in column j is denoted by @_j and the "×" in column j is denoted by ×_j; their values are ×_1 = α, ×_2 = 1, ×_3 = α⁵; @_4 = @_7 = @_11 = α¹⁴ and @_5 = @_6 = @_9 = @_10 = α⁸. From Theorem 4, s_24 = @_5 = @_6 = @_9 = @_10 = α⁸ is correct and @_4 = @_7 = @_11 = α¹⁴ are not correct; s′_24 should be α⁵.

Now we modify the discrepancy matrix as follows. Substitute α⁸ (or α⁵) for all @ that are not candidates and eliminate the discrepancies at these places if possible (the complexity is at most O(t²)); then substitute @ for s_25 (or s′_25). Then, place 0 at (5,10), (6,9), (9,6), and (10,5), respectively (the candidate values for s_24 at these locations were correct), and place ×_4 at (11,4), ×_7 at (7,7), and ×_11 at (4,11) (the candidate values for s′_24 were not correct). The new matrix is equivalent to the MFIA having been applied to the new S*, in which the values of s_24 and s′_24 are known and the values of s_25 and s′_25 are unknown. So to find s_25 and s′_25, we calculate the values of the candidates of all s_25 and eliminate the discrepancies at some of the places where s_24 was. After this discrepancy matrix is modified, we have the second matrix below, where ×_1 = α, ×_2 = 1, ×_3 = α⁵, ×_4 = ×_7 = ×_11 = α¹²; s_25 = @_6 = @_10 = 1.

In the same way we can find that s_26 = α, s_27 = α⁴, s_28 = 1, and s_29 = α⁸. Thus, we can find two solutions of (3.3). They are the linear dependence relations of column 5 on the first 4 columns and of column 6 on the first 4 columns, respectively. We have

xy + x² + y + x = 0,

y² + x² + α⁴x + α⁴y = 0.

[The discrepancy matrix obtained by applying the MFIA to S*, and the modified discrepancy matrix used to find s_25 and s′_25, are displayed here.]
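A short check of where the six common roots listed next come from: over a field of characteristic 2 the two relations factor as shown below, so their common zeros are the curve points on the line y = x together with the single intersection point of x + 1 = 0 and y + x + α⁴ = 0.

```latex
\[
  xy + x^2 + y + x = (x + 1)(y + x), \qquad
  y^2 + x^2 + \alpha^4 x + \alpha^4 y = (y + x)(y + x + \alpha^4).
\]
```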


The two equations have six common roots: the five points on the line y = x,

P_1 = (1 : 1 : α), P_2 = (1 : 1 : α²), P_3 = (1 : 1 : α⁴), P_4 = (1 : 1 : α⁸), P_5 = (0 : 0 : 1),

and the point of intersection of the two lines x + 1 = 0 and y + x + α⁴ = 0,

P_6 = (1 : α : 1).

In order to find the real error locations and the error magnitudes, we solve the reduced parity check equations in the six unknowns e_1, e_2, e_3, e_4, e_5, and e_6. Solving these linear equations, we have

e_1 = α¹²;  e_2 = α⁴;  e_3 = α⁷;  e_4 = α⁸;  e_5 = α⁹;  e_6 = α⁹.

Thus, there are six errors. They are at P_1, P_2, P_3, P_4, P_5, and P_6, and the error magnitudes are α¹², α⁴, α⁷, α⁸, α⁹, and α⁹, respectively.

V. CONCLUSION

In this paper, we have derived a very simple decoding procedure for decoding algebraic-geometric codes C_Ω(D, G) with G = m·Q up to ⌊(d* − 1)/2⌋ errors. The computational complexity of this decoding procedure is O(n³). This decoding procedure is a generalization of Peterson's decoding procedure. It should be noted that the decoding procedure is also applicable to decoding some cyclic codes up to the van Lint-Wilson bound and to decoding some cyclic codes up to their actual minimum distance.

ACKNOWLEDGMENT

The authors are deeply grateful to J.R. Myrick, Jr. and J. Kralik for helpful discussion and comments concerning this work. The authors would like to thank the referees, the Associate Editor, Dr. A. Tietavainen, and I.M. Duursma for their many valuable suggestions on the style and presentation of this paper. Mr. Duursma found some minor typing errors


in the early version of this paper and suggested the example in this new version. He also obtained a different proof of our results during the review process and generalized the decoding procedure to the case of arbitrary G during revision of the paper.

REFERENCES

[1] M. A. Tsfasman, S. G. Vlăduţ, and T. Zink, "Modular curves, Shimura curves and Goppa codes, better than Varshamov-Gilbert bound," Math. Nachr., vol. 104, pp. 13-28, 1982.
[2] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.
[3] G. L. Katsman, M. A. Tsfasman, and S. G. Vlăduţ, "Modular curves and codes with a polynomial construction," IEEE Trans. Inform. Theory, vol. IT-30, pp. 353-355, Mar. 1984.
[4] J. Wolfmann, "Recent results on coding and algebraic geometry," in Proc. 3rd Int. Conf. AAECC-3, Grenoble, France, July 1985, pp. 167-184.
[5] Y. Driencourt, "Some properties of elliptic codes over a field of characteristic 2," in Proc. 3rd Int. Conf. AAECC-3, Grenoble, France, July 1985, pp. 185-193.
[6] Y. Driencourt and J. F. Michon, "Elliptic codes over fields of characteristic 2," J. Pure Appl. Algebra, vol. 45, pp. 15-39, Mar. 1987.
[7] H. J. Tiersma, "Remarks on codes from Hermitian curves," IEEE Trans. Inform. Theory, vol. IT-33, pp. 605-609, Sept. 1987.
[8] J. H. van Lint and T. A. Springer, "Generalized Reed-Solomon codes from algebraic geometry," IEEE Trans. Inform. Theory, vol. IT-33, pp. 305-310, May 1987.
[9] J. P. Hansen, "Codes on the Klein quartic, ideals and decoding," IEEE Trans. Inform. Theory, vol. IT-33, pp. 919-923, Nov. 1987.
[10] S. Harari, "New codes from algebraic curves of genus 2," presented at the IEEE Int. Symp. Inform. Theory, Ann Arbor, MI, Oct. 1986.
[11] J. Justesen, K. J. Larsen, H. Elbrønd Jensen, and T. Høholdt, "Construction and decoding of a class of algebraic geometry codes," IEEE Trans. Inform. Theory, vol. 35, pp. 811-821, July 1989.
[12] A. N. Skorobogatov and S. G. Vlăduţ, "On the decoding of algebraic-geometric codes," IEEE Trans. Inform. Theory, vol. 36, pp. 1051-1060, Sept. 1990.
[13] R. Pellikaan, "On a decoding algorithm for codes on maximal curves," IEEE Trans. Inform. Theory, vol. 35, pp. 1228-1232, Nov. 1989.
[14] J. Justesen, K. J. Larsen, H. Elbrønd Jensen, and T. Høholdt, "Fast decoding of codes from algebraic plane curves," IEEE Trans. Inform. Theory, vol. 38, pp. 111-119, Jan. 1992.
[15] J. H. van Lint, "Algebraic geometry codes," in Coding Theory and Design Theory (IMA Volumes in Mathematics and Its Applications, vol. 20). New York: Springer, 1988, pp. 137-162.
[16] G. L. Feng and K. K. Tzeng, "A generalization of the Berlekamp-Massey algorithm for multisequence shift-register synthesis with applications to decoding cyclic codes," IEEE Trans. Inform. Theory, vol. 37, pp. 1274-1287, Sept. 1991.