Goppa and Srivastava codes over finite rings

Antonio Aparecido de Andrade (I); Reginaldo Palazzo Jr. (II)

(I) Department of Mathematics, Ibilce, Unesp, 15054-000 São José do Rio Preto, SP, Brazil. E-mail: andrade@ibilce.unesp.br

(II) Department of Telematics, Feec, Unicamp, 13083-852 Campinas, SP, Brazil. E-mail: palazzo@dt.fee.unicamp.br

ABSTRACT

Goppa and Srivastava codes over arbitrary local finite commutative rings with identity are constructed in terms of parity-check matrices. An efficient decoding procedure, based on the modified Berlekamp-Massey algorithm, is proposed for Goppa codes.

Mathematical subject classification: 11T71, 94B05, 94B40.

Key words: Goppa code, Srivastava code.

1 Introduction

Goppa codes form a subclass of alternant codes, described in terms of a polynomial called the Goppa polynomial. The best-known subclasses of alternant codes are BCH codes and Goppa codes: the former for their simple and easily implemented decoding algorithm, the latter for meeting the Gilbert-Varshamov bound. However, most of the work on the construction and decoding of Goppa codes has considered codes over finite fields. On the other hand, linear codes over rings have recently generated a great deal of interest.

Linear codes over local finite commutative rings with identity have been discussed by Andrade and Palazzo [1], [2], [3], where the notions of Hamming, Reed-Solomon, BCH and alternant codes were extended to these rings.

In this paper we describe a construction technique for Goppa and Srivastava codes over local finite commutative rings. The core of the construction mimics that of Goppa codes over a finite field, and is addressed here from the point of view of specifying a cyclic subgroup of the group of units of a Galois extension of the base ring. The decoding algorithm for Goppa codes consists of four major steps: (1) calculation of the syndromes, (2) calculation of the elementary symmetric functions by the modified Berlekamp-Massey algorithm, (3) calculation of the error-location numbers, and (4) calculation of the error magnitudes.

This paper is organized as follows. In Section 2, we describe a construction of Goppa codes over local finite commutative rings and an efficient decoding procedure. In Section 3, we describe a construction of Srivastava codes over local finite commutative rings. Finally, in Section 4, concluding remarks are drawn.

2 Goppa Codes

In this section we describe a construction technique of Goppa codes over arbitrary local finite commutative rings in terms of parity-check matrices, which is very similar to the one proposed by Goppa [4] over finite fields. First, we review basic facts from the Galois theory of local finite commutative rings.

Throughout this paper, A denotes a local finite commutative ring with identity, M its maximal ideal, and K = A/M ≅ GF(p^m) its residue field, where p is a prime and m a positive integer; A[x] denotes the ring of polynomials in the variable x over A. The natural projection A[x] → K[x] is denoted by μ, where μ(a(x)) = ā(x).

Let f(x) be a monic polynomial of degree h in A[x] such that μ(f(x)) is irreducible in K[x]. Then f(x) is also irreducible in A[x] [5, Theorem XIII.7]. Let R denote the ring A[x]/⟨f(x)⟩. Then R is a local finite commutative ring with identity, called a Galois extension of A of degree h. Its residue field is K_1 = R/M_1 ≅ GF(p^{mh}), where M_1 is the unique maximal ideal of R, and K_1* is the multiplicative group of K_1, whose order is p^{mh} - 1.

Let R* denote the multiplicative group of units of R. It follows that R* is an abelian group, and therefore it can be expressed as a direct product of cyclic groups. We are interested in the maximal cyclic subgroup of R*, hereafter denoted by G_s, whose elements are the roots of x^s - 1 for some positive integer s such that gcd(s, p) = 1. There is only one maximal cyclic subgroup of R* having order relatively prime to p [5, Theorem XVIII.2]. This cyclic group has order s = p^{mh} - 1.
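For concreteness, the following minimal Python sketch implements this arithmetic for the ring used in the examples of this section: A = Z_2[i] with f(x) = x^3 + x + 1. It relies on the isomorphism R = A[x]/⟨f(x)⟩ ≅ GF(8)[i] (valid because A has characteristic 2, so i^2 = -1 = 1), storing an element a + bi as a pair of 3-bit integers; all function names are ours.

```python
# Arithmetic in R = Z2[i][x]/(x^3 + x + 1), viewed as GF(8)[i].
# An element a + b*i is a pair (a, b) of 3-bit integers; alpha = 2.

def gf8_mul(a, b):
    """Multiply in GF(8) = GF(2)[x]/(x^3 + x + 1)."""
    r = 0
    for k in range(3):
        if (b >> k) & 1:
            r ^= a << k
    for k in (4, 3):                      # reduce modulo x^3 + x + 1
        if (r >> k) & 1:
            r ^= 0b1011 << (k - 3)
    return r

def radd(u, v):                           # addition: componentwise XOR
    return (u[0] ^ v[0], u[1] ^ v[1])

def rmul(u, v):                           # (a+bi)(c+di) = (ac+bd) + (ad+bc)i,
    a, b = u; c, d = v                    # using i^2 = 1 in characteristic 2
    return (gf8_mul(a, c) ^ gf8_mul(b, d), gf8_mul(a, d) ^ gf8_mul(b, c))

def is_unit(u):                           # unit iff nonzero modulo (1 + i)
    return u[0] != u[1]

ALPHA = (2, 0)                            # a root of f(x); generates G_s
g7, x = [], (1, 0)
for _ in range(7):                        # enumerate G_7 = {1, alpha, ..., alpha^6}
    g7.append(x)
    x = rmul(x, ALPHA)
assert x == (1, 0) and len(set(g7)) == 7  # alpha has order exactly s = 7
assert all(is_unit(u) for u in g7)
```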

The Goppa codes are specified in terms of a polynomial g(z) called the Goppa polynomial. In contrast to cyclic codes, where it is difficult to estimate the minimum Hamming distance d from the generator polynomial, Goppa codes have the property that d ≥ deg(g(z)) + 1.

Let g(z) = g_0 + g_1 z + ⋯ + g_r z^r be a polynomial with coefficients in R and g_r ≠ 0. Let η = (α_1, α_2, …, α_n) be a vector of distinct elements of G_s such that g(α_i) is a unit in R for i = 1, 2, …, n.

Definition 2.1. A shortened Goppa code C(η, ω, g) of length n ≤ s over A has parity-check matrix

$$H = \begin{bmatrix} g(\alpha_1)^{-1} & g(\alpha_2)^{-1} & \cdots & g(\alpha_n)^{-1} \\ \alpha_1 g(\alpha_1)^{-1} & \alpha_2 g(\alpha_2)^{-1} & \cdots & \alpha_n g(\alpha_n)^{-1} \\ \vdots & \vdots & & \vdots \\ \alpha_1^{r-1} g(\alpha_1)^{-1} & \alpha_2^{r-1} g(\alpha_2)^{-1} & \cdots & \alpha_n^{r-1} g(\alpha_n)^{-1} \end{bmatrix} \qquad (1)$$

where ω = (g(α_1)^{-1}, g(α_2)^{-1}, …, g(α_n)^{-1}). The polynomial g(z) is called the Goppa polynomial.

Definition 2.2. Let C(η, ω, g) be a Goppa code.

• If g(z) is irreducible, then C(η, ω, g) is called an irreducible Goppa code.

• If, for all c = (c_1, c_2, …, c_n) ∈ C(η, ω, g), it is true that c′ = (c_n, c_{n-1}, …, c_1) ∈ C(η, ω, g), then C(η, ω, g) is called a reversible Goppa code.

• If g(z) = (z - α)^r, then C(η, ω, g) is called a cumulative Goppa code.

• If g(z) has no multiple zeros, then C(η, ω, g) is called a separable Goppa code.

Remark 2.1. Let C(η, ω, g) be a Goppa code.

1. C(η, ω, g) is a linear code.

2. A parity-check matrix with elements from A is obtained by replacing each entry of H by the corresponding column vector of length h over A, written with respect to a fixed A-basis of R.

3. For a Goppa code with polynomial g_l(z) = (z - β_l)^{r_l}, where β_l ∈ G_s, we have

$$H_l = \begin{bmatrix} (\alpha_1-\beta_l)^{-r_l} & \cdots & (\alpha_n-\beta_l)^{-r_l} \\ \alpha_1(\alpha_1-\beta_l)^{-r_l} & \cdots & \alpha_n(\alpha_n-\beta_l)^{-r_l} \\ \vdots & & \vdots \\ \alpha_1^{r_l-1}(\alpha_1-\beta_l)^{-r_l} & \cdots & \alpha_n^{r_l-1}(\alpha_n-\beta_l)^{-r_l} \end{bmatrix}$$

which is row-equivalent to

$$H_l' = \begin{bmatrix} (\alpha_1-\beta_l)^{-1} & \cdots & (\alpha_n-\beta_l)^{-1} \\ (\alpha_1-\beta_l)^{-2} & \cdots & (\alpha_n-\beta_l)^{-2} \\ \vdots & & \vdots \\ (\alpha_1-\beta_l)^{-r_l} & \cdots & (\alpha_n-\beta_l)^{-r_l} \end{bmatrix}$$

Consequently, if g(z) = \prod_{l=1}^{k} (z - β_l)^{r_l} = \prod_{l=1}^{k} g_l(z), then the Goppa code is the intersection of the Goppa codes with Goppa polynomials g_l(z) = (z - β_l)^{r_l}, for l = 1, 2, …, k, and its parity-check matrix is given by

$$H = \begin{bmatrix} H_1' \\ H_2' \\ \vdots \\ H_k' \end{bmatrix}$$

4. Goppa codes are a special case of alternant codes [3, Definition 2.1].

It is possible to obtain an estimate of the minimum Hamming distance d of C(η, ω, g) directly from the Goppa polynomial g(z). The next theorem provides such an estimate.

Theorem 2.1. The code C(η, ω, g) has minimum Hamming distance d ≥ r + 1.

Proof. The code C(η, ω, g) is an alternant code with location vector η = (α_1, α_2, …, α_n) and multiplier vector ω = (g(α_1)^{-1}, g(α_2)^{-1}, …, g(α_n)^{-1}) [3, Definition 2.1]. By [3, Theorem 2.1], it follows that C(η, ω, g) has minimum Hamming distance d ≥ r + 1.

Example 2.1. Let A = Z_2[i] and R = A[x]/⟨f(x)⟩, where f(x) = x^3 + x + 1 is irreducible over K = GF(2) and i^2 = -1. If α is a root of f(x), then α generates a cyclic group G_s of order s = 2^3 - 1 = 7. Let η = (α, α^4, 1, α^2), g(z) = z^3 + z^2 + 1 and ω = (g(α)^{-1}, g(α^4)^{-1}, g(1)^{-1}, g(α^2)^{-1}) = (α^3, α^5, 1, α^6). Since deg(g(z)) = 3, it follows that

$$H = \begin{bmatrix} \alpha^3 & \alpha^5 & 1 & \alpha^6 \\ \alpha^4 & \alpha^2 & 1 & \alpha \\ \alpha^5 & \alpha^6 & 1 & \alpha^3 \end{bmatrix}$$

is the parity-check matrix of a Goppa code C(η, ω, g) over A with length 4 and minimum Hamming distance at least 4.
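The entries of ω and H can be checked mechanically. The following self-contained Python sketch performs the check in GF(8) = GF(2)[x]/⟨x^3 + x + 1⟩, which is legitimate because Z_2[α] ≅ GF(8) sits inside R and every quantity involved is a polynomial in α with coefficients 0 or 1; the helper names are ours.

```python
# Verify omega and H of Example 2.1 inside GF(8) = GF(2)[x]/(x^3 + x + 1).
# Elements are 3-bit integers; alpha = 2 (the class of x).

def mul(a, b):
    """Multiply two elements of GF(8)."""
    r = 0
    for k in range(3):
        if (b >> k) & 1:
            r ^= a << k
    for k in (4, 3):                      # reduce modulo x^3 + x + 1
        if (r >> k) & 1:
            r ^= 0b1011 << (k - 3)
    return r

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

def inv(a):                               # brute force: GF(8) is tiny
    return next(b for b in range(1, 8) if mul(a, b) == 1)

alpha = 2
g = lambda z: power(z, 3) ^ power(z, 2) ^ 1          # g(z) = z^3 + z^2 + 1
eta = [alpha, power(alpha, 4), 1, power(alpha, 2)]   # (alpha, alpha^4, 1, alpha^2)
omega = [inv(g(a)) for a in eta]
assert omega == [power(alpha, e) for e in (3, 5, 0, 6)]   # (a^3, a^5, 1, a^6)

# Rows of H: entries alpha_j^l * g(alpha_j)^{-1}, l = 0, 1, 2, as in (1).
H = [[mul(power(a, l), w) for a, w in zip(eta, omega)] for l in range(3)]
```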

Example 2.2. Let A = Z_2[i] and R = A[x]/⟨f(x)⟩, where f(x) = x^4 + x + 1 is irreducible over GF(2). Thus s = 15 and G_15 is generated by α, where α^4 = α + 1. Let g(z) = z^4 + z^3 + 1, η = (1, α, α^2, α^3, α^4, α^5, α^6, α^8, α^9, α^10, α^12) and ω = (1, α^6, α^12, α^13, α^9, α^10, α^11, α^3, α^14, α^5, α^7). Since deg(g(z)) = 4, it follows that the 4 × 11 matrix H whose (l+1, j)-th entry is α_j^l g(α_j)^{-1}, as in Equation (1), is the parity-check matrix of a Goppa code C(η, ω, g) over Z_2[i] with length 11 and minimum Hamming distance at least 5.

2.1 Decoding procedure

In this subsection we present a decoding algorithm for the Goppa codes C(η, ω, g). The algorithm is based on the modified Berlekamp-Massey algorithm [6] and corrects any error pattern of Hamming weight t ≤ ⌊r/2⌋, since the minimum Hamming distance is at least r + 1.

We first establish some notation. Let A be a local finite commutative ring with identity as defined in Section 2, and let α be a generator of the cyclic group G_s, where s = p^{mh} - 1. Let c = (c_1, c_2, …, c_n) be a transmitted codeword and b = (b_1, b_2, …, b_n) the received vector. The error vector is then e = (e_1, e_2, …, e_n) = b - c.

Given the vector η = (α_1, α_2, …, α_n) = (α^{k_1}, α^{k_2}, …, α^{k_n}) in G_s^n, we define the syndrome values s_l of an error vector e = (e_1, e_2, …, e_n) as

$$s_l = \sum_{j=1}^{n} e_j \, g(\alpha_j)^{-1} \alpha_j^{\,l}, \qquad l = 0, 1, \ldots$$

Suppose that ν ≤ t errors occurred, at locations x_j = α^{k_j} with values y_j, for j = 1, 2, …, ν.

Since s = (s_0, s_1, …, s_{r-1}) = bH^t = eH^t, the first r syndrome values s_l can be calculated from the received vector b as

$$s_l = \sum_{j=1}^{n} b_j \, g(\alpha_j)^{-1} \alpha_j^{\,l}, \qquad l = 0, 1, \ldots, r - 1.$$
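As an illustration, the following self-contained sketch computes these syndromes for the received vector of Example 2.3 below, using the same GF(8)[i] pair representation of R = Z_2[i][x]/⟨x^3 + x + 1⟩ as in the earlier sketch; it reproduces s = (iα^5, iα^2, iα^6). All names are ours.

```python
# Syndromes s_l = sum_j b_j g(alpha_j)^{-1} alpha_j^l for Example 2.3,
# computed in R = Z2[i][x]/(x^3 + x + 1) stored as GF(8)[i] pairs (a, b).

def gf8_mul(a, b):
    r = 0
    for k in range(3):
        if (b >> k) & 1:
            r ^= a << k
    for k in (4, 3):                      # reduce modulo x^3 + x + 1
        if (r >> k) & 1:
            r ^= 0b1011 << (k - 3)
    return r

def rmul(u, v):                           # i^2 = -1 = 1 in characteristic 2
    a, b = u; c, d = v
    return (gf8_mul(a, c) ^ gf8_mul(b, d), gf8_mul(a, d) ^ gf8_mul(b, c))

def radd(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def rpow(u, e):
    r = (1, 0)
    for _ in range(e):
        r = rmul(r, u)
    return r

ALPHA = (2, 0)                            # the class of x
eta   = [rpow(ALPHA, e) for e in (1, 4, 0, 2)]   # (alpha, alpha^4, 1, alpha^2)
omega = [rpow(ALPHA, e) for e in (3, 5, 0, 6)]   # the g(alpha_j)^{-1}
b = [(0, 0), (0, 1), (0, 0), (0, 0)]             # received vector (0, i, 0, 0)

syndrome = []
for l in range(3):                        # r = deg g(z) = 3 syndrome values
    s_l = (0, 0)
    for bj, aj, wj in zip(b, eta, omega):
        s_l = radd(s_l, rmul(bj, rmul(wj, rpow(aj, l))))
    syndrome.append(s_l)

# s = (i a^5, i a^2, i a^6):
assert syndrome == [rmul((0, 1), rpow(ALPHA, e)) for e in (5, 2, 6)]
```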
The elementary symmetric functions σ_1, σ_2, …, σ_ν of the error-location numbers x_1, x_2, …, x_ν are defined as the coefficients of the polynomial

$$\sigma(x) = \prod_{i=1}^{\nu} (x - x_i) = \sigma_0 x^{\nu} + \sigma_1 x^{\nu-1} + \cdots + \sigma_{\nu},$$

where σ_0 = 1. Thus, the decoding algorithm being proposed consists of four major steps:

Step 1 - Calculation of the syndrome vector s from the received vector b.

Step 2 - Calculation of the elementary symmetric functions σ_1, σ_2, …, σ_ν from s, using the modified Berlekamp-Massey algorithm [6].

Step 3 - Calculation of the error-location numbers x_1, x_2, …, x_ν from σ_1, σ_2, …, σ_ν, as roots of σ(x).

Step 4 - Calculation of the error magnitudes y_1, y_2, …, y_ν from the x_j and s, using Forney's procedure [7].

Now each step of the decoding algorithm is analyzed. There is no need to comment on Step 1, since the calculation of the syndrome vector is straightforward. The set of possible error-location numbers is a subset of {α^0, α^1, …, α^{s-1}}. In Step 2, the calculation of the elementary symmetric functions is equivalent to finding a solution σ_1, σ_2, …, σ_ν, with minimum possible ν, of the following set of linear recurrent equations over R

$$s_{j+\nu} + \sigma_1 s_{j+\nu-1} + \cdots + \sigma_{\nu} s_j = 0, \qquad j = 0, 1, \ldots, r - \nu - 1, \qquad (2)$$

where s_0, s_1, …, s_{r-1} are the components of the syndrome vector. We make use of the modified Berlekamp-Massey algorithm to find a solution of Equation (2). The algorithm is iterative, in the sense that at the n-th stage the n - l_n equations (called power sums)

$$s_{j+l_n} + \sigma_1^{(n)} s_{j+l_n-1} + \cdots + \sigma_{l_n}^{(n)} s_j = 0, \qquad j = 0, 1, \ldots, n - l_n - 1,$$

are satisfied with l_n as small as possible and σ_0^{(n)} = 1. The polynomial σ^{(n)}(x) = σ_0^{(n)} + σ_1^{(n)} x + ⋯ + σ_{l_n}^{(n)} x^{l_n} represents the solution at the n-th stage. The n-th discrepancy is denoted by d_n and defined by d_n = s_n + σ_1^{(n)} s_{n-1} + ⋯ + σ_{l_n}^{(n)} s_{n-l_n}. The modified Berlekamp-Massey algorithm for commutative rings with identity is formulated as follows. The inputs to the algorithm are the syndromes s_0, s_1, …, s_{r-1}, which belong to R. The output of the algorithm is the set of values σ_i, i = 1, 2, …, ν, such that Equation (2) holds with minimum ν. Let σ^{(-1)}(x) = 1, l_{-1} = 0, d_{-1} = 1, σ^{(0)}(x) = 1, l_0 = 0 and d_0 = s_0 be the set of initial conditions for the algorithm, as in Peterson [8]. The steps of the algorithm are:

1. n ← 0.

2. If d_n = 0, then σ^{(n+1)}(x) ← σ^{(n)}(x), l_{n+1} ← l_n, and go to 5).

3. If d_n ≠ 0, then find m ≤ n - 1 such that d_n - y d_m = 0 has a solution y and m - l_m has the largest value. Then σ^{(n+1)}(x) ← σ^{(n)}(x) - y x^{n-m} σ^{(m)}(x) and l_{n+1} ← max{l_n, l_m + n - m}.

4. If l_{n+1} = max{l_n, n + 1 - l_n}, then go to step 5); else search for a solution D^{(n+1)}(x) with minimum degree l in the range max{l_n, n + 1 - l_n} ≤ l < l_{n+1} such that σ^{(m)}(x) defined by D^{(n+1)}(x) - σ^{(n)}(x) = x^{n-m} σ^{(m)}(x) is a solution for the first m power sums, with discrepancy d_m = -d_n, a zero divisor in R. If such a solution is found, σ^{(n+1)}(x) ← D^{(n+1)}(x) and l_{n+1} ← l.

5. If n < r - 1, then d_{n+1} = s_{n+1} + σ_1^{(n+1)} s_n + ⋯ + σ_{l_{n+1}}^{(n+1)} s_{n+1-l_{n+1}}.

6. n ← n + 1; if n ≤ r - 1, go to 2); else stop.
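The following Python sketch implements steps 1), 2), 3), 5) and 6) of this algorithm over the ring R ≅ GF(8)[i] of the running examples, solving d_n - y d_m = 0 by brute force over the 64 ring elements. It is a sketch under stated assumptions, not the full algorithm: the refinement of step 4) is omitted, which is harmless for Example 2.3 below, where every discrepancy used is a unit. All names are ours, and subtraction equals addition since R has characteristic 2.

```python
# Modified Berlekamp-Massey sketch (steps 1-3, 5, 6) over R = GF(8)[i],
# with elements stored as pairs (a, b) = a + b*i of 3-bit integers.

def gf8_mul(a, b):
    r = 0
    for k in range(3):
        if (b >> k) & 1:
            r ^= a << k
    for k in (4, 3):                      # reduce modulo x^3 + x + 1
        if (r >> k) & 1:
            r ^= 0b1011 << (k - 3)
    return r

def rmul(u, v):
    a, b = u; c, d = v
    return (gf8_mul(a, c) ^ gf8_mul(b, d), gf8_mul(a, d) ^ gf8_mul(b, c))

def radd(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

ZERO, ONE = (0, 0), (1, 0)
RING = [(a, b) for a in range(8) for b in range(8)]   # all 64 elements of R

def discrepancy(sigma, s, n):             # d_n = sum_i sigma_i * s_{n-i}
    d = ZERO
    for i, c in enumerate(sigma):
        if n - i >= 0:
            d = radd(d, rmul(c, s[n - i]))
    return d

def berlekamp_massey(s, r):
    records = [([ONE], 0, ONE)]           # records[m+1] = (sigma^(m), l_m, d_m)
    sigma, l = [ONE], 0                   # sigma^(0), l_0
    for n in range(r):
        d = discrepancy(sigma, s, n)      # d_n (so d_0 = s_0)
        records.append((sigma, l, d))
        if d == ZERO:                     # step 2: carry the solution over
            continue
        best = None                       # step 3: maximize m - l_m
        for idx, (sg_m, l_m, d_m) in enumerate(records[:-1]):
            m = idx - 1                   # records[0] is stage -1
            ys = [y for y in RING if rmul(y, d_m) == d]
            if ys and (best is None or m - l_m > best[0] - best[1]):
                best = (m, l_m, sg_m, ys[0])
        m, l_m, sg_m, y = best
        new_l = max(l, l_m + n - m)
        new = list(sigma) + [ZERO] * (new_l - l)
        for i, c in enumerate(sg_m):      # sigma^(n) - y x^(n-m) sigma^(m)
            new[i + n - m] = radd(new[i + n - m], rmul(y, c))
        sigma, l = new, new_l
    return sigma                          # sigma^(r)

# Example 2.3: s = (i a^5, i a^2, i a^6) gives sigma^(3)(x) = 1 + a^4 x.
assert berlekamp_massey([(0, 7), (0, 4), (0, 5)], 3) == [ONE, (6, 0)]
```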

The coefficients σ_i of σ^{(r)}(x) satisfy Equation (2). At Step 3, the solution of Equation (2) is generally not unique, and the reciprocal polynomial ρ(z) of the polynomial σ^{(r)}(z) output by the modified Berlekamp-Massey algorithm may not be the correct error-locator polynomial

(z - x_1)(z - x_2)⋯(z - x_ν),

where the x_j = α^{k_j}, for j = 1, 2, …, ν, are the correct error-location numbers. Thus, the procedure for the calculation of the correct error-location numbers is the following:

• compute the roots z_1, z_2, …, z_ν of ρ(z);

• among the candidates x_j = α^{k_j}, j = 1, 2, …, n, select those x_j such that x_j - z_i is a zero divisor in R, as illustrated in the sketch below. The selected x_j are the correct error-location numbers, and each k_j indicates the position of an error in the codeword.
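Below is a minimal sketch of this selection for Example 2.3 (worked out after this subsection), again in the GF(8)[i] pair representation of R; the zero divisors of R are exactly the pairs (a, b) with a = b, and all names are ours.

```python
# Step 3 for Example 2.3: roots of rho(z) = z + a^4 in G_7, then selection
# of the locators x_j with x_j - z_i a zero divisor in R = GF(8)[i].

def gf8_mul(a, b):
    r = 0
    for k in range(3):
        if (b >> k) & 1:
            r ^= a << k
    for k in (4, 3):                      # reduce modulo x^3 + x + 1
        if (r >> k) & 1:
            r ^= 0b1011 << (k - 3)
    return r

def rmul(u, v):
    a, b = u; c, d = v
    return (gf8_mul(a, c) ^ gf8_mul(b, d), gf8_mul(a, d) ^ gf8_mul(b, c))

def radd(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def rpow(u, e):
    r = (1, 0)
    for _ in range(e):
        r = rmul(r, u)
    return r

ALPHA = (2, 0)
G7 = [rpow(ALPHA, e) for e in range(7)]          # the cyclic group G_s, s = 7
rho = lambda z: radd(z, rpow(ALPHA, 4))          # rho(z) = z + a^4 (char 2)
roots = [z for z in G7 if rho(z) == (0, 0)]      # -> [a^4]

eta = [rpow(ALPHA, e) for e in (1, 4, 0, 2)]     # locators of Example 2.1
is_zero_divisor = lambda u: u[0] == u[1]         # the non-units of R
for z in roots:
    for pos, x in enumerate(eta, start=1):
        if is_zero_divisor(radd(x, z)):          # x - z == x + z in char 2
            print("error at position", pos)      # prints: error at position 2
```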

At Step 4, the calculation of the error magnitudes is based on Forney's procedure [7]. The error magnitudes are given by

$$y_j = \frac{\sum_{l=0}^{\nu-1} \sigma_{j,l}\, s_{\nu-1-l}}{E_j \prod_{i=1,\, i \neq j}^{\nu} (x_j - x_i)}, \qquad (3)$$

for j = 1, 2, …, ν, where the coefficients σ_{j,l} are recursively defined by

σ_{j,i} = σ_i + x_j σ_{j,i-1},  i = 1, …, ν - 1,

starting with σ_0 = σ_{j,0} = 1. Here E_j = g(x_j)^{-1}, for j = 1, 2, …, ν, is the component of ω at the corresponding error location. It follows from [9, Theorem 7] that the denominator in Equation (3) is always a unit in R.

Example 2.3. As in Example 2.1, if the received vector is b = (0, i, 0, 0), then the syndrome vector is s = bH^t = (iα^5, iα^2, iα^6). Applying the modified Berlekamp-Massey algorithm, we obtain the following table:

n    σ^(n)(z)         d_n           l_n
-1   1                1             0
0    1                iα^5          0
1    1 + iα^5 z       α^3 + iα^2    1
2    1 + α^4 z        0             1
3    1 + α^4 z        -             1

Thus σ^(3)(z) = 1 + α^4 z. The root of ρ(z) = z + α^4 (the reciprocal of σ^(3)(z)) is z_1 = α^4. Among the elements 1, α, …, α^6, x_1 = α^4 is such that x_1 - z_1 = 0 is a zero divisor in R. Therefore, x_1 is the correct error-location number, and k_2 = 4 indicates that the error occurred in the second coordinate of the codeword. The correct elementary symmetric function σ_1 = α^4 is obtained from x - x_1 = x - σ_1 = x - α^4. Finally, applying Forney's method to s and σ_1 gives y_1 = i. Therefore, the error pattern is e = (0, i, 0, 0).

Example 2.4. As in Example 2.2, if the received vector is b = (0, 0, 1, 0, 0, 0, 0, 0, i, 0, 0), then the syndrome vector is given by

s = bH^t = (α^12 + iα^14, α^14 + iα^8, α + iα^2, α^3 + iα^11).

Applying the modified Berlekamp-Massey algorithm, we obtain σ^(4)(z) = 1 + α^11 z + α^11 z^2. The roots of ρ(z) = z^2 + α^11 z + α^11 (the reciprocal of σ^(4)(z)) are z_1 = α^2 and z_2 = α^9. Among the elements 1, α, α^2, …, α^14, x_1 = α^2 and x_2 = α^9 are such that x_1 - z_1 = x_2 - z_2 = 0 are zero divisors in R. Therefore, x_1 and x_2 are the correct error-location numbers, and k_3 = 2 and k_9 = 9 indicate that two errors have occurred, one in position 3 and the other in position 9 of the codeword. The correct elementary symmetric functions σ_1 and σ_2 are obtained from (x - x_1)(x - x_2) = x^2 + σ_1 x + σ_2; thus σ_1 = σ_2 = α^11. Finally, Forney's method applied to s, σ_1 and σ_2 gives σ_{1,1} = σ_1 + x_1 σ_{1,0} = α^11 + α^2 = α^9 and σ_{2,1} = σ_1 + x_2 σ_{2,0} = α^11 + α^9 = α^2. Thus, by Equation (3), we obtain y_1 = 1 and y_2 = i. Therefore, the error pattern is e = (0, 0, 1, 0, 0, 0, 0, 0, i, 0, 0).
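The arithmetic of this example can be replayed mechanically. The following self-contained Python sketch evaluates Equation (3) in R = Z_2[i][x]/⟨x^4 + x + 1⟩, stored as GF(16)[i] pairs in the same style as the earlier sketches, and reproduces y_1 = 1 and y_2 = i; all names are ours.

```python
# Forney's procedure (Equation (3)) for Example 2.4, over R = GF(16)[i]
# with an element a + b*i stored as a pair (a, b) of 4-bit integers.

def gf16_mul(a, b):
    r = 0
    for k in range(4):
        if (b >> k) & 1:
            r ^= a << k
    for k in (6, 5, 4):                   # reduce modulo x^4 + x + 1
        if (r >> k) & 1:
            r ^= 0b10011 << (k - 4)
    return r

def rmul(u, v):
    a, b = u; c, d = v
    return (gf16_mul(a, c) ^ gf16_mul(b, d), gf16_mul(a, d) ^ gf16_mul(b, c))

def radd(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def rinv(u):                              # brute force over the 256 elements
    return next(w for w in ((a, b) for a in range(16) for b in range(16))
                if rmul(u, w) == (1, 0))

def apow(e):                              # alpha^e as a ring element
    r = (1, 0)
    for _ in range(e % 15):
        r = rmul(r, (2, 0))
    return r

i_unit = (0, 1)
s = [radd(apow(12), rmul(i_unit, apow(14))),   # syndrome vector of Example 2.4
     radd(apow(14), rmul(i_unit, apow(8))),
     radd(apow(1),  rmul(i_unit, apow(2))),
     radd(apow(3),  rmul(i_unit, apow(11)))]
x = [apow(2), apow(9)]                    # error-location numbers x_1, x_2
E = [apow(12), apow(14)]                  # omega entries at positions 3 and 9
sigma = [(1, 0), apow(11), apow(11)]      # sigma_0, sigma_1, sigma_2
nu = 2

y = []
for j in range(nu):
    sj = [(1, 0)]                         # sigma_{j,0} = 1
    for i in range(1, nu):                # sigma_{j,i} = sigma_i + x_j sigma_{j,i-1}
        sj.append(radd(sigma[i], rmul(x[j], sj[i - 1])))
    num = (0, 0)
    for l in range(nu):
        num = radd(num, rmul(sj[l], s[nu - 1 - l]))
    den = E[j]
    for i in range(nu):
        if i != j:
            den = rmul(den, radd(x[j], x[i]))   # x_j - x_i == x_j + x_i (char 2)
    y.append(rmul(num, rinv(den)))

assert y == [(1, 0), (0, 1)]              # y_1 = 1, y_2 = i
```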

3 Srivastava codes

In this section we define another subclass of alternant codes over local finite commutative rings, very similar to the one proposed by J. N. Srivastava in 1967 in an unpublished work [10], called Srivastava codes. Over finite fields these codes are defined by parity-check matrices of the form

$$H = \left[\, \frac{b_j^{\,l}}{b_j - a_i} \,\right]_{1 \le i \le r,\ 1 \le j \le n}$$

where a_1, a_2, …, a_r are distinct elements of GF(q^m), b_1, b_2, …, b_n are the elements of GF(q^m) other than 0 and the a_i, and l ≥ 0.

Definition 3.1. A shortened Srivastava code of length n ≤ s over A has parity-check matrix

$$H = \begin{bmatrix} \frac{\alpha_1^{\,l}}{\alpha_1-\beta_1} & \frac{\alpha_2^{\,l}}{\alpha_2-\beta_1} & \cdots & \frac{\alpha_n^{\,l}}{\alpha_n-\beta_1} \\ \frac{\alpha_1^{\,l}}{\alpha_1-\beta_2} & \frac{\alpha_2^{\,l}}{\alpha_2-\beta_2} & \cdots & \frac{\alpha_n^{\,l}}{\alpha_n-\beta_2} \\ \vdots & \vdots & & \vdots \\ \frac{\alpha_1^{\,l}}{\alpha_1-\beta_r} & \frac{\alpha_2^{\,l}}{\alpha_2-\beta_r} & \cdots & \frac{\alpha_n^{\,l}}{\alpha_n-\beta_r} \end{bmatrix}$$

where α_1, …, α_n, β_1, …, β_r are n + r distinct elements of G_s and l ≥ 0.

Theorem 3.1. The Srivastava code has minimum Hamming distance d ≥ r + 1.

Proof. The minimum Hamming distance of this code is at least r + 1 if and only if every combination of r or fewer columns of H is linearly independent over R, or, equivalently, the submatrix

$$H_1 = \left[\, \frac{\alpha_{i_j}^{\,l}}{\alpha_{i_j} - \beta_i} \,\right]_{1 \le i \le r,\ 1 \le j \le r}$$

is nonsingular for any subset {i_1, …, i_r} of {1, 2, …, n}. The determinant of this matrix can be expressed as

$$\det(H_1) = \alpha_{i_1}^{\,l} \alpha_{i_2}^{\,l} \cdots \alpha_{i_r}^{\,l} \det(H_2),$$

where the matrix H_2 is given by

$$H_2 = \left[\, \frac{1}{\alpha_{i_j} - \beta_i} \,\right]_{1 \le i \le r,\ 1 \le j \le r}.$$

Note that det(H_2) is a Cauchy determinant of order r, and therefore we conclude that the determinant of the matrix H_1 is given by

$$\det(H_1) = k \prod_{j=1}^{r} \frac{\alpha_{i_j}^{\,l}}{\mu(\alpha_{i_j})} \prod_{1 \le j < t \le r} (\alpha_{i_t} - \alpha_{i_j})(\beta_t - \beta_j),$$

where k = (-1)^m, m = r(r-1)/2 and μ(x) = (x - β_1)(x - β_2)⋯(x - β_r). Then, by [9, Theorem 7], det(H_1) is a unit in R and therefore d ≥ r + 1.

Definition 3.2. Suppose r = kl, and let α_1, …, α_n, β_1, …, β_k be n + k distinct elements of G_s and w_1, …, w_n be elements of G_s. A generalized Srivastava code of length n ≤ s over A has parity-check matrix

$$H = \begin{bmatrix} H_1 \\ H_2 \\ \vdots \\ H_k \end{bmatrix} \qquad (9)$$

where

$$H_j = \begin{bmatrix} \frac{w_1}{\alpha_1-\beta_j} & \cdots & \frac{w_n}{\alpha_n-\beta_j} \\ \frac{w_1}{(\alpha_1-\beta_j)^2} & \cdots & \frac{w_n}{(\alpha_n-\beta_j)^2} \\ \vdots & & \vdots \\ \frac{w_1}{(\alpha_1-\beta_j)^l} & \cdots & \frac{w_n}{(\alpha_n-\beta_j)^l} \end{bmatrix}$$

for j = 1, 2, …, k.

Theorem 3.2. The generalized Srivastava code has minimum Hamming distance d ≥ kl + 1.

Proof. The proof of this theorem requires nothing more than applying Remark 2.1(3) and Theorem 3.1, since the matrices (1) and (9) are equivalent, with g(z) = \prod_{i=1}^{k} (z - β_i)^l.

Example 3.1. As in Example 2.2, take

n = 8, r = 6, k = 2, l = 3,

(α_1, α_2, …, α_8) = (α^4, α^3, α^5, α, α^7, α^12, α^10, α^2),

(β_1, β_2) = (α^9, α^6) and (w_1, …, w_8) = (α, α, α^2, α^4, α^7, α^10, α^9, α^3).

Then the 6 × 8 matrix H = [H_1; H_2], whose blocks H_j have entries w_i/(α_i - β_j)^t for t = 1, 2, 3, as in Equation (9), is the parity-check matrix of a generalized Srivastava code over Z_2[i] of length 8 and minimum distance at least 7.
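As a check, the following self-contained Python sketch builds this matrix inside GF(16) = GF(2)[x]/⟨x^4 + x + 1⟩, the subring Z_2[α] of R that contains every entry of H, verifying along the way that each α_i - β_j is a unit; all names are ours.

```python
# Build H = [H_1; H_2] of Example 3.1 inside GF(16) = GF(2)[x]/(x^4 + x + 1).
# Elements are 4-bit integers; alpha = 2 (the class of x).

def mul(a, b):
    r = 0
    for k in range(4):
        if (b >> k) & 1:
            r ^= a << k
    for k in (6, 5, 4):                   # reduce modulo x^4 + x + 1
        if (r >> k) & 1:
            r ^= 0b10011 << (k - 4)
    return r

def inv(a):
    return next(b for b in range(1, 16) if mul(a, b) == 1)

def apow(e):
    r = 1
    for _ in range(e % 15):
        r = mul(r, 2)
    return r

n, r, k, l = 8, 6, 2, 3
alphas = [apow(e) for e in (4, 3, 5, 1, 7, 12, 10, 2)]
betas  = [apow(e) for e in (9, 6)]
ws     = [apow(e) for e in (1, 1, 2, 4, 7, 10, 9, 3)]

H = []
for b in betas:                           # one block H_j per beta_j
    for t in range(1, l + 1):             # rows w_i / (alpha_i - beta_j)^t
        row = []
        for a, w in zip(alphas, ws):
            d = a ^ b                     # alpha_i - beta_j (char 2: XOR)
            assert d != 0                 # a unit, as the construction requires
            den = 1
            for _ in range(t):
                den = mul(den, d)
            row.append(mul(w, inv(den)))
        H.append(row)

assert len(H) == r == k * l and all(len(row) == n for row in H)
```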

4 Conclusions

In this paper we presented a construction and a decoding procedure for Goppa codes over local finite commutative rings with identity. The decoding procedure is based on the modified Berlekamp-Massey algorithm, and its complexity is essentially the same as that of decoding Goppa codes over finite fields. Furthermore, we presented a construction of Srivastava codes over local finite commutative rings with identity.

5 Acknowledgments

This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo - FAPESP, grant 02/07473-7.

Received: 3/VI/04. Accepted: 11/V/05. #609/04.

  • [1] A.A. Andrade and R. Palazzo Jr., Hamming and Reed-Solomon codes over certain rings, Computational and Applied Mathematics, 20 (3) (2001), 289-306.
  • [2] A.A. Andrade and R. Palazzo Jr., Construction and decoding of BCH codes over finite commutative rings, Linear Algebra and its Applications, 286 (1999), 69-85.
  • [3] A.A. Andrade, J.C. Interlando and R. Palazzo Jr., Alternant and BCH codes over certain rings, Computational and Applied Mathematics, 22 (2) (2003), 233-247.
  • [4] V.D. Goppa, A new class of linear error-correcting codes, Probl. Peredach. Inform., 6 (3) (1970), 24-30.
  • [5] B.R. McDonald, Finite Rings with Identity, Marcel Dekker, New York (1974).
  • [6] J.C. Interlando, R. Palazzo Jr. and M. Elia, On the decoding of Reed-Solomon and BCH codes over integer residue rings, IEEE Trans. Inform. Theory, IT-43 (1997), 1013-1021.
  • [7] G.D. Forney Jr., On decoding BCH codes, IEEE Trans. Inform. Theory, IT-11 (1965), 549-557.
  • [8] W.W. Peterson and E.J. Weldon Jr., Error-Correcting Codes, MIT Press, Cambridge, Mass. (1972).
  • [9] A.A. Andrade and R. Palazzo Jr., A note on units of a local finite ring, Revista de Matemática e Estatística, 18 (2000), 213-222.
  • [10] H.J. Helgert, Srivastava codes, IEEE Trans. Inform. Theory, IT-18 (2) (1972).
