
Alternant and BCH codes over certain rings




A.A. AndradeI; J.C. InterlandoI; R. Palazzo Jr.II

I Department of Mathematics, Ibilce, Unesp, 15054-000 São José do Rio Preto, SP, Brazil. E-mails: andrade@mat.ibilce.unesp.br, carmelo@mat.ibilce.unesp.br

II Department of Telematics, Feec, Unicamp, P.O. Box 6101, 13081-970 Campinas, SP, Brazil. E-mail: palazzo@dt.fee.unicamp.br

ABSTRACT

Alternant codes over arbitrary finite commutative local rings with identity are constructed in terms of parity-check matrices. The derivation is based on the factorization of x^s – 1 over the unit group of an appropriate extension of the finite ring. An efficient decoding procedure which makes use of the modified Berlekamp-Massey algorithm to correct errors and erasures is presented. Furthermore, we address the construction of BCH codes over Zm under the Lee metric.

Mathematical subject classification: 11T71, 94B05, 94B40.

Key words: codes over rings, alternant codes, BCH codes, Galois extensions of local commutative rings, algebraic decoding, modified Berlekamp-Massey algorithm, errors and erasures decoding, Lee metric.

1 Introduction

Alternant codes form a large and powerful family of codes. They can be obtained by a simple modification of the parity-check matrix of a BCH code. The most famous subclasses of alternant codes are BCH codes and (classical) Goppa codes, the former for their simple and easily implemented decoding algorithm, and the latter for meeting the Gilbert-Varshamov bound. However, most of the work regarding construction and decoding of alternant codes has been done considering codes over finite fields. On the other hand, linear codes over integer rings have recently generated a great deal of interest because of their new role in algebraic coding theory and their successful application in combined coding and modulation. A remarkable paper by Hammons et al. [1] has shown that certain binary nonlinear codes with good error-correcting capabilities can be viewed, through a Gray mapping, as linear codes over Z4. Moreover, Calderbank et al. [2] studied cyclic codes over Z4. Viewing many BCM (block coded modulation) schemes as group block codes over groups, it was shown in [3] that group block codes over abelian groups can be studied via linear codes over finite rings. Andrade and Palazzo [4] constructed BCH codes over finite commutative rings with identity. Also, Greferath and Vellbinger [5] have investigated codes over integer residue rings under the aspect of decoding. The Lee metric ([6], [7]) was developed as an alternative to the Hamming metric for the transmission of nonbinary signals over certain noisy channels. Roth and Siegel [8] constructed and decoded BCH codes over GF(p) under the Lee metric.

In this paper we address the problems of constructing and decoding alternant codes over arbitrary finite commutative local rings with identity, and the problem of constructing BCH codes under the Lee metric. The core of the construction technique mimics that of alternant and BCH codes over a finite field, and is based on the factorization of x^s – 1 over an appropriate extension ring. The decoder is capable of handling both errors and erasures, which enables the implementation of generalized minimum distance (GMD) decoding to further reduce the probability of decoding error [9].

This paper is organized as follows. In Section 2, we describe a construction of alternant codes over a finite commutative local ring with identity and propose an efficient decoding procedure. We show how this decoding procedure can also be used to handle erasures. In Section 3, we describe a construction of BCH codes over Zq, where q is a prime power, under the Lee metric. The question of the existence of a simple decoding algorithm for these codes remains open.

2 Alternant Code

In this section we present a construction technique of alternant codes over finite commutative local rings with identity, in terms of parity-check matrices. First we collect basic definitions and facts from the Galois theory of commutative rings, which are necessary to characterize such matrices. Throughout this section we assume that A is a finite commutative local ring with identity, with maximal ideal M and residue field K = A/M ≅ GF(p^m), where m is a positive integer and p is a prime. Let f(x) be a monic polynomial of degree h in A[x] such that its image μ(f(x)) is irreducible in K[x], where μ denotes the natural projection of A[x] onto K[x]. Then, by [10, Theorem XIII.7(a)], f(x) is also irreducible in A[x]. Let R be the ring A[x]/(f(x)). Then R is a finite commutative local ring with identity and is called a Galois extension of A of degree h. Its residue field is K1 = R/M1 ≅ GF(p^{mh}), where M1 is the maximal ideal of R, and K1* is the multiplicative group of K1, whose order is p^{mh} – 1.

Let R* denote the multiplicative group of units of R. It follows that R* is an abelian group, and therefore it can be expressed as a direct product of cyclic groups. We are interested in the maximal cyclic subgroup of R*, hereafter denoted by Gs, whose elements are the roots of x^s – 1 for some positive integer s such that gcd(s, p) = 1. From [10, Theorem XVIII.2], there is only one maximal cyclic subgroup of R* having order relatively prime to p. This cyclic group has order s = p^{mh} – 1.

Definition 2.1. Let h = (a1, a2, ..., an) be the locator vector, consisting of distinct elements of Gs, and let w = (w1, w2, ..., wn) be an arbitrary vector consisting of elements of Gs. Define the matrix H by

$$H = \begin{bmatrix} w_1 & w_2 & \cdots & w_n \\ w_1 a_1 & w_2 a_2 & \cdots & w_n a_n \\ \vdots & \vdots & & \vdots \\ w_1 a_1^{r-1} & w_2 a_2^{r-1} & \cdots & w_n a_n^{r-1} \end{bmatrix},$$

where r is a positive integer. Then H is the parity-check matrix of a shortened alternant code (n, h, w) of length n ≤ s over A.
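To make the shape of H concrete, the following sketch builds it in the simplest admissible ring, the field GF(8) = GF(2)[x]/(x³ + x + 1), a local ring whose maximal ideal is {0}. The function names and parameter choices here are ours, for illustration only.

```python
# Field elements of GF(8) are represented as 3-bit integers 0..7; gf_mul is
# carry-less multiplication reduced modulo the defining polynomial (0b1011).

def gf_mul(a, b, mod=0b1011, deg=3):
    p = 0
    for i in range(deg):
        if (b >> i) & 1:
            p ^= a << i
    for i in range(2 * deg - 2, deg - 1, -1):  # reduce high bits
        if (p >> i) & 1:
            p ^= mod << (i - deg)
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def alternant_H(locators, weights, r):
    # H[i][j] = w_j * a_j^i, i = 0, ..., r-1 (the matrix of Definition 2.1)
    return [[gf_mul(w, gf_pow(a, i)) for a, w in zip(locators, weights)]
            for i in range(r)]

alpha = 0b010                               # a root of x^3 + x + 1, order 7
h = [gf_pow(alpha, j) for j in range(7)]    # locator vector (1, a, ..., a^6)
w = [1] * 7                                 # weight vector
H = alternant_H(h, w, r=2)
print(H)
```

With unit weights the first row of H is all ones and the second row is the locator vector itself, mirroring the matrix displayed above.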

It is possible to obtain an estimate of the minimum Hamming distance d of (n, h, w) directly from the parity-check matrix. The following lemma and theorem provide such an estimate.

Lemma 2.1. Let a be an element of Gs of order s. Then the difference a^{l1} – a^{l2} is a unit in R if 0 ≤ l1 < l2 ≤ s – 1.

Proof. Since a^{l1} – a^{l2} = a^{l1}(1 – a^{l2–l1}) and a^{l1} is a unit, it is sufficient to show that 1 – a^j is a unit for j = 1, 2, ..., s – 1. If 1 – a^j belonged to the maximal ideal of R for some 1 ≤ j ≤ s – 1, then the image of a^j in the residue field would equal 1; but the image of a has order s in the residue field, because gcd(s, p) = 1, which is a contradiction.

Theorem 2.1. The code (n, h, w) has minimum Hamming distance d ≥ r + 1.

Proof. Suppose c is a nonzero codeword in (n, h, w) with Hamming weight wH(c) ≤ r. Then cH^T = 0. Delete n – r columns of H corresponding to zero coordinates of c, so that all nonzero coordinates of c are kept, and let c' and H' denote the corresponding subvector and r × r submatrix. Then c'H'^T = 0, and H' is the product of a Vandermonde matrix and a diagonal matrix of units; by Lemma 2.1 its determinant is a unit in R. Thus c' = 0, and the only possibility for c is the all-zero codeword, a contradiction.

Example 2.1. The polynomial f(x) = x^3 + x + 1 is irreducible over Z2, and hence over the commutative local ring A = Z2[i], where i^2 = –1. Thus R = A[x]/(f(x)) is a Galois extension of A. Let a be a root of f(x). We have that a generates a cyclic group Gs of order s = 2^3 – 1 = 7 in R*. Setting h = (1, a, ..., a^6) and w = (1, 1, 1, 1, 1, 1, 1), if r = 2 then we have an alternant code (7, h, w) over Z2[i] with minimum Hamming distance at least 3.

2.1 Decoding Procedure

This section is devoted to developing a decoding method for alternant codes as defined in the previous section. Let (n, h, w) be an alternant code with minimum Hamming distance at least r + 1, i.e., a code that can correct up to t = ⌊r/2⌋ errors, where ⌊x⌋ denotes the largest integer less than or equal to x. Thus t = (r – 1)/2 when r is odd, and t = r/2 when r is even. The idea is to extend efficient standard decoding procedures for BCH codes which work well over fields (as described, for example, in [12], [13], [14], and [15]) to finite commutative local rings with identity. Note that these aforementioned decoding procedures do not work over rings in general. As an example, the original Berlekamp-Massey algorithm [12], [16], which is fundamental in the decoding process of a BCH code, cannot be applied directly if the elements of the sequence to be generated do not lie in a field.

First, we establish some notation. Let R denote the ring defined in Section 2 and let a be a generator of Gs. Let c = (c1, c2, ..., cn) be the transmitted codeword and r = (r1, r2, ..., rn) be the received vector. The error vector is given by e = (e1, e2, ..., en) = r – c. Given a locator vector h = (a1, a2, ..., an), we define the syndrome values s_l ∈ R of an error vector e = (e1, e2, ..., en) as

$$s_l = \sum_{j=1}^{n} e_j w_j a_j^{\,l}, \qquad l = 0, 1, \ldots, r - 1.$$

Suppose that ν ≤ t is the number of errors which occurred, at the positions with error-location numbers x1, x2, ..., xν and with error magnitudes y1, y2, ..., yν. Since s = rH^T = eH^T, where s = (s0, s1, ..., s_{r–1}), the first r syndrome values can be calculated from the received vector r as

$$s_l = \sum_{j=1}^{n} r_j w_j a_j^{\,l}, \qquad l = 0, 1, \ldots, r - 1.$$

The elementary symmetric functions σ1, σ2, ..., σν of the error-location numbers x1, x2, ..., xν are defined as the coefficients of the polynomial

$$\sigma(x) = \prod_{i=1}^{\nu} (x - x_i) = x^{\nu} + \sigma_1 x^{\nu-1} + \cdots + \sigma_{\nu-1} x + \sigma_{\nu},$$

where σ0 = 1. Thus, the decoding procedure being proposed consists of four major steps [11]:

Step 1 - Calculation of the syndrome vector s from the received vector;

Step 2 - Calculation of the elementary symmetric functions σ1, σ2, ..., σν from s, using the modified Berlekamp-Massey algorithm [11];

Step 3 - Calculation of the error-location numbers x1, x2, ..., xν, which are the roots of σ(x), from σ1, σ2, ..., σν;

Step 4 - Calculation of the error magnitudes y1, y2, ..., yν from the xi and s, by Forney's procedure [13].

Next we analyze each step of the decoding algorithm in some detail. Since calculation of the syndromes is straightforward, we will not make any comments on Step 1.
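As an illustration of Step 1, the following sketch computes syndromes s = rH^T in the toy setting of the prime field GF(7), again a local ring whose maximal ideal is {0}. The parameters (p = 7, the primitive element, the unit weights) are illustrative choices of ours, not taken from the paper's examples.

```python
# Syndromes s_l = sum_j r_j * w_j * a_j^l over GF(7), for l = 0, ..., num-1.

p = 7
alpha = 3                                          # 3 has order 6 modulo 7
locators = [pow(alpha, j, p) for j in range(6)]    # a_j = alpha^j
weights = [1] * 6                                  # w_j = 1 for simplicity

def syndromes(r, num):
    return [sum(rj * wj * pow(aj, l, p)
                for rj, wj, aj in zip(r, weights, locators)) % p
            for l in range(num)]

# single error of magnitude 2 at the coordinate with locator alpha^1 = 3
e = [0, 2, 0, 0, 0, 0]
print(syndromes(e, 4))    # s_l = 2 * 3^l mod 7, i.e. [2, 6, 4, 5]
```

A single error of magnitude y at locator X yields the geometric sequence s_l = yX^l, which is exactly the structure the next steps exploit.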

The set of possible error-location numbers is a subset of {a^0, a^1, ..., a^{s–1}}. The elementary symmetric functions σ1, σ2, ..., σν (where ν denotes the number of errors introduced by the channel) are defined as the coefficients of the polynomial (x – x1)(x – x2)···(x – xν) = x^ν + σ1x^{ν–1} + ... + σ_{ν–1}x + σν. In Step 2, the calculation of the elementary symmetric functions is equivalent to finding a solution σ1, σ2, ..., σν, with minimum possible ν, to the following set of linear recurrent equations over R:

$$s_{j+\nu} + \sigma_1 s_{j+\nu-1} + \cdots + \sigma_{\nu} s_j = 0, \qquad j = 0, 1, \ldots, r - \nu - 1, \qquad (1)$$

where s0, s1, ..., s_{r–1} are the components of the syndrome vector. We make use of the modified Berlekamp-Massey algorithm [11], which holds for commutative rings with identity, to find the solutions of Eq. (1). We call attention to the fact that in rings care must be taken regarding zero divisors and multiple solutions of the system of linear equations, and also with an inversionless implementation of the original Berlekamp-Massey algorithm. The algorithm is iterative, in the sense that the following n – l_n equations (called power sums)

$$s_j + \sigma_1^{(n)} s_{j-1} + \cdots + \sigma_{l_n}^{(n)} s_{j-l_n} = 0, \qquad j = l_n, l_n + 1, \ldots, n - 1,$$

are satisfied with l_n as small as possible and σ0^(n) = 1. The polynomial

$$\sigma^{(n)}(x) = 1 + \sigma_1^{(n)} x + \cdots + \sigma_{l_n}^{(n)} x^{l_n}$$

represents the solution at the n-th stage. The n-th discrepancy is denoted by d_n and defined by

$$d_n = s_n + \sigma_1^{(n)} s_{n-1} + \cdots + \sigma_{l_n}^{(n)} s_{n-l_n}.$$

The modified Berlekamp-Massey algorithm for commutative rings with identity is formulated as follows. The inputs to the algorithm are the syndromes s0, s1, ..., s_{r–1}, which belong to R. The output of the algorithm is a set of values σi, 1 ≤ i ≤ ν, such that the equations in Eq. (1) hold with minimum ν. In order to start the algorithm, set the initial conditions σ^(–1)(x) = 1, l_{–1} = 0, d_{–1} = 1, σ^(0)(x) = 1, l_0 = 0, and d_0 = s0 [15]. Thus, we have the following steps:

1) n ← 0.

2) If d_n = 0, then σ^(n+1)(x) ← σ^(n)(x) and l_{n+1} ← l_n, and go to 5).

3) If d_n ≠ 0, then find an m ≤ n – 1 such that d_n – y d_m = 0 has a solution in y and m – l_m has the largest value. Then σ^(n+1)(x) ← σ^(n)(x) – y x^{n–m} σ^(m)(x) and l_{n+1} ← max{l_n, l_m + n – m}.

4) If l_{n+1} = max{l_n, n + 1 – l_n}, then go to 5); else search for a solution D^(n+1)(x) with minimum degree l in the range max{l_n, n + 1 – l_n} ≤ l < l_{n+1}, such that the polynomial σ^(m)(x) defined by D^(n+1)(x) – σ^(n)(x) = x^{n–m} σ^(m)(x) is a solution for the first m power sums, where d_m = –d_n and d_m is a zero divisor in R. If such a solution is found, then σ^(n+1)(x) ← D^(n+1)(x) and l_{n+1} ← l.

5) If n < r – 1, then compute

$$d_{n+1} = s_{n+1} + \sigma_1^{(n+1)} s_n + \cdots + \sigma_{l_{n+1}}^{(n+1)} s_{n+1-l_{n+1}}.$$

6) n ← n + 1; if n < r, go to 2); else stop.

The coefficients σ1, ..., σν of the final solution satisfy the equations in Eq. (1). The basic difference between the modified Berlekamp-Massey algorithm and the original one lies in the fact that the modified algorithm allows updating a minimal polynomial solution σ^(n)(x) (at the n-th step) from a previous solution σ^(m)(x) whose discrepancy may even be a noninvertible element of the commutative ring under consideration. This process does not necessarily lead to a minimal solution σ^(n+1)(x) (at the (n + 1)-th stage); hence, in Step 4, the solution computed in Step 3 is checked for minimality. This check consists of finding a polynomial σ^(m)(x) satisfying certain conditions and being a solution for the first m power sums. Since the number of polynomials σ^(m)(x) to be checked is not too large, Step 4 does not essentially increase the complexity.
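For comparison, the original field-case algorithm that the modified version generalizes can be sketched as follows. Over a field every nonzero discrepancy is invertible, so the zero-divisor search of step 4) never arises. This is a sketch of the classical algorithm over GF(p) for a prime p, not the ring algorithm of [11]; all names are ours.

```python
# Classical Berlekamp-Massey over GF(p), p prime (field case only).

def berlekamp_massey(syndromes, p):
    """Return (C, L): minimal connection polynomial C (constant term first)
    with s_n + C[1]*s_{n-1} + ... + C[L]*s_{n-L} = 0 for all valid n."""
    C, B = [1], [1]       # current and previous connection polynomials
    L, m, b = 0, 1, 1     # length, shift since last update, old discrepancy
    for n, s_n in enumerate(syndromes):
        d = s_n           # discrepancy d_n
        for i in range(1, L + 1):
            d = (d + C[i] * syndromes[n - i]) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p        # d / b via Fermat inversion
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):             # C(x) -= (d/b) x^m B(x)
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# Single error of magnitude 2 at locator X = 3 over GF(7): the syndromes
# s_l = 2 * 3^l yield sigma(z) = 1 - 3z = 1 + 4z (mod 7).
print(berlekamp_massey([2, 6, 4, 5], 7))
```

The invertibility of the stored discrepancy b is exactly what fails in a ring with zero divisors, motivating steps 3) and 4) of the modified algorithm above.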

In Step 3, the calculation of the error-location numbers over rings requires one more step than over fields, because the solution of Eq. (1) is generally not unique in a ring, and the reciprocal of the polynomial σ^(r)(z) output by the modified Berlekamp-Massey algorithm, namely ρ(z), may not be the right error-locator polynomial (z – x1)(z – x2)···(z – xν), where the xi = a^{kj} (with j an integer in the range 1 ≤ j ≤ ν such that kj indicates the position of the error in the codeword) are the correct error-location numbers, ν is the number of errors, and a is the generator of Gs. Thus, the procedure for the calculation of the correct error-location numbers [11] is given by

• Compute the roots of ρ(z), the reciprocal of σ^(r)(z), say z1, z2, ..., zν;

• Among the candidates xi = a^{kj}, j = 1, 2, ..., ν, select those xi such that xi – zi is zero or a zero divisor in R. The selected xi will be the correct error-location numbers, and kj, j = 1, 2, ..., ν, indicates the position of the error in the codeword.

In Step 4, the calculation of the error magnitudes is based on Forney's procedure [13]. The error magnitudes y1, y2, ..., yν are given by

$$y_j = \frac{\sum_{i=0}^{\nu-1} \sigma_{j,i}\, s_{\nu-1-i}}{E_j \sum_{i=0}^{\nu-1} \sigma_{j,i}\, x_j^{\,\nu-1-i}}, \qquad j = 1, 2, \ldots, \nu, \qquad (2)$$

where the coefficients σ_{j,i} are recursively defined by σ_{j,i} = σi + xj σ_{j,i–1}, i = 0, 1, ..., ν – 1, starting with σ_{j,0} = σ0 = 1, and Ej, j = 1, 2, ..., ν, are the components of w at the corresponding error locations. Again making use of Lemma 2.1, it can be shown that the denominator in Eq. (2) is always a unit in R.

Example 2.2. Let G7 be the cyclic group as in Example 2.1. Considering h = (a^5, a, a^4, a^2), w = (a^4, a, a^4, a), and r = 2, we have an alternant code over Z2[i] of length 4 and minimum Hamming distance at least 3. Let H be the corresponding parity-check matrix. Assume that the all-zero codeword c = (0, 0, 0, 0) is transmitted and the vector r = (0, 0, i, 0) is received. Then the syndrome vector is s = rH^T = (ia^4, ia). By the modified Berlekamp-Massey algorithm we obtain σ^(2)(z) = 1 + a^4z. The root of ρ(z) = z + a^4 (the reciprocal of σ^(2)(z)) is z1 = a^4. Among the elements a^0, ..., a^6, we have that x1 = a^4 satisfies x1 – z1 = 0. Therefore x1 = a^4 is the correct error-location number; since the third component of h is a^4, one error has occurred in the third coordinate of the codeword. The correct elementary symmetric function σ1 = a^4 is obtained from x – x1 = x – σ1 = x – a^4. Finally, applying Forney's method to s and σ1 gives y1 = i. Therefore, the error pattern is e = (0, 0, i, 0).

2.2 Error-and-Erasure Decoding

In this subsection we briefly discuss how the decoding procedure for alternant codes can be used to correct both errors and erasures. The development is based on [15, pp. 305-307]. We know that if the minimum distance d of a code satisfies d ≥ 2t + e + 1, then e erasures and up to t errors can be corrected. Suppose that ν ≤ t errors occur at the positions with error-location numbers x1, x2, ..., xν and respective nonzero magnitudes y1, y2, ..., yν. Suppose further that e erasures occur at the positions with erasure-location numbers u1, u2, ..., ue and respective magnitudes v1, v2, ..., ve. Note that whereas e and the ui are known to the decoder, the vi are not. The syndrome of a received vector r is given by

$$s_l = \sum_{i=1}^{\nu} y_i E_i x_i^{\,l} + \sum_{i=1}^{e} v_i F_i u_i^{\,l}, \qquad l = 0, 1, \ldots, r - 1, \qquad (3)$$

where Ei and Fi denote the components of w at the error and erasure positions, respectively. Defining the elementary symmetric functions τk of the known erasure locations by

$$\prod_{i=1}^{e} (u - u_i) = \sum_{k=0}^{e} \tau_k u^{\,e-k},$$

and the modified syndromes tj by tj = τ0 sj + τ1 s_{j–1} + ... + τe s_{j–e}, j = e, e + 1, ..., r – 1, it can be shown that

$$t_j = \sum_{i=1}^{\nu} \varphi_i x_i^{\,j-e}, \qquad j = e, e + 1, \ldots, r - 1, \qquad (4)$$

where φi = yi Ei ∏_{k=1}^{e} (xi – uk). Since xi ≠ 0 and xi ≠ uk, it follows that φi ≠ 0. The equations in Eq. (4) can be efficiently solved for the xi using the modified Berlekamp-Massey algorithm. We call attention to the fact that the first value assumed by the index j is e, instead of 0, as before. Now, with the xi known, all that remains to complete the decoding process is to solve the equations in Eq. (3) for the yi and vi. To this end, Forney's procedure can be applied again, as in Step 4 of the decoding procedure for alternant codes.

Example 2.3. Let R = Z4[x]/(x^3 + x + 1). The element a = x^2 = (0, 0, 1) generates G7. Considering h = (a^2, a, a^3, a^5, a^4, a^6), w = (1, 1, 1, 1, 1, 1), and r = 4, we have an alternant code over Z4 of length 6 and minimum Hamming distance at least 5. This code can correct one error and two erasures. Assume that the all-zero codeword c = (0, 0, 0, 0, 0, 0) is transmitted and the vector r = (2, 0, ?, 0, 0, ?) is received, where "?" denotes an erasure. Note that the erasures in r can take values in Z4. For example, we can "guess" that the erasure in the third coordinate is a 3 and the erasure in the sixth coordinate is a 2, so the received vector becomes r = (2, 0, 3, 0, 0, 2). It follows that the components of the syndrome vector are s0 = (3, 0, 0), s1 = (1, 2, 3), s2 = (1, 1, 3), and s3 = (2, 1, 3). From the equation (u – a^3)(u – a^6) = τ0u^2 + τ1u + τ2 we obtain the elementary symmetric functions τk of the known erasure locations, that is, τ0 = (1, 0, 0), τ1 = (2, 3, 2), and τ2 = (0, 3, 3). The modified syndromes are therefore t2 = (0, 2, 0) and t3 = (0, 0, 2). Applying the modified Berlekamp-Massey algorithm to the sequence {t2, t3}, we obtain σ(z) = 1 + (0, 1, 0)z. The root of ρ(z) = z + (0, 1, 0) is z1 = (0, 3, 0). Among the elements a^0, ..., a^6, we have that a^4 = (2, 1, 0) satisfies a^4 – z1 = (2, 2, 0), a zero divisor in R. Therefore, x1 = a^{4–e} = a^2 is the correct error-location number, since e = 2; it indicates that one error has occurred in the first position of the codeword. Forney's procedure applied with x1 = a^2 gives y1 = 2, v1 = 3, and v2 = 2. Therefore, the error pattern is e = r – c = (2, 0, 3, 0, 0, 2).

3 BCH code

In this section we present a construction technique of BCH codes over the commutative ring Zq of integers modulo q, where q is a prime power, in terms of parity-check matrices under the Lee metric, based on the work by Roth and Siegel [8]. First we collect basic definitions and facts about the Lee metric over Zm, where m is a positive integer.

Definition 3.1. Let Zm denote the commutative ring of integers modulo m, where m is a positive integer.

• The Lee value of an element a ∈ Zm is defined by

$$|a| = \begin{cases} a, & 0 \le a \le \lfloor m/2 \rfloor, \\ m - a, & \text{otherwise}, \end{cases}$$

where ⌊m/2⌋ is the greatest integer smaller than or equal to m/2.

• The Lee distance between two vectors a = (a1, a2, ..., an) and b = (b1, b2, ..., bn) over Zm is defined by

$$d_L(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} d_L(a_i, b_i),$$

where dL(ai, bi) = min{(ai – bi) mod m, (bi – ai) mod m}, i = 1, 2, ..., n.

• The Lee weight of a vector a = (a1, a2, ..., an) over Zm is defined by

$$w_L(\mathbf{a}) = \sum_{i=1}^{n} |a_i| = d_L(\mathbf{a}, \mathbf{0}).$$

• The minimum Lee distance, dL(X), of a subset X of Zm^n is the minimum Lee distance between any pair of distinct vectors in X.

Remark 3.1.

• The elements 0, 1, 2, ..., ⌊m/2⌋ of Zm are defined as the positive elements. The rest of the elements are the negative ones [8].

• The minimum Lee distance of a code is defined as the minimum Lee distance between all pairs of distinct codewords. For linear codes, the difference of any two codewords is also a codeword. Thus, the minimum Lee distance of a linear code is equal to the minimum Lee weight of its nonzero codewords.

• The minimum Lee distance of a code is greater than or equal to the minimum Hamming distance of the same code, and smaller than or equal to the Lee distance between the two codewords which define the minimum Hamming distance.

• The Lee distance defines a metric over Zm.

• For m = 2 and 3, the Lee and Hamming distances coincide. For m > 3, the Lee distance between two n-tuples is greater than or equal to the Hamming distance between them.
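The definitions above translate directly into code. The following sketch (with function names of our choosing) computes Lee values, distances, and weights over Zm.

```python
# Lee value, distance, and weight over Z_m, transcribing Definition 3.1.

def lee_value(a, m):
    # |a| = a if a <= floor(m/2), else m - a, for a in Z_m
    a %= m
    return min(a, m - a)

def lee_distance(u, v, m):
    # d_L(u, v) = sum of coordinatewise Lee distances
    return sum(lee_value(ui - vi, m) for ui, vi in zip(u, v))

def lee_weight(u, m):
    return lee_distance(u, [0] * len(u), m)

# Over Z_4 the Lee values of 0, 1, 2, 3 are 0, 1, 2, 1.
print([lee_value(a, 4) for a in range(4)])
print(lee_weight([3, 1, 0, 2, 1, 0, 1], 4))   # 1+1+0+2+1+0+1 = 6
```

Note that for m = 2 and 3 the Lee value of every nonzero element is 1, which is why the Lee and Hamming distances coincide there.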

Let Zq[x] denote the ring of polynomials in the variable x over Zq, where q is a power of a prime p. Let f(x) be a monic polynomial of degree h whose reduction modulo p is irreducible over Zp. Then f(x) is also irreducible over Zq. Let R = Zq[x]/(f(x)) denote the set of residue classes of polynomials in x over Zq, modulo the polynomial f(x). This ring is a commutative local ring with identity and is called a Galois extension of Zq of degree h. Let Gs, where s = p^h – 1, be the maximal cyclic subgroup of R* whose order satisfies gcd(s, p) = 1.

Definition 3.2. [8] Let h = (a1, a2, ..., an) be the locator vector consisting of distinct elements of Gs. Now define the matrix H as

$$H = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ a_1 & a_2 & \cdots & a_n \\ \vdots & \vdots & & \vdots \\ a_1^{r-1} & a_2^{r-1} & \cdots & a_n^{r-1} \end{bmatrix},$$

where r is a positive integer. Then H is the parity-check matrix of a shortened BCH code (n, h) of length n ≤ s over Zq.

Thus, a word c = (c1, c2, ..., cn) ∈ Zq^n is in (n, h) if and only if it satisfies the following r parity-check equations over R:

$$\sum_{j=1}^{n} c_j a_j^{\,l} = 0, \qquad l = 0, 1, \ldots, r - 1.$$

The codes (n, h) for which n = p^h – 1 will be called primitive. In this case, h is unique up to permutation of coordinates.

Given a transmitted word c = (c1, c2, ..., cn) ∈ (n, h) and a received word b ∈ Zq^n, the error vector is defined by e = b – c. The number of Lee errors is given by wL(e); that is, the number of Lee errors is the smallest number of additions of ±1 to the coordinates of the transmitted codeword c which yields the received word b. Since the Lee weight satisfies the triangle inequality, using a code of minimum Lee distance dL allows one to correct any pattern of up to ⌊(dL – 1)/2⌋ Lee errors.

Given a locator vector h = (a1, a2, ..., an) of a code (n, h) and a word b = (b1, b2, ..., bn) ∈ Zq^n, we define the locator polynomial associated with b as the polynomial

$$\lambda_{\mathbf{b}}(x) = \prod_{j=1}^{n} (1 - a_j x)^{b_j}.$$

We define the syndrome values s_l of an error vector e = (e1, e2, ..., en) by

$$s_l = \sum_{j=1}^{n} e_j a_j^{\,l}.$$

The formal syndrome series S(x) is defined by

$$S(x) = \sum_{l \ge 0} s_l x^{l}.$$

Given a codeword c ∈ (n, h), following the approach in Roth and Siegel [8], we define the word c+ = (c1+, c2+, ..., cn+) by

$$c_j^{+} = \begin{cases} c_j, & \text{if } c_j \text{ is positive}, \\ 0, & \text{otherwise}, \end{cases}$$

and let c– = c+ – c. That is, c+ is equal to c at the positive entries and is zero otherwise, whereas the entries of c– take the Lee values of the negative entries of c, leaving the other locations zero.

In the next proposition, a lower bound for the Lee weight of a codeword is obtained when wL(c+) ≠ wL(c–).

Proposition 3.1. [8] If c ∈ (n, h) and wL(c+) ≠ wL(c–), then wL(c) ≥ q.

Proof. Since cH^T = 0, we have c+H^T = c–H^T. The first equation in this last equality reads wL(c+) ≡ wL(c–) (mod q), that is, wL(c+) = wL(c–) ± lq for some positive integer l. Therefore, wL(c) = wL(c+) + wL(c–) ≥ |wL(c+) – wL(c–)| = lq ≥ q.

Example 3.1. The polynomial f(x) = x^3 + x + 1 is irreducible over Z4. Thus the finite commutative ring R = Z4[x]/(f(x)) is a Galois extension of Z4. Let a be a root of f(x). We have that b = a^8 generates a cyclic group G7 of order s = 2^3 – 1 = 7 in R*. Letting h = (1, b, b^2, b^3, b^4, b^5, b^6) and r = 2, we have a BCH code (7, h) over Z4. Let

$$H = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & b & b^2 & b^3 & b^4 & b^5 & b^6 \end{bmatrix}$$

be the parity-check matrix. We have that c = (3, 1, 0, 2, 1, 0, 1) ∈ (7, h), with wL(c+) = 5, wL(c–) = 1, and wL(c) = 6 > 4.
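The quantities in Example 3.1 can be checked mechanically. The following self-contained sketch (function names ours) forms c+ and c– for c = (3, 1, 0, 2, 1, 0, 1) over Z4 and evaluates the Lee weights appearing in Proposition 3.1.

```python
# Split c = c+ - c- over Z_4 (q = 4) and check the weights of Example 3.1.
# Positive elements of Z_m are 0, 1, ..., floor(m/2), as in Remark 3.1.

def lee_value(a, m):
    a %= m
    return min(a, m - a)

def lee_weight(u, m):
    return sum(lee_value(a, m) for a in u)

def split(c, m):
    # c+ keeps the positive entries of c; c- = c+ - c holds the Lee values
    # of the negative entries (and is zero elsewhere)
    c_plus = [a if a <= m // 2 else 0 for a in c]
    c_minus = [(p - a) % m for p, a in zip(c_plus, c)]
    return c_plus, c_minus

q = 4
c = [3, 1, 0, 2, 1, 0, 1]
cp, cm = split(c, q)
print(lee_weight(cp, q), lee_weight(cm, q), lee_weight(c, q))  # 5 1 6
```

Since wL(c+) = 5 differs from wL(c–) = 1, Proposition 3.1 guarantees wL(c) ≥ q = 4, and indeed wL(c) = 6.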

4 Acknowledgments

The authors would like to thank the referees for their helpful suggestions and comments, which improved the presentation of this paper.

Received: 7/III/01.

#542/01.

  • [1] A.R. Hammons Jr., P.V. Kumar, A.R. Calderbank, N.J.A. Sloane and P. Solé, The Z4-linearity of Kerdock, Preparata, Goethals and related codes, IEEE Trans. Inform. Theory, IT-40 (1994), pp. 301-319.
  • [2] A.R. Calderbank, G. McGuire, P.V. Kumar and T. Helleseth, Cyclic codes over Z4, locator polynomials, and Newton's identities, IEEE Trans. Inform. Theory, 42 (1996), pp. 217-226.
  • [3] E. Biglieri and M. Elia, On the construction of group block codes, Annales des Telecommunications, Tome 50, No. 9-10 (1995), pp. 817-823.
  • [4] A.A. Andrade and R. Palazzo Jr., Construction and decoding of BCH codes over finite commutative rings, Linear Algebra and Its Applications, 286 (1999) pp. 69-85.
  • [5] M. Greferath and U. Vellbinger, Efficient decoding of Zpk-linear codes, IEEE Trans. Inform. Theory, 44 (1998), pp. 1288-1291.
  • [6] C.Y. Lee, Some properties of nonbinary error-correcting codes, IRE Trans. Inform. Theory, vol. 4, no. 4 (1958), pp. 77-82.
  • [7] W. Ulrich, Non-binary error correction codes, Bell Sys. Tech. J., vol. 36, no. 6 (1957), pp. 1341-1387.
  • [8] R.M. Roth and P.H. Siegel, Lee-metric BCH codes and their application to constrained and partial-response channels, IEEE Trans. Inform. Theory, vol. 40, no. 4 (1994), pp. 1083-1096.
  • [9] G.D. Forney Jr., Generalized minimum distance decoding, IEEE Trans. Inform. Theory, IT-12 (1966) pp. 125-131.
  • [10] B.R. McDonald, Finite Rings with Identity, Marcel Dekker, Inc., New York (1974).
  • [11] J.C. Interlando, R. Palazzo Jr. and M. Elia, On the decoding of Reed-Solomon and BCH codes over integer residue rings, IEEE Trans. Inform. Theory, IT-43 (1997), pp. 1013-1021.
  • [12] E.R. Berlekamp, Algebraic Coding Theory, McGraw-Hill, New York, (1968).
  • [13] G.D. Forney Jr., On decoding BCH codes, IEEE Trans. Inform. Theory, IT-11 (1965), pp. 549-557.
  • [14] F.J. MacWilliams and N.J.A. Sloane, The Theory of Error Correcting Codes, North-Holland, Amsterdam (1977).
  • [15] W.W. Peterson and E.J. Weldon Jr., Error Correcting Codes, MIT Press, Cambridge, Mass., (1972).
  • [16] J.L. Massey, Shift-register synthesis and BCH decoding, IEEE Trans. Inform. Theory, IT-15 (1969), pp. 122-127.

Publication Dates

  • Publication in this collection
    19 July 2004
  • Date of issue
    2003

Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC, São Carlos, SP, Brazil. E-mail: sbmac@sbmac.org.br