
Generalizations of Aitken's process for accelerating the convergence of sequences

Claude Brezinski (I); Michela Redivo Zaglia (II)

(I) Laboratoire Paul Painlevé, UMR CNRS 8524, UFR de Mathématiques Pures et Appliquées, Université des Sciences et Technologies de Lille, 59655 - Villeneuve d'Ascq cedex, France, E-mail: Claude.Brezinski@univ-lille1.fr

(II) Università degli Studi di Padova, Dipartimento di Matematica Pura ed Applicata

Via Trieste 63, 35121 - Padova, Italy, E-mail: Michela.RedivoZaglia@unipd.it

ABSTRACT

When a sequence or an iterative process is slowly converging, a convergence acceleration process has to be used. It consists in transforming the slowly converging sequence into a new one which, under some assumptions, converges faster to the same limit. In this paper, new scalar sequence transformations having a kernel (the set of sequences transformed into a constant sequence) generalizing the kernel of Aitken's Δ² process are constructed. Then, these transformations are extended to vector sequences. They also lead to new fixed point methods which are studied.

Mathematical subject classification: Primary: 65B05, 65B99; Secondary: 65H10.

Key words: convergence acceleration, Aitken process, extrapolation, fixed point methods.

1 Introduction

When a sequence (Sn) of real or complex numbers is slowly converging, it can be transformed into a new sequence (Tn) by a sequence transformation. Aitken's Δ² process and Richardson extrapolation (which gives rise to Romberg's method for accelerating the convergence of the trapezoidal rule) are the best-known sequence transformations. It has been proved that a sequence transformation able to accelerate the convergence of all sequences cannot exist [8] (see also [7]). In fact, each transformation is only able to accelerate the convergence of special classes of sequences. This is the reason why several sequence transformations have to be constructed and studied.

For constructing a new sequence transformation, an important object is its kernel (we will explain why below). It is the set of sequences (Sn), characterized by a particular expression or satisfying a particular relation between its terms, both involving an unknown parameter S (the limit of the sequence if it converges or its antilimit if it does not converge), that are transformed into the constant sequence (Tn = S). For example, the kernel of Aitken's Δ² process (see its definition below) is the set of sequences of the form Sn = S + aλ^n, n = 0,1,..., where a ≠ 0 and λ ≠ 1 or, equivalently, satisfying the relation a0(Sn - S) + a1(Sn+1 - S) = 0, for n = 0,1,..., with a0a1 ≠ 0 and a0 + a1 ≠ 0. If |λ| < 1, then (Sn) converges to its limit S. Otherwise, S is called the antilimit of the sequence (Sn).

The construction of a sequence transformation having a specific kernel consists in giving the exact expression of the parameter S for any sequence belonging to this kernel. This expression makes use of several consecutive terms of the sequence starting from Sn, and it is valid for all n. Thus, by construction, for all n, Tn = S. When applied to a sequence not belonging to its kernel, the transformation produces a sequence (Tn) which, under some assumptions, converges to S faster than (Sn), that is

limn→∞ (Tn - S)/(Sn - S) = 0.

In that case, it is said that the transformation accelerates the convergence of (Sn).

In fact, a sequence transformation is based on interpolation followed by extrapolation. For example, the parameters a, λ and S appearing in the kernel of Aitken's process are computed by solving the system Sn+i = S + aλ^{n+i} for i = 0,1,2. If the sequence (Sn) to be transformed does not belong to the kernel, then the value of S (and also those of a and λ) obtained from this system depends on n and it is denoted by Tn. In order to fully understand the procedure followed for obtaining the transformations given in this paper, let us explain in detail how Aitken's Δ² process is derived from its kernel. The sequences of its kernel satisfy (Sn - S)/λ^n = a. Applying the usual forward difference operator Δ (it is an annihilation operator as will be explained below) to both sides leads to Δ((Sn - S)/λ^n) = Δa = 0. Thus Sn+1 - S = λ(Sn - S) and it follows S = (Sn+1 - λSn)/(1 - λ). The problem is now to compute λ. Applying the operator Δ to Sn = S + aλ^n gives ΔSn = aλ^n(λ - 1), and we obtain λ = ΔSn+1/ΔSn. Thus, replacing λ by this expression in the formula for S leads to the transformation

Tn = (SnSn+2 - (Sn+1)²)/(Sn+2 - 2Sn+1 + Sn)      (1)

which, by construction, has a kernel including all sequences of the form Sn = S + aλ^n or, equivalently, such that Sn+1 - S = λ(Sn - S) for all n. We see that the denominator in this formula is Δ²Sn, thus the name of the transformation.
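For illustration, here is a minimal Python/NumPy sketch of the process just derived; the function name and the test values are our own and are not taken from the paper.

    import numpy as np

    def aitken_formula1(S):
        """Aitken's Delta^2 process written as in Formula (1):
        T_n = (S_n*S_{n+2} - S_{n+1}^2) / (S_{n+2} - 2*S_{n+1} + S_n)."""
        S = np.asarray(S, dtype=float)
        num = S[:-2] * S[2:] - S[1:-1] ** 2
        den = S[2:] - 2.0 * S[1:-1] + S[:-2]
        return num / den

    # A sequence from the kernel, S_n = S + a*lam^n, with S = 1, a = 2, lam = 0.8 (arbitrary values):
    n = np.arange(10)
    print(aitken_formula1(1.0 + 2.0 * 0.8 ** n))   # every entry is (numerically) equal to 1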

An important point to notice for numerical applications is that Formula (1) is numerically unstable. It has to be put under one of the equivalent forms

Tn = Sn - (ΔSn)²/Δ²Sn = Sn+1 - ΔSnΔSn+1/Δ²Sn = Sn+2 - (ΔSn+1)²/Δ²Sn

which are more stable. Indeed, when the terms Sn, Sn+1 and Sn+2 are close to S, a cancellation appears in Formula (1). Its numerator and its denominator are close to zero, thus producing a first order cancellation. A cancellation also appears in the three preceding formulae, but it is a cancellation on a correcting term, that is, in some sense, a second order cancellation (see [5, pp. 400-403] for an extensive discussion).

Similarly, in the sequel, when a transformation is written as Tn = Nn/Dn, it is usually unstable. If the computation of Tn makes use of Sn,...,Sn+k, then Tn can also be put under one of the forms Tn = Sn+i - (Sn+iDn - Nn)/Dn, for any i = 0,...,k, which, after simplification in the numerator Sn+iDn - Nn, is more stable.
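The following small experiment (ours, with arbitrarily chosen values) illustrates this point for Aitken's process: it evaluates Formula (1) directly and in the corrected form Tn = Sn - (ΔSn)²/Δ²Sn on a kernel sequence whose terms are very close to S.

    import numpy as np

    S_exact, a, lam = 1.0, 2.0, 0.8
    n = np.arange(60)
    S = S_exact + a * lam ** n                  # terms eventually very close to S_exact

    num = S[:-2] * S[2:] - S[1:-1] ** 2
    den = S[2:] - 2.0 * S[1:-1] + S[:-2]
    T_unstable = num / den                      # Formula (1): a ratio of two vanishing quantities

    dS = np.diff(S)
    T_stable = S[:-2] - dS[:-1] ** 2 / np.diff(dS)   # T_n = S_n - (Delta S_n)^2 / Delta^2 S_n

    print(abs(T_unstable[-1] - S_exact), abs(T_stable[-1] - S_exact))
    # The first error is typically several orders of magnitude larger than the second one.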

By construction, we saw that, for all n, Tn = S, if (Sn) belongs to the kernel of the transformation. To prove that this condition is also necessary is more difficult. One has to start from the condition for all n, Tn = S, and then to show that it implies that the relation defining the kernel is satisfied. This is why, in this paper, we will only say that the kernels of the transformations studied include all sequences having the corresponding form since additional sequences can also belong to the kernel. Let us mention that, for the Aitken's process, the condition is necessary and sufficient.

Of course, one can ask why the notion of kernel is an important one. Although this result was never proved, it is hoped (and it was experimentally verified) that if a sequence is not "too far away" from the kernel of a certain transformation, then this transformation will accelerate its convergence. For example, the kernel of Aitken's process can also be described as the set of sequences such that for all n, (Sn+1 - S)/(Sn - S) = λ ≠ 1. It is easy to prove that this process accelerates the convergence of all sequences for which there exists λ ≠ 1 such that limn→∞ (Sn+1 - S)/(Sn - S) = λ. On sequence transformations, their kernels, and extrapolation methods see, for example, [5, 14, 16].

Aitken's Δ² process is one of the most popular and effective convergence acceleration methods. In this paper we will construct scalar sequence transformations whose kernels generalize the kernel of this transformation which consists, as we saw above, of sequences such that Sn = S + aλ^n for n = 0,1,... . Defining and studying generalizations of Aitken's process leads to interesting applications such as those described in [11] and [12]. In this paper, we will consider two new generalizations of the kernel of Aitken's process, namely consisting of sequences of the form Sn = S + (a + bxn)λ^n (Section 2) and Sn = S + λ^n/(a + bxn) (Section 3), where (xn) is a given known sequence. Compared to the sequences in the kernel of Aitken's process, the additional term bxn can completely change the behaviour since non-monotonic sequences or non strictly alternating ones are now included in the kernel. According to the value of λ and to the choice of (xn), we can have sequences whose error (in absolute value) first increases, and then tends to zero, or divergent sequences whose error (in absolute value) begins to approach zero, and then goes to infinity, thus imitating asymptotic series. This extra term can also be considered as a subdominant contribution (that is, a kind of second order term) to the sequence as pointed out by Weniger [15]. Let us mention that, as shown in particular in [13], there is a strong connection between asymptotics and extrapolation methods.

In Section 5, these transformations will be extended to vector sequences. The related fixed point methods will be studied in Section 6.

The following definitions are needed in the sequel. The forward difference operator Δ is defined by

Δun = un+1 - un,

Δ^{k+1}un = Δ^kun+1 - Δ^kun,

the divided difference operator δ is defined by

δun = Δun/Δxn = (un+1 - un)/(xn+1 - xn),

and the reciprocal difference operators ρk by

ρk+1un = ρk-1un+1 + (xn+k+1 - xn)/(ρkun+1 - ρkun),   k = 0, 1, ...,

with

ρ-1un = 0 and ρ0un = un.

We also recall Leibniz's rule for the operator Δ

Δ(unvn) = un+1Δvn + vnΔun.
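For later use, here is a short Python sketch of the operators Δ and δ and a numerical check of Leibniz's rule; it assumes, consistently with Sections 2.2 and 3, that the divided difference is taken with respect to the auxiliary sequence (xn). The reciprocal differences are omitted.

    import numpy as np

    def delta(u, k=1):
        """k-th forward difference Delta^k u_n."""
        return np.diff(np.asarray(u, dtype=float), n=k)

    def divided_difference(u, x):
        """Divided difference delta u_n = Delta u_n / Delta x_n (with respect to (x_n))."""
        return np.diff(np.asarray(u, dtype=float)) / np.diff(np.asarray(x, dtype=float))

    # Leibniz's rule: Delta(u_n v_n) = u_{n+1} Delta v_n + v_n Delta u_n
    u = np.arange(1.0, 6.0)
    v = u ** 2
    print(np.allclose(delta(u * v), u[1:] * delta(v) + v[:-1] * delta(u)))   # True
    print(divided_difference(v, u))   # (v_{n+1}-v_n)/(u_{n+1}-u_n) = u_{n+1}+u_n = [3, 5, 7, 9]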

2 A first scalar kernel

We will construct a sequence transformation with a kernel containing all sequences of the form

Sn = S + (a + bxn)λ^n,   n = 0, 1, ...,      (2)

where S, a, b and λ are unknown (possibly complex) numbers and (xn) a known (possibly complex) sequence.

We have, for all n,

(Sn - S)/λ^n = a + bxn.      (3)

2.1 First technique

From (3), we obtain

Sn+1 - S = λ(Sn - S) + bλ^{n+1}Δxn.      (4)

Extracting S from this relation leads to a first transformation whose kernel includes all sequences of the form (2)

Tn = (Sn+1 - λSn - bλ^{n+1}Δxn)/(1 - λ).      (5)

The problem is now to compute the unknowns b and λ (or λ and bλ^{n+1}) appearing in (5). Applying the forward difference operator Δ to (4), we get

ΔSn+1 - λΔSn = bλ^{n+1}(λΔxn+1 - Δxn)      (6)

which gives b (or bλ^{n+1}) if λ is known.

Writing down (6) also for the index n + 1, we obtain a system of two nonlinear equations in our unknowns. The unknown product bλ^{n+1} can be eliminated by division and we get, after rearrangement of the terms,

(ΔSn+2 - λΔSn+1)(λΔxn+1 - Δxn) = λ(λΔxn+2 - Δxn+1)(ΔSn+1 - λΔSn).      (7)

This is a cubic equation which provides, in the real case, a unique λ only if it has one single real zero. So, another procedure for the computation of λ has to be given.

It is possible to compute λ by writing this cubic equation for the indexes n, n + 1 and n + 2. Thus we obtain a system of 3 linear equations in the 3 unknowns λ, λ², and λ³

αn+iλ³ + βn+iλ² + γn+iλ = δn+i,   i = 0, 1, 2

with

αn+i = Δxn+2+iΔSn+i

βn+i = -(Δxn+1+iΔSn+1+i + Δxn+2+iΔSn+1+i + Δxn+1+iΔSn+i)

γn+i = Δxn+1+iΔSn+2+i + Δxn+iΔSn+1+i + Δxn+1+iΔSn+1+i

δn+i = Δxn+iΔSn+i+2.

We solve this system for the unknown λ, then we compute λ^{n+1}, and we finally obtain b by (6).

Another way to proceed is to replace bλ^{n+1} by its expression in (5). Then the transformation can also be written as

with rn = Δxn+1/Δxn, and where λ is computed by solving the preceding linear system. Formula (9) is more stable than Formula (8). In (9), it is also possible to replace λ² by its value given as the solution of the preceding linear system instead of squaring λ, thus leading to a different transformation with also a kernel containing all sequences of the form (2).
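To make this first technique concrete, here is a hedged Python sketch: the 3 x 3 linear system for λ uses the coefficients given above, and S is then recovered along the lines of (4)-(6). The code, the function name and the test values are ours, intended only as a sketch and not as the paper's implementation.

    import numpy as np

    def first_transformation(S, x, n=0):
        """Sketch of the first technique for one index n: lambda is obtained from the 3x3 system
        in (lambda^3, lambda^2, lambda) with the coefficients alpha, beta, gamma, delta given above;
        S is then recovered directly from the kernel S_n = S + (a + b*x_n)*lam^n.
        The last step is our reconstruction and may differ in form from the paper's (5), (8), (9)."""
        S, x = np.asarray(S, dtype=float), np.asarray(x, dtype=float)
        dS, dx = np.diff(S), np.diff(x)

        A, rhs = np.empty((3, 3)), np.empty(3)
        for i in range(3):
            m = n + i
            A[i, 0] = dx[m + 2] * dS[m]                                                     # alpha_{n+i}
            A[i, 1] = -(dx[m + 1] * dS[m + 1] + dx[m + 2] * dS[m + 1] + dx[m + 1] * dS[m])  # beta_{n+i}
            A[i, 2] = dx[m + 1] * dS[m + 2] + dx[m] * dS[m + 1] + dx[m + 1] * dS[m + 1]     # gamma_{n+i}
            rhs[i] = dx[m] * dS[m + 2]                                                      # delta_{n+i}
        _, _, lam = np.linalg.solve(A, rhs)        # solution vector is (lambda^3, lambda^2, lambda)

        # From the kernel: Delta S_{n+1} - lam*Delta S_n = b*lam^{n+1}*(lam*Delta x_{n+1} - Delta x_n)
        b_lam = (dS[n + 1] - lam * dS[n]) / (lam * dx[n + 1] - dx[n])    # this is b*lam^{n+1}
        # and S_{n+1} - S = lam*(S_n - S) + b*lam^{n+1}*Delta x_n, hence
        return (S[n + 1] - lam * S[n] - b_lam * dx[n]) / (1.0 - lam)

    # Kernel sequence S_n = S + (a + b*x_n)*lam^n with S = 1, a = 2, b = 1, lam = 0.6, x_n = n^2:
    n = np.arange(10)
    x = n ** 2.0
    print(first_transformation(1.0 + (2.0 + x) * 0.6 ** n, x))   # prints (approximately) 1.0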

Remark 1. Obviously, for sequences which do not have the form (2), the value of λ obtained by the previous procedures depends on n.

If Δxn is constant, λ = 1 satisfies the preceding system but its matrix is singular. In this case, (7) reduces to

ΔSn+2 - λΔSn+1 = λ(ΔSn+1 - λΔSn)

and λ can be computed by solving the system

ΔSn+iλ² - 2ΔSn+1+iλ = -ΔSn+i+2,   i = 0, 1.

Then, the transformation given by (8) or (9) becomes

Formula (11) is more stable than (10).

Let us give a numerical example to illustrate this transformation. We consider the sequence

with xn = n^a. We took S = 1, λ = 1.15, a = 3.5, and b = 2. With these values, the first term in (Sn) diverges, while the second one tends to zero and, according to [15], is a subdominant contribution to (Sn). On Figure 1, the solid lines represent, in a logarithmic scale, and from top to bottom, the absolute errors of Sn, of Aitken's Δ² process (which uses 3 consecutive terms of the sequence to accelerate), of its first iterate (which uses 5 terms), and of its second iterate (which uses 7 terms). The dash-dotted line corresponds to the error of (11) (which uses 5 terms) with xn defined as above (which implies the knowledge of a), and the dashed line to the error of (9) (which uses 6 terms). Iterating a process, such as Aitken's, consists in reapplying it to the new sequence obtained by its previous application. On this example, the numerical results obtained by Formula (8) and by the more stable Formula (9) are the same. The computations were performed using Matlab 7.3.


The results of Figure 1 show that Aitken's process and its iterates have no impact on the divergence of the sequence (Sn). On the contrary, the dominant contribution has been suppressed at the beginning by the transformations (9) and (11), and they generate sequences which converge before diverging again. Of course, when n grows, the subdominant contribution is almost zero, and this is why these sequence transformations no longer operate. This is particularly visible with transformation (11) which produces a rapidly diverging sequence. On the contrary, transformation (9) exhibits a behavior similar to the behavior of an asymptotic series. Thus stopping its application after 27 or 28 terms leads to an error of the order of 10⁻⁴. It must be noticed that the numerical results are quite sensitive to changes in the parameters a, λ, and b if Formula (10) is used, while they are not with Formula (11).

2.2 Second technique

Since, for all n, δ(a + bxn) is a constant, then for all n, Δδ(a + bxn) = 0. Thus

and it follows, for n = 0,1,...,

ΔxnSn+2 - λ(Δxn + Δxn+1)Sn+1 + λ²Δxn+1Sn = (1 - λ)(Δxn - λΔxn+1)S.      (13)

Extracting S from this relation, we obtain the following sequence transformation whose kernel includes all sequences of the form (2)

Formula (15) is more stable than (14).

The problem is again to express λ from the terms of the sequence (Sn). We assume that an annihilation operator for the sequence (Δxn) is known, that is a linear operator L such that for all n, L(Δxn) = 0. Such operators are quite useful in deriving sequence transformations (as in the case of Aitken's process where L ≡ Δ). They were introduced by Weniger [14]. Thus, applying L to (13), we get, for all n,

λ²L(SnΔxn+1) - λL(Sn+1(Δxn + Δxn+1)) + L(Sn+2Δxn) = 0.      (16)

This polynomial equation of degree 2 has 2 solutions and we don't know which solution is the right one. So, we will compute simultaneously λ and λ². For that, we write (16) for the indexes n and n + 1, and we obtain a system of two linear equations in these two unknowns

λL(Sn+1(Δxn + Δxn+1)) - λ²L(SnΔxn+1) = L(Sn+2Δxn)

λL(Sn+2(Δxn+1 + Δxn+2)) - λ²L(Sn+1Δxn+2) = L(Sn+3Δxn+1).

It must be noticed that this approach requires more terms of the sequence than solving directly the quadratic equation (16) for λ.

The solution of the system is

with

Replacing λ and λ² in (14) or (15) completely defines our sequence transformation. Let us remark that, for sequences which are not of the form (2), a different transformation is obtained if the preceding expression for λ is used in (14) or (15), and then squared for getting λ².

Remark 2. The annihilation operator L used for obtaining (16) from (13) must be independent of n since it has to be applied to a linear combination of terms of the sequence (xn) whose coefficients can depend on n. So L cannot be Δδ since, although it is an annihilation operator for (xn), we have Δδ(Δxn) ≠ 0.

Let us mention that annihilation operators are only known for quite simple sequences. For example, the annihilation operator corresponding to xn = n^k is L ≡ Δ^{k+1}, since Δ^{k+1}n^k = Δ^k(Δn^k) = 0. If, for all n, Δxn is constant, the transformation (11) is recovered.
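A hedged Python sketch of this second technique for a polynomial xn of degree k, so that L ≡ Δ^{k+1} can be used: λ and λ² are obtained from the 2 x 2 system written above, and S is then extracted from (13). The code and the concrete test values are ours, given only as a sketch.

    import numpy as np

    def second_technique(S, x, k, n=0):
        """Sketch of the second technique when x_n is a polynomial of degree k in n, so that
        L = Delta^{k+1} annihilates (Delta x_n).  lambda and lambda^2 come from the 2x2 system
        written above; the final extraction of S from (13) is our own reconstruction."""
        S, x = np.asarray(S, dtype=float), np.asarray(x, dtype=float)
        dx = np.diff(x)
        L = lambda u: np.diff(u, k + 1)                # annihilation operator L = Delta^{k+1}

        m = np.arange(len(S) - 2)
        u1 = S[m + 2] * dx[m]                          # the three sequences appearing in (16)
        u2 = S[m + 1] * (dx[m] + dx[m + 1])
        u3 = S[m] * dx[m + 1]

        A = np.array([[L(u2)[n],     -L(u3)[n]],
                      [L(u2)[n + 1], -L(u3)[n + 1]]])
        lam, lam2 = np.linalg.solve(A, np.array([L(u1)[n], L(u1)[n + 1]]))

        num = dx[n] * S[n + 2] - lam * (dx[n] + dx[n + 1]) * S[n + 1] + lam2 * dx[n + 1] * S[n]
        den = (1.0 - lam) * (dx[n] - lam * dx[n + 1])
        return num / den

    # Kernel sequence S_n = S + (a + b*x_n)*lam^n with x_n = n^2 (so k = 2), S = 1, lam = 0.6:
    n = np.arange(12)
    x = n ** 2.0
    print(second_technique(1.0 + (2.0 + x) * 0.6 ** n, x, k=2))   # prints (approximately) 1.0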

This construction could be extended to a kernel whose members have the form Sn = S + Pk(xn)λ^n, where Pk is a polynomial of degree k. Indeed, since δ^kPk(xn) is a constant, we have, for all n,

However, the difficulty in applying the two techniques described above lies in the derivation and the solution of a system of equations involving λ and its powers. We will not pursue this direction here.

3 A second scalar kernel

We will now construct a sequence transformation with a kernel containing all sequences of the form

Sn = S + λ^n/(a + bxn),   n = 0, 1, ...,      (17)

where S, a, b and λ are unknown scalars and (xn) a known sequence.

We have, for all n, δ(a + bxn) = δ(λ^n/(Sn - S)) = b. Thus

which is a nonlinear equation in S. Therefore the problem is to bypass such a nonlinearity.

We have, for all n,

and, therefore,

Setting en = Sn - S and reducing to the same denominator, we have

λ²Δxnenen+1 - λ(Δxn + Δxn+1)enen+2 + Δxn+1en+1en+2 = 0.      (18)

3.1 First technique

The main drawback of (18) is that it is a quadratic equation in S. Let us consider the particular case where Δxn is constant. If we apply the operator Δ, then, by Leibniz's rule, Δ(en+pen+q) = en+p+1Δen+q + en+qΔen+p. Since Δen+i = ΔSn+i, we obtain a linear expression in S. Remark that, for each product en+pen+q, we can choose, in Leibniz's rule, either un = en+p and vn = en+q or vice versa. However, after simplification, all these choices lead to the same expression.

Applying Δ to (18), we obtain, when Δxn is constant,

Define the transformation

Tn = Nn/Dn,      (20)

with

Nn = λ²Sn+1(Sn+2 - Sn) - 2λ(Sn+3Sn+1 - Sn+2Sn) + Sn+2(Sn+3 - Sn+1)

Dn = λ²(Sn+2 - Sn) - 2λ(Sn+3 - Sn+2 + Sn+1 - Sn) + (Sn+3 - Sn+1).

Then, by (19), the kernel of the transformation (20) includes all the sequences of the form (17).
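Since Nn and Dn are given explicitly, transformation (20) is straightforward to code once λ is available; in the sketch below (ours) λ is simply supplied, whereas in practice it would be obtained from the system (21) discussed next. The check uses a sequence of the form (17) with xn = n, so that Δxn is constant.

    import numpy as np

    def transformation_20(S, lam):
        """Transformation (20), T_n = N_n / D_n, with N_n and D_n as written above.
        lam is supplied here; in practice it is obtained from the linear system (21)."""
        S = np.asarray(S, dtype=float)
        S0, S1, S2, S3 = S[:-3], S[1:-2], S[2:-1], S[3:]
        N = lam**2 * S1 * (S2 - S0) - 2 * lam * (S3 * S1 - S2 * S0) + S2 * (S3 - S1)
        D = lam**2 * (S2 - S0) - 2 * lam * (S3 - S2 + S1 - S0) + (S3 - S1)
        return N / D

    # Kernel sequence of the form (17) with x_n = n (so Delta x_n is constant):
    n = np.arange(10)
    S_seq = 1.0 + 0.5 ** n / (1.0 + n)       # S = 1, lam = 0.5, a = 1, b = 1
    print(transformation_20(S_seq, 0.5))     # every entry is (approximately) equal to 1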

It remains to compute λ, and the problem can be solved as above. If, in (19), we do not separate λ and S, the unknowns are λ², λ, λ²S, λS and S. So, writing (19) for the indexes n,...,n + 4, leads to a system of 5 linear equations in these 5 unknowns

which provides λ and λ².

This system can also be solved directly for the unknown S, thus providing another transformation whose kernel includes all sequences of the form (17).

Let us give a numerical example to illustrate this transformation. We consider the sequence

We took S = 1, λ = -1.2, b = 0.1, a = 1.1, and b = 2.5. With these values, the first term in (Sn) diverges, while the second one tends to zero. This second term is a subdominant contribution to (Sn). On Figure 2, the solid lines represent, in a logarithmic scale, and from top to bottom, the absolute errors of Sn, of Aitken's Δ² process (which uses 3 consecutive terms of the sequence to accelerate), of its first iterate (which uses 5 terms), and of its second iterate (which uses 7 terms). The dashed line corresponds to the error of (20) (which uses 8 terms).


We see that, as in our first example, the dominant contribution has been suppressed at the beginning and that, when n grows, the subdominant contribution is almost zero and this is why the sequence transformation no longer operates.

3.2 Second technique

Equation (18) can be written as

Remark 3. If the reciprocal difference operator ρ is applied to the sequence 1/(a + bxn) = (Sn - S)/λ^n, we get, for all n, ρ2(1/(a + bxn)) = ρ2((Sn - S)/λ^n) = 0, and (18) is exactly recovered.

For extracting S from (23), we need to know an annihilation operator L for the sequence (Δxn+1 - λΔxn), and we obtain the following transformation whose kernel includes all sequences of the form (17)

Here

As explained above, if xn is a polynomial of degree k in n, then L ≡ Δ^{k+1}.

The problem is now to compute λ. However, if the unknowns λ and S are not separated in (23), we obtain, after applying the annihilation operator L, an equation in the unknowns λ², λ, λ²S, λS and S. So, writing (23) for the indexes n, ..., n + 4, leads to a system of 5 linear equations in these 5 unknowns. This system can be solved for the unknown λ and its value used in (24). The system can also be solved for the unknowns λ and λ² and their values used in (24). Finally, the system can be solved directly for the unknown S. Thus several transformations whose kernels include all sequences of the form (17) can be obtained. When, for all n, xn = n, the transformation (24) reduces to (20) since the λ's obtained from these systems are the halves of the λ corresponding to the system (21).

4 Other transformations

Let us now discuss some additional transformations.

In the particular case xn = n, the following transformation also has a kernel including all sequences of the form (2). It was obtained by Durbin [10], and it is defined as

Let us mention that, in this particular case, the second Shanks transformation (the fourth column of the ε-algorithm) e2: (Sn) → (e2(Sn) = ε4^(n)) also has a kernel containing all sequences of the form (2), see [4].

We remark that the denominator of (25) is Δ⁴Sn, and it is easy to see that this transformation can also be written as

This expression leads to the idea of other transformations of a similar form with a denominator equal to

where C(k, i) = k!/(i!(k - i)!) is the binomial coefficient. So, we obtain a whole class of transformations defined by

The kernels of these transformations are unknown, but it is easy to see that they all contain the kernel of Aitken's process. These transformations have to be compared with the Δk processes [9] given by

The case k = 1 corresponds to Aitken's Δ² process and, for k = 2, the second column of the θ-algorithm is recovered (see [5]). The kernel of this transformation is the set of sequences such that, for all n,

that is

where Pk-1 is a polynomial of degree k - 1 in n.

5 The vector case

The sequence transformations described in Sections 2 and 3 will now be used in the case where Sn and S are vectors of dimension p. Obviously, for a vector sequence, a scalar transformation could be used separately on each component. However, such a procedure is usually less efficient than using a transformation specially built for treating vector sequences.

We begin with the first kernel studied in Section 2. Different situations could be considered

Since our purpose is only to show how to proceed with vector sequences, we will not treat all these cases. The procedures followed are similar to those described in [6] and [1].

Let us consider the first transformation in the case where λ and b are scalars, and a and xn vectors. Formulae (5) and (6) are still valid. Taking the scalar product of (6) with two linearly independent vectors y1 and y2, and eliminating b gives

Writing down this relation also for the index n + 1 leads to a system of two equations in the two unknowns λ and λ², which completely defines the first vector transformation.
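A possible Python realization of this first vector transformation is sketched below; the elimination of bλ^{n+1}, the 2 x 2 system and the final vector formula for S are our own reconstruction of this procedure, and the test data are chosen arbitrarily.

    import numpy as np

    def vector_first_transformation(S, x, y1, y2, n=0):
        """Sketch of the first vector transformation (lambda and b scalars, a and x_n vectors).
        Relation (6) is projected onto y1 and y2, b*lam^{n+1} is eliminated, and the resulting
        relation, written for the indexes n and n+1, gives a 2x2 system in (lambda, lambda^2).
        The vector analogue of (5) then gives S.  This is our reconstruction, not the paper's code."""
        S, x = np.asarray(S, dtype=float), np.asarray(x, dtype=float)   # arrays of shape (N, p)
        dS, dx = np.diff(S, axis=0), np.diff(x, axis=0)

        def row(m):
            A1, B1, C1, D1 = y1 @ dS[m + 1], y1 @ dS[m], y1 @ dx[m + 1], y1 @ dx[m]
            A2, B2, C2, D2 = y2 @ dS[m + 1], y2 @ dS[m], y2 @ dx[m + 1], y2 @ dx[m]
            # coefficients of lambda, of lambda^2, and the right-hand side
            return (A1 * C2 + B1 * D2 - A2 * C1 - B2 * D1, B2 * C1 - B1 * C2, A1 * D2 - A2 * D1)

        r0, r1 = row(n), row(n + 1)
        lam, lam2 = np.linalg.solve(np.array([r0[:2], r1[:2]]), np.array([r0[2], r1[2]]))
        # lam2 (= lambda^2) is not needed below; it could serve as a consistency check

        b_lam = (y1 @ (dS[n + 1] - lam * dS[n])) / (y1 @ (lam * dx[n + 1] - dx[n]))  # b*lam^{n+1}
        return (S[n + 1] - lam * S[n] - b_lam * dx[n]) / (1.0 - lam)

    # Vector kernel sequence S_n = S + (a + b*x_n)*lam^n in R^3 (test data chosen arbitrarily):
    rng = np.random.default_rng(0)
    p, lam, b, S_limit = 3, 0.7, 2.0, np.ones(3)
    a, x = rng.standard_normal(p), rng.standard_normal((8, p))
    S_seq = S_limit + (a + b * x) * lam ** np.arange(8)[:, None]
    y1, y2 = rng.standard_normal(p), rng.standard_normal(p)
    print(vector_first_transformation(S_seq, x, y1, y2))   # close to [1, 1, 1]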

Another way of computing λ and λ² consists in writing down (6) for the indexes n and n + 1 and taking the scalar products with a unique vector y. Eliminating b gives an equation in our two unknowns. Then, this equation is written down for the indexes n and n + 1, thus leading again to a system of two equations.

For the second transformation, nothing has to be changed until (16) included. For the computation of λ and λ² one can proceed as for the first transformation. The relation (16) can be multiplied scalarly by y1 and y2. Thus, a system of two equations is obtained for the unknowns. It is also possible to write down (16) for the indexes n and n + 1 and to multiply these two equations scalarly by the same vector y.

For the second kernel considered in Section 3, it can be written as Sn = S + (a + bxn)^{-1}λ^n, which shows that a + bxn can be a matrix and λ a vector. Obviously, a and bxn cannot be vectors. This second kernel can then be treated in a way similar to the first one.

6 Fixed point methods

There is a close connection between sequence transformations and fixed point iterations for finding x ∈ ℝ^p such that x = F(x), where F is a mapping of ℝ^p into itself [2]. In this Section, we will see how to convert the vector sequence transformations of Section 5 into fixed point methods. The procedure is similar to obtaining Steffensen's method from Aitken's Δ² process in the case p = 1. The transformations obtained in this way are often related to quasi-Newton methods; see [3].

In each sequence transformation, the computation of Tn makes use of Sn, ..., Sn+m, where the value of m differs for each of them. An iteration of a fixed point method based on a sequence transformation consists in the following steps for computing the new iterate xn+1 from the previous one xn (a short code sketch is given after the steps):

1. Set S0 = xn.

2. Compute Si+1 = F(Si) for i = 0, ..., m - 1.

3. Apply the sequence transformation to S0, ..., Sm, and compute T0.

4. Set xn+1 = T0.
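These four steps translate directly into code. The sketch below is ours; taking for the transformation the scalar Aitken Δ² process (m = 2) gives Steffensen's method, and the test function F(x) = cos(x) is only an example, not taken from the paper.

    import numpy as np

    def fixed_point(F, x0, transform, m, iterations=20):
        """Fixed point iteration built from a sequence transformation, following steps 1-4 above:
        S_0 = x_n, S_{i+1} = F(S_i) for i = 0,...,m-1, then x_{n+1} = T_0."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iterations):
            S = [x]
            for _ in range(m):
                S.append(F(S[-1]))
            x_new = transform(np.array(S))
            if not np.all(np.isfinite(x_new)):   # breakdown (e.g. a zero denominator): stop
                break
            converged = np.allclose(x_new, x)
            x = x_new
            if converged:
                break
        return x

    # With the scalar Aitken Delta^2 process (m = 2) this is Steffensen's method.
    aitken_T0 = lambda S: S[0] - (S[1] - S[0]) ** 2 / (S[2] - 2 * S[1] + S[0])
    print(fixed_point(np.cos, 1.0, aitken_T0, m=2))   # fixed point of cos(x), about 0.7390851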

7 Conclusions

In this paper, we discussed how to construct scalar sequence transformations with certain kernels generalizing the kernel of Aitken's Δ² process. As we could see, generalizations for the types of kernels we considered are not so easy to obtain and the corresponding algorithms need some effort to be implemented. However, we experimented with some cases where they were quite effective. The convergence and the acceleration properties of these transformations remain to be studied. Then, we showed how to extend some of these transformations to the case of vector sequences. Since we only gave the idea of how to proceed, a systematic study of such vector transformations and their applications has to be pursued. Finally, we explained how to convert these vector transformations into fixed point iterations. They also have to be analyzed.

Acknowledgements. We would like to thank Mohamed Ait Tidili for a careful reading of a first version of this work, and pointing out some improvements. We also thank the referee for her/his careful reading, and her/his remarks and suggestions which helped us to clarify some points, and to ameliorate the paper.

Received: 09/X/06. Accepted: 05/III/07.

#682/06.

  • [1] C. Brezinski, Vector sequence transformations: methodology and applications to linear systems. J. Comput. Appl. Math., 98 (1998), 149-175.
  • [2] C. Brezinski, Dynamical systems and sequence transformations. J. Phys. A: Math. Gen., 34 (2001), 10659-10669.
  • [3] C. Brezinski, A classification of quasi-Newton methods. Numer. Algorithms, 33 (2003), 123-135.
  • [4] C. Brezinski and M. Crouzeix, Remarques sur le procédé Δ² d'Aitken. C.R. Acad. Sci. Paris, 270 A (1970), 896-898.
  • [5] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, Amsterdam, 1991.
  • [6] C. Brezinski and M. Redivo Zaglia, Vector and matrix sequence transformations based on biorthogonality. Appl. Numer. Math., 21 (1996), 353-373.
  • [7] J.P. Delahaye, Sequence Transformations, Springer-Verlag, Berlin, 1988.
  • [8] J.P. Delahaye and B. Germain-Bonne, Résultats négatifs en accélération de la convergence. Numer. Math., 35 (1980), 443-457.
  • [9] J.E. Drummond, A formula for accelerating the convergence of a general series. Bull. Aust. Math. Soc., 6 (1972), 69-74.
  • [10] F. Durbin, Private communication, 20 June 2003.
  • [11] G. Fikioris, An application of convergence acceleration methods. IEEE Trans. Antennas Propagat., 47 (1999), 1758-1760.
  • [12] A. Navidi, Modification of Aitken's Δ² formula that results in a powerful convergence accelerator, in Proceedings ICNAAM-2005, T.E. Simos et al. eds., Wiley-VCH, Weinheim, 2005, pp. 413-418.
  • [13] G. Walz, Asymptotics and Extrapolation, Akademie Verlag, Berlin, 1996.
  • [14] E.J. Weniger, Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series. Comput. Physics Reports, 10 (1989), 189-371.
  • [15] E.J. Weniger, Private communication, 12 September 2003.
  • [16] J. Wimp, Sequence Transformations and Their Applications, Academic Press, New York, 1981.
