

Lavrentiev-prox-regularization for optimal control of PDEs with state constraints

Martin Gugat

Lehrstuhl 2 für Angewandte Mathematik, Martensstr. 3, 91058 Erlangen, Germany, E-mail: gugat@am.uni-erlangen.de

ABSTRACT

A Lavrentiev prox-regularization method for optimal control problems with pointwise state constraints is introduced, where both the objective function and the constraints are regularized. The convergence of the controls generated by the iterative Lavrentiev prox-regularization algorithm is studied. For a sequence of regularization parameters that converges to zero, strong convergence of the generated control sequence to the optimal control is proved. Due to the prox character of the proposed regularization, the feasibility of the iterates for a given parameter can be improved compared with the non-prox Lavrentiev regularization.

Mathematical subject classification: 49J20, 49M37.

Key words: optimal control, pointwise state constraints, prox regularization, Lavrentiev regularization, PDE-constrained optimization, convergence, feasibility.

1 Introduction

In the applications modelled by optimal control problems, pointwise state constraints are important since practical considerations often require certain restrictions on the state. Unfortunately, for problems of PDE-constrained optimal control with state constraints, in general the corresponding multipliers are not contained in a function space but are only given as measures (see [1]). In order to obtain regular multipliers, the Lavrentiev regularization has been introduced, which transforms the pure state constraint into a mixed state-control constraint. This method is studied for example in [5, 7, 8, 9, 11] and in the references cited there. We do not claim to give a complete list of references on this subject here, but we want to mention in particular the paper [12], where problems of optimal boundary control are studied, and the references therein. Due to the regularization, the regularized auxiliary problems are control problems with mixed pointwise control-state constraints, for which multipliers with L2-regularity exist; see [13].

In the (non-prox) Lavrentiev regularization there is a single real-valued regularization parameter λ > 0. For each parameter λ, an auxiliary problem with a mixed state-control constraint is defined. To obtain convergence, this Lavrentiev regularization parameter λ must converge to 0+. However, as λ decreases the problems become more and more difficult to solve. For each fixed λ > 0 in general the generated controls are infeasible for the original problem.

In this paper we introduce a Lavrentiev prox-regularization method where, for a given parameter value λ, the feasibility is improved. In our regularization, apart from the real-valued regularization parameter λ, a control function appears as a second regularization parameter in the state constraints. If the zero control is chosen, the non-prox Lavrentiev regularization is obtained. During the algorithm, this control parameter is updated iteratively. Moreover, in our method a regularization parameter ε > 0 also appears in the objective function, in the same way as in the classical prox-regularization algorithm (see for example [10, 4]). We show that for a sequence of regularization parameters (λk, εk) converging to zero, the new algorithm, where the control regularization parameter is updated iteratively, generates a sequence of controls that converges with respect to the L2-norm to the optimal control.

We start by considering the elliptic optimal control problem with pointwise state constraints and pointwise control constraints (section 2) and the corresponding Lavrentiev prox-regularization (section 3). Then we turn to the elliptic optimal control problem with pointwise state constraints only (section 4) and the Lavrentiev prox-regularization (section 5) for this problem.

At the end of the paper we present two numerical examples in which we compare the convergence of the Lavrentiev prox-regularization method with the non-prox Lavrentiev regularization; in both examples the Lavrentiev prox iteration converges faster.

2 The Elliptic Problem with pointwise control constraints

In this section we introduce an elliptic optimal control problem with state constraints and L∞-control constraints.

Let N ∈ {2, 3, 4, ...} and let Ω ⊂ ℝN be a bounded domain with C0,1 boundary Γ. Let a desired state yd ∈ L∞(Ω) and a real number κ > 0 be given. Define the objective function

J(y, u) = (1/2) ∫Ω (y − yd)² dx + (κ/2) ∫Ω u² dx.

In addition, let control bounds ua, ub ∈ L∞(Ω) be given such that ua < ub on Ω. Let state bounds ya, yb ∈ L∞(Ω) be given such that ya < yb almost everywhere on Ω. Let ∂n denote the normal derivative with respect to the outward unit normal vector. As in [2], let A be an elliptic differential operator of the form

where the coefficients aij belong to C and satisfy the inequality

m|ξ|² ≤ Σi,j aij(x) ξi ξj ≤ M|ξ|²

for all ξ ∈ ℝN and all x ∈ Ω, with constants M > 0 and m > 0. Moreover, a0 ∈ Lr(Ω) with r > Np/(N + p) for some fixed p > N is not identically zero and satisfies a0 ≥ 0 in Ω.

Define the following elliptic optimal control problem with distributed control, pointwise state constraints and pointwise control constraints:

Note that for a solution u* of Q, we have u* ∈ L∞(Ω).

As in [5], the notation G is used for the control-to-state map that gives the state y as a function of the control u, G : L2(Ω) → H1(Ω). The notation S is used for the control-to-state map as an operator S : L2(Ω) → L2(Ω), which is the composition of G and the suitable embedding operator.

3 Lavrentiev Prox Regularization

For u ∈ L2(Ω) define K(u) = J(G(u), u). Let the Lavrentiev regularization parameter λ > 0, the prox-regularization parameter ε > 0 and ν ∈ L∞(Ω) be given. We consider the regularized problem

Let v* denote the optimal value of Q, and v(λ, ε, ν) denote the optimal value of Qλ,ε,ν. Let F* denote the admissible set of Q and F(λ, ε, ν) denote the admissible set of Qλ,ε,ν.

If ν ∈ F* is a solution of Qλ,ε,ν then

.

Moreover, if u* is the solution of Q we have

.

We consider the following

Lavrentiev Prox-Regularization Algorithm:

For the convenience of the reader, we also describe the (non-prox) Lavrentiev regularization algorithm that has been considered in the literature for example in [5, 7, 8, 9, 11]:

(non-prox) Lavrentiev Regularization Algorithm:
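For the reader who wants to experiment, both iterations can be sketched on a finite-dimensional surrogate of Q. The subproblem form used below, the constraint Su + λ(u − ν) ≤ yb together with the objective K(u) + (ε/2)‖u − ν‖², is an assumption inferred from the constructions in the convergence proof; the matrix S, the data yd and the bound yb are illustrative, and only the upper state bound is imposed for brevity:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Finite-dimensional surrogate (illustrative data): the matrix S stands in
# for the compact control-to-state operator, and
# K(u) = 0.5*||S u - yd||^2 + 0.5*kappa*||u||^2.
rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
S = B @ B.T / n + 0.1 * np.eye(n)   # symmetric positive definite stand-in
yd = np.ones(n)
kappa, yb = 1e-2, 0.9               # only an upper state bound, for brevity

def K(u):
    r = S @ u - yd
    return 0.5 * r @ r + 0.5 * kappa * u @ u

def solve_subproblem(lam, eps, v):
    """Assumed subproblem Q_{lam,eps,v}:
       min K(u) + 0.5*eps*||u - v||^2  s.t.  S u + lam*(u - v) <= yb."""
    con = LinearConstraint(S + lam * np.eye(n), -np.inf, yb + lam * v)
    obj = lambda u: K(u) + 0.5 * eps * np.sum((u - v) ** 2)
    return minimize(obj, v, constraints=[con], method="SLSQP").x

# Lavrentiev prox-regularization: the previous iterate enters the subproblem
u = np.zeros(n)
for k in range(8):
    lam = eps = 0.5 ** k            # parameters driven to zero
    u = solve_subproblem(lam, eps, u)

# non-prox Lavrentiev regularization: v = 0 and eps = 0 in every step
u_nonprox = solve_subproblem(0.5 ** 7, 0.0, np.zeros(n))

violation = float(np.maximum(S @ u - yb, 0.0).max())
print(round(violation, 3))
```

With ν = 0 and ε = 0 in every step, the same routine reproduces the non-prox iteration; this choice of the two extra parameters is the only difference between the algorithms.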

3.1 Uniform boundedness of the feasible sets of Qλ,ε,u

Due to the pointwise control constraints, the feasible points of Qλ,ε,u are uniformly bounded in L∞(Ω):

Lemma 3.1. Let u ∈ F(λ, ε, ν) be a feasible point of Qλ,ε,ν. Then

3.2 Well-definedness and convergence of the generated sequence

In this section we study the convergence of the solutions (uk)k for k → ∞.

Theorem 3.2. Assume that there exists a Slater control û and a constant σ > 0 such that

Define . Let p ∈ (0, 1). Assume that in each step, λk is chosen such that λk < 1 and λk^(1−p) < σ/(2M). Then Q has a solution, the Lavrentiev prox-regularization algorithm is well-defined, and if λk → 0 and εk → 0 we have

Moreover, there exists a constant C10 > 0 such that for all k

For a real number z we use the notation z+ = (z + |z|)/2. Hence we have z+ = max{z, 0}. For the constraint violation we have the upper bound

Proof. First we show the existence of a solution of Q. Since the Slater control û is feasible for Q, we have v* < ∞. Let (mk)k denote a minimizing sequence for Q, that is, the points mk ∈ L2(Ω) are feasible for Q and K(mk) → v*. Since the sequence (mk)k is bounded in L∞(Ω), we can choose a subsequence that converges weakly* in L∞(Ω) to a limit point m̄ ∈ L∞(Ω). Then this subsequence converges also weakly in L2(Ω) to m̄. Since the subsequence converges weakly in L2(Ω), the weak lower semicontinuity of K yields K(m̄) ≤ lim inf K(mk) = v*. Moreover, the weak* convergence in L∞(Ω) implies that m̄ is feasible for Q. Hence m̄ is a solution of Q. Due to the strong convexity of the objective function, this solution is uniquely determined.

Now we consider the sequence (uk) generated by the Lavrentiev prox-regularization algorithm. Due to the control constraints, this sequence is bounded. Choose p ∈ (0, 1). Define τk = λk^p and the function νk = (1 − τk)u* + τk û, where u* denotes the solution of Q and û denotes the Slater control. Then ua < νk < ub and we have

On the other hand, we have G(νk) + λk(νk − uk) > ya. Hence νk ∈ F(λk, εk, uk). This implies that the iteration is well defined.

Now we assume that the sequences (λk)k and (εk)k converge to zero. Then we have τk → 0 and thus νk → u* in L2(Ω). Thus we have

Let ū ∈ L∞(Ω) denote a weak* limit point of the sequence (uk)k. Then ū ∈ F* and we have

.

Since ū ∈ F*, the uniqueness of the solution of Q implies ū = u*. Hence the sequence (uk)k converges weakly* to u*. This implies the equation

Note that the convergence of the states is also an immediate consequence of the compactness of the solution operator S and the weak convergence of uk to u* with respect to the L2(Ω) topology.

The weak convergence of uk to u* in L2(Ω) and the convergence of the norms imply the strong convergence uk → u* in L2(Ω).

There exists a Lipschitz constant C > 0 such that for all points ν1, ν2 ∈ L∞(Ω) in the relevant bounded set we have K(ν1) ≤ K(ν2) + C‖ν1 − ν2‖L2(Ω). Hence we have

where µ(Ω) = ∫Ω 1 dx. For all k > 1, the point νk is in F*. Hence

Define C10 = 2MC. Then (4) follows.

To obtain the bound for the constraint violation we have used the fact that the lower and the upper state bound cannot be violated simultaneously; hence for all controls u the sets M1 = {x ∈ Ω : (G(u) − yb)(x) > 0} and M2 = {x ∈ Ω : (ya − G(u))(x) > 0} are disjoint. On the set M1 we have (G(uk+1) − yb)+ ≤ λk|uk+1 − uk|, and on the set M2 we have (ya − G(uk+1))+ ≤ λk|uk+1 − uk|. Since M1 ∪ M2 ⊂ Ω, the assertion follows by integration.

Remark 3.3. Note that we have the inequality

where the prox-parameter ε does not appear explicitly. Hence for the optimal value we have the upper bound

.

4 The Elliptic Problem without pointwise control constraints

In this section we introduce an elliptic optimal control problem with state constraints. Here no L∞(Ω)-control constraints are present.

Let N ∈ {2, 3} and let Ω ⊂ ℝN be a bounded domain with C0,1 boundary Γ. Let a desired state yd ∈ L∞(Ω) and a real number κ > 0 be given. Define the objective functions J(y, u) and K(u) as above. Let state bounds ya, yb ∈ L∞(Ω) be given.

Define the following elliptic optimal control problem with distributed control and pointwise state constraints: minimize J(y, u) subject to

As in [5], the notation G is used for the control-to-state map that gives the state as a function of the control, G : L2(Ω) → H1(Ω) ∩ L∞(Ω). The notation S is used for the control-to-state map as an operator S : L2(Ω) → L2(Ω), which is the composition of G and the suitable embedding operator.

5 Lavrentiev Prox Regularization

Let the Lavrentiev prox-regularization parameters λ > 0, ε > 0 and ν ∈ L∞(Ω) be given.

We consider the regularized problem

Let ω* denote the optimal value of P, and ω(λ, ε, ν) denote the optimal value of Pλ,ε,ν. Let F* denote the admissible set of P and F(λ, ε, ν) denote the admissible set of Pλ,ε,ν.

Concerning the regularity of the multipliers corresponding to the inequality constraints in Pλ,ε,ν, we can apply Theorem 2.1 in [5], which states that there exist multipliers in the function space L2(Ω).

We consider the following

Lavrentiev Prox-Regularization Algorithm:

As far as the regularization of the objective function is concerned, this algorithm is related to the prox-regularization as considered in [10, 4]. The difference is that for our state-constrained problem regularization terms appear both in the constraints and in the objective function.

In our discussion we use the choice u1 = 0. First we show that the iteration is well-defined. For u1 = 0, problem Pλ1,ε1,u1 is of the form studied in the papers about Lavrentiev regularization [9, 7, 6]; hence the corresponding existence results are applicable. As in Section 3, the non-prox Lavrentiev regularization corresponds to the choice of the regularization parameters uk+1 = 0, εk+1 = 0 for all k > 0; that is, the non-prox Lavrentiev regularization is the algorithm: in step k, solve Pλk,0,0.

In step k, the function uk+1 satisfies the state constraint

.

Hence uk+1 ∈ L∞(Ω) and the function

is feasible for Pλk+1,εk+1,uk+1. Therefore the iteration is well-defined.

5.1 Properties of λI + S

The following lemma states that λ(λI + S)⁻¹ is uniformly bounded. Moreover, these operators converge pointwise to the zero operator for λ → 0+. We use this lemma in Example 2.

Lemma 5.1. Let ‖S‖ denote the operator norm of S as a map from L2(Ω) to L2(Ω). For all λ > 0 we have the inequality

Let u ∈ L2(Ω), u ≠ 0, and let λk > 0 with λk → 0+. Then

.

Proof. See [5].
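Since the displayed inequality is easiest to grasp in finite dimensions, the following check illustrates the two properties of λ(λI + S)⁻¹ used later: a uniform operator-norm bound and pointwise convergence to zero as λ → 0+. The matrix S below is an illustrative positive definite stand-in for the compact solution operator:

```python
import numpy as np

# Finite-dimensional illustration (illustrative data): for symmetric positive
# definite S, the spectral norm of lam*(lam*I + S)^{-1} equals
# max_i lam/(lam + mu_i) <= 1 uniformly in lam > 0, and
# lam*(lam*I + S)^{-1} u -> 0 as lam -> 0+.
rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
S = B @ B.T                      # symmetric positive definite stand-in
u = rng.standard_normal(n)
I = np.eye(n)

norms = []
for lam in [1.0, 0.1, 0.01, 0.001]:
    T = lam * np.linalg.inv(lam * I + S)
    assert np.linalg.norm(T, 2) <= 1.0 + 1e-10   # uniform bound
    norms.append(float(np.linalg.norm(T @ u)))
print(norms)                     # decreases toward 0 as lam decreases
```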

5.2 Boundedness of the generated sequence

The iteration of the Lavrentiev prox-regularization method generates a bounded sequence if the regularization parameters are chosen sufficiently small. This can be seen as follows:

Lemma 5.2. Assume that there exists a Slater control û ∈ L∞(Ω) and a constant σ > 0 such that

on Ω. Assume that in each step, λk is chosen such that

Assume that in each step, εk is chosen such that the sequence

is bounded. Then the sequence (uk)k generated by the Lavrentiev Prox-Regularization Algorithm is bounded in L2(Ω).

Remark 5.3. Note that the conditions (8) and (9) can easily be satisfied during the iteration by choosing λk and εk sufficiently small, since the Slater control û and the iterate uk are known.

Proof. For all k we have the inequalities

hence the corresponding control is in F(λk, εk, uk), which implies the inequality

and the assertion follows due to the boundedness of the sequence in (9).

5.3 Convergence of the generated sequence

We study now the convergence of the sequence (uk)k generated by the Lavrentiev Prox-Regularization Algorithm.

Theorem 5.4. Assume that the solution u* of P is in L∞(Ω) and that there exists a Slater control û ∈ L∞(Ω) and a constant σ > 0 such that

on Ω. Let p ∈ (0, 1) be given. Assume that in each step, λk ∈ (0, 1) is chosen such that

Assume that in addition the sequence

is bounded.

If λk → 0 and εk → 0 we have

For the constraint violation we have the upper bound

Remark 5.5. Condition (10) can easily be satisfied during the iteration by choosing λk sufficiently small, since the Slater control û and the iterate uk are known. Condition (11) can be satisfied if a suitable a priori bound is known. For the problem with additional pointwise control constraints, this problem does not occur; see Section 2.

Theorem 5.4 states that if the λk and the εk decrease sufficiently fast, we obtain convergence.

Proof. Since (10) holds and λk < 1, condition (8) also holds; hence, since in addition condition (9) holds, Lemma 5.2 implies that the sequence is bounded.

Let ū denote a weak limit point in L2(Ω) of the sequence (uk)k. Then ū ∈ F*. Moreover, we have the inequality

.

Define τk = λk^p and the function νk = (1 − τk)u* + τk û, where û is the Slater control. Then we have

On the other hand, we have

Hence νk ∈ F(λk, εk, uk). Moreover, νk → u* in L2(Ω). Thus we have

Hence we have K(ū) ≤ ω*. This implies that K(ū) = ω*. Since ū ∈ F*, the uniqueness of the solution of P implies ū = u*. Hence the sequence (uk)k converges weakly to u*.

As in the proof of Theorem 3.2 we obtain (12). For all k we have the inequalities

.

This implies the L2-bound for the constraint violation

In Theorem 5.4 we have stated a bound for the constraint violation. For the non-prox Lavrentiev regularization (in step k solve Pλk,0,0) the corresponding bound satisfies the inequality

If u* ≠ 0 and uk → u*, we have the inequality

which indicates that, at least asymptotically, the Lavrentiev prox-regularization method yields smaller bounds for the constraint violation.

6 Examples

In this section we study several examples that allow us to compare the performance of the Lavrentiev prox-regularization method and the non-prox Lavrentiev regularization method.

Example 1. Consider a problem P where for the optimal control we have u* ∈ L∞(Ω) and both inequality constraints are inactive, that is, we have ya < G(u*) < yb in the sense that

.

In this case, u* is an unconstrained local minimal point of K, and the convexity of K implies that ω* = K(u*) ≤ K(u) for all u ∈ L2(Ω). Let ν ∈ L2(Ω) be given. Since F(λ, ε, ν) ⊂ L2(Ω), for all λ > 0 we have the inequality

.

Since u* ∈ F(λ, ε, u*) we have ω(λ, ε, u*) ≤ K(u*) = ω*; hence in this case ω(λ, ε, u*) = ω*. Thus with the choice u1 = u*, the Lavrentiev prox-regularization method generates the constant sequence uk = u* for all k and all λk > 0, εk > 0, even if the sequences (λk)k, (εk)k do not converge to zero.

More generally, we have u* ∈ F(λk, εk, uk) if

.

In this case

.

If εk = 0, this yields ω* = ω(λk, εk, uk); hence in this case uk solves Pλk,εk,uk.

If εk > 0, the method reduces to a classical prox-regularization for problem P, where the constraints are not regularized. For the non-prox Lavrentiev regularization method, u* is the solution with the parameter λk if u* ∈ F(λk, 0, 0), which is the case if

.

If u* ∈ F(λk, εk, uk) and εk = 0, the Lavrentiev prox-regularization method can find u* with larger parameter values λk than the non-prox Lavrentiev regularization.

Example 2. Consider a problem P where for the solution both inequality constraints are active almost everywhere in Ω, that is, we have ya = G(u*) = yb and the Slater condition is violated. Assume that ya ∈ C2(Ω) satisfies the boundary condition ∂n ya = 0 on Γ. In this case, we have S(u*) = ya.

The non-prox Lavrentiev regularization method computes the solution ũk of Pλk,0,0, for which we have the following equation: (λkI + G)ũk = ya. Hence (λkI + S)(ũk − u*) = ya − λku* − ya = −λku*.

This yields

ũk − u* = −λk(λkI + S)⁻¹u*,

hence if λk → 0, due to Lemma 5.1 we have

.

The Lavrentiev prox-regularization method computes the solution uk+1 of Pλk,εk,uk, for which we have the following equation: (λkI + G)uk+1 − λkuk = ya. Hence we have (λkI + S)(uk+1 − u*) = ya + λkuk − λku* − ya = λk(uk − u*).

Thus if uk ≠ u*, due to Lemma 5.1 we have

‖uk+1 − u*‖ < ‖uk − u*‖

(see the proof of Lemma 5.1); hence the algorithm generates a bounded sequence with strictly decreasing distance to u*, also if (λk)k does not converge to zero.

We have (λkI + S)(uk+1 − u*) − λkuk = −λku*, hence if λk → 0 we have

.
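The error recursion (λkI + S)(uk+1 − u*) = λk(uk − u*) derived above can also be observed numerically. In the following sketch S is an illustrative positive definite matrix standing in for the solution operator; the iteration contracts the error by the factor ‖λ(λI + S)⁻¹‖ < 1 even though λ is kept fixed:

```python
import numpy as np

# Error recursion of Example 2 in a finite-dimensional surrogate (illustrative
# data): (lam*I + S)(u_{k+1} - u*) = lam*(u_k - u*), so each step multiplies
# the error by lam*(lam*I + S)^{-1}, a contraction for positive definite S.
rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
S = B @ B.T + 0.5 * np.eye(n)    # positive definite, min eigenvalue >= 0.5
lam = 0.2                         # fixed Lavrentiev parameter
T = lam * np.linalg.inv(lam * np.eye(n) + S)

err = np.ones(n)                  # u_1 - u*
dists = []
for k in range(10):
    err = T @ err                 # one prox step acting on the error
    dists.append(float(np.linalg.norm(err)))
print(dists[0], dists[-1])
```

Here the contraction factor is at most λ/(λ + 0.5) ≈ 0.29, so the distance to u* decreases strictly in every step.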

Example 3. Let κ = 0, Ay = −Δy + y, yd ≡ 1 and J(y, u) = (1/2) ∫Ω (y − 1)² dx. Choose ya = 0, yb = 1. Then the optimal control that solves P is u* ≡ 1 and we have ω* = 0.

For all λ > 0 we have the inequality

,

hence u* is infeasible for the auxiliary problem P(λ, 0, 0) used in the Lavrentiev regularization method. However, the constant control 1/(1 + λ) is in F(λ, 0, 0); hence we have the inequality

.

For every ν ∈ L∞(Ω) with 1 < ν < 1 + 1/λ on Ω we have the inequality

hence u* is feasible for P(λ, ε, ν) and we have the inequality

.

So if 1 < u1 < 1 + 1/λ1, with the choice ε = 0 the Lavrentiev prox-regularization algorithm finds the optimal control u* in one step. For ε > 0 with a fixed parameter λ, we can make the regularization error arbitrarily small by choosing ε sufficiently small.
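The feasibility statements of this example reduce to elementary arithmetic if one uses that a constant control u produces the constant state y = u (an assumption consistent with Ay = −Δy + y and homogeneous Neumann data) and the mixed-constraint form y + λ(u − ν) ≤ yb from Section 3:

```python
# Arithmetic behind Example 3 (assumptions: a constant control u yields the
# constant state y = u, and the regularized constraint reads
# y + lam*(u - v) <= yb, as in Section 3).
lam, yb = 0.5, 1.0
u_star = 1.0                                   # optimal control, state y = 1

# non-prox Lavrentiev constraint y + lam*u <= 1 at u = u*:
assert u_star + lam * u_star > yb              # u* infeasible for P(lam, 0, 0)
u_lam = 1.0 / (1.0 + lam)                      # feasible point from the text
assert abs(u_lam + lam * u_lam - yb) < 1e-12   # constraint active at 1/(1+lam)

# prox constraint y + lam*(u - v) <= 1 with v = u*:
assert u_star + lam * (u_star - u_star) <= yb  # u* feasible for P(lam, eps, u*)
print("ok")
```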

Example 4. Let Ω = [0, π] × [0, π]. Define the desired state

.

Consider the problem

The state constraint y < 1 - implies that for x2> π/6 we have

.

This yields the optimal state

Figure 1 shows the desired state yd and the optimal state y* that is generated by the optimal control

shown in Figure 2.

The optimal control u* has a jump discontinuity at x2 = π/6. The optimal value ω* of P is given by the equation

We use a discretization based upon Fourier expansions. In the general case, this corresponds to a representation of the control as a series of eigenfunctions of the operator A. Here we write the control function u as a cosine series of the form

.

Then

and we obtain the following series representation for the state:

.

Hence we have

.

For the objective function, this yields

So we see that for controls given by the cosine series in cos(jx2), the problem Pλ,ε,ν is equivalent to the problem

By replacing the infinite series by a finite sum we obtain a semi-infinite optimization problem with a quadratic objective function and linear constraints.

In our numerical implementation we used the finite sums and a finite number of inequality constraints corresponding to the 1001 grid points 0.001πj for j ∈ {0, ..., 1000}. We solved the finite-dimensional optimization problems with the program fmincon from the MATLAB Optimization Toolbox.
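A sketch of the same discretization in Python (assumptions: A = −Δ + I with Neumann boundary data, so that cos(jx2) is an eigenfunction with eigenvalue 1 + j² and the state coefficients are uj/(1 + j²); the desired state and the bound below are illustrative, not the data of Example 4; scipy's SLSQP solver plays the role of fmincon):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Cosine-series discretization of a state-constrained tracking problem
# (illustrative data; assumed eigenpairs cos(j*x2) <-> 1 + j^2 of A).
J = 12                                          # number of cosine modes kept
grid = np.pi * np.linspace(0.0, 1.0, 101)       # constraint grid in [0, pi]
Phi = np.cos(np.outer(grid, np.arange(J + 1)))  # cos(j*x2) at the grid points
ev = 1.0 + np.arange(J + 1) ** 2                # eigenvalues of A
yd = 1.0 - np.cos(grid)                         # illustrative desired state
yb = 0.9                                        # illustrative upper state bound

def objective(u):
    y = Phi @ (u / ev)                          # state on the grid
    return 0.5 * np.mean((y - yd) ** 2) * np.pi

# state constraint y <= yb imposed at finitely many grid points, turning the
# semi-infinite problem into one with linear constraints, as in the text
con = LinearConstraint(Phi / ev, -np.inf, yb)
res = minimize(objective, np.zeros(J + 1), constraints=[con], method="SLSQP")
y = Phi @ (res.x / ev)
print(res.success, float(y.max()))
```

The unconstrained minimizer would reach max y = 2 here, so the state constraint is active and the computed state touches the bound yb.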

Let u1 = 0 and let ω(λ) denote the optimal value of Pλ,0,u1 which is used in the non-prox Lavrentiev regularization.

Table 1 contains the optimal values of the discretized problems for various values of λ. The state error ey has been computed as

with the computed state yc and the grid points xij.

The constraint violation ev has been computed as

Starting with u1 = 0, we have computed the Lavrentiev-prox regularization iterates with λk = 10 · 10^((1−k)/2). Let ωk denote the optimal value of Pλk,0,uk which is used in the Lavrentiev prox iteration. Table 2 shows the results.

In this case the Lavrentiev-prox regularization iterates shown in Table 2 converge faster than the iterates generated with the non-prox Lavrentiev regularization shown in Table 1.

Figure 3 shows the optimal state y11 that is computed in step 11 of the Lavrentiev-prox iteration with λ11 = 10⁻⁴.


Example 5. As in Example 4 let Ω =[0,π]×[0,π] and define the desired state yd(x1, x2) = 1 - cos(x2).

The state constraint y < 1 implies that for x2> π/2 we have

.

This yields the optimal state

Figure 4 shows the desired state yd and the optimal state y* that is generated by the optimal control

shown in Figure 5. The optimal control u* is continuous. The optimal value ω* of P is given by the equation

.


As in Example 4, we use a discretization based upon Fourier expansions. For controls given by the cosine series in cos(jx2), the problem Pλ,ε,ν is equivalent to the problem

By replacing the infinite series by a finite sum we obtain a semi-infinite optimization problem with a quadratic objective function and linear constraints. Again in our numerical implementation we used the finite sums and a finite number of inequality constraints corresponding to the 1001 grid points 0.001πj for j ∈ {0, ..., 1000}. Again we used the program fmincon from the MATLAB Optimization Toolbox to solve the finite-dimensional optimization problems.

Let u1 = 0 and let ω(λ) denote the optimal value of Pλ,0,u1 which is used in the non-prox Lavrentiev regularization. Table 3 contains the results for the solution of the discretized problems for various values of λ. The state error ey has been computed as in (17) and the constraint violation ev has been computed as in (18).

Starting with u1 = 0, we have computed the Lavrentiev-prox regularization iterates with λk = 10 · 10^((1−k)/2). Let ωk denote the optimal value of Pλk,0,uk which is used in the Lavrentiev prox iteration. Table 4 shows the results.

Figure 6 shows the optimal state y11 that is computed in step 11 of the Lavrentiev-prox iteration with λ11 = 10⁻⁴.


In this example the Lavrentiev-prox regularization iterates shown in Table 4 converge faster than the iterates generated with the non-prox Lavrentiev regularization shown in Table 3.

7 Conclusion

In this paper we have introduced the Lavrentiev prox-regularization method for elliptic optimal control problems. The cost for the solution of the parametric auxiliary problems in each step of the method is the same as for the non-prox Lavrentiev regularization method, since the auxiliary problems are of exactly the same form. Hence the same numerical methods can be used for the solution, for example primal-dual active set methods, interior point methods or semismooth Newton methods; see [5, 6, 8, 11]. Our numerical examples indicate that the Lavrentiev prox-iteration gives approximations of the same quality as the non-prox Lavrentiev regularization method in fewer steps and with larger regularization parameters. We have also applied the method successfully to the solution of optimal control problems with hyperbolic systems; see [3].

Acknowledgements. The author wants to thank the anonymous referee for the helpful hints that have substantially improved this paper.

Received: 16/X/08. Accepted: 23/IV/09.

#CAM-30/08.

  • [1] E. Casas, Control of an Elliptic Problem with Pointwise State Constraints. SIAM J. Control Optim., 24 (1986), 1309-1318.
  • [2] E. Casas and M. Mateos, Second Order Optimality Conditions for Semilinear Elliptic Control Problems with Finitely Many State Constraints. SIAM J. Control Optim., 40(5) (2002), 1431-1454.
  • [3] M. Gugat, A. Keimer and G. Leugering, Optimal Distributed Control of the Wave Equation subject to State Constraints. To appear in ZAMM.
  • [4] A. Kaplan and R. Tichatschke, Stable Methods for Ill-posed Variational Problems. Akademie Verlag, Berlin (1994).
  • [5] C. Meyer, U. Prüfert and F. Tröltzsch, On two numerical methods for state-constrained elliptic control problems. Optimization Methods and Software, 22 (2007), 871-899.
  • [6] C. Meyer, Optimal control of semilinear elliptic equations with application to sublimation crystal growth. Dissertation, Technische Universität Berlin (2006).
  • [7] C. Meyer and F. Tröltzsch, On an elliptic optimal control problem with pointwise mixed control-state constraints. In: Recent Advances in Optimization, A. Seeger (Ed.), Springer-Verlag (2006), 187-204.
  • [8] C. Meyer, A. Rösch and F. Tröltzsch, Optimal control of PDEs with regularized pointwise state constraints. Computational Optimization and Applications, 33 (2006), 209-228.
  • [9] I. Neitzel and F. Tröltzsch, On convergence of regularization methods for nonlinear parabolic optimal control problems with control and state constraints. Submitted.
  • [10] R.T. Rockafellar, Augmented Lagrange multiplier functions and applications of the proximal point algorithm in convex programming. Math. Oper. Res., 1 (1976), 97-116.
  • [11] F. Tröltzsch and I. Yousept, A regularization method for the numerical solution of elliptic boundary control problems with pointwise state constraints. Computational Optimization and Applications, 42 (2009), 43-66.
  • [12] F. Tröltzsch and I. Yousept, Source representation strategy for optimal boundary control problems with state constraints. Zeitschrift für Analysis und ihre Anwendungen, 28 (2009), 189-203.
  • [13] F. Tröltzsch, Regular Lagrange multipliers for control problems with mixed pointwise control-state constraints. SIAM Journal on Optimization, 15 (2005), 616-634.

Publication Dates

  • Publication in this collection
    08 July 2009
  • Date of issue
    2009

Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), São Carlos, SP, Brazil. E-mail: sbmac@sbmac.org.br