## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.30 no.1 São Carlos 2011

#### http://dx.doi.org/10.1590/S1807-03022011000100008

**Solving the dual subproblem of the Method of Moving Asymptotes using a trust-region scheme**

**Márcia A. Gomes-Ruggiero; Mael Sachine^{*}; Sandra A. Santos**

Department of Applied Mathematics, IMECC, UNICAMP, 13081-970, Campinas, SP, Brazil E-mails: marcia@ime.unicamp.br / mael@ime.unicamp.br / sandra@ime.unicamp.br

**ABSTRACT**

An alternative strategy to solve the subproblems of the Method of Moving Asymptotes (MMA) is presented, based on a trust-region scheme applied to the dual of the MMA subproblem. At each iteration, the objective function of the dual problem is approximated by a regularized spectral model. A globally convergent modification to the MMA is also suggested, in which the conservative condition is relaxed by means of a summable controlled forcing sequence. Another modification to the MMA previously proposed by the authors *[Optim. Methods Softw.,* 25 (2010), pp. 883-893] is recalled to be used in the numerical tests. This modification is based on the spectral parameter for updating the MMA models, so as to improve their quality. The performed numerical experiments confirm the efficiency of the indicated modifications, especially when jointly combined.

**Mathematical subject classification:** 49M37, 49M29, 65K05, 90C30.

**Key Words:** nonlinear programming, Method of Moving Asymptotes, spectral parameter, global convergence, dual problem.

**1 Introduction**

This study proposes a new strategy for solving the subproblems of the Method of Moving Asymptotes (MMA) by means of its dual formulation, using a trust-region technique. The MMA is a very popular method within the structural optimization community and applies to the inequality constrained nonlinear programming problem with simple bounds as follows:

where *x = (x_{1}, ..., x_{n})^{T}* ∈ R^{n} is the vector of the variables, *x_{j}^{min}* and *x_{j}^{max}* are given real numbers for each *j*, and *f_{0}, f_{1}, ..., f_{m}* are real-valued twice continuously differentiable functions.

The original version of the MMA [16] was introduced in 1987 by Svanberg, as a generalization of the convex linearization method (CONLIN) [9], without global convergence. In 1995, Svanberg [17] proposed a globally convergent version. Several other MMA versions have appeared since then, see for instance [5, 20, 22, 23] and references therein. In 1998, Svanberg [18] developed a primal-dual interior-point method for solving the subproblems, in which a sequence of relaxed Karush-Kuhn-Tucker (KKT) conditions is solved by Newton's method. In 2003, Ni [15] proposed a globally convergent algorithm that combines the method of moving asymptotes with a trust-region technique, in order to solve bound-constrained problems. In its more recent version [19], the MMA was merged into the Conservative Convex and Separable Approximation (CCSA) class of methods, which are globally convergent.

In the current work, the dual problem associated with the MMA subproblem is stated and analyzed. The explicit expression of the dual objective function is accessible due to the separability of the rational models of the MMA. The intrinsic features of such a function are highlighted, namely being concave and continuously differentiable. The discontinuities of the second-order derivatives are discussed as well. Motivated by such features, we have proposed a trust-region approach for solving the dual of the MMA subproblem by means of a quadratic model that has a spectral regularization term. The solution of the trust-region subproblem has a closed form.

Another contribution of this work is related to the conservative condition responsible for defining the current outer iterate and ensuring the global convergence of the method. A relaxed conservative condition is proposed, based on a summable controlled forcing sequence [13], so that the maintenance of global convergence of the MMA with this modification is proved. The modified MMA algorithm together with its convergence results are presented in [11].

In the numerical experiments, a third modification of the MMA previously proposed by the authors [10] is incorporated. This modification, also considered in the algorithm and theoretical results of [11], is based on the spectral parameter for the updating of a key parameter of the method, that ensures strict convexity of the model functions. The second-order information provided by the spectral parameter is included in the model functions that define the rational approximations of both the objective function and the nonlinear constraints at the beginning of each iteration, so as to improve the quality of the models. The computational results corroborate the proposed modifications, especially when jointly combined.

The structure of this paper is as follows. In Section 2, the basic ideas of the MMA are presented, and the relaxed conservative condition is described. In Section 3, a discussion of the dual problem associated with the MMA sub-problem is provided together with details of our trust-region approach applied to the dual of the MMA subproblem. The numerical results are given in Section 4, and final remarks, in Section 5, conclude the text.

**2 The Method of Moving Asymptotes and our modifications**

Following Svanberg's approach [16], artificial variables *y = (y_{1}, ..., y_{m})^{T}* are introduced in problem (1), so that the following enlarged problem is addressed:

where *c_{i}* and *d_{i}* are real numbers such that *c_{i}* ≥ 0 and *d_{i}* > 0 for *i* = 1, ..., *m*. The constants *c_{i}* must be chosen large enough so that the variables *y_{i}* are zero at the optimal solution, in case the original problem has a nonempty feasible set and fulfills a constraint qualification (e.g. Mangasarian-Fromovitz [14]).

The 2002 version of the MMA for solving problem (2) performs outer and inner iterations. The indices *(k, l)* are used to denote the *l*-th inner iteration within the *k*-th outer iteration.

To start, it is necessary to choose *x^{(1)}* ∈ *X*, and then to compute *y^{(1)}*, obtaining an initial feasible estimate (*x^{(1)}*, *y^{(1)}*) for problem (2).

Thus, given (*x^{(k)}*, *y^{(k)}*), a subproblem is generated and solved. This subproblem is obtained from (2), replacing the objective function and the functions that define the inequality constraints by separable strictly convex models *g_{i}^{(k,l)}*. Moreover, the original box is reduced, being defined around the current point with the aid of the parameter σ^{(k)}. This subproblem is given next, for *k* ∈ {1, 2, 3, ...} and *l* ∈ {0, 1, 2, ...}, where

*X^{(k)}* = {*x* ∈ *X* | *x_{j}* ∈ [*x_{j}^{(k)}* - 0.9σ_{j}^{(k)}, *x_{j}^{(k)}* + 0.9σ_{j}^{(k)}], *j* = 1, ..., *n*}.

The vector σ^{(k)} = (σ_{1}^{(k)}, ..., σ_{n}^{(k)})^{T} contains strictly positive parameters and its updating is done as in [19].

Denoting the optimal solution of subproblem (3) by (*x^{(k,l)}*, *y^{(k,l)}*) at the *l*-th inner iteration, if the *conservative condition* holds at *x^{(k,l)}* for all functions of the problem, that is,

then we set (*x^{(k+1)}*, *y^{(k+1)}*) = (*x^{(k,l)}*, *y^{(k,l)}*), and the *k*-th outer iteration is completed, after *l* inner iterations. Otherwise, if the conservative condition fails for at least one index *i* ∈ {0, 1, ..., *m*}, another inner iteration must be performed. The model for the function *f_{i}* is maintained the same for each index *i* such that the approximation is conservative in *x^{(k,l)}*, that is, *g_{i}^{(k,l)}(x)* ≡ *g_{i}^{(k,l+1)}(x)*. For the indices for which the approximation does not fulfill the conservative condition (4) in *x^{(k,l)}*, the model is modified so that the new approximation *g_{i}^{(k,l+1)}* may satisfy the conservative condition in *x^{(k,l)}*.

It is worth mentioning that the conservative condition is demanded for both the objective function and the constraints, producing, with regard to problem (2), strict reduction of the objective function value and feasible iterates, respectively.

In the MMA, the approximating functions are stated as
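The displayed formula is lost in this transcription. As a hedged sketch following the standard MMA form of [16, 19], with names `p`, `q`, `r` and asymptotes `l`, `u` mirroring the coefficients defined just below, the separable rational approximation and its derivatives can be evaluated as:

```python
import numpy as np

def mma_approx(x, r, p, q, l, u):
    """Evaluate a separable MMA-style rational approximation
        g(x) = r + sum_j ( p_j / (u_j - x_j) + q_j / (x_j - l_j) ),
    valid for l_j < x_j < u_j.  Returns the value, the gradient and
    the diagonal of the Hessian; by separability the off-diagonal
    Hessian entries are zero, which is what keeps large-scale
    problems tractable."""
    du, dl = u - x, x - l
    val = r + np.sum(p / du + q / dl)
    grad = p / du**2 - q / dl**2
    hess_diag = 2.0 * p / du**3 + 2.0 * q / dl**3
    return val, grad, hess_diag
```

Since the coefficients multiplying the rational terms are nonnegative (strictly positive in the variants discussed here), every diagonal Hessian entry is positive on (*l_{j}*, *u_{j}*), so the approximation is strictly convex there.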

where the poles of the moving asymptotes *l_{j}^{(k)}* and *u_{j}^{(k)}* are

and the coefficients *p_{ij}^{(k,l)}*, *q_{ij}^{(k,l)}* and *r_{i}^{(k,l)}* are given by

Within an outer iteration *k*, the only difference between two inner iterations lies in the values of the parameters *p_{i}^{(k,l)}*. These parameters are strictly positive, so that all the approximating functions *g_{i}^{(k,l)}* are strictly convex and every subproblem has a single global optimum. The updating of the parameters *p_{i}^{(k,l)}* is the one suggested in [19].

The model functions *g_{i}^{(k,l)}* are first-order approximations to the original functions *f_{i}* at the current estimate, that is, the conditions

must hold for all *i* = 0, 1, ..., *m*. Another condition that must be satisfied by the approximating functions is separability, that is,

Such a property is crucial in practice, because the Hessian matrices of the approximations are diagonal ones, allowing us to address large-scale problems. The model proposed in [19] satisfies such a condition with *g_{i0}^{(k,l)} = r_{i}^{(k,l)}* and

In [10], a modified version of the MMA is proposed based on the spectral parameter, used in the updating of the parameters *p_{i}^{(k,l)}*. The second-order information provided by the spectral parameter is included in the model functions *g_{i}^{(k,l)}* that define the rational approximations of the objective function and of the nonlinear constraints at the beginning of each outer iteration. The motivation for proposing this idea came from the numerical observation that in many cases the algorithm with Svanberg's original updating stops making significant progress in the solution of the sequence of solved subproblems. By improving the quality of the approximations this drawback was overcome. Moreover, the idea preserves the global convergence property of the CCSA class, as proved in [11].

We have devised another modification for the MMA, based on relaxing the conservative condition by means of a summable controlled forcing sequence. In this sense, we say that the *relaxed conservative condition* holds at the iterate *x^{(k,l)}* if

Therefore, the conservative condition is more relaxed at the beginning of the generated sequence, and ultimately achieved in the end. The original conservative condition is recovered if μ_{k} ≡ 0.
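A minimal numerical sketch of this test, assuming the relaxation magnitude μ_{k} · max{1, |*g_{i}^{(k,l)}(x^{(k,l)})*|} that appears later in the convergence discussion of [11] (function and argument names are illustrative):

```python
def relaxed_conservative(g_val, f_val, mu_k):
    """Relaxed conservative test at the subproblem solution: the
    model value g_i(x) must dominate the true value f_i(x) up to a
    tolerance mu_k * max(1, |g_i(x)|) driven by the forcing
    sequence (mu_k).  Taking mu_k = 0 recovers the original
    conservative condition g_i(x) >= f_i(x)."""
    return g_val >= f_val - mu_k * max(1.0, abs(g_val))
```

With mu_k > 0 a mildly non-conservative model can still be accepted, which is precisely what allows the outer iterates to progress without extra inner iterations early in the run.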

When it comes to the global convergence analysis, the arguments follow the outline of the one presented in [19], with the main modifications briefly described below. The whole sequence of steps of this procedure is described in detail, together with the convergence results, in [11].

Let the functions *F_{i}* be defined, for *x* ∈ *X* and *y* ∈ R^{m}, by

The original conservative condition (4) provides strict reduction in the objective function value *F_{0}* and feasible iterates for problem (2). In Lemma 5 of [11] we prove that, adopting the relaxed conservative condition (5), the outer iterates of problem (2) might be infeasible and the values of *F_{0}* might increase. However, the most important remark of Lemma 5 is that this occurs in a controlled way, i.e.:

*F_{i}(x^{(k)}, y^{(k)})* ≤ μ_{k-1,i} for *i* ≥ 1 and *k* ≥ 2, and *F_{0}(x^{(k+1)}, y^{(k+1)}) < F_{0}(x^{(k)}, y^{(k)})* + μ_{k,0} for *k* ≥ 1, where μ_{k,i} = μ_{k} max{1, |*g_{i}^{(k,l)}(x^{(k,l)})*|}. After this lemma we have also proved that

Due to these results, in Lemma 7 of [11], we prove that the sequence {*F_{0}(x^{(k)}, y^{(k)})*} is convergent. This result is the same one stated by Lemma 7.8 of [19], in which the monotonicity had a crucial role in the proof. Considering this, the whole reasoning of [19] remains valid based on the fact that μ_{k} → 0 as *k* → ∞, so that the global convergence of the modified MMA algorithm is maintained. The actual choice for the sequence is provided in Section 4, within the description of the numerical results.

In the section that follows, a brief analysis of the dual of the MMA subproblem and its properties will motivate a new strategy for solving the MMA subproblems.

**3 Solving the MMA subproblems: interior-point methods versus a trust-region strategy**

In this section, we propose a new strategy for solving the MMA subproblems by means of the associated dual problem, using a trust-region technique. This new strategy is an alternative to both approaches already devised by Svanberg: the dual and the primal-dual interior-point ones. The dual approach is based on Lagrangian relaxation duality and was implemented with a linesearch technique [16]. In the primal-dual interior-point approach, a sequence of relaxed KKT conditions is solved by Newton's method [18]. We have devised a regularized quadratic model for the dual subproblem with the solution expressed in a closed form.

To simplify the notation, we omit the indices *k* and *l* of the outer and inner iterations, respectively. We denote the bounds of the variables by the values α_{j} and β_{j}, i.e., α_{j} = max{*x_{j}^{min}*, *x_{j}^{(k)}* - 0.9σ_{j}^{(k)}} and β_{j} = min{*x_{j}^{max}*, *x_{j}^{(k)}* + 0.9σ_{j}^{(k)}}, for *j* = 1, ..., *n*, so that the box constraints of the MMA subproblems are: α_{j} ≤ *x_{j}* ≤ β_{j} for *j* = 1, ..., *n*, and *y_{i}* ≥ 0 for *i* = 1, ..., *m*.

Initially, we show how to obtain an explicit expression of the dual objective function, thereby generating the dual problem corresponding to the MMA sub-problem. We highlight some properties associated with the dual function. Then, we propose a trust-region scheme and present the algorithm.

*3.1 The dual problem associated with the MMA subproblem*

Considering only the main constraints, since the simple box and the non-negativity constraints will be incorporated in the minimization process, the Lagrangian corresponding to subproblem (3) is given by:

and λ = (λ_{1}, ..., λ_{m})^{T} is the vector of Lagrange multipliers.

The dual objective function W is defined, for λ ≥ 0, as follows:

Note that the separability of the MMA primal approximations allows the Lagrangian function *L(x, y,* λ) to be written as the sum of *n + m* individual functions and therefore, the (*n + m*)-dimensional minimization problem (7) can be split into the *n + m* minimization problems (8) and (9). The use of the minimum instead of the infimum in expressions (7)-(9) is justified by the existence of the minimizers of (8) and (9). The expressions of these minimizers, which we denote by *x_{j}*(λ) and *y_{i}*(λ_{i}), respectively, are:

Note that *x_{j}* : R^{m} → R and *y_{i}* : R → R are continuous functions of λ, but not differentiable at the points λ such that *x_{j}*(λ) = α_{j} or *x_{j}*(λ) = β_{j}, for all *j* = 1, ..., *n*, and λ_{i} = *c_{i}*, for all *i* = 1, ..., *m*, respectively. Because there are explicit expressions for the minimizers *x_{j}*(λ) of (8) and *y_{i}*(λ_{i}) of (9), there is also an explicit expression for the dual objective function (7), which is:
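The displayed minimizer formulas are not reproduced in this transcription. As a hedged sketch, assuming each coordinate of the Lagrangian takes the separable rational form *p_{j}*(λ)/(*u_{j}* - *x_{j}*) + *q_{j}*(λ)/(*x_{j}* - *l_{j}*) with strictly positive *p_{j}*(λ), *q_{j}*(λ), and that the *y*-part is (*c_{i}* - λ_{i})*y_{i}* + (*d_{i}*/2)*y_{i}*², as in problem (2), the minimizers can be computed coordinatewise:

```python
import numpy as np

def x_of_lambda(p_lam, q_lam, l, u, alpha, beta):
    """Coordinatewise minimizer of p/(u-x) + q/(x-l) over [alpha, beta].
    The unconstrained stationary point solves p/(u-x)^2 = q/(x-l)^2,
    giving x = (sqrt(p)*l + sqrt(q)*u) / (sqrt(p) + sqrt(q)); by strict
    convexity, clipping it to the box yields the box minimizer."""
    sp, sq = np.sqrt(p_lam), np.sqrt(q_lam)
    x_free = (sp * l + sq * u) / (sp + sq)
    return np.clip(x_free, alpha, beta)

def y_of_lambda(lam, c, d):
    """Minimizer of (c - lam) * y + 0.5 * d * y**2 over y >= 0:
    project the unconstrained stationary point (lam - c)/d onto
    the nonnegative half-line."""
    return np.maximum(0.0, (lam - c) / d)
```

The clipping step is exactly where the nondifferentiability discussed next arises: the formula for the free stationary point is smooth in λ, but the projection onto [α_{j}, β_{j}] (and onto *y_{i}* ≥ 0) introduces kinks whenever a variable hits a bound.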

Thus, the dual problem corresponding to the MMA subproblem (3) is given by

Once the dual problem (13) is solved, the optimal solution of the MMA (primal) subproblem (3) is obtained by replacing the dual optimal solution in the expressions of *x_{j}*(λ) and *y_{i}*(λ_{i}).

*3.2 Properties of the dual function*

Before proposing our approach to solve the dual problem (13) corresponding to the MMA subproblem (3), we comment on some properties associated with the dual function W.

First, note that the function W : R^{m} → R is concave, since it is the pointwise minimum of a collection of functions which are linear in λ. Moreover, it is continuous because *x_{j}*(λ) and *y_{i}*(λ_{i}) depend continuously on λ and *l_{j}* < α_{j} ≤ *x_{j}*(λ) ≤ β_{j} < *u_{j}*. More than that, the function W is continuously differentiable and its first-order partial derivatives with respect to the dual variables λ_{i} are given by the constraints of the primal subproblem evaluated at *x_{j}*(λ) and *y_{i}*(λ_{i}), i.e.,

for all *i* = 1, ..., *m* and λ ∈ R^{m}, as stated in Proposition 6.1.1 of [2]. Note that, although *y* does not belong to a compact set, the existence of the minimizer (12) justifies the usage of such a proposition.

Since the dual problem can be written explicitly, and the associated primal problem displays a relatively simple algebraic form, the second-order partial derivatives of the dual function can be written in a closed form:

where we have abused the notation by referring to ∂*x_{j}*/∂λ_{k}(λ), as *x_{j}*(λ) is not differentiable at all points. The value of such a derivative assumed by a free variable *x_{j}*(λ), i.e., α_{j} < *x_{j}*(λ) < β_{j}, may be different from the value of this derivative when the variable *x_{j}*(λ) is fixed, i.e., *x_{j}*(λ) = α_{j} or *x_{j}*(λ) = β_{j}, in which case it is obviously zero. This means that the second derivatives of the dual function are discontinuous whenever a free primal variable becomes fixed, or vice versa. From the primal-dual relationships (10), we see that the dual space is partitioned in several regions separated by second-order hypersurfaces of discontinuity. These surfaces are defined by *x_{j}^{*}*(λ) = α_{j} and *x_{j}^{*}*(λ) = β_{j}, where *x_{j}^{*}*(λ) is given by (11).

*3.3 Trust-region method*

In this subsection we present a strategy to solve the dual subproblems of the MMA, using a trust-region scheme. Consider then the dual problem corresponding to the MMA subproblem as the minimization of the function W̄(λ) subject to no other constraints than non-negativity requirements on the dual variables:

where W̄(λ) = -W(λ). The quadratic model for the function W̄, adopted at each iteration ℓ of the trust-region algorithm, is:

where η^{(ℓ)} is the spectral parameter associated with the function W̄ at the current iterate, that is,

with

The second-order term in the quadratic model *m*_{ℓ} can be interpreted as a quadratic regularization term of a linear model of the function W̄ in the proximal sense, where the spectral parameter η^{(ℓ)} has the flavour of an adaptive regularization parameter [12]. This interpretation justifies the second-order term of the model, since the Hessian matrix ∇²W̄ is discontinuous. Furthermore, models similar to (15) have been considered, as in [1], where the quadratic term of the model includes the spectral parameter in order to speed up a procedure based on the projected gradient, and in [21], where spherical quadratic convex approximations are employed in gradient-only optimization methods.
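The defining formula (16) is not reproduced in this transcription. Assuming the standard spectral (Barzilai-Borwein) quotient, in the spirit of [3, 10], a sketch of the computation is (the safeguard interval is an assumption of this sketch, not taken from the paper):

```python
import numpy as np

def spectral_parameter(lam, lam_prev, grad, grad_prev,
                       eta_min=1e-10, eta_max=1e10):
    """Spectral (Barzilai-Borwein) parameter for the regularized
    model: the Rayleigh-type quotient s'y / s's with the differences
    s = lam - lam_prev and y = grad - grad_prev of consecutive
    iterates and gradients."""
    s = lam - lam_prev
    y = grad - grad_prev
    sts = float(s @ s)
    if sts == 0.0:
        return 1.0  # degenerate (zero) step; fall back to a unit parameter
    return float(np.clip((s @ y) / sts, eta_min, eta_max))
```

On a quadratic with Hessian η·I the quotient recovers η exactly, which is the sense in which it carries second-order information at negligible cost.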

At each iteration ℓ, we should minimize the model *m*_{ℓ} subject to a trust region and to the nonnegativity of the dual variables. Any norm may be used to define the trust region, but since the feasible set of (14) is an orthant, the choice ‖·‖_{∞} fits better, in the sense that the constraints of the trust-region subproblem are simple bounds.

Therefore, we obtain the problem

where Δ^{(ℓ)} is the trust-region radius. The solution of problem (17) is given by the closed form
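Although the closed form (18) itself is not reproduced here, under the assumed model form (15), a linear term plus the (η^{(ℓ)}/2)‖·‖² regularization, the trust-region subproblem separates coordinatewise and its solution is the following projection (a sketch, not necessarily the paper's exact formula):

```python
import numpy as np

def trust_region_step(lam, grad, eta, delta):
    """Closed-form minimizer of the regularized linear model
        grad'(z - lam) + 0.5 * eta * ||z - lam||^2
    subject to ||z - lam||_inf <= delta and z >= 0.
    The problem separates by coordinate: each z_j is the
    unconstrained minimizer lam_j - grad_j / eta clipped to the
    feasible interval [max(0, lam_j - delta), lam_j + delta]."""
    lower = np.maximum(0.0, lam - delta)
    upper = lam + delta
    return np.clip(lam - grad / eta, lower, upper)
```

This is also the structure behind the remark in Section 3.3 that the trial point coincides with the first SPG trial point within the bounds of (17): the unconstrained step is a scaled gradient step, and the constraints only enter through a componentwise projection.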

A model algorithm based on the trust-region framework is given next for completeness.

**Algorithm: A trust-region approach applied to the dual of the MMA sub-problem**

Given λ^{(1)}, Δ^{(1)} > 0, 0 < ν < ω < 1, 0 < γ_{0} < γ_{1} < 1 ≤ γ_{2}, for ℓ = 1, 2, ... until convergence:

**1.** Compute η^{(ℓ)} using (16) and the trial point as in (18).

**2.** Compute W̄ at the trial point and the corresponding acceptance ratio.

**3.** Set the new iterate λ^{(ℓ+1)}, accepting or rejecting the trial point.

**4.** Set the new trust-region radius Δ^{(ℓ+1)}.

In the first iteration of our algorithm, to compute the spectral parameter we need another estimate λ^{(0)} distinct from the initial estimate λ^{(1)}. This estimate λ^{(0)} is computed by perturbing λ^{(1)}, i.e., λ^{(0)} = λ^{(1)} + *s*. In the numerical tests, we have used ε = 10^{-3}.

We have stated a summary of the algorithm used in the numerical experiments. The choice adopted in this work for updating the trust-region radius is based on [6, 7]. For further algorithmic details, see [11].

Despite not being our primary motivation, it is worth mentioning that (18) coincides with the first Spectral Projected Gradient (SPG) trial point (cf. [3]) for problem (14) within the bound constraints of (17). For further details on the SPG, see also [4] and references therein. Instead of adopting the linesearch procedure of the SPG algorithm, we use a trust-region scheme. As usual in methods that employ spectral gradients, better practical results are obtained by not imposing sufficient functional decrease at every iteration. In this sense, the acceptance condition of Step 3 provides a nonmonotone decrease for the function W̄, because it may be seen as the relaxed Armijo-like condition

whenever

**4 Numerical results**

This section is concerned with the description of the computational tests of modified versions of the MMA, based on the spectral updating, the relaxed conservative condition, and our trust-region approach applied to the dual of the MMA subproblem. The code was implemented in Matlab and the experiments were run on a Mac Pro with two Xeon E5462 processors of 2.8 GHz and 12 GB of RAM (without multiprocessing).

Two families of academic problems were addressed, parameterized by the number of variables *n >* 1, and suggested in [19]. Their general structure resembles that of topology optimization problems, namely nonconvex problems with a large number of variables, upper and lower bounds on all variables, and a relatively small number of general inequality constraints.

Problem 1 has a strictly convex objective function and nonlinear constraints defined by means of strictly concave functions, so that the feasible region is nonconvex. Problem 2, on the other hand, has a strictly concave objective function and the functions that define the feasible region are strictly convex. They are stated as

where the square matrices *S, P* and *Q* of dimension *n* are symmetric and positive definite. Their elements are given by

for all *i* and *j*. The feasible starting points for Problems 1 and 2 are *x^{(1)}* = (0.5, ..., 0.5)^{T} ∈ R^{n} and *x^{(1)}* = (0.25, ..., 0.25)^{T} ∈ R^{n}, respectively.

The problem dimension *n* varied in {100, 500, 1000, 2000} for both problems. Problems 1 and 2 are formulated as in (1), so they were initially written in the format (2) with *d_{i}* = 1 and *c_{i}* = 1000, for *i* = 1, ..., *m*. These choices have produced *y* = 0 for each outer iterate.

To establish the stopping criteria, note that the KKT conditions of the considered problems may be stated as follows, using the notation α_{+} = max{0, α} and α_{-} = max{0, -α}:

The 2*n* + 4 equalities displayed previously may be concisely stated as *r*_{φ}(*x*, λ) = 0, φ = 1, ..., 2*n* + 4. As a by-product of the strategies employed to solve the problems, the inequalities of the KKT system are always fulfilled by the primal and dual variables, *x_{j}* and λ_{i}, respectively. The outer loop finishes successfully whenever *x* and λ are such that

The sequence used in (5) to relax the conservative condition was chosen as follows:

where the quantity involved is the residue of the KKT conditions of problem (2) at the *k*-th outer iteration. To ensure that the sequence *N_{k}* is bounded, we take its minimum with a cap *N*_{max}. However, the value *N*_{max}, set at 10^{12} in the numerical tests, was never reached. In this way, the sequence naturally fulfills the assumption (6).

Three strategies were adopted to solve the problems: in **Strategy 1** the spectral parameter was used to update the parameters at the beginning of each outer iteration; in **Strategy 2** the relaxed conservative condition (5) was employed as the acceptance criterion so that the solution of the MMA sub-problem becomes the next outer iterate; in **Strategy 3** both strategies 1 and 2 are combined.

We have compared eight distinct instances: Svanberg's primal-dual approach (PD), our dual trust-region approach (TR), and the combination of these approaches with each of the three strategies described above.

From the whole set of comparative results involving all these combinations, and thoroughly described in [11], we have noticed that the strategies that used the dual trust-region approach are competitive in terms of the demanded number of iterations, and more efficient when it comes to the CPU time spent, in comparison with those that rely upon the primal-dual approach.

Among the instances that used the dual trust-region approach, we have observed that in most of the cases Strategy 1 usually needs slightly more outer iterations to reach convergence than the pure algorithm without any modification. However, the amount of additional inner iterations decreases in a larger proportion, so that for both problems, the total number of solved subproblems is smaller for the spectral strategy than for the method without further modifications.

Analyzing Strategy 2, for Problem 1, we have noticed that despite the increase in the number of outer iterations, the additional inner iterations demanded were so few that the total number of solved subproblems is even smaller than in Strategy 1. For Problem 2, although the additional amount of inner iterations performed is not so small, all in all, the total effort decreases when compared with Strategy 1 and with the original algorithm.

Focusing now on Strategy 3, the results obtained were excellent. The numbers of both outer and additional inner iterations decreased by a large amount, and consequently the CPU time spent is the least among the four instances that used the dual trust-region approach for solving the MMA subproblem.

Additional tests were produced by randomly generating ten initial points within the simple bounds of the problems, for each of the dimensions under consideration. Thus, eighty tests were solved, forty for Problem 1 and forty for Problem 2. Each strategy was used for solving these tests, and the results corroborate the previous ones. It is important to note that for each test, including the aforementioned, with the initial point from the literature, all the strategies achieved the same optimal solution.

In Figure 1 we depict the performance profiles [8] of the results corresponding to the eighty generated tests. Figure 1(a) is concerned with the number of solved subproblems, whereas Figure 1(b), with the CPU time spent. In both profiles we notice that Strategies 3 PD and 3 TR are the most efficient. From Figure 1(a) it is evident that Svanberg's PD is more efficient in terms of the number of solved subproblems. Nevertheless, both approaches (PD and TR) for solving the subproblems are competitive, when compared pairwise with each of the three strategies. When it comes to the CPU time spent, the strategies that rely upon the dual trust-region approach were more efficient, as can be seen in Figure 1(b).

**5 Conclusions**

We have proposed a new strategy for solving the MMA subproblems by means of its dual formulation, using a trust-region technique. This alternative approach deals with the dual problem associated with the MMA subproblem, which is a maximization problem of a concave function under nonnegativity constraints. We have taken advantage of the dual objective function properties, such as being concave and continuously differentiable up to first order, together with the existence of a closed form for the solution of the subproblem obtained with a regularized spectral model within a trust-region scheme. Such a globalization strategy was the key point in recasting, in a simpler way, the dual approach originally adopted by Svanberg [16], and replaced by the primal-dual approach [18]. We have also presented a modification for the MMA, based on relaxing the conservative condition by means of a summable controlled forcing sequence, so that the maintenance of global convergence is proved [11]. Another modification for the MMA, previously proposed by the authors, was recalled to be used in the numerical tests. It is based on the spectral parameter for updating the parameters so as to improve the quality of the MMA models.

The numerical experiments revealed that the suggested dual approach is simpler and more efficient than Svanberg's primal-dual strategy for solving the family of test problems under consideration. Indeed, we have noticed that the performance of our dual trust-region approach was quite similar to that of Svanberg's primal-dual approach in terms of the employed number of iterations, but when it comes to the CPU time demanded, our approach was by far superior. Additionally, the performances of both the trust-region dual and the primal-dual approaches were improved in an increasing pattern with the addition of each suggested modification, namely using the spectral updating (Strategy 1), the relaxed conservative condition (Strategy 2) and the combination of these two ideas (Strategy 3), pointing out the potential contribution of such modifications to the original algorithm.

**Acknowledgements.** This work is partially supported by FAPESP (06/528467, 06/53768-0, 10/09773-4), CNPq (303465/2007-7, 306220/2009-1) and PRONEX-Optimization. The authors are grateful for the comments and suggestions of the anonymous referees, which helped to improve the presentation of the original manuscript.

**References**

[1] A. Auslender, P.J.S. Silva and M. Teboulle, *Nonmonotone projected gradient methods based on barrier and Euclidean distances.* Comput. Optim. Appl., **38** (2007), 305-327.

[2] D.P. Bertsekas, *Nonlinear Programming: Second Edition.* Athena Scientific, Belmont (2003). (Second Printing).

[3] E.G. Birgin, J.M. Martínez and M. Raydan, *Nonmonotone spectral projected gradient methods on convex sets.* SIAM J. Optim., **10** (2000), 1196-1211.

[4] E.G. Birgin, J.M. Martínez and M. Raydan, *Spectral Projected Gradient Methods.* In *Encyclopedia of Optimization,* C.A. Floudas and P.M. Pardalos, editors. Second Edition. Springer (2009), 3652-3659.

[5] M. Bruyneel, P. Duysinx and C. Fleury, *A family of MMA approximations for structural optimization.* Struct. Multidiscip. Optim., **24** (2002), 263-276.

[6] A.R. Conn, N.I.M. Gould and Ph.L. Toint, *LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A).* Springer-Verlag, Berlin, Heidelberg (1992).

[7] A.R. Conn, N.I.M. Gould and Ph.L. Toint, *Trust-Region Methods.* SIAM, Philadelphia (2000).

[8] E.D. Dolan and J.J. Moré, *Benchmarking optimization software with performance profiles.* Math. Program., **91** (2002), 201-213.

[9] C. Fleury and V. Braibant, *Structural optimization: A new dual method using mixed variables.* Internat. J. Numer. Methods Engrg., **23** (1986), 409-428.

[10] M.A. Gomes-Ruggiero, M. Sachine and S.A. Santos, *A spectral updating for the method of moving asymptotes.* Optim. Methods Softw., **25**(6) (2010), 883-893.

[11] M.A. Gomes-Ruggiero, M. Sachine and S.A. Santos, *Globally convergent modifications to the Method of Moving Asymptotes and the solution of the subproblems using trust regions: theoretical and numerical results.* Technical Report RP 15/10, IMECC, Unicamp, revised in November (2010), 44 p. Available at http://www.ime.unicamp.br/rel_pesq/relatorio.html.

[12] O. Güler, *New proximal point algorithms for convex minimization.* SIAM J. Optim., **2**(4) (1992), 649-664.

[13] D.H. Li and M. Fukushima, *A derivative-free line search and global convergence of Broyden-like method for nonlinear equations.* Optim. Methods Softw., **13**(3) (2000), 181-201.

[14] O.L. Mangasarian and S. Fromovitz, *The Fritz John necessary optimality conditions in the presence of equality and inequality constraints.* J. Math. Anal. Appl., **17** (1967), 37-47.

[15] Q. Ni, *A globally convergent method of moving asymptotes with trust region technique.* Optim. Methods Softw., **18**(3) (2003), 283-297.

[16] K. Svanberg, *The method of moving asymptotes - a new method for structural optimization.* Internat. J. Numer. Methods Engrg., **24** (1987), 359-373.

[17] K. Svanberg, *A Globally Convergent Version of MMA without Linesearch.* In: G.I.N. Rozvany and N. Olhoff (eds). Proceedings of the First World Congress of Structural and Multidisciplinary Optimization, (1995), 9-16.

[18] K. Svanberg, *The Method of Moving Asymptotes - Modelling aspects and solution schemes.* Lecture Notes for the DCAMM course Advanced Topics in Structural Optimization, (1998), 24 p.

[19] K. Svanberg, *A class of globally convergent optimization methods based on conservative convex separable approximations.* SIAM J. Optim., **12** (2002), 555-573.

[20] H. Wang and Q. Ni, *A new method of moving asymptotes for large-scale unconstrained optimization.* Appl. Math. Comput., **203** (2008), 62-71.

[21] D.N. Wilke, S. Kok and A.A. Groenwold, *The application of gradient-only optimization methods for problems discretized using non-constant methods.* Struct. Multidiscip. Optim., **40** (2010), 433-451.

[22] W.H. Zhang and C. Fleury, *A modification of convex approximation methods for structural optimization.* Comput. & Structures, **64** (1997), 89-95.

[23] C. Zillober, *Global convergence of a nonlinear programming method using convex approximations.* Numer. Algorithms, **27** (2001), 256-289.

Received: 15/VIII/10. Accepted: 05/I/11.

* Corresponding author.