

Active-set strategy in Powell's method for optimization without derivatives

Ma. Belén Arouxét (I); Nélida Echebest (I); Elvio A. Pilotta (II)

(I) Departamento de Matemática, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 50 y 115, La Plata (1900), Buenos Aires, Argentina

(II) Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, CIEM (CONICET), Medina Allende s/n, Ciudad Universitaria (5000), Córdoba, Argentina. E-mails: belen|opti@mate.unlp.edu.ar / pilotta@famaf.unc.edu.ar

ABSTRACT

In this article we present an algorithm for solving bound constrained optimization problems without derivatives, based on Powell's method [38] for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration a quadratic interpolation model of the objective function is constructed around the current iterate, and this model is minimized to obtain a new trial point. The whole process is embedded within a trust-region framework. Our algorithm uses the infinity norm instead of the Euclidean norm, and the resulting box constrained quadratic subproblem is solved by an active-set strategy that explores the faces of the box. Hence the algorithm extends easily to bound constrained problems. We compare our implementation with NEWUOA and BOBYQA, Powell's algorithms for unconstrained and bound constrained derivative-free optimization, respectively. Numerical experiments show that, in general, our algorithm requires fewer function evaluations than Powell's algorithms. Mathematical subject classification: Primary: 06B10; Secondary: 06D05.

Keywords: derivative-free optimization, active-set method, spectral gradient method.

1 Introduction

We consider the bound constrained optimization problem in which the derivatives of the objective function f are not available and the functional values f(x) are typically very expensive or difficult to compute. That is,

minimize f(x) subject to L ≤ x ≤ U,

where L < U, and we assume that ∇f(x) cannot be computed for any x. First of all we consider the unconstrained problem,

minimize f(x), x ∈ Rn.

This situation frequently occurs in problems where the functional values f(x) either come from physical, chemical or geophysical measurements, or are the results of very complex computer simulations. The diversity of applications includes problems such as rotor blade design [10], wing platform design [4], aeroacoustic shape design [26] and hydrodynamic design [17], as well as problems of molecular geometry [1, 27], groundwater community problems [19, 30], medical image registration [33] and dynamic pricing [24].

There are several essentially different methods for solving this kind of problem [14]. A first group includes the direct search or pattern search methods. They are based on exploring the space of variables using function evaluations at sample points given by a predefined geometric pattern. That is the case of methods where sampling is guided by a suitable set of directions [15, 41], and of methods based on simplices and operations over them, such as the Nelder-Mead algorithm [31]. These methods do not exploit the possible smoothness of the objective function and, therefore, may require a very large number of function evaluations; on the other hand, they can also be useful for non-smooth problems. A comprehensive survey of these methods can be found in [23].

A second group of methods is based on modelling the objective function by multivariate interpolation in combination with trust-region techniques. These methods were introduced by Winfield [42, 43]. A polynomial model is built to interpolate the objective function at the points where the functional values are known. The model is then minimized over the trust-region and a new point is computed. The objective function is evaluated at this new point, possibly enlarging the interpolation set. The new point is checked as to whether it improves the objective function, and the whole process is repeated until convergence is achieved. Thus, the geometry of the interpolation points and the model minimization are the keys to a good performance of these algorithms.

At the present time, there are several implementations of algorithms based on interpolation approaches, although the most tested and well established are DFO, developed by Conn, Scheinberg and Toint [11, 12, 13], and NEWUOA, developed by Powell [34, 35, 36, 37, 38, 39, 40]. See also the Wedge method developed by Marazzi and Nocedal [25] and the code developed by Berghen and Bersini [5], named CONDOR, which includes a parallel version based on NEWUOA.

In this article we consider the model-based trust-region method NEWUOA [38] because this code performed very well in recent benchmark comparison articles [18, 29, 32]. Moreover, Moré and Wild [29] report that NEWUOA is the most effective derivative-free unconstrained optimization method for smooth functions. These results and the recent developments by Powell [39] have encouraged us to develop model-based methods further. Recently, Powell introduced the algorithm BOBYQA [40], a new version of NEWUOA that successfully solves bound constrained optimization problems.

In NEWUOA [38] and BOBYQA [40] the trust-region subproblem is defined using the Euclidean norm and is solved by a (truncated) conjugate gradient method. In our research, we use the infinity norm instead of the Euclidean norm, together with an active-set strategy combined with the spectral projected gradient method, as proposed by Birgin and Martínez [7]. This strategy is not computationally expensive and has been successful for medium-scale problems, so we consider that it could be useful for our algorithm.

The numerical results and the observations made in this paper are based on experiments involving all the smooth problems suggested in [29]. We have also tested the algorithms for a set of medium-scale problems (100 variables). We compare our implementation with NEWUOA (for unconstrained optimization problems) and BOBYQA (for bound constrained optimization problems).

This article is organized as follows. In Section 2 we describe the main ideas of the interpolation-based methods for derivative-free optimization. In Section 3 we give a short description of the NEWUOA solver for derivative-free optimization. In Section 4 we describe the active-set strategy for solving the quadratic trust-region subproblem. In Section 5 we show numerical results of our implementation for unconstrained and bound constrained optimization problems and we give some comments about the performance. Finally, conclusions are given in Section 6.

2 Main ideas of the interpolation-based methods for derivative-free optimization

Trust-region strategies have been considered in derivative-free optimization in many articles [11, 12, 13, 37, 38, 39]. Basically, the main steps of the trust-region method for nonlinear programming are the following:

Step 1: Building interpolation step. Given a current iterate xk, build a good local approximation model (e.g., based on a second order Taylor approximation):

mk(xk + s) = dk + gkT s + (1/2) sT Gk s,

where dk ∈ R, gk ∈ Rn and Gk ∈ Rn×n is a symmetric matrix, whose coefficients are determined by using the interpolation conditions.

Step 2: Subproblem minimization. Set a trust-region radius Δk that defines the trust-region

Bk = {xk + s : s ∈ Rn, ||s|| ≤ Δk},

and minimize mk in Bk.

Step 3: Accepting or rejecting the step. Compute the ratio

ρk = (f(xk) − f(xk + s)) / (mk(xk) − mk(xk + s)).

If ρk is sufficiently positive, the iteration is successful: the next iterate, xk+1 = xk + s, is taken and the trust-region radius may be enlarged. If ρk is not positive enough, the iteration is unsuccessful: the current iterate xk is kept and the trust-region radius is reduced.
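For illustration, a minimal sketch of Steps 1-3 in Python; the helper names and the acceptance and update constants are our assumptions, not values taken from [38]:

```python
import numpy as np

def trust_region_step(f, x_k, model_k, minimize_model, delta_k,
                      eta_accept=0.1, shrink=0.5, grow=2.0):
    """One accept/reject cycle of a model-based trust-region method.

    f              -- objective (only function values are available)
    model_k        -- callable m_k(x_k + s), given as a function of s
    minimize_model -- solver returning a step s with ||s|| <= delta_k
    """
    s = minimize_model(model_k, delta_k)
    pred = model_k(np.zeros_like(x_k)) - model_k(s)   # predicted reduction
    ared = f(x_k) - f(x_k + s)                        # actual reduction
    rho = ared / pred if pred > 0 else -np.inf
    if rho >= eta_accept:                  # successful: accept, enlarge radius
        return x_k + s, grow * delta_k
    return x_k, shrink * delta_k           # unsuccessful: keep x_k, shrink
```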

2.1 Interpolation ideas

To define the model mk(xk + s) we need to obtain the scalar dk, the vector gk and the symmetric matrix Gk. They are determined by requiring that the model mk interpolate the function f at a set Y = {y1, y2, ..., yp} of points containing the current iterate xk:

mk(yi) = f(yi), i = 1, ..., p.    (1)

The cardinality of Y must be p = (n + 1)(n + 2)/2 to get a full quadratic model mk. Since there are 1 + n + n(n + 1)/2 coefficients to be determined in the model, the interpolation conditions (1) represent a square system of linear equations in the coefficients dk, gk, Gk. If the interpolation points {y1, y2, ..., yp} are adequately chosen, the linear system is nonsingular and the model is uniquely determined [14]. In practice, however, conditions on the geometry of the interpolation set (poisedness) are required in order to obtain a good model.
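To make the interpolation step concrete, the following sketch (in Python with numpy; the function and variable names are ours, and this is a naive dense construction, not the scheme used inside NEWUOA) builds and solves the square linear system (1) for a full quadratic model:

```python
import numpy as np

def quadratic_model(Y, fvals):
    """Fit m(y) = d + g^T y + 0.5 y^T G y through p = (n+1)(n+2)/2 points.

    Y     -- list of interpolation points (arrays of length n)
    fvals -- the values f(y) for each y in Y
    Returns d (scalar), g (n-vector), G (symmetric n x n matrix).
    """
    n = len(Y[0])
    # Unknowns: d, the n entries of g, and the upper triangle of G.
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    rows = []
    for y in Y:
        row = [1.0] + list(y)
        for (i, j) in idx:
            # Coefficient of G_ij: 0.5*y_i^2 on the diagonal, y_i*y_j off it.
            row.append(0.5 * y[i] * y[j] if i == j else y[i] * y[j])
        rows.append(row)
    coef = np.linalg.solve(np.array(rows), np.asarray(fvals, float))
    d, g = coef[0], coef[1:n + 1]
    G = np.zeros((n, n))
    for k, (i, j) in enumerate(idx):
        G[i, j] = G[j, i] = coef[n + 1 + k]
    return d, g, G
```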

3 The NEWUOA and BOBYQA algorithms

NEWUOA is an algorithm proposed by Powell in [38] based on previous articles [34, 35, 36, 37]. This method has a sophisticated strategy in order to manage the trust-region radius and the radius of the interpolation set. The smaller of the two radii is used to force the interpolation points to be sufficiently far apart to avoid the influence of noise in the function values. Hence, the trust-region updating step is more complicated than the classical steps in the trust-region framework [14].

The main characteristic features of NEWUOA are the following:

(i) It uses quadratic approximations to the objective function, aiming at a fast rate of convergence in iterative algorithms for unconstrained optimization. However, each quadratic model has 1 + n + n(n + 1)/2 independent coefficients to be determined, and this number could be prohibitively expensive in many applications with large n. So, NEWUOA tries to construct suitable quadratic models from fewer data. Each interpolation set has p points, where n + 2 ≤ p ≤ (n + 1)(n + 2)/2; the default value in NEWUOA is p = 2n + 1. Since p could be less than (n + 1)(n + 2)/2, the interpolation conditions may not determine the model completely. The remaining degrees of freedom are fixed by minimizing the Frobenius norm of the difference between two consecutive Hessian models [37]. This procedure defines the model uniquely, whenever the constraints (1) are consistent and p > n + 2, because the Frobenius norm is strictly convex. The updates along the iterations take advantage of the assumption that every update of the interpolation set is the replacement of just one point by a new one. Powell showed in [37], using the Lagrange polynomials of the interpolation points, that when a point x+ replaces one of the points in Y it is possible to maintain the linear independence of the interpolation conditions (1). These conditions are inherited by the new interpolation set, when x+ replaces yt in Y, whenever yt is chosen such that the Lagrange polynomial Lt(x+) is nonzero. Furthermore, the preservation of linear independence in the linear system (1) along the iterations, in the presence of computer rounding errors, is more stable if |Lt(x+)| is relatively large.
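In symbols, the least Frobenius norm update described above chooses the new model by solving (our transcription of the variational problem treated in [37], with Q the space of quadratic polynomials on Rn):

```latex
\min_{m_k \in \mathcal{Q}} \ \tfrac{1}{2}\,\bigl\| \nabla^2 m_k - \nabla^2 m_{k-1} \bigr\|_F^2
\quad \text{subject to} \quad m_k(y^i) = f(y^i), \quad i = 1, \dots, p .
```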

(ii) It solves the trust-region quadratic minimization subproblem using a truncated conjugate gradient method. The Euclidean norm is adopted to define the trust-region Bk.

(iii) Updates of the interpolation set points are performed via the following steps:

(a) If the trust-region minimization at the k-th iteration produces a step s which is not too short compared to the maximum distance between the sample points and the current iterate, then the function f is evaluated at xk + s and this new point becomes the next iterate, xk+1, whenever the reduction in f is sufficient. Furthermore, if the new point xk + s is accepted as the new iterate, it is included into the interpolation set Y, by removing the point yt so that the distance ||xk − yt|| and the value |Lt(xk + s)| are as large as possible. The trade-off between these two objectives is reached by maximizing the weighted absolute value ωt |Lt(xk + s)|, where the weight ωt reflects the distance ||xk − yt|| (a sketch of this selection rule follows the list).

(b) If the step s is rejected, the new point xk + s may still be accepted into Y, by removing the point yt such that the value ωt |Lt(xk + s)| is maximized, as long as either |Lt(xk + s)| > 1 or ||xk − yt|| > r Δk is satisfied for a given r > 1.

(c) If the improvement in the objective function is not sufficient, and it is considered that the model needs to be improved, then the algorithm chooses a point in Y which is the farthest from xk and attempts to replace it with a point which maximizes the absolute value of the corresponding Lagrange polynomial in the trust-region.
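The selection rule in items (a)-(b) can be sketched as follows; the weight formula and the routine names are our assumptions, since Powell's actual weights are more elaborate:

```python
import numpy as np

def point_to_replace(Y, x_k, x_plus, lagrange_value):
    """Pick the index t maximizing w_t * |L_t(x_plus)|.

    lagrange_value(t, x) -- evaluates the t-th Lagrange polynomial of Y at x
    The weight grows with the distance ||x_k - y_t||, so remote points are
    preferred for removal (the actual weight in NEWUOA differs).
    """
    best_t, best_val = None, -1.0
    for t, y_t in enumerate(Y):
        w_t = max(1.0, np.linalg.norm(x_k - y_t) ** 2)  # assumed weight
        val = w_t * abs(lagrange_value(t, x_plus))
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```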

The general scheme of NEWUOA is the following:

Step 0: Initialization. Given x0, NPT the number of interpolation points, Y the set of interpolation points, ρ0 > 0 and 0 < ρend < 0.1 ρ0, set Δ = ρ0, ρ = ρ0, t = 0, k ← 0.

Build an initial quadratic model m0(x0 + s) of the function f(x).

Step 1: Solve the quadratic trust-region problem. Compute

s = argmin mk(xk + s) subject to ||s|| ≤ Δ.

Step 2: Acceptance test. If the step s is not too short, compute the ratio

ratio = (f(xk) − f(xk + s)) / (mk(xk) − mk(xk + s)).

Step 2.1: Update the trust-region radius. Reduce Δ and keep ρ. Let yt be the farthest interpolation point from xk. If ||yt − xk|| > 2Δ go to Step 3, otherwise go to Step 5.

Step 2.2: Update the trust-region radius and ρ. Go to Step 6.

Step 3: Alternative iteration. Re-calculate s in order to improve the geometry of the interpolation set.

Step 4: Update the interpolation set and the quadratic model.

Step 5: Update the approximation. If ratio > 0, set xk+1 ← xk + s, k ← k + 1 and go to Step 1.

Step 6: Stopping criterion. If ρ = ρend, declare "end of the algorithm"; otherwise set xk+1 ← xk, k ← k + 1 and go to Step 1.

The algorithm terminates if one of the following conditions holds: the quadratic model does not decrease, ρ = ρend, or the maximum number of iterations is reached.
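For reference, a schematic transcription of Steps 0-6 in Python. The helper routines are hypothetical, the short-step test and the update constants (0.5, 0.1) are our assumptions, and the branching is simplified with respect to Powell's Fortran code:

```python
import numpy as np

def trb_scheme(f, x0, rho0, rho_end, build_model, solve_tr,
               improve_geometry, update_model):
    """Skeleton of Steps 0-6 above (not the actual NEWUOA code)."""
    x, rho, delta = x0, rho0, rho0
    model = build_model(f, x)                        # Step 0
    while rho > rho_end:                             # Step 6: rho = rho_end stops
        s = solve_tr(model, x, delta)                # Step 1
        pred = model(x) - model(x + s)
        if pred <= 0:
            break                                    # model does not decrease
        if np.linalg.norm(s) < 0.5 * rho:            # step too short (assumed test)
            rho, delta = 0.1 * rho, max(0.5 * delta, rho)   # Step 2.2
            continue
        ratio = (f(x) - f(x + s)) / pred             # Step 2
        if ratio <= 0:
            delta *= 0.5                             # Step 2.1: reduce Delta
            s = improve_geometry(model, x, delta)    # Step 3: alternative step
        model = update_model(model, x, s)            # Step 4
        if ratio > 0:
            x = x + s                                # Step 5: accept the step
    return x
```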

Numerical results of NEWUOA [39] encouraged the author to introduce some modifications in order to solve bound constrained optimization problems. The resulting algorithm is called BOBYQA [40].

In BOBYQA, Powell proposed the routine TRSBOX for solving the trust-region problem; it obtains an approximate minimizer of mk(xk + s) in the intersection of a spherical trust-region with the bound constraints, via a truncated conjugate gradient algorithm. In our method we have replaced the routine TRSBOX by the solver described below. Besides that, in the BOBYQA algorithm, other changes to the variables are designed to improve the model without reducing the objective function f(x), using the subroutine ALTMOV. In that routine, the quadratic model mk(xk + s) is ignored in the construction of a new point xk + s by an alternative iteration. This subroutine is used by BOBYQA when the inclusion of the new iterate xk + s computed by TRSBOX could cause near linear dependence in the interpolation conditions. Thus, BOBYQA replaces the step s from TRSBOX by a new s, computed by means of ALTMOV by solving a quadratic function different from mk(xk + s).

4 The active-set strategy for solving the box constrained subproblem

The quadratic trust-region minimization subproblem is one of the most expensive parts of the algorithms NEWUOA and BOBYQA. Both algorithms use the Euclidean norm to define the trust-region. In our case, we adopted the ∞-norm since we have bounds on the variables.

Given xk and Δk > 0, the current approximation and the trust-region radius respectively, we define

Ωk = {x ∈ Rn : ||x − xk||∞ ≤ Δk, L ≤ x ≤ U}.

The trust-region subproblem is given by

minimize mk(xk + s) subject to xk + s ∈ Ωk,    (2)

where gk ∈ Rn and Gk is an n × n symmetric matrix.

In our algorithm, called TRB-Powell (Trust-Region Box), we replaced the quadratic solver of NEWUOA and BOBYQA by an active-set method that uses the strategy described in [7]. For solving the trust-region problem (2), this method uses a truncated Newton approach with line searches inside the faces whereas, for leaving the faces, it uses spectral projected gradient iterations as defined in [9]. Many active constraints can be added or deleted at each iteration, so the method is useful for large-scale problems. Besides that, numerical results have shown that this strategy is successful and efficient for medium and large-scale problems [2, 3, 6, 7, 8].

Specifically, suppose that sj is the current iterate, belonging to a particular face of Ωk. In order to decide whether it is convenient to quit this face or to continue exploring it, we compute the gradient ∇mk(sj), its projection onto Ωk (denoted by gP(sj)) and its projection onto this face (denoted by gI(sj)). Given η ∈ (0, 1), if

||gI(sj)|| ≥ η ||gP(sj)||    (3)

does not hold, the face is abandoned and sj+1 is computed by performing one iteration of the spectral projected gradient algorithm, as described in [7]. On the other hand, while condition (3) is satisfied, an approximate minimizer of mk(xk + s) belonging to the closure of Ωk is computed using the conjugate gradient method. This procedure terminates when ||gP(s*)|| is lower than a tolerance for some s*.
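A sketch of this face-leaving test together with one spectral projected gradient step on a box, assuming the reconstruction of condition (3) given above and the standard spectral steplength of [9] (the routine names and safeguards are ours):

```python
import numpy as np

def project_box(x, lo, hi):
    """Componentwise projection onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)

def stay_in_face(grad, s, lo, hi, eta=0.1, tol=1e-12):
    """Test (3): keep exploring the current face while the internal
    gradient g_I carries at least a fraction eta of the projected one."""
    g_P = project_box(s - grad, lo, hi) - s        # projected gradient step
    free = (s > lo + tol) & (s < hi - tol)         # variables not at a bound
    g_I = np.where(free, grad, 0.0)                # gradient within the face
    return np.linalg.norm(g_I) >= eta * np.linalg.norm(g_P)

def spg_step(s, s_prev, grad, grad_prev, lo, hi,
             sigma_min=1e-10, sigma_max=1e10):
    """One spectral projected gradient iteration on the box ([9]);
    the nonmonotone line search of [9] is omitted in this sketch."""
    ds, dy = s - s_prev, grad - grad_prev
    sigma = ds.dot(ds) / ds.dot(dy) if ds.dot(dy) > 0 else 1.0
    sigma = min(max(sigma, sigma_min), sigma_max)  # safeguards
    return project_box(s - sigma * grad, lo, hi)
```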

The theoretical results in [7] allow us to assert that the application of this method to the quadratic model subject to Ωk is well defined and that the convergence criterion is reached. In fact, Birgin and Martínez have proved that a Karush-Kuhn-Tucker (KKT) point is computed up to an arbitrary precision. Also, assuming that all the stationary points are nondegenerate, the algorithm identifies the face to which the limit point belongs in a finite number of iterations [7].

5 Numerical experiments

As mentioned in the introduction, we compared TRB-Powell with NEWUOA and BOBYQA in terms of the number of function evaluations, as is usual in derivative-free optimization articles. We have chosen two groups of test problems, small-size and medium-size problems. TRB-Powell is written in Fortran 77, as are NEWUOA and BOBYQA. We used the Intel Fortran Compiler 9.1.036. The codes were compiled and executed on a PC running Linux with an AMD 64 4200 Dual Core processor.

5.1 Test problems

Concerning the unconstrained case, we considered a set of small-size problems proposed by Moré and Wild in [29]. Most of them are nonlinear least squares minimization problems from the CUTEr collection [20]. The number of variables of these problems varies from 2 to 12. Also, aiming to test problems of larger dimensions, we considered several problems employed by Powell [38], where n varies from 20 to 100. These functions are Arwhead, Chrosen, Chebyqad, Extended Rosenbrock, Penalty1, Penalty2, Penalty3 and Vardim. All these functions are smooth and the respective results are shown in Table 3.

On the other hand, for bound constrained optimization we have considered some small-size problems from [22, 28, 29], whose names and dimensions can be seen in Table 4.

In order to test medium-size problems we considered the Arwhead, Chebyqad, Penalty1 and Chrosen functions subject to bound constraints. The bound constraints of the first three problems require all the components of x to be in the interval [−10, 10], and in the case of the Chrosen function in the interval [−3, 0]. Furthermore, we considered one particular test problem studied in [40]. That problem, named "points in square" (Invdist2), has many different local minima. The objective function is the sum of the reciprocals of all pairwise distances between the points Pi, i = 1, ..., M, in two dimensions, where M = n/2 and the components of Pi are x(2i − 1) and x(2i). Thus, each vector x ∈ Rn defines the M points Pi. The initial point x0 gives equally spaced points on a circle. The bound constraints of this problem require all the components of x to be in the interval [−1, 1].
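For reference, a sketch of the Invdist2 objective and of a circular starting configuration as described above (the circle radius is our assumption; [40] fixes the actual initial point):

```python
import numpy as np

def invdist2(x):
    """Sum of the reciprocals of all pairwise distances between the
    M = n/2 planar points P_i = (x[2i-2], x[2i-1]) (0-based indexing)."""
    P = x.reshape(-1, 2)
    M = len(P)
    return sum(1.0 / np.linalg.norm(P[i] - P[j])
               for i in range(M) for j in range(i + 1, M))

def x0_on_circle(n, radius=0.5):
    """Equally spaced points on a circle; the radius is an assumption."""
    M = n // 2
    theta = 2.0 * np.pi * np.arange(M) / M
    return np.column_stack((radius * np.cos(theta),
                            radius * np.sin(theta))).ravel()
```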

The details of the results in [40] showed that they are highly sensitive to computer rounding errors. Another set of medium-size problems was taken from [28]: the Ackley, Griewangk and Rastrigin functions, each of which has several local minima in its particular search domain: −15 ≤ x(i) ≤ 30, −600 ≤ x(i) ≤ 600, and −5.12 ≤ x(i) ≤ 5.12, i = 1, ..., n, respectively.

These functions have the global minimum at x* = 0 and the corresponding value f (x*) = 0.
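For completeness, the standard forms of these three functions are sketched below; the exact variants and constants used in [28] may differ slightly:

```python
import numpy as np

def ackley(x):
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewangk(x):
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))
```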

Moreover, we considered some problems from CUTEr that were used in the recent article by Gratton et al. [21], where the authors compared their new solver BC-DFO with BOBYQA. The detailed list of these problems and their characteristics is provided in Table 1, which shows the name and dimension of each problem, the number of variables bounded from below and above, and the minimum value f* reported in [21]. If a variable has no upper or lower bound, we used a very large artificial bound (10¹⁰ or −10¹⁰, respectively) for numerical purposes. The results of these experiments can be seen in Table 6.

5.2 Implementation details

We used the default parameters for NEWUOA and BOBYQA. We ran both codes with m = 2n + 1 interpolation points using the Frobenius norm approach, as Powell suggests [38].

The initial points and initial trust-region radius were the same as in the cited references [21, 22, 28, 29, 38].

The stopping criterion that we used is the same as Powell's, that is, the iterative process stops when the trust-region radius is lower than a tolerance ρend = 10⁻⁶.

The maximum number of function evaluations allowed for the unconstrained case was:

max fun = 9000, for small-size problems, and max fun = 80000, for medium-size problems.

The maximum number of function evaluations allowed for the bound constrained case was:

max fun = 9000, for small-size problems, and max fun = 20000, for medium-size problems.

In the following tables the symbol (**) indicates that the respective solver failed to find a solution or the maximum number of function evaluations allowed was reached.

Algorithmic parameters used in TRB-Powell (Step 1):

εP = 10⁻⁶.

η = 0.1. We have analyzed other values, but the best results were obtained with this value of η.

γ = 10⁻⁴, σmin = 10⁻¹⁰ and σmax = 10¹⁰.

In the conjugate gradient method we have used

5.3 Numerical results: unconstrained problems

Tables 2 and 3 report the names of the small and medium-size unconstrained optimization problems, respectively, the number of function evaluations (Feval) used to reach the stopping criterion, and the final functional values f(xend) obtained by the TRB-Powell and NEWUOA codes.

The results in Table 2 enable us to make the following observations.

The number of function evaluations required by NEWUOA is the smallest in 16 of the 36 problems, while for TRB-Powell this number is the smallest in 17 problems. In the remaining problems both algorithms performed the same number of function evaluations.

The average number of evaluations required over this whole set of problems was 1119 for TRB-Powell and 1272 for NEWUOA. In the Bard problem NEWUOA reached a local minimum while TRB-Powell obtained the global minimum. The functional values obtained are similar for both methods: NEWUOA obtained lower functional values in 18 problems whereas TRB-Powell did so in 14 of them.

These results are summarized in Figure 1 using the performance profiles described by Dolan and Moré in [16]. Given a set of problems P and a set S of optimization solvers, they compare the performance on problem p ∈ P by a particular algorithm s ∈ S with the best performance by any solver on this problem. Denoting by tp,s the number of function evaluations required when solving problem p ∈ P by method s ∈ S, they define the performance ratio

rp,s = tp,s / min{tp,s' : s' ∈ S}.

They assume that rp,s ∈ [1, rM], with rp,s = rM only when problem p is not solved by solver s. They also define the fraction

Qs(τ) = (1/np) |{p ∈ P : rp,s ≤ τ}|,

where np is the number of problems. Thus, we draw Qs(τ), with τ ∈ [1, rM).

In a performance profile plot, the top curve represents the most efficient method within a factor τ of the best measure. The percentage of the test problems for which a given method is best in regard to the performance criterion being studied is given on the left axis of the plot. It is necessary to point out that when both methods coincide with the best result, both receive the corresponding mark. This means that the sum of the successful percentages may exceed 100%.
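A compact way to compute the profiles Qs(τ) from a table of evaluation counts, following [16] (a sketch; the array layout is our choice):

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s] = function evaluations of solver s on problem p
    (np.inf where the solver failed). Returns Q[s, k] = fraction of
    problems whose ratio r_ps is at most taus[k]."""
    best = T.min(axis=1, keepdims=True)   # best solver on each problem
    R = T / best                          # performance ratios r_ps
    return np.array([[np.mean(R[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```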

Figure 1 shows the performance profile for both solvers in the interval [1, 1.97]. It can be seen that NEWUOA performs fewer function evaluations in 52% of the problems while TRB-Powell does so in 55% of the problems, and the latter has the best performance for τ ∈ [1.18, 1.58].

For medium-scale problems we tested eight problems from [38], with dimension n = 20, 40, 80, 100. Table 3 reports the numerical results, from which we can observe:

TRB-Powell required fewer function evaluations than NEWUOA in 23 of the 32 problems.

The average number of evaluations performed by TRB-Powell was 21219, and for NEWUOA it was 29611. In eight problems TRB-Powell performed substantially better than NEWUOA (NEWUOA needed more than 10000 extra function evaluations).

In the Arwhead, Penalty 3, Extended Rosenbrock, Chrosen (n = 20) and Chrosen (n = 80) problems, TRB-Powell was more efficient with respect to the final functional value obtained, f(xend). In particular, Penalty 3 was solved significantly better by TRB-Powell than by NEWUOA.

TRB-Powell obtained lower functional values than NEWUOA in 18 of the 32 problems, while NEWUOA did so in 9. In the remaining problems both methods achieved the same functional values.

This seems to indicate that for medium-size problems TRB-Powell performs better than NEWUOA.

These results are summarized in Figure 2 using the performance profiles described before. Since rp,s is large for several medium-size problems, we used a logarithmic scale in base 2 on the x-axis, as recommended in [16]. Thus, we draw

Qs(τ) = (1/np) |{p ∈ P : log2(rp,s) ≤ τ}|, with τ ∈ [0, log2(rM)],

where rM > 0 is such that rp,s ≤ rM, for all p and s.

Figure 2 – Performance profile for the medium-size unconstrained problems (logarithmic scale).

5.4 Numerical results: bound constrained problems

Tables 4 and 5 show the performance of BOBYQA and TRB-Powell for small and medium-size bound constrained problems, respectively. We observe in Table 4 that the number of function evaluations required by TRB-Powell is smaller than that of BOBYQA in 14 of the 16 test problems. Moreover, the functional values obtained were similar for both methods. This performance of TRB-Powell is depicted in Figure 3: it required fewer function evaluations in 93% of the problems.


On the other hand, for medium-size bound constrained problems, we observe in Table 5 that TRB-Powell required fewer function evaluations than BOBYQA in 23 of the 33 test problems. The final functional values were similar for BOBYQA and TRB-Powell. Figure 4 shows that TRB-Powell solved 78% of the problems using fewer function evaluations.


Table 6 shows the results obtained by BOBYQA (BQA) and TRB-Powell (TRB) in the experiments with the problems described in Table 1. It shows the name of the problem and the number of function evaluations needed by TRB-Powell and BOBYQA to attain two, four, six and eight significant figures of the objective function value f* reported by the authors of [21]. We indicate with "f" the cases in which a solver did not reach the required precision.

The results reported in Table 6 show that both solvers failed to solve one test problem at all four accuracy levels. Moreover, TRB-Powell did not reach the 8-digit precision on the HS25 problem. For low accuracy (2 significant figures), BOBYQA solved 17% of the test cases faster than TRB-Powell, while TRB-Powell solved 73% of the problems faster. For 8 correct significant figures, BOBYQA solved 17% of the test cases faster, and TRB-Powell 67%. For 4 and 6 significant digits, BOBYQA solved 13% of the test cases faster, while TRB-Powell solved 67% and 70% of the problems faster, respectively.

6 Conclusions

We have presented a modified version of the algorithms NEWUOA and BOBYQA for solving unconstrained and bound constrained derivative-free optimization problems. Our method uses an active-set strategy for solving the trust-region subproblems. Since we consider the infinity norm, a box constrained quadratic optimization problem has to be solved at each iteration.

We have compared our new version, TRB-Powell, with the original NEWUOA and BOBYQA. The numerical results reported in this paper are encouraging and suggest that our algorithm takes advantage of the active-set strategy to explore the trust-region box. The number of function evaluations was reduced in most of the cases. These promising numerical results and the recent articles by M.J.D. Powell [39, 40] encourage us to pursue further developments in constrained optimization without derivatives.

Acknowledgements. We are indebted to two anonymous referees whose comments helped a lot to improve this paper.

Received: 15/08/10.

Accepted: 05/01/11.

#CAM-321/10.

Work sponsored by CONICET and SECYT-UNC.

  • [1]  P. Alberto, F. Nogueira, H. Rocha and L. Vicente, Pattern search methods for user-provided points: Application to molecular geometry problems. SIAM Journal on Optimization, 14(4) (2004), 1216-1236.
  • [2] R. Andreani, E. Birgin, J.M. Martínez and M.L. Schuverdt, On augmented Lagrangian methods with general lower-level constraints. SIAM Journal on Optimization, 18 (2007), 1286-1309.
  • [3] R. Andreani, E. Birgin, J.M. Martínez and M.L. Schuverdt, Augmented Lagrangian methods under the constant positive linear dependence constraint qualification. Mathematical Programming, 111 (2008), 5-32.
  • [4] C. Audet and J. Dennis Jr., A pattern search filter method for nonlinear programming without derivatives. SIAM Journal on Optimization, 14(4) (2004), 980-1010.
  • [5] F. Berghen and H. Bersini, CONDOR: a new parallel, constrained extension of Powell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm. Journal of Computational and Applied Mathematics, 181(1) (2005), 157-175.
  • [6] E. Birgin, R. Castillo and J.M. Martinez, Numerical comparison of augmented Lagrangian algorithms for nonconvex problems. Computational Optimization and Applications, 31 (2005), 31-56.
  • [7] E. Birgin and J.M. Martinez, Large-scale active-set box-constrained optimization method with spectral projected gradients. Computational Optimization and Applications, 23(1) (2002), 101-125.
  • [8] E. Birgin and J.M. Martinez, Improving ultimate convergence of an augmented Lagrangian method. Optimization Methods and Software, 23 (2008), 177-195.
  • [9] E. Birgin, J.M. Martinez and M. Raydan, Nonmonotone spectral projected gradient methods on convex sets. SIAM Journal on Optimization, 10 (2000).
  • [10] A. Booker, J. Dennis Jr., P. Frank, D. Serafim and V. Torczon, Optimization using surrogate objectives on a helicopter test example. Computational Methods for Optimal Design and Control, J. Borggaard, J. Burns, E. Cliff and S. Schreck eds. Birkhäuser, (1998), 49-58.
  • [11]  A. Conn, K. Scheinberg and P. Toint, On the convergence of derivative-free methods for unconstrained optimization. Approximation Theory and Optimization: Tributes to M. Powell, M. Buhmann and A. Iserles Eds., Cambridge University Press, Cambridge, UK (1997), 83-108.
  • [12] A. Conn, K. Scheinberg and P. Toint, Recent progress in unconstrained nonlinear optimization without derivatives. Mathematical Programming, 79 (1997), 397-414.
  • [13] A. Conn, K. Scheinberg and P. Toint, A derivative free optimization algorithm in practice. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO (1998).
  • [14] A. Conn, K. Scheinberg and L. Vicente, Introduction to derivative-free optimization. SIAM (2009).
  • [15] J.E. Dennis and V. Torczon, Direct search methods on parallel machines. SIAM Journal on Optimization, 1 (1991), 448-474.
  • [16] E. Dolan and J. Moré, Benchmarking optimization software with performance profiles. Mathematical Programming, 91 (2002), 201-213.
  • [17] R. Duvigneau and M. Visonneau, Hydrodynamic design using a derivative-free method. Structural and Multidisciplinary Optimization, 28(2) (2004), 195-205.
  • [18] G. Fasano, J. Morales and J. Nocedal, On the geometry phase in model-based algorithms for derivative-free optimization. Optimization Methods and Software, 24(1) (2009), 145-154.
  • [19] K. Fowler, J. Reese, C. Kees, J. Dennis Jr., C. Kelley, C. Miller, C. Audet, A. Booker, G. Couture, R. Darwin, M. Farthing, D. Finkel, J. Gablonsky, G. Gray and T. Kolda, A comparison of derivative-free optimization methods for groundwater supply and hydraulic capture community problems. Advances in Water Resources, 31(5) (2008), 743-757.
  • [20] N.I. Gould, D. Orban and Ph.L. Toint, CUTEr and SifDec: A constrained and unconstrained testing environment, revisited. ACM Transactions on Mathematical Software, 29(4) (2003), 373-394.
  • [21] S. Gratton, Ph.L. Toint and A. Tröltzsch, An active-set trust region method for derivative-free nonlinear bound-constrained optimization. Technical Report. CERFACS. Parallel Algorithms Team., 10 (2010), 1-30.
  • [22] W. Hock and K. Schittkowski, Nonlinear programming codes, Lecture Notes in Economics and Mathematical Systems, Springer, 187 (1980).
  • [23] T. Kolda, R. Lewis and V. Torczon, Optimization by direct search: New perspectives on some classical and modern methods. SIAM Review, 45(3) (2003), 385-482.
  • [24] T. Levina, Y. Levin, J. McGill and M. Nediak, Dynamic pricing with online learning and strategic consumers: An application of the aggregation algorithm. Operations Research (2009), 385-482.
  • [25] M. Marazzi and J. Nocedal, Wedge trust region methods for derivative free optimization. Mathematical Programming, 91(2) (2002), 289-305.
  • [26] A. Marsden, M. Wang, J. Dennis and P. Moin, Optimal aeroacustic shape design using the surrogate management framework. Optimization and Engineering, 5(2) (2004), 235-262.
  • [27] J. Meza and M.L. Martínez, On the use of direct search methods for the molecular conformation problem. J. Comput. Chem., 15(6) (1994), 627-632.
  • [28] M. Molga and C. Smutnicki, Test functions for optimization needs. Available at http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf (2005).
  • [29] J. Moré and S. Wild, Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1) (2009), 172-191.
  • [30] P. Mugunthan, C. Shoemaker and R. Regis, Comparison of function approximation, heuristic and derivative-based methods for automatic calibration of computationally expensive groundwater bioremediation models. Water Resour. Res., 41(11) (2005), 1-17.
  • [31] J. Nelder and R. Mead, A simplex method for function minimization. Computer Journal, 7 (1965), 308-313.
  • [32] R. Oeuvray, Trust-region methods based on radial basis functions with application to biomedical imaging. Ph.D. thesis (2005).
  • [33] R. Oeuvray and M. Bierlaire, A new derivative-free algorithm for the medical image registration problem. International Journal of Modelling and Simulation, 27(2) (2007), 115-124.
  • [34] M.J.D. Powell, On the Lagrange functions of quadratic models that are defined by interpolation. Optimization Methods and Software, 16 (2001), 289-309.
  • [35] M.J.D. Powell, UOBYQA: Unconstrained optimization by quadratic approximation. Mathematical Programming, 92(3) (2002), 555-582.
  • [36] M.J.D. Powell, On trust region methods for unconstrained minimization without derivatives. Mathematical Programming, 97(3) (2003), 605-623.
  • [37] M.J.D. Powell, Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. Mathematical Programming, 100(1) (2004), 183-215.
  • [38] M.J.D. Powell, The NEWUOA software for unconstrained optimization without derivatives. Nonconvex Optimization and Its Applications, Springer US, 83 (2006).
  • [39] M.J.D. Powell, Developments of NEWUOA for minimization without derivatives. IMA Journal of Numerical Analysis, 28(4) (2008), 649-664.
  • [40] M.J.D. Powell, The BOBYQA algorithm for bound constrained optimization without derivatives. Technical report, Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, England (2009).
  • [41] V. Torczon, On the convergence of pattern search algorithms. SIAM Journal on Optimization, 7(1) (1997), 1-25.
  • [42] D. Winfield, Functions and functional optimization by interpolation in data tables. Ph.D. thesis (1969).
  • [43] D. Winfield, Functions minimization by interpolation in a data table. IMA Journal of Applied Mathematics, 12(3) (1973), 339-347.
