Computational & Applied Mathematics
Online version ISSN 1807-0302
Comput. Appl. Math. vol.31 no.2 São Carlos 2012
http://dx.doi.org/10.1590/S1807-03022012000200011
A new double trust regions SQP method without a penalty function or a filter^{*}
Xiaojing Zhu^{**}; Dingguo Pu
Department of Mathematics, Tongji University, Shanghai, 200092, PR China. Email: 042563zxj@163.com
ABSTRACT
A new trust-region SQP method for equality constrained optimization is considered. This method avoids using a penalty function or a filter, yet it can be globally convergent to first-order critical points under some reasonable assumptions. Each SQP step is composed of a normal step and a tangential step, for which different trust regions are applied in the spirit of Gould and Toint [Math. Program., 122 (2010), pp. 155-196]. Numerical results demonstrate that this new approach is potentially useful.
Mathematical subject classification: 65K05, 90C30, 90C55.
Key words: equality constrained optimization, trust-region, SQP, global convergence.
1 Introduction
We consider nonlinear equality constrained optimization problems of the form

minimize f(x) subject to c(x) = 0,    (1.1)

where we assume that f : ℝ^{n} → ℝ and c : ℝ^{n} → ℝ^{m} with m < n are twice differentiable functions.
A new method for finding first-order critical points of problem (1.1) is proposed in this paper. It belongs to the class of two-phase trust-region methods, e.g., Byrd, Schnabel and Shultz [3], Dennis, El-Alem and Maciel [6], Gomes, Maciel and Martínez [13], Gould and Toint [15], Lalee, Nocedal and Plantenga [17], Omojokun [21], and Powell and Yuan [23]. Since it deals with two steps, our method can also be classified among the inexact restoration methods proposed by Martínez, e.g., [1, 2, 8, 18, 19, 20].
The way we compute trial steps is similar to Gould and Toint's approach [15], which uses different trust regions. Each step is decomposed into a normal step and a tangential step. The normal step is computed by solving a vertical subproblem which minimizes the Gauss-Newton approximation of the infeasibility measure within a normal trust region. The tangential step is computed by solving a horizontal subproblem which minimizes the quadratic model of the Lagrangian within a tangential trust region, on the premise of controlling the linearized infeasibility measure. Similarly, in Martínez's inexact restoration methods, a more feasible intermediate point is computed in the feasibility phase, and then a trial point is computed on the tangent set passing through the intermediate point to improve the optimality measure.
In most common constrained optimization methods, penalty functions are used to decide whether to accept trial steps. Nevertheless, there are several difficulties associated with penalty functions, in particular the choice of penalty parameters. Too small a parameter can result in an infeasible point being obtained, or even an unbounded increase in the penalty. On the other hand, too large a parameter can weaken the effect of the objective function, resulting for example in slow convergence when the iterates follow the boundary of the feasible region. To avoid using a penalty function, Fletcher and Leyffer [10] proposed filter techniques that allow a step to be accepted if it sufficiently reduces either the objective function or the constraint violation. For more theoretical and algorithmic details on filter methods, see, e.g., [4, 9, 11, 14, 24, 25, 26, 27, 28].
The main feature of our method is a new step acceptance mechanism that avoids using a penalty function or a filter, and yet promotes global convergence. In this sense, our method shares some similarities with Bielschowsky and Gomes' dynamic control of infeasibility (DCI) method [1] and Gould and Toint's trust funnel method [15]. These methods adopt the idea of progressively reducing the infeasibility measure. Of course, the new step acceptance mechanism in this paper is quite different from both the trust funnel of [15] and the trust cylinder used in DCI.
The paper is organized as follows. In Section 2, we describe the main details of the new algorithm. Assumptions and the global convergence analysis are presented in Section 3. Section 4 is devoted to numerical results. Conclusions are drawn in Section 5.
2 The algorithm
2.1 Step computation. At the beginning of this section we define the infeasibility measure

h(x) = ‖c(x)‖,    (2.1)

where ‖·‖ denotes the Euclidean norm.
Each SQP step is composed of a normal step and a tangential step, for which different trust regions are used in the spirit of [15]. The normal step aims to reduce the infeasibility, and the tangential step, which lies approximately on the plane tangent to the constraints, aims to reduce the objective function as much as possible.
The normal step n_{k} is computed by solving the trustregion linear leastsquares problem, i.e.,
Here c_{k} = c(x_{k}) and A_{k} = A(x_{k}) is the Jacobian of c(x) at x_{k}. We do not require an exact Gauss-Newton step for (2.2), but a Cauchy condition
for some constant κ_{c} ∈ (0, 1). In addition, we assume the boundedness condition
for some constant κ_{n} > 0. Note that the above two requirements on n_{k} are very reasonable in both theory and practice. If x_{k} is a feasible point, we set n_{k} = 0.
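As an illustration, the normal step could be computed by a dogleg method applied to the Gauss-Newton least-squares model within the normal trust region. The sketch below is ours, not the paper's exact procedure: the function and variable names are hypothetical, and the method only requires the Cauchy-type decrease (2.3) and boundedness (2.4), both of which this construction delivers.

```python
import numpy as np

def normal_step(c, A, delta_n):
    """Dogleg step for min ||c + A n|| subject to ||n|| <= delta_n.

    c: residual c(x_k); A: Jacobian A(x_k); delta_n: normal radius.
    A hypothetical sketch satisfying a Cauchy-type decrease.
    """
    g = A.T @ c                       # gradient of 0.5*||c + A n||^2 at n = 0
    if np.linalg.norm(g) == 0:
        return np.zeros(A.shape[1])
    # Cauchy (steepest-descent) minimizer along -g
    Ag = A @ g
    t = (g @ g) / (Ag @ Ag)
    n_c = -t * g
    # Gauss-Newton (linear least-squares) step
    n_gn, *_ = np.linalg.lstsq(A, -c, rcond=None)
    if np.linalg.norm(n_gn) <= delta_n:
        return n_gn                   # full Gauss-Newton step fits
    if np.linalg.norm(n_c) >= delta_n:
        return -delta_n * g / np.linalg.norm(g)   # truncated Cauchy step
    # dogleg: move from n_c toward n_gn until the boundary is hit
    d = n_gn - n_c
    a, b, cc = d @ d, 2 * n_c @ d, n_c @ n_c - delta_n**2
    tau = (-b + np.sqrt(b * b - 4 * a * cc)) / (2 * a)
    return n_c + tau * d
```

The returned step never increases the linearized infeasibility, since each dogleg segment decreases ‖c + A n‖ monotonically.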
After obtaining n_{k}, we then aim to find a tangential step t_{k} such that
to improve the optimality on the premise of controlling the linearized infeasibility measure. Define a quadratic model function
where f_{k} = f(x_{k}), g_{k} = ∇ f(x_{k}), and B_{k} is an approximate Hessian of the Lagrangian
Then we have
and
where = g_{k }+ B_{k}n_{k}.
Let Z_{k} be an orthonormal basis matrix of the null space of A_{k} if rank(A_{k}) < n. We assume t_{k} satisfies the following Cauchy-like condition
for some constant κ_{f} ∈ (0, 1), where
Meanwhile, we also require that t_{k} not increase the linearized infeasibility measure too much, in the sense that
for some constant κ_{t} ∈ (0, 1). This condition on t_{k} can be satisfied if t_{k} is enforced to lie (approximately) in the null space of A_{k}. Achieving both (2.6) and (2.8) is quite reasonable since we can compute t_{k} as a sufficiently accurate approximate solution to
which is equivalent to
The dogleg method or the CG-Steihaug method can therefore be applied [17]. When rank(A_{k}) = n, we set χ_{k} = 0 and t_{k} = 0.
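A simple way to meet both requirements is a Cauchy-type step restricted to the null space of A_k, since any such step leaves the linearized infeasibility unchanged. The following sketch (our own construction, with hypothetical names; it assumes A has full row rank m < n) builds the orthonormal basis Z_k from the SVD and takes the exact minimizer of the quadratic model along the reduced steepest-descent direction, clipped to the tangential trust region:

```python
import numpy as np

def tangential_step(g_hat, B, A, delta_t):
    """Cauchy-type tangential step in the null space of A.

    Minimizes g_hat' t + 0.5 t' B t over t = Z u with ||t|| <= delta_t,
    where the columns of Z span null(A); since A t = 0, the linearized
    infeasibility is left unchanged.
    """
    m, n = A.shape
    _, _, Vt = np.linalg.svd(A)      # A assumed to have full row rank m
    Z = Vt[m:].T                     # n x (n - m), orthonormal columns
    r = Z.T @ g_hat                  # reduced gradient
    if np.linalg.norm(r) == 0:
        return np.zeros(n)
    d = -Z @ r                       # steepest descent within null(A)
    dBd = d @ (B @ d)
    alpha = delta_t / np.linalg.norm(d)          # step to the boundary
    if dBd > 0:
        alpha = min(alpha, (r @ r) / dBd)        # interior minimizer
    return alpha * d
```

Since the exact line minimizer is used, the model decrease along t satisfies a Cauchy-like bound of the kind required in (2.6).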
After obtaining t_{k}, we define the complete step
To obtain a relatively concise convergence analysis, we further impose that
for some sufficiently large constant κ_{s} > 1. In fact, (2.9) can be viewed as an assumption on the relative sizes of the normal and tangential steps. It should be made clear that κ_{c}, κ_{f}, κ_{n}, κ_{t}, κ_{s} are not chosen by users but are theoretical constants. It should also be emphasized that the double trust regions approach applied here differs from that of Gould and Toint [15]. They do not compute t_{k} if n_{k} lies outside a ball whose radius is κ_{B} ∈ (0, 1) times the minimum of the two trust-region radii, and require that the complete step s_{k} = n_{k} + t_{k} lie within a ball whose radius is that minimum. In our approach, the sizes of n_{k} and t_{k} are more independent of each other, but the stronger assumption (2.9) is made. For more details about the differences see [15].
Now we consider the estimation of the Lagrange multiplier λ_{k + 1}. We do not exactly compute
where the superscript ^{I} denotes the Moore-Penrose generalized inverse, but compute λ_{k+1} by approximately solving the least-squares problem
such that
for some tolerance τ_{0} > 0.
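Presumably the least-squares problem here is the usual multiplier estimate, minimizing the norm of the Lagrangian gradient over λ. A minimal sketch under that assumption (the function name is ours; in the experiments of Section 4 the authors use MATLAB's lsqlin instead):

```python
import numpy as np

def multiplier_estimate(g, A):
    """Least-squares Lagrange multiplier: argmin_l ||g + A.T @ l||.

    g: objective gradient at the iterate; A: constraint Jacobian.
    Equivalent to l = -(A.T)^I g with ^I the Moore-Penrose inverse;
    the paper only requires an approximate solution within tau_0.
    """
    lam, *_ = np.linalg.lstsq(A.T, -g, rcond=None)
    return lam
```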
2.2 Step acceptance. After computing the complete step s_{k}, we turn to the task of accepting or rejecting the trial point x_{k} + s_{k}.
We do not use a penalty function or a filter, but establish a new acceptance mechanism to promote global convergence. Let us now construct a dynamic finite set called the h-set,
where the l elements are sorted in decreasing order, i.e., H_{k,1} > H_{k,2} > ... > H_{k,l}. The h-set is initialized to H_{0} = {u, ..., u} for some sufficiently large constant
where x_{0} is the starting point. We then consider the following three conditions:
Here β, γ are two constants such that 0 < γ < β < 1. Note that (2.14) and (2.15) imply
After x_{k+1} = x_{k} + s_{k} has been accepted as the next iterate, we may update the h-set with a new entry
This means we replace H_{k,1} with the new entry and then re-sort the elements of the h-set in decreasing order. It is clear that the infeasibility measure of the iterates is controlled by the h-set, and that the length l of the h-set affects the strength of infeasibility control, although only H_{k,1} and H_{k,2} are involved in conditions (2.13)-(2.15).
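The h-set bookkeeping just described can be sketched as a small class. Only the constants u and l come from the paper; the class and its method names are hypothetical:

```python
class HSet:
    """Bounded history of infeasibility values, kept in decreasing order.

    Initialized with l copies of a large bound u; updated by replacing
    the largest entry H_{k,1} with a new infeasibility value and then
    re-sorting in decreasing order.
    """

    def __init__(self, u, l):
        self.H = [float(u)] * l

    def update(self, h_new):
        # replace H_{k,1} (the largest entry) and restore decreasing order
        self.H[0] = float(h_new)
        self.H.sort(reverse=True)

    def largest(self):        # H_{k,1}, used in (2.13)-(2.15)
        return self.H[0]

    def second_largest(self): # H_{k,2}, also used in the conditions
        return self.H[1]
```

Note that an update with a value smaller than H_{k,1} can only decrease the largest entry, which is the monotonicity exploited in Lemma 2.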
All iterations are classified into the following three types:
f-type: at least one of (2.13)-(2.15) is satisfied and (2.18) holds.
h-type: at least one of (2.13)-(2.15) is satisfied but (2.18) fails.
c-type: none of (2.13)-(2.15) is satisfied.
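This three-way classification can be transcribed directly; the Boolean arguments below simply stand for the truth of the cited conditions (a trivial helper of our own, not part of the paper's algorithm statement):

```python
def classify_iteration(some_of_213_215_holds, cond_218_holds):
    """Classify iteration k as in Section 2.2.

    some_of_213_215_holds: True if at least one of (2.13)-(2.15) holds.
    cond_218_holds: True if condition (2.18) holds.
    """
    if some_of_213_215_holds:
        return "f-type" if cond_218_holds else "h-type"
    return "c-type"
```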
If k is an f-type iteration, we accept s_{k} and set x_{k+1} = x_{k} + s_{k} if (2.19) holds. The trust-region radii are then updated according to (2.20) and (2.21).
If k is an h-type iteration, we always accept s_{k} and set x_{k+1} = x_{k} + s_{k}. The trust-region radii are then updated according to (2.22) and (2.23).
If k is a c-type iteration, we accept s_{k} and set x_{k+1} = x_{k} + s_{k} if (2.24) holds. The trust-region radii are then updated according to (2.25) and (2.26).
The parameters τ_{1}, τ_{2} in (2.20)-(2.23), (2.25) and (2.26) are positive constants such that τ_{2} < 1 < τ_{1}.
One can easily draw some conclusions from the update rules for the trust regions. Firstly, we observe that if k is successful, we have
Secondly, the tangential radius is left unchanged on unsuccessful c-type iterations whenever x_{k} is infeasible, and the normal radius is left unchanged on unsuccessful f-type iterations. Thirdly, the tangential radius is reduced on unsuccessful f-type iterations and may be reduced on unsuccessful c-type iterations, while the normal radius can only be reduced on unsuccessful c-type iterations. These properties are crucial for our algorithm.
2.3 The algorithm. Now a formal statement of the algorithm is presented as follows.
Algorithm 1. A trustregion SQP algorithm without a penalty function or a filter.
Step 0: Initialize k = 0, x_{0} ∈ ℝ^{n}, B_{0} ∈ S^{n×n}. Choose positive initial trust-region radii and parameters β, γ, θ, ζ, η, τ_{2} ∈ (0, 1), τ_{1}, u ∈ [1, +∞) and l ∈ {2, 3, ...}.
Step 1: If k = 0 or iteration k − 1 is successful, solve (2.10) for λ_{k+1}.
Step 2: Solve (2.2) for n_{k} that satisfies (2.3) and (2.4) if c_{k} ≠ 0. Set n_{k} = 0 if c_{k} = 0.
Step 3: Compute t_{k} that satisfies (2.5), (2.6), (2.8) and (2.9) if rank (A_{k}) < n.
Set t_{k} = 0 if rank (A_{k}) = n. Complete the trial step s_{k} = n_{k }+ t_{k}.
Step 4: (f-type iteration) One of (2.13)-(2.15) is satisfied and (2.18) holds.
4.1: Accept x_{k }+ s_{k} if (2.19) holds.
4.2: Update and according to (2.20) and (2.21).
Step 5: (h-type iteration) One of (2.13)-(2.15) is satisfied but (2.18) fails.
5.1: Accept x_{k }+ s_{k}.
5.2: Update and according to (2.22) and (2.23).
5.3: Update the h-set with the new entry.
Step 6: (c-type iteration) None of (2.13)-(2.15) is satisfied.
6.1: Accept x_{k }+ s_{k} if (2.24) holds.
6.2: Update and according to (2.25) and (2.26).
6.3: Update the h-set with the new entry if x_{k} + s_{k} is accepted.
Step 7: Accept the trial point. If x_{k }+ s_{k} has been accepted, set x_{k}_{ + 1} = x_{k }+ s_{k}, else set x_{k}_{ + 1} = x_{k}.
Step 8: Update the Hessian. If x_{k }+ s_{k} has been accepted, choose a symmetric matrix B_{k}_{ + 1}.
Step 9: Go to the next iteration. Increment k by one and go to Step 1.
Remarks. i) Conditions (2.3)-(2.6), (2.8) and (2.9) are basic requirements for the step computations. We assume they are satisfied at all iterations. ii) h-type iterations must be successful according to the mechanism of the algorithm. iii) The mechanism of the algorithm implies that the h-set H_{k} is updated only on h-type and successful c-type iterations. iv) Compared with the trust cylinder of DCI [1] and the trust funnel of [15], our h-set mechanism may be more flexible for controlling the infeasibility measure.
3 Global convergence
Before starting our global convergence analysis, we make some assumptions as follows.
Assumptions A
A1. Both f and c are twice differentiable.
A2. There exists a constant κ_{B} > 1 such that, ∀ ξ ∈ ∪_{k > 0}[x_{k}, x_{k }+ s_{k}], ∀ k, and ∀ i ∈ {1,..., m},
A3. f is bounded below in the level set,
A4. There exist two constants κ_{h}, κ_{σ} > 0 such that
where σ_{min}(A) represents the smallest singular value of A.
In what follows we denote some useful index sets: S, the set of successful iterations, and F, H and C, the sets of f-type, h-type and c-type iterations, respectively.
The first two lemmas reveal some useful properties of the h-set. These properties play an important role in the following convergence analysis, particularly in driving the infeasibility measure to zero.
Lemma 1. If k ∈ S and x_{k} is a feasible point which is not a first-order critical point, then k must be an f-type iteration and therefore the h-set is left unchanged at iteration k. Furthermore, each component of the h-set is strictly positive.
Proof. Since x_{k} is feasible, h(x_{k}) = 0 and therefore k cannot be a successful c-type iteration according to (2.24). Since x_{k} is a feasible point which is not a first-order critical point, it follows that n_{k} = 0 and (2.18) holds. Thus k must be an f-type iteration. Then, according to the mechanism of the algorithm, each component of H_{k} must be strictly positive.
Lemma 2. For all k, we have
and H_{k,} _{1} is monotonically decreasing in k.
Proof. Without loss of generality, we can assume that all iterations are successful. We first prove the following inequality
by induction. According to (2.12), we immediately have that (3.5) is true for k = 0. For k ≥ 1, we consider the following three cases.
The first case is that k − 1 ∈ F. Then one of (2.13)-(2.15) holds and therefore, according to the induction hypothesis h(x_{k−1}) < H_{k−1,1}, we have from (2.13)-(2.15) that
Since the h-set is not updated on an f-type iteration, we have H_{k,1} = H_{k−1,1}. Thus (3.5) follows.
The second case is that k − 1 ∈ H. Lemma 1 implies that x_{k−1} is an infeasible point. Then one of (2.14) and (2.15) holds and H_{k−1} is updated with the new entry. It follows from condition (2.14) or (2.15) that
Therefore the update rules of the h-set, together with (2.17), imply that (3.5) holds.
The third case is that k − 1 ∈ C. Then, according to (2.17) and (2.24), we obtain a bound on h(x_{k}). Since H_{k−1} is updated with the new entry, that entry is smaller than H_{k,1}. Hence we obtain (3.5) from the above two inequalities.
Since max(h(x_{k+1}), h(x_{k})) < H_{k,1}, the new entry is smaller than H_{k,1} by (2.17). Thus the monotonic decrease of H_{k,1} follows. Finally, (3.4) follows immediately from (3.5) and the monotonic decrease of H_{k,1}. □
We now verify that our algorithm satisfies a Cauchy-like condition on the predicted reduction in the infeasibility measure.
Lemma 3. For all k, we have that
Proof. It follows from (2.3) and (2.8) that
The following lemma is a direct result of (3.1).
Lemma 4. For all k, we have that
Proof. The proof is identical to that of the first part of Lemma 3.1 of [15]. 
The following lemma is a direct result of Taylor's theorem.
Lemma 5. For all k, we have that
and
where κ_{C} > 0 is a constant.
Proof. The proof is similar to that of Lemma 3.4 of [15]. 
The following lemma is very important, as it is for most trust-region methods.
Lemma 6. Suppose that k ∈ F and that
Then > η. Similarly, suppose that k ∈ C, c_{k} ≠ 0, and
Then > η.
Proof. The proof of both statements is similar to that of Theorem 6.4.2 of [5]. In fact, using (2.6), (2.18) and (3.1), we have
Then it follows from (2.9) and (3.8) that if (3.10) holds then
Hence, the first conclusion follows. Similarly, we use (2.9), (3.6), (3.7) and (3.9) to obtain the second conclusion. 
We now verify below that our algorithm can eventually take a successful iteration at any point which is not an infeasible stationary point of h(x). We first recall the definition of an infeasible stationary point of h(x).
Definition 1. We call x̄ an infeasible stationary point of h(x) if x̄ satisfies
The algorithm will fail to progress towards the feasible region if it is started from an infeasible stationary point, since no suitable normal step can be found in this situation. If such a point is detected, restarting the whole algorithm from a different point might be the best strategy.
Lemma 7. Suppose that first-order critical points and infeasible stationary points never occur. Then we have that |S| = +∞.
Proof. Since x_{k} + s_{k} must be accepted if k is an h-type iteration, we only consider k ∈ F ∪ C. First consider the case that x_{k} is infeasible. Since the assumption that x_{k} is not an infeasible stationary point implies a positive stationarity measure, it follows from (3.6) and Lemma 6 that the acceptance ratio exceeds η for all sufficiently small radii. It also follows from Lemma 6 that the same holds whenever χ_{k} > 0. Note that an unsuccessful iteration leaves x_{k}, and hence χ_{k}, unchanged, and reduces the corresponding trust-region radius by the factor τ_{2}. Therefore a successful iteration must eventually occur at x_{k}.
Next we consider the case that x_{k} is feasible. Since x_{k} is not a first-order critical point, we have χ_{k} > 0. Then it follows from Lemma 6 that the acceptance ratio exceeds η for all sufficiently small radii. Furthermore, (2.13) must be satisfied for sufficiently small radii: indeed, according to (3.9) and the fact that c_{k} + A_{k}s_{k} = 0 when c_{k} = 0, implied by (2.8), we have h(x_{k} + s_{k}) < κ_{C}‖s_{k}‖² < H_{k,1}. Hence a successful iteration must eventually occur at x_{k}. □
The following lemma is a crucial consequence of the mechanism of the algorithm.
Lemma 8. Suppose that, for some ε_{f} > 0,
Then
where
Moreover, the bound in (3.13) can be replaced by µ_{f} if x_{k} is infeasible. Similarly, suppose that, for some ε_{c} > 0,
Then,
Proof. The two statements are proved in the same manner, and result immediately from (2.27), Lemma 6, the proof of Lemma 7 and the update rules of the trust-region radii.
Now we consider the global convergence property of our algorithm in the case that successful c-type and h-type iterations are finitely many.
Lemma 9. Suppose that |S| = +∞ and that |(H ∪ C) ∩ S| < +∞. Then
and there exists an infinite subsequence K ⊂ S such that
Proof. Since all successful iterations must be f-type for sufficiently large k, we can deduce from (2.18) and (2.19) that f(x_{k}) is monotonically decreasing for all sufficiently large k. For the purpose of deriving a contradiction, we assume that (3.12) holds for an infinite subsequence of S. Then (2.6), (2.18), (3.1) and (3.13) together yield that, for all sufficiently large k in this subsequence,
Then we have from (2.19) and the above inequality that, for all sufficiently large k in this subsequence,
Since the assumption of the lemma implies that the h-set is updated only finitely many times, H_{k,1} is constant for all sufficiently large k. This, together with the monotonic decrease of f(x_{k}), implies that lim_{k→∞} f(x_{k}) = −∞. Since Lemma 2 implies that {x_{k}} is contained in the level set defined by (3.2), the unboundedness of f(x_{k}) from below contradicts assumption A3. Hence (3.12) is impossible and (3.16) follows.
Now consider (3.17). Assume that x_{k} is infeasible for all sufficiently large k; otherwise (3.17) follows immediately for some infinite subsequence of S. Then it follows from the monotonic decrease of f(x_{k}), (2.16), and Lemma 1 of [11] that lim_{k→∞} h(x_{k}) = 0, which also yields (3.17).
Next we verify that the constraint function must converge to zero in the case that h-type iterations are infinitely many.
Lemma 10. Suppose that |H| = +∞. Then lim_{k→∞} h(x_{k}) = 0.
Proof. Denote H = {k_{i}}. Recalling that at least one of (2.13)-(2.15) holds on h-type iterations and that x_{k_{i}} is infeasible by Lemma 1, we deduce from (2.14), (2.15), (2.17) and (3.4) that
It then follows from the mechanism of the h-set that
Hence, from the above inequality and the monotonic decrease of H_{k}_{,1}, we have that
Thus, from (3.4) and (3.18), the result follows. 
In what follows, to obtain global convergence, we will exclude a scenario for successful c-type iterations which is no more likely than being trapped at an infeasible stationary point. This scenario is
We now verify below that the constraint violation also converges to zero in the case that successful c-type iterations are infinitely many, provided that the above undesirable situation is avoided.
Lemma 11. Suppose that |C ∩ S| = +∞ and that (3.19) is avoided. Then lim_{k→∞} h(x_{k}) = 0.
Proof. We first prove that
Assume, for the purpose of deriving a contradiction, that (3.14) holds for some infinite subsequence indexed by K ⊂ C ∩ S. Recall that the h-set is updated on successful c-type iterations and denote K = {k_{i}}. It then follows from (2.17), (2.24), (3.4), (3.6), (3.7), (3.14) and (3.15) that
It then follows from the above inequality, the monotonic decrease of H_{k,1} and the mechanism of the h-set that
This, together with |K| = +∞, yields that H_{k,1} is unbounded below, which is impossible. Hence (3.20) holds.
Since (3.19) does not hold, it follows from (3.20) that
Thus, there exists an infinite subsequence indexed by K ⊆ C ∩ S such that
Since h(x_{k+1}) < h(x_{k}) for all k ∈ C ∩ S, the above limit implies
by (2.17). Then (3.18) follows from the facts that the h-set is updated on successful c-type iterations and that H_{k,1} is monotonically decreasing. Therefore the result follows immediately from (3.4) and (3.18).
In what follows, we give the global convergence property of our algorithm in the case that successful c-type and h-type iterations are infinitely many.
Lemma 12. Suppose that |(H ∪ C) ∩ S| = +∞ and that (3.19) is avoided. Then
and if β is sufficiently close to 1, we have
Proof. Limit (3.21) follows immediately from Lemmas 10 and 11. Then we consider (3.22). It follows from (2.4) and (3.21) that
Therefore, from (3.1) and (3.23), we have
where
Assume now again, for the purpose of deriving a contradiction, that (3.12) holds for all k sufficiently large. Then, if x_{k} is infeasible, we have from (2.6), (3.1), and Lemma 8 that
for all k sufficiently large. It then follows from (3.24) that, for all sufficiently large k,
It is easy to see that, for all sufficiently large k such that (3.25) holds, we have
and therefore (2.18) holds. If x_{k} is feasible, then n_{k} = 0 and therefore (2.18) must hold. Thus k cannot be an h-type iteration for all sufficiently large k.
Now consider any sufficiently large k ∈ (H ∪ C) ∩ S so that (3.25) holds and
Since (3.25) holds, we have k ∈ C ∩ S by the above analysis. Note that Lemma 1 implies that x_{k} is infeasible. It follows from (3.3), (3.6), (3.7) and (3.27) that
Reasoning as in the proof of (3.15) in Lemma 8, one can conclude that
This, together with lim_{k → ∞} c_{k} = 0, implies that
for all sufficiently large k. Then (3.28) and (3.29) yield
where
Since k ∈ C ∩ S, we have from (2.24) and (3.30) that
Then the above inequality implies that if β ∈ (0,1) is sufficiently close to 1, more specifically, if
it follows
Then x_{k+1} satisfies condition (2.14) and therefore k cannot be a c-type iteration, which contradicts k ∈ C. Hence (3.22) holds and the proof is completed.
We now present the main theorem on the basis of all the results obtained above.
Theorem 1. Suppose that first-order critical points and infeasible stationary points never occur and that (3.19) is avoided. Then there exists a subsequence indexed by K such that
and if β is sufficiently close to 1,
As a consequence, if β is sufficiently close to 1, any accumulation point of the sequence {x_{k}}_{k∈K} is a first-order critical point.
Proof. It is easy to see that if
then we have from (2.4), (2.7) and (3.1) that
This means that χ_{k} defined by (2.7) is actually an optimality measure for first-order critical points. The desired conclusions then follow immediately from Lemmas 7, 9 and 12.
4 Numerical results
In this section, we present some numerical results for small-scale examples to demonstrate that our new method may be promising. All the experiments were run in MATLAB R2009b. Details of the implementation are described as follows.
We initialized the approximate Hessian to the identity matrix, B_{0} = I, and updated B_{k} by Powell's damped BFGS formula [22]. The dogleg method was applied to compute both normal steps and tangential steps. Moreover, each tangential step was computed in the null space of the Jacobian. We computed the Lagrange multiplier by using MATLAB's lsqlin function. The parameters for Algorithm 1 were chosen as:
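Powell's damped BFGS update [22] can be sketched as follows. The damping threshold 0.2 is the value commonly used with Powell's formula; the paper does not state its own choice, so treat it as an assumption:

```python
import numpy as np

def damped_bfgs_update(B, s, y, damping=0.2):
    """Powell's damped BFGS update for the approximate Hessian B.

    s = x_{k+1} - x_k; y = difference of Lagrangian gradients.
    When s'y < damping * s'Bs, y is replaced by a convex combination
    r = theta*y + (1-theta)*B s chosen so that s'r = damping * s'Bs,
    which keeps the updated matrix positive definite.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < damping * sBs:
        theta = (1 - damping) * sBs / (sBs - sy)
        r = theta * y + (1 - theta) * Bs
    else:
        r = y
    # standard BFGS update with y replaced by the damped vector r
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```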
Now we compare the performance of Algorithm 1 with that of SNOPT Version 5.3 [12] based on the numbers of function and gradient evaluations required to achieve convergence. A standard stopping criterion is used for Algorithm 1, i.e.,
and
The test problems are all the equality constrained problems from [16]. We ran SNOPT with default options on the NEOS Server (http://www.neos-server.org/neos/solvers/nco:SNOPT/AMPL.html). The corresponding results are shown in Table 1, where Nit, Nf, and Ng represent the numbers of successful iterations, of function evaluations and of gradient evaluations, respectively. It can be observed from Table 1 that Algorithm 1 is generally superior to SNOPT on these problems.
We also plot the logarithmic performance profiles proposed by Dolan and Moré [7] in Figure 1. In the plots, the performance profile is defined by
where r_{p,s} is the ratio between the number Nf or Ng required to solve problem p by solver s and the lowest such number required by any solver on this problem. The ratio r_{p,s} is set to infinity whenever solver s fails to solve problem p. It can be observed from Figure 1 that Algorithm 1 outperforms SNOPT on these problems.
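The Dolan-Moré profile ρ_s(τ) described above can be computed as follows (a sketch with hypothetical names; T is a cost table of Nf or Ng values, with infinity marking a failure):

```python
import numpy as np

def performance_profile(T):
    """Performance ratios and profile of Dolan and More [7].

    T: (n_problems, n_solvers) array of costs; np.inf marks failure.
    Returns the ratio matrix R with R[p, s] = r_{p,s}, and a function
    rho(tau, s) = fraction of problems with r_{p,s} <= tau.
    """
    best = np.min(T, axis=1, keepdims=True)   # best cost per problem
    R = T / best                              # failures stay infinite
    def rho(tau, s):
        return np.mean(R[:, s] <= tau)
    return R, rho
```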
5 Conclusions
In this paper, a new double trust regions sequential quadratic programming method for solving equality constrained optimization problems is presented. Each trial step is computed by a double trust regions strategy in two phases, the first of which aims at feasibility and the second at optimality. Thus, the approach is similar to inexact restoration methods for nonlinear programming. The most important feature of this paper is the proof of global convergence without using a penalty function or a filter. We propose a new step acceptance technique, the h-set mechanism, which is quite different from Gould and Toint's trust funnel and Bielschowsky and Gomes' trust cylinder. Numerical results demonstrate the efficiency of this new approach.
References
[1] R.H. Bielschowsky and F.A.M. Gomes, Dynamic control of infeasibility in equality constrained optimization. SIAM J. Optim., 19 (2008), 1299-1325.
[2] E.G. Birgin and J.M. Martínez, Local convergence of an inexact restoration method and numerical experiments. J. Optim. Theory Appl., 127 (2005), 229-247.
[3] R.H. Byrd, R.B. Schnabel and G.A. Shultz, A trust region algorithm for nonlinearly constrained optimization. SIAM J. Numer. Anal., 24 (1987), 1152-1170.
[4] C.M. Chin and R. Fletcher, On the global convergence of an SLP-filter algorithm that takes EQP steps. Math. Program., 96 (2003), 161-177.
[5] A.R. Conn, N.I.M. Gould and P.L. Toint, Trust Region Methods, No. 01 in MPS-SIAM Series on Optimization. SIAM, Philadelphia, USA (2000).
[6] J.E. Dennis, M. El-Alem and M.C. Maciel, A global convergence theory for general trust-region-based algorithms for equality constrained optimization. SIAM J. Optim., 7 (1997), 177-207.
[7] E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles. Math. Program., 91 (2002), 201-213.
[8] A. Fischer and A. Friedlander, A new line search inexact restoration approach for nonlinear programming. Comput. Optim. Appl., 46 (2010), 333-346.
[9] R. Fletcher, N. Gould, S. Leyffer, P.L. Toint and A. Wächter, Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming. SIAM J. Optim., 13 (2002), 635-659.
[10] R. Fletcher and S. Leyffer, Nonlinear programming without a penalty function. Math. Program., 91 (2002), 239-269.
[11] R. Fletcher, S. Leyffer and P.L. Toint, On the global convergence of a filter-SQP algorithm. SIAM J. Optim., 13 (2002), 44-59.
[12] P.E. Gill, W. Murray and M.A. Saunders, SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM Rev., 47 (2005), 99-131.
[13] F.M. Gomes, M.C. Maciel and J.M. Martínez, Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters. Math. Program., 84 (1999), 161-200.
[14] C.C. Gonzaga, E.W. Karas and M. Vanti, A globally convergent filter method for nonlinear programming. SIAM J. Optim., 14 (2003), 646-669.
[15] N.I.M. Gould and P.L. Toint, Nonlinear programming without a penalty function or a filter. Math. Program., 122 (2010), 155-196.
[16] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes. Springer-Verlag (1981).
[17] M. Lalee, J. Nocedal and T.D. Plantenga, On the implementation of an algorithm for large-scale equality constrained optimization. SIAM J. Optim., 8 (1998), 682-706.
[18] J.M. Martínez, Two-phase model algorithm with global convergence for nonlinear programming. J. Optim. Theory Appl., 96 (1998), 397-436.
[19] J.M. Martínez, Inexact restoration method with Lagrangian tangent decrease and new merit function for nonlinear programming. J. Optim. Theory Appl., 111 (2001), 39-58.
[20] J.M. Martínez and E.A. Pilotta, Inexact restoration algorithms for constrained optimization. J. Optim. Theory Appl., 104 (2000), 135-163.
[21] E.O. Omojokun, Trust region algorithms for optimization with nonlinear equality and inequality constraints. Ph.D. thesis, Dept. of Computer Science, University of Colorado, Boulder (1989).
[22] M.J.D. Powell, A fast algorithm for nonlinearly constrained optimization calculations. In: Numerical Analysis Dundee 1977, G.A. Watson (ed.), Springer-Verlag, Berlin (1978), 144-157.
[23] M.J.D. Powell and Y. Yuan, A trust region algorithm for equality constrained optimization. Math. Program., 49 (1991), 189-211.
[24] A.A. Ribeiro, E.W. Karas and C.C. Gonzaga, Global convergence of filter methods for nonlinear programming. SIAM J. Optim., 19 (2008), 1231-1249.
[25] C. Shen, W. Xue and D. Pu, A filter SQP algorithm without a feasibility restoration phase. Comput. Appl. Math., 28 (2009), 167-194.
[26] S. Ulbrich, On the superlinear local convergence of a filter-SQP method. Math. Program., 100 (2004), 217-245.
[27] A. Wächter and L.T. Biegler, Line search filter methods for nonlinear programming: local convergence. SIAM J. Optim., 16 (2005), 32-48.
[28] A. Wächter and L.T. Biegler, Line search filter methods for nonlinear programming: motivation and global convergence. SIAM J. Optim., 16 (2005), 1-31.
Received: 16/IX/11.
Accepted: 22/IX/11.
#CAM414/11.
*This research is supported by the National Natural Science Foundation of China (No. 10771162).
^{**}Corresponding author.