
CONSTANT RANK CONSTRAINT QUALIFICATIONS: A GEOMETRIC INTRODUCTION

Abstract

Constraint qualifications (CQ) are assumptions on the algebraic description of the feasible set of an optimization problem that ensure that the KKT conditions hold at any local minimum. In this work we show that constraint qualifications based on the notion of constant rank can be understood as assumptions that ensure that the polar of the linear approximation of the tangent cone, generated by the active gradients, retains its geometric structure locally.

Keywords: constraint qualifications; error bound; algorithmic convergence


1 INTRODUCTION

Consider the general nonlinear optimization problem

    min  ƒ0(x)
    s.t. ƒi(x) = 0,  i = 1, ..., m,                (NOP)
         ƒj(x) ≤ 0,  j = m + 1, ..., m + p,

where the vector of decision variables x lies in ℝn and all the functions ƒk : ℝn → ℝ, k = 1, ..., m + p, are assumed to be continuously differentiable. Denote by I and J the index sets of the equality and inequality constraints respectively, i.e. I = {1, ..., m} and J = {m + 1, ..., m + p}. For a given feasible point x, we say that a constraint is active if the respective function ƒk is binding at x, that is, if ƒk(x) = 0. In particular, all the equality constraints are active at feasible points. As for the inequalities, we denote the index set of the active inequality constraints as A(x) = {j ∈ J | ƒj(x) = 0}.
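As an illustration, the active set A(x) can be computed numerically. The sketch below uses a small hypothetical instance of (NOP); the problem data is not from the text:

```python
import numpy as np

# Hypothetical toy instance of (NOP): one equality f1(x) = x1 + x2 - 1 = 0
# and two inequalities f2(x) = -x1 <= 0, f3(x) = -x2 <= 0,
# so I = {1} and J = {2, 3}.
def f(k, x):
    return {1: x[0] + x[1] - 1.0, 2: -x[0], 3: -x[1]}[k]

def active_set(x, J, tol=1e-9):
    """Indexes of the inequality constraints that are binding at x."""
    return {j for j in J if abs(f(j, x)) <= tol}

x = np.array([1.0, 0.0])        # feasible: equality holds and x2 = 0 binds
print(active_set(x, {2, 3}))    # -> {3}
```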

Optimality conditions play a central role in the study of optimization problems. They are properties that a point must satisfy whenever it is a reasonable candidate for a solution to (NOP).

Usually, one desires conditions that are easily verifiable and stringent enough to rule out most non-solutions.

Typical optimality conditions involve the gradients of the objective and constraint functions at a point of interest. Arguably the most used one is the Karush-Kuhn-Tucker (KKT) condition [29, 28, 31]. It states that if a feasible point x is a local minimizer of (NOP), then there are Lagrange multipliers λi ∈ ℝ, i ∈ I, and µj ∈ ℝ+, j ∈ A(x), such that

    ∇ƒ0(x) + Σi∈I λi∇ƒi(x) + Σj∈A(x) µj∇ƒj(x) = 0.
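Given candidate multipliers, the KKT stationarity equation is easy to verify numerically. A minimal sketch for a hypothetical equality-constrained problem (the data and the multiplier are chosen for illustration, not taken from the text):

```python
import numpy as np

# Hypothetical problem: min x1^2 + x2^2  s.t.  x1 + x2 - 2 = 0.
x = np.array([1.0, 1.0])              # candidate local minimizer
grad_f0 = 2 * x                       # objective gradient at x
grad_f1 = np.array([1.0, 1.0])        # equality-constraint gradient at x
lam = -2.0                            # candidate Lagrange multiplier

# KKT stationarity: grad_f0 + lam * grad_f1 must vanish at x.
residual = grad_f0 + lam * grad_f1
print(np.linalg.norm(residual))       # -> 0.0
```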

Unfortunately, it may fail at a local minimum unless extra assumptions hold. In order to ensure that it is necessary for optimality, the constraint set description given by the constraint functions has to conform to special conditions called constraint qualifications [34, 11, 10].

The aim of this text is to present a family of constraint qualifications that use the notion of constant rank and that have recently had a great impact in algorithmic convergence, second-order conditions, and parametric analysis, among other applications [26, 39, 5, 6, 3, 2, 37, 38, 8, 9]. We will do this by presenting the KKT conditions from a geometric point of view that naturally leads to constant rank assumptions.

2 GEOMETRIC OPTIMALITY CONDITIONS AND KKT

In this section we follow the ideas from [12, 10]. Let x be a feasible point, i.e. a point belonging to the feasible set F, that we want to study.

If the optimization problem were unconstrained, we know that the condition

    ∇ƒ0(x) = 0

is necessary for optimality. Otherwise, taking small steps in any direction d ∈ ℝn such that ∇ƒ0(x)'d < 0 would lead to better points. Since (NOP) is a constrained problem, the only directions that need to be considered are the ones that point inward the feasible set. That is, the directions in

    D(x) = {d ∈ ℝn | x + td ∈ F, for all t > 0 small enough}.

This set is a cone known as the cone of feasible directions. It gives rise to the following optimality condition:

    ∇ƒ0(x)'d ≥ 0, for all d ∈ D(x).
However, the cone of feasible directions can be small, and even empty, if the feasible set is not convex. For example, consider the feasible set F = {(x1, x2) | x2 = sin(x1)}, a sine curve in the plane, and x = (0, 0). In this case there is no straight direction that points inside the feasible set from x. But in this same example it is easy to find directions that point "almost" inside F. They are given by the tangent to the curve at x. Even though points along this tangent are not feasible, there are feasible points very close to it, so close that the directional derivative of the objective in the tangent direction can determine whether those nearby feasible points are better or worse than x, whenever this directional derivative is nonzero.
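The sine-curve example can be checked numerically: normalized secant directions from x = (0, 0) to nearby feasible points on the curve approach the tangent direction. A small sketch (the numerical setup is ours, not from the text):

```python
import numpy as np

# F = {(x1, x2) | x2 = sin(x1)} and x = (0, 0): secant directions to nearby
# feasible points converge to the tangent direction (1, 1)/sqrt(2).
tangent = np.array([1.0, 1.0]) / np.sqrt(2.0)

for t in [1e-1, 1e-3, 1e-5]:
    xk = np.array([t, np.sin(t)])          # feasible point approaching x
    d = xk / np.linalg.norm(xk)            # normalized secant direction
    print(t, np.linalg.norm(d - tangent))  # gap shrinks with t
```

By symmetry, −(1, 1)/√2 is obtained from points with t < 0, so the limits of secants recover the whole tangent line.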

Hence, it may be interesting to consider not only directions that point straight inside the feasible set from x, but also directions that are tangent to the feasible set. Remembering that tangents are defined as limits of secant directions, this leads us to the definition of the tangent cone to F at x:

    T = {d ∈ ℝn | there are {x^k} ⊂ F and t_k ↓ 0 such that x^k → x and (x^k − x)/t_k → d}.

Once again it is not hard to show that if x is a local minimum of (NOP) then

    ∇ƒ0(x)'d ≥ 0, for all d ∈ T.
This condition is known as the (first order) geometric optimality condition [12, 10]. The term geometric is used to emphasize that it does not directly depend on the algebraic description of the feasible set, given by the constraints. It actually depends only on the (local) shape of the set itself.

Finally, note that the geometric condition can be rewritten as

    −∇ƒ0(x)'d ≤ 0, for all d ∈ T.

This directly recalls the definition of the polar of a cone C, which is the cone Cº of all directions that make an obtuse angle with the directions in C,

    Cº = {y ∈ ℝn | y'd ≤ 0, for all d ∈ C}.

What is written above is that −∇ƒ0(x) belongs to the polar of the tangent cone. This can be stated compactly as

    −∇ƒ0(x) ∈ Tº,    (2)

where Tº denotes the polar of the cone tangent to the feasible set at x.

The main drawback of the geometric condition is that it is not easy to compute the tangent cone directly from its definition. It might then be interesting to search for some approximation of T that depends on the functional description of the constraints. Naturally, this can be achieved by linearizing the active constraints around x using their gradients. That is, we can expect that, at least in most situations, the linearized cone

    L = {d ∈ ℝn | ∇ƒi(x)'d = 0, i ∈ I, and ∇ƒj(x)'d ≤ 0, j ∈ A(x)}

should be a good approximation of the tangent cone T. But what is the exact relationship between T and L? Or, even better, in the light of the compact form of the geometric condition given in (2), what is the relation between Tº and the polar of the linearized cone, Lº?

A first result in this direction is that T ⊂ L and hence Tº ⊃ Lº. Therefore, if we replace Tº by Lº in (2) we arrive at

    −∇ƒ0(x) ∈ Lº,    (3)

a condition that may fail whenever Lº is strictly contained in Tº, even though the original geometric condition always holds at a local minimum.

On the other hand, using Farkas' lemma [12] it is easy to compute the polar of L. It has the form

    Lº = {Σi∈I λi∇ƒi(x) + Σj∈A(x) µj∇ƒj(x) | λi ∈ ℝ, i ∈ I, and µj ≥ 0, j ∈ A(x)}.

Hence, condition (3) is actually just the KKT condition, and the observation above explains why the KKT condition may fail at a local minimum. See Figure 1.
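Membership of −∇ƒ0(x) in Lº can be tested numerically by solving for multipliers with the appropriate sign constraints. A sketch using SciPy's bound-constrained least squares on hypothetical data (not the set of Figure 1):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical data: min f0 over F = {x : -x1 <= 0, -x2 <= 0}, checked at
# x = (0, 0), where both inequalities are active and there are no equalities.
grad_f0 = np.array([1.0, 0.0])
active_grads = [np.array([-1.0, 0.0]),     # gradient of -x1 <= 0
                np.array([0.0, -1.0])]     # gradient of -x2 <= 0

# -grad_f0 lies in the polar L° iff it is a combination of the active
# gradients with nonnegative coefficients on the inequalities; with no
# equalities, every coefficient is sign constrained.
G = np.column_stack(active_grads)
res = lsq_linear(G, -grad_f0, bounds=(0.0, np.inf))
print(res.x)                                   # multipliers µ
print(np.linalg.norm(G @ res.x + grad_f0))     # ~0: condition (3) holds
```

A residual near zero certifies KKT; a strictly positive residual means −∇ƒ0(x) lies outside Lº.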

Figure 1
A feasible set where the KKT condition can fail. The feasible set is the dark gray region to the left. The tangent cone at the origin is just the negative horizontal axis while the linearized cone is the complete horizontal line. The polar of the tangent cone is the semi-plane of positive horizontal values. The polar of the linearized cone is the vertical line, which is properly contained in Tº.

It becomes clear now that any condition that can assert that Lº = Tº is a tool to ensure that KKT is always necessary for optimality. These conditions only deal with the constraint set and are known as constraint qualifications (CQ). Actually, the condition Lº = Tº itself is known in the optimization literature as Guignard's constraint qualification [19, 18].

Definition 2.1. Guignard's constraint qualification holds at x if Lº = Tº.

In 1971, Gould and Tolle showed that if a constraint set is such that the KKT condition holds for all possible continuously differentiable objectives that have a local minimum at x, then Guignard's condition also holds. Hence, as expected, this is the weakest constraint qualification possible.

Throughout the optimization literature many other CQs were stated. Another famous example, that can be easily understood in the light of the discussion above, is Abadie's CQ [1].

Definition 2.2. Abadie's constraint qualification holds at x if L = T.

It is simple to see that if Abadie's CQ holds, the polars of the cones also coincide and hence Guignard's CQ holds. Figure 2 gives an example where Guignard's condition holds while Abadie's CQ fails. Figure 3 shows an example where Abadie's CQ holds.

Figure 2
An important feasible set is given by the complementarity conditions, which define the positive axes. In this case the feasible set and the tangent cone coincide, but the linearized cone is the whole positive orthant. Even though L properly contains T, both cones have the same polars.
Figure 3
Abadie's CQ holds, that is, L = T.

These two conditions are usually of more theoretical importance, as they still directly use full geometric information of the feasible set by depending explicitly on T, or its polar, in their statement. We will now start to discuss constraint qualifications that only require properties of the functional description of the feasible set. We are particularly interested in constraint qualifications that are associated with the convergence of optimization algorithms and that involve the notion of constant rank of a set of active gradients [11, 35, 26, 39, 37, 8, 9].

3 CONE DECOMPOSITION AND CLASSICAL CONSTRAINT QUALIFICATIONS

Let C ⊂ ℝn be a closed convex cone. One of its most important geometric properties is whether C contains a line (passing through the origin) or is composed only of half-lines. Since C is convex, this is equivalent to asking whether the origin is an extreme point of C, or whether the largest subspace contained in C is just the origin. If the answer to any of these questions is yes, then C is called a pointed cone. In this sense, studying the largest subspace contained in a cone is important to better understand its geometric properties. This subspace is called the lineality space of C and can be defined as

    L(C) = C ∩ (−C),

see [12].

Now, let us turn our attention to the polar of the linearized cone that appears in the KKT conditions, that is, to

    Lº = {Σi∈I λi∇ƒi(x) + Σj∈A(x) µj∇ƒj(x) | λi ∈ ℝ, i ∈ I, and µj ≥ 0, j ∈ A(x)}.
It is clear from the expression above that the lineality space of Lº contains at least the subspace spanned by the gradients of the equalities. Hence, Lº can only be pointed if there are no equality constraints. Actually, we may argue that L(Lº) is expected to be exactly this subspace, as equalities are associated with multipliers that are free of sign. For example, consider the interpretation of the KKT conditions as an equilibrium in which the active constraints react against minus the gradient of the objective function. Let k be the index of an active constraint such that both ±∇ƒk(x) ∈ Lº. Then this constraint can react against any movement with a component in the ∇ƒk(x) direction, acting like a track that only allows movements in its tangent space. It seems to act as an equality.

Now, using the polyhedral representation of Lº, it is easy to compute its lineality space. In fact, let us define

    I' = {k ∈ I ∪ A(x) | −∇ƒk(x) ∈ Lº},

then

    L(Lº) = {Σk∈I' λk∇ƒk(x) | λk ∈ ℝ, k ∈ I'}.

In particular, I' contains the indexes of all equalities. It may also contain the indexes of some inequality constraints, the ones with index in

    J_ = {j ∈ A(x) | −∇ƒj(x) ∈ Lº}.

By construction, I' = I ∪ J_.

Moreover, defining J+ = A(x) \ J_, the index set of all inequalities whose gradients do not belong to the lineality space, we can decompose Lº as a (direct) sum of the form

    Lº = L(Lº) + {Σj∈J+ µj∇ƒj(x) | µj ≥ 0, j ∈ J+}.

The first term is a subspace, the lineality space L(Lº), and the second is a pointed cone.

We can learn a lot about the basic shape of the cone Lº by analyzing the decomposition above and identifying the dimensions of the subspace and pointed cone components. For example, in ℝ2 the basic shapes of all possible cones are:

  • A single point. The subspace component and the pointed cone are only the origin, that is both have zero dimension.

  • A ray. The subspace component is of dimension 0 and the pointed cone of dimension 1.

  • A line. Now the subspace component is of dimension 1 and the pointed cone of dimension 0.

  • An angle. The subspace component has dimension 0 and the pointed cone, dimension 2.

  • A semi-plane. The subspace component has dimension 1 and the pointed cone has dimension 1 or 2 (pointing to the same side of the subspace).

  • The whole plane. The subspace component has dimension 2 and the pointed cone dimension 0.

One of the main points of this contribution is to show that many constraint qualifications can be better understood taking into consideration this cone decomposition. Actually, we will show that a whole family of CQs is directly or indirectly trying to ensure that locally the decomposition is stable, at least from the point of view of the dimensions involved. With this aim, let us start with the two most important constraint qualifications, linear independence (regularity) and Mangasarian-Fromovitz CQ.

Definition 3.1. The linear independence constraint qualification, or regularity condition, holds at x if the gradients of all active constraints at x are linearly independent.

In this case the index set of the subspace component I' is simply I, as j ∈ J_ would directly imply that ∇ƒj(x) can be written in terms of the other active gradients. Moreover, since linear independence is a property that is preserved locally, we can see that the local version of Lº,

    Lº(y) = {Σi∈I λi∇ƒi(y) + Σj∈A(x) µj∇ƒj(y) | λi ∈ ℝ, i ∈ I, and µj ≥ 0, j ∈ A(x)},

still has linearly independent generators for both the subspace component and the pointed cone. That is, it still has the same basic shape as Lº(x) = Lº.
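Since regularity is just a full-rank condition on the active gradients, it can be checked with a rank computation. A sketch on hypothetical gradients:

```python
import numpy as np

# LICQ at x: the active gradients, stacked as rows, must have full row rank.
def licq_holds(active_grads):
    A = np.vstack(active_grads)
    return np.linalg.matrix_rank(A) == A.shape[0]

# Hypothetical examples in R^2:
print(licq_holds([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))   # -> True
print(licq_holds([np.array([1.0, 0.0]), np.array([2.0, 0.0])]))   # -> False
```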

Next we introduce the Mangasarian-Fromovitz constraint qualification (MFCQ) [35] using the notion of positive linear independence.

Definition 3.2. Let V = (v1, v2, ..., vk) be a tuple of vectors in ℝn and let I, J be index sets such that I ∪ J = {1, 2, ..., k}. A positive combination of elements of V with respect to the (ordered) pair (I, J) is a vector of the form

    Σi∈I αivi + Σj∈J βjvj, with αi ∈ ℝ, i ∈ I, and βj ≥ 0, j ∈ J.

The tuple V is said to be positively linearly independent (PLI) if the only way to write the zero vector using positive combinations is using zero coefficients. Otherwise, the vectors are said to be positively linearly dependent (PLD).
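Positive linear dependence can be decided with one rank test plus one linear program: the tuple is PLD iff the free-sign vectors are linearly dependent, or the zero vector admits a combination with some strictly positive sign-constrained coefficient. A sketch (our own test, not from the text), using SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

def is_pld(eq_vecs, ineq_vecs, tol=1e-9):
    """PLD test for the pair (I, J): free coefficients on eq_vecs,
    nonnegative coefficients on ineq_vecs."""
    if eq_vecs and np.linalg.matrix_rank(np.column_stack(eq_vecs)) < len(eq_vecs):
        return True          # already linearly dependent with free signs
    if not ineq_vecs:
        return False
    G = np.column_stack(eq_vecs + ineq_vecs)
    # Maximize the sum of the nonnegative coefficients, capped at 1;
    # a positive optimum certifies a nonzero combination of zero.
    c = [0.0] * len(eq_vecs) + [-1.0] * len(ineq_vecs)
    bounds = [(None, None)] * len(eq_vecs) + [(0.0, 1.0)] * len(ineq_vecs)
    res = linprog(c, A_eq=G, b_eq=np.zeros(G.shape[0]), bounds=bounds)
    return res.fun < -tol

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_pld([e1], [e2]))        # -> False: positively linearly independent
print(is_pld([e1], [e2, -e2]))   # -> True: e2 + (-e2) = 0 with µ = (1, 1)
```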

Now we are ready to state MFCQ. Actually, we use an alternative definition that can be found in [41]. The original definition has a stronger geometric flavor, but is not well suited for our discussion.

Definition 3.3. The Mangasarian-Fromovitz constraint qualification (MFCQ) holds at x if the set of gradients of all active constraints at x is PLI with respect to (I, A(x)).

Once again, it is easy to see that MFCQ implies that the index set of the subspace component of Lº is simply the index set of all equalities. In fact, to say that an active inequality gradient is in L(Lº) is to say that the gradients are PLD. Hence, the Mangasarian-Fromovitz CQ is asking that the natural decomposition of the cone is given precisely by the division of the constraints among equalities and inequalities, that is, I' = I, J_ = ∅, and J+ = A(x). Moreover, it also requires that the subspace component has the same dimension around x, as it is spanned by linearly independent vectors, therefore preserving its geometric structure locally.

Finally, an interesting remark is that the gradients in the definition of the pointed cone, the gradients with index in J+, are always PLI. Hence, their dimension and basic direction will be preserved locally without further assumptions, as positive linear independence is preserved locally by continuous transformations, just like the usual linear independence.

4 CONSTANT RANK CONSTRAINT QUALIFICATIONS AND BEYOND

After the discussion above, it starts to become clear that a key property to ensure the validity of a constraint qualification is that the geometric structure of Lº should be preserved around x. Moreover, we have learned that this is somewhat summarized by the dimension of its subspace component L(Lº) or, in other words, the rank of {∇ƒi(x) | i ∈ I'}. This idea seems to be behind a family of constant rank constraint qualifications that appeared in the mid eighties with Janin [26], with many variations following [39, 37, 8, 9].

The first constant rank condition was introduced by Janin to study the directional derivative of the marginal value function of a perturbed optimization problem [26]. It was then used in the context of optimization problems with equilibrium constraints [32, 42], bilevel optimization [15], variational inequalities [27], and second order conditions [6, 4].

Definition 4.1. The constant rank constraint qualification (CRCQ) holds at x if, for all index subsets K ⊂ I ∪ A(x), the set of gradients {∇ƒk(y) | k ∈ K} has the same rank locally around x.

It is clear from the definition that CRCQ is a generalization of regularity, since linear independence of a group of vectors also implies linear independence of all its subgroups. Moreover, linear independence of continuous gradients is stable locally, i.e. if it holds at x it holds in a neighborhood of x. This last fact helps to explain why the definition above asks for a property to hold close to x and not only at x. In effect, any constraint qualification needs to imply Guignard's condition, which equates the pointwise object Lº with the geometric object Tº. Since Tº depends on all feasible points close to x, it is natural that any constraint qualification should require, implicitly or explicitly, properties locally around x.

CRCQ also implies that the index set J_ will remain constant in a neighborhood of x, as the signs of the positive combinations will be preserved by continuity. Hence, the subspace component of the local perturbation Lº(y) of the polar of the linearized cone will be spanned by gradients with the same indexes as in Lº, and its geometry will be preserved due to rank preservation.
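CRCQ can be probed numerically by sampling points near x and comparing, for every index subset, the rank of the corresponding gradients. A heuristic sketch on a hypothetical pair of equality constraints where CRCQ fails (sampling can only give evidence, not a proof):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constraints with x = (0, 0):
# f1(x) = x1 and f2(x) = x1 + x1*x2 (both equalities).
def grads(y):
    return {1: np.array([1.0, 0.0]),
            2: np.array([1.0 + y[1], y[0]])}

def crcq_sample_check(x, indexes, radius=1e-3, samples=50):
    """Heuristic CRCQ check: compare, for every subset K, the rank of the
    gradients at x with the rank at sampled points near x."""
    for r in range(1, len(indexes) + 1):
        for K in itertools.combinations(indexes, r):
            g0 = grads(x)
            rank0 = np.linalg.matrix_rank(np.vstack([g0[k] for k in K]))
            for _ in range(samples):
                y = x + radius * rng.standard_normal(2)
                gy = grads(y)
                if np.linalg.matrix_rank(np.vstack([gy[k] for k in K])) != rank0:
                    return False, K
    return True, None

print(crcq_sample_check(np.array([0.0, 0.0]), (1, 2)))
# The pair {1, 2} has rank 1 at x but rank 2 at generic nearby points.
```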

The constant rank condition was then generalized by taking into account that the multipliers associated with inequality constraints are always positive, similarly to the way MFCQ generalizes regularity. In 2000, Qi and Wei introduced the notion of constant positive linear dependence [39], which was proved to be a constraint qualification in [5].

Definition 4.2. The constant positive linear dependence (CPLD) constraint qualification holds at x if, for all subsets I0 ⊂ I and J0 ⊂ A(x), the positive linear dependence with respect to (I0, J0) of the gradients {∇ƒk(x) | k ∈ I0 ∪ J0} implies that they remain PLD in a neighborhood of x.

This condition proved to be very useful in the convergence analysis of optimization algorithms like SQP [39], exterior penalty and augmented Lagrangian methods [3, 2], and inexact restoration [17]. It was also generalized to problems with complementarity and vanishing constraints [23, 24].

Again, CPLD implies that the indexes in J_ are stable close to x and hence the subspace component of Lº(y) is generated by gradients with the same indexes as at x. The index sets in the cone decomposition are stable. Moreover, it can be shown that CPLD also implies that the rank of the subspace component is constant close to x. We will give more details on this fact below, when we define the relaxed version of CPLD.

The conditions above impose assumptions on all subsets of the active constraints. Follow-up conditions appeared requiring properties only of subsets of the active constraints that contain all equalities. Actually, for equality constrained problems it was already known that the constant rank of the full set of gradients was sufficient to qualify the constraints [4]. Later on, Minchenko and Stakhovski incorporated inequality constraints [37].

Definition 4.3. The relaxed constant rank constraint qualification (RCR) holds at x if, for all index sets of the form K = I ∪ J0, where J0 ⊂ A(x), the set of gradients {∇ƒk(y) | k ∈ K} has the same rank locally around x.

In [37], the authors showed that RCR implies the existence of a local error bound, that is, that it is possible to estimate the distance to the feasible set using a natural infeasibility measure, a property which is in turn a constraint qualification [43]. They also showed that the error bound property holds under CPLD. A follow-up work also used RCR to study the directional differentiability of the optimal value function of a perturbed problem and second order optimality conditions [38].

This condition was then generalized, using positive linear dependence in place of rank, by Andreani et al. in [8].

Definition 4.4. The relaxed constant positive linear dependence constraint qualification (RCPLD) holds at x if

  • the gradients of all equality constraints have the same rank in a neighborhood of x;

  • letting I0 ⊂ I be the index set of a basis of the space spanned by the gradients of all equalities at x, for all J0 ⊂ A(x) the positive linear dependence with respect to (I0, J0) of {∇ƒk(x) | k ∈ I0 ∪ J0} implies that it remains PLD in a neighborhood of x.

This constraint qualification has the same interesting properties as CPLD. It is stable locally, that is, if it holds at x, it holds at all feasible points close to x [8]. It implies the existence of an error bound [8]. It has been used in the context of problems with complementarity constraints and parametric analysis [21, 20, 22, 14], convergence analysis of algorithms [16, 25, 13], and vector optimization [33].

The last two conditions still require properties of gradients whose indexes belong to all possible subsets of active inequalities. We know from the discussion in the previous section that only the active constraints with index in I' are relevant in the definition of the subspace component of Lº; see [5] and the following discussion. It is then natural to define a related CQ:

Definition 4.5. The constant rank of the subspace component (CRSC) constraint qualification holds at x if there is a neighborhood of x where the rank of {∇ƒk(y) | k ∈ I'} remains constant.
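The rank-constancy requirement in this definition can be probed numerically. The sketch below is a toy illustration, not the paper's formalism: it uses the full set of active gradients of a hypothetical two-constraint example (CRSC itself only looks at the subset indexed by I'), and shows a point where the rank drops relative to its neighbors, so a constant rank condition fails there.

```python
import numpy as np

# Toy feasible set (hypothetical example): {x in R^2 : x2 - x1**2 <= 0, -x2 <= 0}.
# Both constraints are active at the origin; their gradients are
# (-2*x1, 1) and (0, -1).
def active_gradients(y):
    return np.array([[-2.0 * y[0], 1.0],
                     [0.0, -1.0]])

def ranks_near(x, radius=1e-3, samples=200, seed=0):
    """Collect the ranks of the gradient matrix at random points around x."""
    rng = np.random.default_rng(seed)
    return {int(np.linalg.matrix_rank(
                active_gradients(x + radius * rng.uniform(-1.0, 1.0, size=2))))
            for _ in range(samples)}

x = np.array([0.0, 0.0])
print(int(np.linalg.matrix_rank(active_gradients(x))))  # 1: gradients are parallel at x
print(ranks_near(x))                                    # {2}: full rank at generic nearby points
```

Since the rank at the origin (1) differs from the rank at generic nearby points (2), the constant rank condition fails at the origin for this gradient set.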

This condition was recently introduced by Andreani et al. in [9]. An equivalent condition, with an extra, superfluous, assumption, was developed independently by Minchenko and Stakhovski in [36]; see also [30].

The CRSC generalizes all the previous constraint qualifications while keeping their most important theoretical properties. In particular, it is implied by RCPLD [9], a result that is not obvious but can be shown using the theory from [40]; see [9].

Among the major properties of feasible sets that satisfy CRSC, we would like to emphasize the following.

  • Under the CRSC, the inequality constraints in J_ hold as equalities at all feasible points close to x. That is, even though these constraints appear only as inequalities in the description of the feasible set, the reverse inequality is implied locally by the other constraints. In this sense, the subspace component of Lº is generated only by the equalities that appear, explicitly or implicitly, in the description of the feasible set, just like in MFCQ.

  • The CRSC is stable, that is, if it holds at a feasible point x then it holds at all feasible points in a neighborhood of x. This was proved in [9] by showing that the set J_ is itself stable.

  • The CRSC implies the error bound property, like the previous CQs. The error bound is in turn a constraint qualification itself, as it implies Abadie's CQ [43].

  • The convergence theory of many optimization algorithms can be extended from requiring CPLD to requiring only CRSC. This is true at least for methods in the family of pure penalty algorithms, multiplier methods, sequential quadratic programming, and inexact restoration. This can be shown using approximate-KKT sequences [7] and a weaker constraint qualification called constant positive generators (CPG), which can also be used to generalize convergence results for interior point methods. See details in [9].
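The error bound property mentioned above can be made concrete on a toy example. The sketch below is a hypothetical illustration, not taken from the paper: for the half-space F = {x ∈ ℝ² : x1 + x2 ≤ 0}, the distance to F has the closed form max(0, x1 + x2)/√2, so the bound dist(x, F) ≤ τ · violation(x) holds with τ = 1/√2, and we verify it on random samples.

```python
import numpy as np

# Hypothetical toy error bound: F = {x in R^2 : x1 + x2 <= 0}.
# Projecting onto this half-space gives dist(x, F) = max(0, x1 + x2) / sqrt(2),
# so dist(x, F) <= tau * violation(x) with tau = 1/sqrt(2).
def violation(x):
    return max(0.0, x[0] + x[1])

def dist_to_F(x):
    return violation(x) / np.sqrt(2.0)  # closed-form distance to the half-space

tau = 1.0 / np.sqrt(2.0)
rng = np.random.default_rng(1)
ok = all(dist_to_F(x) <= tau * violation(x) + 1e-12
         for x in rng.uniform(-2.0, 2.0, size=(1000, 2)))
print(ok)  # True
```

For nonlinear constraints no such closed form exists, which is why it is valuable that a CQ such as CRSC guarantees the existence of some constant τ locally.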

5 CONCLUSION

We introduced a geometric view of constraint qualifications based on the constant rank condition and showed that their key property is that they preserve the geometric structure of the lineality space of the polar of the linearized cone Lº. The weakest condition of this family, the constant rank of the subspace component (CRSC), still preserves important properties like stability and the validity of an error bound, and is adequate to study the convergence of many optimization algorithms, like inexact restoration [13] and augmented Lagrangian methods [9].

ACKNOWLEDGMENTS

This work was supported by PRONEX-Optimization (PRONEX-CNPq/FAPERJ E-26/171.510/ 2006-APQ1), CEPID-CeMEAI (FAPESP 2013/07375-0), Fapesp (Grants 2013/05475-7 and 2012/20339-0), CNPq (Grants 305217/2006-2 and 305740/2010-5).

REFERENCES

  • [1]
    ABADIE J. 1967. On the Kuhn-Tucker Theorem. In: J. Abadie, editor, Nonlinear Programming, pages 21-36. John Wiley, New York.
  • [2]
    ANDREANI R, BIRGIN EG, MARTÍNEZ JM & SCHUVERDT ML. 2008. Augmented Lagrangian methods under the constant positive linear dependence constraint qualification. Mathematical Programming, 111(1-2):5-32.
  • [3]
ANDREANI R, BIRGIN EG, MARTÍNEZ JM & SCHUVERDT ML. 2008. On Augmented Lagrangian Methods with General Lower-Level Constraints. SIAM Journal on Optimization, 18(4):1286-1309.
  • [4]
    ANDREANI R, ECHAGÜE CE & SCHUVERDT ML. 2010. Constant-Rank Condition and Second-Order Constraint Qualification. Journal of Optimization Theory and Applications, 146(2):255-266.
  • [5]
    ANDREANI R, MARTÍNEZ JM & SCHUVERDT ML. 2005. On the Relation between Constant Positive Linear Dependence Condition and Quasinormality Constraint Qualification. Journal of Optimization Theory and Applications, 125(2):473-483.
  • [6]
    ANDREANI R, MARTÍNEZ JM & SCHUVERDT ML. 2007. On second-order optimality conditions for nonlinear programming. Optimization, 56(5-6):529-542.
  • [7]
    ANDREANI R, HAESER G & MARTÍNEZ JM. 2011. On sequential optimality conditions for smooth constrained optimization. Optimization, 60(5):627-641.
  • [8]
    ANDREANI R, HAESER G, SCHUVERDT ML & SILVA PJS. 2012. A relaxed constant positive linear dependence constraint qualification and applications. Mathematical Programming, 135(1-2):255-273.
  • [9]
    ANDREANI R, HAESER G, SCHUVERDT ML & SILVA PJS. 2012. Two New Weak Constraint Qualifications and Applications. SIAM Journal on Optimization, 22(3):1109-1135.
  • [10]
    BAZARAA MS, SHERALI HD & SHETTY CM. 2006. Nonlinear programming: theory and algorithms. John Wiley and Sons.
  • [11]
    BERTSEKAS DP. 1999. Nonlinear programming. Athena Scientific, Belmont Mass., 2nd ed. edition.
  • [12]
    BERTSEKAS DP. 2003. Convex analysis and optimization. Athena Scientific.
  • [13]
    BUENO LF, FRIEDLANDER A, MARTÍNEZ JM & SOBRAL FNC. 2013. Inexact Restoration Method for Derivative-Free Optimization with Smooth Constraints. SIAM Journal on Optimization, 23(2):1189-1213.
  • [14]
    CHIEU NH & LEE GM. 2013. A Relaxed Constant Positive Linear Dependence Constraint Qualification for Mathematical Programs with Equilibrium Constraints. Journal of Optimization Theory and Applications, 158(1):11-32.
  • [15]
    DEMPE S. 1992. A necessary and a sufficient optimality condition for bilevel programming problems. Optimization, 25(4):341-354.
  • [16]
    ECKSTEIN J & SILVA PJS. 2013. A practical relative error criterion for augmented Lagrangians. Mathematical Programming, 141: 319-348.
  • [17]
FISCHER A & FRIEDLANDER A. 2010. A new line search inexact restoration approach for nonlinear programming. Computational Optimization and Applications, 46(2):333-346.
  • [18]
    GOULD FJ & TOLLE JW. 1971. A Necessary and Sufficient Qualification for Constrained Optimization. SIAM Journal on Applied Mathematics, 20(2):164-172.
  • [19]
    GUIGNARD M. 1969. Generalized Kuhn-Tucker Conditions for Mathematical Programming Problems in a Banach Space. SIAM Journal on Control, 7(2):232-241.
  • [20]
GUO L & LIN G-H. 2013. Notes on Some Constraint Qualifications for Mathematical Programs with Equilibrium Constraints. Journal of Optimization Theory and Applications, 156(3):600-616.
  • [21]
    GUO L, LIN G-H & YE JJ. 2012. Stability Analysis for Parametric Mathematical Programs with Geometric Constraints and Its Applications. SIAM Journal on Optimization, 22(3): 1151-1176.
  • [22]
    GUO L, LIN G-H & YE JJ. 2013. Second-Order Optimality Conditions for Mathematical Programs with Equilibrium Constraints. Journal of Optimization Theory and Applications, 158(1):33-64.
  • [23]
    HOHEISEL T, KANZOW C & SCHWARTZ A. 2012. Convergence of a local regularization approach for mathematical programmes with complementarity or vanishing constraints. Optimization Methods and Software, 27(3):483-512.
  • [24]
    HOHEISEL T, KANZOW C & SCHWARTZ A. 2013. Theoretical and numerical comparison of relaxation methods for mathematical programs with complementarity constraints. Mathematical Programming, 137(1-2):257-288.
  • [25]
    IZMAILOV AF, SOLODOV MV & USKOV EI. 2012. Global Convergence of Augmented Lagrangian Methods Applied to Optimization Problems with Degenerate Constraints, Including Problems with Complementarity Constraints. SIAM Journal on Optimization, 22(4):1579-1606.
  • [26]
JANIN R. 1984. Directional derivative of the marginal function in nonlinear programming. In: Sensitivity, Stability and Parametric Analysis (Mathematical Programming Studies), pages 110-126. Springer Berlin Heidelberg.
  • [27]
    JIANG H & RALPH D. 2000. Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints. SIAM Journal on Optimization, 10(3):779-808.
  • [28]
    JOHN F. 1948. Extremum Problems with Inequalities as Subsidiary Conditions. In: K. O. Friedrichs, O. E. Neugebauer, and J. J. Stoker, editors, Studies and Essays: Courant Anniversary Volume, pages 187-204. Wiley-Interscience, New York.
  • [29]
    KARUSH W. 1939. Minima of functions of several variables with inequalities as side constraints. PhD thesis, University of Chicago.
  • [30]
    KRUGER AY, MINCHENKO L & OUTRATA JV. 2013. On relaxing the Mangasarian-Fromovitz constraint qualification. Positivity, 18(1):171-189.
  • [31]
KUHN HW & TUCKER AW. 1951. Nonlinear Programming. In: J. Neyman, editor, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA. University of California Press.
  • [32]
    LUO Z-Q, PANG JS & RALPH D. 1996. Mathematical Programs with Equilibrium Constraints. Cambridge University Press.
  • [33]
    MACIEL MC, SANTOS SA & SOTTOSANTO GN. 2012. On Second-Order Optimality Conditions for Vector Optimization: Addendum. Journal of Optimization Theory and Applications.
  • [34]
    MANGASARIAN OL. 1994. Nonlinear programming. SIAM.
  • [35]
MANGASARIAN OL & FROMOVITZ S. 1967. The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1):37-47.
  • [36]
    MINCHENKO L & STAKHOVSKI S. 2010. About generalizing the Mangasarian-Fromovitz regularity condition (in Russian). Doklady BGUIR, 8:104-109.
  • [37]
    MINCHENKO L & STAKHOVSKI S. 2011. On relaxed constant rank regularity condition in mathematical programming. Optimization, 60(4):429-440.
  • [38]
    MINCHENKO L & STAKHOVSKI S. 2011. Parametric Nonlinear Programming Problems under the Relaxed Constant Rank Condition. SIAM Journal on Optimization, 21(1):314-332.
  • [39]
    QI L & WEI Z. 2000. On the Constant Positive Linear Dependence Condition and Its Application to SQP Methods. SIAM Journal on Optimization, 10(4):963-981.
  • [40]
    REAY JR. 1966. Unique Minimal Representations with Positive Bases. The American Mathematical Monthly, 73(3):253-261.
  • [41]
    ROCKAFELLAR RT. 1993. Lagrange Multipliers and Optimality. SIAM Review, 35(2):183-238.
  • [42]
    SCHOLTES S & STÖHR M. 1999. Exact Penalization of Mathematical Programs with Equilibrium Constraints. SIAM Journal on Control and Optimization, 37(2):617-652.
  • [43]
    SOLODOV MV. 2010. Constraint qualifications. In: COCHRAN JJ, COX LA, KESKINOCAK P, KHAROUFEH JP & SMITH JC. (Editors). Wiley Encyclopedia of Operations Research and Management Science. John Wiley & Sons, Inc., Hoboken, NJ, USA.

Publication Dates

  • Publication in this collection
    Sep-Dec 2014

History

  • Received
    19 Oct 2013
  • Accepted
    28 Dec 2013
Sociedade Brasileira de Pesquisa Operacional Rua Mayrink Veiga, 32 - sala 601 - Centro, 20090-050 Rio de Janeiro RJ - Brasil, Tel.: +55 21 2263-0499, Fax: +55 21 2263-0501 - Rio de Janeiro - RJ - Brazil
E-mail: sobrapo@sobrapo.org.br