

Saddle point and second order optimality in nondifferentiable nonlinear abstract multiobjective optimization

L.B. dos SantosI,1; M.A. Rojas-MedarII,2; V.A. de OliveiraIII,3

IDepartamento de Matemática, Universidade Federal do Paraná, Setor de Ciências Exatas, Centro Politécnico, CP 019081, Jd. das Américas, CEP 81531-990 Curitiba, PR, Brasil

IIGrupo de Matemática Aplicada, Departamento de Ciencias Básicas, Facultad de Ciencias, Universidad del Bío-Bío, Campus Fernando May, Casilla 447, Chillán, Chile

IIIDepto. de Matemática Aplicada, Instituto de Biociências, Letras e Ciências Exatas, UNESP - Univ. Estadual Paulista, Campus de São José do Rio Preto, R. Cristóvão Colombo, 2265, Jd. Nazareth,15054-000 São José do Rio Preto, SP, Brasil

ABSTRACT

This article deals with a vector optimization problem with cone constraints in a Banach space setting. By making use of a real-valued Lagrangian and the concept of generalized subconvex-like functions, weakly efficient solutions are characterized through saddle point type conditions. The results, jointly with the notion of generalized Hessian (introduced in [Cominetti, R., Correa, R.: A generalized second-order derivative in nonsmooth optimization. SIAM J. Control Optim. 28, 789-809 (1990)]), are applied to achieve second order necessary and sufficient optimality conditions (without requiring twice differentiability of the objective and constraint functions) for the particular case in which the functionals involved are defined on a general Banach space and take values in finite dimensional spaces.

Keywords. Multiobjective optimization, abstract optimization problems, nonlinear programming, saddle point conditions, generalized second order conditions, generalized convexity.


1. Introduction and Formulation of the Problem

In many situations, practical or theoretical, finite dimensional spaces are not the most suitable ones in which to model or study a given problem. Likewise, scalar objective programming is often not the most appropriate scenario. Therefore, the development of optimality conditions for vectorial abstract programming problems is of great importance.

The role of nonsmooth analysis is of notable importance in optimization theory. This is due at least to the following reasons. First, in practice, differentiability assumptions may be too restrictive. Second, as pointed out by Cominetti and Correa [8], many usual techniques commonly employed in optimization generate "nonsmoothness" even when the problems are differentiable. This arises, for example, in duality theory, sensitivity and stability analysis, decomposition techniques, and penalty methods, among others.

With respect to necessary optimality conditions without differentiability, one can resort to those of Fritz John or Kuhn-Tucker type, which are obtained under various generalized derivative concepts, or to saddle point conditions, where no differentiability assumption is required. Still on necessary conditions, we should mention the second order ones, which can also be obtained through generalized (second order) derivatives. As for sufficient conditions, there are those based on convexity or generalized convexity, and second order type conditions. Both can be addressed in nondifferentiable frameworks.

In recent years, an extended differentiability theory has been developed through various concepts of generalized differentiability, and first order optimality conditions for scalar optimization problems have been established (see Clarke [7] and Rockafellar [18]).

Also, a significant theory of generalized second order differentiability has been developed (see, for example, Aubin and Ekeland [1], Chaney [5] and Hiriart-Urruty [12]). In particular, Cominetti and Correa introduced in [8] the notions of second order directional derivative and generalized Hessian and gave some second order optimality conditions for an abstract scalar minimization problem.

We now cite a few works concerning the aforementioned topics. Multiplier rules of Fritz John and Kuhn-Tucker types were studied, for example, in Bellaassali and Jourani [3], Da Cunha and Polak [10], Jahn [14] and Dos Santos et al. [19]. Saddle point conditions were investigated, for instance, in Bigi [4] and Chen et al. [6]. Bigi characterized saddle points assuming convex data, and Chen et al. used a distinct type of generalized convexity. Second order conditions were explored, more recently, in Gfrerer [11] and Taa [20]. In [11], the results were obtained by making use of Hadamard derivatives. In [20] abstract problems are considered, but under twice differentiability.

The reader who is interested in a more comprehensive bibliography review on these issues can consult the articles just quoted, which provide fine lists of references.

The aim of this paper is to contribute to the development of the optimality conditions theory of nondifferentiable nonlinear abstract multiobjective optimization. At first, we will consider a vector optimization problem which can be posed as

(P)  Minimize f(x) subject to -g(x) ∈ K, x ∈ S,

where f : S ⊆ E → F and g : S ⊆ E → G are given (not necessarily differentiable) functions, S is a nonempty subset of E and E, F and G are Banach spaces. The spaces F and G are ordered by closed convex cones Q ⊆ F and K ⊆ G. Also, we assume that Q has nonempty interior.

We denote by Ω the feasible set of (P), that is, Ω := {x ∈ S : -g(x) ∈ K}.

We will consider the so-called weakly efficient solutions of (P). We recall that x̄ ∈ Ω is said to be a weakly efficient solution (respectively, a local weakly efficient solution) of (P) if there does not exist x feasible for (P) such that f(x) - f(x̄) ∈ -int Q (respectively, if there exists a neighborhood V of x̄ such that there does not exist x ∈ V ∩ Ω with f(x) - f(x̄) ∈ -int Q).

Following Osuna-Gómez et al. [17], we give a definition of saddle points which has the feature of being based on solving scalar problems and not vector ones, as usual. We then show that every point that satisfies our definition is a weakly efficient solution. The converse is also obtained, but under a generalized convexity assumption and when (P) satisfies a constraint qualification. Subsequently, we will apply these results to the (finite-dimensional) particular case when Q = ℝ^p_+ and K = ℝ^m_+:

(PF)  Minimize (f1(x), ..., fp(x)) subject to gi(x) ≤ 0, i ∈ I, x ∈ S,

where fj, gi : S ⊆ X → ℝ, j ∈ J := {1,..., p}, i ∈ I := {1,..., m}, are continuous and Gâteaux differentiable functions and S is a nonempty open subset of a Banach space X. We obtain second order conditions for the nonsmooth finite-dimensional problem (PF) in terms of second order directional derivatives (see Cominetti and Correa [8]).

This work is divided into three more sections. In the next section, we recall some results on generalized subconvex-like functions and an alternative theorem; we also recall some properties of the generalized directional derivative and Hessian introduced by Cominetti and Correa in [8]. In Section 3 we establish saddle point type theorems for the vectorial optimization problem (P) and, finally, in Section 4, we use these results to obtain second order conditions for problem (PF).

2. Preliminaries

This section is devoted to presenting some definitions and auxiliary results which will be useful in the next sections. First, a definition and a technical lemma concerning dual cones are stated. Then come two subsections: the first is about the notion of generalized subconvex-like functions and a Gordan type theorem of the alternative for this sort of functions; in the second, the concept of second order generalized derivatives is defined and some related results are given.

Let X be a locally convex topological vector space. X* denotes the (topological) dual of X and 〈·, ·〉 the canonical bilinear (duality) form between X and X*.

Definition 2.1. The dual cone (or polar cone) of a set Q ⊆ X is defined as the convex cone

Q* := {f ∈ X* : 〈f, x〉 ≥ 0 ∀ x ∈ Q}.

Lemma 2.1. Let F be a Banach space and Q ⊆ F a closed convex cone. Then

〈f, x〉 > 0 ∀ f ∈ Q* \ {0}, ∀ x ∈ int Q.

The proof can be found in Craven [9].

2.1. Generalized subconvex-like functions and a Gordan type alternative theorem

Convexity and generalized convexity are very important concepts in optimization theory. One reason for this importance is that for these classes of functions it is possible to establish alternative theorems and, consequently, to obtain necessary and/or sufficient optimality conditions. The generalized convexity notion that we will use here is that of generalized subconvex-like functions, which was introduced by Xinmin Yang in [21], where the author showed that these functions satisfy a Gordan type alternative theorem. He also showed that the class of generalized subconvex-like functions comprises the subconvex-like, convex-like and convex classes of functions. Thus the generalized subconvex-like functions form a large class which satisfies a Gordan type alternative theorem.

Definition 2.2. Let E and F be normed spaces, S0 a nonempty subset of E, Q ⊆ F a convex set with nonempty interior, and f : S0 ⊆ E → F. We say that f is a generalized subconvex-like function if there exists u ∈ int Q such that for each α ∈ (0, 1), arbitrary x1, x2 ∈ S0 and ε > 0, there exist x3 ∈ S0 and ρ > 0 such that

εu + αf (x1) + (1 - α) f (x2) - ρf (x3) ∈ Q.
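A quick numeric sanity check may help fix ideas: every (componentwise) convex vector function satisfies the condition of Definition 2.2 with ρ = 1 and x3 the convex combination of x1 and x2. The sketch below, with an illustrative map f and Q = ℝ^2_+ (our choices, not from the paper), verifies the defining inequality on random samples.

```python
import math
import random

# Componentwise-convex toy map f : R -> R^2; any such map is generalized
# subconvex-like, with rho = 1 and x3 = alpha*x1 + (1 - alpha)*x2.
def f(x):
    return (x * x, math.exp(x))

def in_cone(y, tol=1e-12):
    # membership in Q = R^2_+ (componentwise nonnegativity, up to rounding)
    return all(c >= -tol for c in y)

u = (1.0, 1.0)          # u in int Q
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-3, 3), random.uniform(-3, 3)
    alpha, eps = random.random(), random.random()
    x3 = alpha * x1 + (1 - alpha) * x2
    y = tuple(eps * u[i] + alpha * f(x1)[i] + (1 - alpha) * f(x2)[i] - f(x3)[i]
              for i in range(2))
    assert in_cone(y)   # eps*u + alpha*f(x1) + (1-alpha)*f(x2) - f(x3) in Q
```

Convexity of each component makes the term αf(x1) + (1 - α)f(x2) - f(x3) nonnegative, and adding εu only pushes it further into the cone.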

The class of the generalized subconvex-like functions satisfies the following alternative theorem (see [21], p. 128-130):

Theorem 2.1 (Generalized Alternative Theorem). Let E and F be two Banach spaces, Q ⊆ F a convex cone with nonempty interior and S ⊆ E nonempty. Assume that f : S → F is generalized subconvex-like. Then exactly one of the following statements is consistent:

a) There exists x ∈ S such that -f(x) ∈ int Q;

b) There exists s* ∈ Q* \ {0} such that 〈s*, f(x)〉 ≥ 0 ∀ x ∈ S.
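The dichotomy of Theorem 2.1 can be inspected by brute force when S is finite and F = ℝ^2 with Q = ℝ^2_+. The sketch below (toy data of our own, not from the paper) exhibits one map for which alternative (a) holds and one for which (b) holds, and checks that the other branch fails in each case.

```python
# Brute-force illustration of the Gordan-type alternative over a finite set S,
# with values in R^2 and Q = R^2_+.
S = [0.0, 0.5, 1.0]

def f_a(x):            # case (a): f_a(0.5) = (-0.25, -0.5) lies in -int Q
    return (x * x - 0.5, x - 1.0)

def f_b(x):            # case (b): the first component is always >= 0
    return (x * x, x - 1.0)

def holds_a(f):
    # exists x in S with -f(x) in int Q, i.e. f(x) componentwise negative
    return any(all(c < 0 for c in f(x)) for x in S)

def holds_b(f, grid=101):
    # search s = (t, 1 - t) on a grid of the simplex for <s, f(x)> >= 0 on S
    for k in range(grid):
        t = k / (grid - 1)
        if all(t * f(x)[0] + (1 - t) * f(x)[1] >= 0 for x in S):
            return True
    return False

assert holds_a(f_a) and not holds_b(f_a)   # exactly one branch holds
assert not holds_a(f_b) and holds_b(f_b)
```

For f_b, the certificate found is s* = (1, 0), which pairs nonnegatively with every value f_b(x).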

2.2. Second order generalized derivative and the generalized Hessian

In this subsection we recall some results concerning the generalized second order derivative and the generalized Hessian. We start by giving their definitions. Then, certain important classes of functions are introduced. The subsection is closed with two propositions, in which topological properties of the generalized derivative and a second order Taylor type expansion are exhibited. For more details, see Cominetti and Correa [8].

In the following, X is a locally convex topological vector space.

Definition 2.3. The generalized second order directional derivative of a function f : X → ℝ at x ∈ X in the directions (u, v) ∈ X × X is defined by

f○○(x; u, v) := limsup_{y → x, s ↓ 0, t ↓ 0} (1/(st)) [f(y + su + tv) - f(y + su) - f(y + tv) + f(y)],

and the generalized Hessian of f at x is the multifunction δ2f(x) : X ⇉ X* given by

δ2f(x)(u) := {x* ∈ X* : 〈x*, v〉 ≤ f○○(x; u, v) ∀ v ∈ X}.
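For twice continuously differentiable f, the limsup above reduces to the classical bilinear form 〈Hess f(x)u, v〉; for a quadratic f the second difference quotient is even exact for every step size. A minimal numeric sketch (the example function is ours, not from the paper):

```python
# Second difference quotient underlying f°°(x; u, v), evaluated for a smooth
# function: f(x1, x2) = x1**2 + 3*x1*x2, whose Hessian is [[2, 3], [3, 0]].
def f(p):
    x1, x2 = p
    return x1 ** 2 + 3 * x1 * x2

def second_diff(f, x, u, v, s, t):
    # (1/(s t)) [ f(x + s u + t v) - f(x + s u) - f(x + t v) + f(x) ]
    def step(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    xu = step(x, u, s)
    xv = step(x, v, t)
    xuv = step(xu, v, t)
    return (f(xuv) - f(xu) - f(xv) + f(x)) / (s * t)

x, u, v = (1.0, -2.0), (1.0, 0.0), (0.0, 1.0)
q = second_diff(f, x, u, v, 1e-3, 1e-3)
assert abs(q - 3.0) < 1e-6   # <H u, v> = H[0][1] = 3
```

For nonsmooth functions the quotient no longer stabilizes as y varies near x, which is exactly why the limsup over y → x and the set-valued Hessian are needed.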

In order to obtain continuity properties for generalized Hessian it is necessary to define the following classes of functions:

Definition 2.4. A function f : X → ℝ is twice C-differentiable at x if f○○(x; u, ·) is lower semicontinuous (l.s.c.), for each u ∈ X.

Definition 2.5. We say that f : X → ℝ is twice locally Lipschitz at x if for each v ∈ X there exist a neighborhood V of x and a neighborhood U of zero such that the set f○○(V; U, v) is bounded in ℝ.

If the boundedness of f○○(V; U, v) is uniform in v, that is, if there exist neighborhoods V of x and U of zero such that f○○(V; U, U) is bounded in ℝ, then we say that f is twice uniformly locally Lipschitzian at x.

In [8] it is proved that if f : X → ℝ is twice locally Lipschitz at x, then f is twice C-differentiable at every point of V, where V is chosen as in the last definition.

Definition 2.6. Let Y, Z be topological vector spaces and A : Y ⇉ Z a multifunction. We say that A is locally compact at y ∈ Y if there exists a neighborhood V of y such that A(V) := ∪_{y' ∈ V} A(y') is relatively compact.

We say that A is closed at y if for each net yα → y and zα → z with zα ∈ A(yα) ∀ α, we have z ∈ A(y).

If A is locally compact and closed at y, we say that A is upper semicontinuous (u.s.c.) at y.

An important class of twice uniformly locally Lipschitzian functions is defined below.

Definition 2.7. We say that f : X → ℝ is a C1,1-function if it is Gâteaux-differentiable and the (Gâteaux) derivative ∇f is locally Lipschitz.

Proposition 2.1 (Cominetti and Correa [8]). Assume that f : X → ℝ is twice locally Lipschitz at x. Then, for each u ∈ X, the following assertions are satisfied:

(a) δ2 f(·) (u) is locally w*-compact and δ2 f(x) (u) is w*-compact;

(b) f○○ (·; ·, v) is u.s.c., for each v X and δ2 f (·)(·) is closed;

(c) If f is C1,1, then δ2 f(·)(·) is locally w*-compact.

The following proposition is a version of the second order Taylor expansion for twice C-differentiable functions.

Proposition 2.2 (Cominetti and Correa [8]). Let f : X → ℝ be a continuously Gâteaux-differentiable function which is twice C-differentiable on the closed segment [x, y] ⊂ X. Then there exists ξ in the open segment ]x, y[ such that

f(y) ∈ f(x) + 〈∇f(x), y - x〉 + (1/2) cl 〈δ2f(ξ)(y - x), y - x〉,

and the closure is unnecessary when f is C1,1.
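A classic C1,1 function that is not twice differentiable is f(x) = max(x, 0)^2: its derivative 2·max(x, 0) is Lipschitz but has a kink at 0. The sketch below (our illustrative computation, not from the paper) evaluates the second difference quotient of Definition 2.3 near 0 and shows it takes the value 2 on one side and 0 on the other, which is why the generalized Hessian at 0 must be set-valued.

```python
# f(x) = max(x, 0)**2 is C^{1,1} but not twice differentiable at 0.
def f(x):
    return max(x, 0.0) ** 2

def quotient(y, s, t):
    # second difference quotient from the definition of f°°
    return (f(y + s + t) - f(y + s) - f(y + t) + f(y)) / (s * t)

s = t = 1e-6
assert abs(quotient(1e-3, s, t) - 2.0) < 1e-6   # base point y > 0: f'' = 2
assert abs(quotient(-1e-3, s, t)) < 1e-6        # base point y < 0: f'' = 0
```

Since the limsup over y → 0 picks up both regimes, the quotient's limiting values fill the gap between 0 and 2, and the generalized Hessian at 0 collects every slope compatible with them.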

3. Saddle Point Type Conditions

Following the guidelines of Kuhn-Tucker and Fritz John, we characterize weakly efficient solutions of problem (P) in terms of saddle point type conditions. Here we give a saddle point definition for the vector optimization problem which is based on solving scalar problems instead of vector ones, unlike most existing definitions in the literature. In other words, such a definition has the property of not involving the resolution of a multiobjective problem. Furthermore, our definition generalizes the one introduced in Osuna-Gómez et al. [17] for the corresponding case, when (P) is stated in a finite-dimensional setting.

Definition 3.1. We say that (x̄, λ̄, μ̄) ∈ E × F* × G* is a multiple saddle point for the problem (P) if

λ̄f(x̄) + vg(x̄) ≤ λ̄f(x̄) + μ̄g(x̄) ≤ λ̄f(x) + μ̄g(x)

∀ v ∈ K*, ∀ x ∈ S, and if (λ̄, μ̄) ∈ Q* × K*, λ̄ ≠ 0.

As in the classical case, if (x̄, λ̄, μ̄) is a multiple saddle point, then x̄ is a weakly efficient solution of (P). Before we prove this assertion we need the following auxiliary result:

Lemma 3.1. Let E, F be two Banach spaces. Assume that F is ordered by the convex cone Q ⊆ F with nonempty interior and let f : Γ ⊆ E → F. Consider the vectorial optimization problem:

(PΓ)  Minimize f(x) subject to x ∈ Γ.

If there exists λ̄ ∈ Q* \ {0} such that x̄ ∈ Γ is a solution of

Minimize λ̄f(x) subject to x ∈ Γ,

then x̄ is a weakly efficient solution of (PΓ).

Proof. Suppose that x̄ ∈ Γ is not a weakly efficient solution of (PΓ). In this case, there exists x ∈ Γ such that f(x) <_Q f(x̄), that is, f(x̄) - f(x) ∈ int Q. Since λ̄ ∈ Q* \ {0}, by Lemma 2.1 we have λ̄(f(x̄) - f(x)) > 0 and, by the linearity of λ̄, we have

λ̄f(x̄) > λ̄f(x),

which is a contradiction.
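The mechanism of Lemma 3.1 is easy to observe numerically: minimizing the scalarization 〈λ, f(x)〉 over the feasible set yields a point no feasible competitor can strictly improve in every objective. A minimal sketch on a toy discretized bi-objective problem (the data f, the grid Γ and the weights are ours, not from the paper):

```python
# Weighted scalarization produces a weakly efficient point (Lemma 3.1).
Gamma = [i / 100 for i in range(101)]            # feasible set: grid on [0, 1]

def f(x):
    return (x * x, (x - 1) ** 2)                 # two conflicting objectives

lam = (1.0, 2.0)                                 # lam in Q* \ {0}, Q = R^2_+
xbar = min(Gamma, key=lambda x: lam[0] * f(x)[0] + lam[1] * f(x)[1])

# weak efficiency: no feasible x improves *every* objective strictly
dominated = any(all(f(x)[i] < f(xbar)[i] for i in range(2)) for x in Gamma)
assert not dominated
```

Changing the weight vector λ traces out different weakly efficient points; here the minimizer lands near x = 2/3, where the weighted marginal decreases of the two objectives balance.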

Theorem 3.1. If (x̄, λ̄, μ̄) is a multiple saddle point, then x̄ is a weakly efficient solution of (P).

Proof. By Lemma 3.1, it is enough to show that x̄ is a solution of the problem

Minimize λ̄f(x) subject to x ∈ Ω,

where Ω := {x ∈ S : -g(x) ∈ K}. Since (x̄, λ̄, μ̄) is a multiple saddle point, we have

λ̄f(x̄) + vg(x̄) ≤ λ̄f(x̄) + μ̄g(x̄) ≤ λ̄f(x) + μ̄g(x) ∀ v ∈ K*, ∀ x ∈ S.

In particular, setting v = 0, we obtain μ̄g(x̄) ≥ 0 and, therefore, μ̄g(x̄) = 0. Then, since μ̄g(x) ≤ 0 for every x ∈ Ω,

λ̄f(x̄) ≤ λ̄f(x) ∀ x ∈ Ω

and, thus, x̄ is a solution of the scalar problem above.

The converse of the above result is also true under certain generalized convexity hypotheses (in our case, generalized subconvex-likeness) and regularity on the constraints of the problem. We use a Slater type constraint qualification.

Definition 3.2 (Slater type Constraint Qualification). We say that the constraint qualification (CQ) is satisfied if there exists x̂ ∈ S such that g(x̂) ∈ -int K.

Theorem 3.2. Assume that in the problem (P) the function (f - f(x̄), g) is generalized subconvex-like (with respect to the cone Q × K ⊆ F × G). If x̄ is a weakly efficient solution of (P) and (CQ) is verified, then there exist λ̄ ∈ Q* \ {0} and μ̄ ∈ K* such that (x̄, λ̄, μ̄) is a multiple saddle point.

Proof. The proof follows from Theorem 2.1. In fact, if x̄ is a weakly efficient solution of (P), there does not exist a solution x ∈ E for the following system:

(f(x) - f(x̄), g(x)) ∈ -int(Q × K), x ∈ S.    (3.1)

Since the function (f - f(x̄), g) is generalized subconvex-like, by Theorem 2.1, there exists (λ̄, μ̄) ∈ Q* × K* \ {(0, 0)} such that

λ̄ ○ (f(x) - f(x̄)) + μ̄g(x) ≥ 0 ∀ x ∈ S,

that is,

λ̄f(x̄) ≤ λ̄f(x) + μ̄g(x) ∀ x ∈ S.    (3.2)

If x = x̄ in (3.2), then μ̄g(x̄) ≥ 0 and, therefore, μ̄g(x̄) = 0, since g(x̄) ∈ -K and μ̄ ∈ K*. Furthermore, vg(x̄) ≤ 0 ∀ v ∈ K*. Thus, we have

λ̄f(x̄) + vg(x̄) ≤ λ̄f(x̄) + μ̄g(x̄) ≤ λ̄f(x) + μ̄g(x)    (3.3)

for all v ∈ K* and x ∈ S.

Now, we show that λ̄ ≠ 0. From condition (CQ), there exists x̂ ∈ S with g(x̂) ∈ -int K. Taking x = x̂ in (3.2), we obtain

λ̄f(x̄) ≤ λ̄f(x̂) + μ̄g(x̂).

By contradiction, assume that λ̄ = 0. Then μ̄ ≠ 0 and, as g(x̂) ∈ -int K, it follows from Lemma 2.1 that μ̄g(x̂) < 0. On the other hand, with λ̄ = 0 in the inequality above we get the opposite inequality, so that we have a contradiction. Therefore, λ̄ ≠ 0 and hence (x̄, λ̄, μ̄) is a multiple saddle point.

4. Second Order Conditions

Here two relevant results regarding second order optimality conditions for (PF) are presented. Necessity and sufficiency are tackled as applications of the notions and results studied so far. It is worth mentioning that such conditions are established without demanding twice differentiability (in the classical sense).

We consider the following vectorial optimization problem:

(PF)  Minimize (f1(x), ..., fp(x)) subject to gi(x) ≤ 0, i = 1,..., m, x ∈ S,

where X is a Banach space, fj, gi : S ⊆ X → ℝ, j = 1,..., p, i = 1,..., m, are continuous and Gâteaux differentiable functions and S is a nonempty open subset of X.

We prove second order conditions for weak efficiency in (PF) through the notions of directional derivative, generalized Hessian (Cominetti and Correa [8]) and the saddle point conditions studied in the last section.

We consider the Lagrangian function

L(x, λ, μ) := Σ_{j=1}^{p} λj fj(x) + Σ_{i=1}^{m} μi gi(x),

where λ ∈ ℝ^p, μ ∈ ℝ^m and x ∈ X.

It is well known (see Da Cunha and Polak [10] or Jahn [14]) that if x̄ is a weakly efficient solution of (PF) and a regularity condition holds, then there exist λ̄ ∈ ℝ^p_+ \ {0} and μ̄ ∈ ℝ^m_+ such that

∇L(x̄, λ̄, μ̄) = 0,    (4.1)

μ̄i gi(x̄) = 0, i = 1,..., m.    (4.2)

In this case, (λ̄, μ̄) is said to be a pair of multipliers. Here we give a proof of this result assuming that the Slater type constraint qualification and the generalized subconvex-likeness of the functionals are satisfied.
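For a concrete feel for the multiplier conditions (4.1)-(4.2), the sketch below checks them numerically on a toy instance of (PF) (problem data and multipliers are our illustrative choices, not from the paper).

```python
# Toy instance of (PF): minimize (x**2, (x-1)**2) subject to g(x) = x - 2 <= 0.
# At xbar = 0 the multipliers lam = (1, 0), mu = 0 satisfy (4.1)-(4.2).
def grad_L(x, lam, mu):
    # gradient in x of L(x, lam, mu) = lam1*x**2 + lam2*(x-1)**2 + mu*(x-2)
    return 2 * lam[0] * x + 2 * lam[1] * (x - 1) + mu

xbar, lam, mu = 0.0, (1.0, 0.0), 0.0
g = xbar - 2
assert grad_L(xbar, lam, mu) == 0.0     # stationarity (4.1)
assert mu * g == 0.0                    # complementary slackness (4.2)
assert g <= 0 and mu >= 0               # feasibility and sign conditions
```

Note that the inactive constraint (g(x̄) = -2 < 0) forces μ̄ = 0 through (4.2), and the first objective alone (λ̄ = (1, 0)) makes x̄ = 0 stationary.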

Theorem 4.1. Let x̄ be a weakly efficient solution of (PF). We assume that the function (f - f(x̄), g) is generalized subconvex-like (with respect to the cone ℝ^p_+ × ℝ^m_+) and that (PF) satisfies the Slater constraint qualification. Then, there exists a pair of multipliers (λ̄, μ̄) satisfying (4.1)-(4.2).

Proof. From Theorem 3.2, there exists (λ̄, μ̄) ∈ ℝ^p_+ × ℝ^m_+, λ̄ ≠ 0, such that (x̄, λ̄, μ̄) is a multiple saddle point. In particular, the following inequality holds true:

L(x̄, λ̄, μ̄) ≤ L(x, λ̄, μ̄) ∀ x ∈ S.

Since fj, gi are Gâteaux-differentiable, the inequality above implies that ∇L(x̄, λ̄, μ̄) = 0, which is (4.1). Furthermore, as we know from the proof of Theorem 3.2, when (x̄, λ̄, μ̄) is a multiple saddle point we have μ̄g(x̄) = 0; since μ̄i ≥ 0 and gi(x̄) ≤ 0 for each i, this yields (4.2).

Theorem 4.2 (Second order necessary conditions). Assume that x̄ is a weakly efficient solution of (PF). If (f - f(x̄), g) is a generalized subconvex-like function and (PF) satisfies the Slater constraint qualification, then

(i) there exists (λ̄, μ̄) such that (x̄, λ̄, μ̄) is a multiple saddle point;

(ii) the following inequality is verified:

L○○(x̄; u, u) ≥ 0 ∀ u ∈ S,

where L○○ denotes the generalized second order directional derivative of L(·, λ̄, μ̄).

Proof. (i) It follows directly from Theorem 3.2.

(ii) It is obvious that

L(x̄, λ̄, μ̄) ≤ L(x, λ̄, μ̄) ∀ x ∈ S.

Let u ∈ S. By the saddle point conditions,

L(x̄ + tu, λ̄, μ̄) - L(x̄, λ̄, μ̄) ≥ 0

for all t ∈ (0, t0], where t0 > 0 is such that x̄ + t0u ∈ S. Hence

(1/t)[L(x̄ + tu, λ̄, μ̄) - L(x̄, λ̄, μ̄)] ≥ 0.

Fixed t ∈ (0, t0], by the Mean Value Theorem, there exists τ ∈ (0, t) such that

L(x̄ + tu, λ̄, μ̄) - L(x̄, λ̄, μ̄) = t 〈∇L(x̄ + τu, λ̄, μ̄), u〉

and, consequently, since ∇L(x̄, λ̄, μ̄) = 0 by (4.1),

(1/τ) 〈∇L(x̄ + τu, λ̄, μ̄) - ∇L(x̄, λ̄, μ̄), u〉 ≥ 0.

Letting t ↓ 0 and using the definition of the generalized second order derivative, we conclude that L○○(x̄; u, u) ≥ 0 for all u ∈ S.

Definition 4.1. Let Q ⊆ X. The Bouligand tangent cone to Q at x ∈ Q is defined as

B(Q, x) := {u ∈ X : ∃ tα ↓ 0, ∃ uα → u such that x + tα uα ∈ Q}.
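The sequential definition of B(Q, x) can be exercised numerically. For the set Ω = {(x1, x2) : x2 ≥ x1^2} the tangent cone at the origin is the closed upper half-plane {u : u2 ≥ 0}; the sketch below (our illustrative computation, not from the paper) exhibits the witnessing sequences for the boundary direction u = (1, 0) and checks that a direction outside the cone immediately leaves Ω.

```python
# Bouligand tangent cone of Omega = {(x1, x2) : x2 >= x1**2} at the origin.
def in_omega(p, tol=1e-12):
    return p[1] >= p[0] ** 2 - tol

# u = (1, 0): take t_k = 1/k and u_k = (1, 1/k) -> (1, 0); then
# t_k * u_k = (1/k, 1/k**2) lies exactly on the parabola, hence in Omega.
for k in range(1, 1000):
    t, uk = 1.0 / k, (1.0, 1.0 / k)
    assert in_omega((t * uk[0], t * uk[1]))

# u = (0, -1): the point 0 + t*u falls strictly below the parabola.
t, u = 1e-3, (0.0, -1.0)
assert not in_omega((t * u[0], t * u[1]))
```

The example shows why the perturbed directions uα matter: the straight ray along (1, 0) itself stays on the boundary of Ω only because the approximating directions bend slightly into the set.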

Theorem 4.3 (Second order sufficient conditions). Assume that in problem (PF) the functions fj, gi : S ⊆ X → ℝ are C1,1-functions, for all j = 1,..., p and i = 1,..., m. Then, a sufficient condition for a feasible point x̄ to be a local weakly efficient solution of (PF) is that there exists a pair of multipliers (λ̄, μ̄) ∈ ℝ^p_+ \ {0} × ℝ^m_+ such that

-L○○(x̄; u, -u) > 0 ∀ u ∈ B(Ω, x̄) \ {0}.

Proof. Suppose that x̄ is not a local weakly efficient solution of (PF). Then, there exists a sequence (xk) ⊂ Ω \ {x̄} such that xk → x̄ and

f(xk) - f(x̄) ∈ -int ℝ^p_+ ∀ k.    (4.3)

Setting

uk := (xk - x̄)/||xk - x̄||,

we see (taking subsequences if necessary) that uk → u, for some u ∈ X. Then clearly u ∈ B(Ω, x̄) \ {0}. Put

ak := (2/||xk - x̄||^2)[L(xk, λ̄, μ̄) - L(x̄, λ̄, μ̄)].

From (4.3), the feasibility of xk and the fact that (λ̄, μ̄) is a pair of multipliers, it follows that

L(xk, λ̄, μ̄) < L(x̄, λ̄, μ̄) ∀ k,

so that ak < 0 ∀ k. By Proposition 2.2, there exists ξk ∈ ]xk, x̄[ such that ak ∈ 〈δ2L(ξk)(uk), uk〉 ∀ k. Hence, there exists x*k ∈ δ2L(ξk)(uk) such that ak = 〈x*k, uk〉 < 0 ∀ k. Furthermore, ξk → x̄. By Proposition 2.1-(c) we can assume, without loss of generality, that x*k → x* ∈ δ2L(x̄)(u) in the w*-topology. Therefore we obtain

〈x*, u〉 ≤ 0.

In this way we have

-L○○(x̄; u, -u) ≤ 〈x*, u〉 ≤ 0

with u ∈ B(Ω, x̄) \ {0}, which contradicts the hypothesis.

We now present a very simple example illustrating Theorem 4.3.

Example 4.1. Consider the problem

minimize f(x) := (f1(x), f2(x)) = (|x|^{3/2}, |x - 1|^{3/2})

subject to x ∈ ℝ.

Observe that f1 and f2 are not twice differentiable functions (in the classical sense).

Let x̄ = 0 and x̃ = 1. Then it is easily verified that ∇L(x̄, λ̄) = 0 for λ̄ = (1, 0) and ∇L(x̃, λ̃) = 0 for λ̃ = (0, 1), so that λ̄ and λ̃ are multipliers for x̄ and x̃, respectively. We also have that -L○○(x; u, -u) > 0 for all u ∈ ℝ \ {0}, for (λ, x) = (λ̄, x̄) and (λ, x) = (λ̃, x̃). Thus x̄ and x̃ are weakly efficient solutions of this problem.
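The weak efficiency of the two candidate points can also be confirmed directly from the definition, without any derivatives: no point can strictly improve both objectives at once, since f1 ≥ 0 = f1(0) and f2 ≥ 0 = f2(1) everywhere. A short search over a sample grid (our own check, not part of the paper):

```python
# Example 4.1 revisited: f(x) = (|x|**1.5, |x - 1|**1.5); verify that the
# points 0 and 1 are not dominated in the weak (strict componentwise) sense.
def f(x):
    return (abs(x) ** 1.5, abs(x - 1) ** 1.5)

grid = [i / 1000 - 1 for i in range(3001)]       # sample of [-1, 2]
for candidate in (0.0, 1.0):
    dominated = any(all(f(x)[i] < f(candidate)[i] for i in range(2))
                    for x in grid)
    assert not dominated
```

Dominating 0 would require f1(x) < 0 and dominating 1 would require f2(x) < 0, both impossible, so the grid search necessarily comes up empty.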

We close this paper with a few words on possible applications of generalized second order optimality conditions.

In Huang and Yang [13] the authors present some nonlinear penalty methods for a constrained multiobjective optimization problem. The last result can be used, for example, in the study and development of this kind of method. It is well known that penalty functions may be nonsmooth. Besides, even when a smoothing approach is performed, the resulting function may not be twice differentiable.

The examination of the Hessian of the penalty function is important in choosing effective algorithms (see Nocedal and Wright [16] for the mono-objective case). The efficiency of penalty methods relies, among other factors, on the conditioning of the Hessian matrix.

In Bazaraa et al. [2] it can be seen that second order sufficient conditions are assumed in proving that the augmented Lagrangian penalty function can be classified as an exact penalty function (for scalar optimization). Thus, Theorem 4.3 can be employed in the development of such a method for multiobjective problems with C1,1 data.

Another application of sufficient second order optimality conditions is in sensitivity analysis. See Luenberger and Ye [15] for the mono-objective case.

These are topics for future work.

Received 02 August 2011; accepted 13 August 2012.

  • [1] J.P. Aubin, I. Ekeland, "Applied Nonlinear Analysis", John Wiley & Sons, New York, 1984.
  • [2] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, "Nonlinear Programming: Theory and Algorithms", Wiley-Interscience, New Jersey, 2006.
  • [3] S. Bellaassali, A. Jourani, Lagrange multipliers for multiobjective programs with a general preference, Set-Valued Anal, 16 (2008), 229-243.
  • [4] G. Bigi, Saddlepoint optimality criteria in vector optimization, in "Optimization in Economics, Finance and Industry" (A. Guerraggio et al., eds.), pp. 85-102, Datanova, Milan, 2002.
  • [5] R.W. Chaney, Second order directional derivatives for nonsmooth functions, J. Math. Anal. Appl., 128 (1987), 495-511.
  • [6] S.L. Chen, N.J. Huang, M.M. Wong, B-Semipreinvex functions and vector optimization problems in Banach spaces, Taiwanese J. Math., 11 (2007), 595-609.
  • [7] F.H. Clarke, "Optimization and Nonsmooth Analysis", John Wiley & Sons, New York, 1983.
  • [8] R. Cominetti, R. Correa, A generalized second-order derivative in nonsmooth optimization, SIAM J. Control Optim., 28 (1990), 789-809.
  • [9] B.D. Craven, "Control and Optimization", Chapman & Hall, London, 1995.
  • [10] N.O. da Cunha, E. Polak, Constrained minimization under vector-valued criteria in finite dimensional spaces, J. Math. Anal. Appl., 19 (1967), 103-124.
  • [11] H. Gfrerer, Second-order optimality conditions for scalar and vector optimization problems in Banach spaces, SIAM J. Control Optim., 45 (2006), 972-997.
  • [12] J.B. Hiriart-Urruty, Approximating a second-order directional derivative for nonsmooth convex functions, SIAM J. Control Optim., 22 (1984), 43-56.
  • [13] X.X. Huang, X.Q. Yang, Asymptotic analysis of a class of nonlinear penalty methods for constrained multiobjective optimization, Nonlinear Analysis, 47 (2001), 5573-5584.
  • [14] J. Jahn, "Mathematical Vector Optimization in Partially Ordered Linear Spaces", Peter Lang, Frankfurt, 1986.
  • [15] D.G. Luenberger, Y. Ye, "Linear and Nonlinear Programming", Springer, New York, 2008.
  • [16] J. Nocedal, S.J. Wright, "Numerical Optimization", Springer, New York, 2006.
  • [17] R. Osuna-Gómez, A. Rufián-Lizana, P. Ruiz-Canales, Duality in nondifferentiable vector programming, J. Math. Anal. Appl., 259 (2001), 462-475.
  • [18] R.T. Rockafellar, "Convex Analysis", Princeton, New Jersey, 1970.
  • [19] L.B. dos Santos, A.J.V. Brandão, R. Osuna-Gómez, M.A. Rojas-Medar, Necessary and sufficient conditions for weak efficiency in non-smooth vectorial optimization problems, Optimization, 58 (2009), 981-993.
  • [20] A. Taa, Second-order conditions for nonsmooth multiobjective optimization problems with inclusion constraints, J. Glob. Optim., 50 (2011), 271-291.
  • [21] X. Yang, Alternative theorems and optimality conditions with weakened convexity, Opsearch, 29 (1992), 125-135.
  • 1 lucelina@ufpr.br; partially supported by the Spanish Ministry of Education and Science (MEC) - Grant MTM2010-15383, and by the National Council for Scientific and Technological Development (CNPq-Brazil) - Grant 476043/2009-3.
  • 2 marko@ueubiobio.cl; partially supported by the Spanish Ministry of Education and Science (MEC) - Grant MTM2010-15383 and by the National Fund for Scientific and Technological Development (Fondecyt-Chile) - Project 1080628.
  • 3 antunes@ibilce.unesp.br; supported by the State of São Paulo Research Foundation - FAPESP - Grant 2011/01977-2.
  • Publication Dates
    • Publication in this collection: 15 Oct 2012
    • Date of issue: 2012

Sociedade Brasileira de Matemática Aplicada e Computacional, São Carlos, SP, Brazil