Saddle Point and Second Order Optimality in Nondifferentiable Nonlinear Abstract Multiobjective Optimization

This article deals with a vector optimization problem with cone constraints in a Banach space setting. By making use of a real-valued Lagrangian and the concept of generalized subconvex-like functions, weakly efficient solutions are characterized through saddle point type conditions. The results, jointly with the notion of generalized Hessian (introduced in [Cominetti, R., Correa, R.: A generalized second-order derivative in nonsmooth optimization. SIAM J. Control Optim. 28, 789–809 (1990)]), are applied to derive second order necessary and sufficient optimality conditions (without requiring twice differentiability of the objective and constraint functions) for the particular case in which the functionals involved map a general Banach space into finite dimensional spaces.


Introduction and Formulation of the Problem
In many situations, practical or theoretical, finite dimensional spaces are not the most suitable ones in which to model or study a given problem. Likewise, scalar objective programming is often not the most appropriate setting. Therefore, the development of optimality conditions for abstract vector programming problems is of great importance.
The role of nonsmooth analysis in optimization theory is of notable importance. This is due to at least the following reasons. First, in practice, differentiability assumptions may be too restrictive. Second, as pointed out by Cominetti and Correa [8], many techniques commonly employed in optimization generate "nonsmoothness" even when the problems themselves are differentiable. This arises, for example, in duality theory, sensitivity and stability analysis, decomposition techniques, and penalty methods, among others.
With respect to necessary optimality conditions without differentiability, one can resort to those of Fritz John or Kuhn-Tucker type, which are obtained under various generalized derivative concepts, or to saddle point conditions, where no differentiability assumption is required. Still on necessary conditions, we should mention second order ones, which can also be obtained through generalized (second order) derivatives. As for sufficient conditions, there are those based on convexity or generalized convexity and those of second order type. Both can be addressed in nondifferentiable frameworks.
In recent years, an extended differentiability theory has been developed through various concepts of generalized differentiability, and first order optimality conditions for the scalar optimization problem have been established (see Clarke [7] and Rockafellar [18]). Also, a significant theory of generalized second order differentiability has been developed (see, for example, Aubin and Ekeland [1], Chaney [5] and Hiriart-Urruty [12]). In particular, Cominetti and Correa introduced in [8] the notions of second order directional derivative and generalized Hessian and gave some second order optimality conditions for an abstract scalar minimization problem.
We now cite a few works concerning the aforementioned topics. Multiplier rules of Fritz John and Kuhn-Tucker type were studied, for example, in Bellaassali and Jourani [3], Da Cunha and Polak [10], Jahn [14] and Dos Santos et al. [19]. Saddle point conditions were investigated, for instance, in Bigi [4] and Chen et al. [6]. Bigi characterized saddle points assuming convex data, while Chen et al. used a distinct type of generalized convexity. Second order conditions were explored, more recently, in Gfrerer [11] and Taa [20]. In [11], the results were obtained by making use of Hadamard derivatives. In [20], abstract problems are considered, but under twice differentiability.
The reader interested in a more comprehensive bibliographic review of these issues can consult the articles just quoted, which provide extensive lists of references.
The aim of this paper is to contribute to the development of the theory of optimality conditions for nondifferentiable nonlinear abstract multiobjective optimization.
At first, we will consider a vector optimization problem which can be posed as

(P) minimize f(x) subject to −g(x) ∈ K, x ∈ S,

where f : S ⊆ E → F and g : S ⊆ E → G are given (not necessarily differentiable) functions, S is a nonempty subset of E, and E, F and G are Banach spaces. The spaces F and G are ordered by closed convex cones Q ⊂ F and K ⊂ G. Also, we assume that Q has nonempty interior. We denote by F the feasible set of (P), that is, F := {x ∈ S : −g(x) ∈ K}.

We will consider the so-called weakly efficient solutions of (P). We recall that x̄ ∈ F is said to be a weakly efficient solution (respectively, a local weakly efficient solution) of (P) if there does not exist x feasible for (P) such that f(x) − f(x̄) ∈ −int Q (respectively, if there exists a neighborhood V of x̄ such that there does not exist x ∈ F ∩ V with f(x) − f(x̄) ∈ −int Q).

Following Osuna-Gómez et al. [17], we give a definition of saddle points which has the feature of being based on solving scalar problems and not vector ones, as is usual. We then show that every point that satisfies our definition is a weakly efficient solution. The converse is also obtained, but under a generalized convexity assumption and when (P) satisfies a constraint qualification. Subsequently, we will apply these results to the (finite-dimensional) particular case

(PF) minimize (f_1(x), ..., f_p(x)) subject to g_i(x) ≤ 0, i ∈ I, x ∈ S,

where f_j, g_i : S ⊆ X → R, j ∈ J := {1, ..., p}, i ∈ I := {1, ..., m}, are continuous and Gâteaux differentiable functions and S is a nonempty open subset of a Banach space X. We obtain second order conditions for the nonsmooth finite-dimensional problem (PF) in terms of second order directional derivatives (see Cominetti and Correa [8]).

This work is divided into three more sections. In the next section, we recall some results on generalized subconvex-like functions and an alternative theorem; we also recall some properties of the generalized directional derivative and the generalized Hessian, introduced by Cominetti and Correa in [8]. In Section 3 we establish saddle point type theorems for the vector optimization problem (P) and, finally, in Section 4 we use these results to obtain second order conditions for problem (PF).

Preliminaries
This section is devoted to presenting some definitions and auxiliary results which will be useful in the sequel. First, a definition and a technical lemma concerning dual cones are stated. Then come two subsections: the first is about the notion of generalized subconvex-like functions and a Gordan type theorem of the alternative for this sort of function; in the second, the concept of second order generalized derivatives is defined and some related results are given.
Let X be a locally convex topological vector space. X* denotes the (topological) dual of X and ⟨·, ·⟩ the canonical bilinear (duality) form between X and X*.
Definition 2.1. The dual cone (or polar cone) of a set Q ⊂ X is defined as the convex cone Q* := {x* ∈ X* : ⟨x*, x⟩ ≥ 0 for all x ∈ Q}.

Lemma 2.1. Let K ⊂ X be a convex cone with nonempty interior and let v ∈ K* \ {0}. If q ∈ −int K, then ⟨v, q⟩ < 0.

The proof can be found in Craven [9].

Generalized subconvex-like functions and a Gordan type alternative theorem
Convexity and generalized convexity are very important concepts in optimization theory. One reason for this importance is that for these classes of functions it is possible to establish alternative theorems and, consequently, to obtain necessary and/or sufficient optimality conditions. The generalized convexity notion that we will use here is that of generalized subconvex-like functions, introduced by Xinmin Yang in [21], where the author showed that these functions satisfy a Gordan type alternative theorem. He also showed that the class of generalized subconvex-like functions comprises the subconvex-like, convex-like and convex classes of functions. Thus the generalized subconvex-like functions form a large class which satisfies a Gordan type alternative theorem.
Definition 2.2. Let E and F be normed spaces, S_0 a nonempty subset of E, Q ⊂ F a convex set with nonempty interior, and f : S_0 ⊂ E → F. We say that f is a generalized subconvex-like function if there exists u ∈ int Q such that for each α ∈ (0, 1) and arbitrary x_1, x_2 ∈ S_0 and ε > 0, there exist x_3 ∈ S_0 and ρ > 0 such that

εu + α f(x_1) + (1 − α) f(x_2) − ρ f(x_3) ∈ Q.

The class of the generalized subconvex-like functions satisfies the following alternative theorem (see [21], pp. 128-130):

Theorem 2.1 (Generalized Alternative Theorem). Let E and F be two Banach spaces, Q ⊂ F a convex cone with nonempty interior and S ⊂ E nonempty. Assume that f : S → F is generalized subconvex-like. Then, exactly one of the following statements is consistent:
(i) there exists x ∈ S such that f(x) ∈ −int Q;
(ii) there exists λ ∈ Q* \ {0} such that λ • f(x) ≥ 0 for all x ∈ S.

Second order generalized derivative and the generalized Hessian
In this subsection we recall some results concerning the generalized second order derivative and the generalized Hessian. We start by giving their definitions. Then, certain important classes of functions are introduced. The subsection is closed with two propositions, where topological properties of the generalized derivative and a second order Taylor type expansion are exhibited. For more details, see Cominetti and Correa [8].
In the following, X is a locally convex topological vector space.
Definition 2.3. The generalized second order directional derivative of a function f : X → R at x in the directions u, v ∈ X is denoted by f••(x; u, v), and the generalized Hessian of f at x is the multifunction ∂²f(x) : X ⇉ X*.

In order to obtain continuity properties for the generalized Hessian it is necessary to define the following classes of functions.

Definition 2.4. We say that f : X → R is twice C-differentiable at x if the function v ↦ f••(x; u, v) is lower semicontinuous (l.s.c.), for each u ∈ X.

Definition 2.5. We say that f : X → R is twice locally Lipschitz at x if for each v ∈ X there exist a neighborhood V of x and a neighborhood U of zero such that the set f••(V; U, v) is bounded in R. If the boundedness of f••(V; U, v) is uniform in v, that is, if there exist neighborhoods V of x and U of zero such that f••(V; U, U) is bounded in R, then we say that f is twice uniformly locally Lipschitzian at x.
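For ease of reference, the second order objects of [8] can be sketched as follows; this is the standard formulation as we read it, and the precise statements should be checked against [8]:

```latex
f^{\bullet\bullet}(x;u,v) \;=\; \limsup_{\substack{y \to x \\ s,\,t\,\downarrow\, 0}}
\frac{f(y+su+tv) - f(y+su) - f(y+tv) + f(y)}{st}\,,
\qquad
\partial^{2} f(x)(u) \;=\; \bigl\{\, x^{*} \in X^{*} \;:\;
\langle x^{*}, v \rangle \le f^{\bullet\bullet}(x;u,v)
\ \text{for all } v \in X \,\bigr\}.
```

When f is twice continuously differentiable these objects reduce to the classical ones: f••(x; u, v) = ⟨∇²f(x)u, v⟩ and ∂²f(x)(u) = {∇²f(x)u}.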
In [8] it is proved that if f : X → R is twice locally Lipschitz at x, then f is twice C-differentiable at every point of V, where V is chosen as in the last definition.

Definition 2.6. Let Y, Z be topological vector spaces and A : Y ⇉ Z a multifunction. We say that A is locally compact at y ∈ Y if there exists a neighborhood V of y such that A(V) is contained in a compact subset of Z. We say that A is closed at y if for each net y_α → y and z_α → z with z_α ∈ A(y_α) for all α, we have z ∈ A(y).
If A is locally compact and closed at y, we say that A is upper semicontinuous (u.s.c.) at y.
An important class of twice uniformly locally Lipschitzian functions is defined below.
Definition 2.7. We say that f : X → R is a C^{1,1}-function if it is Gâteaux differentiable and the (Gâteaux) derivative ∇f is locally Lipschitz.

Proposition 2.1 (Cominetti and Correa [8]). Assume that f : X → R is twice locally Lipschitz at x. Then, for each u ∈ X, the following assertions are satisfied:

The following proposition is a version of the second order Taylor expansion for twice C-differentiable functions.

Proposition 2.2 (Cominetti and Correa [8]). Let f : X → R be a continuously Gâteaux differentiable function which is twice C-differentiable in the closed segment [x, y] ⊂ X. Then, there exists ξ in the open segment ]x, y[ such that

f(y) ∈ f(x) + ⟨∇f(x), y − x⟩ + (1/2) cl conv ⟨∂²f(ξ)(y − x), y − x⟩,

and the closure is unnecessary when f is C^{1,1}.
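These objects can be explored numerically. The sketch below (a toy function of our own, not taken from the paper) estimates the generalized second order derivative at 0 of f(x) = ½ x|x|, a C^{1,1} function whose derivative f′(x) = |x| is Lipschitz but not differentiable at 0, by sampling the second order difference quotient from which f•• is built (as we read [8]):

```python
# Estimate the generalized second order directional derivative of
# f(x) = 0.5 * x * |x| at x = 0.  f is C^{1,1}: f'(x) = |x| is locally
# Lipschitz but not differentiable at 0, so the classical f''(0) does
# not exist.  We sample the double difference quotient
#     [f(y + s*u + t*v) - f(y + s*u) - f(y + t*v) + f(y)] / (s*t),
# whose limsup as y -> 0 and s, t -> 0 gives the generalized derivative.

def f(x):
    return 0.5 * x * abs(x)

def quotient(y, u, v, s=1e-6, t=1e-6):
    return (f(y + s*u + t*v) - f(y + s*u) - f(y + t*v) + f(y)) / (s * t)

# To the right of 0, f behaves like 0.5*x**2, so the quotient is close
# to u*v; to the left it behaves like -0.5*x**2, so it is close to -u*v.
right = quotient(1e-3, 1.0, 1.0)
left = quotient(-1e-3, 1.0, 1.0)
estimate = max(right, left)  # crude estimate of the limsup at (u, v) = (1, 1)
print(right, left, estimate)
```

The two one-sided values ±uv do not agree, which is exactly the second order nonsmoothness that the generalized Hessian is designed to capture, and the same kind of C^{1,1} behaviour that appears in Example 4.1.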

Saddle Point Type Conditions
Following the guidelines of Kuhn-Tucker and Fritz John theory, we characterize weakly efficient solutions of problem (P) in terms of saddle point type conditions. Here we give a saddle point definition for the vector optimization problem which is based on solving scalar problems, instead of vector ones as in most existing definitions in the literature. In other words, such a definition has the property of not involving the resolution of a multiobjective problem. Furthermore, our definition generalizes the one introduced in Osuna-Gómez et al. [17] for the corresponding case in which (P) is stated in a finite-dimensional setting.
Definition 3.1. We say that (x̄, r̄, v̄) ∈ E × F* × G* is a multiple saddle point for the problem (P) if r̄ ∈ Q* \ {0}, v̄ ∈ K*, x̄ ∈ F and

r̄ • f(x̄) + v • g(x̄) ≤ r̄ • f(x̄) + v̄ • g(x̄) ≤ r̄ • f(x) + v̄ • g(x) for all v ∈ K* and all x ∈ S.

As in the classical case, if (x̄, r̄, v̄) is a multiple saddle point, then x̄ is a weakly efficient solution of (P). Before we prove this assertion we need the following auxiliary result.

Lemma 3.1. Let E, F be two Banach spaces. Assume that F is ordered by the convex cone Q ⊂ F with nonempty interior and let f : Γ ⊂ E → F. Consider the vector optimization problem (P̃): minimize f(x) subject to x ∈ Γ. If there exists r̄ ∈ Q* \ {0} such that x̄ is a solution of the scalar problem (P̃(r̄)): minimize r̄ • f(x) subject to x ∈ Γ, then x̄ is a weakly efficient solution of (P̃).

Proof. Suppose that x̄ ∈ Γ is not a weakly efficient solution of (P̃). In this case, there exists x ∈ Γ such that f(x) − f(x̄) ∈ −int Q. Since r̄ ∈ Q* \ {0}, Lemma 2.1 yields r̄ • (f(x) − f(x̄)) < 0, which contradicts the minimality of x̄ for (P̃(r̄)).
Theorem 3.1. If (x̄, r̄, v̄) is a multiple saddle point, then x̄ is a weakly efficient solution of (P).
Proof. By Lemma 3.1, it is enough to show that x̄ is a solution of the scalar problem (P̃(r̄)): minimize r̄ • f(x) subject to x ∈ F, where F := {x ∈ S : −g(x) ∈ K}. Since (x̄, r̄, v̄) is a multiple saddle point, we have

r̄ • f(x̄) + v • g(x̄) ≤ r̄ • f(x̄) + v̄ • g(x̄) for all v ∈ K*.

In particular, setting v = 0, we obtain v̄ • g(x̄) ≥ 0; since −g(x̄) ∈ K and v̄ ∈ K* give v̄ • g(x̄) ≤ 0, therefore v̄ • g(x̄) = 0. Hence, for every x ∈ F,

r̄ • f(x̄) = r̄ • f(x̄) + v̄ • g(x̄) ≤ r̄ • f(x) + v̄ • g(x) ≤ r̄ • f(x),

and, thus, x̄ is a solution of (P̃(r̄)).
The converse of the above result is also true under certain generalized convexity hypotheses (in our case, generalized subconvex-likeness) and a regularity condition on the constraints of the problem. We use a Slater type constraint qualification.

Definition 3.2 (Slater type constraint qualification). We say that the constraint qualification (CQ) is satisfied if there exists x̂ ∈ F such that g(x̂) ∈ −int K.

Theorem 3.2. Assume that in problem (P) the function (f − f(x̄), g) is generalized subconvex-like (with respect to the cone Q × K ⊂ F × G). If x̄ ∈ F is a weakly efficient solution of (P) and (CQ) is verified, then there exist r̄, v̄ such that (x̄, r̄, v̄) is a multiple saddle point.
Proof. The proof follows from Theorem 2.1. In fact, if x̄ is a weakly efficient solution of (P), there does not exist a solution x ∈ S of the system

f(x) − f(x̄) ∈ −int Q, g(x) ∈ −int K.

Since the function (f − f(x̄), g) is generalized subconvex-like, by Theorem 2.1 there exists (r̄, v̄) ∈ Q* × K*, (r̄, v̄) ≠ (0, 0), such that

r̄ • (f(x) − f(x̄)) + v̄ • g(x) ≥ 0 for all x ∈ S. (3.2)

Now, we show that r̄ ≠ 0. From condition (CQ), there exists x̂ ∈ S with g(x̂) ∈ −int K. Taking x = x̂ in (3.2), we obtain

r̄ • (f(x̂) − f(x̄)) + v̄ • g(x̂) ≥ 0. (3.3)

By contradiction, assume that r̄ = 0. Then v̄ ≠ 0 and, as g(x̂) ∈ −int K, it follows from Lemma 2.1 that v̄ • g(x̂) < 0. On the other hand, with r̄ = 0 in (3.3) we get the opposite inequality, so that we have a contradiction. Therefore, r̄ ≠ 0. Finally, taking x = x̄ in (3.2) gives v̄ • g(x̄) ≥ 0, while the feasibility of x̄ gives v̄ • g(x̄) ≤ 0, so v̄ • g(x̄) = 0; consequently, for every v ∈ K*, v • g(x̄) ≤ 0 = v̄ • g(x̄), and together with (3.2) this shows that (x̄, r̄, v̄) is a multiple saddle point.

Second Order Conditions
Here, two relevant results regarding second order optimality conditions for (PF) are proposed. Necessity and sufficiency are tackled as applications of the notions and results studied so far. It is worth mentioning that such conditions are established without demanding twice differentiability (in the classical sense). We consider the following vector optimization problem:

(PF) minimize (f_1(x), ..., f_p(x)) subject to g_i(x) ≤ 0, i = 1, ..., m, x ∈ S,

where X is a Banach space, f_j, g_i : S ⊆ X → R, j = 1, ..., p, i = 1, ..., m, are continuous and Gâteaux differentiable functions, and S is a nonempty open subset of X.
We prove second order conditions for weak efficiency in (PF) through the notions of second order directional derivative and generalized Hessian (Cominetti and Correa [8]) and the saddle point conditions studied in the previous section.
We consider the Lagrangian function

L(x, r, v) := Σ_{j=1}^{p} r_j f_j(x) + Σ_{i=1}^{m} v_i g_i(x),

where r ∈ R^p_+, v ∈ R^m_+ and x ∈ X. It is well known (see Da Cunha and Polak [10] or Jahn [14]) that if x̄ is a weakly efficient solution of (PF) and a regularity condition holds, then there exist r̄ ∈ R^p_+ \ {0} and v̄ ∈ R^m_+ such that

Σ_{j=1}^{p} r̄_j ∇f_j(x̄) + Σ_{i=1}^{m} v̄_i ∇g_i(x̄) = 0, (4.1)

v̄_i g_i(x̄) = 0, i = 1, ..., m. (4.2)

In this case, (r̄, v̄) is said to be a pair of multipliers. Here we give a proof of this result assuming that the Slater type constraint qualification holds and that the functionals involved are generalized subconvex-like.
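For concreteness, conditions (4.1)-(4.2) can be checked numerically on a toy instance; the functions below are our own illustration and are not taken from the paper:

```python
# A toy check of the multiplier conditions (4.1)-(4.2).  The instance is
# our own illustration (not from the paper): minimize (f1, f2) subject
# to g(x) <= 0 on S = R, with
#   f1(x) = x**2,  f2(x) = (x - 1)**2,  g(x) = -x   (i.e. x >= 0).
# The point xbar = 0 is weakly efficient; r = (1, 0), v = 0 are multipliers.

def grad_f1(x): return 2.0 * x
def grad_f2(x): return 2.0 * (x - 1.0)
def g(x): return -x
def grad_g(x): return -1.0

xbar = 0.0
r = (1.0, 0.0)  # r in R^2_+ \ {0}
v = 0.0         # v in R_+

# (4.1): stationarity of the Lagrangian at xbar
stationarity = r[0] * grad_f1(xbar) + r[1] * grad_f2(xbar) + v * grad_g(xbar)
# (4.2): complementarity
complementarity = v * g(xbar)
print(stationarity, complementarity)
```

Note that r = (0, 1) is not a valid choice here, since f2 alone is not stationary at 0; the weights r select which weighted scalarization the point minimizes.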
Theorem 4.1. Let x̄ be a weakly efficient solution of (PF). We assume that the function (f − f(x̄), g) is generalized subconvex-like (with respect to the cone R^p_+ × R^m_+) and that (PF) satisfies the Slater constraint qualification. Then, there exists a pair of multipliers (r̄, v̄) satisfying (4.1)-(4.2).
Proof. From Theorem 3.2, there exists (r̄, v̄) ∈ R^p_+ × R^m_+ such that (x̄, r̄, v̄) is a multiple saddle point. In particular, the following inequality holds true:

Σ_{j=1}^{p} r̄_j f_j(x̄) + Σ_{i=1}^{m} v̄_i g_i(x̄) ≤ Σ_{j=1}^{p} r̄_j f_j(x) + Σ_{i=1}^{m} v̄_i g_i(x) for all x ∈ S.

Since f_j, g_i are Gâteaux differentiable and S is open, the inequality above implies that Σ_{j=1}^{p} r̄_j ∇f_j(x̄) + Σ_{i=1}^{m} v̄_i ∇g_i(x̄) = 0. Furthermore, as we know from the proof of Theorem 3.2, when (x̄, r̄, v̄) is a multiple saddle point we have Σ_{i=1}^{m} v̄_i g_i(x̄) = 0, so that (4.1)-(4.2) hold and (r̄, v̄) is a pair of multipliers.

Theorem 4.2 (Second order necessary conditions). Assume that x̄ is a weakly efficient solution of (PF). If (f − f(x̄), g) is a generalized subconvex-like function and (PF) satisfies the Slater constraint qualification, then (i) there exists (r̄, v̄) such that (x̄, r̄, v̄) is a multiple saddle point; (ii) the following inequality is verified:

From (4.3), the feasibility of x_k and the fact that (r̄, v̄) is a pair of multipliers, it follows that

In this way we have

We now present a very simple example illustrating Theorem 4.3.
Observe that f′_1 and f′_2 are not differentiable functions (in the classical sense). Let x̄ = 0 and x̃ = 1. Then it is easily verified that L′_r̄(x̄) = 0 for r̄ = (1, 0) and L′_r̃(x̃) = 0 for r̃ = (0, 1), so that r̄ and r̃ are multipliers for x̄ and x̃, respectively. We also have that −L••_r(x; u, −u) > 0 for all u ∈ R \ {0}, for (r, x) = (r̄, x̄) and (r, x) = (r̃, x̃). Thus x̄ and x̃ are weakly efficient solutions of this problem.
We close this paper with a few words on possible applications of generalized second order optimality conditions.
In Huang and Yang [13] the authors present some nonlinear penalty methods for a constrained multiobjective optimization problem. Our last result can be used, for example, in the study and development of this kind of method. It is well known that penalty functions may be nonsmooth. Besides, even when a smoothing approach is performed, the resulting function may not be twice differentiable.
The examination of the Hessian of the penalty function is important when choosing effective algorithms (see Nocedal and Wright [16] for the mono-objective case). The efficiency of penalty methods relies (though not exclusively) on the conditioning of the Hessian matrix.
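To illustrate the conditioning issue in the mono-objective quadratic-penalty setting discussed by Nocedal and Wright [16], consider the following sketch (a toy problem of our own, not taken from the paper):

```python
# Sketch of how the Hessian of a quadratic penalty function degrades as
# the penalty parameter mu grows.  For
#     minimize x1**2 + x2**2  subject to  x1 + x2 = 1,
# the penalty function Q(x; mu) = x1**2 + x2**2 + (mu/2)*(x1 + x2 - 1)**2
# has constant Hessian 2*I + mu*A with A = [[1, 1], [1, 1]], whose
# eigenvalues are 2 and 2 + 2*mu, hence condition number 1 + mu.

import numpy as np

def penalty_hessian(mu):
    return 2.0 * np.eye(2) + mu * np.array([[1.0, 1.0], [1.0, 1.0]])

for mu in (1.0, 100.0, 10000.0):
    print(mu, np.linalg.cond(penalty_hessian(mu)))  # grows like 1 + mu
```

As mu → ∞, which is required for the penalized minimizers to approach the constrained one, the Hessian becomes arbitrarily ill-conditioned; this is the phenomenon that second order information helps to monitor.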
In Bazaraa et al. [2] it can be seen that second order sufficient conditions are assumed in proving that the augmented Lagrangian penalty function is an exact penalty function (for scalar optimization). Thus, Theorem 4.3 can be employed in the development of such a method for multiobjective problems with C^{1,1} data.
Another application of sufficient second order optimality conditions is in sensitivity analysis. See Luenberger and Ye [15] for the mono-objective case.
These are topics for future work.

Example 4.1. Consider the problem: minimize (f_1(x), f_2(x)), x ∈ R.