
GENERALIZED NASH EQUILIBRIUM PROBLEMS - RECENT ADVANCES AND CHALLENGES

Abstract

Generalized Nash equilibrium problems have become very important as a modeling tool during the last decades. The aim of this survey paper is twofold. It summarizes recent advances in the research on computational methods for generalized Nash equilibrium problems and points out current challenges. The focus of this survey is on algorithms and their convergence properties. Therefore, we also present reformulations of the generalized Nash equilibrium problem, results on error bounds and properties of the solution set of the equilibrium problems.

Keywords: generalized Nash equilibrium problem; reformulation; algorithms; local and global convergence; error bounds; structural properties


1 INTRODUCTION

In a Nash equilibrium problem (NEP for short), N players compete with each other. Every player is allowed to choose a strategy from his strategy set in order to minimize his objective. The objective of a player may depend on both the player's and the rival players' strategies. A vector of strategies for all players is called Nash equilibrium or simply solution of a NEP if none of the players is able to improve his objective by solely changing his strategy. NEPs were defined in 1950 by John F. Nash [60, 61]. The generalized Nash equilibrium problem (GNEP for short) was introduced in 1952 by Gerard Debreu [12]. In a GNEP, the strategy set of each player may also depend on the strategies of the other players.

GNEPs have become an important field of research during the last two decades and have gained in importance for many practical applications, for example in economics, computer science, and engineering. In [28] many references for applications can be found. In particular, three problems are described there in more detail: the economy model by Arrow and Debreu, a power allocation problem in telecommunications, and a competition among countries that arises from the Kyoto protocol to reduce air pollution. Further problems in different applications have been successfully modeled as (generalized) Nash equilibrium problems in recent years, for example in cloud computing [1, 2, 10], in electricity generation [11, 62, 67], in wireless communication [43, 57, 66, 71], in adversarial classification [7, 8], and in the non-life insurance market [22].

The paper [28] from 2010 is not only a reference for applications; it also offers an excellent survey of GNEPs. Since then there have been important new developments in the context of GNEPs, so we think it is worthwhile to report on such advances. Besides that, our aim is to indicate current challenges and to motivate future research.

The main focus of this paper is on algorithms for the solution of GNEPs and related issues. Therefore, globally convergent methods as well as methods with fast local convergence are presented. We concentrate on recently developed algorithms and on well-known methods for which new convergence results have been obtained in the last years. There is a very recent survey paper [21] that gives a detailed description of some algorithms, mainly for some equation-based reformulations of GNEPs.

From the theoretical point of view our focus lies on structural properties of the solution set of GNEPs and on error bound results. The latter play an important role in the local convergence analysis of several algorithms. For other theoretical results, in particular on existence, we refer to the survey papers [28] and [20] and references therein. In [28] a detailed overview of the history of the generalized Nash equilibrium problem can also be found. Stochastic and time-dependent GNEPs are not within the scope of our paper; see [46] for a recent book.

The paper is organized as follows. In the remaining part of the Introduction the generalized Nash equilibrium problem is described and basic definitions are given. Reformulations of GNEPs by means of other problem classes are presented in Section 2. Such reformulations turn out to be useful from the theoretical as well as from the algorithmic point of view. Section 3 is devoted to local error bound results for GNEPs, whereas Section 4 reviews structural properties of the solution set of GNEPs. Section 5 deals with globally convergent algorithms and, in Section 6, we describe methods with locally fast convergence for the solution of GNEPs and some globalization techniques.

1.1 Problem statement

Let us consider a game of N players ν = 1,..., N. Each player ν controls his strategy vector xν ∈ ℝnν of nν decision variables. The vector x := (x1,..., xN) ∈ ℝn contains the n = n1 + ... + nN decision variables of all players. To emphasize the ν-th player's variables within x, we sometimes write (xν, x−ν) instead of x, where x−ν := (x1,..., xν−1, xν+1,..., xN).

Each player ν has an objective function θν : ℝn → ℝ that may depend on both the player's decision variables xν and the decision variables x−ν of the rival players. Depending on the practical setting, the objective function of a player is sometimes called utility function, payoff function, or loss function. Moreover, each player's strategy xν has to belong to a set Xν(x−ν) ⊆ ℝnν that is allowed to depend on the rival players' strategies. Xν(x−ν) is called the feasible set or strategy space of player ν. In many applications the feasible set is defined by inequality constraints, i.e., for each ν = 1,..., N, there is a continuous function gν : ℝn → ℝmν so that

Xν(x−ν) = {xν ∈ ℝnν | gν(xν, x−ν) ≤ 0}.   (1)
For any given x ∈ ℝn, let us define

X(x) := X1(x−1) × ··· × XN(x−N).   (2)
Note that xν ∈ Xν(x−ν) holds for all ν = 1,..., N if and only if the fixed point condition x ∈ X(x) is satisfied.

If we fix the rival players' strategies x−ν, the aim of player ν is to choose a strategy xν ∈ Xν(x−ν) which solves the optimization problem

min θν(xν, x−ν)  subject to  xν ∈ Xν(x−ν).   (3)
The GNEP is the problem of finding x* ∈ X(x*) so that, for all ν = 1,..., N,

θν(x*,ν, x*,−ν) ≤ θν(xν, x*,−ν)  for all xν ∈ Xν(x*,−ν)
holds. Such a vector x* is called generalized Nash equilibrium (NE for short) or simply solution of the GNEP. By SOL(GNEP) we denote the set of all solutions of the GNEP. If the feasible sets of all players do not depend on the rival players' strategies, the GNEP reduces to the classical Nash equilibrium problem.
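The equilibrium condition above can be checked numerically on a small instance. The two-player problem below (its objectives, constraint, and solution set are assumptions of this sketch, not taken from the text) has exactly the solutions (a, 1 − a) for a ∈ [1/2, 1], which a per-player grid search over feasible deviations confirms:

```python
# Hypothetical two-player GNEP, used only for illustration (not from the text):
#   player 1: min_{x1} (x1 - 1)^2    s.t. x1 + x2 <= 1
#   player 2: min_{x2} (x2 - 1/2)^2  s.t. x1 + x2 <= 1
# Its generalized Nash equilibria are (a, 1 - a) for a in [1/2, 1].

def theta1(x1, x2): return (x1 - 1.0) ** 2
def theta2(x1, x2): return (x2 - 0.5) ** 2

def is_equilibrium(x1, x2, tol=1e-9, steps=2001):
    # grid of candidate strategies in [0, 1]; sufficient for this bounded example
    grid = [i / (steps - 1) for i in range(steps)]
    # best feasible deviation of each player, the rival's strategy being fixed
    best1 = min(theta1(t, x2) for t in grid if t + x2 <= 1.0 + 1e-12)
    best2 = min(theta2(x1, t) for t in grid if x1 + t <= 1.0 + 1e-12)
    return theta1(x1, x2) <= best1 + tol and theta2(x1, x2) <= best2 + tol

print(is_equilibrium(0.75, 0.25))  # True: no player can improve unilaterally
print(is_equilibrium(0.25, 0.75))  # False: player 2 would deviate to x2 = 1/2
```

Note how the constraint of each player depends on the rival's strategy, which is exactly what distinguishes this GNEP from a classical NEP.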

1.2 Further definitions and notation

Among all GNEPs, player convex GNEPs play an important role. We call a GNEP player convex if, for every player ν and for every x−ν, the set Xν(x−ν) is closed and convex and the objective function θν(·, x−ν) is convex. This is a slightly weaker notion than the player convexity used in [45], where Xν(x−ν) is given by (1) with continuous and componentwise convex functions gν(·, x−ν).

A further useful class of GNEPs is described by the existence of a nonempty set X ⊆ ℝn with the property

Xν(x−ν) = {xν ∈ ℝnν | (xν, x−ν) ∈ X}   (4)

for all ν = 1,..., N and all x−ν. Members of this class are often called GNEPs with common or shared constraints.

Proposition 1.1 (see [74]). For GNEPs satisfying (4) the following equivalences hold:

x ∈ X  ⇔  xν ∈ Xν(x−ν) for all ν = 1,..., N  ⇔  x ∈ X(x).
Proposition 1.1 tells us that in the case of a GNEP with shared constraints the fixed points of the point-to-set map x → X(x) coincide with the elements of X. However, note that for given x ∈ X, the sets X and X(x) need not coincide. More precisely, neither X(x) ⊆ X nor X ⊆ X(x) hold in general.
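The distinction between X and X(x) can be made concrete with a tiny shared-constraint example (the sets below are assumptions chosen for this sketch, not taken from the text):

```python
# Shared constraint X = {x in R^2 : x1 + x2 <= 1} for two players with scalar
# strategies, so X1(x2) = {y1 : y1 + x2 <= 1} and X2(x1) = {y2 : x1 + y2 <= 1}.

def in_X(y):
    return y[0] + y[1] <= 1.0

def in_X_of(x, y):
    # y in X(x) = X1(x2) x X2(x1): each component feasible given the rival part of x
    return (y[0] + x[1] <= 1.0) and (x[0] + y[1] <= 1.0)

x = (0.3, 0.4)
print(in_X(x) == in_X_of(x, x))   # True: x in X iff x in X(x), as in Proposition 1.1

x0, y = (0.0, 0.0), (1.0, 1.0)
print(in_X_of(x0, y), in_X(y))    # True False: X(x0) is not contained in X
```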

An important subclass of GNEPs with shared constraints are the jointly convex GNEPs. A GNEP is called jointly convex if, for every player ν and for every x−ν, the function θν(·, x−ν) is convex and if (4) holds with a nonempty, closed, and convex set X ⊆ ℝn. The set X is often described by

X = {x ∈ ℝn | G(x) ≤ 0},   (5)

where G : ℝn → ℝM denotes a componentwise convex function. Then, (4) becomes

Xν(x−ν) = {xν ∈ ℝnν | G(xν, x−ν) ≤ 0}.

Obviously, a jointly convex GNEP is also player convex. To deal with more complicated cases, several authors (e.g., see [18]) consider the set

W := {x ∈ ℝn | gν(xν, x−ν) ≤ 0 for all ν = 1,..., N}.   (6)

This set coincides with the set of the fixed points of the point-to-set map x → X(x). In the case of shared constraints (4) we have W = X.

Proposition 1.2 (see [18]). For any GNEP with W defined by (6) the following equivalences hold:

x ∈ W  ⇔  xν ∈ Xν(x−ν) for all ν = 1,..., N  ⇔  x ∈ X(x).
Note that, for given x ∈ W, the sets W and X(x) need not coincide. Furthermore, W is nonconvex in general, even if the GNEP is player convex.

Finally, let us state some basic notation. Throughout, ║·║ indicates the Euclidean vector norm. Sometimes, the maximum vector norm ║·║∞ is used. By Bδ(ɀ) := {x ∈ ℝn | ║x − ɀ║ ≤ δ} we denote the closed ball around ɀ ∈ ℝn with radius δ > 0. As before, the set of all solutions of a GNEP is denoted by SOL(GNEP). More generally, we will use SOL(◊) to denote the solution set of a certain problem ◊. For a given function F its Jacobian is indicated by JF. We sometimes use ∇F(x) := JF(x)ᵀ. The Hadamard product of the vectors ɑ and b is denoted by ɑ ∘ b, i.e., (ɑ ∘ b)i := ɑibi for all i. Finally, ℝ+ (ℝ++) denotes the nonnegative (positive) reals.

2 REFORMULATION OF GNEPS

This section is devoted to several reformulations of the generalized Nash equilibrium problem. Such reformulations turn out to be useful to prove theoretical results, for instance on existence. Even more important, reformulations are often the key to the design of numerical algorithms for the solution of GNEPs.

2.1 Variational reformulations

Quasi-variational inequalities (QVIs) and, as a subclass, variational inequalities (VIs) play an important role for understanding and solving GNEPs. Before discussing relations between QVIs, VIs, and GNEPs let us introduce the corresponding notions. To this end, let F : ℝn → ℝn be continuous and let K : ℝn ⇒ ℝn denote a point-to-set map so that K(x) is closed and convex for all x ∈ ℝn. Then, the problem of finding x* satisfying

x* ∈ K(x*)  and  F(x*)ᵀ(y − x*) ≥ 0  for all y ∈ K(x*)

is called quasi-variational inequality and denoted by QVI(K, F). If, for some closed convex set K ⊆ ℝn, it holds that K(x) = K for all x ∈ ℝn, then QVI(K, F) is called variational inequality and denoted by VI(K, F). To link QVIs to GNEPs let us define F : ℝn → ℝn by

F(x) := (∇x1θ1(x), ..., ∇xNθN(x)),

assuming that θ1,..., θN are C1-functions. If the point-to-set map X : ℝn ⇒ ℝn given by (2) is closed and convex then QVI(X, F) is well defined.

Theorem 2.1 ([6]). Let the functions θ1,..., θN be C1 and the GNEP be player convex. Then, any solution x* of the GNEP is a solution of the QVI(X, F), and vice versa.

Developing methods for broader classes of QVIs is challenging but seems quite helpful for the solution of GNEPs.

If the GNEP is a NEP, i.e., if for every ν there is a set Xν ⊆ ℝnν such that Xν(x−ν) = Xν for all x−ν, the above theorem tells us that, under a convexity assumption, any solution x* of the NEP is a solution of the variational inequality VI(X1 × ··· × XN, F), and vice versa. More interestingly, we will see in Subsection 4.2 that solutions of an appropriate VI may provide certain solutions of a GNEP. A basic result in this direction is

Theorem 2.2 ([3, 26]). Let the functions θ1,..., θN be C1 and let the GNEP be jointly convex. Then, SOL(VI(X, F)) ⊆ SOL(GNEP).

The set X in the previous theorem is connected to a jointly convex GNEP by its definition, see Subsection 1.2. Under the conditions of Theorem 2.2, any solution of VI(X, F) is called normalized Nash equilibrium or variational equilibrium. By means of the Nikaido-Isoda function this notion can be extended to the case of only continuous functions θ1,..., θN, see [72, 74], for example.
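Theorem 2.2 suggests computing a variational equilibrium by solving VI(X, F). A minimal sketch for a strongly monotone instance (the concrete objectives and the set X are assumptions of this sketch, not from the text) uses the classical projection iteration x ← PX(x − τF(x)):

```python
# Illustrative jointly convex example (an assumption of this sketch, not from the text):
#   theta1 = (x1 - 1)^2, theta2 = (x2 - 1/2)^2, X = {x : x1 + x2 <= 1},
# so F(x) = (2(x1 - 1), 2(x2 - 1/2)) is strongly monotone.

def F(x):
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 0.5)]

def project_X(x):
    # Euclidean projection onto the half-space x1 + x2 <= 1
    s = x[0] + x[1] - 1.0
    if s <= 0.0:
        return x
    return [x[0] - s / 2.0, x[1] - s / 2.0]

# fixed-point iteration x <- P_X(x - tau * F(x)), a standard projection method
x, tau = [0.0, 0.0], 0.25
for _ in range(200):
    fx = F(x)
    x = project_X([x[0] - tau * fx[0], x[1] - tau * fx[1]])

print([round(v, 6) for v in x])  # [0.75, 0.25], the variational equilibrium
```

One can verify directly that (3/4, 1/4) solves both players' problems of this example, in line with the inclusion SOL(VI(X, F)) ⊆ SOL(GNEP).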

2.2 Reformulations based on Nikaido-Isoda functions

Throughout this subsection it is assumed that the functions θν are at least continuous. For some parameter α ≥ 0, let the map Ψα : ℝn × ℝn → ℝ be defined by

Ψα(x, y) := ∑ν [θν(xν, x−ν) − θν(yν, x−ν)] − (α/2)║x − y║²,

where the sum is taken over ν = 1,..., N. Taking α = 0, this is the classical Nikaido-Isoda function [63] (NI function for short). Ψ0 is also known as the Ky Fan function. For α > 0, Ψα is the regularized Nikaido-Isoda function introduced in [74]. To understand the benefit of the (regularized) NI function let

V̂α(x) := sup {Ψα(x, y) | y ∈ X(x)}  and  Ŷα(x) := {y ∈ X(x) | Ψα(x, y) = V̂α(x)}.

Theorem 2.3. Let the GNEP be player convex. Then, it holds for α ≥ 0:

(a) V̂α(x) ≥ 0 for all x satisfying x ∈ X(x).

(b) x* is a NE if and only if x* ∈ X(x*) and V̂α(x*) = 0.

(c) x* is a NE if and only if x* is a fixed point of the point-to-set map x → Ŷα(x).

(d) If α > 0 then, for every x ∈ X(x), a unique vector ŷα(x) ∈ X(x) exists so that Ŷα(x) = {ŷα(x)}, i.e., ŷα(x) is the unique maximizer of Ψα(x, ·) over X(x).

This theorem summarizes results in Theorem 2.2 and Proposition 2.3 of [74], where the GNEP is assumed to be jointly convex. However, the proofs directly extend to player convex GNEPs, see [18]. For the classical NI function (α = 0), the assertions (a)-(c) even hold without player convexity. Given x ∈ X(x), the set Ŷ0(x) might be empty. Only under strong convexity assumptions on the functions θν can it be shown that Ŷ0(x) is single-valued. Based on Theorem 2.3, several (quasi-) optimization and fixed point problems can be formulated. In general, the resulting problems are nonsmooth. The term "quasi" emphasizes the situation that the description of the feasible region X(x) itself depends on the varying variable x. Taking into account Proposition 1.1, a "real" but still complicated (nonsmooth) optimization problem can be considered for GNEPs with shared constraints. Then, a vector x* is a NE if and only if

V̂0(x*) = 0 and x* is a global solution of

min V̂0(x)  subject to  x ∈ X.   (9)
For player convex GNEPs with shared constraints this statement is also valid for V̂α with α > 0. Nevertheless, problem (9) remains nonsmooth in general. For obtaining smooth reformulations jointly convex GNEPs can be considered. To this end, in the definition of V̂α and Ŷα above X(x) is replaced by X, i.e.,

Vα(x) := sup {Ψα(x, y) | y ∈ X}  and  Yα(x) := {y ∈ X | Ψα(x, y) = Vα(x)}

are defined for any x ∈ ℝn. Then, connections to normalized NE can be established.

Theorem 2.4 (see [74]). Let the GNEP be jointly convex. Then, it holds for α ≥ 0:

(a) Vα(x) ≥ 0 for all x ∈ X.

(b) x* is a normalized NE if and only if x* ∈ X and Vα(x*) = 0.

(c) x* is a normalized NE if and only if x* is a fixed point of the point-to-set map x → Yα(x).

(d) If α > 0 then, for every x ∈ X, a unique yα(x) ∈ X exists so that Yα(x) = {yα(x)}, i.e., yα(x) is the unique maximizer of Ψα(x, ·) over X, and the map x → yα(x) is continuous.

(e) If α > 0 and if θ1,..., θN are C1-functions then the function Vα is C1.

Theorem 2.4 leads to smooth constrained optimization and fixed point reformulations for jointly convex GNEPs. In particular, x* is a normalized NE if and only if Vα(x*) = 0 and x* is a global solution of

min Vα(x)  subject to  x ∈ X.   (10)

Moreover, under the conditions in Theorem 2.4 (e), the objective in (10) becomes continuously differentiable. Results similar to those of Theorems 2.3 and 2.4 were presented in [54] for a modification with a player-related regularization of the NI function.

A smooth unconstrained reformulation is given in

Theorem 2.5 (see [74]). Let the GNEP be jointly convex. Then, it holds for β > α > 0:

(a) Vαβ(x) := Vα(x) − Vβ(x) ≥ 0 for all x ∈ ℝn.

(b) x* is a normalized NE if and only if Vαβ(x*) = 0.

(c) If θ1,..., θN are C1-functions then the function Vαβ is C1.

Thus, if the GNEP has at least one normalized NE, the set of global minimizers of Vαβ is equal to the set of normalized NE.
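The difference Vαβ can likewise be evaluated numerically; the small jointly convex example below (objectives and set X are assumptions of this sketch, not from the text) illustrates that Vαβ is nonnegative on all of ℝn, also outside X, and vanishes at the normalized equilibrium:

```python
# Example (an assumption of this sketch): theta1 = (x1 - 1)^2, theta2 = (x2 - 1/2)^2,
# X = {y in [0,1]^2 : y1 + y2 <= 1}; normalized equilibrium x* = (3/4, 1/4).

def V(x, alpha, steps=41):
    # V_alpha(x) = max over y in X of the regularized Nikaido-Isoda function
    grid = [i / (steps - 1) for i in range(steps)]
    def psi(y1, y2):
        return ((x[0] - 1.0) ** 2 - (y1 - 1.0) ** 2 - 0.5 * alpha * (x[0] - y1) ** 2
                + (x[1] - 0.5) ** 2 - (y2 - 0.5) ** 2 - 0.5 * alpha * (x[1] - y2) ** 2)
    return max(psi(y1, y2) for y1 in grid for y2 in grid if y1 + y2 <= 1.0 + 1e-12)

def V_ab(x, alpha=1.0, beta=2.0):   # beta > alpha > 0, as in Theorem 2.5
    return V(x, alpha) - V(x, beta)

print(abs(V_ab((0.75, 0.25))) < 1e-12)  # True: zero exactly at the normalized NE
print(V_ab((2.0, 2.0)) > 0.0)           # True: nonnegative even far outside X
```

The point (2, 2) is infeasible for both players, yet Vαβ is still well defined and positive there, which is what makes the unconstrained reformulation attractive.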

Taking into account Proposition 1.2, a similar unconstrained reformulation (with smoothness properties) can be established which characterizes Nash equilibria of certain player convex but not necessarily jointly convex GNEPs, see [18]. For W according to (6), let S denote the closure of the convex hull of W and let PS(x) denote the Euclidean projection of x onto S. Then, the following result is known.

Theorem 2.6 (see [18]). Let the GNEP be player convex with Xν(x−ν) given by componentwise convex functions gν(·, x−ν) according to (1). Moreover, suppose that W ≠ ∅ and X(x) ≠ ∅ for all x ∈ S. Then, it holds for β > α > 0 and c > 0:

(a) V̂αβ(x) := V̂α(x) − V̂β(x) + c║x − PS(x)║² ≥ 0 for all x ∈ ℝn.

(b) x* is a NE if and only if V̂αβ(x*) = 0.

Note that a constrained reformulation for player convex GNEPs can be obtained from problem (9) when replacing the function V̂0 and the set X by V̂α (α > 0) and W according to Proposition 1.2 (see [45] for detailed studies).

2.3 Karush-Kuhn-Tucker conditions

Throughout this subsection we assume that the objective functions θν are continuously differentiable for all ν = 1,..., N and that the feasible sets of the players are defined by inequality constraints. More precisely, for each player ν = 1,..., N and each vector x−ν let Xν(x−ν) be given by

Xν(x−ν) = {xν ∈ ℝnν | gν(xν, x−ν) ≤ 0}   (11)

with C1-functions gν : ℝn → ℝmν. To keep it simple, we assume that only inequality constraints appear, cf. Remark 2.1 below. Let x* be a solution of the GNEP. Then, for every ν = 1,..., N, the vector x*,ν is a solution of the optimization problem (3) with x−ν := x*,−ν. Therefore, if an appropriate constraint qualification is satisfied at x*,ν, there are vectors λ*,ν ∈ ℝmν such that (x*,ν, λ*,ν) satisfies the classical Karush-Kuhn-Tucker (KKT) conditions

∇xν Lν(x*, λ*,ν) = 0,  λ*,ν ≥ 0,  gν(x*) ≤ 0,  (λ*,ν)ᵀ gν(x*) = 0,   (12)

where Lν : ℝn × ℝmν → ℝ denotes the Lagrangian of problem (3), i.e.,

Lν(x, λν) := θν(x) + (λν)ᵀ gν(x).
Conversely, if the GNEP is player convex, it is well known that any solution of the KKT system (12) yields a solution of the optimization problem (3) with x−ν := x*,−ν. Concatenating the KKT systems of all players we obtain the KKT system associated to the GNEP, namely

L(x, λ) = 0,  λ ≥ 0,  g(x) ≤ 0,  λᵀ g(x) = 0,   (13)

where

L(x, λ) := (∇x1 L1(x, λ1), ..., ∇xN LN(x, λN)),  λ := (λ1, ..., λN) ∈ ℝm

with m := m1 + ... + mN, and

g(x) := (g1(x), ..., gN(x)).
Summarizing our above observations we have

Theorem 2.7. Suppose that the feasible sets of the players are given by (11) and that θν : ℝn → ℝ and gν : ℝn → ℝmν are C1-functions. Then, the following assertions hold:

(a) If x* is a solution of the GNEP and if, for every ν = 1,..., N, an appropriate constraint qualification is satisfied at x*,ν, then there is λ* ∈ ℝm so that (x*, λ*) solves the KKT system (13).

(b) If the GNEP is player convex and if (x*, λ*) solves the KKT system (13), then x* is a solution of the GNEP.

Remark 2.1. If equality constraints appear in the GNEP, they can easily be incorporated into the KKT system (13). Doing this, assertion (a) of Theorem 2.7 remains true, whereas assertion (b) remains valid in the case of affine equality constraints.

It is well known that the KKT system (13) can be reformulated as a nonlinear and possibly nonsmooth system of equations. Such reformulations may be important for the construction of algorithms for the solution of (13) as well as for the analysis of properties of the solution set. The nonsmooth reformulation

Hmin(x, λ) := (L(x, λ), min{λ, −g(x)})   (14)

is, for instance, used in [27] and [49]. The minimum has to be taken componentwise. Obviously, a point ɀ* = (x*, λ*) is a solution of (13) if and only if Hmin(ɀ*) = 0. A further useful reformulation of the KKT system associated to a GNEP is given by the following constrained system of equations:

H(ɀ) = 0,  ɀ ∈ Ω,  with  H(x, λ, w) := (L(x, λ), g(x) + w, λ ∘ w),  ɀ := (x, λ, w),  Ω := ℝn × ℝ+m × ℝ+m.   (15)

Obviously, a point (x*, λ*) is a KKT point of the GNEP if and only if (x*, λ*, −g(x*)) is a solution of (15). The reformulation (15) was used in [16] and [15] to describe and analyze algorithms for the solution of (13).
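For a small instance these reformulations are easy to verify numerically. The example below (problem data and its KKT point are assumptions of this sketch, not taken from the text) evaluates the nonsmooth residual Hmin at a KKT point:

```python
# KKT residual check for a small illustrative GNEP (an assumption of this sketch):
#   player 1: min (x1 - 1)^2    s.t. g(x) = x1 + x2 - 1 <= 0
#   player 2: min (x2 - 1/2)^2  s.t. g(x) <= 0
# One solution is x* = (3/4, 1/4) with multipliers lambda* = (1/2, 1/2).

def H_min(x1, x2, lam1, lam2):
    g = x1 + x2 - 1.0
    return [
        2.0 * (x1 - 1.0) + lam1,   # grad_x1 L1(x, lam1)
        2.0 * (x2 - 0.5) + lam2,   # grad_x2 L2(x, lam2)
        min(lam1, -g),             # componentwise min, player 1
        min(lam2, -g),             # componentwise min, player 2
    ]

residual = max(abs(v) for v in H_min(0.75, 0.25, 0.5, 0.5))
print(residual)   # 0.0: (x*, lambda*) solves the concatenated KKT system
```

At a non-KKT point the residual is nonzero, which is what equation-based methods exploit.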

Next, let us consider a jointly convex GNEP. According to its definition (see Subsection 1.2), the feasible set (11) of player ν is given by Xν(x−ν) = {xν ∈ ℝnν | G(xν, x−ν) ≤ 0} with a componentwise convex function G : ℝn → ℝM, where m1 = ··· = mN = M. Due to the convexity assumptions on the constraints and the objective functions, the KKT conditions (13) are sufficient conditions.

In Theorem 2.2 it was stated that any solution of the variational inequality VI(X, F) associated to a jointly convex GNEP is also a solution of the GNEP. Now, with some multiplier vector Λ ∈ ℝM, the KKT system of VI(X, F) reads as

F(x) + ∇G(x)Λ = 0,  Λ ≥ 0,  G(x) ≤ 0,  Λᵀ G(x) = 0.   (16)
The next theorem states relations between solutions of a jointly convex GNEP, of the KKT system (13) for this GNEP, of the variational inequality VI(X, F) associated to the GNEP, and of the KKT system (16) belonging to VI(X, F). For a proof see [26].

Theorem 2.8. Let the GNEP be jointly convex with C1-functions θ1,..., θN and G. Then, the following assertions are valid.

(a) Let x* be a solution of VI(X, F) and Λ* ∈ ℝM so that (x*, Λ*) solves the KKT conditions (16). Then, x* is a solution of the GNEP and (x*, λ*) is a solution of the corresponding KKT conditions (13) with λ*,1 := ... := λ*,N := Λ*.

(b) Let x* be a solution of the GNEP and λ* ∈ ℝm so that the KKT conditions (13) are satisfied with λ*,1 = ... = λ*,N. Then, x* is a solution of VI(X, F) and (x*, Λ*) satisfies the corresponding KKT conditions (16) with Λ* := λ*,1.

Theorem 2.8 also characterizes the normalized NEs defined in Subsection 2.1.
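Theorem 2.8 can be observed on a small jointly convex example (all data below are assumptions of this sketch, not from the text): the variational equilibrium carries equal player multipliers, while other solutions of the same GNEP do not:

```python
# Illustrative jointly convex example (an assumption of this sketch):
#   theta1 = (x1 - 1)^2, theta2 = (x2 - 1/2)^2, shared G(x) = x1 + x2 - 1 <= 0.
# The variational equilibrium is x* = (3/4, 1/4).

x1, x2 = 0.75, 0.25
# per-player multipliers from grad_xv theta_v + lam_v * grad_xv G = 0:
lam1 = -2.0 * (x1 - 1.0)   # = 0.5
lam2 = -2.0 * (x2 - 0.5)   # = 0.5
Lam = lam1                 # equal multipliers yield the VI multiplier Lam* = lam*,1

print(lam1 == lam2)                                    # True, as in assertion (b)
print(2.0 * (x1 - 1.0) + Lam, 2.0 * (x2 - 0.5) + Lam)  # 0.0 0.0: (16) holds

# a non-variational solution (0.9, 0.1) of the same GNEP has unequal multipliers:
y1, y2 = 0.9, 0.1
print(-2.0 * (y1 - 1.0) == -2.0 * (y2 - 0.5))          # False
```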

3 ERROR BOUNDS

Throughout this section we consider GNEPs where, for ν = 1,..., N, the optimization problem of player ν is given by

min θν(xν, x−ν)  subject to  gν(xν, x−ν) ≤ 0.

Moreover, we also assume that the problem functions θν and gν are C2. For systems of equations, optimization problems, and variational inequalities it is well known that error bounds can be very useful for the design and convergence analysis of algorithms and for deriving theoretical results, see, e.g., [23, 32, 47, 64]. However, the field of error bounds for GNEPs is still in its infancy.

Our aim is to provide recent local error bound results for the solution set of the KKT system (13) of a GNEP. To this end, let Z denote the set of all points ɀ = (x, λ) that solve (13). Throughout this section we suppose that ɀ* = (x*, λ*) is an arbitrary but fixed element of Z. In Section 2.3 we mentioned that (13) can be reformulated by

Hmin(ɀ) = 0

with ɀ = (x, λ) and Hmin defined by (14). We say that Hmin provides a local error bound at ɀ* if there are δ > 0 and ω > 0 such that

dist[ɀ, Z] ≤ ω ║Hmin(ɀ)║  for all ɀ ∈ Bδ(ɀ*),   (17)

where dist[ɀ, Z] := inf{║ɀ − ξ║ | ξ ∈ Z} denotes the (Euclidean) distance of ɀ to the set Z. In the first part of this section we will state conditions that imply the existence of δ and ω such that (17) holds.

Proposition 3.1. Suppose that the functions θ1,..., θN are quadratic and that the functions g1,..., gN are affine. Then, there is ω > 0 so that (17) is satisfied for any δ > 0.

This assertion follows from a well known result on polyhedral multifunctions [70], see also [27, Theorem 8]. Although the particular class of GNEPs in Proposition 3.1 may appear in applications, one is interested in dealing with more general cases. A first result is Theorem 9 in [27]. For its description the map H*min needs to be defined by deleting certain rows from Hmin. More in detail, H*min is obtained from Hmin by successively removing rows min{λμj, −gμj(x)} if gμj(x*) = 0 and another (undeleted) row min{λνi, −gνi(x)} exists with gνi = gμj. Note that H*min has fewer components than n + m if at least one constraint that is active at x* is shared by more than one player.

Theorem 3.1. Suppose that strict complementarity holds at ɀ*, i.e., gνi(x*) = 0 implies λ*,νi > 0 for arbitrary ν ∈ {1,..., N} and i ∈ {1,..., mν}. If the Jacobian JH*min(ɀ*) has full row rank then there are δ > 0 and ω > 0 such that (17) is satisfied.

Note that the strict complementarity assumption guarantees that Hmin and H*min are continuously differentiable in a certain neighborhood of ɀ*. In particular, the Jacobian JH*min(ɀ*) is well defined.
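In the quadratic/affine setting of Proposition 3.1, the bound (17) can be observed numerically. The example below (problem data, the parametrization of the solution set Z, and the sample points are all assumptions of this sketch, not taken from the text) estimates the ratio dist[ɀ, Z]/║Hmin(ɀ)║ near a solution:

```python
# Quadratic/affine example covered by Proposition 3.1 (an assumption of this sketch):
#   player 1: min (x1 - 1)^2, player 2: min (x2 - 1/2)^2, s.t. x1 + x2 - 1 <= 0.
# Its KKT solution set is Z = {(a, 1 - a, 2(1 - a), 2a - 1) : a in [1/2, 1]}.

def Hmin_norm(z):
    x1, x2, l1, l2 = z
    g = x1 + x2 - 1.0
    r = [2.0 * (x1 - 1.0) + l1, 2.0 * (x2 - 0.5) + l2, min(l1, -g), min(l2, -g)]
    return sum(v * v for v in r) ** 0.5

def dist_to_Z(z, steps=20001):
    # distance to Z, approximated over a fine grid of the parameter a
    best = float("inf")
    for i in range(steps):
        a = 0.5 + 0.5 * i / (steps - 1)
        w = (a, 1.0 - a, 2.0 * (1.0 - a), 2.0 * a - 1.0)
        best = min(best, sum((u - v) ** 2 for u, v in zip(z, w)) ** 0.5)
    return best

# sample points near z* = (0.75, 0.25, 0.5, 0.5) and estimate the ratio in (17)
z_star = (0.75, 0.25, 0.5, 0.5)
ratios = []
for k in range(1, 6):
    t = 0.01 * k
    z = (z_star[0] + t, z_star[1] - 0.5 * t, z_star[2] + t, z_star[3] - t)
    ratios.append(dist_to_Z(z) / Hmin_norm(z))
print(max(ratios) < 10.0)  # True: dist[z, Z] / ||Hmin(z)|| stays bounded near z*
```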

In [49] a weaker condition is derived that also ensures that Hmin provides a local error bound at ɀ*. In particular, this condition does not require strict complementarity at ɀ*. Before stating the result some further definitions are needed. For simplicity of presentation, let us assume that only shared constraints occur, i.e., that there is a function G : ℝn → ℝM with g1 = ··· = gN = G. Then, with the index sets

and

for all ν = 1,..., N, let the matrix J(x*, λ*) be defined by

where E+(x*, λ*) := blockdiag(···). Note that, for any index set I ⊆ {1,..., M}, the function GI : ℝn → ℝ|I| consists of all components of G with indices in I.

Theorem 3.2. Consider a GNEP with shared constraints only and suppose that J(x*, λ*) has full row rank. Then, there are δ > 0 and ω > 0 so that (17) is satisfied.

In [49, Remark 1] the authors explain that the full row rank assumption of Theorem 3.2 can even be weakened. Independently of [49] and nearly at the same time, the paper [15] was submitted, which also deals with sufficient conditions for a local error bound to hold. There, another reformulation of the KKT system of a GNEP is considered, namely the constrained system of equations

where H, ɀ, and Ω are defined in (15). Due to the differentiability assumptions for the GNEP in the beginning of this section, H is C1. This might be an advantage compared to Hmin. By the definition of H, one cannot expect that H provides a local error bound of the form (17) with H instead of Hmin. Rather, one has to intersect the neighborhood of ɀ* with Ω. For the sake of simplicity, we state the result from [15] only for GNEPs with shared constraints. ZH denotes the solution set of the constrained system (15).

Theorem 3.3. Consider a GNEP with shared constraints only and suppose that for any i ∈ I0 an index ν(i) ∈ {1,..., N} exists so that λ*,ν(i)i > 0 and that the matrix

is nonsingular, where blockdiag(∇x1GJ1(x*),..., ∇xNGJN(x*)) is used with Jν := {i ∈ I0 | ν = ν(i)} (ν = 1,..., N). Then, there are δ > 0 and ω > 0 so that (18) is satisfied.

If δ, ω > 0 exist so that (18) holds, H is said to provide a local error bound at ɀ* on Ω.

4 STRUCTURAL PROPERTIES

This section is devoted to some structural properties of generalized Nash equilibria. The next subsection is based on [13, 14] and highlights certain generic properties of solutions, in particular related to their isolatedness or nonisolatedness. Subsection 4.2 deals with approaches for characterizing all or a subset of solutions of a GNEP.

4.1 Generic properties

We consider GNEPs where, for every ν = 1,..., N, the ν-th player's optimization problem is given by

with C2-functions θν : ℝn → ℝ, gν : ℝn → ℝmν, and G : ℝn → ℝM. The common constraints G(x) ≤ 0 are shared by all players, while the inequalities gν(x) ≤ 0 describe individual constraints.

Let us first explain what is meant by a "generically satisfied property". Obviously, every GNEP is characterized by its defining problem functions θν, gν, and G. We assumed that all these functions are twice continuously differentiable. Let the space C2(ℝn) of twice continuously differentiable functions be endowed with the Whitney topology and the product space of GNEP defining functions

with the product Whitney topology. For the definition of the Whitney topology we refer to [13, 14] and references therein. We say that a property of GNEPs is satisfied generically if, regarding the Whitney topology, there is an open and dense subset of the space of GNEP defining functions such that the property holds for all instances of this subset.

In the following we first consider a subclass of GNEPs, namely problems without common constraints. Thus, for every ν =1,...,N, the optimization problem of player ν is

Let us fix ν for the moment and let x*,-ν be given. Recall that the KKT conditions of problem (20) with x-ν := x*,-ν are given by (12). A solution (x*,ν, λ*,ν) of (12) is called a nondegenerate KKT point of problem (20) if the following conditions are satisfied:

- LICQ is satisfied in x*,ν,

- strict complementarity holds in (x*,ν, λ*,ν),

- the matrix

is nonsingular, where the columns of V ∈ ℝn×(n−|Iν0(x*)|) form a basis of the tangent space

where Iν0(x*) := {j ∈ {1,..., mν} | gνj(x*,ν, x*,-ν) = 0}.

Concatenating the KKT systems of all players, the KKT system of the GNEP

is obtained, cf. Section 2.3. A solution (x*, λ*) is called a nondegenerate KKT point of the GNEP if, for every ν = 1,..., N, (x*,ν, λ*,ν) is a nondegenerate KKT point of the optimization problem (20) with x-ν = x*,-ν. The x-part of a nondegenerate KKT point of a GNEP is not necessarily an isolated Nash equilibrium (although the points x*,ν are isolated solutions of the optimization problems (20), given x*,-ν). Thus, to ensure isolatedness, a further condition is needed. To this end, for a KKT point (x*, λ*) of the GNEP, the function F1 : ℝn+m → ℝn+m is defined by

By m we still denote the total number of (individual) constraints, i.e., m = ΣNν=1 mν. Every solution (x, λ) of the nonlinear system F1(x, λ) = 0 with λ ≥ 0 and g(x) ≤ 0 is a KKT point of the GNEP. The converse is generally not true. However, if (x*, λ*) is a nondegenerate KKT point of the GNEP, at least every solution of (21) in a certain neighborhood of (x*, λ*) solves the problem F1(x, λ) = 0. Note that, due to the differentiability assumptions made in this subsection, F1 is continuously differentiable.

A KKT point (x*, λ*) is called a jointly nondegenerate KKT point of the GNEP if (x*, λ*) is a nondegenerate KKT point of the GNEP and the matrix JF1(x*, λ*) is nonsingular. Obviously, a jointly nondegenerate KKT point is a locally unique solution of F1(x, λ) = 0. The following result is Theorem 3.2 from [13].

Theorem 4.1. Generically, for a GNEP without common constraints the following property is satisfied. For every solution x* of the GNEP there is a unique vector λ* of multipliers so that (x*, λ*) is a jointly nondegenerate KKT point of the GNEP.

In the case of GNEPs where common constraints occur, i.e., where the ν-th player's problem is given according to (19), it cannot be expected that every solution of the GNEP provides a KKT point, see [14, Example 2]. This example is stable with respect to small perturbations of the defining functions. This motivates considering Fritz John (FJ) points instead of KKT points. Let ν ∈ {1,..., N} and x*,-ν be arbitrary but fixed. Then the FJ conditions of problem (19) with x-ν := x*,-ν are given by

By concatenating the FJ conditions of all players, the FJ conditions of the GNEP are obtained. A solution (x*, ξ*, λ*, Λ*) ∈ ℝn+N+m+NM of the FJ conditions is called an FJ point of the GNEP.

There is a relation between the set of FJ points of the GNEP and the solution set of the nonlinear system F2(x, γ) = 0, where F2 : ℝn+N+m+NM → ℝn+m+M+N is defined by

where, for y ∈ ℝ, y+ := max{0, y} and y− := min{0, y} and, for y1,..., yp ∈ ℝ,

The next proposition is about the characterization of the set of FJ points of the GNEP by means of the nonsmooth function F2. For a proof we refer to [14, Lemma 2.1].

Proposition 4.1. For a GNEP and x* ∈ ℝn, the following assertions are equivalent.

(i) There are ξ* ∈ ℝN, λ* ∈ ℝm, and Λ* ∈ ℝNM such that (x*, ξ*, λ*, Λ*) is an FJ point of the GNEP.

(ii) There is γ* ∈ ℝN+m+NM such that F2 (x *, γ*) = 0.

Before we state the next result let us define the sets

and, for γ ∈ ℝN+m+NM,

Theorem 4.2. Generically, for a GNEP with common constraints the following properties are satisfied.

(a) For any solution (x*, γ*) of the system F2(x, γ) = 0 it holds that |M0(γ*)| ≤ (N − 1)M and that all matrices V ∈ ∂F2(x*, γ*) have full row rank.

(b) The set F2−1(0) is a Lipschitz manifold of dimension (N − 1)M.

This result follows from Theorem 2.2 and Corollary 2.3 in [14], where the reader can also find the definition of a Lipschitz manifold. The main conclusion of assertion (b) of Theorem 4.2 is that, generically, FJ points of a GNEP are not isolated.

In [14] a Newton-type projection method for the solution of the system F2(x, γ) = 0 is provided. In each step of this method a linear system of equations must be solved; for details on convergence properties of the resulting algorithm see [14].

4.2 Description of all solutions of a GNEP

As can already be seen from simple examples, the solution set of a GNEP may consist of nonisolated solutions. Therefore, ways of characterizing all (or a subset of all) solutions of a GNEP by means of a parametrized simpler problem are useful, particularly within applications where one is interested in having a good approximation of the solution set. The definition of suitable parametrized problems is still a big challenge. At least for jointly convex GNEPs results exist. Below, we will review some of them. To this end, let us consider F : ℝn → ℝn given by (7) and let us define Fb : ℝn → ℝn by

for a parameter vector b = (b1,..., bN)T ∈ (0, ∞)N.

Theorem 4.3. Let θ1,..., θN be C1-functions and let the GNEP be jointly convex. Then,

This theorem is a direct consequence of Theorem 2.2 since SOL(GNEP) does not change if θν is replaced by bνθν for ν = 1,..., N. Using this theorem one can (in principle) obtain a certain subset of solutions of the GNEP by solving the simpler variational inequalities VI(X, Fb) for all b ∈ (0, ∞)N. This subset is the set of all normalized Nash equilibria of the GNEP. To compute other solutions as well by means of a VI, one can try to substitute the set X or the map F by a suitably parametrized object. This idea was dealt with in [59]. To review a first result let Fω : ℝn → ℝn be defined by

where ω = (ων)Nν=1 ∈ [0, ∞)MN is a vector of parameters and G : ℝn → ℝM is a componentwise convex and continuous function used to define the set X in (5).
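The role of the parameter b in Theorem 4.3 can be illustrated on a hypothetical jointly convex toy GNEP (not an example from the paper) where the solution of VI(X, Fb) is known in closed form; the sketch below verifies the variational inequality ⟨Fb(x*), x − x*⟩ ≥ 0 numerically and shows that different b select different normalized equilibria.

```python
import numpy as np

# Hypothetical jointly convex GNEP: theta_nu(x) = (x_nu - 1)^2, N = 2,
# shared feasible set X = {x >= 0, x1 + x2 <= 1}.  For b in (0, inf)^2
# the solution of VI(X, F_b) with F_b(x) = (2*b1*(x1-1), 2*b2*(x2-1))
# is x* = (b1, b2)/(b1 + b2): each b picks out a different normalized NE.
def F_b(x, b):
    return 2.0 * b * (x - 1.0)

def vi_residual(x_star, b, samples):
    # worst value of <F_b(x*), x - x*> over sample points x in X
    return min(float(F_b(x_star, b) @ (x - x_star)) for x in samples)

rng = np.random.default_rng(2)
samples = []
while len(samples) < 500:          # sample X by rejection
    x = rng.random(2)
    if x.sum() <= 1.0:
        samples.append(x)

for b in [np.array([1.0, 1.0]), np.array([3.0, 1.0]), np.array([1.0, 9.0])]:
    x_star = b / b.sum()
    assert vi_residual(x_star, b, samples) >= -1e-12
```

Here Fb(x*) is a negative multiple of (1, 1), so the VI inequality reduces to the shared constraint x1 + x2 ≤ 1 itself; the equilibria traced out by b fill the segment {(t, 1 − t) : t ∈ (0, 1)}.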

Theorem 4.4. Let G and θ1,..., θN be C1-functions and let the GNEP be jointly convex with X given in (5). If, for any solution x* of the GNEP, a multiplier vector λ* exists so that the KKT conditions (13) with gν := G for ν = 1,..., N are satisfied for (x, λ) := (x*, λ*), then

A further result in [59] uses the parametrized set

where β = (β1,..., βN)T ∈ ℝN and gν : ℝnν → ℝM, ν = 1,..., N, are componentwise convex and continuous functions. These functions are used to define the map G : ℝn → ℝM in (5) in the following separable sense, namely

Theorem 4.5. Let θ1,..., θN be C1-functions and let the GNEP be jointly convex with X given by (5) and G given by (22). Then, with A := {β ∈ ℝN | β1 + ··· + βN = 0},

For refinements of the above two theorems and additional results we refer the reader to [59]. Another way of overestimating SOL(GNEP) is given in [36]. To review this, let Xy denote the feasible set of the NEP whose ν-th player solves the problem

where y denotes any element from X.

Theorem 4.6. Let the functions g1,..., gN be componentwise convex, θ1,..., θN be convex, and suppose that all these functions are C1. Moreover, let the GNEP be jointly convex with X given by (5) and G given by (22). If, for any solution x* of the GNEP, a multiplier vector λ* exists so that the KKT conditions (13) with gν := G for ν = 1,..., N are satisfied for (x, λ) := (x*, λ*), then

Finally, we would like to direct the reader's attention to [68], where relations between solutions of a GNEP with shared constraints and the nondominated points of the set X in (4) are provided. This may pave the way to the use of methods from multicriteria optimization for computing a subset of solutions of the GNEP.

5 GLOBALLY CONVERGENT ALGORITHMS

The algorithms we present in this section generate a solution of a GNEP under certain assumptions for starting points that are not required to lie close to a solution. From our point of view, these algorithms turned out to be promising under suitable (but possibly restricted) circumstances. There are further interesting approaches which we cannot discuss here, see [44, 53, 55] for some recent ones.

5.1 Decomposition methods

Due to the special structure of GNEPs the use of decomposition methods is a very natural approach to find a solution. Two important examples of decomposition methods are a Jacobi-type method and a Gauss-Seidel-type method. The idea is the following. Let xk ∈ ℝn be a given iterate. Then, for each ν = 1,..., N, xνk+1 is determined as a best response of player ν to fixed strategies of the other players. Thus, xνk+1 has to be determined as a solution of the minimization problem

in the case of a Jacobi-type method and of

in the case of a Gauss-Seidel-type method. Now, we describe the latter. The Jacobi-type method uses subproblem (23) instead of (24).

Algorithm 1 (Gauss-Seidel-type Method).

(S0): Choose a starting point x0 ∈ X(x0). Set k := 0.

(S1): If xk is a solution of the GNEP: STOP.

(S2): For any ν = 1,..., N, determine xνk+1 as a solution of (24).

(S3): Set xk+1 := (x1k+1,..., xNk+1), k := k + 1, and go to (S1).
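For illustration, here is a minimal sketch of Algorithm 1 on a hypothetical two-player NEP without shared constraints, chosen so that both best responses are available in closed form (a Jacobi-type variant would respond to the old iterate x1k instead of x1k+1 in player 2's step):

```python
# Hypothetical two-player NEP with quadratic costs:
#   player 1: min_{x1} (x1 - 0.5*x2 - 1)^2   ->  BR1(x2) = 1 + 0.5*x2
#   player 2: min_{x2} (x2 - 0.5*x1 - 1)^2   ->  BR2(x1) = 1 + 0.5*x1
# Unique Nash equilibrium: x* = (2, 2); the BR map is a contraction.
def gauss_seidel(x1, x2, tol=1e-10, max_iter=1000):
    for k in range(max_iter):
        x1_new = 1.0 + 0.5 * x2       # player 1 responds to current x2
        x2_new = 1.0 + 0.5 * x1_new   # player 2 already uses the new x1
        if abs(x1_new - x1) + abs(x2_new - x2) < tol:
            return x1_new, x2_new, k
        x1, x2 = x1_new, x2_new
    return x1, x2, max_iter

x1, x2, iters = gauss_seidel(0.0, 0.0)
assert abs(x1 - 2.0) < 1e-8 and abs(x2 - 2.0) < 1e-8
```

Convergence here is due to the contraction property of the best-response map; as the following paragraph explains, such behavior cannot be expected for general GNEPs.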

Unfortunately, despite their practical importance, convergence of Jacobi-type and Gauss-Seidel-type methods is hard to prove, so that they are used as heuristic methods. Only in some applications can convergence of Algorithm 1 be shown, see [65] for an example. In general, convergence of such methods cannot be expected, even for quite well-behaved problems, see [35, Section 3.1] for an example.

However, a regularized version of the Gauss-Seidel-type method is analyzed in [35] and shown below. The subproblems of this method are

In comparison to (24), the subproblem (25) contains an additional regularization term in the objective function, where τk > 0 is a parameter.
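For a one-dimensional quadratic player cost, the effect of the regularization can be computed in closed form. The sketch below assumes the regularization term has the form τ‖xν − xνk‖² (the exact expression in (25) is not reproduced above):

```python
# Regularized best response for a quadratic player cost
# theta(x_nu) = (x_nu - t)^2, regularized by tau * (x_nu - xk)^2
# (an assumed form of the term in (25)):
#   argmin (x - t)^2 + tau*(x - xk)^2  =  (t + tau*xk) / (1 + tau)
def regularized_br(t, xk, tau):
    return (t + tau * xk) / (1.0 + tau)

t, xk = 2.0, 0.0
assert regularized_br(t, xk, 0.0) == t                  # tau = 0: plain best response
assert abs(regularized_br(t, xk, 100.0) - xk) < 0.02    # large tau: stay near xk
for tau in [0.1, 1.0, 10.0]:
    # the regularized response interpolates between xk and the best response
    assert 0.0 <= regularized_br(t, xk, tau) <= t
```

The regularization thus damps the step taken by each player, which is exactly what makes the convergence analysis of Algorithm 2 below possible.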

Algorithm 2 (Regularized Gauss-Seidel-type Method).

(S0): Choose a starting point x0 ∈ X(x0) and τ0 > 0. Set k := 0.

(S1): If xk is a solution of the GNEP: STOP.

(S2): For any ν = 1,..., N, determine xνk+1 as a solution of (25).

(S3): Set xk+1 := (x1k+1,..., xNk+1), k := k + 1, choose τk+1 > 0, and go to (S1).

The idea for Algorithm 2 comes from [42], where a regularized Gauss-Seidel-type method for the solution of optimization problems is provided. In [35] convergence properties of Algorithm 2 are analyzed for a subclass of GNEPs with shared constraints, namely the generalized potential games (GPGs for short). A GNEP is called a GPG if

(a) the feasible sets Xν(x-ν) are given according to (4) with a nonempty, closed set X ⊆ ℝn and

(b) there is a continuous function Q : ℝn → ℝ so that, for all ν = 1,..., N, for all x-ν ∈ dom(Xν), and for all yν, ɀν ∈ Xν(x-ν),

    implies

    for some forcing function σ : ℝ+ → ℝ+ .

Here, dom(Xν) := {x-ν | Xν(x-ν) ≠ ∅}. Moreover, σ : ℝ+ → ℝ+ is called a forcing function if σ(tk) → 0 implies tk → 0.

The definition of a GPG is inspired by [56], where potential games for NEPs are introduced. Property (b) of a GPG is particularly satisfied if, for each ν, θν is continuous and depends only on the strategies xν of player ν. Then, Q can be chosen as Q(x) = ΣNν=1 θν(xν). Moreover, if, for each ν, θν is continuous and θν(x) = c(x) + dν(xν) with a function c that is the same for all players and a function dν that depends only on the ν-th player's variables, property (b) is also valid with Q(x) = c(x) + ΣNν=1 dν(xν).
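The separable sufficient condition just described can be checked numerically: with θν(x) = c(x) + dν(xν), a unilateral strategy change by player ν decreases θν by exactly the same amount as it decreases the potential Q. A small sketch with hypothetical choices of c and dν:

```python
import numpy as np

# Separable costs theta_nu(x) = c(x) + d_nu(x_nu) with a common term c.
# Then Q(x) = c(x) + sum_nu d_nu(x_nu) is an exact potential: unilateral
# changes in theta_nu and in Q coincide.  (c and d are hypothetical.)
c = lambda x: np.sum(x) ** 2
d = [lambda t: t ** 2, lambda t: 3.0 * t ** 2]   # N = 2 players

def theta(nu, x):
    return c(x) + d[nu](x[nu])

def Q(x):
    return c(x) + sum(d[nu](x[nu]) for nu in range(2))

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(2)
    y = x.copy()
    nu = int(rng.integers(2))
    y[nu] = rng.standard_normal()   # only player nu changes strategy
    # decrease in player nu's cost equals the decrease in the potential
    assert np.isclose(theta(nu, x) - theta(nu, y), Q(x) - Q(y))
```

The individual terms of the other players cancel in Q(x) − Q(y), which is why any forcing function applied to the θν-decrease also controls the Q-decrease in property (b).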

For player convex GPGs, Theorem 4.3 in [35] on Algorithm 2 can be stated as follows.

Theorem 5.1. Consider a player convex GPG. Suppose that θ1,..., θN are continuous and that the point-to-set maps Xν(·) are inner-semicontinuous relative to dom(Xν) for all ν = 1,..., N. Moreover, suppose that τk = τ > 0 for all k ∈ ℕ. Then, Algorithm 2 is well defined. If {xk} is a sequence generated by Algorithm 2, every accumulation point of {xk} is a solution of the GPG.

Due to the assumptions of the previous theorem, the subproblems (25) are strongly convex optimization problems and have a unique solution, so that Algorithm 2 is well defined. The inner-semicontinuity requirement for the maps Xν(·) in Theorem 5.1 says that if x belongs to X and, for any player ν, we consider any sequence {xk} ⊆ dom(Xν) with xk → x-ν, then there are points xνk ∈ Xν(xk) so that xνk → xν. This is not too restrictive a requirement, see [35, Section 2] for a discussion.

A further result in [35] concerns convergence properties of Algorithm 2 for GPGs that are not necessarily player convex. There, it is explicitly assumed that the subproblems of Algorithm 2 always have a solution. Furthermore, a special updating rule for the parameters τk is used. More precisely, τk+1 in (S3) of Algorithm 2 is computed by

The following theorem summarizes the results of Lemma 5.1 and Theorem 5.2 in [35].

Theorem 5.2. Consider a GPG. Suppose that θ1,..., θN are continuous and that the point-to-set maps Xν(·) are inner-semicontinuous relative to dom(Xν) for all ν = 1,..., N. Moreover, suppose that the subproblems of Algorithm 2 always have a solution and that τk+1 in (S3) is obtained by (26) for all k ∈ ℕ. If a sequence {xk} generated by Algorithm 2 has an accumulation point, then {τk} converges to 0. Furthermore, if K denotes an infinite subset of ℕ with τk+1 < τk for all k ∈ K, then every accumulation point of the subsequence {xk}k∈K is a solution of the GPG.

Note that, even if the entire sequence {xk} has an accumulation point, the existence of an accumulation point of the subsequence {xk}k∈K is not guaranteed.

5.2 Penalty methods

In this subsection we consider GNEPs where, for any ν = 1,..., N, the optimization problem of player ν is given by

with C1-functions θν : ℝn → ℝ and gν : ℝn → ℝmν. Moreover, we assume that, for all x-ν ∈ ℝn-nν, the function θν(·, x-ν) is convex and gν(·, x-ν) is componentwise convex.

The penalty algorithm that we are going to describe was first proposed in [33] and analyzed in more detail in [29]. The subproblems of the method are NEPs where the ν-th player's optimization problem

arises from (27) by penalization with a penalty parameter ρν > 0. The NEP resulting from concatenating (28) for ν = 1,..., N is denoted by PNEPρ with ρ = (ρ1,..., ρN)T. By ║ɀ║γ := (Σi |ɀi|γ)1/γ the γ-norm of a vector ɀ is denoted for some fixed γ ∈ (1, ∞). For γ = 2 we still write ║·║. Moreover, ɀ+ := max{0, ɀ} is understood componentwise. Note that, for any x-ν, the optimization problem (28) is unconstrained, convex, and nonsmooth. To deal with the nonsmoothness of ║·║γ, let the index set

be defined for any x ∈ ℝn. The penalty method in [29] updates the penalty parameters as follows. For given iterates x and ρ = (ρ1,..., ρN)T, the parameter ρν with ν ∈ P(x) is replaced by 2ρν if

holds with some fixed cν ∈ (0, 1). The idea behind this is to increase ρν only if the gradient of the penalty term is not sufficiently larger than the gradient of the objective function. Note that the gradient ∇xν(║gν(x)+║γ) is well defined due to the definition of P(x). To formulate the penalty method we suppose that there is an iterative algorithm A which, starting from a point xk, determines a new point xk+1 := A[xk], and so on. Moreover, for any ρ = (ρ1,..., ρN)T and any starting point x0 ∈ ℝn, Algorithm A is assumed to generate a sequence {xk} whose accumulation points solve PNEPρ. Now, we are in a position to state the penalty algorithm from [29].

Algorithm 3 (Penalty Method).

(S0): Choose x0 ∈ ℝn, γ ∈ (1, ∞), ρ1,0,..., ρN,0 > 0, and c1,..., cN ∈ (0, 1). Set k := 0.

(S1): If xk is a solution of the GNEP: STOP.

(S2): For any ν = 1,..., N: if ν ∈ P(xk) and if (29) is valid for x = xk and ρν = ρν,k, then set ρν,k+1 := 2ρν,k. Else, set ρν,k+1 := ρν,k.

(S3): Determine xk+1 := A[xk], set k := k + 1, and go to (S1).
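The γ-norm penalty term ║gν(x)+║γ appearing in the subproblems (28) is straightforward to evaluate; a small sketch with hypothetical constraint values:

```python
import numpy as np

# gamma-norm penalty ||g(x)_+||_gamma for gamma in (1, inf):
# only violated constraint components (g_i(x) > 0) contribute.
def penalty_term(g_vals, gamma):
    plus = np.maximum(g_vals, 0.0)            # componentwise z_+ = max{0, z}
    return np.sum(plus ** gamma) ** (1.0 / gamma)

g = np.array([-1.0, 0.5, 2.0])                # hypothetical constraint values
assert np.isclose(penalty_term(g, 2.0), np.sqrt(0.5**2 + 2.0**2))
# feasible points incur no penalty:
assert penalty_term(np.array([-3.0, -0.1]), 1.5) == 0.0
```

Since the penalty vanishes on the feasible set and is positive elsewhere, minimizing (28) balances the player's objective against constraint violation, with ρν controlling the trade-off.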

In [29, Theorem 2.5] the following convergence result is proven under the above assumption on Algorithm A.

Theorem 5.3. Let {(xk, ρ1,k,..., ρN,k)} be an infinite sequence generated by Algorithm 3 and let the index set I∞ be defined by

If I∞ = ∅, then every accumulation point x* of {xk} is a solution of the GNEP. If I∞ ≠ ∅ and if the sequence {xk} is bounded, then, for each ν ∈ I∞, there is an accumulation point x* of {xk} for which one of the following assertions is true:

(a)

(b) x*,ν is the primal part of a Fritz John point of problem (27) with x-ν := x*,-ν but not a global minimizer of (27),

(c) x*,ν is a global minimizer of (27) with x-ν := x*,-ν.

Note that I∞ = ∅ means that all penalty parameters are updated a finite number of times only, while I∞ ≠ ∅ indicates that at least one penalty parameter grows to infinity. A sufficient condition for I∞ = ∅ (if {xk} is bounded) is the extended Mangasarian-Fromovitz constraint qualification (EMFCQ for short). A proof of this assertion can be found in [29, Theorem 2.8]. A GNEP satisfies the EMFCQ at a point x* if, for every player ν = 1,..., N, there is a vector dν such that

where Iν+(x*) := {i ∈ {1,..., mν} | gνi(x*) ≥ 0} denotes the index set of all active and violated constraints at the point x* for the ν-th player.

Corollary 5.1.Let {xk } be a sequence generated by Algorithm 3. If this sequence is bounded and if EMFCQ holds at every accumulation point of {xk } then each accumulation point of {xk } is a solution of the GNEP.

In [29], another constraint qualification (named CQγ) is shown to be weaker than EMFCQ but, still, to imply that each accumulation point of a bounded sequence generated by Algorithm 3 solves the GNEP. The definition of CQγ and references where such a constraint qualification was used in the case of optimization problems can be found in [29] and [31].

For jointly convex GNEPs with X given in (5), a slight modification of Algorithm 3 is analyzed in [29] under the assumption that G : ℝn → ℝM is continuously differentiable. Then, if (S2) of Algorithm 3 is replaced by

Else, set ρν,k+1 := ρν,k for ν = 1,..., N, stronger results than in Theorem 5.3 were proved in [29]. In particular, if the penalty parameter grows to infinity and {xk} is bounded, there is an accumulation point x* of {xk} such that, for each ν = 1,..., N, one of the assertions (b) and (c) from Theorem 5.3 is true. Moreover, it can be shown that Slater's condition for the set X guarantees that the penalty parameters are increased a finite number of times only.

The problem that is left is the choice of an appropriate algorithm A for the solution of a subproblem PNEPρ. One possibility is to approximate the nonsmooth PNEPρ by a smooth one, where the ν-th player's optimization problem is given by

with some smoothing parameter ε > 0. The objective of (30) is strongly convex, so that (30) always has a unique solution. The resulting NEP is unconstrained, convex, and smooth, and is denoted by PNEPρ(ε). Hence, PNEPρ(ε) can equivalently be written as

where, for each ν, Pν(·, x-ν, ρν, ε) denotes the objective function of (30). If γ > 2 and if θ1,..., θN and g1,..., gN are C2-functions, then Fρ,ε is a C1-function since, for each ν = 1,..., N, the function Pν(·, x-ν, ρν, ε) is C2. In [29, Proposition 3.2] a result is proven showing that, for arbitrary sequences {εk}, {ηk} ⊂ (0, ∞) converging to 0, any accumulation point of a sequence {x(εk)} satisfying

is a solution of PNEPρ. Thus, one possibility to determine xk+1 in (S3) of Algorithm 3 is to perform some steps of an equation solver applied to the equation Fρk,εk(x) = 0 with some εk > 0 until a certain accuracy is reached. The parameters εk have to be updated in the outer iterations. For more details on the implementation of Algorithm 3 see [29].
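The precise smoothed objective (30) is not reproduced above; a common smoothing of the γ-norm term, stated here only as an assumption for illustration, appends ε as an extra component inside the norm. The sketch verifies that this smooth approximation dominates the exact penalty and converges to it as ε → 0:

```python
import numpy as np

# Assumed smoothing of ||g(x)_+||_gamma: treat eps as an additional
# component inside the norm, i.e. ( sum_i (g_i)_+^gamma + eps^gamma )^(1/gamma).
# This may differ in detail from formula (30) in the paper.
def smoothed_penalty(g_vals, gamma, eps):
    plus = np.maximum(g_vals, 0.0)
    return (np.sum(plus ** gamma) + eps ** gamma) ** (1.0 / gamma)

def exact_penalty(g_vals, gamma):
    return smoothed_penalty(g_vals, gamma, 0.0)

g = np.array([0.3, -1.0, 1.2])   # hypothetical constraint values
for eps in [1e-1, 1e-3, 1e-6]:
    # the smooth approximation dominates the exact term ...
    assert smoothed_penalty(g, 3.0, eps) >= exact_penalty(g, 3.0)
    # ... and the gap is at most eps (Minkowski inequality)
    assert smoothed_penalty(g, 3.0, eps) - exact_penalty(g, 3.0) <= eps
```

Because the ε-term keeps the argument of the outer power bounded away from zero wherever some constraint is violated or active, the smoothed function is differentiable even where the exact γ-norm penalty is not.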

The penalization approach described above transforms GNEPs into a sequence of NEPs. The same idea can be used in different ways. In [31], a partial penalization is provided where constraints hν(xν) ≤ 0 that depend only on the ν-th player's variables are not penalized. The subproblems of the resulting algorithm are constrained and nonsmooth NEPs where the ν-th player's optimization problem is

Another partial penalization approach is suggested in [40], where the nonsmoothness of ɀ+ is removed by introducing artificial variables.

5.3 Methods based on Nikaido-Isoda functions

In the beginning of this subsection, we consider jointly convex GNEPs where, for any ν = 1,..., N, the optimization problem of player ν is given by

with C 1-functions θν : ℝn → ℝ. At the end of this subsection, the convexity requirements will be weakened.

In order to compute normalized NEs for jointly convex GNEPs, we first exploit Theorem 2.5. The following descent algorithm was established in [74] for treating the unconstrained optimization problem

Algorithm 4 (Gradient Method).

(S0): Choose x0 ∈ ℝn, β > α > 0, ρ ∈ (0,1). Set k := 0.

(S1): If xk is a solution of the GNEP (i.e., Vαβ (xk ) = 0): STOP.

(S2): Compute dk := -∇Vαβ (xk ).

(S3): Compute tk := max{2-l | l = 0, 1, 2,...} such that

(S4): Set xk+1 := xk + tkdk, k := k + 1 and go to (S1).
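The backtracking structure of Algorithm 4 can be sketched compactly. In the following Python sketch (all names illustrative), `V` and `gradV` stand for user-supplied routines evaluating Vαβ and its gradient, each evaluation of which requires solving the inner problems for yα(x) and yβ(x); since formula (31) is not reproduced here, the standard Armijo rule is used as a stand-in.

```python
import numpy as np

def gradient_method(V, gradV, x0, rho=0.5, sigma=1e-4, tol=1e-8, max_iter=200):
    """Steepest descent on a merit function V with backtracking (sketch of Algorithm 4).

    V, gradV : callables evaluating V_ab and its gradient (assumed supplied).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if V(x) <= tol:               # (S1): V_ab(x) = 0 characterizes a solution
            return x
        d = -gradV(x)                 # (S2): steepest-descent direction
        t = 1.0                       # (S3): backtracking over t = 2^{-l}
        while t > 1e-12 and V(x + t * d) > V(x) - sigma * t * np.dot(d, d):
            t *= rho
        x = x + t * d                 # (S4)
    return x
```

In practice, [74] replaces this line search by a Barzilai-Borwein step size to avoid the extra evaluations of Vαβ.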

A rule for computing ∇Vαβ (x) can be found in [74, Theorem 4.3]. According to this rule, and also for computing Vαβ (x), it is necessary to determine the solutions yα (x) and yβ (x) of two constrained optimization problems (see part (d) of Theorem 2.4 for a definition). Initially, a Barzilai-Borwein step size [4] was implemented in [74] to avoid the evaluations of Vαβ caused by a line search. Later on, in [19], the above algorithm with the Armijo line search formula (31) was combined with a locally superlinearly convergent method (see Section 6). By standard arguments it can be shown that each accumulation point of a sequence {xk } generated by Algorithm 4 is a stationary point of Vαβ . Due to Theorem 4.5 of [74], such a point is a normalized NE of the GNEP provided that the gradients of Vαβ satisfy the following monotonicity property for all x ∈ ℝn with yβ (x) - yα (x) ≠ 0:

Another way to compute a normalized NE of a jointly convex GNEP is offered by Theorem 2.4. For treating the constrained optimization problem (10), i.e.,

the following algorithm was established and analyzed in [75]. It can be regarded as a relaxation (see [72]) of the fixed point iteration for solving x = yα (x), see Theorem 2.4.

Algorithm 5 (Relaxation Method).

(S0): Choose x0 ∈ X, α > 0, ρ ∈ (0, 1). Set k := 0.

(S1): If xk is a solution of the GNEP (i.e., Vα (xk ) = 0): STOP.

(S2): Compute yα (xk ) and set dk := yα (xk ) - xk .

(S3): Compute tk := max{2-l | l = 0, 1, 2,...} such that

(S4): Set xk+1 := xk + tkdk, k := k + 1 and go to (S1).
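The relaxation step in Algorithm 5 moves from xk toward the regularized best response yα(xk). A minimal sketch, assuming callables for yα and for the merit function Vα, and replacing the inexact line-search test of [75] by a generic sufficient-decrease condition (names illustrative):

```python
import numpy as np

def relaxation_method(y_alpha, V_alpha, x0, rho=0.5, sigma=1e-4, tol=1e-8, max_iter=200):
    """Relaxed fixed-point iteration for x = y_alpha(x) (sketch of Algorithm 5).

    y_alpha : callable returning the regularized best response y_a(x)
    V_alpha : merit function with V_a(x) = 0 iff x is a normalized equilibrium
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if V_alpha(x) <= tol:                    # (S1)
            return x
        d = y_alpha(x) - x                       # (S2): relaxation direction
        t = 1.0                                  # (S3): backtracking over t = 2^{-l}
        while t > 1e-12 and V_alpha(x + t * d) > (1.0 - sigma * t) * V_alpha(x):
            t *= rho
        x = x + t * d                            # (S4)
    return x
```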

The following theorem presents the results of Theorems 4.1 and 5.1 in [75].

Theorem 5.4. Consider a jointly convex GNEP and let, for some α > 0, one of the following two assumptions be satisfied:

  1. (a) For all xX with xyα (x) it holds

  2. (b) The function ψα(·, y) is convex for every y in some open set containing X.

Then, any accumulation point x* of a sequence {xk } generated by Algorithm 5 is a normalized NE of the GNEP.

In [75] it is shown that Assumption (b) of Theorem 5.4 still implies that every accumulation point of {xk } is a normalized NE if the utility functions θν are continuous but not necessarily differentiable.

For the remainder of this subsection, let us briefly deal with the case of not necessarily jointly convex, but player convex GNEPs. We assume that the feasible sets Xν (x−ν ) of the players are given by (1) with componentwise convex functions gν (·, x−ν ). According to Theorem 2.6,

is a suitable unconstrained optimization reformulation for player convex GNEPs. Under certain conditions, Vαβ is a piecewise continuously differentiable function, see [18] for details. In the latter paper (and in [17] for the jointly convex case), a robust gradient sampling algorithm [9] is used to minimize Vαβ. In general, one can only expect to obtain a stationary point of Vαβ.

However, as is clear from Theorem 2.6, a global minimizer of Vαβ is needed to obtain a NE. Therefore, it might be interesting to find conditions on the GNEP under which a stationary point of Vαβ turns out to be a global minimizer.

In [18], a constrained optimization reformulation for player convex GNEPs was also suggested and analyzed. In more detail, for α > 0, with α and W according to (8) and (6), the problem

is considered. In particular, smoothness properties are analyzed in [18] and [45]. Since α need not be defined outside W, a feasible direction algorithm is used in [45] for solving this problem.

We would like to underline that the above two optimization reformulations (32) and (33) for player convex and, consequently, also for jointly convex GNEPs allow a complete characterization of all NEs (see Theorem 2.6 with Proposition 1.2 and Theorem 2.3), and not only of normalized NEs, as was possible by Theorems 2.4 and 2.5 in the case of exclusively jointly convex GNEPs.

5.4 Potential reduction algorithm

This section is devoted to an algorithm for the solution of the KKT system (13) of a GNEP. We assume that the feasible sets of the players ν = 1,..., N are given by (1). Moreover, let the functions θν and gν be C² for all ν = 1,..., N.

The KKT system (13) can be reformulated by (15), i.e., by the constrained system of equations

with H, ɀ, and Ω defined in (15). Due to our differentiability assumptions, H is a C¹-function.

The Potential Reduction Algorithm we are going to describe below is an interior-point method based on the minimization of a potential function. Such a method was first proposed in [58] for the solution of constrained nonlinear systems. In [16], it is applied to find a solution of (15). Before we state the algorithmic framework, let us provide some notation. Let the function p : ℝn × ℝ2m++ → ℝ be given by

for some ζ > m. Furthermore, the set of all strictly feasible points for which the last 2m components of H are positive is denoted by

Now, the potential function ψ : ΩI → ℝ is defined by

Algorithm 6 (Potential Reduction Algorithm).

(S0): Choose ɀ0 ∈ ΩI, ρ, σ ∈ (0,1), ζ > m. Set aT := (0nT, 12mT) and k := 0.

(S1): If H (ɀk ) = 0: STOP.

(S2): Choose σk ∈ [0, σ] and compute a solution dk of the linear system

(S3): Compute tk := max{2-l | l = 0, 1, 2, ...} such that

(S4): Set ɀk+1 := ɀk + tkdk, k := k + 1 and go to (S1).
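For illustration, the following routine evaluates a candidate potential ψ(ɀ) = p(H(ɀ)), where p(u, v) = ζ log(║u║² + ║v║²) − Σi log vi is a typical choice in the spirit of [58]; since the displayed formulas for p and ψ are not reproduced above, this exact form is an assumption of the sketch.

```python
import numpy as np

def potential(H_z, n, zeta):
    """Candidate potential psi(z) = p(H(z)) with
       p(u, v) = zeta * log(||u||^2 + ||v||^2) - sum_i log(v_i),
    a typical choice following Monteiro & Pang [58] (assumed form).
    The last 2m components v = H_z[n:] must be strictly positive,
    i.e. z must lie in Omega_I.
    """
    u, v = H_z[:n], H_z[n:]
    if np.any(v <= 0.0):
        raise ValueError("point is not in Omega_I (last 2m components of H must be > 0)")
    return zeta * np.log(np.dot(u, u) + np.dot(v, v)) - np.sum(np.log(v))
```

The logarithmic barrier term forces iterates to stay strictly feasible, while the first term drives ║H(ɀ)║ toward zero.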

The following convergence result is Theorem 4.3 from [16]. A proof can be found there.

Theorem 5.5. Assume that JH (ɀ) is nonsingular for all ɀ ∈ ΩI. Let {ɀk } be any sequence generated by Algorithm 6. Then the following assertions hold:

  1. (a) The sequence {H (ɀk )} is bounded.

  2. (b) Any accumulation point of {ɀk } is a solution of (15).

The nonsingularity of JH(ɀ) on ΩI guarantees that Algorithm 6 is well defined since all iterates ɀk belong to ΩI. This follows from ɀ0 ∈ ΩI and the step size rule in (S3). By definition of ΩI, none of the solutions of H(ɀ) = 0 can belong to ΩI. Thus, the nonsingularity of JH(ɀ) on ΩI does not imply the nonsingularity of JH at a KKT point. In [16, 30] some sufficient conditions for JH(ɀ) to be nonsingular on ΩI are provided. Moreover, [16] contains discussions on the numerical behavior of the potential reduction method and numerical results. In [30], the potential reduction method is used for the solution of quasi-variational inequalities and results from [16] are extended to that case.

6 LOCAL METHODS AND GLOBALIZATIONS

According to the reformulations described in Section 2, related local methods with superlinear convergence can be found in the literature. At first, let us briefly consider the "partial" reformulation of a jointly convex GNEP by means of the variational inequality VI(X, F), where F is defined by (7) (see Theorem 2.2). For iteratively solving VI(X, F), the Josephy-Newton method [51] can be exploited (see, e.g., [48] and [41]). This method generates a sequence of vectors {xk } where xk+1 is a solution of the simpler problem VI(X, Fk ) and Fk denotes the linearization of F at xk, i.e.,

Local quadratic convergence of the Josephy-Newton method is proved under strong assumptions.

6.1 Methods based on Nikaido-Isoda functions

In this subsection we consider jointly convex GNEPs. For any ν = 1,..., N, the optimization problem of player ν is given by

where θ1, ..., θN : ℝn → ℝ and G : ℝn → ℝM are C²-functions with locally Lipschitz continuous second-order derivatives.

Let us first describe a locally superlinearly convergent counterpart of the globally convergent Algorithm 4 for dealing with the problem of minimizing Vαβ (x) over ℝn. The following algorithm is a nonsmooth Newton method from [69] applied to the system ∇Vαβ (x) = 0. Here, Clarke's generalized Jacobian of ∇Vαβ is denoted by ∂2Vαβ.

Algorithm 7 (Nonsmooth Newton-type Minimization Method).

(S0): Choose β > α > 0, x0 ∈ ℝn. Set k := 0.

(S1): If xk is a solution of the GNEP (i.e., Vαβ (xk ) = 0): STOP.

(S2): Compute Hk ∈ ∂2 Vαβ (xk ) and dk as a solution of the linear system

(S3): Set xk+1 := xk + dk, k := k + 1 and go to (S1).
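Ignoring the problem-specific computation of a generalized Hessian element, the local iteration of Algorithm 7 reduces to one linear system per step. A minimal sketch, assuming callables for ∇Vαβ and for selecting an element Hk of ∂2Vαβ (for smooth problems, simply the Hessian), and stopping on the gradient norm instead of on Vαβ for simplicity:

```python
import numpy as np

def nonsmooth_newton(gradV, element_of_hessian, x0, tol=1e-10, max_iter=50):
    """Local Newton-type iteration for gradV(x) = 0 (sketch of Algorithm 7).

    element_of_hessian(x) returns some H in the generalized Jacobian of gradV
    at x.  Purely local method: full steps, no globalization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = gradV(x)
        if np.linalg.norm(g) <= tol:          # stopping test (on the gradient)
            return x
        H = element_of_hessian(x)
        d = np.linalg.solve(H, -g)            # (S2): Newton system H d = -gradV(x)
        x = x + d                             # (S3): full step
    return x
```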

Let x * be a normalized NE. Hence, x * is a solution of the system ∇Vαβ (x) = 0. To apply the main convergence result of [69], semismoothness of ∇Vαβ at x * and the nonsingularity of all elements in ∂2Vαβ (x *) are needed. In [73], it is shown that ∇Vαβ is semismooth in a neighborhood of x * if the linear independence constraint qualification (LICQ) is satisfied in x *. LICQ means that the gradients ∇Gi (x *) with i ∈ I0 (x *) = {i ∈ {1, ···, M} | Gi (x *) = 0} are linearly independent. Putting things from [73] and [69] together we have

Theorem 6.1. Consider a jointly convex GNEP with X given in (5). Let x * be a normalized NE, suppose that LICQ holds in x * and that all elements of ∂2Vαβ (x *) are nonsingular. Then, there is δ > 0 so that for every x0 ∈ Bδ(x *) Algorithm 7 is well defined and any infinite sequence {xk } generated by the algorithm converges Q-quadratically to x *.

For Step (S2) of Algorithm 7 it is suggested in [73] to compute H̃k := ∇2Vαβ (x̃k ) with some suitable x̃k ≈ xk instead of an element of ∂2Vαβ (xk ). Moreover, it is shown that the convergence properties of Algorithm 7 stay true for the resulting inexact method.

By combining Algorithms 4 and 7, global convergence can be obtained by taking the Newton direction whenever it is available and satisfies a suitable descent condition. Later on, this technique will be described in detail for the globalization of the following locally superlinearly convergent method for treating jointly convex GNEPs where the players' problems are given by (34).

In [76], Newton-type methods were applied to the equation

according to the fixed point reformulation in Assertion (d) of Theorem 2.4. We recall that yα (x) denotes the unique solution of the optimization problem

In contrast to Algorithm 7, the computable generalized Jacobian ∂CΦα described in [76] is employed. To define ∂CΦα(x), the constant rank constraint qualification (CRCQ) is used. The CRCQ is satisfied in x ∈ X if there is δ > 0 so that, for any subset J ⊆ I0 (x), the matrices (···∇Gi (ω) ···)i∈J have the same rank r(J) for all ω ∈ Bδ(x).

Suppose that, for some x ∈ ℝn, CRCQ is satisfied in yα (x). Then, it is well known [50] that there is a (not necessarily unique) Lagrange multiplier vector Λ ∈ ℝM so that (yα (x), Λ) solves the KKT system of the optimization problem (35):

Let LI (x) denote the set of all subsets J ⊆ I0 (yα (x)) for which Λ ∈ ℝM exists such that (yα (x), Λ) solves (36), Λj = 0 for all j ∈ I0 (yα (x)) \ J, and the gradients ∇Gi (yα (x)) are linearly independent for all i ∈ J. From a result in [76] it follows that LI (x) is nonempty if CRCQ is satisfied in yα (x). Moreover, in [76], the computable generalized Jacobian of Φα at x is defined by

with the identity matrix In ∈ ℝn×n. Details and an explicit formula for ∇yαj(x) can be found in [76] and [19].

Now suppose that x * is a normalized NE and CRCQ is satisfied in x *. Taking into account Theorem 2.4, x * = yα (x *) holds so that CRCQ is also valid in yα (x *). Due to a result in [76], it follows that CRCQ even holds in yα (x) for all x in some neighborhood of x *. Therefore, the computable generalized Jacobian ∂CΦα(x) is nonempty for all x near x *. Moreover, Assumption 6.1 below guarantees that any element of ∂CΦα(x) is nonsingular for x in a certain neighborhood of x *.

Algorithm 8 (Nonsmooth Newton-type Fixed Point Method).

(S0): Choose α > 0, x0 ∈ X. Set k := 0.

(S1): If xk is a solution of the GNEP (i.e., Φα(xk ) = yα (xk ) - xk = 0): STOP.

(S2): Compute Hk ∈ ∂CΦα(xk ) and dk as solution of the linear system

(S3): Set xk+1 := xk + dk, k := k + 1 and go to (S1).

Assumption 6.1. For each J ∈ LI (x) and each Λ with (yα(x), Λ) satisfying (36) let

be satisfied, where TJ(x) := {d ∈ ℝn | dT∇Gi (yα (x)) = 0 for all i ∈ J} and

Note that for x* = y* := yα(x *) the Jacobian JF(x *) of the function F defined in (7) is equal to M(x*, y *).

Theorem 6.2 (see [76]). Consider a jointly convex GNEP with X given by (5). Let x * be a normalized NE and suppose that CRCQ and Assumption 6.1 hold in x *. Then there is δ > 0 so that for every x0 ∈ Bδ(x *) Algorithm 8 is well defined and any infinite sequence {xk } generated by the algorithm converges Q-quadratically to x *.

Global convergence can be obtained (see [19]) by combining the globally convergent Gradient Algorithm 4 and the current local algorithm (here Algorithm 8), taking the Newton direction whenever it is available and satisfies a certain descent condition (see the tests in steps (S3) and (S4) of the following algorithm).

Algorithm 9 (Globalized Nonsmooth Newton-type Fixed Point Method).

(S0): Choose x0 ∈ ℝn, β > α > 0, s > 1, ρ, σ, τ ∈ (0, 1). Set k := 0.

(S1): If xk is a solution of the GNEP (i.e., Φβ(xk ) = yβ (xk ) - xk = 0): STOP.

(S2): If possible compute Hk ∈ ∂CΦβ(xk ) and dk as a solution of the linear system

If this fails then compute dk := -∇Vαβ (xk ) and go to (S5).

(S3): If Vαβ(xk + dk) ≤ τVαβ(xk) then set xk+1 := xk + dk, k := k + 1 and go to (S1).

(S4): If ∇Vαβ (xk )T dk > -σ ║dk║s then compute dk := -∇Vαβ (xk ).

(S5): Compute tk := max{2-l | l = 0, 1, 2, ...} such that

(S6): Set xk+1 := xk + tk dk, k := k + 1 and go to (S1).
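The switching logic of Algorithm 9 (try the Newton step, accept it if it reduces the merit function sufficiently, otherwise fall back to a gradient step with backtracking) can be sketched generically. Here `newton_direction` returning None models the failure case in (S2), and all names and constants are illustrative:

```python
import numpy as np

def globalized_newton(V, gradV, newton_direction, x0,
                      rho=0.5, armijo=1e-4, sigma=1e-8, s=2.1, tau=0.9,
                      tol=1e-10, max_iter=500):
    """Hybrid globalization in the spirit of Algorithm 9.

    newton_direction(x) returns a Newton-type direction or None on failure.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if V(x) <= tol:                                   # (S1)
            return x
        d = newton_direction(x)                           # (S2)
        if d is not None:
            if V(x + d) <= tau * V(x):                    # (S3): accept full Newton step
                x = x + d
                continue
            if np.dot(gradV(x), d) > -sigma * np.linalg.norm(d) ** s:
                d = -gradV(x)                             # (S4): insufficient descent
        else:
            d = -gradV(x)
        g = gradV(x)
        t = 1.0                                           # (S5): Armijo backtracking
        while t > 1e-12 and V(x + t * d) > V(x) + armijo * t * np.dot(g, d):
            t *= rho
        x = x + t * d                                     # (S6)
    return x
```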

Theorem 6.3 (see [19]). Consider a jointly convex GNEP with X given by (5). Then, Algorithm 9 is well defined for any starting point x0 ∈ ℝn and the following assertions hold:

  1. (a) Let CRCQ hold for all x ∈ X. Let x̄ be an accumulation point of a sequence {xk } generated by Algorithm 9. Then, x̄ is a stationary point of Vαβ, i.e., ∇Vαβ (x̄) = 0. Moreover, x̄ is a normalized NE provided that, for all x ∈ ℝn with yβ (x) - yα (x) ≠ 0, it holds

  2. (b) Let x * be a normalized NE and suppose that x * is an accumulation point of a sequence {xk } generated by Algorithm 9. If CRCQ and Assumption 6.1 are satisfied in x *, then the sequence {xk } converges Q-quadratically to x *.

6.2 Methods based on KKT conditions

This section is devoted to local algorithms for the solution of the KKT system (13) of a GNEP. Global convergence can be obtained by line search techniques or by a combination with the Potential Reduction Algorithm introduced in Subsection 5.4. We assume that the feasible sets of the players ν = 1,..., N are given by

where θν and gν are C²-functions with locally Lipschitz continuous second-order derivatives. In [27], Newton-type methods for KKT-based reformulations for obtaining a NE were dealt with. Both a reformulation of the KKT system arising from the variational inequality VI(X, F) (see Subsection 2.1) and a reformulation of the (concatenated) KKT system (13) were considered. The latter is the nonsmooth equation

with Hmin : ℝn+m → ℝn+m and ɀ ∈ ℝn+m defined in (14). Since solutions of a GNEP are often nonisolated, fast methods for reformulations that do not restrict the set of solutions (for example to normalized NEs) should take this into account. Levenberg-Marquardt methods [37, 38, 39, 77] can serve as a promising tool if one is able to provide an error bound under reasonable conditions. The subproblems of this method are quadratic programs with a strongly convex objective. A first approach in this direction to solve Hmin (ɀ) = 0 was suggested in [27]. The subproblems of the Levenberg-Marquardt method then read as

with the regularization parameter µ(ɀk) := ║Hmin (ɀk)║. The new iterate ɀk+1 can be determined as the (unique) solution of a linear system of equations. However, to obtain differentiability of Hmin, at least in a certain neighborhood of a solution ɀ* = (x *, λ*) of (14), strict complementarity at ɀ* is required. Under this assumption and the local error bound (17), the Levenberg-Marquardt method generates a well defined sequence for any ɀ0 sufficiently close to ɀ* and any such sequence converges to a solution of (14) with a Q-quadratic rate. For sufficient conditions to ensure this error bound see Section 3.
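For smooth Hmin, the Levenberg-Marquardt subproblem above is the strongly convex quadratic min over d of ║Hmin(ɀk) + JHmin(ɀk)d║² + µ(ɀk)║d║², so ɀk+1 follows from the normal equations. A minimal smooth-case sketch with µ(ɀk) = ║Hmin(ɀk)║, assuming callables for H and its Jacobian J:

```python
import numpy as np

def levenberg_marquardt(H, J, z0, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt iteration for H(z) = 0 with mu_k = ||H(z_k)||.

    Smooth-case sketch: each step solves the normal equations
        (J^T J + mu I) d = -J^T H,
    i.e. the unique minimizer of ||H + J d||^2 + mu ||d||^2.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Hz = H(z)
        mu = np.linalg.norm(Hz)               # regularization mu(z^k) = ||H(z^k)||
        if mu <= tol:
            return z
        Jz = J(z)
        d = np.linalg.solve(Jz.T @ Jz + mu * np.eye(z.size), -Jz.T @ Hz)
        z = z + d                             # full step; purely local method
    return z
```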

In order to weaken the strict complementarity assumption, another reformulation of the KKT system (13) was successfully applied in [15]. There, the KKT system is reformulated as the constrained system of equations

where

are defined in (15). Due to the differentiability assumptions on the problem functions of the GNEP, H is differentiable and has a locally Lipschitz continuous derivative. Instead of strict complementarity at (x *, λ*), a significantly weaker condition is used: for any constraint that is active at a solution ɀ* = (x *, λ*, ω *) of (37), at least one player with a positive Lagrange multiplier for this constraint must exist within λ*. A detailed description is given in Theorem 3.3 for the case of shared constraints.

The LP-Newton Method was designed for solving general constrained equations with nonisolated solutions and nonsmoothness; see [24] for details and a convergence analysis. The subproblems of the LP-Newton method are linear programs. In [15], this method was applied to GNEPs within a hybrid algorithm that combines the Potential Reduction Method (Algorithm 6) with the LP-Newton Method to obtain both global and fast local convergence, see Algorithm 10 below.

A more general class of methods for solving (37) is presented in [25]. In particular, a constrained Levenberg-Marquardt algorithm with the subproblems

belongs to this class. Whereas [5, 52] contain results on the convergence of constrained Levenberg-Marquardt methods for smooth problems, the analysis in [25] also covers nonsmooth cases. The key to proving fast local convergence of the LP-Newton Method and of the constrained Levenberg-Marquardt method lies in a suitable error bound for problem (37). A sufficient condition for such an error bound is given in Theorem 3.3.

Recalling the definitions of the feasible region ΩI and of the potential function ψ from Subsection 5.4, we now state the hybrid algorithm from [15]. The LP-Newton direction is taken whenever it satisfies a descent condition with respect to ║H(·)║; otherwise a potential reduction step is carried out.

Algorithm 10 (Potential Reduction LP-Newton Method).

(S0): Choose ɀ0 ∈ ΩI, β, ρ, σ ∈ (0,1), ζ > m, τmax > τmin > 0, τ0 ∈ [τmin, τmax]. Set aT := (0nT, 12mT) and k := 0.

(S1): If ɀk is a solution of (15): STOP.

If ║H(ɀk)║ ≤ τk go to (S4), else go to (S2).

(S2): Choose σk ∈ [0,σ] and compute a solution dk of the linear system

(S3): Compute tk := max{2-l | l = 0, 1, 2,...} such that

Set ɀk+1 := ɀk + tkdk, τk+1 := τk, k := k + 1 and go to (S1).

(S4): Compute (ẑk+1, ηk+1) ∈ ℝn+2m × ℝ as a solution of the linear program

If ║H(ẑk+1)║ ≤ β║H(ɀk)║, set ɀk+1 := ẑk+1, τk+1 := τk, k := k + 1 and go to (S1); else set ɀk+1 := ɀk, k := k + 1, choose τk+1 ∈ [τmin, τmax], and go to (S2).

Theorem 6.4 (see [15]). Consider a GNEP with shared constraints. Assume that JH(ɀ) is nonsingular for all ɀ ∈ ΩI. Then Algorithm 10 is well defined and, for any sequence {ɀk } generated by the algorithm, the following assertions hold:

  1. (a) The sequence {H (ɀk )} is bounded.

  2. (b) Any accumulation point of {ɀk } is a solution of (15).

  3. (c) If {ɀk } has an accumulation point ɀ * satisfying the assumptions of Theorem 3.3 then the sequence {ɀk } converges to ɀ * with a Q-quadratic rate.

Global convergence results should also be possible for a modification of Algorithm 10 in which the Potential Reduction Algorithm is replaced by the Penalty Method described in Section 5.2. The player convexity and regularity conditions required there for proving convergence to a solution of the GNEP would make Theorem 6.4 valid also for the case that "solution of (15)" is replaced by "solution of the GNEP". This is justified by Theorem 2.7.

We finally would like to refer the reader to an approach in [14], where a Newton-like method is applied to F2 (x, γ) = 0 (see Subsection 4.1) in order to find a Fritz-John point of a GNEP. Local quadratic convergence of this method is claimed under quite strong conditions if it starts sufficiently close to a solution (x *, γ*) of F2 (x, γ) = 0. In particular, all multipliers in γ* were required to be nonzero. As explained in [14], there is some hope of weakening those conditions.

7 ACKNOWLEDGMENTS

This work was supported in part by the German Research Foundation (DFG) within the Collaborative Research Center 912 "Highly Adaptive Energy-Efficient Computing".

REFERENCES

  • [1]
    ARDAGNA D, PANICUCCI B & PASSACANTANDO M. 2011. A game theoretic formulation of the service provisioning problem in cloud systems. Proceedings of the 20th International Conference on World Wide Web (edited by Ghinita G & Punera K), ACM, 177-186.
  • [2]
    ARDAGNA D, PANICUCCI B & PASSACANTANDO M. 2013. Generalized Nash equilibria for the service provisioning problem in cloud systems. IEEE Transactions on Services Computing, 6: 429-442.
  • [3]
    AUBIN J-P. 1982. Mathematical Methods of Game and Economic Theory. North-Holland Publishing Company.
  • [4]
    BARZILAI J & BORWEIN JM. 1988. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8: 141-148.
  • [5]
    BEHLING R & FISCHER A. 2012. A unified local convergence analysis of inexact constrained Levenberg-Marquardt methods. Optimization Letters, 6: 927-940.
  • [6]
BENSOUSSAN A. 1974. Points de Nash dans le cas de fonctionnelles quadratiques et jeux différentiels linéaires à N personnes. SIAM Journal on Control, 12: 460-499.
  • [7]
    BRÜCKNER M, KANZOW C & SCHEFFER T. 2012. Static prediction games for adversarial learning problems. Journal of Machine Learning Research, 13: 2617-2654.
  • [8]
    BRÜCKNER M & SCHEFFER T. 2009. Nash equilibria of static prediction games. Advances in Neural Information Processing Systems (edited by Bengio Y, Schuurmans D, Lafferty J, Williams CKI & Culotta A), 22: 171-179.
  • [9]
    BURKE JV, LEWIS AS & OVERTON ML. 2005. A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM Journal on Optimization, 15: 751-779.
  • [10]
CARDELLINI V, DE NITTO PERSONÈ V, DI VALERIO V, FACCHINEI F, GRASSI V, LO PRESTI F & PICCIALLI V. 2013. A game-theoretic approach to computation offloading in mobile cloud computing. Technical Report, available online at <http://www.optimization-online.org/DBHTML/2013/08/3981.html>.
  • [11]
    CONTRERAS J, KRAWCZYK J, ZUCCOLLO J & GARCÍA J. 2013. Competition of thermal electricity generators with coupled transmission and emission constraints. Journal of Energy Engineering, 139: 239-252.
  • [12]
    DEBREU G. 1952. A social equilibrium existence theorem. Proceedings of the National Academy of Sciences of the United States of America, 38: 886-893.
  • [13]
    DORSCH D, JONGEN HT & SHIKHMAN V. 2013. On intrinsic complexity of Nash equilibrium problems and bilevel optimization. Journal of Optimization Theory and Applications, 159: 606-634.
  • [14]
    DORSCH D, JONGEN HT & SHIKHMAN V. 2013. On structure and computation of generalized Nash equilibria. SIAM Journal on Optimization, 23: 452-474.
  • [15]
    DREVES A, FACCHINEI F, FISCHER A & HERRICH M. 2014. A new error bound result for generalized Nash equilibrium problems and its algorithmic application. Computational Optimization and Applications, 59: 63-84.
  • [16]
    DREVES A, FACCHINEI F, KANZOW C & SAGRATELLA S. 2011. On the solution of the KKT conditions of generalized Nash equilibrium problems. SIAM Journal on Optimization, 21: 1082-1108.
  • [17]
    DREVES A & KANZOW C. 2011. Nonsmooth optimization reformulations characterizing all solutions of jointly convex generalized Nash equilibrium problems. Computational Optimization and Applications, 50: 23-48.
  • [18]
    DREVES A, KANZOW C & STEIN O. 2012. Nonsmooth optimization reformulations of player convex generalized Nash equilibrium problems. Journal of Global Optimization, 53: 587-614.
  • [19]
    DREVES A, VON HEUSINGER A, KANZOW C & FUKUSHIMA M. 2013. A globalized Newton method for the computation of normalized Nash equilibria. Journal of Global Optimization, 56: 327-340.
  • [20]
    DUTANG C. 2013. Existence theorems for generalized Nash equilibrium problems. Journal of Nonlinear Analysis and Optimization: Theory and Applications, 4: 115-126.
  • [21]
    DUTANG C. 2013. A survey of GNE computation methods: Theory and algorithms. Available online at <http://hal.archives-ouvertes.fr/docs/00/81/35/31/PDF/meth-comp-GNE-dutangc-noformat.pdf>.
  • [22]
    DUTANG C, ALBRECHER H & LOISEL S. 2013. Competition between non-life insurers under solvency constraints: A game-theoretic approach. European Journal of Operational Research, 231: 702-711.
  • [23]
    FABIAN MJ, HENRION R, KRUGER AY & OUTRATA JV. 2010. Error bounds: Necessary and sufficient conditions. Set-Valued and Variational Analysis, 18: 121-149.
  • [24]
    FACCHINEI F, FISCHER A & HERRICH M. 2013. A family of Newton methods for nonsmooth constrained systems with nonisolated solutions. Mathematical Methods of Operations Research, 77: 433-443.
  • [25]
    FACCHINEI F, FISCHER A & HERRICH M. 2014. An LP-Newton method: Nonsmooth equations, KKT systems, and nonisolated solutions. Mathematical Programming, 146: 1-36.
  • [26]
    FACCHINEI F, FISCHER A & PICCIALLI V. 2007. On generalized Nash games and variational inequalities. Operations Research Letters, 35: 159-164.
  • [27]
    FACCHINEI F, FISCHER A & PICCIALLI V. 2009. Generalized Nash equilibrium problems and Newton methods. Mathematical Programming 117: 163-194.
  • [28]
    FACCHINEI F & KANZOW C. 2010. Generalized Nash equilibrium problems. Annals of Operations Research, 175: 177-211.
  • [29]
    FACCHINEI F & KANZOW C. 2010. Penalty methods for the solution of generalized Nash equilibrium problems. SIAM Journal on Optimization, 20: 2228-2253.
  • [30]
    FACCHINEI F, KANZOW C & SAGRATELLA S. 2014. Solving quasi-variational inequalities via their KKT-conditions. Mathematical Programming, 144: 369-412.
  • [31]
    FACCHINEI F & LAMPARIELLO L. 2011. Partial penalization for the solution of generalized Nash equilibrium problems. Journal of Global Optimization, 50: 39-57.
  • [32]
    FACCHINEI F & PANG J-S. 2003. Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer.
  • [33]
FACCHINEI F & PANG J-S. 2006. Exact penalty functions for generalized Nash problems. Large Scale Nonlinear Optimization (edited by Di Pillo G & Roma M), Springer, 115-126.
  • [34]
    FACCHINEI F & PANG J-S. 2009. Nash equilibria: The variational approach. Convex Optimization in Signal Processing and Communications (edited by Eldar Y & Palomar DP), Cambridge University Press, 443-493.
  • [35]
    FACCHINEI F, PICCIALLI V & SCIANDRONE M. 2011. Decomposition algorithms for generalized potential games. Computational Optimization and Applications, 50: 237-262.
  • [36]
    FACCHINEI F & SAGRATELLA S. 2011. On the computation of all solutions of jointly convex generalized Nash equilibrium problems. Optimization Letters, 5: 531-547.
  • [37]
FAN J & YUAN Y. 2005. On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption. Computing, 74: 23-39.
  • [38]
    FISCHER A. 2002. Local behavior of an iterative framework for generalized equations with nonisolated solutions. Mathematical Programming, 94: 91-124.
  • [39]
    FISCHER A, SHUKLA PK & WANG M. 2010. On the inexactness level of robust Levenberg-Marquardt methods. Optimization, 59: 273-287.
  • [40]
    FUKUSHIMA M. 2011. Restricted generalized Nash equilibria and controlled penalty algorithm. Computational Management Science, 8: 201-218.
  • [41]
    GEIGER C & KANZOW C. 2002. Theorie und Numerik restringierter Optimierungsaufgaben. Springer.
  • [42]
    GRIPPO L & SCIANDRONE M. 2000. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26: 127-136.
  • [43]
    HAN Z, NIYATO D & SAAD W. 2012. Game Theory in Wireless and Communication Networks. Cambridge University Press.
  • [44]
    HAN D, ZHANG H, QIAN G & XU L. 2012. An improved two-step method for solving generalized Nash equilibrium problems. European Journal of Operational Research, 216: 613-623.
  • [45]
    HARMS N, KANZOW C & STEIN O. 2013. On differentiability properties of player convex generalized Nash equilibrium problems. Optimization, published online, DOI 10.1080/02331934.2012.752822.
  • [46]
    HAURIE A, KRAWCZYK JB & ZACCOUR G. 2012. Games and Dynamic Games. World Scientific.
  • [47]
    HENRION R & OUTRATA JV. 2005. Calmness of constraint systems with applications. Mathematical Programming, 104: 437-464.
  • [48]
    IZMAILOV AF & SOLODOV MV. 2010. Inexact Josephy-Newton framework for generalized equations and its applications to local analysis of Newtonian methods for constrained optimization. Computational Optimization and Applications, 46: 347-368.
  • [49]
    IZMAILOV AF & SOLODOV MV. 2014. On error bounds and Newton-type methods for generalized Nash equilibrium problems. Computational Optimization and Applications, 59: 201-218.
  • [50]
    JANIN R. 1984. Directional derivative of the marginal function in nonlinear programming. Mathematical Programming Study, 21: 110-126.
  • [51]
    JOSEPHY NH. 1979. Newton's method for generalized equations. Technical Summary Report 1965, Mathematical Research Center, University of Wisconsin, USA.
  • [52]
    KANZOW C, YAMASHITA N & FUKUSHIMA M. 2004. Levenberg-Marquardt methods with strong local convergence properties for solving equations with convex constraints. Journal of Computational and Applied Mathematics, 172: 375-397.
  • [53]
    KUBOTA K & FUKUSHIMA M. 2010. Gap function approach to the generalized Nash equilibrium problem. Journal of Optimization Theory and Applications, 144: 511-531.
  • [54]
    LALITHA CS & DHINGRA M. 2012. Optimization reformulations of the generalized Nash equilibrium problem using regularized indicator Nikaido-Isoda function. Journal of Global Optimization, 57: 843-861.
  • [55]
    MATIOLI LC, SOSA W & YUAN J. 2012. A numerical algorithm for finding solutions of a generalized Nash equilibrium problem. Computational Optimization and Applications, 52: 281-292.
  • [56]
    MONDERER D & SHAPLEY LS. 1996. Potential games. Games and Economic Behavior, 14: 124-143.
  • [57]
    MOCHAOURAB R, ZORBA N & JORSWIECK E. 2012. Nash equilibrium in multiple antennas protected and shared bands. Proceedings of the International Symposium on Wireless Communication Systems (ISWCS), IEEE, 101-105.
  • [58]
    MONTEIRO RDC & PANG J-S. 1999. A potential reduction Newton method for constrained equations. SIAM Journal on Optimization, 9: 729-754.
  • [59]
    NABETANI K, TSENG P & FUKUSHIMA M. 2011. Parametrized variational inequality approaches to generalized Nash equilibrium problems with shared constraints. Computational Optimization and Applications, 48: 423-452.
  • [60]
    NASH JF. 1950. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36: 48-49.
  • [61]
    NASH JF. 1951. Non-cooperative games. The Annals of Mathematics, 54: 286-295.
  • [62]
    NASIRI F & ZACCOUR G. 2010. Renewable portfolio standard policy: a game-theoretic analysis. Information Systems and Operational Research, 48: 251-260.
  • [63]
    NIKAIDO H & ISODA K. 1955. Note on noncooperative convex games. Pacific Journal of Mathematics, 5: 807-815.
  • [64]
    PANG J-S. 1997. Error bounds in mathematical programming. Mathematical Programming, 79: 299-332.
  • [65]
    PANG J-S, SCUTARI G, FACCHINEI F & WANG C. 2008. Distributed power allocation with rate constraints in Gaussian parallel interference channels. IEEE Transactions on Information Theory, 54: 3471-3489.
  • [66]
    PANG J-S, SCUTARI G, PALOMAR DP & FACCHINEI F. 2010. Design of cognitive radio systems under temperature-interference constraints: A variational inequality approach. IEEE Transactions on Signal Processing, 58: 3251-3271.
  • [67]
    PARENTE LA, LOTITO PA, RUBIALES AJ & SOLODOV MV. 2013. Solving net constrained hydrothermal Nash-Cournot equilibrium problems via the proximal point decomposition method. Pacific Journal of Optimization, 9: 301-322.
  • [68]
PUERTO J, SCHÖBEL A & SCHWARZE S. 2012. On equilibria in generalized Nash games with applications to games on polyhedral sets. Technical Report No. 15, Preprint-Serie, Institut für Numerische und Angewandte Mathematik, Universität Göttingen, Germany.
  • [69]
    QI L & SUN J. 1993. A nonsmooth version of Newton's method. Mathematical Programming, 58: 353-367.
  • [70]
    ROBINSON SM. 1981. Some continuity properties of polyhedral multifunctions. Mathematical Programming Study, 14: 206-214.
  • [71]
    SCUTARI G, FACCHINEI F, PANG J-S & PALOMAR DP. 2014. Real and complex monotone communication games. IEEE Transactions on Information Theory, 60: 4197-4231.
  • [72]
    URYASEV S & RUBINSTEIN RY. 1994. On relaxation algorithms in computation of noncooperative equilibria. IEEE Transactions on Automatic Control, 39: 1263-1267.
  • [73]
    VON HEUSINGER A & KANZOW C. 2008. SC1-optimization reformulations of the generalized Nash equilibrium problem. Optimization Methods and Software, 23: 953-973.
  • [74]
    VON HEUSINGER A & KANZOW C. 2009. Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions. Computational Optimization and Applications, 43: 353-377.
  • [75]
    VON HEUSINGER A & KANZOW C. 2009. Relaxation methods for generalized Nash equilibrium problems with inexact line search. Journal of Optimization Theory and Applications, 143: 159-183.
  • [76]
    VON HEUSINGER A, KANZOW C & FUKUSHIMA M. 2012. Newton's Method for computing a normalized equilibrium in the generalized Nash game through fixed point formulation. Mathematical Programming, 132: 99-123.
  • [77]
    YAMASHITA M & FUKUSHIMA M. 2001. On the rate of convergence of the Levenberg-Marquardt method. Computing [Suppl], 15: 239-249.

Publication Dates

  • Publication in this collection
    Sep-Dec 2014

History

  • Received
    19 Dec 2013
  • Accepted
    29 Jan 2014
Sociedade Brasileira de Pesquisa Operacional Rua Mayrink Veiga, 32 - sala 601 - Centro, 20090-050 Rio de Janeiro RJ - Brasil, Tel.: +55 21 2263-0499, Fax: +55 21 2263-0501 - Rio de Janeiro - RJ - Brazil
E-mail: sobrapo@sobrapo.org.br