
An inexact interior point proximal method for the variational inequality problem

Regina S. BurachikI,*; Jurandir O. LopesII,**; Geci J.P. Da SilvaIII,***

*This author acknowledges support by the Australian Research Council Discovery Project Grant DP0556685.
**Partially supported by CNPq.
***Partially supported by IM-AGIMB.

ISchool of Mathematics and Statistics, University of South Australia, Mawson Lakes, S.A. 5095, Australia. E-mail: regina.burachik@unisa.edu.au

IIUniversidade Federal do Piauí, CCN-UFPI, CP 64.000, 64000-000 Teresina, PI, Brazil. E-mail: jurandir@ufpi.br

IIIUniversidade Federal de Goiás, IME/UFG, CP 131, 74001-970 Goiânia, GO, Brazil. E-mail: geci@mat.ufg.br

ABSTRACT

We propose an infeasible interior proximal method for solving variational inequality problems with maximal monotone operators and linear constraints. The interior proximal method proposed by Auslender, Teboulle and Ben-Tiba [3] is a proximal method using a distance-like barrier function and it has a global convergence property under mild assumptions. However, this method is applicable only to problems whose feasible region has nonempty interior. The algorithm we propose is applicable to problems whose feasible region may have empty interior. Moreover, a new kind of inexact scheme is used. We present a full convergence analysis for our algorithm.

Mathematical subject classification: 90C51, 65K10, 47J20, 49J40, 49J52, 49J53.

Key words: maximal monotone operators, outer approximation algorithm, interior point method, global convergence.

1 Introduction

Let C ⊂ ℝⁿ be a closed and convex set, and let T: ℝⁿ ⇉ ℝⁿ be a maximal monotone point-to-set operator. We consider the variational inequality problem associated with T and C: Find x* ∈ C such that there exists u* ∈ T(x*) satisfying

⟨u*, x − x*⟩ ≥ 0 for all x ∈ C.  (1.1)

This problem is denoted by VIP(T, C). In the particular case in which T is the subdifferential of a proper, convex and lower semicontinuous function f: ℝⁿ → ℝ ∪ {+∞}, (1.1) reduces to the constrained convex optimization problem: Find x* ∈ C such that

f(x*) ≤ f(x) for all x ∈ C.  (1.2)

We are concerned with the case in which C is a polyhedral set in ℝⁿ, defined by

C := {x ∈ ℝⁿ | Ax ≤ b},  (1.3)

where A is an m × n real matrix, b ∈ ℝᵐ and m ≥ n. Well-known methods for solving VIP(T, C) are the so-called generalized proximal schemes, which involve a regularization term that incorporates the constraint set C in such a way that all the subproblems have solutions in the interior of C. For this reason, these methods are also called interior proximal methods. Examples of these regularizing functionals are the Bregman distances (see, e.g., [1, 8, 13, 14, 20, 25]), φ-divergences ([26, 5, 15, 18, 19, 27, 28]) and log-quadratic regularizations ([3, 4]). Since these are interior point methods, a basic assumption is that the topological interior of C is nonempty; otherwise, the iterates are not well defined. However, a set C as above may well have empty interior. In order to solve problem (1.2) for an arbitrary set C of the kind given in (1.3), Yamashita et al. [29] devised an interior-point scheme in which the subproblems deal with a constraint set Ck given by

Ck := {x ∈ ℝⁿ | Ax ≤ b + δk},

where the vectors δk have positive coordinates and are such that Σk ‖δk‖ < +∞. So, if C ≠ ∅, it holds that C ⊂ int Ck, and hence a regularizing functional can be associated with the set Ck. Denote by Dk the regularization functional proposed in [3, 4] (associated with the set Ck, which has nonempty interior) and by ∇1Dk the derivative of Dk with respect to its first argument. The subproblems in [29] find an approximate solution xk ∈ int Ck of the inclusion

0 ∈ λk ∂εk f(xk) + ∇1Dk(xk, xk-1),  (1.4)

where λk > 0 and ∂ε f is the ε-subdifferential of f [6]. Yamashita et al. prove in [29] convergence under summability assumptions on the "error" sequences {εk} and {δk}. One drawback of conditions of this kind is that there may be no constructive way to enforce them. Indeed, there exist infinitely many summable sequences, and it is not specified how to choose among them at a specific iteration and for the given problem so as to ensure convergence. From the algorithmic standpoint, one would prefer a computable error tolerance condition related to the progress of the algorithm at every given step when applied to the given problem. This is one of the main motivations of our approach (see condition (3.10) below), where we choose each εk so as to verify a specific condition at each iteration k.
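To make the role of the perturbed sets concrete, the following minimal sketch builds a polyhedron C with empty interior and checks that the perturbed sets Ck admit strictly feasible points. The matrix A, the vector b and the choice δk = 2⁻ᵏ are our own illustrative assumptions, not data from [29]:

```python
import numpy as np

# C = {x : Ax <= b} encodes the equality x1 + x2 = 1 as two opposite
# inequalities, so int C is empty; here rank(A) = n = 2 and m = 3 >= n.
A = np.array([[1.0, 1.0],
              [-1.0, -1.0],
              [1.0, 0.0]])
b = np.array([1.0, -1.0, 2.0])

x = np.array([0.5, 0.5])              # a point of C (two constraints are active)
print(np.all(A @ x < b))              # False: no point satisfies Ax < b strictly

for k in range(1, 6):
    delta_k = 2.0 ** (-k)             # summable perturbations, delta_k -> 0
    # Ck = {x : Ax <= b + delta_k}: the same x is now strictly inside
    print(k, np.all(A @ x < b + delta_k))   # True for every k
```

Every point of C is strictly feasible for Ck, which is precisely the inclusion C ⊂ int Ck used above.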

Moreover, we also extend the scheme given in [29] to the more general problem (1.1). Namely, we are concerned with iterations of the kind: Find an approximate solution xk ∈ int Ck of

0 ∈ λk Tεk(xk) + ∇1Dk(xk, xk-1),  (1.5)

where λk > 0 and Tε is an enlargement of the operator T [11, 10]. We impose no summability assumption on the parameters {εk}. Instead, we define a criterion which can be checked at each iteration. On the other hand, we do need summability of the sequence {δk}.

Our relative error analysis is inspired by the one given in [12], which yields a more practical framework. The convergence analysis presented in [29] (which considers the optimization problem (1.2)) requires an assumption involving the sequence of iterates generated by the method and the function f, formulated in terms of the values of f at the iterates xk and at their projections PC(xk), where PC stands for the orthogonal projection onto C. We make no assumptions of this kind in our analysis. Another difference between [29] and the present paper is that we allow more degrees of freedom in the definition of the inexact step. See Remark 3.6 for a detailed comparison with [29].

The paper is organized as follows. In Section 2 we give some basic definitions and properties of the family of regularizations, as well as some known results on monotone operators. In the same section, the enlargement Tε is reviewed, together with its elementary features. In Section 3, we describe the algorithm, prove its well-definedness and give its inexact version. The convergence analysis is presented in Section 3.1, and in Section 4 we give some conclusions.

2 Basic assumptions and properties

A point-to-set map T: ℝⁿ ⇉ ℝⁿ is an operator which associates with each point x ∈ ℝⁿ a set (possibly empty) T(x) ⊂ ℝⁿ. The domain and the graph of a point-to-set map T are defined as:

D(T) := {x ∈ ℝⁿ | T(x) ≠ ∅},

G(T) := {(x, v) ∈ ℝⁿ × ℝⁿ | x ∈ D(T), v ∈ T(x)}.

A point-to-set operator T is said to be monotone if

⟨u − v, x − y⟩ ≥ 0, ∀ u ∈ T(x), v ∈ T(y).

A monotone operator T is said to be maximal when its graph is not properly contained in the graph of any other monotone operator. The well-known result below has been proved in [24, Theorem 1]. Denote by ir A the relative interior of the set A.

Proposition 2.1. Let T1, T2 be maximal monotone operators. If ir D(T1) ∩ ir D(T2) ≠ ∅, then T1 + T2 is maximal monotone.

We denote by dom(f) = {x ∈ ℝⁿ | f(x) < +∞} the domain of f: ℝⁿ → ℝ ∪ {+∞}, and by f∞ the asymptotic function [2, Definition 2.5.1] associated with f.

It is well known that the existence of solutions of inclusion (1.5) depends on the properties of the distance Dk. For a given distance D, a coercivity property (namely, surjectivity of ∇1D(·, y) for fixed y) is required (see, for instance, [8, Proposition 3]). The result we need to ensure well-definedness of our scheme, stated below, is [3, Proposition 3.1], which establishes the desired surjectivity in our particular setting.

Theorem 2.2 ([3, Proposition 3.1]). Let f: ℝᵐ → ℝ ∪ {+∞} be a closed proper convex function with dom(f) open. Assume that f is differentiable on dom(f) and such that f∞(ξ) = +∞ for all ξ ≠ 0. Let A be an m × n matrix with m ≥ n and rank A = n, and let c ∈ ℝᵐ with (c − A(ℝⁿ)) ∩ dom(f) ≠ ∅; set h(x) := f(c − Ax). Let B: ℝⁿ ⇉ ℝⁿ be a maximal monotone operator such that D(B) ∩ dom(h) ≠ ∅, and set

U(x) := B(x) + ∇h(x).

Then ∇h is onto. Moreover, there exists a solution x of the inclusion 0 ∈ U(x), which is unique if f is strictly convex on its domain.

We describe below the family of regularizations we use. From now on, the function φ: ℝ₊ → (−∞, +∞] is given by

φ(t) := μ h(t) + (ν/2)(t − 1)²,  (2.1)

where h is a closed and proper convex function satisfying the following additional properties:

(1) h is twice continuously differentiable on int(dom h) = (0, +∞),

(2) h is strictly convex on its domain,

(3) limt→0+ h′(t) = −∞,

(4) h(1) = h′(1) = 0 and h″(1) > 0, and

(5) for t > 0,

ρ(1 − 1/t) ≤ h′(t) ≤ ρ(t − 1), where ρ := h″(1).  (2.2)

Items (1)-(4) and (1)-(5) were used in [4] to define, respectively, the families Φ and Φ2. The positive parameters μ, ν shall satisfy the following inequality

ν > ρμ.  (2.3)

Note that the conditions above and (2.2) imply

φ′(t) = μh′(t) + ν(t − 1) ≥ ρμ(1 − 1/t) + ν(t − 1),  (2.4)

therefore limt→∞ φ′(t) = +∞. The generalized distance induced by φ is denoted by dφ(x, y) and defined as:

dφ(x, y) := Σi=1,…,n yi² φ(xi/yi),  (2.5)

where x, y ∈ ℝⁿ₊₊. Since limt→∞ φ′(t) = +∞, it follows that [dφ(·, y)]∞(ξ) = +∞ for all ξ ≠ 0. Denoting by ∇1 the gradient with respect to the first variable, it holds that [∇1dφ(x, y)]i = yi φ′(xi/yi) for all i = 1, ..., n.

The following lemma has a crucial role in the convergence analysis. Its first part has been established in [3]. Define

θ := ν + ρμ and τ := ν − ρμ,  (2.6)

both positive by (2.3).

Lemma 2.3. For all w, z ∈ ℝⁿ₊₊ and v ∈ ℝⁿ₊, it holds that

(i) ⟨v − w, ∇1dφ(w, z)⟩ ≤ (θ/2)(‖v − z‖² − ‖v − w‖²) − (τ/2)‖w − z‖²;

(ii) ⟨v, ∇1dφ(w, z)⟩ ≤ θ‖v‖ ‖w − z‖.

Proof. For part (i), see [3, Lemma 3.4]. We proceed to prove (ii). Since φ(t) = μh(t) + (ν/2)(t − 1)², we have that φ′(t) = μh′(t) + ν(t − 1). By (2.2) and (2.6) we get φ′(t) ≤ (ν + ρμ)(t − 1) = θ(t − 1). Letting t = wi/zi and multiplying both sides by vi zi yields

vi zi φ′(wi/zi) ≤ θ vi (wi − zi)

for all i = 1, ..., n. Therefore, ⟨v, ∇1dφ(w, z)⟩ ≤ θ⟨v, w − z⟩. Using the Cauchy-Schwarz inequality in the expression above, we get (ii).

The result below is known as Hoffman's lemma [16].

Lemma 2.4. Let C = {x ∈ ℝⁿ | Ax ≤ b} and Ck = {x ∈ ℝⁿ | Ax ≤ b + δk}, where A is an m × n matrix with m ≥ n and b, δk ∈ ℝᵐ. Given xk ∈ Ck, there exists a constant α > 0 such that

‖xk − pk‖ ≤ α‖δk‖,

where pk is the projection of xk onto C.
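For a single halfspace the projection, and hence a valid constant α, are explicit; the toy instance below (our own construction, with α = 1/‖a‖) illustrates the bound of Lemma 2.4:

```python
import numpy as np

a, b = np.array([3.0, 4.0]), 1.0       # C = {x : <a, x> <= b}, ||a|| = 5
delta = 0.8                            # Ck = {x : <a, x> <= b + delta}

def proj_C(x):
    # Projection onto a halfspace: move back along a if the constraint is violated
    return x - max(0.0, a @ x - b) / (a @ a) * a

xk = np.array([0.3, 0.2])              # a @ xk = 1.7 <= b + delta = 1.8, so xk in Ck
pk = proj_C(xk)
alpha = 1.0 / np.linalg.norm(a)        # Hoffman constant for a single halfspace
print(np.linalg.norm(xk - pk), alpha * delta)   # 0.14 <= 0.16
assert np.linalg.norm(xk - pk) <= alpha * delta + 1e-12
```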

We recall next two technical results on nonnegative sequences of real numbers. The first one was taken from [22, Chapter 2] and the second from [21, Lemma 3.5].

Lemma 2.5. Let {σk} and {βk} be nonnegative sequences of real numbers satisfying

σk+1 ≤ σk + βk, with Σk βk < +∞.

Then the sequence {σk} converges.

Lemma 2.6. Let {λk} be a sequence of positive numbers and {ak} a sequence of real numbers. Assume that Σj λj = +∞ and set bk := (Σj=1,…,k λj aj)/(Σj=1,…,k λj). Then

(i) lim infk→∞ ak ≤ lim infk→∞ bk ≤ lim supk→∞ bk ≤ lim supk→∞ ak;

(ii) if limk→∞ ak = a < ∞, then limk→∞ bk = a.
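Lemma 2.6 is a weighted Cesàro-mean statement; a quick numerical illustration with our own test sequences (non-summable weights λk and ak → 3):

```python
import numpy as np

k = np.arange(1, 10001)
lam = 1.0 / np.sqrt(k)                   # positive weights with sum(lam) = infinity
a = 3.0 + (-1.0) ** k / k                # a_k -> 3
b = np.cumsum(lam * a) / np.cumsum(lam)  # b_k = weighted average of a_1, ..., a_k
print(a[-1], b[-1])                      # both approach 3, as Lemma 2.6(ii) predicts
```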

In our analysis, we relax the inclusion vk ∈ T(xk) by means of an ε-enlargement of the operator T introduced in [10]: Given T a monotone operator and ε ≥ 0, define

Tε(x) := {v ∈ ℝⁿ | ⟨u − v, y − x⟩ ≥ −ε, ∀ y ∈ ℝⁿ, ∀ u ∈ T(y)}.  (2.7)

This extension has many useful properties, similar to those of the ε-subdifferential of a proper closed convex function f. Indeed, when T = ∂f, we have

∂εf(x) ⊂ (∂f)ε(x) for all x ∈ ℝⁿ and ε ≥ 0.

For an arbitrary maximal monotone operator T, the relation T0(x) = T(x) holds trivially. Furthermore, for ε′ ≥ ε ≥ 0, we have Tε(x) ⊂ Tε′(x). In particular, for each ε ≥ 0, T(x) ⊂ Tε(x) (see [9, Chapter 5] for a detailed study of the properties of Tε).
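To see how the enlargement compares with the ε-subdifferential, consider f(x) = x². In this one-dimensional case both sets have closed forms (the formulas below are our own elementary computations, not taken from the paper): ∂εf(x) = [2x − 2√ε, 2x + 2√ε], while (∂f)ε(x) = [2x − 2√(2ε), 2x + 2√(2ε)], so the enlargement is strictly larger. The sketch checks the defining inequality of (2.7) at the boundary of the larger set:

```python
import numpy as np

x, eps = 1.0, 0.5

sub = (2*x - 2*np.sqrt(eps), 2*x + 2*np.sqrt(eps))       # eps-subdifferential
enl = (2*x - 2*np.sqrt(2*eps), 2*x + 2*np.sqrt(2*eps))   # eps-enlargement
print(sub, enl)                     # the first interval sits inside the second

# Verify <u - v, y - x> >= -eps for v on the boundary of the enlargement,
# sampling u = f'(y) = 2y over a grid of points y:
v = enl[1]
y = np.linspace(x - 10.0, x + 10.0, 100001)
print(np.min((2*y - v) * (y - x)))  # approximately -eps, never below it
```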

3 The algorithm

In this section, we propose an infeasible interior proximal method for the solution of VIP(T, C) (1.1). To state our algorithm formally, we consider

Ck := {x ∈ ℝⁿ | Ax ≤ b + δk},  (3.1)

which is considered a perturbation of the original constraint set C. Moreover, if C ≠ ∅, then C ⊂ int Ck for all k. Since δk → 0 as k → ∞, the sequence of sets {Ck} converges to the set C. Now, if ai denotes the i-th row of the matrix A, for each x ∈ Ck we consider the slack vector y := b + δk − Ax, where

yi = bi + (δk)i − ⟨ai, x⟩, i = 1, …, m.

Therefore, we have the function Dk : int Ck × int Ck → ℝ defined by

Dk(x, x′) := dφ(b + δk − Ax, b + δk − Ax′).

From the definition of dφ, for each xk ∈ int Ck, xk-1 ∈ int Ck-1, we have

∇1Dk(xk, xk-1) = −AT∇1dφ(b + δk − Axk, yk-1),  (3.2)

where yk-1 := b + δk-1 − Axk-1.

In the method proposed in [29] for the convex optimization problem (1.2), with C defined as in (1.3), the exact iteration k is given by:

For λk > 0, δk > 0 and (xk-1, yk-1) ∈ int Ck-1 × ℝᵐ₊₊, find (x, y) ∈ int Ck × ℝᵐ₊₊ and u ∈ ℝⁿ such that

u ∈ ∂f(x),  λk u − AT∇1dφ(y, yk-1) = 0,  y = b + δk − Ax,

where y can be seen as a slack variable associated with x ∈ int Ck. The corresponding inexact iteration in [29] is given by:

u ∈ ∂εk f(x),  λk u − AT∇1dφ(y, yk-1) = 0,  y = b + δk − Ax.  (3.3)

Following the approach in (3.3), the exact version of our algorithm is obtained by replacing ∂f with an arbitrary maximal monotone operator T. Namely, given λk > 0, δk > 0 and (xk-1, yk-1) ∈ int Ck-1 × ℝᵐ₊₊, find (x, y) ∈ int Ck × ℝᵐ₊₊ and u ∈ ℝⁿ such that

u ∈ T(x),  λk u − AT∇1dφ(y, yk-1) = 0,  y = b + δk − Ax.  (3.4)
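As a sanity check, system (3.4) (in the form reconstructed above) can be solved numerically when n = m = 1, since each iteration then reduces to a scalar root-finding problem. The toy instance below is our own construction, not from the paper: T(x) = x − 1, A = [1], b = 0, so C = {x ≤ 0} and the solution of VIP(T, C) is x* = 0; the kernel and parameter values are the illustrative choices used earlier:

```python
import numpy as np
from scipy.optimize import brentq

mu, nu = 1.0, 2.0
def phi_prime(t):                      # phi'(t) for h(t) = t - log t - 1
    return mu * (1.0 - 1.0 / t) + nu * (t - 1.0)

lam = 1.0                              # proximal parameter lambda_k
x, y_prev = 5.0, 1.0                   # any starting point is allowed
delta0 = y_prev - (0.0 - x)            # delta_0 := y0 - (b - A x0) = 6 > 0

for k in range(1, 10):
    delta = delta0 / 2.0 ** k          # decreasing, summable perturbations
    # Solve lam*T(x) - A^T grad_1 d_phi(y, y_prev) = 0 with y = b + delta - A x:
    g = lambda x: lam * (x - 1.0) - y_prev * phi_prime((delta - x) / y_prev)
    x = brentq(g, -1e6, delta - 1e-13) # g is increasing: unique root in int Ck
    y_prev = delta - x                 # the slack stays strictly positive
    print(k, x)

# The iterates track the shrinking boundary delta_k and approach x* = 0.
```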

A detailed comparison with the method in [29] is given in Remark 3.6.

It is important to guarantee the existence of (xk, yk) ∈ int Ck × ℝᵐ₊₊ satisfying (3.4). In fact, the next proposition shows that there exists a unique pair (xk, yk) ∈ int Ck × ℝᵐ₊₊ satisfying (3.4) under the following two assumptions:

(H1) ir C ∩ ir D(T) ≠ ∅;

(H2) rank(A) = n (and therefore A is injective).

Proposition 3.1. Suppose that (H1) and (H2) hold. For every λk > 0, δk > 0 and (xk-1, yk-1) ∈ int Ck-1 × ℝᵐ₊₊, there exists a unique pair (xk, yk) ∈ int Ck × ℝᵐ₊₊ satisfying (3.4).

Proof. Define the operator Uk(x) := T(x) + NCk(x) + λk⁻¹∇h(x), where h := Dk(·, xk-1) and NCk denotes the normal cone of Ck. We prove first that we are in the conditions of Theorem 2.2 for B := T + NCk, f(·) := dφ(·, yk-1) and c := b + δk. Indeed, the operator B is maximal monotone by (H1) and the fact that C ⊂ int Ck (we are using here Proposition 2.1). The function dφ(·, yk-1) is by definition convex, proper and differentiable on its (open) domain, and [dφ(·, yk-1)]∞(ξ) = +∞ for all ξ ≠ 0. By (H2), A has maximal rank. We claim that (b + δk − A(ℝⁿ)) ∩ dom dφ(·, yk-1) ≠ ∅. Indeed, fix x ∈ C. It holds that

b + δk − Ax ≥ δk > 0,

and therefore b + δk − Ax ∈ ℝᵐ₊₊ = dom dφ(·, yk-1).

The only hypothesis that remains to be checked is: D(B) ∩ dom(h) ≠ ∅, where dom(h) = int Ck. Indeed, by (H1) and by the definition of Ck we get

C ∩ D(T) ⊂ int Ck ∩ D(T) ⊂ D(B).

Hence C ∩ D(T) ⊂ D(B) ∩ int Ck = D(B) ∩ dom(h). So the hypotheses of Theorem 2.2 are satisfied, and therefore there exists a solution x* of the inclusion 0 ∈ Uk(x*).

This solution is unique, because dφ(·, yk-1) is strictly convex on its domain. So, there exist uk ∈ T(xk), vk ∈ NCk(xk) and zk = ∇1dφ(b + δk − Axk, yk-1) such that

0 = λk uk + λk vk − AT zk.  (3.7)

Setting yk := b + δk − Axk, we have that yk is also unique. Since yk ∈ ℝᵐ₊₊, it holds that xk ∈ int Ck, thus vk = 0. Hence, by (3.7), there exists a unique pair (xk, yk) ∈ int Ck × ℝᵐ₊₊ satisfying

uk ∈ T(xk),  λk uk − AT∇1dφ(yk, yk-1) = 0,  yk = b + δk − Axk,

which completes the proof.

Remark 3.2. We point out that the previous proposition can be established (with essentially the same proof) if we replace (H1) by the weaker requirement D(T) ∩ int(Ck) ≠ ∅. We will need (H1), however, for proving that our iterates converge to a solution (see Theorem 3.11).

To deal with approximations, we relax the inclusion and the equation of the exact system (3.4) in a way similar to (3.3):

ũ ∈ Tεk(x̃),  λk ũ − AT∇1dφ(ỹ, yk-1) = ek,  ỹ = b + δk − Ax̃,  (3.8)

where Tε is the enlargement of T given in (2.7). In the exact solution, we have εk = 0 and ek = 0. An approximate solution should have εk and ek "small". Our aim is to use a relative error criterion as the one used in [12] to control the sizes of εk and ek. The intuitive idea is to perform an extragradient step from xk-1 to x, using the direction given by (3.9), and then check whether the "error terms" of the iteration, given by εk + ⟨ũ, x̃ − x⟩ and ‖ỹ − y‖, are small enough with respect to the previous step.

Definition 3.3. Let σ ∈ [0, 1) and γ > 0. We say that (x̃, ỹ, ũ, εk) in (3.8) is an approximate solution of system (3.4) with tolerance σ and γ if, for (x, y) given by the extragradient step (3.9), the error bounds (3.10) and (3.11) hold, where τ > 0 is as in (2.6).

Remark 3.4.

(i) Since the domain of dφ(·, yk-1) is ℝᵐ₊₊, for x̃, ỹ, εk and x as in Definition 3.3, it holds that x̃, x ∈ int Ck.

(ii) If (x, y, u) verifies (3.4), then (x, y, u, 0) is an approximate solution of system (3.4) with tolerance σ, γ for any σ ∈ [0, 1) and γ > 0. It is clear that in this case ek = 0. Conversely, if (x̃, ỹ, ũ, εk) is an approximate solution of system (3.4) with tolerance σ = 0 and arbitrary γ > 0, then we must have εk = 0 and (x̃, ỹ, ũ) satisfying (3.4). Indeed, since γ > 0 is arbitrary, we get y = ỹ. Using the fact that A is one-to-one, we get x = x̃. From the fact that σ = 0, we conclude that εk = 0.

(iii) If (H1) and (H2) hold, then by Proposition 3.1 the system (3.8) with ek = 0 and εk = 0 has a solution. By (ii) of this remark, this solution is also an approximate solution.

We describe below our algorithm, called Extragradient Algorithm.

Extragradient Algorithm-EA

Initialize: Take λ̲, λ̄ with 0 < λ̲ ≤ λ̄, σ ∈ [0, 1), γ > 0, x0 ∈ ℝⁿ and y0 ∈ ℝᵐ₊₊ such that δ0 := y0 − (b − Ax0) ∈ ℝᵐ₊₊.

Iteration: For k = 1, 2, ...,

Step 1. Take λk with λ̲ ≤ λk ≤ λ̄ and 0 < δk < δk-1. Find (x̃k, ỹk, ũk, εk) an approximate solution of system (3.4) with tolerance σ, γ (i.e., verifying (3.8) and (3.10)-(3.11)).

Step 2. Compute (xk, yk) by the extragradient update (3.12).

Step 3. Set k := k + 1, and return to Step 1.

Remark 3.5. The parameter λ̲ > 0 ensures that the information on the original problem is taken into account at each iteration. The requirement λ̲ > 0 is standard in the convergence analysis of proximal methods.

Remark 3.6. Our algorithm extends the one in [29]. More precisely, our step coincides with the one in [29] when the following hold.

(i) T = ∂f.

(ii) ek = 0 in (3.8) (so xk = x̃k);

(iii) ũk is chosen in ∂εk f(xk), instead of taken in the (potentially) bigger set (∂f)εk(xk), as we do in (3.8).

From (ii) and (iii), our step allows more freedom in the choice of the next iterate xk. As mentioned earlier, a conceptual difference with the method in [29] is the fact that the sequence {εk} is chosen in a constructive way, so as to ensure convergence. Our choice of each εk is related to the progress of the algorithm at every given step when applied to the given problem. It can be seen that, if (i)-(iii) above hold, then (3.10) forces the summability condition assumed in the convergence results of [29]. Indeed, if (i)-(iii) hold, then xk = x̃k and (3.10) yields the corresponding estimate; using now Proposition 3.7, Corollary 3.8 (see the next section) and Lemma 2.5, we obtain the required summability.

3.1 Convergence analysis

In this section, we prove convergence of the algorithm above. From now on, {xk}, {x̃k}, {ũk}, {yk}, {ỹk}, {εk}, {λk} and {δk} are sequences generated by EA with approximation criteria (3.10)-(3.11). The main result we shall prove is that the sequence {xk} converges to a solution of VIP(T, C).

The next proposition is essential for the convergence analysis; to state it, we need the following further assumption:

(H3) The solution set S of VIP(T, C) is nonempty.

Proposition 3.7. Suppose that (H3) holds and let x̄ ∈ S and ū ∈ T(x̄). Define ȳ := b − Ax̄. Then, for k = 1, 2, ..., inequality (3.13) holds, where θ, τ are as in (2.6) and α is as in Lemma 2.4.

Proof. Fix k > 0 and take (x̃k, ỹk, ũk, εk) as in (3.8). For all (x, u) ∈ G(T) we have that

Therefore,

Using (3.9), (3.10) and (3.12) in the inequality above, we get

Now, using (3.2) we have

where y = b - Ax. Combining the equality above with (3.15), we get

Applying Lemma 2.3 in this inequality yields

The inequality above is valid in particular for (x, u) := (x̄, ū), with x̄ ∈ S and ū ∈ T(x̄) such that ȳ = b − Ax̄; therefore (3.17) holds.

On the other hand, for (x̄, ū) with x̄ ∈ S and ū ∈ T(x̄), we have that ⟨x̄ − x, ū⟩ ≤ 0 for all x ∈ C. Let pk be the projection of x̃k onto C. Since pk ∈ C, we have that ⟨x̄ − pk, ū⟩ ≤ 0, and therefore ⟨x̄ − x̃k, ū⟩ ≤ ⟨pk − x̃k, ū⟩. Using the Cauchy-Schwarz inequality and multiplying by λk > 0, we get

λk⟨x̄ − x̃k, ū⟩ ≤ λk ‖pk − x̃k‖ ‖ū‖.

By Lemma 2.4 we conclude that

λk⟨x̄ − x̃k, ū⟩ ≤ α λk ‖δk‖ ‖ū‖  (3.18)

for some α > 0. Combining (3.17) and (3.18), we get (3.19).

The next corollary guarantees boundedness of the sequence {‖yk − yk-1‖}.

Corollary 3.8. Suppose that (H3) holds. Then the sequence {‖yk − yk-1‖} is bounded.

Proof. Assume that the sequence {‖yk − yk-1‖} is unbounded. Then there is a subsequence {‖yk − yk-1‖}k∈K such that ‖yk − yk-1‖ → ∞ for k ∈ K, whereas the complementary subsequence {‖yk − yk-1‖}k∉K is bounded (note that this complementary subsequence could be finite or even empty). From (3.19), we have

Summing up the inequalities (3.20) over k = 1,2, ..., n gives

Set

and

So the above inequality can be re-written as

Because {‖δk‖} is summable, we conclude that {cn} is bounded above. We show now that {an} is bounded below and that (3.21) then yields a contradiction. This will complete the proof. Since {‖yk − yk-1‖}k∉K is bounded, there is L such that ‖yk − yk-1‖ ≤ L for all k ∉ K; then

summing up the inequalities, we have

it follows that the sequence {an} is bounded below, because {‖δk‖} is summable. Since on K the sequence {‖yk − yk-1‖} is unbounded and {‖δk‖} converges to zero, there exists a k0 ∈ K such that

therefore,

from which the announced contradiction with (3.21) follows.

Remark 3.9. We point out that the upper bound λk ≤ λ̄ used in Corollary 3.8 can be weakened; we have chosen the stronger requirement for simplicity of the presentation. The assumption λk ≤ λ̄ is not used in any of the remaining results.

Corollary 3.10. Suppose that (H3) holds. Then, for x̄, ū, ȳ as in Proposition 3.7, it holds that

(i) {‖yk − ȳ‖} converges (and hence {yk} is bounded);

(ii) limk ‖yk − yk-1‖ = 0;

(iii) {‖A(xk − x̄)‖} converges (hence {‖xk − x̄‖} converges and {xk} is bounded);

(iv) limk ‖x̃k − xk‖ = 0;

(v) {x̃k} is bounded.

Proof.

(i) From (3.13) we have that

Define

Since {‖yk − yk-1‖} is bounded by Corollary 3.8 and {‖δk‖} is summable, it follows that Σk βk < +∞. Therefore the sequences {σk} and {βk} are in the conditions of Lemma 2.5. This implies that {‖yk − ȳ‖} converges and therefore {yk} is bounded.

(ii) It follows from (i) and Proposition 3.7 that Σk ‖yk − yk-1‖² < +∞; therefore limk ‖yk − yk-1‖ = 0.

(iii) Since yk − ȳ = A(x̄ − xk) + δk, we get that

‖yk − ȳ‖ − ‖δk‖ ≤ ‖A(x̄ − xk)‖ ≤ ‖yk − ȳ‖ + ‖δk‖.

Since {‖yk − ȳ‖} is convergent and {‖δk‖} converges to zero, we conclude from the expression above that {‖A(x̄ − xk)‖} is also convergent. By (H2), the function u ↦ ‖u‖A := ‖Au‖ is a norm on ℝⁿ, and then it follows that {‖xk − x̄‖} converges and therefore {xk} is bounded.

(iv) From (ii) and (3.11), it follows that limk→∞ ‖ỹk − yk‖ = 0. Therefore limk→∞ ‖A(x̃k − xk)‖ = 0. Again, the assumptions on A imply that limk→∞ ‖x̃k − xk‖ = 0. Item (v) follows from (iii) and (iv).

We show below that the sequence {xk} converges to a solution of VIP(T, C). Denote by Acc(zk) the set of accumulation points of the sequence {zk}.

Theorem 3.11. Suppose that (H1)-(H3) hold. Then {xk} converges to an element of S.

Proof. By Corollary 3.10 (iii) and (iv), Acc(x̃k) = Acc(xk) ≠ ∅. We prove first that every element of Acc(x̃k) = Acc(xk) is a solution of VIP(T, C). Indeed, by (3.16), for all (x, u) ∈ G(T) it holds

Using Corollary 3.10 (i)-(iii), we have that {xk} and {yk} are bounded and limk ‖yk − yk-1‖ = 0. These facts, together with the identity

‖yk − y‖² − ‖yk-1 − y‖² = ‖yk − yk-1‖² + 2⟨yk − yk-1, yk-1 − y⟩,

yield

Let {x̃kj} be a subsequence of {x̃k} converging to x*; we have that

Using the above inequality and the fact that λk ≥ λ̲ > 0, we obtain the following expression by taking limits for k → ∞ in (3.23):

By definition, ỹkj = b + δkj − Ax̃kj, with ỹkj > 0. We know that {ỹkj} converges to y* = b − Ax*, with y* ≥ 0. Therefore Ax* ≤ b; equivalently, x* ∈ C. By definition of NC, we have

Combining (3.25) and (3.26), we conclude that

⟨x − x*, u + w⟩ ≥ 0 ∀ (x, u + w) ∈ G(T + NC).

By (H1) and Proposition 2.1, T + NC is maximal monotone. Then the above inequality implies that 0 ∈ (T + NC)(x*), i.e., x* ∈ S. Recall that x* is also an accumulation point of {xk}. Using Corollary 3.10 (iii), we have that the sequence {‖x* − xk‖} is convergent. Since it has a subsequence that converges to zero, the whole sequence {‖x* − xk‖} must converge to zero. This completes the proof.

4 Conclusion

We propose an infeasible interior point method with log-quadratic proximal regularization for solving variational inequality problems. Our algorithm can start from any point of the space (note that δ0 can be chosen arbitrarily large, as long as the sequence {‖δk‖} is summable). We also introduce a relative error criterion which can be checked at each iteration, as the one in [12].

Moreover, our method can be applied even when the interior of C is empty. We show convergence under assumptions similar to those of the classical log-quadratic proximal method, where the set C is required to have nonempty interior.

We acknowledge the fact that the scheme we propose is mainly theoretical. We point out that no numerical results are available for the method in [29]. However, by Remark 3.6, our inexact iteration includes the one in [29] as a particular case, and thus our step is likely to be computationally cheaper than the one in [29]. A full numerical implementation of our method and a comparison with [29] remain fundamental questions, which are the subject of future investigations.

Received: 27/XI/07. Accepted: 31/VII/08.


  • [1] A. Auslender and M. Haddou, An interior proximal method for convex linearly constrained problems and its extension to variational inequalities. Mathematical Programming, 71 (1995), 77-100.
  • [2] A. Auslender and M. Teboulle, Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer Monographs in Mathematics, Springer-Verlag, New York (2003).
  • [3] A. Auslender, M. Teboulle and S. Ben-Tiba, A logarithmic-quadratic proximal method for variational inequalities. Computational Optimization and Applications, 12 (1999), 31-40.
  • [4] A. Auslender, M. Teboulle and S. Ben-Tiba, Interior proximal and multiplier methods based on second order homogeneous kernels. Mathematics of Operations Research, 24 (1999), 645-668.
  • [5] A. Ben-Tal and M. Zibulevsky, Penalty-barrier methods for convex programming problems. SIAM Journal on Optimization, 7 (1997), 347-366.
  • [6] A. Brøndsted and R.T. Rockafellar, On the subdifferentiability of convex functions. Proceedings of the American Mathematical Society, 16 (1965), 605-611.
  • [7] F.E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proceedings of Symposia in Pure Mathematics, 18, American Mathematical Society (1976).
  • [8] R.S. Burachik and A.N. Iusem, A generalized proximal point method for the variational inequality problem in a Hilbert space. SIAM Journal on Optimization, 8 (1998), 197-216.
  • [9] R.S. Burachik and A.N. Iusem, Set-Valued Mappings and Enlargements of Monotone Operators. Springer Optimization and its Applications, 8, Springer, New York (2008).
  • [10] R.S. Burachik, A.N. Iusem and B.F. Svaiter, Enlargements of maximal monotone operators with applications to variational inequalities. Set-Valued Analysis, 5 (1997), 159-180.
  • [11] R.S. Burachik, C.A. Sagastizábal and B.F. Svaiter, Epsilon-enlargement of maximal monotone operators with application to variational inequalities. In: Reformulation - Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, M. Fukushima and L. Qi (editors), Kluwer Academic Publishers (1997).
  • [12] R.S. Burachik and B.F. Svaiter, A relative error tolerance for a family of generalized proximal point methods. Mathematics of Operations Research, 26 (2001), 816-831.
  • [13] Y. Censor, A.N. Iusem and S. Zenios, An interior point method with Bregman functions for the variational inequality problem with paramonotone operators. Mathematical Programming, 83 (1998), 113-123.
  • [14] J. Eckstein, Approximate iterations in Bregman-function-based proximal algorithms. Mathematical Programming, 81 (1998), 373-400.
  • [15] P. Eggermont, Multiplicative iterative algorithms for convex programming. Linear Algebra and its Applications, 130 (1990), 25-42.
  • [16] A.J. Hoffman, On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 49 (1952), 263-265.
  • [17] A.N. Iusem, On some properties of paramonotone operators. Mathematics of Operations Research, 19 (1994), 790-814.
  • [18] A.N. Iusem, B.F. Svaiter and M. Teboulle, Entropy-like proximal methods in convex programming. Journal of Convex Analysis, 5 (1998), 269-278.
  • [19] A.N. Iusem and M. Teboulle, Convergence rate analysis of nonquadratic proximal and augmented Lagrangian methods for convex and linear programming. Mathematics of Operations Research, 20 (1995), 657-677.
  • [20] K.C. Kiwiel, Proximal minimization methods with generalized Bregman functions. SIAM Journal on Control and Optimization, 35 (1997), 1142-1168.
  • [21] B. Lemaire, On the convergence of some iterative methods for convex minimization. In: Recent Developments in Optimization, R. Durier and C. Michelot (eds.), Lecture Notes in Economics and Mathematical Systems, 429, Springer-Verlag (1995), 252-268.
  • [22] B.T. Polyak, Introduction to Optimization. Optimization Software Inc., New York (1987).
  • [23] D. Pascali and S. Sburlan, Nonlinear Mappings of Monotone Type. Editura Academiei, Bucharest, Romania (1978).
  • [24] R.T. Rockafellar, On the maximality of sums of nonlinear monotone operators. Transactions of the American Mathematical Society, 149 (1970), 75-88.
  • [25] M.V. Solodov and B.F. Svaiter, An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Mathematics of Operations Research, 25 (2000), 214-230.
  • [26] M. Teboulle, Entropic proximal mappings with applications to nonlinear programming. Mathematics of Operations Research, 17 (1992), 670-690.
  • [27] M. Teboulle, Convergence of proximal-like algorithms. SIAM Journal on Optimization, 7 (1997), 1069-1083.
  • [28] P. Tseng and D. Bertsekas, On the convergence of the exponential multiplier method for convex programming. Mathematical Programming, 60 (1993), 1-19.
  • [29] N. Yamashita, C. Kanzow, T. Morimoto and M. Fukushima, An infeasible interior proximal method for convex programming problems with linear constraints. Journal of Nonlinear and Convex Analysis, 2 (2001), 139-156.