
## Computational & Applied Mathematics

*Print version* ISSN 2238-3603 · *On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.28 no.1 São Carlos 2009

#### http://dx.doi.org/10.1590/S0101-82052009000100002

**An inexact interior point proximal method for the variational inequality problem**

**Regina S. Burachik^{I,}^{*}; Jurandir O. Lopes^{II,}^{†}; Geci J.P. Da Silva^{III,}^{‡}**

^{I}School of Mathematics and Statistics, University of South Australia, Mawson Lakes, S.A. 5095, Australia. E-mail: regina.burachik@unisa.edu.au

^{II}Universidade Federal do Piauí, CCN-UFPI, CP 64.000, 64000-000 Teresina, PI, Brazil. E-mail: jurandir@ufpi.br

^{III}Universidade Federal de Goiás, IME/UFG, CP 131, 74001-970 Goiânia, GO, Brazil. E-mail: geci@mat.ufg.br

**ABSTRACT**

We propose an infeasible interior proximal method for solving variational inequality problems with maximal monotone operators and linear constraints. The interior proximal method proposed by Auslender, Teboulle and Ben-Tiba [3] is a proximal method using a distance-like barrier function and it has a global convergence property under mild assumptions. However, this method is applicable only to problems whose feasible region has nonempty interior. The algorithm we propose is applicable to problems whose feasible region may have empty interior. Moreover, a new kind of inexact scheme is used. We present a full convergence analysis for our algorithm.

**Mathematical subject classification:** 90C51, 65K10, 47J20, 49J40, 49J52, 49J53.

**Key words:** maximal monotone operators, outer approximation algorithm, interior point method, global convergence.

**1 Introduction**

Let *C* ⊂ ℝ^{n} be a closed and convex set, and *T*: ℝ^{n} ⇉ ℝ^{n} be a maximal monotone point-to-set operator. We consider the *variational inequality problem associated with T and C*: Find *x*^{*} ∈ *C* such that there exists *u*^{*} ∈ *T*(*x*^{*}) satisfying

〈*u*^{*}, *x* − *x*^{*}〉 ≥ 0 for all *x* ∈ *C*. (1.1)

This problem is denoted by *VIP*(*T, C*). In the particular case in which *T* is the subdifferential of a proper, convex and lower semicontinuous function *f*: ℝ^{n} → ℝ ∪ {+∞}, (1.1) reduces to the *constrained convex optimization problem*: Find *x*^{*} ∈ *C* such that

*f*(*x*^{*}) ≤ *f*(*x*) for all *x* ∈ *C*. (1.2)

We are concerned with *C* a polyhedral set in ℝ^{n} defined by

*C* := {*x* ∈ ℝ^{n} | *Ax* ≤ *b*}, (1.3)

where *A* is an *m* × *n* real matrix, *b* ∈ ℝ^{m} and *m* ≥ *n*. Well-known methods for solving *VIP*(*T, C*) are the so-called *generalized proximal* schemes, which involve a regularization term that incorporates the constraint set *C* in such a way that all the subproblems have solutions in the interior of *C*. For this reason, these methods are also called *interior proximal methods*. Examples of these regularizing functionals are the Bregman distances (see, e.g., [1, 8, 13, 14, 20, 25]), *φ*-divergences ([26, 5, 15, 18, 19, 27, 28]) and *log-quadratic* regularizations ([3, 4]). Being *interior* point methods, it is a basic assumption that the topological interior of *C* is nonempty. Otherwise, the iterates are not well-defined. However, a set *C* as above may well have empty interior. In order to solve problem (1.2) for an arbitrary set *C* ≠ ∅ of the kind given in (1.3), Yamashita et al. [29] devised an interior-point scheme in which the subproblems deal with a constraint set *C*^{k} given by

*C*^{k} := {*x* ∈ ℝ^{n} | *Ax* ≤ *b* + δ^{k}}, (1.4)

where the vectors δ^{k} ∈ ℝ^{m} have positive coordinates and decrease to zero as *k* → ∞. So, if *C* ≠ ∅, it holds *C* ⊂ int *C*^{k} and hence a regularizing functional can be associated with the set *C*^{k}. Denote by *D*_{k} the regularization functional proposed in [3, 4] (associated with the set *C*^{k}, which has non-empty interior) and by ∇_{1}*D*_{k} the derivative of *D*_{k} with respect to its first argument. The subproblems in [29] find an approximate solution *x*^{k} ∈ int *C*^{k} of the inclusion

0 ∈ λ_{k} ∂_{ε^{k}}*f*(*x*) + ∇_{1}*D*_{k}(*x*, *x*^{k-1}),

where λ_{k} > 0 and ∂_{ε}*f* is the *ε*-subdifferential of *f* [6]. Yamashita et al. prove in [29] convergence under summability assumptions on the "error" sequences {ε^{k}} and {δ^{k}}. One drawback of conditions of this kind is that there may be no constructive way to enforce them. Indeed, there exist infinitely many summable sequences, and it is not specified how to choose them at a specific iteration and for the given problem so as to ensure convergence. From the algorithmic standpoint, one would prefer a computable error tolerance condition which is related to the progress of the algorithm at every given step when applied to the given problem. This is one of the main motivations of our approach (see condition (3.10) below), where we choose each ε_{k} so as to verify a specific condition at each iteration *k*.

Moreover, we also extend the scheme given in [29] to the more general problem (1.1). Namely, we are concerned with iterations of the kind: Find an approximate solution *x*^{k} ∈ int *C*^{k} of

0 ∈ λ_{k} *T*^{ε_{k}}(*x*) + ∇_{1}*D*_{k}(*x*, *x*^{k-1}), (1.5)

where λ_{k} > 0 and *T*^{ε} is an enlargement of the operator *T* [11, 10]. We impose no summability assumption on the parameters {ε_{k}}. Instead, we define a criterion which can be checked at each iteration. On the other hand, we do need summability of the sequence {δ^{k}}.

Our relative error analysis is inspired by the one given in [12], which yields a more practical framework. The convergence analysis presented in [29] (which considers the optimization problem (1.2)) requires an additional assumption involving the sequence of iterates generated by the method, the function *f*, and the orthogonal projection *P*_{C} onto *C*. We make no assumptions of this kind in our analysis. Another difference between [29] and the present paper is that we allow more degrees of freedom in the definition of the inexact step. See Remark 3.6 for a detailed comparison with [29].

The paper is organized as follows. In Section 2 we give some basic definitions and properties of the family of regularizations, as well as some known results on monotone operators. In the same section, the enlargement *T*^{ε} is reviewed, together with its elementary features. In Section 3, we describe the algorithm, prove its well-definedness and give its inexact version. The convergence analysis is presented in Section 3.1, and in Section 4 we give some conclusions.

**2 Basic assumptions and properties**

A point-to-set valued map *T*: ℝ^{n} ⇉ ℝ^{n} is an operator which associates with each point *x* ∈ ℝ^{n} a set (possibly empty) *T*(*x*) ⊂ ℝ^{n}. The domain and the graph of a point-to-set valued map *T* are defined as:

*D*(*T*) := {*x* ∈ ℝ^{n} | *T*(*x*) ≠ ∅},

*G*(*T*) := {(*x, v*) ∈ ℝ^{n} × ℝ^{n} | *x* ∈ *D*(*T*), *v* ∈ *T*(*x*)}.

A point-to-set operator *T* is said to be *monotone* if

〈*u* − *v*, *x* − *y*〉 ≥ 0, ∀ *u* ∈ *T*(*x*), *v* ∈ *T*(*y*).

A monotone operator *T* is said to be *maximal* when its graph is not properly contained in the graph of any other monotone operator. The well-known result below has been proved in [24, Theorem 1]. Denote by ri *A* the relative interior of the set *A*.

**Proposition 2.1.** *Let T*_{1}, *T*_{2} *be maximal monotone operators. If* ri *D*(*T*_{1}) ∩ ri *D*(*T*_{2}) ≠ ∅*, then T*_{1} + *T*_{2} *is maximal monotone.*

We denote by dom(*f*) = {*x* ∈ ℝ^{n} | *f*(*x*) < +∞} the domain of *f*: ℝ^{n} → ℝ ∪ {+∞} and by *f*_{∞} the asymptotic function [2, Definition 2.5.1] associated with the function *f*.

It is well known that the existence of solutions of inclusion (1.5) depends on the properties of the distance *D*_{k}. For a given distance *D*, a coercivity property (namely, surjectivity of ∇_{1}*D*(·, *y*) for fixed *y*) is required (see, for instance, [8, Proposition 3]). The result we need to ensure well-definedness of our scheme, stated below, is [3, Proposition 3.1], which establishes the desired surjectivity in our particular setting.

**Theorem 2.2** ([3, Proposition 3.1]). *Let f*: ℝ^{n} → ℝ ∪ {+∞} *be a closed proper convex function with* dom(*f*) *open. Assume that f is differentiable on* dom(*f*) *and such that f*_{∞}(ξ) = +∞ *for all* ξ ≠ 0. *Let A be an m* × *n matrix with m* ≥ *n and* rank *A* = *n*, b̂ ∈ ℝ^{m} *with* (b̂ − *A*(ℝ^{n})) ∩ dom(*f*) ≠ ∅*, and set h*(*x*) := *f*(b̂ − *Ax*). *Let* T̂: ℝ^{n} ⇉ ℝ^{n} *be a maximal monotone operator such that D*(T̂) ∩ dom(*h*) ≠ ∅*, and set*

*U*(*x*) := T̂(*x*) + ∇*h*(*x*).

*Then* ∇*h is onto. Moreover, there exists a solution x of the inclusion* 0 ∈ *U*(*x*)*, which is unique if f is strictly convex on its domain.*

We describe below the family of regularizations we use. From now on, the function *φ*: ℝ_{++} → (−∞, ∞] is given by

*φ*(*t*) := *µh*(*t*) + (*ν*/2)(*t* − 1)^{2},

where *h* is a closed and proper convex function satisfying the following additional properties:

(1) *h* is twice continuously differentiable on int(dom *h*) = (0, +∞);

(2) *h* is strictly convex on its domain;

(3) lim_{t→0^{+}} *h'*(*t*) = −∞;

(4) *h*(1) = *h'*(1) = 0 and *h"*(1) > 0;

(5) for *t* > 0, *h"*(1)(1 − 1/*t*) ≤ *h'*(*t*) ≤ *h"*(1)(*t* − 1).

Items (1)-(4) and (1)-(5) were used in [4] to define, respectively, the families Φ and Φ_{2}. The positive parameters *µ*, *ν* shall satisfy the following inequality

*ν* > ρ*µ*, where ρ := *h"*(1). (2.2)

Note that the conditions above and (2.2) imply that *φ'* is increasing, with *φ'*(*t*) ≥ (*ν* − ρ*µ*)(*t* − 1) for *t* ≥ 1,

therefore lim_{t→∞} *φ'*(*t*) = +∞. The generalized distance induced by *φ* is denoted by *d*_{φ}(*x, y*) and defined as

*d*_{φ}(*x, y*) := Σ_{i=1}^{n} *y*_{i}^{2} *φ*(*x*_{i}/*y*_{i}),

where *x, y* ∈ ℝ^{n}_{++}. Since lim_{t→∞} *φ'*(*t*) = +∞, it follows that [*d*_{φ}(·, *y*)]_{∞}(ξ) = +∞ for all ξ ≠ 0. Denoting by ∇_{1} the gradient with respect to the first variable, it holds that [∇_{1}*d*_{φ}(*x, y*)]_{i} = *y*_{i}*φ'*(*x*_{i}/*y*_{i}) for all *i* = 1, …, *n*.
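To make the construction concrete, the following sketch evaluates *d*_{φ} and ∇_{1}*d*_{φ} for the kernel *h*(*t*) = *t* − 1 − log *t* (which satisfies properties (1)-(5), with *h"*(1) = 1). The parameter choices *µ* = *ν* = 1 are illustrative assumptions, not prescribed by the paper.

```python
import math

MU, NU = 1.0, 1.0  # illustrative choices of the parameters µ, ν

def h(t):
    # log-quadratic kernel: h(1) = h'(1) = 0, h''(1) = 1, h'(t) -> -inf as t -> 0+
    return t - 1.0 - math.log(t)

def phi(t):
    # φ(t) = µ h(t) + (ν/2)(t - 1)^2
    return MU * h(t) + 0.5 * NU * (t - 1.0) ** 2

def phi_prime(t):
    # φ'(t) = µ h'(t) + ν (t - 1), with h'(t) = 1 - 1/t
    return MU * (1.0 - 1.0 / t) + NU * (t - 1.0)

def d_phi(x, y):
    # d_φ(x, y) = Σ_i y_i^2 φ(x_i / y_i), for componentwise positive x, y
    return sum(yi ** 2 * phi(xi / yi) for xi, yi in zip(x, y))

def grad1_d_phi(x, y):
    # [∇₁ d_φ(x, y)]_i = y_i φ'(x_i / y_i)
    return [yi * phi_prime(xi / yi) for xi, yi in zip(x, y)]
```

Note that *d*_{φ}(*y*, *y*) = 0 and ∇_{1}*d*_{φ}(*y*, *y*) = 0, while *d*_{φ}(*x*, *y*) → +∞ as some *x*_{i} → 0^{+}: the logarithmic term acts as a barrier keeping the iterates in the positive orthant.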

The following lemma has a crucial role in the convergence analysis. Its first part has been established in [3]. Define, with ρ := *h"*(1),

θ := *ν* + ρ*µ*, and let τ > 0 be the companion constant of [3, Lemma 3.4]. (2.6)

**Lemma 2.3.** *For all w, z* ∈ ℝ^{n}_{++} *and v* ∈ ℝ^{n}*, it holds that* (i) *the estimate of* [3, Lemma 3.4] *holds, and* (ii) 〈*v*, ∇_{1}*d*_{φ}(*w, z*)〉 ≤ θ ||*v*|| ||*w* − *z*||.

**Proof.** For part (i), see [3, Lemma 3.4]. We proceed to prove (ii). Since *φ*(*t*) = *µh*(*t*) + (*ν*/2)(*t* − 1)^{2}, we have that *φ'*(*t*) = *µh'*(*t*) + *ν*(*t* − 1). By (2.2) and (2.6) we get *φ'*(*t*) ≤ (*ν* + ρ*µ*)(*t* − 1). Letting *t* = *w*_{i}/*z*_{i} and multiplying both sides by *v*_{i}*z*_{i} yield

*v*_{i}*z*_{i}*φ'*(*w*_{i}/*z*_{i}) ≤ θ*v*_{i}(*w*_{i} − *z*_{i})

for all *i* = 1, …, *n*. Therefore, 〈*v*, ∇_{1}*d*_{φ}(*w, z*)〉 ≤ θ〈*v*, *w* − *z*〉. Using the Cauchy-Schwarz inequality in the expression above, we get (ii).

□

The result below is known as Hoffman's lemma [16].

**Lemma 2.4.** *Let C* = {*x* ∈ ℝ^{n} | *Ax* ≤ *b*} *and C*^{k} = {*x* ∈ ℝ^{n} | *Ax* ≤ *b* + δ^{k}}*, where A is an m* × *n matrix with m* ≥ *n and b*, δ^{k} ∈ ℝ^{m}*. Given x*^{k} ∈ *C*^{k}*, there exists a constant* α > 0*, depending only on A, such that*

||*x*^{k} − *p*^{k}|| ≤ α||δ^{k}||,

*where p*^{k} *is the projection of x*^{k} *onto C.*
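For intuition, consider the special case *A* = *I*, so that *C* = {*x* | *x* ≤ *b*} and the projection is a componentwise clipping; there the bound of Lemma 2.4 holds with α = 1, since *x*^{k} − *p*^{k} = (*x*^{k} − *b*)_{+} ≤ δ^{k} componentwise. The sketch below checks this special case numerically; for a general *A* the constant α is the Hoffman constant of the system and depends on *A*.

```python
def project_onto_box_C(x, b):
    # Projection onto C = {x : x <= b} when A = I: clip each coordinate at b_i.
    return [min(xi, bi) for xi, bi in zip(x, b)]

def check_hoffman_bound(x, b, delta):
    # x is assumed to lie in C^k = {x : x <= b + delta};
    # verify ||x - p|| <= alpha * ||delta|| with alpha = 1 (the A = I case).
    p = project_onto_box_C(x, b)
    dist = sum((xi - pi) ** 2 for xi, pi in zip(x, p)) ** 0.5
    return dist <= sum(di ** 2 for di in delta) ** 0.5 + 1e-12

b = [1.0, 0.0, 2.0]
delta = [0.5, 0.5, 0.5]
x_in_Ck = [1.5, -1.0, 2.2]  # violates x <= b in coordinates 0 and 2, but lies in C^k
```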

We recall next two technical results on nonnegative sequences of real numbers. The first one was taken from [22, Chapter 2] and the second from [21, Lemma 3.5].

**Lemma 2.5.** *Let* {σ_{k}} *and* {β_{k}} *be nonnegative sequences of real numbers satisfying*

σ_{k+1} ≤ σ_{k} + β_{k} *and* Σ_{k} β_{k} < ∞.

*Then the sequence* {σ_{k}} *converges.*

**Lemma 2.6.** *Let* {λ_{k}} *be a sequence of positive numbers and* {*a*_{k}} *a sequence of real numbers. Define b*_{n} := (Σ_{k=1}^{n} λ_{k}*a*_{k})/(Σ_{k=1}^{n} λ_{k})*. If* Σ_{k=1}^{∞} λ_{k} = ∞*, then*

(i) lim inf_{k→∞} *a*_{k} ≤ lim inf_{k→∞} *b*_{k} ≤ lim sup_{k→∞} *b*_{k} ≤ lim sup_{k→∞} *a*_{k};

(ii) *if* lim_{k→∞} *a*_{k} = *a* < ∞*, then* lim_{k→∞} *b*_{k} = *a*.
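Lemma 2.6 is a Toeplitz-type averaging result: when Σλ_{k} = ∞, the λ-weighted averages *b*_{n} inherit the limit of {*a*_{k}}. A quick numerical check, with the illustrative choices λ_{k} = 1 and *a*_{k} = *a* + 1/*k*:

```python
def weighted_averages(lams, a):
    # b_n = (sum_{k<=n} λ_k a_k) / (sum_{k<=n} λ_k), as in Lemma 2.6
    b, num, den = [], 0.0, 0.0
    for lam, ak in zip(lams, a):
        num += lam * ak
        den += lam
        b.append(num / den)
    return b

n = 100000
a_limit = 3.0
a_seq = [a_limit + 1.0 / k for k in range(1, n + 1)]  # a_k -> 3
b_seq = weighted_averages([1.0] * n, a_seq)           # λ_k = 1, Σλ_k = ∞
```

The last average differs from the limit by H_{n}/*n* (a harmonic sum over *n*), which tends to zero as the lemma predicts.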

In our analysis, we relax the inclusion *v*^{k} ∈ *T*(*x*^{k}) by means of an *ε*-enlargement of the operator *T*, introduced in [10]: Given *T* a monotone operator, define

*T*^{ε}(*x*) := {*v* ∈ ℝ^{n} | 〈*v* − *u*, *x* − *y*〉 ≥ −ε for all (*y, u*) ∈ *G*(*T*)}. (2.7)

This extension has many useful properties, similar to the *ε*-subdifferential of a proper closed convex function *f*. Indeed, when *T* = ∂*f*, we have

∂_{ε}*f*(*x*) ⊂ (∂*f*)^{ε}(*x*) for all *x* ∈ ℝ^{n} and ε ≥ 0.

For an arbitrary maximal monotone operator *T*, the relation *T*^{0}(*x*) = *T*(*x*) holds trivially. Furthermore, for *ε'* ≥ *ε* ≥ 0, we have *T*^{ε}(*x*) ⊂ *T*^{ε'}(*x*). In particular, for each *ε* ≥ 0, *T*(*x*) ⊂ *T*^{ε}(*x*) (see [9, Chapter 5] for a detailed study of the properties of *T*^{ε}).
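For intuition, take *T* = ∂*f* with *f*(*x*) = *x*^{2} on ℝ, so *T*(*x*) = {2*x*}. The sketch below tests membership in *T*^{ε}(*x*) and in ∂_{ε}*f*(*x*) on a grid of test points *y*, and exhibits a *v* belonging to *T*^{ε}(*x*) but not to ∂_{ε}*f*(*x*): for *T* = ∂*f* the enlargement is in general strictly larger than the ε-subdifferential. The finite grid is, of course, only a numerical surrogate for the "for all *y*" quantifier.

```python
def in_T_eps(v, x, eps, grid):
    # v ∈ T^ε(x) iff <v - u, x - y> >= -eps for all (y, u) in G(T); here T(y) = {2y}.
    return all((v - 2.0 * y) * (x - y) >= -eps for y in grid)

def in_subdiff_eps(v, x, eps, grid):
    # v ∈ ∂_ε f(x) iff f(y) >= f(x) + v (y - x) - eps for all y; here f(y) = y^2.
    return all(y * y >= x * x + v * (y - x) - eps for y in grid)

grid = [i / 100.0 for i in range(-500, 501)]  # y in [-5, 5], step 0.01
x, eps = 1.0, 0.25
# Exact computation gives ∂_ε f(1) = [1, 3] while (∂f)^ε(1) = [2 - √2, 2 + √2],
# so v = 3.2 lies in the enlargement but outside the ε-subdifferential.
```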

**3 The algorithm**

In this section, we propose an infeasible interior proximal method for the solution of *VIP*(*T, C*) (1.1). To state our algorithm formally, we consider

*C*^{k} := {*x* ∈ ℝ^{n} | *Ax* ≤ *b* + δ^{k}}, (3.1)

which is a perturbation of the original constraint set *C*. Moreover, if *C* ≠ ∅, then *C* ⊂ int *C*^{k} ≠ ∅ for all *k*. Since δ^{k} → 0 as *k* → ∞, the sequence of sets {*C*^{k}} converges to the set *C*. Now, if *a*_{i} denotes row *i* of the matrix *A*, for each *x* ∈ *C*^{k} we consider the slack vector *y* := *b* + δ^{k} − *Ax*, where *y*_{i} = *b*_{i} + δ^{k}_{i} − 〈*a*_{i}, *x*〉 > 0 whenever *x* ∈ int *C*^{k}. Therefore, we have the function *D*_{k}: int *C*^{k} × int *C*^{k-1} → ℝ defined by

*D*_{k}(*x, z*) := *d*_{φ}(*b* + δ^{k} − *Ax*, *b* + δ^{k-1} − *Az*). (3.2)

From the definition of *d*_{φ}, for each *x*^{k} ∈ int *C*^{k} and *x*^{k-1} ∈ int *C*^{k-1}, we have *D*_{k}(*x*^{k}, *x*^{k-1}) = *d*_{φ}(*y*^{k}, *y*^{k-1}), where *y*^{k} := *b* + δ^{k} − *Ax*^{k}.

In the method proposed in [29] for the convex optimization problem (1.2), with *C* defined as in (1.3), the exact iteration *k* is given by: For λ_{k} > 0, δ^{k} > 0 and (*x*^{k-1}, *y*^{k-1}) ∈ int *C*^{k-1} × ℝ^{m}_{++}, find (*x, y*) ∈ int *C*^{k} × ℝ^{m}_{++} and *u* ∈ ℝ^{n} such that

*u* ∈ ∂*f*(*x*), λ_{k}*u* = *A*^{T}∇_{1}*d*_{φ}(*y*, *y*^{k-1}), *y* = *b* + δ^{k} − *Ax*, (3.3)

where *y* ∈ ℝ^{m}_{++} can be seen as a slack variable associated with *x* ∈ int *C*^{k}. The corresponding *inexact* iteration in [29] replaces ∂*f* in (3.3) by the ε-subdifferential ∂_{ε^{k}}*f*, with {ε^{k}} summable.

Following the approach in (3.3), the exact version of our algorithm is obtained by replacing ∂*f* with an arbitrary maximal monotone operator *T*. Namely, given λ_{k} > 0, δ^{k} > 0 and (*x*^{k-1}, *y*^{k-1}) ∈ int *C*^{k-1} × ℝ^{m}_{++}, find (*x, y*) ∈ int *C*^{k} × ℝ^{m}_{++} and *u* ∈ ℝ^{n} such that

*u* ∈ *T*(*x*), λ_{k}*u* = *A*^{T}∇_{1}*d*_{φ}(*y*, *y*^{k-1}), *y* = *b* + δ^{k} − *Ax*. (3.4)

A detailed comparison with the method in [29] is given in Remark 3.6.

It is important to guarantee the existence of (*x*^{k}, *y*^{k}) ∈ int *C*^{k} × ℝ^{m}_{++} satisfying (3.4). In fact, the next proposition shows that such a pair exists and is unique under the following two assumptions:

(*H*_{1}) ri *C* ∩ ri *D*(*T*) ≠ ∅;

(*H*_{2}) rank(*A*) = *n* (and therefore *A* is injective).

**Proposition 3.1.** *Suppose that* (*H*_{1}) *and* (*H*_{2}) *hold. For every* λ_{k} > 0, δ^{k} > 0 *and* (*x*^{k-1}, *y*^{k-1}) ∈ int *C*^{k-1} × ℝ^{m}_{++}*, there exists a unique pair* (*x*^{k}, *y*^{k}) ∈ int *C*^{k} × ℝ^{m}_{++} *satisfying* (3.4).

^{k}× satisfying **Proof. ** Define the operator ^{k}(*x*): = *T*(*x*) + (*x*) + λ_{k}^{-1 }∇*h*(*x*), where *h* := _{k}(·, *x ^{k}*

^{-1}). We prove first that we are in the conditions of Theorem 2.2 for :=

*T*+ ,

*f*(·): =

*d*

_{φ}(·,

*y*

^{k}^{-1}) and : =

*b*+

*δ*

^{k}. Indeed, the operator is maximal monotone by (

*H*

_{1}) and the fact that

*C*⊆

*C*(we are using here Proposition 2.1). The function

^{k}*d*

_{φ}(·,

*y*

^{k}^{-1}) is by definition convex, proper and differentiable on its (open) domain and [

*d*

_{φ}(·,

*y*

^{k}^{-1})]

_{∞}(

*ξ*) = +∞, ∀ξ ≠ 0. By (

*H*

_{2}),

*A*has maximal rank. We claim that (

*b*+

*δ*

^{k}-

*A*(

^{n})) ∩ dom

*d*

_{φ}(·,

*y*

^{k}^{-1}) ≠ . Indeed, fix

*x*∈

*C*. It holds that

and therefore *b* + *δ*^{k }- *Ax* ∈ = dom *d*_{φ} (·, *y ^{k}*

^{-1}).

The only hypothesis that remains to be checked is *D*(T̂) ∩ dom(*h*) ≠ ∅, where dom(*h*) = int *C*^{k}. Indeed, by (*H*_{1}) and the definition of *C*^{k} we get

∅ ≠ *C* ∩ *D*(*T*) ⊂ int *C*^{k} ∩ *D*(*T*) ⊂ *D*(T̂).

Hence ∅ ≠ *C* ∩ *D*(T̂) ⊂ *D*(T̂) ∩ int *C*^{k} = *D*(T̂) ∩ dom(*h*). So the hypotheses of Theorem 2.2 are satisfied and therefore there exists a solution *x*^{k} of the inclusion

0 ∈ *U*^{k}(*x*).

^{*}a solution of the equation

This solution is unique because *d*_{φ}(·, *y*^{k-1}) is strictly convex on its domain. So, there exist *u*^{k} ∈ *T*(*x*^{k}), *v*^{k} ∈ *N*_{C^{k}}(*x*^{k}) and *z*^{k} = ∇_{1}*D*_{k}(*x*^{k}, *x*^{k-1}) = −*A*^{T}∇_{1}*d*_{φ}(*b* + δ^{k} − *Ax*^{k}, *y*^{k-1}) such that

*u*^{k} + *v*^{k} + λ_{k}^{-1}*z*^{k} = 0. (3.7)

Taking *b* + δ^{k} − *Ax*^{k} =: *y*^{k}, we have that *y*^{k} is also unique. Since *y*^{k} ∈ ℝ^{m}_{++}, it holds that *x*^{k} ∈ int *C*^{k}, thus *v*^{k} = 0. Hence, by (3.7), there exists a unique pair (*x*^{k}, *y*^{k}) ∈ int *C*^{k} × ℝ^{m}_{++} satisfying

λ_{k}*u*^{k} = *A*^{T}∇_{1}*d*_{φ}(*y*^{k}, *y*^{k-1}), *y*^{k} = *b* + δ^{k} − *Ax*^{k},

which completes the proof.

□

**Remark 3.2.** We point out that the previous proposition can be established (with essentially the same proof) if we replace (*H*_{1}) by the weaker requirement *D*(*T*) ∩ int(*C*^{k}) ≠ ∅. We will need (*H*_{1}), however, for proving that our iterates converge to a solution (see Theorem 3.11).

To deal with approximations, we relax the inclusion and the equation of the exact system (3.4) in a way similar to (3.3):

ũ^{k} ∈ *T*^{ε_{k}}(x̃^{k}), λ_{k}ũ^{k} = *A*^{T}∇_{1}*d*_{φ}(ỹ^{k}, *y*^{k-1}) + *e*^{k}, ỹ^{k} = *b* + δ^{k} − *A*x̃^{k}, (3.8)

where *T*^{ε} is the enlargement of *T* given in (2.7). In the exact solution, we have ε_{k} = 0 and *e*^{k} = 0. An approximate solution should have ε_{k} and *e*^{k} "small". Our aim is to use a relative error criterion, as the one used in [12], to control the size of ε_{k} and *e*^{k}. The intuitive idea is to perform an extragradient step from *x*^{k-1} to *x*^{k}, using the direction ũ^{k} (see (3.9)), and then check whether the "error terms" of the iteration, given by ε_{k} + 〈ũ^{k}, x̃^{k} − *x*^{k}〉 and ||ỹ^{k} − *y*^{k}||, are small enough with respect to the previous step.

**Definition 3.3.** *Let* σ ∈ [0, 1) *and* γ > 0*. We say that* (x̃^{k}, ỹ^{k}, ũ^{k}, ε_{k}) *in* (3.8) *is an approximated solution of system* (3.4) *with tolerance* σ *and* γ *if, for* (*x, y*) *given by the extragradient step* (3.9)*, the error conditions* (3.10)-(3.11) *hold, where* τ > 0 *is as in* (2.6).

**Remark 3.4.**

(i) Since the domain of *d*_{φ}(·, *y*^{k-1}) is ℝ^{m}_{++}, for x̃^{k}, ỹ^{k}, ε_{k} and *x* as in Definition 3.3, it holds that x̃^{k}, *x* ∈ int *C*^{k}.

(ii) If (*x, y, u*) verifies (3.4), then (*x, y, u*, 0) is an approximated solution of system (3.4) with tolerance σ, γ for any σ ∈ [0, 1) and γ > 0. It is clear that in this case *e*^{k} = 0. Conversely, if (x̃^{k}, ỹ^{k}, ũ^{k}, ε_{k}) is an approximated solution of system (3.4) with tolerance σ = 0 and γ > 0 arbitrary, then we must have ε_{k} = 0 and (x̃^{k}, ỹ^{k}, ũ^{k}) satisfying (3.4). Indeed, since γ > 0 is arbitrary, we get *y*^{k} = ỹ^{k}. Using the fact that *A* is one-to-one, we get *x*^{k} = x̃^{k}. From the fact that σ = 0, we conclude that ε_{k} = 0.

(iii) If (*H*_{1}) and (*H*_{2}) hold, by Proposition 3.1 the system (3.8) with *e*^{k} = 0 and ε_{k} = 0 has a solution. By (ii) of this remark, this solution is also an approximated solution.

We describe below our algorithm, called *Extragradient Algorithm*.

**Extragradient Algorithm - EA**

**Initialize:** Take λ̄ ≥ λ̲ > 0, σ ∈ [0, 1), γ > 0, *x*^{0} ∈ ℝ^{n} and *y*^{0} ∈ ℝ^{m}_{++} such that δ^{0} := *y*^{0} − (*b* − *Ax*^{0}) ∈ ℝ^{m}_{++}.

**Iteration:** For *k* = 1, 2, …,

**Step 1.** Take λ_{k} with λ̲ ≤ λ_{k} ≤ λ̄ and 0 < δ^{k} < δ^{k-1}. Find (x̃^{k}, ỹ^{k}, ũ^{k}, ε_{k}) an approximated solution of system (3.4) with tolerance σ, γ (i.e., they verify (3.8)).

**Step 2.** Compute (*x*^{k}, *y*^{k}) according to (3.9).

**Step 3.** Set *k* := *k* + 1, and return to Step 1.

**Remark 3.5.** The bounds 0 < λ̲ ≤ λ_{k} ≤ λ̄ ensure that the information on the original problem is taken into account at each iteration; such bounds are standard in the convergence analysis of proximal methods.

**Remark 3.6.** Our algorithm extends the one in [29]. More precisely, our step coincides with the one in [29] when the following hold:

(i) *T* = ∂*f*;

(ii) *e*^{k} = 0 in (3.8) (so *x*^{k} = x̃^{k});

(iii) ũ^{k} is chosen in ∂_{ε_{k}}*f*(x̃^{k}), instead of taking ũ^{k} in the (potentially) bigger set (∂*f*)^{ε_{k}}(x̃^{k}), as we do in (3.8).

From (ii) and (iii), our step allows more freedom in the choice of the next iterate *x*^{k}. As mentioned earlier, a conceptual difference with the method in [29] is the fact that the sequence {ε_{k}} is chosen in a *constructive way*, so as to ensure convergence. Our choice of each ε_{k} is related to the progress of the algorithm at every given step when applied to the given problem. It can be seen that, if (i)-(iii) above hold, then (3.10) forces Σ_{k} ε_{k} < ∞, the latter being an assumption for the convergence results in [29]. Indeed, if (i)-(iii) hold, then *x*^{k} = x̃^{k} and (3.10) bounds ε_{k} by the progress of the step; using now Proposition 3.7, Corollary 3.8 (see the next section) and Lemma 2.5, we obtain Σ_{k} ε_{k} < ∞.

*3.1 Convergence analysis*

In this section, we prove convergence of the algorithm above. From now on, {*x*^{k}}, {x̃^{k}}, {ũ^{k}}, {*y*^{k}}, {ỹ^{k}}, {ε_{k}}, {λ_{k}} and {δ^{k}} are sequences generated by **EA** with approximating criteria (3.10)-(3.11). The main result we shall prove is that the sequence {*x*^{k}} converges to a solution of *VIP*(*T, C*).

The next proposition is essential for the convergence analysis. To establish it, we need the following further assumption:

(*H*_{3}) The solution set *S* of *VIP*(*T, C*) is nonempty.

**Proposition 3.7.** *Suppose that* (*H*_{3}) *holds, let* x̂ ∈ *S* *and* û ∈ *T*(x̂)*, and define* ŷ := *b* − *A*x̂*. Then, for k* = 1, 2, …, *the estimate* (3.13) *holds, where* θ, τ *are as in* (2.6) *and* α *is as in Lemma* 2.4.

**Proof.** Fix *k* > 0 and take ũ^{k} ∈ *T*^{ε_{k}}(x̃^{k}). For all (*x, u*) ∈ *G*(*T*), the definition (2.7) of the enlargement gives

〈ũ^{k} − *u*, x̃^{k} − *x*〉 ≥ −ε_{k}.

Therefore, 〈*u*, x̃^{k} − *x*〉 ≤ 〈ũ^{k}, x̃^{k} − *x*〉 + ε_{k}. Using (3.9), (3.10) and (3.12) in this inequality, and then the identity obtained from (3.2) with *y* = *b* − *Ax*, we arrive at inequality (3.16), valid for all (*x, u*) ∈ *G*(*T*). Applying Lemma 2.3 to (3.16) yields (3.17).

Inequality (3.17) is valid in particular for (*x, u*) := (x̂, û), with x̂ ∈ *S*, û ∈ *T*(x̂) and ŷ = *b* − *A*x̂. On the other hand, for such (x̂, û) we have that 〈x̂ − *x*, û〉 ≤ 0 for all *x* ∈ *C*. Let *p*^{k} be the projection of x̃^{k} onto *C*. Since *p*^{k} ∈ *C*, we have that 〈x̂ − *p*^{k}, û〉 ≤ 0, and therefore 〈x̂ − x̃^{k}, û〉 ≤ 〈*p*^{k} − x̃^{k}, û〉. Using the Cauchy-Schwarz inequality and multiplying by λ_{k} > 0, we get (3.18). By Lemma 2.4 we conclude that ||*p*^{k} − x̃^{k}|| ≤ α||δ^{k}|| for some α > 0. Combining (3.17) and (3.18), we get the desired estimate.

□

The next corollary guarantees boundedness of the sequence {*y*^{k} − *y*^{k-1}}.

**Corollary 3.8.** *Suppose that* (*H*_{3}) *holds. Then the sequence* {||*y*^{k} − *y*^{k-1}||} *is bounded.*

**Proof.** Assume that the sequence {||*y*^{k} − *y*^{k-1}||} is unbounded. Then there is a subsequence {||*y*^{k} − *y*^{k-1}||}_{k∈K} such that ||*y*^{k} − *y*^{k-1}|| → ∞ for *k* ∈ *K*, whereas the complementary subsequence {||*y*^{k} − *y*^{k-1}||}_{k∉K} is bounded (note that this complementary subsequence could be finite or even empty). From (3.19), we obtain the inequalities (3.20). Summing up the inequalities (3.20) over *k* = 1, 2, …, *n*, and denoting by *a*_{n} and *c*_{n} the resulting left- and right-hand sides, the sum can be re-written as (3.21). Because Σ_{k}||δ^{k}|| < ∞, we conclude that *c*_{n} ≤ c̄ for some c̄ ∈ ℝ. We show now that {*a*_{n}} is bounded below and diverges to +∞ along a subsequence, which contradicts (3.21). This will complete the proof. Since {||*y*^{k} − *y*^{k-1}||}_{k∉K} is bounded, there is *L* > 0 such that ||*y*^{k} − *y*^{k-1}|| ≤ *L* for all *k* ∉ *K*; summing up the corresponding inequalities, it follows that the sequence {*a*_{n}} is bounded below, because Σ_{k}||δ^{k}|| < ∞. Since for *k* ∈ *K* the sequence {||*y*^{k} − *y*^{k-1}||} is unbounded and {||δ^{k}||} converges to zero, there exists *k*_{0} ∈ *K* beyond which the terms of *a*_{n} indexed by *K* are positive and arbitrarily large; it follows that *a*_{n} → +∞ along a subsequence, the desired contradiction. □

**Remark 3.9.** We point out that the upper bound λ_{k} ≤ λ̄ used in Corollary 3.8 can be weakened. We have chosen to use the stronger requirement for simplicity of the presentation; it is not used in any of the remaining results.

**Corollary 3.10.** *Suppose that* (*H*_{3}) *holds. Then, for* x̂, ŷ, û *as in Proposition* 3.7*, it holds that*

(i) {||*y*^{k} − ŷ||} *converges (and hence* {*y*^{k}} *is bounded);*

(ii) lim_{k} ||*y*^{k} − *y*^{k-1}|| = 0;

(iii) {||*A*(*x*^{k} − x̂)||} *converges (hence* {||*x*^{k} − x̂||} *converges and* {*x*^{k}} *is bounded);*

(iv) lim_{k} ||x̃^{k} − *x*^{k}|| = 0;

(v) {x̃^{k}} *is bounded.*

**Proof.**

(i) Inequality (3.13) shows that σ_{k} := ||*y*^{k} − ŷ||^{2} satisfies σ_{k+1} ≤ σ_{k} + β_{k}, where β_{k} collects the error terms of (3.13). Since {||*y*^{k} − *y*^{k-1}||} is bounded by Corollary 3.8 and Σ_{k}||δ^{k}|| < ∞, we have Σ_{k}β_{k} < ∞. Therefore the sequences {σ_{k}} and {β_{k}} satisfy the conditions of Lemma 2.5. This implies that {||*y*^{k} − ŷ||} converges and therefore {*y*^{k}} is bounded.

(ii) It follows from (i) and Proposition 3.7 that Σ_{k}||*y*^{k} − *y*^{k-1}||^{2} < ∞; therefore lim_{k} ||*y*^{k} − *y*^{k-1}|| = 0.

(iii) Since *y*^{k} − ŷ = *A*(x̂ − *x*^{k}) + δ^{k}, we get that

||*y*^{k} − ŷ|| − ||δ^{k}|| ≤ ||*A*(x̂ − *x*^{k})|| ≤ ||*y*^{k} − ŷ|| + ||δ^{k}||.

Being {||*y*^{k} − ŷ||} convergent and {||δ^{k}||} convergent to zero, we conclude from the expression above that {||*A*(x̂ − *x*^{k})||} is also convergent. By (*H*_{2}), the function *u* → ||*u*||_{A} := ||*Au*|| is a norm in ℝ^{n}, and then it follows that {||*x*^{k} − x̂||} converges and therefore {*x*^{k}} is bounded.

(iv) From (ii) and (3.11), it follows that lim_{k→∞} ||ỹ^{k} − *y*^{k}|| = 0. Therefore lim_{k→∞} ||*A*(x̃^{k} − *x*^{k})|| = 0. Again, the assumptions on *A* imply that lim_{k→∞} ||x̃^{k} − *x*^{k}|| = 0.

(v) Follows from (iii) and (iv).

□

We show below that the sequence {*x*^{k}} converges to a solution of *VIP*(*T, C*). Denote by Acc(*z*^{k}) the set of accumulation points of the sequence {*z*^{k}}.

^{k} **Theorem 3.11. ** *Suppose that (H*_{1}*)-(H*_{3}*) hold. Then *{*x ^{k}*}

*converges to an element of S.*

**Proof.** By Corollary 3.10(iii) and (iv), Acc(x̃^{k}) = Acc(*x*^{k}) ≠ ∅. We prove first that every element of Acc(x̃^{k}) = Acc(*x*^{k}) is a solution of *VIP*(*T, C*). Indeed, by (3.16), for all (*x, u*) ∈ *G*(*T*) it holds that (3.23). Using Corollary 3.10(i)-(iii), we have that {*x*^{k}} and {*y*^{k}} are bounded and lim_{k} ||*y*^{k} − *y*^{k-1}|| = 0. These facts, together with the identity

||*y*^{k} − *y*||^{2} − ||*y*^{k-1} − *y*||^{2} = ||*y*^{k} − *y*^{k-1}||^{2} + 2〈*y*^{k} − *y*^{k-1}, *y*^{k-1} − *y*〉,

yield (3.24).

Let {x̃^{k_{j}}} be a subsequence converging to *x*^{*}. Using (3.24), the fact that λ_{k} ≥ λ̲ > 0, and taking limits for *k* → ∞ in (3.23), we obtain (3.25). By definition, ỹ^{k} = *b* + δ^{k} − *A*x̃^{k} with δ^{k} > 0. We know that {ỹ^{k_{j}}} converges to *y*^{*} = *b* − *Ax*^{*}, with *y*^{*} ≥ 0. Therefore *Ax*^{*} ≤ *b*; equivalently, *x*^{*} ∈ *C*. By definition of *N*_{C}, we have (3.26). Combining (3.25) and (3.26), we conclude that

〈*x* − *x*^{*}, *u* + *w*〉 ≥ 0 for all (*x, u* + *w*) ∈ *G*(*T* + *N*_{C}).

By (*H*_{1}) and Proposition 2.1, *T* + *N*_{C} is maximal monotone. Then the above inequality implies that 0 ∈ (*T* + *N*_{C})(*x*^{*}), i.e., *x*^{*} ∈ *S*. Recall that *x*^{*} is also an accumulation point of {*x*^{k}}. Using Corollary 3.10(iii), we have that the sequence {||*x*^{*} − *x*^{k}||} is convergent. Since it has a subsequence that converges to zero, the whole sequence {||*x*^{*} − *x*^{k}||} must converge to zero. This completes the proof. □

**4 Conclusion**

We propose an infeasible interior point method with log-quadratic proximal regularization for solving variational inequality problems. Our algorithm can start from any point of the space (note that the first δ^{0} can be chosen arbitrarily large, as long as the sequence {||δ^{k}||} is summable). We also introduce a relative error criterion which can be checked at each iteration, as the one in [12].

Moreover, our method can be applied even when the interior of *C* is empty. We show convergence under similar assumptions as those in the classical log-quadratic proximal, where the set *C* is required to have nonempty interior.

We acknowledge that the scheme we propose is mainly theoretical. We point out that no numerical results are available for the method in [29]. However, from Remark 3.6, our inexact iteration includes the one in [29] as a particular case, and thus our step is likely to be computationally cheaper than the one in [29]. A full numerical implementation of our method and a comparison with [29] is a fundamental question, which is the subject of future investigation.

**REFERENCES**

[1] A. Auslender and M. Haddou, *An interior proximal method for convex linearly constrained problems and its extension to variational inequalities*. Mathematical Programming, **71** (1995), 77-100.

[2] A. Auslender and M. Teboulle, *Asymptotic Cones and Functions in Optimization and Variational Inequalities*. Springer Monographs in Mathematics, Springer-Verlag, New York (2003).

[3] A. Auslender, M. Teboulle and S. Ben-Tiba, *A logarithmic-quadratic proximal method for variational inequalities*. Computational Optimization and Applications, **12** (1999), 31-40.

[4] A. Auslender, M. Teboulle and S. Ben-Tiba, *Interior proximal and multiplier methods based on second order homogeneous kernels*. Mathematics of Operations Research, **24** (1999), 645-668.

[5] A. Ben-Tal and M. Zibulevsky, *Penalty-barrier methods for convex programming problems*. SIAM Journal on Optimization, **7** (1997), 347-366.

[6] A. Brøndsted and R.T. Rockafellar, *On the subdifferentiability of convex functions*. Proceedings of the American Mathematical Society, **16** (1965), 605-611.

[7] F.E. Browder, *Nonlinear operators and nonlinear equations of evolution in Banach spaces*. Proceedings of Symposia in Pure Mathematics, Vol. 18, Part 2, American Mathematical Society, Providence (1976).

[8] R.S. Burachik and A.N. Iusem, *A generalized proximal point method for the variational inequality problem in a Hilbert space*. SIAM Journal on Optimization, **8** (1998), 197-216.

[9] R.S. Burachik and A.N. Iusem, *Set-Valued Mappings and Enlargements of Monotone Operators*. Springer Optimization and its Applications, 8, Springer, New York (2008).

[10] R.S. Burachik, A.N. Iusem and B.F. Svaiter, *Enlargements of maximal monotone operators with applications to variational inequalities*. Set-Valued Analysis, **5** (1997), 159-180.

[11] R.S. Burachik, C.A. Sagastizábal and B.F. Svaiter, *Epsilon-enlargement of maximal monotone operators with application to variational inequalities*, in Reformulation - Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, M. Fukushima and L. Qi (editors), Kluwer Academic Publishers (1997).

[12] R.S. Burachik and B.F. Svaiter, *A relative error tolerance for a family of generalized proximal point methods*. Mathematics of Operations Research, **26** (2001), 816-831.

[13] Y. Censor, A.N. Iusem and S. Zenios, *An interior point method with Bregman functions for the variational inequality problem with paramonotone operators*. Mathematical Programming, **83** (1998), 113-123.

[14] J. Eckstein, *Approximate iterations in Bregman-function-based proximal algorithms*. Mathematical Programming, **81** (1998), 373-400.

[15] P. Eggermont, *Multiplicative iterative algorithms for convex programming*. Linear Algebra and its Applications, **130** (1990), 25-42.

[16] A.J. Hoffman, *On approximate solutions of systems of linear inequalities*. Journal of Research of the National Bureau of Standards, **49** (1952), 263-265.

[17] A.N. Iusem, *On some properties of paramonotone operators*. Math. Oper. Res., **19** (1994), 790-814.

[18] A.N. Iusem, B.F. Svaiter and M. Teboulle, *Entropy-like proximal methods in convex programming*. Journal of Convex Analysis, **5** (1998), 269-278.

[19] A.N. Iusem and M. Teboulle, *Convergence rate analysis of nonquadratic proximal and augmented Lagrangian methods for convex and linear programming*. Math. of Oper. Res., **20** (1995), 657-677.

[20] K.C. Kiwiel, *Proximal minimization methods with generalized Bregman functions*. SIAM Journal on Control and Optimization, **35** (1997), 1142-1168.

[21] B. Lemaire, *On the convergence of some iterative methods for convex minimization*. In Recent Developments in Optimization, R. Durier and C. Michelot (eds.), Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, **429** (1995), 252-268.

[22] B.T. Polyak, *Introduction to Optimization*. Optimization Software Inc., New York (1987).

[23] D. Pascali and S. Sburlan, *Nonlinear Mappings of Monotone Type*. Editura Academiei, Bucharest, Romania (1978).

[24] R.T. Rockafellar, *On the maximality of sums of nonlinear monotone operators*. Transactions of the American Mathematical Society, **149** (1970), 75-88.

[25] M.V. Solodov and B.F. Svaiter, *An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions*. Mathematics of Operations Research, **25** (2000), 214-230.

[26] M. Teboulle, *Entropic proximal mappings with applications to nonlinear programming*. Mathematics of Operations Research, **17** (1992), 670-690.

[27] M. Teboulle, *Convergence of proximal-like algorithms*. SIAM J. Optim., **7** (1997), 1069-1083.

[28] P. Tseng and D. Bertsekas, *On the convergence of the exponential multiplier method for convex programming*. Mathematical Programming, **60** (1993), 1-19.

[29] N. Yamashita, C. Kanzow, T. Morimoto and M. Fukushima, *An infeasible interior proximal method for convex programming problems with linear constraints*. J. Nonlinear Convex Analysis, **2** (2001), 139-156.

Received: 27/XI/07. Accepted: 31/VII/08.

#744/07.

* This author acknowledges support by the Australian Research Council Discovery Project Grant DP0556685 for this study.

† Partially supported by CNPq.

‡ Partially supported by IM-AGIMB.