## Print version ISSN 0101-7438 · On-line version ISSN 1678-5142

### Pesqui. Oper. vol.40  Rio de Janeiro  2020  Epub May 18, 2020

#### https://doi.org/10.1590/0101-7438.2020.040.00217467

ARTICLES

A CONSTRUCTIVE GLOBAL CONVERGENCE OF THE MIXED BARRIER-PENALTY METHOD FOR MATHEMATICAL OPTIMIZATION PROBLEMS

1Department of Mathematics, FCPN, Universidad Mayor de San Andrés, La Paz, Bolivia - E-mail: psunagua@umsa.bo

2Department of Applied Mathematics, IMECC, University of Campinas, 13083-859, Campinas, SP-Brazil - E-mail: aurelio@ime.unicamp.br

ABSTRACT

In this paper we develop a generic mixed bi-parametric barrier-penalty method, built upon generic barrier and penalty algorithms, for constrained nonlinear programming problems. When the feasible set is defined by equality and inequality functional constraints, explicit barrier and penalty functions can be provided. In such a case, the continuity and differentiability properties of the constraints and of the objective function are inherited by the penalized function.

The main contribution of this work is a constructive proof of the global convergence of the sequence generated by the proposed mixed method. The proof uses separately the main global convergence results for barrier and penalty methods. Finally, for a simple nonlinear problem, we deduce the mixed barrier-penalty function explicitly and illustrate all functions defined in this work. We also implement MATLAB code to generate the iterates of the mixed method.

Keywords: nonlinear programming; mixed barrier-penalty methods; convergence of mixed algorithm

1 INTRODUCTION

Mathematical optimization is one of the tools most widely used to analyze complex decision or allocation problems. In order to make better use of available resources, optimization techniques allow the selection of values for a number of interrelated variables, through which we can measure the performance and quality of a decision by focusing on an objective function.

Specifically, a mathematical optimization problem consists of minimizing or maximizing an objective function $f(x)$ subject to the restriction $x \in \Omega$, where $f$ is a real-valued continuous function defined on $\Omega \subseteq \mathbb{R}^n$. In this work, we consider a feasible set $\Omega$ with three types of restrictions

$$x \in \Omega_1, \quad x \in \Omega_2, \quad x \in \Omega_3 \tag{1}$$

where $\Omega_1$ can be any restriction set that is difficult to handle, $\Omega_2$ is a robust set, and $\Omega_3$ may be a simple set such as sign or bound restrictions. A robust set is one with a nonempty interior that is dense in the set: the set has an interior, and any boundary point can be reached as the limit of a sequence of interior points, Luenberger & Ye (2008).

According to the specifications above, we consider the following optimization problem,

$$\min f(x) \quad \text{s.t. } x \in \Omega_1,\; x \in \Omega_2,\; x \in \Omega_3. \tag{2}$$

One of the most common nonlinear programming formulations arises when the restrictions are characterized by equality and inequality functional constraints, Bazaraa et al. (2013), Luenberger & Ye (2008), Wright & Nocedal (1999), Griva et al. (2009). Given continuous functions $f:\mathbb{R}^n\to\mathbb{R}$, $h:\mathbb{R}^n\to\mathbb{R}^m$, $g:\mathbb{R}^n\to\mathbb{R}^p$, the classical nonlinear optimization problem is

$$\min f(x) \quad \text{s.t. } h(x)=0,\; g(x)\le 0, \tag{3}$$

where the restriction sets are given by $\Omega_1=\{x\in\mathbb{R}^n : h(x)=0\}$, $\Omega_2=\{x\in\mathbb{R}^n : g(x)\le 0\}$ and $\Omega_3=\mathbb{R}^n$.

For many decades, authors have proved theoretical results and proposed algorithms for solving nonlinear optimization problems by penalty or barrier function methods. Luenberger & Ye (2008) and Fiacco & McCormick (1990) state convergence for both methods. Polyak (1971) established the convergence rate of the penalty function method in Hilbert space. Bertsekas (1976) obtained convergence and rate-of-convergence results for the sequences of primal and dual variables generated by penalty and Lagrange multiplier methods, showing that the multiplier method is faster than the pure penalty method. Fiacco & McCormick (1990) prove, by contradiction, global convergence for a mixed penalty-barrier method. Breitfeld & Shanno (1995) proposed a composite algorithm of augmented Lagrangian, modified log-barrier, and classical log-barrier methods, for which they demonstrated global convergence to a first-order stationary point of the constrained problem, based on Breitfeld & Shanno (1994).

In this work, we develop the mixed barrier-penalty method for solving the general nonlinear problem (2), and we provide a generic bi-parametric algorithm. The main contribution is a constructive proof of global convergence of the sequence generated by that mixed method, as an alternative to existing proofs under slightly different assumptions. Suñagua & Oliveira (2017) showed that computational experiments on NETLIB problems work successfully for large-scale linear optimization problems.

2 BARRIER METHODS OVERVIEW

Barrier methods are also called interior point or interior penalty methods. Theoretical results for them were developed by Martínez & Santos (1995), Luenberger & Ye (2008), Nash & Sofer (1993), and Wright (1992). These methods are applicable to problems of the form

$$\min f(x) \quad \text{s.t. } x \in \Omega \tag{4}$$

where $f$ is a continuous function and $\Omega$ is a robust restriction set. This kind of set often arises from inequality constraints, that is, $\Omega=\{x\in\mathbb{R}^n : g(x)\le 0\}$, for which there is a point $\bar x\in\Omega$ such that $g(\bar x)<0$.

Barrier methods work by establishing a barrier on the boundary of the restriction set that prevents a search procedure from leaving the feasible region. A barrier function is a function $B(\cdot)$ defined on the interior set $\mathrm{Int}(\Omega)=\{x : g(x)<0\}$ of $\Omega$ such that (i) $B$ is continuous, (ii) $B(x)\ge 0$, (iii) $B(x)\to\infty$ as $x$ approaches the boundary of $\Omega$. For inequality constraints $g_i(x)\le 0$, $i=1,2,\dots,p$, the barrier functions commonly used in practice are the logarithmic and the inverse barrier functions, defined on $\mathrm{Int}(\Omega)$ respectively by

$$B(x)=-\sum_{i=1}^{p}\log(-g_i(x)) \quad\text{and}\quad B(x)=-\sum_{i=1}^{p}\frac{1}{g_i(x)}.$$

Now, the problem (4) can be transformed into a penalized subproblem

$$(P_\mu)\quad \min f(x)+\mu B(x) \quad \text{s.t. } x \in \mathrm{Int}(\Omega) \tag{5}$$

where $\mu>0$ is called the barrier parameter and is taken small (going to zero). In this approach, the main assumption is that the original problem (4) has a global solution $x^*$. Let $x(\mu)$ be a global solution of subproblem (5). When $\mu_k\to 0$, we expect $x(\mu_k)$ to converge to $x^*$.

Given $\phi(x,\mu)=f(x)+\mu B(x)$, we have a generic barrier algorithm, given in Algorithm 1.
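Algorithm 1 itself is not reproduced here, but its structure (globally solve the barrier subproblem, then shrink $\mu$) can be sketched in Python on a one-dimensional toy problem. The $\mu$-schedule, the golden-section inner solver, and all names below are illustrative choices, not the paper's implementation:

```python
import math

def barrier_method(f, B, x0, mu0=1.0, shrink=0.1, iters=8):
    """Generic barrier iteration: minimize phi(x, mu) = f(x) + mu*B(x)
    over the interior, then shrink mu and repeat (mu_k -> 0)."""
    x, mu = x0, mu0
    for _ in range(iters):
        x = argmin_phi(f, B, mu)   # x_{k+1} solves (P_mu) for mu = mu_k
        mu *= shrink               # mu_{k+1} < mu_k
    return x

def argmin_phi(f, B, mu, lo=1.0 + 1e-12, hi=10.0, tol=1e-12):
    """Toy inner solver: golden-section search on an interval inside Int(Omega)."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) + mu * B(c) < f(d) + mu * B(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Toy problem: min x subject to g(x) = 1 - x <= 0, log barrier B(x) = -log(x - 1).
f = lambda x: x
B = lambda x: -math.log(x - 1)
x_star = barrier_method(f, B, x0=5.0)
print(x_star)   # approaches the boundary solution x* = 1 as mu -> 0
```

Here the inner minimizer of $x - \mu\log(x-1)$ is $x(\mu)=1+\mu$, so the iterates slide toward the constrained solution as $\mu$ is reduced.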

The following Lemma gives a set of inequalities that follow directly from the steps of Algorithm 1. The proof is based on Luenberger & Ye (2008) and Martínez & Santos (1995).

Lemma 2.1. Let $\{x_k\}$ be a sequence generated by Algorithm 1. Then

1. $\phi(x_{k+1}, \mu_k) \le \phi(x_k, \mu_{k-1})$

2. $B(x_{k+1}) \ge B(x_k)$

3. $f(x_{k+1}) \le f(x_k)$.

Proof. Since $\{\mu_k\}$ is a monotone decreasing sequence, $x_{k+1}$ is a global minimizer of (6), and, recalling condition (ii) of the barrier definition, $B$ is a nonnegative function, then

$$\phi(x_{k+1}, \mu_k)=f(x_{k+1})+\mu_k B(x_{k+1}) \le f(x_k)+\mu_k B(x_k) \le f(x_k)+\mu_{k-1} B(x_k)=\phi(x_k, \mu_{k-1}).$$

To establish the second inequality, we also have

$$\phi(x_{k+1}, \mu_k)=f(x_{k+1})+\mu_k B(x_{k+1}) \le f(x_k)+\mu_k B(x_k) \tag{7}$$

$$\phi(x_k, \mu_{k-1})=f(x_k)+\mu_{k-1} B(x_k) \le f(x_{k+1})+\mu_{k-1} B(x_{k+1}), \tag{8}$$

now, adding (7) and (8), we get

$$(\mu_k-\mu_{k-1})B(x_{k+1}) \le (\mu_k-\mu_{k-1})B(x_k)$$

and, dividing by the common factor $\mu_k-\mu_{k-1}<0$, which reverses the inequality, we prove item 2.

Finally, by the previous inequality,

$$f(x_{k+1})+\mu_k B(x_{k+1}) \le f(x_k)+\mu_k B(x_k) \le f(x_k)+\mu_k B(x_{k+1}),$$

hence $f(x_{k+1}) \le f(x_k)$. □

The global convergence of the barrier method, in the sense that any limit point of the generated sequence is a solution of problem (4), can be verified from the previous Lemma.

Theorem 2.1. Let $\{x_k\}$ be a sequence generated by Algorithm 1, in which $\mu_k \to 0$. Then any limit point of the sequence is a global minimizer of problem (4).

Proof. Let $f_k=\min\{\phi(x,\mu_k) : x\in \mathrm{Int}(\Omega)\}$ be the global minimum value of $\phi(\cdot,\mu_k)$ on $\mathrm{Int}(\Omega)$, whose solution is $x_{k+1}$. By Lemma 2.1, $f_k \ge f_{k+1}$ for all $k$. If $f^*=\min\{f(x) : x\in\Omega\}$, then

$$f_0 \ge f_1 \ge \cdots \ge f_k \ge f_{k+1} \ge \cdots \ge f^*.$$

First, we prove that the sequence $\{f_k\}$ converges to $f^*$; then we show that any convergent subsequence converges to a global minimizer.

Indeed, $\{f_k\}$ is a monotone decreasing sequence bounded below, hence it converges to its infimum, say $\bar f$. Suppose $\bar f \ne f^*$; then $\bar f > f^*$. Recalling that $x^*$ is the global minimizer of (4) and that $f$ is a continuous function, there is an open ball $\mathcal{B}$ centered at $x^*$ such that, for all $x \in \mathcal{B}\cap \mathrm{Int}(\Omega)$, we have

$$f(x) < \bar f - \tfrac{1}{2}(\bar f - f^*). \tag{9}$$

Since $B(x)\ge 0$ for all $x\in \mathrm{Int}(\Omega)$ and $0<\mu_{k+1}<\mu_k$, we have $0 \le \mu_{k+1}B(x) \le \mu_k B(x)$ for all $x\in\mathrm{Int}(\Omega)$. Therefore,

$$\lim_{k\to\infty} \mu_k B(x)=0, \quad \forall\, x\in \mathrm{Int}(\Omega). \tag{10}$$

Thus, for any $x'\in \mathcal{B}\cap\mathrm{Int}(\Omega)$ and $k$ large enough, we get

$$\mu_k B(x') < \tfrac{1}{4}(\bar f - f^*). \tag{11}$$

Then, from (9) and (11), we have

$$\phi(x', \mu_k) < \bar f - \tfrac{1}{2}(\bar f - f^*) + \tfrac{1}{4}(\bar f - f^*) = \bar f - \tfrac{1}{4}(\bar f - f^*) < \bar f,$$

which contradicts $f_k \ge \bar f$. Therefore $\bar f = f^*$, that is,

$$f_{k+1}=\phi(x_{k+1}, \mu_k) \to f^*. \tag{12}$$

Now, let $\bar x\in\Omega$ be any subsequential limit of $\{x_k\}$; more precisely, there is a subsequence $\{x_{k_l}\}$ such that $x_{k_l}\to\bar x$. If $\bar x \ne x^*$ with $f(\bar x)>f(x^*)$, then, since $B\ge 0$ and $f$ is continuous, $f_{k_l}=f(x_{k_l})+\mu_{k_l-1}B(x_{k_l}) \ge f(x_{k_l}) \to f(\bar x) > f^*$, which contradicts (12). Therefore, either $\bar x = x^*$, or $\bar x \ne x^*$ but $f(\bar x)=f(x^*)$. Thus, every limit point generated by Algorithm 1 is a global solution of problem (4). □

3 PENALTY METHODS OVERVIEW

Given a continuous function $f:\mathbb{R}^n\to\mathbb{R}$, we consider the problem

$$(GP)\quad \min f(x) \quad \text{s.t. } x\in\Omega_1,\; x\in\Omega_2, \tag{13}$$

where $\Omega_1$ and $\Omega_2$ are arbitrary subsets of $\mathbb{R}^n$. In most applications $\Omega_1$ is defined implicitly by functional restrictions such as $h(x)=0$, where $h:\mathbb{R}^n\to\mathbb{R}^m$. In some cases, we assume that $f$ and $h$ are twice differentiable functions. A basic assumption is that problem (GP) admits a global minimizer; some theoretical results were established by Polyak (1971), Breitfeld & Shanno (1995), Nash (2010), and Luenberger & Ye (2008).

Given a restriction set $\Omega_1$, a penalty function is defined as a function $\mathcal{P}:\mathbb{R}^n\to\mathbb{R}$ satisfying (i) $\mathcal{P}$ is continuous, (ii) $\mathcal{P}(x)=0$ if $x\in\Omega_1$, and (iii) $\mathcal{P}(x)>0$ if $x\notin\Omega_1$.

In order to solve the problem (13), the penalty function method solves the following penalized subproblem

$$(Q_\rho)\quad \min f(x)+\rho\mathcal{P}(x) \quad \text{s.t. } x\in\Omega_2 \tag{14}$$

where $\rho>0$ is a constant called the penalty parameter. For $\rho$ large, it is clear that a solution of (14) will lie in a region where $\mathcal{P}$ is small. Thus, as $\rho\to\infty$, the corresponding optimal points are expected to approach the feasible set $\Omega_1$.

For functions of class $C^2$, $h:\mathbb{R}^n\to\mathbb{R}^m$ and $g:\mathbb{R}^n\to\mathbb{R}^p$, some useful penalty functions $\mathcal{P}$, based on the type of restriction $h(x)=0$ or $g(x)\le 0$, are

1. $\mathcal{P}(x)=\tfrac12\|h(x)\|_2^2$, the quadratic penalty,

2. $\mathcal{P}(x)=\|h(x)\|_1$,

3. $\mathcal{P}(x)=\sum_{i=1}^{p}[\max\{0, g_i(x)\}]^2$,

4. $\mathcal{P}(x)=\tfrac12\|h(x)\|_2^2 + \sum_{i=1}^{p}[\max\{0, g_i(x)\}]^2$,

where in the first item the quadratic penalty function preserves the $C^2$ property, while in items 3 and 4 $\mathcal{P}$ is only $C^1$ (and the $\ell_1$ penalty of item 2 is, in general, not differentiable).

Given $\psi(x,\rho)=f(x)+\rho\mathcal{P}(x)$, we have a generic penalty algorithm, given in Algorithm 2, for solving problem (13); it works by iteratively updating the parameter $\rho$ before solving the penalized subproblem (14).

In general, one suggestion for computing $\rho_k$ is to take $\rho_0=1$ and $\rho_{k+1}=10\rho_k$, Fletcher (2013). However, when $\Omega_1$ is the set of equality constraints $h(x)=0$, a basic rule that works in practice is: if $\|h(x_k)\| > 0.1\|h(x_{k-1})\|$, then $\rho_{k+1}=10\rho_k$; otherwise $\rho$ does not change. This approach was successfully tested on linear programming problems, Suñagua & Oliveira (2017).

The following Lemma gives a set of inequalities that follow directly from the steps of Algorithm 2. The proof is based on Martínez & Santos (1995) and Luenberger & Ye (2008).

Lemma 3.1. Let $\{x_k\}$ be a sequence generated by Algorithm 2, in which $x_{k+1}$ is a global solution of problem ($Q_k$). Then

1. $\psi(x_k, \rho_{k-1}) \le \psi(x_{k+1}, \rho_k)$

2. $\mathcal{P}(x_{k+1}) \le \mathcal{P}(x_k)$

3. $f(x_k) \le f(x_{k+1})$.

Proof. Since $\{\rho_k\}$ is a monotone increasing sequence and $x_k$ is a global minimizer of subproblem (15), then

$$\psi(x_k, \rho_{k-1})=f(x_k)+\rho_{k-1}\mathcal{P}(x_k) \le f(x_{k+1})+\rho_{k-1}\mathcal{P}(x_{k+1}) \le f(x_{k+1})+\rho_k \mathcal{P}(x_{k+1})=\psi(x_{k+1}, \rho_k).$$

To establish the second inequality, recalling the optimality of $x_k$ and $x_{k+1}$, we have

$$\psi(x_k, \rho_{k-1})=f(x_k)+\rho_{k-1}\mathcal{P}(x_k) \le f(x_{k+1})+\rho_{k-1}\mathcal{P}(x_{k+1}) \tag{16}$$

$$\psi(x_{k+1}, \rho_k)=f(x_{k+1})+\rho_k\mathcal{P}(x_{k+1}) \le f(x_k)+\rho_k\mathcal{P}(x_k), \tag{17}$$

and, adding (16) and (17), we get

$$(\rho_{k-1}-\rho_k)\mathcal{P}(x_k) \le (\rho_{k-1}-\rho_k)\mathcal{P}(x_{k+1}),$$

and, since $\rho_{k-1}-\rho_k<0$, dividing reverses the inequality, so $\mathcal{P}(x_k) \ge \mathcal{P}(x_{k+1})$. Finally, using this inequality,

$$f(x_k)+\rho_{k-1}\mathcal{P}(x_k) \le f(x_{k+1})+\rho_{k-1}\mathcal{P}(x_{k+1}) \le f(x_{k+1})+\rho_{k-1}\mathcal{P}(x_k),$$

hence $f(x_k) \le f(x_{k+1})$. □

Lemma 3.2. If $x^*$ is a global minimizer of (GP), then for $k=0,1,2,\dots$

$$f(x_k) \le \psi(x_k, \rho_{k-1}) \le f(x^*).$$

Consequently, $x_k\in\Omega_1$ if and only if $x_k$ is a global solution of (GP).

Proof. Since $\rho_k>0$, $\mathcal{P}(x)\ge 0$ for all $x\in\mathbb{R}^n$, and $x_k$ is the global minimizer of ($Q_{k-1}$), then

$$f(x_k) \le f(x_k)+\rho_{k-1}\mathcal{P}(x_k) \le f(x^*)+\rho_{k-1}\mathcal{P}(x^*)=f(x^*),$$

where $\mathcal{P}(x^*)=0$. □

The global convergence of the penalty method, in the sense that any limit point of the sequence is a solution, can be verified from the two previous Lemmas.

Theorem 3.1 (Global convergence for the penalty method). Let $\{x_k\}$ be a sequence of global minimizers of ($Q_k$) generated by Algorithm 2, in which $\rho_k\to+\infty$. Then any limit point of the sequence is a global minimizer of problem (13).

Proof. With a slight change of notation, the proof follows Martínez & Santos' demonstration. Indeed, let $\{x_{k_l}\}$ be a subsequence of $\{x_k\}$ such that $x_{k_l}\to\bar x$. By the continuity of $f$, we have

$$f(x_{k_l}) \to f(\bar x). \tag{18}$$

Let $f^*$ be the optimal value of problem (GP). By Lemma 3.1 and Lemma 3.2, the sequence $\psi(x_k, \rho_{k-1})$ is nondecreasing and bounded above by $f^*$; then

$$\lim_{l\to\infty} \psi(x_{k_l}, \rho_{k_l-1})=\sup_{l\ge 1} \psi(x_{k_l}, \rho_{k_l-1})=p^* \le f^*. \tag{19}$$

Thus, using (18) and (19), it follows that

$$\lim_{l\to\infty} \rho_{k_l-1}\mathcal{P}(x_{k_l})=\lim_{l\to\infty}\big[f(x_{k_l})+\rho_{k_l-1}\mathcal{P}(x_{k_l})\big]-\lim_{l\to\infty}f(x_{k_l})=p^*-f(\bar x).$$

Since $\mathcal{P}(x_{k_l})\ge 0$ and $\rho_{k_l}\to\infty$, we conclude that $\lim_{l\to\infty}\mathcal{P}(x_{k_l})=0$. Using the continuity of $\mathcal{P}$, $\mathcal{P}(\bar x)=0$, thereby $\bar x\in\Omega_1$. To prove the optimality of $\bar x$, just note that, by Lemma 3.2, $f(x_{k_l})\le f^*$; then

$$f(\bar x)=\lim_{l\to\infty} f(x_{k_l}) \le f^*,$$

which completes the proof, because obviously $f^*\le f(\bar x)$ (since $\bar x$ is feasible) and then $f(\bar x)=f^*$. □

Furthermore, by the definition of $\psi$ and (19),

$$f(x_{k_l}) \le \psi(x_{k_l}, \rho_{k_l-1}) \le p^* \le f^*, \quad\text{hence}\quad f(\bar x) \le p^* \le f^*.$$

Therefore $f(\bar x)=p^*=f^*$, then

$$\lim_{l\to\infty} \rho_{k_l-1}\mathcal{P}(x_{k_l})=0 \tag{20}$$

and, using (19),

$$\lim_{l\to\infty} \psi(x_{k_l}, \rho_{k_l-1})=f^*. \tag{21}$$

4 MIXED BARRIER-PENALTY METHOD

For a continuous function $f:\mathbb{R}^n\to\mathbb{R}$, we consider the general programming problem

$$(NLP)\quad \min f(x) \quad \text{s.t. } x\in\Omega_1,\; x\in\Omega_2,\; x\in\Omega_3, \tag{22}$$

where $\Omega_1$, $\Omega_2$ and $\Omega_3$ are the restriction sets defined in (1).

As in the previous sections, we assume that problem (22) admits a global minimizer. Now, let $\mathcal{P}$ be a penalty function related to $\Omega_1$ and $B$ a barrier function related to $\Omega_2$. Then, taking the penalty parameter $\rho>0$ and the barrier parameter $\mu>0$, we have the associated mixed barrier-penalty subproblem,

$$(BP_{\rho,\mu})\quad \min f(x)+\rho\mathcal{P}(x)+\mu B(x) \quad \text{s.t. } x\in\mathrm{Int}(\Omega_2),\; x\in\Omega_3. \tag{23}$$

Since the general problem (NLP) admits a global minimizer, the problem ($BP_{\rho,\mu}$) in (23) also admits a global solution for any feasible parameter values. Therefore, we define

$$\Phi(x, \rho,\mu)=f(x)+\rho\mathcal{P}(x)+\mu B(x). \tag{24}$$

In order to solve the general problem (22), we provide a generic algorithm, given in Algorithm 3, that works by iteratively updating the parameters $\rho$ and $\mu$ before solving the penalized subproblem (23).
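A minimal sketch of this loop, in the spirit of Algorithm 3 (which is not reproduced here), on a one-dimensional toy instance; the quadratic penalty, the log barrier, the parameter schedule and all problem data are illustrative assumptions:

```python
import math

def mixed_method(solve_subproblem, rho0=1.0, mu0=1.0, iters=9):
    """Generic mixed barrier-penalty loop: globally solve (BP_{rho,mu}),
    then drive rho -> infinity and mu -> 0."""
    rho, mu = rho0, mu0
    for _ in range(iters):
        x = solve_subproblem(rho, mu)    # x_{k+1} = x(rho_k, mu_k)
        rho, mu = 10.0 * rho, 0.1 * mu
    return x

# Toy instance: min x  s.t.  x - 2 = 0 (penalized set Omega_1),
#                            x > 0    (barrier on Omega_2).
# Phi(x) = x + (rho/2)(x - 2)^2 - mu*log(x).  Setting Phi'(x) = 0 and
# multiplying by x gives rho*x^2 + (1 - 2*rho)*x - mu = 0, whose positive
# root is the global minimizer on x > 0.
def solve_subproblem(rho, mu):
    b = 1.0 - 2.0 * rho
    return (-b + math.sqrt(b * b + 4.0 * rho * mu)) / (2.0 * rho)

x = mixed_method(solve_subproblem)
print(x)   # approaches the solution x* = 2 as rho -> inf, mu -> 0
```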

To establish the global convergence of Algorithm 3, we first associate the additive terms in two convenient ways:

$$\Phi(x, \rho,\mu)=[f(x)+\rho\mathcal{P}(x)]+\mu B(x) =[f(x)+\mu B(x)]+\rho\mathcal{P}(x). \tag{26}$$

Therefore, fixing $\rho$ and $\mu$ respectively, we define $F_\rho(x)=f(x)+\rho\mathcal{P}(x)$ and $G_\mu(x)=f(x)+\mu B(x)$, and we associate to (NLP) the following two problems

$$(GP_\rho)\quad \min F_\rho(x) \quad \text{s.t. } x\in\Omega_2,\; x\in\Omega_3; \qquad (GP_\mu)\quad \min G_\mu(x) \quad \text{s.t. } x\in\Omega_1,\; x\in\mathrm{Int}(\Omega_2),\; x\in\Omega_3. \tag{27}$$

Since problem (NLP) admits a global minimizer, both ($GP_\rho$) and ($GP_\mu$) in (27) also admit global minimizers. Therefore, defining

$$\phi_\rho(x, \mu)=F_\rho(x)+\mu B(x) \quad\text{and}\quad \psi_\mu(x, \rho)=G_\mu(x)+\rho\mathcal{P}(x),$$

we have respectively the barrier and penalty subproblems

$$(BP_\rho)\quad \min \phi_\rho(x, \mu) \quad \text{s.t. } x\in\mathrm{Int}(\Omega_2),\; x\in\Omega_3; \qquad (PP_\mu)\quad \min \psi_\mu(x,\rho) \quad \text{s.t. } x\in\mathrm{Int}(\Omega_2),\; x\in\Omega_3. \tag{28}$$

By fixing one of the parameters according to (27), the two problems in (28) are equivalent to ($BP_{\rho,\mu}$). In fact,

$$\phi_\rho(x, \mu)=\Phi(x, \rho, \mu)=\psi_\mu(x, \rho); \tag{29}$$

therefore, we can apply the results obtained in the preceding two sections.

In order to illustrate the ideas of the mixed problem more clearly, we consider the following particular quadratic problem

$$\min x_1^2+x_2^2 \quad \text{s.t. } x_2=2,\; 1-x_1\le 0,\; -1-x_2\le 0. \tag{30}$$

According to the contours of the objective function and the graph of the restrictions in Figure 1, the optimal point is $x^*=(1, 2)$. First, considering the Lagrangian function $\mathcal{L}(x_1, x_2, \lambda, u_1, u_2)=x_1^2+x_2^2+\lambda(x_2-2)+u_1(1-x_1)+u_2(-1-x_2)$, the Karush-Kuhn-Tucker conditions (Kuhn & Tucker, 1951) are

$$\begin{aligned}
&2x_1-u_1=0, \qquad 2x_2+\lambda-u_2=0,\\
&x_2-2=0, \qquad 1-x_1\le 0, \qquad -1-x_2\le 0,\\
&u_1(1-x_1)=0, \qquad u_2(-1-x_2)=0,\\
&u_1\ge 0, \qquad u_2\ge 0,
\end{aligned}$$

whose unique solution for the variables and Lagrangian multipliers is $x_1^*=1$, $x_2^*=2$, $u_1^*=2$, $u_2^*=0$, $\lambda^*=-4$.

Now, we associate to (30) the mixed barrier-penalty subproblem

$$(QP_{\rho,\mu})\quad \min \Phi(x, \rho, \mu) \quad \text{s.t. } 1-x_1<0,\; -1-x_2<0, \tag{31}$$

where the penalized objective function is

$$\Phi(x, \rho, \mu)=x_1^2+x_2^2+\frac{\rho}{2}(x_2-2)^2-\mu\big[\log(x_1-1)+\log(x_2+1)-\log M\big],$$

where $M$ is a large enough positive number such that $x_1>1$, $x_2>-1$ and $(x_1-1)(x_2+1)<M$; this region surely lies within the inequality constraints $1-x_1<0$ and $-1-x_2<0$. This condition ensures that the barrier function is nonnegative in a region containing the optimal point.

It is easy to see that $\Phi$ is a smooth function; thereby, from the first-order necessary conditions for optimal points, we have

$$2x_1-\frac{\mu}{x_1-1}=0, \qquad 2x_2+\rho(x_2-2)-\frac{\mu}{x_2+1}=0. \tag{32}$$

Solving this nonlinear system, subject to $x_1>1$ and $x_2>-1$, by the substitution method, we obtain

$$x_1=\frac{1+\sqrt{1+2\mu}}{2} \;\xrightarrow[\mu\to 0]{}\; 1=x_1^*, \qquad x_2=\frac{\rho-2+\sqrt{(2-\rho)^2+4(2+\rho)(2\rho+\mu)}}{2(2+\rho)} \;\xrightarrow[\rho\to\infty,\;\mu\to 0]{}\; 2=x_2^*. \tag{33}$$
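The closed-form minimizers in (33) are easy to check numerically; the following Python sketch evaluates them and recovers, for $\rho=10$ and $\mu=0.1$, the point reported in row $k=2$ of Table 1 below:

```python
import math

def x1_of(mu):
    # x1(mu) = (1 + sqrt(1 + 2*mu)) / 2  ->  1 as mu -> 0
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

def x2_of(rho, mu):
    # x2(rho, mu) = (rho - 2 + sqrt((2 - rho)^2 + 4*(2 + rho)*(2*rho + mu)))
    #               / (2*(2 + rho))  ->  2 as rho -> inf, mu -> 0
    disc = (2.0 - rho) ** 2 + 4.0 * (2.0 + rho) * (2.0 * rho + mu)
    return (rho - 2.0 + math.sqrt(disc)) / (2.0 * (2.0 + rho))

# Row k = 2 of Table 1 is x(rho_1, mu_1) = x(10, 0.1):
print(x1_of(0.1), x2_of(10.0, 0.1))   # ~ (1.047723, 1.669788)
```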

Thus, for each optimal point in (33), the optimal value of problem ($QP_{\rho,\mu}$) in (31) is $\theta(\rho, \mu)=\Phi(x_1, x_2, \rho, \mu)$, whose graph is shown in Figure 2 with $M=2$.

We can see that, for fixed $\mu$, $\theta(\rho, \mu)$ is an increasing function and, for fixed $\rho$, $\theta(\rho, \mu)$ is a decreasing function. This fact is shown theoretically in Theorem 4.1.

Furthermore, using (33), when $\mu\to 0$ and $\rho\to\infty$, the following coefficients from the gradient conditions (32) converge to the optimal Lagrangian multipliers:

$$u_1=\frac{\mu}{x_1-1}=1+\sqrt{1+2\mu} \to 2=u_1^*, \qquad u_2=\frac{\mu}{x_2+1}=\frac{2\mu(2+\rho)}{2+3\rho+A} \to 0=u_2^*, \qquad \lambda=\rho(x_2-2)=\frac{\rho(-10-3\rho+A)}{2(2+\rho)} \to -4=\lambda^*,$$

where $A=\sqrt{4\mu(2+\rho)+(2+3\rho)^2}$. In addition, the Hessian matrix of $\Phi$ is

$$\nabla^2\Phi=\begin{pmatrix}2+\dfrac{\mu}{(x_1-1)^2} & 0\\[2mm] 0 & 2+\rho+\dfrac{\mu}{(x_2+1)^2}\end{pmatrix}=\begin{pmatrix}2+\dfrac{u_1^2}{\mu} & 0\\[2mm] 0 & 2+\rho+\dfrac{u_2^2}{\mu}\end{pmatrix}\approx\begin{pmatrix}2+\dfrac{4}{\mu} & 0\\[2mm] 0 & 2+\rho\end{pmatrix}.$$

Then $\nabla^2\Phi$ is a positive definite matrix, which guarantees the minimality of $x_1$ and $x_2$ in (33). Moreover, the approximate condition number of this matrix is

$$\kappa(\nabla^2\Phi) \approx \frac{2+4/\mu}{2+\rho},$$

hence $\nabla^2\Phi$ is ill-conditioned for very small $\mu$ and small $\rho$; however, for large $\rho$, that condition number is reduced.

Next, we have the global convergence theorem for the mixed barrier-penalty algorithm.

Theorem 4.1 (Global convergence for the mixed method). Let $\{x_k\}$ be a sequence of global minimizers of the ($BP_k$) problem in (25) generated by the mixed Algorithm 3, in which $\rho_k\to+\infty$ and $\mu_k\to 0$. Then any limit point of the sequence is a global minimizer of the (NLP) problem.

Proof. In order to apply the results of the preceding sections, the idea is to fix one of the parameters in the ($BP_{\rho,\mu}$) subproblem in (23), one at a time, and apply the corresponding results to each subproblem in (27).

First, fixing $\rho$, let $\{x_k^\rho\}$ be the sequence generated by Algorithm 1 for solving the ($GP_\rho$) subproblem in (27). By applying Lemma 2.1, we get

$$\phi_\rho(x_{k+1}^\rho, \mu_k) \le \phi_\rho(x_k^\rho, \mu_{k-1}), \qquad F_\rho(x_{k+1}^\rho) \le F_\rho(x_k^\rho). \tag{34}$$

By the monotonicity in (34) and by (12), the sequence $\phi_\rho(x_k^\rho, \mu_{k-1})$ converges to the global optimal value of problem ($GP_\rho$) in (27), that is,

$$\phi_\rho(x_k^\rho, \mu_{k-1}) \to \inf_{k\ge 1} \phi_\rho(x_k^\rho, \mu_{k-1})=F_\rho(x^{*\rho}). \tag{35}$$

In addition, from Theorem 2.1, every convergent subsequence of $\{x_k^\rho\}$ converges to a global minimizer of problem ($GP_\rho$) in (27).

Similarly, fixing $\mu$, let $\{x_k^\mu\}$ be the sequence generated by the associated Algorithm 2 for solving the ($GP_\mu$) subproblem in (27). By applying Lemma 3.1, we get

$$\psi_\mu(x_k^\mu, \rho_{k-1}) \le \psi_\mu(x_{k+1}^\mu, \rho_k), \qquad G_\mu(x_k^\mu) \le G_\mu(x_{k+1}^\mu). \tag{36}$$

By the monotonicity in (36) and by (21), the sequence $\psi_\mu(x_k^\mu,\rho_{k-1})$ converges to the global optimal value of the ($GP_\mu$) problem in (27), that is,

$$\psi_\mu(x_k^\mu,\rho_{k-1}) \to \sup_{k\ge 1}\psi_\mu(x_k^\mu,\rho_{k-1})=G_\mu(x^{*\mu}). \tag{37}$$

In addition, from Theorem 3.1, every convergent subsequence of $\{x_k^\mu\}$ converges to a global minimizer of problem ($GP_\mu$) in (27), and by Lemma 3.2 we get $G_\mu(x_k^\mu) \le G_\mu(x^{*\mu})$ for all $k$.

Now let $\{x_k\}$ be the sequence of minimizers obtained by Algorithm 3 for the mixed problem. More precisely, let $x_{k+1}=x(\rho_k,\mu_k)$, which also minimizes (28), because, according to (29), we have

$$\Phi(x_{k+1},\rho_k,\mu_k)=f(x_{k+1})+\rho_k\mathcal{P}(x_{k+1})+\mu_k B(x_{k+1})=\phi_{\rho_k}(x_{k+1},\mu_k)=\psi_{\mu_k}(x_{k+1},\rho_k).$$

From (35) and (37), we have

$$F_{\rho_k}(x^{*\rho_k}) \le \phi_{\rho_k}(x_{k+1},\mu_k)=\Phi(x_{k+1},\rho_k,\mu_k)=\psi_{\mu_k}(x_{k+1},\rho_k) \le G_{\mu_k}(x^{*\mu_k}). \tag{38}$$

Let $x(\rho_k, \mu_k)$ denote the solution of ($BP_{\rho,\mu}$) for $\mu=\mu_k$ and $\rho=\rho_k$. For $\mu_k<\mu_{k-1}$, we additionally solve ($BP_{\rho,\mu}$) for $\mu=\mu_{k-1}$ and $\rho=\rho_k$, whose solution is called $x(\rho_k, \mu_{k-1})$. Using (34),

$$\phi_{\rho_k}(x(\rho_k,\mu_k),\mu_k) \le \phi_{\rho_k}(x(\rho_k,\mu_{k-1}),\mu_{k-1}) \tag{39}$$

and, by (29), for $x=x(\rho_k,\mu_k)$ and $y=x(\rho_k,\mu_{k-1})$, we have

$$\begin{aligned}
\phi_{\rho_k}(x,\mu_k)&=f(x)+\rho_k\mathcal{P}(x)+\mu_k B(x)=\psi_{\mu_k}(x,\rho_k),\\
\phi_{\rho_k}(y,\mu_{k-1})&=F_{\rho_k}(y)+\mu_{k-1}B(y)=f(y)+\rho_k\mathcal{P}(y)+\mu_{k-1}B(y)\\
&=f(y)+\mu_{k-1}B(y)+\rho_k\mathcal{P}(y)=G_{\mu_{k-1}}(y)+\rho_k\mathcal{P}(y)\\
&=\psi_{\mu_{k-1}}(y,\rho_k).
\end{aligned} \tag{40}$$

Using (39) and (40), we get

$$\psi_{\mu_k}(x(\rho_k,\mu_k),\rho_k) \le \psi_{\mu_{k-1}}(x(\rho_k,\mu_{k-1}),\rho_k), \quad\text{and hence}\quad G_{\mu_k}(x^{*\mu_k}) \le G_{\mu_{k-1}}(x^{*\mu_{k-1}}).$$

Similarly, for $\rho_k>\rho_{k-1}$, we additionally consider a solution of ($BP_{\rho,\mu}$) for $\rho=\rho_{k-1}$ and $\mu=\mu_k$, whose solution is called $x(\rho_{k-1}, \mu_k)$. Using (36),

$$\psi_{\mu_k}(x(\rho_{k-1},\mu_k),\rho_{k-1}) \le \psi_{\mu_k}(x(\rho_k,\mu_k),\rho_k) \tag{41}$$

and, by (29), for $x=x(\rho_k,\mu_k)$ and $z=x(\rho_{k-1},\mu_k)$, we have

$$\begin{aligned}
\psi_{\mu_k}(x,\rho_k)&=f(x)+\mu_k B(x)+\rho_k\mathcal{P}(x)=\phi_{\rho_k}(x,\mu_k),\\
\psi_{\mu_k}(z,\rho_{k-1})&=G_{\mu_k}(z)+\rho_{k-1}\mathcal{P}(z)=f(z)+\mu_k B(z)+\rho_{k-1}\mathcal{P}(z)\\
&=f(z)+\rho_{k-1}\mathcal{P}(z)+\mu_k B(z)=F_{\rho_{k-1}}(z)+\mu_k B(z)\\
&=\phi_{\rho_{k-1}}(z,\mu_k).
\end{aligned} \tag{42}$$

Using (41) and (42), we get

$$\phi_{\rho_{k-1}}(x(\rho_{k-1},\mu_k),\mu_k) \le \phi_{\rho_k}(x(\rho_k,\mu_k),\mu_k), \quad\text{and hence}\quad F_{\rho_{k-1}}(x^{*\rho_{k-1}}) \le F_{\rho_k}(x^{*\rho_k}).$$

Let $x^*$ be a global minimizer of (NLP). Recalling that $x^{*\mu_k}$ is a solution of problem ($GP_{\mu_k}$) in (27), with the additional assumption $x^{*\mu_k}\in\mathrm{Int}(\Omega_2)$, we can conclude that $f(x^*) \le G_{\mu_k}(x^{*\mu_k})$. Moreover, $x^*$ is a feasible point of problem ($GP_\rho$); then $F_{\rho_k}(x^{*\rho_k}) \le f(x^*)$. Therefore, $G_{\mu_k}(x^{*\mu_k})$ is a monotone nonincreasing sequence bounded below by $f(x^*)$, and, using (12), this sequence converges to its infimum $f(x^*)$. Also, $F_{\rho_k}(x^{*\rho_k})$ is a monotone nondecreasing sequence bounded above by $f(x^*)$, and, by (21), that sequence converges to its supremum $f(x^*)$, that is,

$$F_{\rho_k}(x^{*\rho_k}) \to \sup_{k\ge 1}F_{\rho_k}(x^{*\rho_k})=f(x^*), \qquad G_{\mu_k}(x^{*\mu_k}) \to \inf_{k\ge 1}G_{\mu_k}(x^{*\mu_k})=f(x^*). \tag{43}$$

By applying the squeeze theorem¹ to (38) and (43), we obtain

$$\lim_{k\to\infty}\Phi(x_k,\rho_{k-1},\mu_{k-1})=f(x^*). \tag{44}$$

Let $\{x_{k_l}\}$ be any subsequence of $\{x_k\}$ such that $x_{k_l}\to\bar x$. By the continuity of $f$, we get $f(x_{k_l})\to f(\bar x)$. The final step is done by contradiction, under the assumption $\bar x\ne x^*$ with $f(\bar x)>f(x^*)$. Using (10) for problem ($GP_\rho$), we have $\mu_{k_l-1}B(x)\to 0$ for any $x\in\mathrm{Int}(\Omega_2)$. Furthermore, using (20) for problem ($GP_\mu$), we also have $\rho_{k_l-1}\mathcal{P}(x_{k_l})\to 0$; then, by the continuity of $f$, the sequence $f(x_{k_l})+\rho_{k_l-1}\mathcal{P}(x_{k_l})+\mu_{k_l-1}B(x_{k_l})$ cannot converge to $f(x^*)$, which contradicts (44). □

5 APPLICATIONS

5.1 Barrier-Penalty applied to convex problem

Algorithm 4, based on the generic Algorithm 3, solves the nonlinear problem (30).

For $\rho_0=1$, $\mu_0=1$ and $x_0=(1.5,1)$, we write a MATLAB script for Algorithm 4 in order to compute a sequence of optimal points that approach $x^*=(1,2)$. The iterative results are shown in Table 1.

Table 1 – Iterative results.

| $k$ | $x_k$ | $\Phi(x_k, \rho_k, \mu_k)$ | $\rho_k$ | $\mu_k$ |
|---|---|---|---|---|
| 0 | (1.500000, 1.000000) | 3.25000000 | 1 | 1 |
| 1 | (1.366025, 0.847127) | 3.63962871 | 10 | 0.1 |
| 2 | (1.047723, 1.669788) | 4.63714955 | 100 | 0.01 |
| 3 | (1.004975, 1.960817) | 4.97372208 | $10^3$ | 0.001 |
| 4 | (1.000500, 1.996008) | 4.99951984 | $10^4$ | 0.0001 |
| 5 | (1.000054, 1.999600) | 5.00018097 | $10^5$ | 1e-05 |
| 6 | (1.000009, 1.999960) | 5.00004320 | $10^6$ | 1e-06 |
| 7 | (1.000101, 1.999996) | 5.00020113 | $10^7$ | 1e-07 |
| 8 | (1.000020, 2.000000) | 5.00004029 | $10^8$ | 1e-08 |
| 9 | (1.000001, 2.000000) | 5.00000209 | $10^9$ | 1e-09 |

The path of the iterates is shown in Figure 3, where the last points are close to $x^*$. The exact solution, also computed with MATLAB, is $x^*=(1.000000,2.000000)$ with $f(x^*)=5.000000$.

5.2 Penalized standard linear programming problem

We consider the standard linear programming problem in which several variables are upper bounded:

$$(LP)\quad \min c^T x \quad \text{s.t. } Ax=b,\; Ex\le u,\; x\ge 0, \tag{46}$$

where $A$ is an $m\times n$ matrix, $c,x\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, and $E$ is formed by the rows of the $n\times n$ identity matrix corresponding to the bounded variables, so that $Ex$ is the vector of bounded variables and $u$ is the vector of upper bounds. In this case, it is usual to add a slack variable $v$ such that $Ex+v=u$, where $v\ge 0$.

In most computational packages that implement interior point methods for solving linear programming problems, only the barrier parameter is considered.

In order to solve the LP problem (46) using the quadratic penalty and logarithmic barrier functions, the objective function is penalized as follows:

$$\Phi(x,v,\rho,\mu)=c^T x+\frac{\rho}{2}\|b-Ax\|^2-\mu\sum_{j=1}^{n}\log x_j-\mu\sum_{j=1}^{nb}\log v_j, \tag{47}$$

where µ and ρ are respectively the barrier and penalty parameters and nb is the number of bounded variables. Then the associated mixed barrier-penalty subproblem is

$$(LPP_{\rho,\mu})\quad \min \Phi(x,v,\rho,\mu) \quad \text{s.t. } x,v>0. \tag{48}$$

Since $\Phi(x,v,\rho,\mu)$ is a smooth function on the open set $x,v>0$, applying the first-order necessary condition yields

$$c-A^T\rho(b-Ax)-\mu X^{-1}e+\mu E^T V^{-1}e=0.$$

Defining $y=\rho(b-Ax)$, $z=\mu X^{-1}e$, $w=\mu V^{-1}e$, we get

$$c-A^T y+E^T w-z=0, \quad Ex+v=u, \quad XZe=\mu e, \quad VWe=\mu e, \quad y=\rho(b-Ax).$$

Taking $\delta=1/\rho$, we rewrite $\delta y=b-Ax$; thus $Ax+\delta y=b$.

Therefore, the optimality conditions for subproblem ($LPP_{\rho,\mu}$) on $(x,v)>0$ and $(z,w)>0$ are

$$\begin{aligned}
Ax+\delta y&=b,\\
Ex+v&=u,\\
A^T y+z-E^T w&=c,\\
XZe&=\mu e,\\
VWe&=\mu e.
\end{aligned} \tag{49}$$

In the interior point methods reviewed in Suñagua & Oliveira (2017), a search direction is found by applying Newton's method to the nonlinear system (49). In fact, the Newton directions satisfy

$$\begin{pmatrix}
A & 0 & \delta I & 0 & 0\\
E & I & 0 & 0 & 0\\
0 & 0 & A^T & I & -E^T\\
Z & 0 & 0 & X & 0\\
0 & W & 0 & 0 & V
\end{pmatrix}
\begin{pmatrix}d_x\\ d_v\\ d_y\\ d_z\\ d_w\end{pmatrix}=
\begin{pmatrix}r_p\\ r_u\\ r_d\\ r_c\\ r_s\end{pmatrix},
\qquad
\begin{aligned}
r_p&=b-Ax-\delta y,\\
r_u&=u-Ex,\\
r_d&=c-A^T y-z+E^T w,\\
r_c&=\mu e-XZe,\\
r_s&=\mu e-VWe.
\end{aligned} \tag{50}$$

Solving these block linear equations, we find

$$d_z=X^{-1}(r_c-Zd_x), \quad d_w=V^{-1}(r_s-Wd_v), \quad d_v=r_u-Ed_x, \tag{51}$$

and, replacing these in the third group of equations,

$$A^T d_y-D^{-1}d_x=r_d-X^{-1}r_c+E^T V^{-1}r_s-E^T V^{-1}W r_u,$$

where $D^{-1}=X^{-1}Z+E^T V^{-1}WE$; then

$$d_x=D(A^T d_y-r_d+X^{-1}r_c-E^T V^{-1}r_s+E^T V^{-1}W r_u). \tag{52}$$

Using (52) and the first group of equations of (50), we get the normal equations

$$(ADA^T+\delta I)d_y=r_p+AD(r_d-X^{-1}r_c+E^T V^{-1}r_s-E^T V^{-1}W r_u). \tag{53}$$

Close to an optimal point, the matrix $D$ is very badly scaled, and then $ADA^T$ is also very ill-conditioned. In this case, the penalty parameter $\delta$ improves the condition number, which is helpful for solving the symmetric positive definite system, for instance by the conjugate gradient method.
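The regularizing effect of $\delta$ can be illustrated on a toy diagonal case (Python; the chosen spread of $D$ and the value of $\delta$ are illustrative assumptions mimicking the bad scaling near an optimal point, where $A=I$ so the eigenvalues of $ADA^T$ are just the diagonal entries of $D$):

```python
# Toy case: A = I, D = diag(1e-8, 1e8), so A D A^T has eigenvalues d.
# Shifting by delta*I lifts the tiny eigenvalue and caps the condition number.
d = [1e-8, 1e8]            # eigenvalues of A D A^T in this diagonal case
delta = 1.0e-4             # delta = 1/rho for a large penalty parameter rho

cond_plain = max(d) / min(d)
cond_reg = max(di + delta for di in d) / min(di + delta for di in d)

print(cond_plain)   # ~1e16: unusable for iterative solvers
print(cond_reg)     # ~1e12: several orders of magnitude better
```

The same effect appears in the augmented system (54): the $\delta I$ block bounds the smallest eigenvalues away from zero without changing the solution significantly when $\rho$ is large.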

Alternatively to (52) and (53), $d_x$ and $d_y$ can also be obtained by solving the following augmented system

$$\begin{pmatrix}-D^{-1} & A^T\\ A & \delta I\end{pmatrix}\begin{pmatrix}d_x\\ d_y\end{pmatrix}=\begin{pmatrix}r_1\\ r_p\end{pmatrix}, \tag{54}$$

where $r_1=r_d-X^{-1}r_c+E^T V^{-1}r_s-E^T V^{-1}W r_u$. This system is symmetric and indefinite, and it has a better condition number due to the penalty parameter.

For the computational experiments, we use the open-source package PCx (Czyzyk et al., 1997), which implements Mehrotra's predictor-corrector algorithm, in which the barrier parameter $\mu$ is already incorporated, for solving linear programming problems. By adding appropriate code to PCx, we incorporated the penalty parameter $\delta$, obtaining a modified PCx: the predictor-corrector mixed algorithm with barrier and penalty parameters. Numerical results for several NETLIB LP problems were computed for the approaches proposed in Suñagua & Oliveira (2017), where the quality of the approaches was compared according to the Dolan & Moré (2002) performance profile criteria.

6 CONCLUSIONS

First, we presented a brief summary of the main concepts and results on barrier and penalty methods, showing for each method the global convergence theorems, in order to use these results in the proof of the global convergence theorem for the mixed algorithm.

In Section 4, we provide a mixed algorithm for solving the mixed barrier-penalty subproblem (23), and we give a constructive proof of the global convergence theorem for mixed barrier-penalty methods, as an alternative to those shown in Fiacco & McCormick (1990) and Breitfeld & Shanno (1995). For a simple convex nonlinear problem, we wrote MATLAB code to generate iterative points that illustrate the penalty and barrier functions.

Finally, we developed applications to nonlinear programming problems with equality and inequality functional constraints, namely a quadratic programming problem and a standard linear programming problem. Since the functions involved are smooth on an open set, the optimality conditions for each class of problems were stated; these can be solved by applying interior point methods.

ACKNOWLEDGEMENTS

Thanks to CNPq, FAPESP (grant number 2010/06822-4) and Universidad Mayor de San Andrés (UMSA) for their financial support.

References

1 BAZARAA MS, SHERALI HD & SHETTY CM. 2013. Nonlinear programming: theory and algorithms. John Wiley & Sons. [ Links ]

2 BERTSEKAS DP. 1976. On penalty and multiplier methods for constrained minimization. SIAM Journal on Control and Optimization, 14(2): 216-235. [ Links ]

3 BREITFELD MG & SHANNO DF. 1994. A globally convergent penalty-barrier algorithm for nonlinear programming and its computational performance. Rutgers University. Rutgers Center for Operations Research [RUTCOR]. [ Links ]

4 BREITFELD MG & SHANNO DF. 1995. A Globally Convergent Penalty-Barrier Algorithm for Nonlinear Programming. In: Operations Research Proceedings 1994. pp. 22-27. Springer. [ Links ]

5 CZYZYK J, MEHROTRA S, WAGNER M & WRIGHT SJ. 1997. PCx user guide (Version 1.1). Optimization Technology Center, Northwestern University. [ Links ]

6 DOLAN ED & MORÉ JJ. 2002. Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2): 201-213. [ Links ]

7 FIACCO AV & MCCORMICK GP. 1990. Nonlinear programming: sequential unconstrained minimization techniques. vol. 4. Siam. [ Links ]

8 FLETCHER R. 2013. Practical methods of optimization. John Wiley & Sons, Chichester. [ Links ]

9 GRIVA I, NASH SG & SOFER A. 2009. Linear and nonlinear optimization. vol. 108. Siam. [ Links ]

10 KUHN HW & TUCKER AW. 1951. Nonlinear Programming. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. pp. 481-492. Berkeley, California: University of California Press. Available at: http://projecteuclid.org/euclid.bsmsp/1200500249. [ Links ]

11 LUENBERGER DG & YE Y. 2008. Linear and nonlinear programming. 3rd ed. Springer New York. [ Links ]

12 MARTÍNEZ JM & SANTOS SA. 1995. Métodos computacionais de otimização [Computational Methods of Optimization]. Colóquio Brasileiro de Matemática, Apostilas, 20. Available at: https://www.ime.unicamp.br/~martinez/mslivro.pdf. [ Links ]

13 NASH SG. 2010. Penalty and barrier methods. Wiley Encyclopedia of Operations Research and Management Science. [ Links ]

14 NASH SG & SOFER A. 1993. A barrier method for large-scale constrained optimization. ORSA Journal on Computing, 5(1): 40-53. [ Links ]

15 POLYAK BT. 1971. The convergence rate of the penalty function method. USSR Computational Mathematics and Mathematical Physics, 11(1): 1-12. [ Links ]

16 SUÑAGUA P & OLIVEIRA AR. 2017. A new approach for finding a basis for the splitting preconditioner for linear systems from interior point methods. Computational Optimization and Applications, 67(1): 111-127. [ Links ]

17 WRIGHT MH. 1992. Interior methods for constrained optimization. Acta Numerica, 1: 341-407. [ Links ]

18 WRIGHT SJ & NOCEDAL J. 1999. Numerical optimization. vol. 2. Springer New York. [ Links ]

1 formulated in modern terms by Carl Friedrich Gauss

Received: December 07, 2018; Accepted: October 31, 2019

*Corresponding author

This is an open-access article distributed under the terms of the Creative Commons Attribution License