
A NONLINEAR FEASIBILITY PROBLEM HEURISTIC

Abstract

In this work we consider a region S ⊂ ℝⁿ given by a finite number of nonlinear smooth convex inequalities and having nonempty interior. We assume a point x0 is given, which is close in a certain norm to the analytic center of S, and that a new nonlinear smooth convex inequality is added to those defining S (perturbed region). It is constructively shown how to obtain a shift of the right-hand side of this inequality such that the point x0 is still close (in the same norm) to the analytic center of this shifted region. Starting from this point and using the theoretical results shown, we develop a heuristic that allows us to obtain the approximate analytic center of the perturbed region. We then present a procedure to solve the nonlinear feasibility problem. The procedure was implemented and we performed some numerical tests for the quadratic (random) case.

Keywords: interior point; analytic center; convex optimization


1 INTRODUCTION

In Linear Programming (LP), as well as in other areas of optimization, it is desirable to be able to carry out post-optimality processing as efficiently as possible. In particular, if an LP is solved and right after that the original data are altered, one wants to restart the process of finding a solution for the new problem from the solution obtained for the original LP. On many occasions, obtaining a new solution using the latest information requires less computational effort than starting the process from scratch.

It is widely known that this kind of hot start can be successfully carried out by the Simplex method and its variants for certain perturbations of the data. An important particular case, because of its applications, is the addition of a new constraint. Adding a constraint to an LP is equivalent to adding a variable in the associated dual problem. If this variable is set to zero, then the optimal dual solution is feasible for the new problem, and we can start the Dual-Simplex method from this solution to solve the new problem. This often reduces the number of iterations required to solve the perturbed problem. This idea has an important application in Cutting Plane Methods (CPMs) for Integer Linear Programming (ILP).

In an ILP problem, we want to optimize a linear functional in several variables, subject to linear constraints and such that some or all variables take on integer values; that is, an ILP problem is an LP problem where some or all variables must be integers. This complication turns the seemingly simple LP problem into a problem of combinatorial nature, whose solution can be difficult to achieve. Many ILP problems, such as the traveling salesman problem, are NP-complete. Traditionally, such problems are solved using heuristics like local search, simulated annealing, genetic algorithms, ant colony optimization, etc., thus obtaining good solutions that in some cases can be optimal. However, these techniques do not have the ability to check the optimality of the obtained solutions.

On the other hand, classical methods such as branch and bound, cutting planes and their hybrids (such as branch and cut) use bounds for the optimal value (iteratively updated) to check the optimality of the obtained solution, or at least to inform its proximity to the optimal value. Among these techniques, the CPM is of particular interest; it is a generic term for optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, known as cuts. Such procedures are popularly used to find integer solutions to Mixed Integer Linear Programming (MILP) problems, as well as to solve general, not necessarily differentiable, convex optimization problems. Under certain assumptions on the problem, we can prove the convergence of the CPM, which does not guarantee reasonable computational behavior: on certain occasions, the time required is so large that in practice it is impossible to obtain a solution. Despite these difficulties, the CPMs are still the best-behaved methods for solving specialized problems. In these applications, the Simplex method is used to solve the relaxed LPs generated by the CPM.

Thirty years after the revolutionary step by N. Karmarkar (11) in the area of optimization, when he introduced a computationally efficient polynomial interior point algorithm for solving LP problems, it was natural that some research arose on the application of interior point algorithms to post-optimality problems, in particular to solving the relaxed LPs arising from a CPM applied to an ILP. Some of the early works in this direction were presented in (1), (13), (14), where a CPM for ILP is described that uses as a subroutine a variant of Karmarkar's projective algorithm. In each iteration, the dual problem associated with the relaxed LP is solved using the Karmarkar variant. As mentioned earlier, when adding a new constraint to the original LP, we easily obtain a dual-feasible solution. The problem is that this solution is not interior: it is obtained by making at least one variable zero. Using heuristics and LP information, the authors provide a direction from which it is possible to obtain an interior solution to restart the projective algorithm or the Karmarkar variant.

Another way to address the problem of restarting the algorithm when we add a constraint is to use algorithms that work with points that are not necessarily feasible (see (12), (16)). This methodology pursues feasibility and optimality simultaneously. The problem is that, by adding a constraint to the feasible region, the iterates of the algorithm tend to move towards the center of the feasible region, thus recovering feasibility but moving away from the optimum.

So far, one of the best methods that uses interior points in the solution of the relaxed LP in a CPM is the method based on the concept of analytic center. The analytic center of a feasible region is the unique interior point farthest, in the barrier sense, from the boundary. Briefly, a CPM can be thought of as a column generation method applied to the dual of the relaxed LP. In (18), a polynomial algorithm with column generation based on interior points is proposed and applied to the Linear Feasibility Problem (LFP); the LFP is equivalent to an LP problem. In (6), a polynomial algorithm for column generation is presented, which uses the analytic center and solves the nonlinear convex feasibility problem.

In general, the problem to be solved when using interior points in a CPM is that, when adding a new constraint, the current solution is no longer interior, and therefore it is not possible to restart the methodology. In this case, one needs some technique or heuristic that allows the "interiority recovery" of the current solution. In this work we present a heuristic to recover the analytic center of a region defined by a finite number of nonlinear inequalities as soon as we add a new inequality to the region. This heuristic is based on the extension of the theoretical results developed in (5).

In the next section we present the theoretical foundations and define the problem to be considered. In the sequence, we present the extended results, as well as the implemented heuristic and its application to solving the nonlinear feasibility problem. Finally, we present the numerical results and the conclusions.

2 THEORETICAL FOUNDATIONS

We consider a region S given by S = {x : ƒi(x) < 0; i = 1, ..., m}, where the ƒi(x) are convex functions from ℝⁿ to ℝ with continuous second order derivatives in int(S) (the interior of S). We will assume that int(S) is nonempty and bounded. The logarithmic barrier function associated to S is given by

P(x) = −∑_{i=1}^{m} ln(−ƒi(x)),

and the gradient and the Hessian matrix of P(x) at x are, respectively:

g(x) = ∇P(x) = −∑_{i=1}^{m} ∇ƒi(x)/ƒi(x),    H(x) = ∇²P(x) = ∑_{i=1}^{m} [ −∇²ƒi(x)/ƒi(x) + ∇ƒi(x)∇ƒi(x)ᵀ/ƒi(x)² ],

where H induces the relative norm ‖v‖_{H(x)} = (vᵀH(x)v)^{1/2} for all x ∈ int(S). We suppose that the Hessian matrices Hi of all constraint functions ƒi(x) fulfil a Relative Lipschitz Condition.

Thus, for each i, the logarithmic barrier function −ln(−ƒi(x)) is self-concordant (15). Also, the logarithmic barrier function P(x) associated to S is self-concordant (as the sum of self-concordant functions is again self-concordant), with Hessian matrix positive-definite on its domain int(S), hence strictly convex. P(x) penalizes points near the boundary of S.

Note that P(x) has a unique minimizer xc ∈ int(S), which is called the analytic center of S, see e.g. (10), i.e., xc = arg min {P(x) : x ∈ int(S)}. The proximity of x ∈ int(S) to xc is defined by δ(x) = ‖H(x)⁻¹g(x)‖_{H(x)}, i.e., the norm (induced by H) of the Newton direction for P at x, or of the scaled gradient vector. Then, x will be considered near xc (an ε-approximate center) if δ(x) < ε for some ε ∈ (0, 1). Clearly, δ(x) = 0 if and only if x = xc.
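For concreteness, the quantities above can be evaluated directly. The sketch below (Python; the paper's own implementation was in MATLAB, and the function names and the quadratic test region are our illustrative choices) computes g(x), H(x) and the proximity δ(x) for a region given by convex quadratics ƒi(x) = ½xᵀAix + biᵀx + ci:

```python
import numpy as np

def barrier_terms(x, quads):
    """Gradient g(x) and Hessian H(x) of P(x) = -sum ln(-f_i(x)),
    where each f_i(x) = 0.5 x'Ax + b'x + c is a convex quadratic."""
    n = len(x)
    g, H = np.zeros(n), np.zeros((n, n))
    for A, b, c in quads:
        f = 0.5 * x @ A @ x + b @ x + c      # must be < 0 for interior x
        df = A @ x + b                       # gradient of f_i
        g += -df / f
        H += -A / f + np.outer(df, df) / f**2
    return g, H

def proximity(x, quads):
    """delta(x) = ||H^{-1} g||_H = sqrt(g' H^{-1} g)."""
    g, H = barrier_terms(x, quads)
    d = np.linalg.solve(H, g)                # Newton direction for P at x
    return np.sqrt(d @ g)

# Unit ball f(x) = x'x - 1 < 0: its analytic center is the origin.
ball = [(2.0 * np.eye(2), np.zeros(2), -1.0)]
d0 = proximity(np.zeros(2), ball)            # 0 at the center
d1 = proximity(np.array([0.5, 0.0]), ball)   # positive away from it
```

Note that δ(x)² = g(x)ᵀH(x)⁻¹g(x), so no matrix square root is needed.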

In this work we suppose a point x0 ∈ int(S) is given such that δ = δ(x0) < ε for some ε ∈ (0, 1), and also that a convex function ƒ0(x) from ℝⁿ to ℝ with continuous second order derivatives in int(S) is given, such that the inequality ƒ0(x) < 0 generates a new region S0 from S:

S0 = {x : ƒ0(x) < 0; ƒi(x) < 0; i = 1, ..., m}.

We are interested in finding a "shift" γ* of the right-hand side of the new inequality which generates a new region where x0 is still close, in the same norm, to the corresponding analytic center. This question is conceptually equivalent to the following problem: given an ε-approximate center x0 of the region S and the inequality ƒ0(x) < γ*, find a bound δ, depending on ƒ0, ε and γ*, such that x0 is a δ-approximate center of the region

S* = {x : ƒ0(x) < γ*; ƒi(x) < 0; i = 1, ..., m}.

We present results (bounds for γ*) that preserve centrality after a new constraint ƒ0(x) < γ* is added to S. More precisely, given an ε-approximate center x0 of the region S, a bound for γ*, depending on ƒ0 and ε, is derived that preserves x0 as a √ε-approximate center of S*.

3 RESULTS

With the analytic center theory in mind, given γ and a positive integer q > m, we start our search for such a region by adding the inequality ƒ0(x) < γ, repeated q times, to those defining S; that is, we set:

Sq,γ = {x : ƒ0(x) < γ (repeated q times); ƒi(x) < 0; i = 1, ..., m}.

Note that the integer q acts as a weight in computing the associated barrier function. Indeed, Pq(x, γ) = −q ln[γ − ƒ0(x)] + P(x); in particular, P1(x, 0) = −ln[−ƒ0(x)] + P(x). The notation Sq,γ is used throughout to emphasize that the logarithmic barrier function being considered for the region has the inequality with right-hand side γ repeated q times.

The analytic center of Sq,γ is x̄ = arg min {Pq(x, γ) : x ∈ int(Sq,γ)}. As before, the proximity measure to x̄, given x ∈ int(Sq,γ), needs the gradient and the Hessian matrix of Pq(x, γ) (with respect to x), which are:

gq(x, γ) = q∇ƒ0(x)/(γ − ƒ0(x)) + g(x),    (7)

Hq(x, γ) = q∇ƒ0(x)∇ƒ0(x)ᵀ/(γ − ƒ0(x))² + q∇²ƒ0(x)/(γ − ƒ0(x)) + H(x).    (8)

If no confusion is possible, we will write, for the sake of shortness, gq and Hq instead of gq(x, γ) and Hq(x, γ). Hq is positive-definite for x ∈ int(Sq,γ), and the proximity of x to x̄ is given by δq(x, γ) = ‖Hq⁻¹gq‖_{Hq}. The following lemma shows the relation between δ(x0) and δq(x0, γ), for γ > ƒ0(x0).
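Formulas (7) and (8) can be evaluated as below (a Python sketch with illustrative names; the new constraint enters through its value, gradient and Hessian at the point, and g, H are the barrier gradient and Hessian of the original region at that point):

```python
import numpy as np

def weighted_terms(f0, d0, Q0, gamma, q, g, H):
    """g_q and H_q of P_q(x,gamma) = -q ln(gamma - f0(x)) + P(x), given
    f0 = f0(x), d0 = grad f0(x), Q0 = Hess f0(x), g = grad P(x) and
    H = Hess P(x); requires gamma > f0."""
    beta = 1.0 / (gamma - f0)
    gq = q * beta * d0 + g
    Hq = q * beta**2 * np.outer(d0, d0) + q * beta * Q0 + H
    return gq, Hq

def proximity_q(f0, d0, Q0, gamma, q, g, H):
    """delta_q(x, gamma) = ||H_q^{-1} g_q||_{H_q}."""
    gq, Hq = weighted_terms(f0, d0, Q0, gamma, q, g, H)
    d = np.linalg.solve(Hq, gq)
    return np.sqrt(d @ gq)

# At the analytic center of the unit ball (g = 0, H = 2I), add the
# linear cut f0(x) = x[0] - 0.5, so f0 = -0.5, d0 = e1, Q0 = 0.
e1 = np.array([1.0, 0.0])
dq = proximity_q(-0.5, e1, np.zeros((2, 2)), 10.0, 3,
                 np.zeros(2), 2.0 * np.eye(2))
```

The larger γ is, the smaller the contribution of the new inequality, and δq approaches δ.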

From now on, given two square matrices A and B, by A ⪯ B we mean that B − A is positive semidefinite.

Lemma 3.1. Given γ > ƒ0(x0), let d0 = ∇ƒ0(x0), Q0 = ∇²ƒ0(x0), θ = ‖H⁻¹(x0)d0‖_{H(x0)} = ‖d0‖_{H⁻¹(x0)}, α = |d0ᵀH⁻¹(x0)g(x0)| = |d0ᵀH0⁻¹g0| and β0 = β0(γ) = 1/(γ − ƒ0(x0)). Then

δq²(x0, γ) ≤ δ²(x0) + qβ0(2α + qθ²β0 − α²β0)/(1 + qθ²β0²).

Proof. From (7) and (8) we have gq(x0, γ) = g0 + qβ0d0 and Hq(x0, γ) = qβ0²d0d0ᵀ + qβ0Q0 + H0. Also, H0 + qβ0²d0d0ᵀ ⪯ Hq(x0, γ), so that by Corollary 7.7.4 of (9), Hq⁻¹(x0, γ) ⪯ (H0 + qβ0²d0d0ᵀ)⁻¹. As a consequence, by the Sherman-Morrison formula, it holds that:

δq²(x0, γ) = gqᵀHq⁻¹gq ≤ gqᵀ(H0 + qβ0²d0d0ᵀ)⁻¹gq = gqᵀH0⁻¹gq − qβ0²(gqᵀH0⁻¹d0)²/(1 + qθ²β0²) ≤ δ²(x0) + qβ0(2α + qθ²β0 − α²β0)/(1 + qθ²β0²). □

The following theorem shows how we can choose γ to make x0 close to the analytic center of Sq,γ.

Theorem 3.2. Let d0 = ∇ƒ0(x0), θ = ‖d0‖_{H⁻¹(x0)} > 0, α = |d0ᵀH0⁻¹g0|, ε ∈ (0, 1), and ρ = [√(α² + θ²ε(1 − ε)) − α]/(qθ²). Then, for γ0 > ƒ0(x0) + 1/ρ, if δ(x0) < ε then δq(x0, γ0) < √ε.

Proof. By Lemma 3.1, δq²(x0, γ0) ≤ δ²(x0) + qβ0(2α + qθ²β0 − α²β0)/(1 + qθ²β0²), where β0 = 1/(γ0 − ƒ0(x0)). If the numerator of the fraction is negative, we conclude that δq²(x0, γ0) ≤ δ²(x0) < ε² < ε, and we are done. If it is nonnegative, use the facts that the denominator is greater than 1 and that β0 < ρ to get

δq²(x0, γ0) ≤ δ²(x0) + q²θ²β0² + 2qαβ0 = δ²(x0) + qβ0(qθ²β0 + 2α) < δ²(x0) + qρ(qθ²ρ + 2α) = δ²(x0) + ε(1 − ε) < ε² + ε(1 − ε) = ε.

Hence δq(x0, γ0) < √ε. □

This guarantees that the point x0 is close to the analytic center of Sq,γ0. Nevertheless, this is not the desired region, since the constraint involving the function ƒ0(·) appears with weight q > 1 in the associated logarithmic barrier function, while we need x0 to be close to the analytic center of such a region with q = 1. Recall that the constraint ƒ0(x) < 0 was added to S just once.

The following result helps to show that, in general, if a point x is close to the analytic center of Sq,γ , then there exists a "representative" of γ , say γ*, such that x is close to the analytic center of S1,γ*.

Theorem 3.3. Given γ and x ∈ int(Sq,γ), for γ* = ƒ0(x) + (γ − ƒ0(x))/q we have δ1(x, γ*) ≤ δq(x, γ).

Proof. Let β = β(x, γ) = 1/(γ − ƒ0(x)), β* = β*(x, γ*) = 1/(γ* − ƒ0(x)), d = d(x) = ∇ƒ0(x) and Q = Q(x) = ∇²ƒ0(x). Clearly β* = qβ. Also, g1 = ∇xP1(x, γ*) = β*d + g(x) = qβd + g = gq. Now,

H1(x, γ*) = (β*)²ddᵀ + β*Q + H = q²β²ddᵀ + qβQ + H ⪰ qβ²ddᵀ + qβQ + H = Hq(x, γ),

as q > 1. Therefore, using, as before, Corollary 7.7.4 of (9), H1⁻¹(x, γ*) ⪯ Hq⁻¹(x, γ), and δ1²(x, γ*) = g1ᵀH1⁻¹g1 ≤ gqᵀHq⁻¹gq = δq²(x, γ); that is, δ1(x, γ*) ≤ δq(x, γ). □

Corollary 3.4. x0 is close to the analytic center of S1,γ0*, where γ0* = ƒ0(x0) + (γ0 − ƒ0(x0))/q and γ0 > ƒ0(x0) + 1/ρ; indeed, δ1(x0, γ0*) ≤ δq(x0, γ0) < √ε (see Theorems 3.2 and 3.3). It is clear that S1,γ0* is the desired region.

Corollary 3.5. If γ0* < 0, then δ1(x0, 0) < √ε.

Proof. Let γ1 = −qƒ0(x0) + ƒ0(x0) = (1 − q)ƒ0(x0). By hypothesis,

ƒ0(x0) + (γ0 − ƒ0(x0))/q = γ0* < 0, so that −qƒ0(x0) > γ0 − ƒ0(x0) > 1/ρ,

and therefore γ1 > ƒ0(x0) + 1/ρ. By Theorem 3.2, we have δq(x0, γ1) < √ε. Note that γ1 is defined in such a way that its representative regarding x0 is zero, and that, by Theorem 3.3, δ1(x0, 0) ≤ δq(x0, γ1). Combining these two inequalities we obtain δ1(x0, 0) < √ε. □

4 APPLICATIONS (NONLINEAR FEASIBILITY PROBLEM)

4.1 Heuristic

From the literature of mathematical programming, we know that cutting plane methods (1) using the analytic center as test point need some clever strategy to "restart" the method of centers, which is used to solve the corresponding partial or relaxed problem (see (2), (3), (4), (7), (17)). Such restarting consists in finding a new point close to the analytic center of a convex perturbed region, right after adding a new constraint. That is, suppose we have a point x0 ∈ int(S) such that δ(x0) < ε for some ε ∈ (0, 1). Suppose also that a new convex function ƒ0(x) with continuous second order derivatives in int(S) is given, such that the inequality ƒ0(x) < 0 generates a new region

S0 = {x : ƒ0(x) < 0; ƒi(x) < 0; i = 1, ..., m},

where x0 ∈ int(S0). The problem of restarting in a cutting plane strategy consists in determining a new point close to the analytic center of S0. Below we present a heuristic to solve this problem using the results of the previous section.

First of all, as Theorem 3.2 indicates, x0 is close to the analytic center of Sq,γ0 for γ0 > ƒ0(x0) + 1/ρ; moreover, δq(x0, γ0) < √ε. Also, by Theorem 3.3, x0 is close to the analytic center of S1,γ0*, for γ0* = ƒ0(x0) + (γ0 − ƒ0(x0))/q, so that δ1(x0, γ0*) < √ε.

Now, if γ0* < 0, then by Corollary 3.5 we have the desired point. Otherwise, that is, if γ0* > 0, we can initiate the strategy of reducing the value of γ0 and, simultaneously, changing the value of γ0*, trying to make it close to zero. To accomplish this, we will use an algorithm in two phases. The first one is an incomplete method of centers (Algorithm 1), where (y, z) ∈ ℝⁿ × ℝ, ƒ : ℝⁿ → ℝ and τ represents the tolerance on the proximity to the analytic center (in the numerical tests, we used τ = 0.1). We want to reduce (not optimize) γ, starting with the value γ0, until the representative γ* becomes negative. The second one is a bisection process to "approach zero", where in each iteration we do a centralization, just as in the method of centers.

Algorithm 1
Centering (y, z, f )
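A centering routine in the spirit of Algorithm 1 can be sketched as follows (Python; damped Newton steps on the barrier until the proximity drops below τ; the names and the damping rule 1/(1 + δ) are illustrative choices, not necessarily the authors' exact ones):

```python
import numpy as np

def centering(x, barrier, tau=0.1, max_iter=200):
    """Damped Newton on a self-concordant barrier: `barrier` returns the
    gradient g and Hessian H at x; stop once the proximity is below tau."""
    for _ in range(max_iter):
        g, H = barrier(x)
        d = np.linalg.solve(H, g)       # Newton direction
        delta = np.sqrt(d @ g)          # proximity measure delta(x)
        if delta < tau:
            break
        x = x - d / (1.0 + delta)       # damped step keeps x interior
    return x, delta

def ball_barrier(x):
    """Barrier terms for the unit ball f(x) = x'x - 1 < 0."""
    f = x @ x - 1.0
    df = 2.0 * x
    g = -df / f
    H = -2.0 * np.eye(len(x)) / f + np.outer(df, df) / f**2
    return g, H

xc, delta = centering(np.array([0.9, 0.0]), ball_barrier)  # near boundary
```

For self-concordant barriers, the damped step is guaranteed to stay in the interior, which is why no line search appears in the sketch.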

Remember that the logarithmic barrier function associated to the region Sq,γ0 is given by Pq(x, γ0) = −q ln(γ0 − ƒ0(x)) + P(x) and corresponds to the auxiliary function used in a method of centers to solve the problem

min {ƒ0(x) : ƒi(x) < 0; i = 1, ..., m}.    (15)
Therefore, the reduction phase (Algorithm 2) can be initiated with the application of the incomplete method of centers, starting from x0 and the bound γ0. We update the bound to γ1 and obtain a new point x1, close to the analytic center of Sq,γ1. Then we evaluate γ1*, the representative of γ1, such that x1 is close to the analytic center of S1,γ1*, and we check whether γ1* < 0 or, at least, whether it is close enough to zero. In the first case, the reduction phase ends; in the second, the heuristic stops (the desired point was achieved). If neither of the previous conditions is satisfied, then we do another iteration, starting from x1 and γ1, and so on. This phase generates sequences {γj}, {γj*} and {xj} such that each xj is close to the analytic center of Sq,γj and of S1,γj*. Now, if the method of centers were fully applied, we would achieve the optimal value of (15) in polynomial time. In this way, using an incomplete method of centers, we achieve in polynomial time the condition γk* < 0, for some k.

Algorithm 2
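One iteration of this reduction phase can be sketched as follows (Python; `recenter` stands for the incomplete method of centers of Algorithm 1, `f0` for the added constraint function, and the halving factor is an illustrative choice):

```python
def reduction_step(x, gamma, f0, recenter, q, shrink=0.5):
    """Lower the bound gamma towards f0(x), recenter on S_{q,gamma}, and
    return the representative gamma* of the recentred point (Theorem 3.3)."""
    gamma = f0(x) + shrink * (gamma - f0(x))   # reduce the right-hand side
    x = recenter(x, gamma)                     # incomplete centering
    gstar = f0(x) + (gamma - f0(x)) / q        # representative of gamma
    return x, gamma, gstar

# Toy check with a scalar "region" and a recentering that does nothing:
x, gamma, gstar = reduction_step(0.0, 3.0, lambda x: x - 1.0,
                                 lambda x, g: x, q=5)
```

The step always keeps gamma strictly above f0(x), so the weighted barrier remains well defined at the current point.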

At the beginning of the second phase we know the iterates γk−1, xk−1, γk−1* and γk, xk, γk* such that γk* < 0 < γk−1*. This suggests beginning a bisection (Algorithm 3) over the interval [γk, γk−1]. For simplicity, suppose that a left iterate γL, xL, γL* and a right iterate γR, xR, γR* are given with the property that xL is close to the analytic center of Sq,γL and of S1,γL*, and that xR is close to the analytic center of Sq,γR and of S1,γR*. Also, suppose that γL* < 0 < γR*. Regarding the parameters, we used q = 2n + 1, and δ represents the tolerance on the right-hand side of the newly added constraint (in the numerical tests, we used δ = 0.1).

Algorithm 3

As usual in bisection, we evaluate the middle point of the interval [γL, γR], that is, γM = (γL + γR)/2. Then, from xR and γM (as bound), we do a centralization to obtain the point xM, close to the analytic center of Sq,γM. Then we evaluate γM* (the representative of γM regarding xM). Clearly, if γM* is close enough to zero, then the heuristic can stop. Otherwise, there are two possibilities: either γM* < 0, in which case [γM, γR] is the new interval to consider for bisection, or γM* > 0, in which case [γL, γM] is the new interval.
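The bisection step just described can be sketched as (Python; `recenter` and `rep` stand for the centering routine and the computation of the representative γ*, both assumed given):

```python
def bisect_gamma(gL, gR, x, recenter, rep, tol=0.1, max_iter=100):
    """Bisection on gamma, assuming the representatives satisfy
    gamma*_L < 0 < gamma*_R: recenter with the midpoint bound, evaluate
    its representative, and keep the half that still brackets zero."""
    for _ in range(max_iter):
        gM = 0.5 * (gL + gR)
        x = recenter(x, gM)          # point near the center of S_{q,gM}
        gs = rep(x, gM)              # representative gamma* at x
        if abs(gs) < tol:            # close enough to zero: stop
            return x, gM
        if gs < 0:
            gL = gM                  # gamma* negative: move the left end
        else:
            gR = gM                  # gamma* positive: move the right end
    return x, gM

# Toy check: the representative is gamma - 0.33, recentering does nothing.
x, gM = bisect_gamma(0.0, 1.0, 0.0, lambda x, g: x, lambda x, g: g - 0.33)
```

In the actual heuristic, `rep` depends on the recentred point, so monotonicity is only approximate; the maximum iteration count guards against that.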

Based on the previous procedures, we present the "Analytic Center Recovery Algorithm" (ACRA, Algorithm 4). This routine finds a new point close to the analytic center of a convex perturbed region, right after adding a new constraint.

Algorithm 4
ACRA(y, S)
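As an illustration, the sketch below runs the whole recovery idea on a toy instance: S is the unit ball ƒ1(x) = xᵀx − 1 < 0 with exact center x0 = 0, and the cut ƒ0(x) = x1 − 0.3 < 0 is added. The initial shift, the weight q, the reduction factor and the tolerances are illustrative assumptions, and we skip the bisection phase, simply finishing with a centering on the target region (q = 1, γ = 0):

```python
import numpy as np

def terms(x, gamma, q):
    """Gradient and Hessian of Pq(x,gamma) = -q ln(gamma - f0) - ln(-f1)
    for f0(x) = x[0] - 0.3 (the cut) and f1(x) = x'x - 1 (the ball)."""
    f0, d0 = x[0] - 0.3, np.array([1.0, 0.0])
    f1, d1 = x @ x - 1.0, 2.0 * x
    beta = 1.0 / (gamma - f0)
    g = q * beta * d0 - d1 / f1
    H = (q * beta**2 * np.outer(d0, d0)
         - 2.0 * np.eye(2) / f1 + np.outer(d1, d1) / f1**2)
    return g, H

def center(x, gamma, q, tau=0.1):
    """Incomplete centering: damped Newton until proximity < tau."""
    while True:
        g, H = terms(x, gamma, q)
        d = np.linalg.solve(H, g)
        delta = np.sqrt(d @ g)
        if delta < tau:
            return x, delta
        x = x - d / (1.0 + delta)

def acra(x, q=5, gamma=5.0, shrink=0.5, tol=1e-3):
    """Reduce gamma until its representative is (near) nonpositive, then
    polish on the true region (q = 1, gamma = 0)."""
    while True:
        x, _ = center(x, gamma, q)
        f0 = x[0] - 0.3
        gstar = f0 + (gamma - f0) / q        # representative (Theorem 3.3)
        if gstar < tol:
            break
        gamma = f0 + shrink * (gamma - f0)   # reduction step
    return center(x, 0.0, 1)

x, delta = acra(np.zeros(2))
```

The returned point is interior to the perturbed region and has small proximity to its analytic center, which is exactly the warm-start information a cutting plane method needs.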

4.2 Nonlinear feasibility problem

Using the ACRA heuristic we developed a procedure, called "Convex Feasibility Algorithm" (CFA), to solve the nonlinear feasibility problem on smooth convex regions (Algorithm 5). The procedure is based on the well-known methodology of generation of columns (or inequalities) (8). Our initial problem is to find an interior point close to the analytic center of a given region S = {x : ƒi(x) < 0; i = 1, ..., m} ≠ ∅. We are supposing that S is a smooth convex region, that is, it satisfies the conditions described in the introduction of this work, so that we can suppose that there exists a box in ℝⁿ, say T = {x : ℓ < x < u}, containing S, where ℓ, u ∈ ℝⁿ and ℓj < uj. Therefore, our problem is to find an interior point close to the analytic center of the region {x : ƒi(x) < 0; i = 1, ..., m, ℓ < x < u}. The use of a box containing S, far from complicating the problem, allows us to have an automatic initialization of the proposed procedure, as we will see below.

Algorithm 5
CFA

To simplify, we can suppose that, after some translation and scaling of the variables, the previous region can be written as {x : ƒi(x) < 0; i = 1, ..., m, and −e < x < e}, where e = (1, ..., 1). In this way, we can initiate the procedure starting from the exact analytic center of the region B = {x : −e < x < e}, which coincides with its geometric center and is given by x0 = 0. Then, starting with ƒ1(x), we add this inequality to B and, using the proposed heuristic, we obtain a point x1 close to the analytic center of the new region. Now, we repeat this process for x1, and so on, up to the end.
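This sequential scheme can be sketched compactly (Python, with linear cuts aᵀx < b for simplicity; the recentering below replaces the ACRA warm start with a plain damped-Newton centering, which is a simplification of the actual routine):

```python
import numpy as np

def center_box(x, cuts, tau=0.1):
    """Approximate analytic center of (-1,1)^n intersected with a'x < b."""
    while True:
        g = 1.0 / (1.0 - x) - 1.0 / (1.0 + x)
        H = np.diag(1.0 / (1.0 - x)**2 + 1.0 / (1.0 + x)**2)
        for a, b in cuts:
            s = b - a @ x                  # slack of the cut, must be > 0
            g = g + a / s
            H = H + np.outer(a, a) / s**2
        d = np.linalg.solve(H, g)
        delta = np.sqrt(d @ g)
        if delta < tau:
            return x
        x = x - d / (1.0 + delta)

def cfa(cuts, n):
    """Add the inequalities one at a time, recovering an approximate
    analytic center after each addition (CFA-like loop)."""
    x = np.zeros(n)                        # exact analytic center of the box
    active = []
    for cut in cuts:
        active.append(cut)
        x = center_box(x, active)
    return x

x = cfa([(np.array([1.0, 1.0]), 1.0), (np.array([1.0, 0.0]), 0.5)], 2)
```

Starting from the center of the unit box makes the initialization automatic, exactly as described above.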

The procedure performs the ACRA routine at most m times, considering that once an inequality is added it cannot be violated by any further iterate. At the end, we can withdraw the constraints regarding the unitary box B and obtain the desired point, that is, a point close to the analytic center of S; this is a key point when one works with a cutting plane methodology that uses, as test points, points close to the analytic center of the regions generated at each iteration.

However, when working with the nonlinear feasibility problem, we consider an implementation of the heuristic that relaxes the bisection subroutine and settles for the recovery of the feasibility of the iterate, i.e., we do not try to approach the right-hand side of the added inequality, but rather a relaxation of it. In this strategy it is possible, although unlikely, that the same inequality be violated in different iterations, thus preventing convergence. More research is needed to determine whether there are conditions that guarantee convergence. Aiming to understand the computational behavior of the modified heuristic, some numerical tests were carried out, with promising results.

5 NUMERICAL RESULTS

In this work, we have tried the heuristic on random problems, for the sake of readiness of implementation.

5.1 Generation of random problems

The random convex feasibility problems are generated as follows. We want to generate a region of the form

S = {y : ƒi(y) < 0; i = 1, ..., m},

where, for each i = 1, ..., m,

ƒi(y) = (1/2)yᵀAi y + biᵀy + ci,

and where Ai is symmetric positive-definite.

We start by choosing a point y0 ∈ {y : −e < y < e}, where we randomly obtain each entry according to a uniform distribution in the interval (−1, 1).

Now, for each quadratic function ƒi, after randomly generating a symmetric positive-definite matrix Ai, we generate a random direction vector d, each entry of which follows a uniform distribution in (−1, 1). Then we normalize d using the Ai-norm, that is, ui = d/‖d‖_{Ai}, with ‖d‖_{Ai} = (dᵀAi d)^{1/2}, so that ‖ui‖_{Ai} = 1.

Now, if we take bi = −Ai(y0 − ui), then the unconstrained minimizer of ƒi is y0 − ui, which lies at unit Ai-distance from y0, and

ƒi(y0) = ƒi(y0 − ui) + (1/2)uiᵀAi ui = ƒi(y0 − ui) + 1/2.

So, by randomly choosing α > 2 and setting ci = (1/2)(y0 − ui)ᵀAi(y0 − ui) − α/2, we obtain

ƒi(y0 − ui) = −α/2,

that is,

ƒi(y0) = (1 − α)/2 < 0,

so y0 is interior to each generated region.
5.2 Results of random problems

We performed four sets of randomly-generated problems. Each set was labeled according to the dimensions of its problems: 50 × 50, 50 × 100, 100 × 50 and 100 × 100, where the first number is n and the second is the number m of constraints. The results are shown in Tables 1, 2, 3 and 4, respectively, where we show first the results with the bisection routine and then the results without it. Only 5 test problems are shown for each test set; however, each test set had 30 problems, and it is worth mentioning that some of the tests were not successful, because the bisection routine reached its maximum number of iterations (100) and the test was aborted. Nevertheless, the conclusions can be clearly drawn based on these 5 test problems.

Table 1
Tests with and without the bisection routine for randomly-generated problems with size 50×50. The average running time was 2 min for the tests with bisection and 50s for those without bisection.

Table 2
Tests with and without the bisection routine for randomly-generated problems with size 50×100. The average running time was 9min for the tests with bisection and 3min for those without bisection.

Table 3
Tests with and without the bisection routine for randomly-generated problems with size 100×50. The average running time was 7min for the tests with bisection and 2min for those without bisection.

Table 4
Tests with and without the bisection routine for randomly-generated problems with size 100×100. The average running time was 24min for the tests with bisection and 8min for those without bisection.

In the following tables, FEA represents the number of times that ACRA was called. RED stands for the total number of reductions, while BIS stands for the total number of bisections. Also, NWT is the total number of iterations of Newton's method. Finally, after each test, we obtained the analytic center of the region using MATLAB's unconstrained minimization routine fminunc and compared it with the point obtained by CFA. The resulting relative error is shown in the last column.

In Table 1 (problems of size 50 × 50) it can be seen that problem 3 (with or without bisection) has a high relative error, with a difference of almost 40%; this also applies to problem 5 without bisection. On the other hand, in Table 2 (problems of size 50 × 100), only problem 2 with bisection shows a high relative error, over 20%. In Table 3 (problems of size 100 × 50), problems 2 and 4 (with and without bisection) showed the worst relative errors, around 40% and 20% respectively. Finally, in Table 4 (problems of size 100 × 100), problems 2, 3 and 4 (with bisection) also have high relative errors, over 20%. It is worth mentioning that, for all tables, the number of ACRA calls (FEA) and the number of calls of the reduction procedure (RED) are considerably low when compared to the number of calls of the bisection procedure (BIS). As a final comment, we observe that, comparing the number of Newton iterations (NWT) over all 20 test problems considered, the value obtained without bisection is approximately 25% of the value with bisection.

All problems were solved by computer codes implemented in MATLAB, running on a computer with an Intel Core 2 Duo - E7500, 2.93 GHz processor, with 3.9 GB of usable RAM.

6 CONCLUSIONS

We extended the results known from (5) for the linear case to smooth convex regions.

We also presented a heuristic (ACRA) able to recover the approximate analytic center when we perturb (through the addition of new constraints) the initial region. It is worth recalling that this is fundamental when we work inside a cutting plane methodology that uses the approximate analytic center as a testing point.

In the sequence, we presented a procedure (CFA) that solves an important problem of applied mathematics, the feasibility problem, using the heuristic ACRA. Finally, the numerical results obtained by solving the feasibility problem for randomly-generated problems show that the greatest number of iterations, due to the bisection routine, represents nearly ten times the number of calls to the routine ACRA when we are interested in obtaining an approximation of the analytic center.

It is important to underline that the number of calls to ACRA in the second case, i.e., without the bisection routine, is smaller than when using the bisection routine. On the other hand, in many problems the number of calls to Newton's method almost doubles in the second case. Finally, the number of calls to the reduction routine does not show meaningful variation between the two cases.

All the tests in this work were carried out using randomly generated problems, mainly because of their ease and readiness of use. Clearly, any improvement of CFA will depend on more clever ways to define the bisection routine. In the future, with such improvements, we hope to be able to test our heuristic on known library problems.

  • 1
    BASESCU VL & MITCHELL JE. 2008. An analytic center cutting plane approach for conic programming. Mathematics of Operations Research, 33: 529-551.
  • 2
    ENGAU A, ANJOS MF & BOMZE I. 2013. Constraint selection in a build-up interior-point cutting-plane method for solving relaxations of the stable-set problem. Mathematical Methods of Operations Research, 78: 35-59.
  • 3
    ENGAU A, ANJOS MF & VANNELLI A. 2010. On interior-point warmstarts for linear and combinatorial optimization. SIAM Journal on Optimization, 20(4): 1828-1861.
  • 4
    ENGAU A. 2012. Recent Progress in Interior-Point Methods: Cutting-Plane Algorithms and Warm Starts. Handbook on Semidefinite, Conic and Polynomial Optimization. International Series in Operations Research & Management Science, 166: 471-498.
  • 5
    FEIJOO B, SANCHEZ A & GONZAGA CC. 1997. Maintaining closeness to the analytic center of a polytope by perturbing added hyperplanes. Applied Mathematics & Optimization, 35: 139-144.
  • 6
    GOFFIN JL, LUO Z-Q & YE Y. 1996. Complexity Analysis of an Interior Cutting Plane Method for Convex Feasibility Problems. SIAM Journal on Optimization, 6: 638-652.
  • 7
    GONDZIO J. 2012. Interior Point Methods 25 years later. European Journal of Operational Research, 218: 587-601.
  • 8
    GONDZIO J, GONZÁLEZ BREVIS P & MUNARI P. 2013. New developments in the primal-dual column generation technique. European Journal of Operational Research, 224: 41-51.
  • 9
    HORN R & JOHNSON CR. 1985. Matrix Analysis. Cambridge University Press.
  • 10
    HUARD P & LIEU T. 1966. La Méthode des Centres dans un Espace Topologique. Numerische Mathematik, 8: 56-67.
  • 11
    KARMARKAR N. 1984. A new polynomial-time algorithm for linear programming. Combinatorica, 4(2): 373-395.
  • 12
    LIU YZ & CHEN Y. 2010. A Full-Newton Step Infeasible Interior-Point Algorithm for Linear Programming Based on a Special Self-Regular Proximity. The Ninth International Symposium on Operations Research and Its Applications (ISORA'10), Chengdu-Jiuzhaigou, China, pp. 19-23.
  • 13
    MITCHELL JE. 2003. Polynomial interior point cutting plane method. Technical Report, Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY, 12180.
  • 14
    MITCHELL JE & RAMASWAMY S. 2000. A Long-Step, Cutting Plane Algorithm for Linear and Convex Programming. Annals of Operations Research, 99: 95-122.
  • 15
    NESTEROV YE & NEMIROVSKY AS. 1989. Self-Concordant Functions and Polynomial Time Methods in Convex Programming. SIAM Studies in Applied Mathematics.
  • 16
    ROOS C. 2006. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization, 16(4): 1110-1136.
  • 17
    SKAJAA A, ANDERSEN ED & YE Y. 2013. Warm starting the homogeneous and self-dual interior point method for linear and conic quadratic problems. Mathematical Programming Computation, 5: 1-25.
  • 18
    YE Y. 1992. A Potential Reduction Algorithm Allowing Column Generation. SIAM Journal on Optimization, 2: 2-7.

Publication Dates

  • Publication in this collection
    Jan-Apr 2015

History

  • Received
    15 Oct 2013
  • Accepted
    18 June 2014