

A new primal-dual path-following method for convex quadratic programming

Mohamed Achache

Department of Mathematics, Faculty of Sciences, Ferhat Abbas University, Sétif 19000, Algeria. E-mail: achache_m@yahoo.fr

ABSTRACT

In this paper, we describe a new primal-dual path-following method to solve a convex quadratic program (QP). The derived algorithm is based on new techniques for finding a new class of search directions similar to the ones developed in a recent paper by Darvay for linear programs. We prove that the short-update algorithm finds an ε-solution of (QP) in polynomial time.

Mathematical subject classification: 90C20, 90C51, 90C60.

Keywords: convex quadratic programming, interior point methods, primal-dual path-following methods, convergence of algorithms.

1 Introduction

In this paper we consider the following convex quadratic program in its classical standard form, called the primal,

    (QP)    min { cᵀx + (1/2) xᵀQx : Ax = b, x ≥ 0 },

and its dual form

    (QD)    max { bᵀy − (1/2) xᵀQx : Aᵀy + z − Qx = c, x ≥ 0, z ≥ 0 },

where Q is a given symmetric (n × n) matrix, c ∈ ℝⁿ, b ∈ ℝᵐ and A is a given (m × n) matrix.

Recently, other important formulations have been used to study convex quadratic programs, such as second-order cone optimization (SOCP), which is currently among the most widely used frameworks for convex quadratic programs. See, e.g., [5].

Quadratic programming is a special case of nonlinear programming and has a wide range of applications. A classical method to solve a convex (QP) is Wolfe's algorithm, which can be considered as a direct extension of the well-known simplex method; in the worst case its complexity is exponential. Therefore, there is a growing interest in developing efficient, robust and polynomial-time algorithms to solve the problem. Primal-dual path-following methods are among the most efficient interior point methods (IPM) for solving linear and quadratic programming, complementarity problems (CP) and general conic optimization problems (COP). These algorithms are, on the one hand, of Newton type and thus lead to efficient practical implementations and, on the other hand, they have polynomial time complexity. Recently, many interior point methods have been proposed for solving (QP) problems, either in the standard format or via (SOCP). These methods are based on new search directions with good theoretical properties, such as polynomial time complexity, and efficient practical implementations. See, e.g., [2, 4, 5, 6, 7]. In this paper, we extend a recent approach developed by Darvay [6] for linear programs to the (QP) case. This approach is based on the introduction of a monotonic function φ of one real variable: the equations x_i z_i = µ in the nonlinear system which defines the central path (see Wright [8]) are replaced by

    φ(x_i z_i / µ) = φ(1),   i = 1,...,n.

Of course, if φ is the identity function we recover the classical path-following method. For φ(t) = √t we define a new class of search directions and hence a new primal-dual path-following method for approximating the central path. We prove that the related short-update algorithm converges locally and quadratically, and that it finds an ε-solution of (QP) in polynomial time. We mention also some related and interesting IPM approaches studied recently by Bai et al. [4]. They have also defined a new class of search directions for linear optimization by using the so-called kernel function, that is, any univariate strictly convex function k(t) that is minimal at t = 1 and such that k(1) = 0. One can also exhibit the corresponding kernel function, which defines exactly the same search direction and the same complexity (see the short derivation below). Nevertheless, for φ(t) = t it corresponds to the kernel function k(t) = (t² − 1)/2 − log t of the logarithmic barrier function, which gives the classical search direction. This new search direction is also analyzed by the authors (Peng, Roos and Terlaky) in the more general context of self-regular proximity measures for linear and second-order cone optimization problems. See, e.g., [7].
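To make this correspondence concrete, here is a short illustrative derivation (ours; the paper's own display is not reproduced here). In the kernel-function approach of [4, 7], the scaled search direction satisfies dx + dz = −k′(v) componentwise, where v = √(xz/µ) as in Section 3. Matching this with the direction obtained below for φ(t) = √t, namely dx + dz = 2(e − v), suggests the kernel

    k′(t) = 2(t − 1),   i.e.   k(t) = (t − 1)²,

which is indeed strictly convex, minimal at t = 1 and satisfies k(1) = 0; for φ(t) = t the same matching gives k′(t) = t − 1/t, i.e. the logarithmic barrier kernel k(t) = (t² − 1)/2 − log t mentioned above.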

The paper is organized as follows. In the next section, the problem is presented. Section 3 deals with the new search directions. The description of the algorithm and its convergence analysis are presented in section 4. Section 5 ends the paper with a conclusion.

Our notation is the classical one. In particular, ℝⁿ denotes the space of real n-dimensional vectors. Given u, v ∈ ℝⁿ, uᵀv is their inner product and ||u|| is the Euclidean norm, whereas ||u||∞ is the l∞-norm. Given a vector u in ℝⁿ, U = diag(u) is the n × n diagonal matrix with U_ii = u_i for all i.

2 The statement of the problem

Throughout the paper, the following assumptions hold.

• Positive semidefiniteness (PSD). The matrix Q is symmetric and positive semidefinite.

• Interior point condition (IPC). There exists (x⁰, y⁰, z⁰) such that Ax⁰ = b, Aᵀy⁰ + z⁰ − Qx⁰ = c, x⁰ > 0, z⁰ > 0.

• The full rank condition (FR). The matrix A has full row rank m (m ≤ n).

The optimal solutions of the primal and the dual problems are the solutions of the following nonlinear system of (2n + m) equations:

    Ax = b,  x ≥ 0,
    Aᵀy + z − Qx = c,  z ≥ 0,
    x_i z_i = 0,  i = 1,...,n.                    (3)

The first equation of the system (3) represents the primal feasibility, the second one the dual feasibility and the last one the complementarity condition.

The classical path-following method consists in introducing a positive parameter µ. One considers the nonlinear system parameterized by µ:

    Ax = b,  x > 0,
    Aᵀy + z − Qx = c,  z > 0,
    x_i z_i = µ,  i = 1,...,n.                    (4)

It is shown, under our assumptions, that system (4) has a unique solution (x(µ), y(µ), z(µ)) for each µ > 0. The path µ ↦ (x(µ), y(µ), z(µ)) is called the central path (see for instance Wright [8]). It is known that, as µ → 0, (x(µ), y(µ), z(µ)) converges to a solution of (3). Applying Newton's method to the system (4), we obtain the classical primal-dual path-following algorithm.
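For reference, linearizing (4) at a strictly feasible triple (x, y, z) gives the classical Newton system (a standard sketch, with X = diag(x) and Z = diag(z) as in Section 1):

    A Δx = 0,
    Aᵀ Δy + Δz − Q Δx = 0,
    Z Δx + X Δz = µe − XZe.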

In the next section, we shall present a new method for approximating the central path.

3 New search directions

In the proposed method, we replace the n equations x_i z_i = µ in (4) by the n equivalent equations φ(x_i z_i / µ) = φ(1), where φ is a real-valued function on [0, +∞), differentiable on (0, +∞), and such that φ′(t) > 0 for all t > 0.

Given µ > 0 and (x, y, z) such that Aᵀy + z − Qx = c, Ax = b, x > 0 and z > 0, but not such that φ(x_i z_i / µ) = φ(1) for all i, a new triple (x + Δx, y + Δy, z + Δz) is obtained by applying Newton's method to the resulting nonlinear system. One obtains

    A Δx = 0,
    Aᵀ Δy + Δz − Q Δx = 0,
    z_i Δx_i + x_i Δz_i = µ (φ(1) − φ(x_i z_i / µ)) / φ′(x_i z_i / µ),  i = 1,...,n.        (5)

For commodity, let us introduce the vectors v and p of ℝⁿ defined by

    v_i = √(x_i z_i / µ),   p_i = (φ(1) − φ(v_i²)) / (v_i φ′(v_i²)),   i = 1,...,n.

Next let us introduce the scaled direction vectors

    dx_i = v_i Δx_i / x_i,   dz_i = v_i Δz_i / z_i,   i = 1,...,n,

and the matrices

    Ā = (1/µ) A X V⁻¹,   Q̄ = (1/µ) V⁻¹ X Q X V⁻¹,   where V = diag(v);

then the system reduces to the system

    Ā dx = 0,
    Āᵀ Δy + dz − Q̄ dx = 0,
    dx + dz = p,

which leads to the system

    (Q̄ + I) dx − Āᵀ Δy = p,
    Ā dx = 0,

which is a nonsingular linear system in the unknowns (dx, Δy), since Q̄ is positive semidefinite and Ā has full rank. Then dz is computed by the formula

dz = p − dx.

Remark 3.1. If we adopt φ(t) = t, then we recover the classical primal-dual path-following method.

In this paper, we shall consider φ(t) = √t. Then φ′(t) = 1/(2√t). It follows that

    p_i = (1 − v_i) / (v_i · (1/(2 v_i))) = 2 (1 − v_i),   i = 1,...,n,

or

    p = 2 (e − v),

where e = (1,...,1)ᵀ, and the system (5) becomes

    A Δx = 0,
    Aᵀ Δy + Δz − Q Δx = 0,
    z_i Δx_i + x_i Δz_i = 2 (√(µ x_i z_i) − x_i z_i),   i = 1,...,n.

In addition, with the notation in (6), we have

    x_i Δz_i + z_i Δx_i = µ v_i (dx_i + dz_i),   i = 1,...,n,

and

    Δx_i Δz_i = µ dx_i dz_i,   i = 1,...,n.
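As a concrete illustration, the following minimal numpy sketch (ours, not the author's code; the helper name newton_step is hypothetical) assembles and solves the unscaled Newton system above for φ(t) = √t:

```python
import numpy as np

def newton_step(A, Q, x, y, z, mu):
    """Solve the Newton system for phi(t) = sqrt(t):
           A dx                = 0,
          -Q dx + A^T dy + dz  = 0,
           Z dx + X dz         = 2 (sqrt(mu x z) - x z),
       and return (dx, dy, dz).  The argument y is unused because the
       iterates are assumed primal-dual feasible, so both residuals vanish."""
    m, n = A.shape
    rhs = np.concatenate([np.zeros(m),
                          np.zeros(n),
                          2.0 * (np.sqrt(mu * x * z) - x * z)])
    K = np.block([
        [A,          np.zeros((m, m)), np.zeros((m, n))],
        [-Q,         A.T,              np.eye(n)],
        [np.diag(z), np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]
```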

4 Description of the algorithm and its convergence analysis

Now we define a proximity measure for the central path as

    δ(XZe, µ) := ||p|| / 2 = ||e − v||,   where v = √(XZe/µ).

In addition, we define the vector q by

    q = dx − dz.

Then

    dx_i dz_i = (p_i² − q_i²) / 4,   i = 1,...,n,

and

    ||q|| ≤ ||p||.

This last inequality follows from

    ||q||² = ||p||² − 4 dxᵀdz   and   dxᵀdz = dxᵀ Q̄ dx ≥ 0,

since dz = Q̄ dx − Āᵀ Δy, with Ā dx = 0 and Q̄ a positive semidefinite matrix.
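In code, the proximity measure is a short helper (again a hypothetical name, assuming numpy):

```python
import numpy as np

def proximity(x, z, mu):
    """delta(XZe, mu) = ||p||/2 = ||e - v|| with v = sqrt(x z / mu), p = 2(e - v)."""
    v = np.sqrt(x * z / mu)
    return np.linalg.norm(1.0 - v)
```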

4.1 The algorithm

Let ε > 0 be the given tolerance, 0 < θ < 1 the update parameter (default θ = 1/(2√n)), and 0 < τ < 1 the proximity parameter (default τ = 1/2). Assume that the triple (x⁰, y⁰, z⁰) satisfies the IPC, and let µ_0 = (x⁰)ᵀz⁰ / n. In addition, we assume that δ(X⁰Z⁰e, µ_0) < τ.

begin

x := x⁰; y := y⁰; z := z⁰;

µ := µ_0;

while xᵀz ≥ ε do

begin

µ := (1 − θ)µ;

compute (Δx, Δy, Δz) from (8)

x := x + Δx;

y := y + Δy;

z := z + Δz;

end

end
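Putting the pieces together, a sketch of the loop above (reusing the hypothetical newton_step and proximity helpers sketched earlier; it assumes a strictly feasible starting triple satisfying the IPC) could look as follows:

```python
import numpy as np

def short_update_qp(A, Q, x0, y0, z0, eps=1e-8):
    """Sketch of the short-update loop above; assumes (x0, y0, z0) is
       strictly feasible (IPC) and that newton_step and proximity are in scope."""
    n = x0.size
    theta = 1.0 / (2.0 * np.sqrt(n))       # default update parameter
    x, y, z = x0.copy(), y0.copy(), z0.copy()
    mu = x.dot(z) / n                       # mu_0 = (x^0)^T z^0 / n
    assert proximity(x, z, mu) < 0.5        # default proximity parameter tau = 1/2
    while x.dot(z) >= eps:
        mu = (1.0 - theta) * mu
        dx, dy, dz = newton_step(A, Q, x, y, z, mu)
        x, y, z = x + dx, y + dy, z + dz    # full Newton step
    return x, y, z
```

In exact arithmetic Lemma 4.1 below guarantees that the full Newton steps keep the iterates strictly feasible, so no step-size control appears in this short-update variant.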

In the next subsection we prove that the algorithm converges locally and quadratically, and that it runs in polynomial time.

4.2 Complexity analysis

In the following lemma, we state a condition which ensures the feasibility of the full Newton step. Let x⁺ = x + Δx and z⁺ = z + Δz be the new iterate after a full Newton step.

Lemma 4.1. Let δ = δ(XZe, µ) < 1. Then the full Newton step is strictly feasible, hence

x⁺ > 0 and z⁺ > 0.

Proof. For each 0 ≤ α ≤ 1, let x(α) = x + αΔx and z(α) = z + αΔz. Hence

x_i(α) z_i(α) = x_i z_i + α (x_i Δz_i + z_i Δx_i) + α² Δx_i Δz_i for all i = 1,...,n.

Now, in view of (9) and (10) we have

x_i(α) z_i(α) = µ (v_i² + α v_i (dx_i + dz_i) + α² dx_i dz_i) for all i = 1,...,n.

In addition from (7), we have

and thus

Thereby

and

Thus the inequality

x_i(α) z_i(α) > 0 holds if

Using (11) and (12) we get

Hence,

x_i(α) z_i(α) > 0 for each 0 ≤ α ≤ 1. Since x_i(α) and z_i(α) are linear functions of α, they do not change sign on the interval [0,1], and for α = 0 we have x_i(0) = x_i > 0 and z_i(0) = z_i > 0. This leads to x_i(1) = x⁺_i > 0 and z_i(1) = z⁺_i > 0 for all i.

In the next lemma we state that, under the condition δ(XZe, µ) < 1, the full Newton step is quadratically convergent.

Lemma 4.2. Let x⁺ = x + Δx and z⁺ = z + Δz be the iterate obtained after a full Newton step, and suppose δ = δ(XZe, µ) < 1. Then

δ(X⁺Z⁺e, µ) < δ²,

which means quadratic convergence of the full Newton step.

Proof. Setting α = 1 in (13), we get

Hence

and thereby

On the other hand, we have

and using (15) we get

Using (11), (12) and (14) we get

Hence

This proves the Lemma.

The next Lemma gives an upper bound for the duality gap after a full Newton step.

Lemma 4.3. Let x⁺ = x + Δx and z⁺ = z + Δz. Then the duality gap is

    (x⁺)ᵀz⁺ = µ (n − ||q||² / 4),

hence

(x⁺)ᵀz⁺ ≤ µn.

Proof. We have seen from the previous lemmas that

    x⁺_i z⁺_i = µ (v_i² + v_i (dx_i + dz_i) + dx_i dz_i) = µ (1 − q_i² / 4) for all i.

Hence

    (x⁺)ᵀz⁺ = ∑_{i=1}^{n} x⁺_i z⁺_i = µ (n − ||q||² / 4),

but since

    ||q||² ≥ 0,

it follows that

    (x⁺)ᵀz⁺ ≤ µn.

This completes the proof.

The next Lemma discusses the influence on the proximity measure of the Newton process followed by a step along the central path (i.e., an update of the parameter µ).

Lemma 4.4. Let δ = δ(XZe, µ) < 1 and µ⁺ = (1 − θ)µ, where 0 < θ < 1. If, moreover, δ < 1/2, θ = 1/(2√n) and n ≥ 4, then δ(X⁺Z⁺e, µ⁺) < 1/2.

Proof. We have

Hence by using (13) and (14) we get

Now, in view of (10) and (11) we deduce that

Finally, we get

This completes the proof.

It results from Lemma 4.4 that, for the default θ = 1/(2√n), the conditions (x, z) > 0 and δ(XZe, µ) < 1/2 are maintained throughout the algorithm. Hence the algorithm is well defined.

In the next Lemma we compute an upper bound on the total number of iterations required by the algorithm.

Lemma 4.5. Assume that x⁰ and z⁰ are strictly feasible, µ_0 = (x⁰)ᵀz⁰ / n and δ(X⁰Z⁰e, µ_0) < 1/2. Moreover, let xᵏ and zᵏ be the vectors obtained after k iterations. Then the inequality (xᵏ)ᵀzᵏ ≤ ε is satisfied for

    k ≥ (1/θ) log((x⁰)ᵀz⁰ / ε).

Proof. We have after k iterations that µ_k = (1 − θ)ᵏ µ_0. Using Lemma 4.3, it follows that

    (xᵏ)ᵀzᵏ ≤ nµ_k = (1 − θ)ᵏ (x⁰)ᵀz⁰.

Hence the inequality (xᵏ)ᵀzᵏ ≤ ε holds if (1 − θ)ᵏ (x⁰)ᵀz⁰ ≤ ε. By taking logarithms of both sides, we obtain

    k log(1 − θ) + log((x⁰)ᵀz⁰) ≤ log ε.

The estimate for the log function,

    −log(1 − θ) ≥ θ for all 0 ≤ θ < 1,

therefore shows that the inequality (xᵏ)ᵀzᵏ ≤ ε is fulfilled if

    k ≥ (1/θ) log((x⁰)ᵀz⁰ / ε).

For the default θ = 1/(2√n), we obtain the following theorem.

Theorem 4.6. Suppose that the pair (x⁰, z⁰) is strictly feasible and let µ_0 = (x⁰)ᵀz⁰ / n. If θ = 1/(2√n), then Algorithm 4.1 requires at most

    2√n log((x⁰)ᵀz⁰ / ε)

iterations. For the resulting vectors we have (xᵏ)ᵀzᵏ ≤ ε.
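As a rough illustration of this bound (our numbers, not taken from the paper): for n = 100 with x⁰ = z⁰ = e, so that (x⁰)ᵀz⁰ = 100, and ε = 10⁻⁶, the theorem guarantees termination within

    2√100 · log(100 / 10⁻⁶) = 20 log 10⁸ ≈ 369

iterations, where log denotes the natural logarithm, as in the proof of Lemma 4.5.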

5 Conclusion

In this paper, we have described a new primal-dual path-following method to solve convex quadratic programs. We have proved that the short-update algorithm can find an ε-solution of (QP) in

    O(√n log((x⁰)ᵀz⁰ / ε))

iterations, where x⁰ and z⁰ are the initial positive starting points. The initialization of the algorithm can be achieved, as in linear optimization, by using self-dual embedding techniques. Hence, if x⁰ = z⁰ = e, we obtain the best well-known short-step complexity

    O(√n log(n / ε)).
Received: 14/VI/05. Accepted: 26/X/05.

#636/05.

  • [1] I. Adler and R.D.C. Monteiro, Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming, 44 (1989), 43-66.
  • [2] F. Alizadeh and D. Goldfarb, Second-order cone programming. Mathematical Programming, 95 (2003), 3-15.
  • [3] K.M. Anstreicher, D. den Hertog, C. Roos and T. Terlaky, A long step barrier method for convex quadratic programming. Report 90-53, Delft University of Technology (1990).
  • [4] Y.Q. Bai, M. El Ghami and C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM Journal on Optimization, 13(3) (2003), 766-782.
  • [5] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications. Volume 2 of MPS-SIAM Series on Optimization, SIAM, Philadelphia, USA (2001).
  • [6] Zs. Darvay, A new algorithm for solving self-dual linear programming problems. Studia Universitatis Babes-Bolyai, Series Informatica, 47(1) (2002), 15-26.
  • [7] J. Peng, C. Roos and T. Terlaky, Primal-dual interior point methods for second-order conic optimization based on self-regular proximities. SIAM Journal on Optimization, 13(1) (2002), 179-203.
  • [8] S.J. Wright, Primal-Dual Interior Point Methods. SIAM, Philadelphia, USA (1997).
