
## Computational & Applied Mathematics

*Online version* ISSN 1807-0302

### Comput. Appl. Math. vol.30 no.3 São Carlos 2011

#### http://dx.doi.org/10.1590/S1807-03022011000300005

**A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function**

**Liang Fang^{I,*}; Zengzhe Feng^{II}**

^{I}College of Mathematics and System Science, Taishan University, 271021, Tai'an, P.R. China. E-mail: fangliang3@163.com

^{II}College of Information Engineering, Taishan Medical University, 271016, Tai'an, P.R. China. E-mail: fengzengzhe@163.com

**ABSTRACT**

A new smoothing function of the well known Fischer-Burmeister function is given. Based on this new function, a smoothing Newton-type method is proposed for solving second-order cone programming. At each iteration, the proposed algorithm solves only one system of linear equations and performs only one line search. This algorithm can start from an arbitrary point and it is Q-quadratically convergent under a mild assumption. Numerical results demonstrate the effectiveness of the algorithm.

**Mathematical subject classification: **90C25, 90C30, 90C51, 65K05, 65Y20.

**Key words: **second-order cone programming, smoothing method, interior-point method, Q-quadratic convergence, central path, strong semismoothness.

**1 Introduction**

The second-order cone (SOC) in ℝ^{n} (*n* ≥ 2), also called the Lorentz cone or the ice-cream cone, is defined as

𝒦_{n} = {(*x*_{1}; *x*_{2}) | *x*_{1} ∈ ℝ, *x*_{2} ∈ ℝ^{n-1} and *x*_{1} ≥ ||*x*_{2}||},

here and below, ||·|| refers to the standard Euclidean norm, *n* is the dimension of 𝒦_{n}, and for convenience we write *x* = (*x*_{1}; *x*_{2}) instead of (*x*_{1}, *x*_{2}^{T})^{T}. It is easy to verify that the SOC 𝒦_{n} is self-dual, that is,

𝒦_{n} = 𝒦_{n}^{*} = {*s* ∈ ℝ^{n} : *s^{T}x* ≥ 0, for all *x* ∈ 𝒦_{n}}.

We may often drop the subscripts if the dimension is evident from the context.

For any *x* = (*x*_{1}; *x*_{2}), *y* = (*y*_{1}; *y*_{2}) ∈ ℝ × ℝ^{n-1}, their Jordan product is defined as [5]

*x* º *y* = (*x^{T}y*; *x*_{1}*y*_{2} + *y*_{1}*x*_{2}).
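For concreteness, the Jordan product can be sketched numerically. The following is an illustrative NumPy helper (the function name `jordan_product` is ours, not from the paper):

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product x º y = (x^T y; x1*y2 + y1*x2) for x, y in R x R^{n-1}."""
    x1, x2 = x[0], x[1:]
    y1, y2 = y[0], y[1:]
    return np.concatenate(([x @ y], x1 * y2 + y1 * x2))

# e = (1; 0) is the identity element of this algebra: e º x = x.
e = np.array([1.0, 0.0, 0.0])
x = np.array([3.0, 1.0, 2.0])
assert np.allclose(jordan_product(e, x), x)
```

Note that, unlike the matrix product, the Jordan product is commutative but not associative in general.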

Second-order cone programming (SOCP) problems are to minimize a linear function over the intersection of an affine space with the Cartesian product of a finite number of SOCs. The study of SOCP is of great importance, as it covers linear programming, convex quadratic programming and quadratically constrained convex quadratic optimization, as well as many other problems from a wide range of applications in fields such as engineering, control, optimal design, machine learning, robust optimization and combinatorial optimization [13, 24, 4, 23, 29, 22, 18, 10, 11].

Without loss of generality, we consider the SOCP problem with a single SOC

and its dual problem

where *c* ∈ ℝ^{n}, *A* ∈ ℝ^{m×n} and *b* ∈ ℝ^{m}, together with an inner product ‹·, ·›, are given data; *x* ∈ 𝒦 is the variable and the set 𝒦 is an SOC of dimension *n*. Note that our analysis can easily be extended to the general case with a Cartesian product of SOCs.

We call *x* ∈ 𝒦 primal feasible if *Ax = b*. Similarly, (*y, s*) ∈ ℝ^{m} × 𝒦 is called dual feasible if *A^{T}y + s = c*. For a given primal-dual feasible point (*x, y, s*) ∈ 𝒦 × ℝ^{m} × 𝒦, ‹*x, s*› is called the duality gap, due to the well-known weak duality theorem, i.e., ‹*x, s*› ≥ 0, which follows from

‹*c, x*› - ‹*b, y*› = ‹*A^{T}y + s, x*› - ‹*Ax, y*› = ‹*x, s*› ≥ 0.

Let us note that ‹*x, s*› = 0 is sufficient for optimality of the primal and dual feasible (*x, y, s*) ∈ 𝒦 × ℝ^{m} × 𝒦.

Throughout the paper, we make the following Assumption:

**Assumption 2.1. ** Both (PSOCP) and its dual (DSOCP) are strictly feasible.

It is well known that under the Assumption 2.1, the SOCP is equivalent to its *optimality conditions*:

where ‹*x, s*› = 0 is usually referred to as the complementarity condition.

There is an extensive literature focusing on interior-point methods (IPMs) for (PSOCP) and (DSOCP) (see, e.g., [1, 17, 6, 23, 16, 11] and references therein). IPMs typically deal with the following perturbation of the optimality conditions:

where µ > 0 and *e* = (1; 0) ∈ ℝ × ℝ^{n-1} is the identity element. This set of conditions is called the *central path conditions*, as they define a trajectory approaching the solution set as µ ↓ 0. Conventional IPMs usually apply a Newton-type method to the equations in (4) with a suitable line search, dealing with the constraints *x* ∈ 𝒦 and *s* ∈ 𝒦 explicitly.

Recently, smoothing Newton methods [2, 14, 25, 19, 20, 7, 15, 8, 12] have attracted a lot of attention, partially due to their superior numerical performance. However, some algorithms [2, 19] depend on the assumptions of uniform nonsingularity and strict complementarity. Without the uniform nonsingularity assumption, the algorithm given in [27] usually needs to solve two linear systems of equations and to perform at least two line searches at each iteration. Later, Qi, Sun and Zhou [20] proposed a class of new smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities under a nonsingularity assumption. The method in [20] was shown to be locally superlinearly/quadratically convergent without strict complementarity. Moreover, the smoothing methods available are mostly designed for complementarity problems [2, 3, 19, 20, 7, 8]; there is little work on smoothing methods for the SOCP.

Under certain assumptions, IPMs and smoothing methods are globally convergent in the sense that every limit point of the generated sequence is a solution of optimality conditions (3). However, with the exception of the infeasible IPMs [2, 21, 20], they need a feasible starting point.

Fukushima, Luo and Tseng [7] studied Lipschitzian and differential properties of several typical smoothing functions for second-order cone complementarity problems. They derived computable formulas for their Jacobians, which provide a theoretical foundation for constructing corresponding non-interior-point methods. The purpose of this paper is precisely to present such a non-interior-point method for problem (PSOCP), which employs a new smoothing function to characterize the central path conditions. We stress the demonstration of the global convergence and locally quadratic convergence of the proposed algorithm.

The new smoothing algorithm to be discussed here is based on perturbed optimality conditions (4) and the main difference from IPMs is that we reformulate (4) as a smoothing linear system of equations. It is shown that our algorithm has the following good properties:

(i) The algorithm can start from an arbitrary initial point;

(ii) The algorithm needs to solve only one linear system of equations and perform only one line search at each iteration;

(iii) The algorithm is globally and locally Q-quadratically convergent under mild assumption, without strict complementarity. The result is stronger than the corresponding results for IPMs.

The following notations and terminologies are used throughout the paper. We use "," for adjoining vectors and matrices in a row and ";" for adjoining them in a column. ℝ^{n} (*n* ≥ 1) denotes the space of *n*-dimensional real column vectors, and ℝ^{n} × ℝ^{m} is identified with ℝ^{n+m}. Denote *x*^{2} = *x* º *x*. For any *x, y* ∈ ℝ^{n}, we write *x* ⪰ *y* or *x* ⪯ *y* (respectively, *x* ≻ *y* or *x* ≺ *y*) if *x* - *y* ∈ 𝒦 or *y* - *x* ∈ 𝒦 (respectively, *x* - *y* ∈ int 𝒦 or *y* - *x* ∈ int 𝒦, where int 𝒦 denotes the interior of 𝒦).

ℝ_{+} (ℝ_{++}) denotes the set of nonnegative (positive) reals. For *x* ∈ ℝ^{n} with eigenvalues λ_{1} and λ_{2}, we can define the Frobenius norm

||*x*||_{F} = √(λ_{1}^{2} + λ_{2}^{2}).

Since both eigenvalues of *e* are equal to one, ||*e*||_{F} = √2. For any *x, y* ∈ ℝ^{n}, the Euclidean inner product and norm are denoted by ‹*x, y*› = *x^{T}y* and ||*x*|| = √(*x^{T}x*), respectively.

The paper is organized as follows. In Section 2, we give the equivalent formulation of the perturbed optimality conditions and some preliminaries. A smoothing function associated with the SOC and its properties are given in Section 3. In Section 4, we describe our algorithm. The convergence of the new algorithm is analyzed in Section 5. Numerical results are shown in Section 6.

**2 Preliminaries**

For any vector *x* = (*x*_{1}; *x*_{2}) ∈ ℝ × ℝ^{n-1}, we define its spectral decomposition associated with the SOC as

*x* = λ_{1}*u*_{1} + λ_{2}*u*_{2},    (5)

where the spectral values λ_{i} and the associated spectral vectors *u_{i}* of *x* are given by

λ_{i} = *x*_{1} + (-1)^{i}||*x*_{2}||,    (6)

*u_{i}* = (1; (-1)^{i}ω)/2,    (7)

for *i* = 1, 2, where ω = *x*_{2}/||*x*_{2}|| if *x*_{2} ≠ 0, and otherwise ω is any vector in ℝ^{n-1} such that ||ω|| = 1. If *x*_{2} ≠ 0, then the decomposition (5) is unique. Some interesting properties of λ_{1}, λ_{2} and *u*_{1}, *u*_{2} are summarized below.

**Property 2.1.** For any *x* = (*x*_{1}; *x*_{2}) ∈ ℝ × ℝ^{n-1}, the spectral values λ_{1}, λ_{2} and spectral vectors *u*_{1}, *u*_{2} as given by (6) and (7) have the following properties:

(i) *u*_{1} + *u*_{2} = *e*.

(ii) *u*_{1} and *u*_{2} are idempotent under the Jordan product, i.e., *u_{i}* º *u_{i}* = *u_{i}*, *i* = 1, 2.

(iii) *u*_{1} and *u*_{2} are orthogonal under the Jordan product, i.e., *u*_{1} º *u*_{2} = 0, and have length √2/2.

(iv) λ_{1}, λ_{2} are nonnegative (respectively, positive) if and only if *x* ∈ 𝒦 (respectively, *x* ∈ int 𝒦).
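The spectral decomposition and Property 2.1 can be checked numerically. This is an illustrative sketch with our own naming (when *x*_{2} = 0 we pick one arbitrary unit vector for ω, as the decomposition allows):

```python
import numpy as np

def spectral_decomposition(x):
    """Return (lam1, lam2, u1, u2) with x = lam1*u1 + lam2*u2, where
    lam_i = x1 + (-1)^i * ||x2|| and u_i = (1; (-1)^i * w)/2, ||w|| = 1."""
    x1, x2 = x[0], x[1:]
    r = np.linalg.norm(x2)
    if r > 0:
        w = x2 / r
    else:                         # any unit vector works when x2 = 0
        w = np.zeros_like(x2)
        w[0] = 1.0
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return x1 - r, x1 + r, u1, u2

x = np.array([5.0, 3.0, 4.0])                    # ||x2|| = 5, so lam = 0, 10
lam1, lam2, u1, u2 = spectral_decomposition(x)
assert np.allclose(lam1 * u1 + lam2 * u2, x)     # decomposition (5)
assert np.allclose(u1 + u2, [1.0, 0.0, 0.0])     # Property 2.1 (i)
```

Here λ_{1} = 0 shows that this *x* lies on the boundary of the cone, consistent with Property 2.1 (iv).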

Given an element *x* = (*x*_{1}; *x*_{2}) ∈ ℝ^{n}, we define the arrow-shaped matrix

*L_{x}* = (*x*_{1}, *x*_{2}^{T}; *x*_{2}, *x*_{1}*I*),

where *I* represents the (*n* - 1) × (*n* - 1) identity matrix. It is easy to verify that *x* º *s* = *L_{x}s* = *L_{s}x* = *L_{x}L_{s}e* for any *s* ∈ ℝ^{n}. Moreover, *L_{x}* is symmetric positive definite if and only if *x* ∈ int 𝒦, i.e., *x* ≻ 0.
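A short numerical sketch of the arrow-shaped matrix and the two facts just stated (illustrative code; `arrow_matrix` is our own name):

```python
import numpy as np

def arrow_matrix(x):
    """Arrow-shaped matrix L_x = (x1, x2^T; x2, x1*I), so that L_x s = x º s."""
    n = len(x)
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

x = np.array([3.0, 1.0, 0.5])            # x1 > ||x2||, so x in int K
s = np.array([1.0, -2.0, 0.0])
# L_x s equals the Jordan product (x^T s; x1*s2 + s1*x2):
assert np.allclose(arrow_matrix(x) @ s,
                   np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:])))
# x in int K  <=>  L_x is symmetric positive definite:
assert np.all(np.linalg.eigvalsh(arrow_matrix(x)) > 0)
```

The eigenvalues of *L_{x}* are *x*_{1} ± ||*x*_{2}|| and *x*_{1} (with multiplicity *n* - 2), which makes the positive-definiteness criterion transparent.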

**3 A smoothing function associated with the SOC and its properties**

First, let us introduce a smoothing function. In [7], it has been shown that the vector-valued Fischer-Burmeister function ϕ_{FB}(*x, s*) : ℝ^{n} × ℝ^{n} → ℝ^{n}, defined by

ϕ_{FB}(*x, s*) = *x* + *s* - (*x*^{2} + *s*^{2})^{1/2},    (8)

satisfies the following important property:

ϕ_{FB}(*x, s*) = 0 ⟺ *x* ∈ 𝒦, *s* ∈ 𝒦, *x* º *s* = 0.    (9)

The Fischer-Burmeister function has many interesting properties. However, it is typically nonsmooth, because it is not differentiable at (0; 0) ∈ ℝ × ℝ^{n-1}, which limits its practical applications. Recently, some smoothing methods have been presented, such as the method using the Chen-Harker-Kanzow-Smale smoothing function (see [9, 28] and references therein).
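As a numerical illustration of the Fischer-Burmeister function ϕ_{FB}(*x, s*) = *x* + *s* - (*x*^{2} + *s*^{2})^{1/2} and of the fact that it vanishes exactly at complementary pairs *x*, *s* ∈ 𝒦 with *x* º *s* = 0, here is a sketch (our own helper names; the Jordan square root is computed through the spectral decomposition):

```python
import numpy as np

def jordan_square(x):
    """x º x = (x^T x; 2*x1*x2)."""
    return np.concatenate(([x @ x], 2.0 * x[0] * x[1:]))

def soc_sqrt(w):
    """Jordan-algebra square root of w, valid when w is in the cone K."""
    w1, w2 = w[0], w[1:]
    r = np.linalg.norm(w2)
    lam1, lam2 = w1 - r, w1 + r
    omega = w2 / r if r > 0 else np.concatenate(([1.0], np.zeros(len(w2) - 1)))
    u1 = 0.5 * np.concatenate(([1.0], -omega))
    u2 = 0.5 * np.concatenate(([1.0], omega))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def phi_fb(x, s):
    """Fischer-Burmeister function x + s - (x^2 + s^2)^{1/2}."""
    return x + s - soc_sqrt(jordan_square(x) + jordan_square(s))

# x and s lie on the boundary of K with x º s = 0, so phi_FB vanishes:
x = np.array([1.0, 1.0, 0.0])
s = np.array([1.0, -1.0, 0.0])
assert np.allclose(phi_fb(x, s), 0.0)
```

The nonsmoothness shows up in `soc_sqrt`: at (*x, s*) = (0; 0) the argument of the square root has a zero eigenvalue and the derivative blows up, which is exactly what the smoothing function of the next paragraphs is designed to repair.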

We now smooth the function ϕ_{FB} so as to obtain a characterization of the central path conditions (4). By smoothing the symmetrically perturbed ϕ_{FB}, we obtain the new vector-valued function Φ : Ω → ℝ^{n} defined by (10), where

Ω = {(µ, *x, s*) ∈ ℝ_{++} × ℝ^{n} × ℝ^{n} : µ > || ||_{F}}.

**Proposition 3.1.** *For any* (µ_{1}, *x, s*), (µ_{2}, *x, s*) ∈ Ω,

||Φ(µ_{1}, *x, s*) - Φ(µ_{2}, *x, s*)||_{F} < |µ_{1} - µ_{2}|.

**Proof.** For any (µ_{1}, *x, s*), (µ_{2}, *x, s*) ∈ Ω, without loss of generality we assume µ_{1} ≥ µ_{2} > 0. Thus, we have

which completes the proof.

□

As we will show, the function Φ(µ, *x, s*) has many good properties that can be used to characterize the central path conditions (4). Φ(µ, *x, s*) is smooth for any (µ, *x, s*) ∈ Ω. This property plays an important role in the analysis of the quadratic convergence of our smoothing Newton method. Semismoothness is a generalization of smoothness; it was originally introduced in [15] and then extended by L. Qi in 1993. Semismooth functions include smooth functions, piecewise smooth functions, convex and concave functions, etc. The composition of (strongly) semismooth functions is still a (strongly) semismooth function.

**Definition 3.1.** *Suppose that H : ℝ^{n} → ℝ^{m} is a locally Lipschitz continuous function. H is said to be semismooth at x ∈ ℝ^{n} if H is directionally differentiable at x and, for any V* ∈ ∂*H*(*x* + Δ*x*),

*H*(*x* + Δ*x*) - *H*(*x*) - *V*(Δ*x*) = *o*(||Δ*x*||).

*H is said to be p-order* (0 < *p* < ∞) *semismooth at x if H is semismooth at x and*

*H*(*x* + Δ*x*) - *H*(*x*) - *V*(Δ*x*) = *O*(||Δ*x*||^{1+p}).

*In particular, H is called strongly semismooth at x if H is 1-order semismooth at x.*

The following concept of a smoothing function of a nondifferentiable function was introduced by Hayashi, Yamashita and Fukushima [8].

**Definition 3.2.** *A function H : ℝ^{n} → ℝ^{m} is said to be a semismooth (respectively, p-order semismooth) function if it is semismooth (respectively, p-order semismooth) everywhere in ℝ^{n}.*

In fact, the function Φ(µ, *x, s*) given by (10) is a smoothing function of ϕ_{FB}(*x, s*). Thus, we can solve a family of smoothing subproblems Φ(µ, *x, s*) = 0 for µ > 0 and obtain a solution of ϕ_{FB}(*x, s*) = 0 by letting µ ↓ 0.

**Definition 3.3** [8]. *For a nondifferentiable function g : ℝ^{n} → ℝ^{m}, we consider a function g_{µ} : ℝ^{n} → ℝ^{m} with a parameter µ > 0 that has the following properties:*

(i) *g_{µ} is differentiable for any* µ > 0;

(ii) lim_{µ↓0} *g_{µ}*(*x*) = *g*(*x*) *for any x* ∈ ℝ^{n}.

Such a function *g_{µ}* is called a smoothing function of *g*.

**Theorem 3.1.** *For any x, s* ∈ ℝ^{n}*, let w* := *, whose spectral decomposition associated with the SOC is w* = λ_{1}*u*_{1} + λ_{2}*u*_{2}. *If* µ > || ||_{F}*, then the following results hold:*

(i) *w* ⪰ 0;  (ii) µ*e* ≻ *w*;  (iii) µ^{2}*e* ≻ *w*^{2}.

**Proof.** Assume the spectral decomposition of *w* associated with the SOC is *w* = λ_{1}*u*_{1} + λ_{2}*u*_{2}. From µ > || ||_{F} we have

Thus, we have 0 ≤ λ_{i} < µ, *i* = 1, 2, which means that (i) holds. By 0 ≤ λ_{i} < µ, *i* = 1, 2,

µ*e* - *w* = (µ - λ_{1})*u*_{1} + (µ - λ_{2})*u*_{2} ≻ 0,

which yields (ii), and µ^{2}*e* - *w*^{2} = (µ^{2} - λ_{1}^{2})*u*_{1} + (µ^{2} - λ_{2}^{2})*u*_{2} ≻ 0, which gives (iii).

□

Now, we give the main properties of Φ( µ, *x, s*):

**Theorem 3.2.** (i) Φ(µ, *x, s*) *is globally Lipschitz continuous for any* (µ, *x, s*) ∈ Ω. *Moreover*, Φ(µ, *x, s*) *is continuously differentiable at any* (µ, *x, s*) ∈ Ω *with its Jacobian*

(ii) lim_{µ↓0} Φ(µ, *x, s*) = ϕ_{FB}(*x, s*) *for any* (*x, s*) ∈ ℝ^{n} × ℝ^{n}. *Thus*, Φ(µ, *x, s*) *is a smoothing function of* ϕ_{FB}(*x, s*).

**Proof.** (i) It is not difficult to show that Φ(µ, *x, s*) is globally Lipschitz continuous, and continuously differentiable at any (µ, *x, s*) ∈ Ω. Now we prove (12). For any (µ, *x, s*) ∈ Ω, from (10) and by simple calculation, we have

which yields (12).

Next, we prove (ii). For any *x* = (*x*_{1}; *x*_{2}) ∈ ℝ × ℝ^{n-1} and *s* = (*s*_{1}; *s*_{2}) ∈ ℝ × ℝ^{n-1}, denote *w* = , whose spectral decomposition associated with the SOC is *w* = λ_{1}*u*_{1} + λ_{2}*u*_{2}. From Theorem 3.1, we have *w* ⪰ 0 and 0 ≤ λ_{i} ≤ µ, *i* = 1, 2. Therefore, we have

Thus, we have lim_{µ↓0} Φ(µ, *x, s*) = ϕ_{FB}(*x, s*). Therefore, it follows from (i) and Definition 3.3 that Φ(µ, *x, s*) is a smoothing function of ϕ_{FB}(*x, s*).

□

**4 Description of the algorithm**

Based on the smoothing function (10) introduced in the previous section, the aim of this section is to propose a smoothing Newton-type algorithm for the SOCP and to show that it is well-defined under suitable assumptions.

Let *z* := (µ, *x, c - A^{T}y*) ∈ Ω. By using the smoothing function (10), we define the function *G*(µ, *x, y*) : ℝ × ℝ^{n} × ℝ^{m} → ℝ × ℝ^{n} × ℝ^{m} by

In view of (9) and (13), *z** := (µ*, *x**, *y**) is a solution of the system *G*(*z*) = 0 if and only if (*x**, *y**, *c - A^{T}y**) solves the optimality conditions (3).

It is well known that problems (PSOCP) and (DSOCP) are equivalent to (13) [1, 20]. Therefore, *z** is a solution of *G*(*z*) = 0 if and only if (*x**, *y**, *c - A^{T}y**) is an optimal solution of (PSOCP) and (DSOCP). Then we can apply Newton's method to the nonlinear system of equations *G*(*z*) = 0.

Let γ ∈ (0, 1) and define the function β : ℝ^{n+m+1} → ℝ_{+} by

Now we are in a position to give a formal description of our algorithm.

**Algorithm 4.1. ** (*A smoothing Newton-type method for* SOCP).

**Step 0.** Choose constants δ ∈ (0, 1), σ ∈ (0, 1) and µ_{0} ∈ ℝ_{++}, and let z̄ := (µ_{0}, 0, 0) ∈ Ω. Let (*x*_{0}, *y*_{0}) ∈ ℝ^{n} × ℝ^{m} be an arbitrary initial point and *z*_{0} := (µ_{0}, *x*_{0}, *y*_{0}). Choose γ ∈ (0, 1) such that γµ_{0} < 1/2. Set *k* := 0.

**Step 1.** If *G*(*z_{k}*) = 0, then stop. Else, let β_{k} := β(*z_{k}*).

**Step 2.** Compute Δ*z_{k}* := (Δµ_{k}, Δ*x_{k}*, Δ*y_{k}*) by solving the following system of linear equations

**Step 3.** Let ν_{k} be the smallest nonnegative integer ν such that

Let λ_{k} := δ^{ν_{k}}.

**Step 4.** Set *z_{k+1}* := *z_{k}* + λ_{k}Δ*z_{k}* and *k* := *k* + 1. Go to Step 1.
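To make the structure of Steps 1-4 concrete, the following is a simplified sketch of a damped Newton loop with one linear solve and one backtracking line search per iteration, applied to a generic smooth system. It deliberately omits the smoothing parameter µ and the perturbation term β_{k} that enter the paper's actual Newton system, so it is a structural illustration under our own naming, not the algorithm itself:

```python
import numpy as np

def damped_newton(G, J, z0, delta=0.65, sigma=0.35, tol=1e-5, max_iter=50):
    """Structural sketch of Algorithm 4.1: per iteration, one linear solve
    (Step 2) and one backtracking line search on ||G||^2 (Step 3)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        g = G(z)
        if np.linalg.norm(g) <= tol:             # Step 1: stopping test
            break
        dz = np.linalg.solve(J(z), -g)           # Step 2: one linear system
        lam = 1.0                                # Step 3: lam = delta^nu
        while (np.linalg.norm(G(z + lam * dz))**2
               > (1.0 - sigma * lam) * np.linalg.norm(g)**2) and lam > 1e-12:
            lam *= delta
        z = z + lam * dz                         # Step 4: update and repeat
    return z

# Toy smooth system with solution z* = (1, 1):
G = lambda z: np.array([z[0]**2 + z[1] - 2.0, z[1] - 1.0])
J = lambda z: np.array([[2.0 * z[0], 1.0], [0.0, 1.0]])
z_star = damped_newton(G, J, np.array([2.0, 2.0]))
assert np.allclose(z_star, [1.0, 1.0], atol=1e-4)
```

Near the solution the full step λ = 1 is always accepted and the iteration exhibits the fast local convergence that Section 5 establishes for the real algorithm.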

In order to analyze our algorithm, we study the Lipschitzian, smoothness and differential properties of the function *G*(*z*) given by (13). Moreover, we derive the computable formula for the Jacobian of the function *G*(*z*) and give the condition for the Jacobian to be nonsingular.

Throughout the rest of this paper, we make the following assumption:

**Assumption 4.1. ** The matrix *A* has full row rank, i.e., all the row vectors of *A* are linearly independent.

**Lemma 4.1** [7]. *For any x, s* ∈ ℝ^{n} *and any* ω ≻ 0*, we have*

*Moreover,* (17) *remains true when "*≻*" is replaced by "*⪰*" everywhere.*

**Theorem 4.1.** *Let z :=* (µ, *x, y*) ∈ Ω *and G :* ℝ × ℝ^{n} × ℝ^{m} → ℝ × ℝ^{n} × ℝ^{m} *be defined by* (13)*. Then the following results hold.*

(i) *G is globally Lipschitz continuous, and continuously differentiable at any z :=* (µ, *x, y*) ∈ Ω *with its Jacobian*

(ii) *Under Assumption *4.1*, G '*(*z*)* is nonsingular for any * µ *> 0. *

**Proof.** By Theorem 3.1, we can easily show that (i) holds. Now we prove (ii). For any fixed µ > 0, let

Δ*z* := (Δµ, Δ*x*, Δ*y*) ∈ ℝ × ℝ^{n} × ℝ^{m}.

It is sufficient to prove that the linear system of equations

has only zero solution, i.e., Δ µ = 0, Δ*x* = 0 and Δ*y* = 0. By (18) and (19), we have

Since

It follows from Lemma 4.1 that

Premultiplying (23) by Δ*x^{T}* and taking into account *A*Δ*x* = 0, we have

Denote

From (24), we obtain

By (23), is positive definite. Therefore, (26) yields Δ = 0, and it follows from (25) that Δ*x* = 0. Since *A* has full row rank, (21) implies Δ*y* = 0. Thus the linear system of equations (18) has only the zero solution, and hence *G'*(*z*) is nonsingular. This completes the proof.

□

By Theorem 4.1, we can show that Algorithm 4.1 is well-defined.

**Theorem 4.2.** *Suppose that Assumption 4.1 holds. If* µ_{k} > 0*, then Algorithm 4.1 is well-defined for any k* ≥ 0.

**Proof.** Since *A* has full row rank, it follows from Theorem 4.1 that *G'*(*z_{k}*) is nonsingular for any µ_{k} > 0. Therefore, Step 2 is well-defined at the *k*th iteration. Then, by following the proof of Lemma 5 in [20], we can show the well-definedness of Step 3. The proof is completed.

□

**5 Convergence analysis**

In this section, we analyze the global and local convergence properties of Algorithm 4.1. It is shown that any accumulation point of the iteration sequence is a solution of the system *G*(*z*) = 0. If the accumulation point *z** satisfies a nonsingularity assumption, then the iteration sequence converges to *z** locally Q-quadratically without any strict complementarity assumption. To show the global convergence of Algorithm 4.1, we need the following Lemma (see [20], Proposition 6).

**Lemma 5.1.** *Suppose that Assumption 4.1 holds. If* ẑ ∈ ℝ_{++} × ℝ^{n} × ℝ^{m} *and G'*(ẑ) *is nonsingular, then there exist a closed neighborhood* 𝒩(ẑ) *of* ẑ *and a positive number* ᾱ ∈ (0, 1] *such that for any z =* (µ, *x, y*) ∈ 𝒩(ẑ) *and all* α ∈ [0, ᾱ]*, we have* µ ∈ ℝ_{++}*, G'*(*z*) *is invertible and*

**Theorem 5.1.** *Suppose that Assumption 4.1 holds and that* {*z_{k}*} *is the iteration sequence generated by Algorithm 4.1. Then the following results hold.*

(i) µ_{k} ∈ ℝ_{++} *and z_{k}* ∈ Θ *for any k* ≥ 0*, where*

(ii) *Any accumulation point z* :=* (µ*, *x**, *y**) *of* {*z_{k}*} *is a solution of G*(*z*) = 0.

**Proof.** Suppose that µ_{k} > 0. It follows from (15) and Step 4 that

Substituting (29) into (30), we have

which, together with µ_{0} > 0 and λ_{k} = δ^{ν_{k}} ∈ (0, 1), implies that µ_{k} ∈ ℝ_{++} for any *k* ≥ 0.

Now we prove *z_{k}* ∈ Θ for any *k* ≥ 0 by induction. Since

β_{0} = β(*z*_{0}) = γ min{1, ||*G*(*z*_{0})||^{2}} ≤ γ ∈ (0, 1),

it is easy to see that *z*_{0} ∈ Θ. Suppose that *z_{k}* ∈ Θ; then

We consider the following two cases:

**Case (I)**: If ||*G*(*z_{k}*)|| > 1, then

Since β_{k+1} = γ min{1, ||*G*(*z_{k+1}*)||^{2}} ≤ γ, it follows from (16), (31), (32) and (33) that

**Case (II)**: If ||*G*(*z_{k}*)|| ≤ 1, then

By (16), we have ||*G*(*z_{k+1}*)|| ≤ ||*G*(*z_{k}*)|| ≤ 1. From (31), (35), and taking into account β_{k+1} = γ||*G*(*z_{k+1}*)||^{2}, we have

Combining (34) and (36) yields that *z_{k}* ∈ Θ for any *k* ≥ 0.

Now, we prove (ii). Without loss of generality, we assume that {*z_{k}*} converges to *z** as *k* → +∞. Since {||*G*(*z_{k}*)||} is monotonically decreasing and bounded from below, it follows from the continuity of *G*(·) that {||*G*(*z_{k}*)||} converges to a nonnegative number ||*G*(*z**)||. Then, by the definition of β(·), we obtain that {β_{k}} converges to

β* = γ min{1, ||*G*(*z**)||^{2}}.

It follows from (15) and Theorem 5.1 (i) that

0 < µ_{k+1} = (1 - λ_{k})µ_{k} + λ_{k}β_{k}µ_{0} ≤ µ_{k},

which implies that {µ_{k}} converges to µ*. If ||*G*(*z**)|| = 0, then we obtain the desired result. In the following, we suppose ||*G*(*z**)|| > 0. By Lemma 4.1, 0 < β*µ_{0} ≤ µ*. It follows from Theorem 4.1 that *G*'(*z**) exists and is invertible. Hence, by Lemma 5.1, there exist a closed neighborhood 𝒩(*z**) of *z** and a positive number ᾱ ∈ (0, 1] such that for any *z* = (µ, *x, y*) ∈ 𝒩(*z**) and all α ∈ [0, ᾱ], we have µ ∈ ℝ_{++}, *G*'(*z*) is invertible and

Therefore, for a nonnegative integer ν̂ such that δ^{ν̂} ∈ (0, ᾱ], for all sufficiently large *k*, we have

||*G*(*z_{k}* + δ^{ν̂}Δ*z_{k}*)||^{2} ≤ [1 - σ(1 - 2γµ_{0})δ^{ν̂}]||*G*(*z_{k}*)||^{2}.

For all sufficiently large *k*, since λ_{k} = δ^{ν_{k}} ≥ δ^{ν̂}, it follows from (16) that

This contradicts the fact that the sequence {||*G*(*z_{k}*)||} converges to ||*G*(*z**)|| > 0. This completes the proof.

□

To establish the locally *Q*-quadratic convergence of our smoothing Newton method, we need the following assumption:

**Assumption 5.1. ** Assume that *z** satisfies the nonsingularity condition, i.e., all *V* ∈ ∂*G*(*z**) are nonsingular.

Now we are in a position to give the rate of convergence of Algorithm 4.1.

**Theorem 5.2.** *Suppose that Assumption 4.1 holds and that z* is an accumulation point of the iteration sequence* {*z_{k}*} *generated by Algorithm 4.1. If Assumption 5.1 holds, then:*

(i) λ_{k} ≡ 1 *for all z_{k} sufficiently close to z**;

(ii) {*z_{k}*} *converges to z* Q-quadratically, i.e.,* ||*z_{k+1}* - *z**|| = *O*(||*z_{k}* - *z**||^{2})*; moreover,* µ_{k+1} = *O*(µ_{k}^{2}).

_{k+1}= O **Proof. ** By using Lemma 3.1 and Theorem 4.1, we can prove the theorem similarly as in Theorem 8 of [20]. For brevity, we omit the details here.

□

**6 Numerical results**

In this section, we conducted some numerical experiments to evaluate the efficiency of Algorithm 4.1. All experiments were performed on an IBM R40e notebook computer with an Intel(R) Pentium(R) 4 CPU at 2.00 GHz and 512 MB of memory. The operating system was Windows XP SP2 and the implementations were done in MATLAB 7.0.1. For comparison purposes, we also used the SDPT3 solver [26], which is an IPM-based solver for the SOCP.

For simplicity, we randomly generated six test problems of size *m* = 50 and *n* = 100. To be specific, we generated a random matrix *A* ∈ ℝ^{m×n} with full row rank and random vectors *x* ∈ int 𝒦, *s* ∈ int 𝒦, *y* ∈ ℝ^{m}, and then let *b* := *Ax* and *c* := *A^{T}y + s*.
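The test-problem construction just described can be sketched as follows (illustrative NumPy code; the seed and the recipe for drawing interior points are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100

A = rng.standard_normal((m, n))     # a Gaussian matrix has full row rank a.s.

def random_int_soc(n, rng):
    """Draw x = (x1; x2) with x1 > ||x2||, i.e. x in the interior of K^n."""
    x2 = rng.standard_normal(n - 1)
    return np.concatenate(([np.linalg.norm(x2) + rng.uniform(0.1, 1.0)], x2))

x, s = random_int_soc(n, rng), random_int_soc(n, rng)
y = rng.standard_normal(m)
b, c = A @ x, A.T @ y + s           # b := Ax, c := A^T y + s

# (x, y, s) is strictly feasible, and the duality gap equals <x, s> > 0:
assert np.isclose(c @ x - b @ y, x @ s)
assert x[0] > np.linalg.norm(x[1:]) and s[0] > np.linalg.norm(s[1:])
```

By construction both problems admit strictly feasible points, so strong duality holds and the optimal values coincide, which is exactly why this recipe yields well-posed test instances.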

Thus the generated problems (PSOCP) and (DSOCP) have optimal solutions and their optimal values coincide, because the sets of strictly feasible solutions of (PSOCP) and (DSOCP) are nonempty. Let *x*_{0} = *e* ∈ ℝ^{n} and *y*_{0} = 0 ∈ ℝ^{m} be the initial points. The parameters used in Algorithm 4.1 were as follows:

µ_{0} = 0.01, σ = 0.35, δ = 0.65 and γ = 0.90.

We used ||*G*(*z_{k}*)|| ≤ 10^{-5} as the stopping criterion.

The results in Table 1 indicate that Algorithm 4.1 performs very well. We also obtained similar results for other random examples.

**Acknowledgments. ** The work is supported by National Natural Science Foundation of China (10571109, 10971122), Natural Science Foundation of Shandong (Y2008A01), Scientific and Technological Project of Shandong Province (2009GG10001012), the Excellent Young Scientist Foundation of Shandong Province (BS2011SF024), and Project of Shandong Province Higher Educational Science and Technology Program (J10LA51). The authors would like to thank the anonymous referees for their valuable comments and suggestions on the paper, which have considerably improved the paper.

**REFERENCES**

[1] F. Alizadeh and D. Goldfarb, *Second-order cone programming*. Mathematical Programming, **95** (2003), 3-51. [ Links ]

[2] B. Chen and N. Xiu, *A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions*. SIAM Journal on Optimization, **9** (1999), 605-623. [ Links ]

[3] J. Chen, *Two classes of merit functions for the second-order cone complementarity problem*. Math. Meth. Oper. Res., ** 64** (2006), 495-519. [ Links ]

[4] R. Debnath, M. Muramatsu and H. Takahashi, *An Efficient Support Vector Machine Learning Method with Second-Order Cone Programming for Large-Scale Problems*. Applied Intelligence, **23** (2005), 219-239. [ Links ]

[5] J. Faraut and A. Koranyi, *Analysis on symmetric cones*. Oxford University Press, London and New York, 1994, ISBN: 0-198-53477-9. [ Links ]

[6] L. Faybusovich, *Euclidean Jordan Algebras and Interior-point Algorithms*. Positivity, **1** (1997), 331-357. [ Links ]

[7] M. Fukushima, Z. Luo and P. Tseng, *Smoothing functions for second-order-cone complementarity problems*. SIAM J. Optim., **12**(2) (2002), 436-460. [ Links ]

[8] S. Hayashi, N. Yamashita and M. Fukushima, *A combined smoothing and regularized method for monotone second-order cone complementarity problems*. SIAM Journal on Optimization, **15** (2005), 593-615. [ Links ]

[9] K. Hotta and A. Yoshise, *Global convergence of a class of non-interior-point algorithms using Chen-Harker-Kanzow functions for nonlinear complementarity problems*. Discussion Paper Series No. 708, Institute of Policy and Planning Sciences, University of Tsukuba, Tsukuba, Ibaraki 305, Japan, December 1996. CMP 98:13. [ Links ]

[10] Y. Kanno and M. Ohsaki, *Contact Analysis of Cable Networks by Using Second-Order Cone Programming*. SIAM J. Sci. Comput., ** 27**(6) (2006), 2032-2052. [ Links ]

[11] Y.J. Kuo and H.D. Mittelmann, *Interior point methods for second-order cone programming and OR applications*. Computational Optimization and Applications, **28** (2004), 255-285. [ Links ]

[12] Y. Liu, L. Zhang and Y. Wang, *Analysis of a Smoothing Method for Symmetric Cone Linear Programming*. J. Appl. Math. and Computing, ** 22**(1-2) (2006), 133-148. [ Links ]

[13] M.S. Lobo, L. Vandenberghe, S. Boyd and H. Lebret, *Applications of second-order cone programming*. Linear Algebra and its Applications, **284** (1998), 193-228. [ Links ]

[14] C. Ma and X. Chen, *The convergence of a one-step smoothing Newton method for P_{0}-NCP based on a new smoothing NCP-function*. Journal of Computational and Applied Mathematics, **216** (2008), 1-13. [ Links ]

[15] R. Mifflin, *Semismooth and semiconvex functions in constrained optimization*. SIAM Journal on Control and Optimization, **15** (1977), 957-972. [ Links ]

[16] Y.E. Nesterov and M.J. Todd, *Primal-dual Interior-point Methods for Self-scaled Cones*. SIAM J. Optim., **8**(2) (1998), 324-364. [ Links ]

[17] J. Peng, C. Roos and T. Terlaky, *Primal-dual Interior-point Methods for Second-order Conic Optimization Based on Self-Regular Proximities*. SIAM J. Optim., **13**(1) (2002), 179-203. [ Links ]

[18] X. Peng and I. King, *Robust BMPM training based on second-order cone programming and its application in medical diagnosis*. Neural Networks, **21** (2008), 450-457. [ Links ]

[19] L. Qi and D. Sun, *Improving the convergence of non-interior point algorithm for nonlinear complementarity problems*. Mathematics of Computation, **69** (2000), 283-304. [ Links ]

[20] L. Qi, D. Sun and G. Zhou, *A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities*. Mathematical Programming, **87** (2000), 1-35. [ Links ]

[21] B.K. Rangarajan, *Polynomial Convergence of Infeasible-Interior-Point Methods over Symmetric Cones*. SIAM J. Optim., ** 16**(4) (2006), 1211-1229. [ Links ]

[22] T. Sasakawa and T. Tsuchiya, *Optimal Magnetic Shield Design with Second-order Cone Programming*. SIAM J. Sci. Comput., **24**(6) (2003), 1930-1950. [ Links ]

[23] S.H. Schmieta and F. Alizadeh, *Extension of primal-dual interior point algorithms to symmetric cones*, Mathematical Programming, Series A **96** (2003), 409-438. [ Links ]

[24] P.K. Shivaswamy, C. Bhattacharyya and A.J. Smola, *Second Order Cone Programming Approaches for Handling Missing and Uncertain Data*. Journal of Machine Learning Research, **7** (2006), 1283-1314. [ Links ]

[25] D. Sun and J. Sun, *Strong Semismoothness of the Fischer-Burmeister SDC and SOC Complementarity Functions*. Math. Program., Ser. A **103** (2005), 575-581. [ Links ]

[26] K.C. Toh, R.H. Tütüncü and M.J. Todd, *SDPT3 Version 3.02-A MATLAB software for semidefinite-quadratic-linear programming*, http://www.math.nus.edu.sg/~mattohkc/sdpt3.html, 2002. [ Links ]

[27] P. Tseng, *Error bounds and superlinear convergence analysis of some Newton-type methods in optimization*, in: G. Di Pillo, F. Giannessi (Eds.), Nonlinear Optimization and Related Topics, Kluwer Academic Publishers, Boston, (2000), pp. 445-462. [ Links ]

[28] S. Xu, *The global linear convergence and complexity of a non-interior path-following algorithm for monotone LCP based on Chen-Harker-Kanzow-Smale smooth functions*. Preprint, Department of Mathematics, University of Washington, Seattle, WA 98195, February (1997). [ Links ]

[29] S. Yan and Y. Ma, *Robust supergain beamforming for circular array via second-order cone programming*. Applied Acoustics, ** 66** (2005), 1018-1032. [ Links ]

Received: 20/VI/10.

Accepted: 03/VIII/10.

#CAM-227/10.

* Corresponding author.