
## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.31 no.2 São Carlos 2012

#### http://dx.doi.org/10.1590/S1807-03022012000200010

**A filled function method for nonlinear systems of equalities and inequalities ^{*}**

**Zhongping Wan ^{**}; Liuyang Yuan; Jiawei Chen**

School of Mathematics and Statistics, Wuhan University, Wuhan, Hubei, 430072, P.R. China. E-mails: mathwanzhp@whu.edu.cn / yangly0601@126.com

**ABSTRACT**

In this paper a filled function method is suggested for solving nonlinear systems of equalities and inequalities. Firstly, the original problem is reformulated into an equivalent constrained global optimization problem. Subsequently, a new filled function with one parameter is constructed based on the special characteristics of the reformulated optimization problem. Some properties of the filled function are studied and discussed. Finally, an algorithm based on the proposed filled function for solving nonlinear systems of equalities and inequalities is presented. The objective function value can be reduced by half in each iteration of our filled function algorithm. The implementation of the algorithm on several test problems is reported with numerical results.

**Mathematical subject classification:** 65K05, 90C30.

**Key words:** nonlinear systems of equalities and inequalities, constrained global optimization, filled function method.

**1 Introduction**

In this paper, we consider the following system of nonlinear equalities and inequalities (for short, (SNEI)):

*c _{i}*(*x*) ≤ 0, *i* ∈ *I*,  *c _{i}*(*x*) = 0, *i* ∈ *E*,

where the functions *c _{i}* : *R ^{n}* → *R* are continuously differentiable, *I* ∪ *E* = {1,..., *m*} and *I* ∩ *E* = ∅. Systems of nonlinear equalities and inequalities have found myriad applications in various industrial and economic areas.

If *n* = *m* and *I* = ∅, the system (SNEI) reduces to a system of nonlinear equations, a classical problem in mathematics for which there are many well-known methods, such as Newton-type methods, secant methods and trust-region methods (see [3, 4, 5, 11] etc.). In recent years, these methods have also been extended to solve the system (SNEI) (see [6, 7, 15, 16, 17, 18, 19, 22, 26] etc.).

A typical way of solving the system (SNEI) is to reformulate it as the following constrained optimization problem (for short, (COP)):

min *f*(*x*) s.t. *c _{i}*(*x*) ≤ 0, *i* ∈ *I*,

where *f*(*x*) = Σ_{*i* ∈ *E*} *c _{i}*(*x*)^{2} measures the violation of the equality constraints. Some well-developed optimization methods can then be applied to problem (COP). Clearly, global optimal solutions of problem (COP) with zero objective function value correspond to solutions of the system (SNEI). Therefore, efficient global optimization methods are crucial for successfully solving the system (SNEI).
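As a concrete illustration of this reformulation, the sketch below sets up the (COP) objective and a feasibility test for a small hypothetical system; the specific constraints, the squared-residual objective and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-variable system:
#   E: x0^2 + x1^2 - 1 = 0   (equality)
#   I: x0 - x1 <= 0          (inequality)
def c_eq(x):    # c_i, i in E
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def c_ineq(x):  # c_i, i in I
    return np.array([x[0] - x[1]])

def f(x):
    """Objective of (COP): sum of squared equality residuals.
    f(x) = 0 together with c_i(x) <= 0 (i in I) iff x solves (SNEI)."""
    return float(np.sum(c_eq(x) ** 2))

def solves_snei(x, tol=1e-8):
    return f(x) <= tol and bool(np.all(c_ineq(x) <= tol))

x_sol = np.array([0.0, 1.0])   # on the unit circle and x0 <= x1
x_bad = np.array([1.0, 0.0])   # on the circle but violates x0 <= x1
print(solves_snei(x_sol), solves_snei(x_bad))  # True False
```

A global minimizer of the objective with value zero that also satisfies the inequalities is exactly a solution of the hypothetical system.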

As one of the main global optimization methods for solving general unconstrained global optimization problems without special structural properties, the filled function method has attracted much attention. The filled function algorithm is an efficient deterministic global optimization algorithm, and it has been used to solve a variety of optimization problems, such as unconstrained global optimization problems [9, 10], constrained global optimization problems [25], nonlinear equations [13, 14, 24], constrained nonlinear equations [1] and nonlinear complementarity problems [27]. The main idea of the filled function method is to construct an auxiliary function, called a filled function, via the current local minimizer of the original optimization problem, with the property that the current local minimizer is a local maximizer of the constructed filled function; a better initial point for the primal optimization problem can then be obtained by locally minimizing the constructed filled function. From both the theoretical and the algorithmic points of view, the filled function method is competitive with other existing global optimization methods, such as tunneling function methods [2, 12] and stochastic methods [20, 21]. Bai et al. [1] proposed a filled function with three parameters for solving constrained nonlinear equations. However, in [1], the gradient ∇ *p _{r, c, q, x*}*(*x*) depends on *x* - *x**, which is not desirable. Motivated by these considerations, a novel filled function with only one parameter is constructed here by employing the idea of the penalty function for the constrained optimization problem reformulated from the system (SNEI). Moreover, the gradient of the new filled function is not affected by || *x* - *x** ||. The numerical experiments show that the novel filled function method outperforms that of [1].

**Outline.** The rest of the paper is organized as follows. In Section 2, a filled function is constructed for the reformulated constrained global optimization problem (COP). The corresponding filled function algorithm is presented in Section 3. Several numerical examples are reported in Section 4, and finally, some concluding remarks are made in Section 5.

**2 Filled function for problem (COP)**

In this section, the following problem (COP) is considered:

min *f*(*x*) s.t. *x* ∈ *S*.

Let

*S* = {*x* ∈ *R ^{n}* | *c _{i}*(*x*) ≤ 0, *i* ∈ *I*},  *Sº* = {*x* ∈ *R ^{n}* | *c _{i}*(*x*) < 0, *i* ∈ *I*}.

Note that *Sº* is not necessarily identical to the interior of *S*.

Throughout the rest of the paper, we always assume that the following assumptions for problem (COP) hold.

**Assumption 2.1** *I* ≠ ∅.

**Assumption 2.2** *The system of nonlinear equalities and inequalities has at least one solution.*

**Assumption 2.3** *Sº* ≠ ∅ *and clSº = S, where clA denotes the closure of set A.*

By Assumption 2.3, we can see that for any *x*_{0} ∈ *S*, there exists a sequence {*x _{k}*} ⊂ *Sº* such that lim_{*k* → ∞} *x _{k}* = *x*_{0}.

It is easy to see that *x** is a solution of the system (SNEI) if and only if it is a global minimizer of problem (COP) satisfying *f*(*x**) = 0.

For a given *x** ∈ *S* with *f*(*x**) > 0, the definition of the filled function is given as follows.

**Definition 2.4** *A continuously differentiable function p*(*x, x**) *is called a filled function of problem (COP) at x* with f*(*x**) > 0 *if it satisfies the following conditions:*

(1) *x** is a strict local maximizer of *p*(*x, x**) on *R ^{n}*;

(2) any x̄ ∈ *S*\{*x**} with ∇*p*(x̄, *x**) = 0 implies *f*(x̄) < *f*(*x**)/2;

(3) any local minimizer x̄ of *p*(*x, x**) on *R ^{n}* satisfies *f*(x̄) < *f*(*x**)/2 and x̄ ∈ *Sº*;

(4) any local minimizer x̄ of problem (COP) with *f*(x̄) ≤ *f*(*x**)/2 and x̄ ∈ *Sº* is a local minimizer of *p*(*x, x**) on *R ^{n}*.

**Remark 2.5** With the definition above, we know that if *p*(*x, x**) is a filled function of problem (COP) at *x** with *f*(*x**) > 0, i.e. *x** is not a solution to the system (SNEI), then any local minimizer x̄ of *p*(*x, x**) on *R ^{n}* satisfies *f*(x̄) < *f*(*x**)/2 and x̄ ∈ *Sº*.

In the following, a one-parameter filled function satisfying Definition 2.4 is introduced. To begin with, we design a continuously differentiable function *H _{r, a}*(*t*) with the following properties: it equals a positive constant *a* when *t* ≥ 0, and it equals 0 when *t* ≤ -*r*. More specifically, for any given *r* > 0 and *a* > 0, we construct *H _{r, a}*(*t*) as follows

Note that the requirement of continuous differentiability of *H _{r, a}*(*t*) justifies the use of the log function and a cubic polynomial. The function *H _{r, a}*(*t*) differs from *h _{r, a}*(*t*) in [1], as the figures below show.

From the figures above, it can be seen that when *r* ≥ 1, *H _{r, a}*(*t*) increases faster than *h _{r, a}*(*t*) from -*r* to 0, while when 0 < *r* < 1, *H _{r, a}*(*t*) becomes increasingly close to *h _{r, a}*(*t*) as *r* gets smaller.
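The paper's exact piecewise log/cubic formula for *H _{r, a}* is given in its equation (1), which did not survive in this copy. As a stand-in with the same stated properties (continuously differentiable, increasing, 0 for *t* ≤ -*r*, the constant *a* for *t* ≥ 0), a cubic Hermite blend can be sketched; this is a substitute construction, not the authors' formula.

```python
def H(t, r, a):
    """Smooth transition: 0 for t <= -r, a for t >= 0, C^1 and
    nondecreasing in between (cubic Hermite blend, zero slope at
    both ends, so the pieces join with continuous derivative)."""
    if t <= -r:
        return 0.0
    if t >= 0.0:
        return a
    u = (t + r) / r                       # u in (0, 1)
    return a * (3.0 * u**2 - 2.0 * u**3)  # smoothstep polynomial

r, a = 0.5, 2.0
print(H(-1.0, r, a), H(0.1, r, a), H(-0.25, r, a))  # 0.0 2.0 1.0
```

Any construction with these properties (including the paper's log/cubic one) can play the same role in the filled function below.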

It is not difficult to check that *H _{r, a}*(*t*) is continuously differentiable and increasing on *R*. Obviously, we have

Given an *x** ∈ *S* with *f*(*x**) > 0, the following filled function with one parameter is constructed

where the only parameter *q* > 0 and *x** is the current local minimizer of problem (COP). Clearly, *F*(*x*, *x**, *q*) is continuously differentiable on *R ^{n}*.
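The displayed formula (3) for *F* is lost in this copy of the text. The sketch below assumes one plausible form consistent with the properties proved in Theorems 2.7-2.9 (a distance term -||*x* - *x**|| plus *q* times *H*-penalties on *f*(*x*) - *f*(*x**)/2 and on the constraints); it should not be read as the authors' exact construction, and the toy instance at the bottom is hypothetical.

```python
import numpy as np

def make_filled_function(f, c_ineq, x_star, q, r=0.1, a=1.0):
    """Sketch of a one-parameter filled function at the current point x*.
    ASSUMED form (not the paper's verbatim formula (3)):
        F(x) = -||x - x*|| + q * [ H(f(x) - f(x*)/2) + sum_i H(c_i(x)) ]
    so F reduces to -||x - x*|| wherever f(x) <= f(x*)/2 - r and
    c_i(x) <= -r for all i in I."""
    def H(t):  # C^1 transition: 0 for t <= -r, a for t >= 0
        if t <= -r:
            return 0.0
        if t >= 0.0:
            return a
        u = (t + r) / r
        return a * (3.0 * u**2 - 2.0 * u**3)

    half = f(x_star) / 2.0
    def F(x):
        penalty = H(f(x) - half) + sum(H(ci) for ci in c_ineq(x))
        return -np.linalg.norm(x - x_star) + q * penalty
    return F

# Toy instance: current point x* = 1 with f(x*) = 1 (the true minimizer
# of this f is at 0); one inequality x0 <= 2.
f = lambda x: float(x[0] ** 2)
cs = lambda x: [x[0] - 2.0]
F = make_filled_function(f, cs, np.array([1.0]), q=10.0)
print(F(np.array([0.0])), F(np.array([1.0])))  # -1.0 10.0
```

At the deeper point 0 all penalties vanish and F equals the pure distance term, while at *x** itself the penalty is active, illustrating why *x** sits at a local maximum of F.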

**Remark 2.6** Note that *F*(*x*, *x**, *q*) includes a penalty term to penalize infeasible points.

The following theorems show that *F*(*x*, *x**, *q*) satisfies Definition 2.4 when the positive parameter *q* is sufficiently large.

**Theorem 2.7** *Let f*(*x**) > 0, *q* > 0. *Then x* is a strict local maximizer of F*(*x, x*, q*) *on R ^{n}.*

**Proof.** Since *x** is a local minimizer of *f*(*x*), there exists a neighborhood *N*(*x**, *σ*) = {*x* | || *x* - *x** || < *σ*} of *x** with *σ* > 0 such that *f*(*x*) ≥ *f*(*x**) for all *x* ∈ *N*(*x**, *σ*). Then for any *x* ∈ *N*(*x**, *σ*), *x* ≠ *x**, *q* > 0, and 0 ≤ *H _{r, a}*(*t*) ≤ *a*, we have

Thus, *x** is a strict local maximizer of *F*(*x*, *x**, *q*) on *R ^{n}*. □

Theorem 2.7 reveals that the proposed new filled function satisfies condition (1) of Definition 2.4.

**Theorem 2.8** *Let f*(*x**) > 0, *q* > 0. *Then any point* x̄ ∈ *S*\{*x**} *with* ∇*F*(x̄, *x**, *q*) = 0 *implies f*(x̄) < *f*(*x**)/2.

**Proof.** Assume that *f*(x̄) ≥ *f*(*x**)/2 for some point x̄ ∈ *S*\{*x**} with ∇*F*(x̄, *x**, *q*) = 0. Then we have

which is a contradiction. Thus, any point x̄ ∈ *S*\{*x**} with ∇*F*(x̄, *x**, *q*) = 0 implies *f*(x̄) < *f*(*x**)/2. □

Theorem 2.8 reveals that the proposed new filled function satisfies condition (2) of Definition 2.4.

**Theorem 2.9** *Let f*(*x**) > 0, *q* > 0. *Then any local minimizer* x̄ *of F*(*x, x*, q*) *on R ^{n} satisfies f*(x̄) < *f*(*x**)/2 *and* x̄ ∈ *Sº*.

**Proof.** Let x̄ be a local minimizer of *F*(*x*, *x**, *q*) on *R ^{n}*; then ∇*F*(x̄, *x**, *q*) = 0 and x̄ ≠ *x** (since *x** is a strict local maximizer of *F*(*x*, *x**, *q*) on *R ^{n}*). By contradiction, suppose that x̄ neither satisfies *f*(x̄) < *f*(*x**)/2 nor belongs to *Sº*. Then

This is a contradiction. Therefore, if x̄ is a local minimizer of *F*(*x*, *x**, *q*) on *R ^{n}*, we have *f*(x̄) < *f*(*x**)/2 and x̄ ∈ *Sº*. □

Theorem 2.9 reveals that the proposed new filled function satisfies condition (3) of Definition 2.4.

**Theorem 2.10** *Let f*(*x**) > 0, *q* > 0. *Suppose that (SNEI) satisfies Assumption 2.2. Then there exists q*_{0} > 0 *such that, when q* ≥ *q*_{0}, *any local minimizer* x̄ *of problem (COP) with f*(x̄) ≤ *f*(*x**)/2 *and* x̄ ∈ *Sº is a local minimizer of F*(*x, x*, q*) *on R ^{n}. Furthermore, the number of points* x̄ ∈ *Sº with f*(x̄) ≤ *f*(*x**)/2 *is infinite.*

**Proof.** Let x̄ be a local minimizer of problem (COP) on *Sº* with *f*(x̄) ≤ *f*(*x**)/2. Then *c _{i}*(x̄) < 0, *i* ∈ *I*, and there exists a small enough number *σ* > 0 such that *f*(*x*) ≤ *f*(*x**)/2 and *f*(*x*) ≥ *f*(x̄) for all *x* ∈ *S* ∩ *N*(x̄, *σ*), where *N*(x̄, *σ*) = {*x* ∈ *R ^{n}* | || *x* - x̄ || < *σ*}. Thus, there exists *q*_{0} > 0 such that the penalty terms of *F* vanish at x̄ whenever *q* ≥ *q*_{0}; it follows that *F*(x̄, *x**, *q*) = - || x̄ - *x** || when *q* ≥ *q*_{0}. Since *F*(*x*, *x**, *q*) ≥ - || *x* - *x** || for any *x* ∈ *R ^{n}*, x̄ is a minimizer of *F*(*x*, *x**, *q*) on *N*(x̄, *σ*). Therefore, x̄ is a local minimizer of *F*(*x*, *x**, *q*) on *R ^{n}*.

Let x̂ be a solution of (SNEI). Then x̂ ≠ *x**. By *clSº* = *S*, there exists a sequence {*x _{k}*} ⊂ *Sº* such that *x _{i}* ≠ *x _{j}* for *i* ≠ *j* and lim_{*k* → ∞} *x _{k}* = x̂. Hence, there exists a positive integer *k*_{0} such that when *k* ≥ *k*_{0}, *f*(*x _{k}*) ≤ *f*(*x**)/2. Therefore, the number of points x̄ ∈ *Sº* with *f*(x̄) ≤ *f*(*x**)/2 is infinite. □

Theorem 2.10 shows that, for all *q* ≥ *q*_{0}, *F*(*x*, *x**, *q*) satisfies condition (4) of Definition 2.4. The following theorem shows that the function *F*(*x*, *x**, *q*) has an interesting property.

**Theorem 2.11** *Let x*_{1}, *x*_{2} ∈ *S and the following conditions hold:*

(i) min{*f*(*x*_{1}), *f*(*x*_{2})} > *f*(*x**);

(ii) || *x*_{2} - *x** || > || *x*_{1} - *x** ||.

*Then the inequality F*(*x*_{1}, *x*, q*) > *F*(*x*_{2}, *x*, q*) *holds for all q* > 0.

**Proof.** By condition (i), *f*(*x _{i}*) > *f*(*x**) > *f*(*x**)/2, so the penalty term of *F* attains the same value at both points and *F*(*x _{i}*, *x**, *q*) = - || *x _{i}* - *x** || + *qa*, *i* = 1, 2. By condition (ii), - || *x*_{1} - *x** || > - || *x*_{2} - *x** ||. Therefore, for all *q* > 0, *F*(*x*_{1}, *x**, *q*) > *F*(*x*_{2}, *x**, *q*) holds. □

**3 Filled function algorithm**

In this section, a global optimization method for solving problem (COP) is presented, based on the constructed filled function (3); it leads to a solution or an approximate solution of (SNEI).

Suppose that (SNEI) has at least one solution. The general idea of the global optimization method is as follows.

Let *x*_{0} ∈ *S* be a given initial point. Starting from this point, a local minimizer x̄ of problem (COP) is obtained with a local minimization method. The main task is then to find a deeper local minimizer of problem (COP) if x̄ is not a global minimizer.

Consider the following filled function problem (for short, (FFP))

where *F*(*x*, x̄, *q*) is given by (3).

Let x̂ be an obtained local minimizer of problem (FFP) on *R ^{n}*. Then, by Theorem 2.9, we have *f*(x̂) < *f*(x̄)/2 and x̂ ∈ *Sº*. Starting from this initial point x̂, we can obtain a new local minimizer x̄_{1} of problem (COP). If x̄_{1} is a global minimizer (namely *f*(x̄_{1}) = 0), then x̄_{1} is a solution of the system (SNEI); otherwise, problem (FFP) is solved locally again. Repeating this process, we finally obtain either a solution of the system (SNEI) or a sequence {x̄_{*k*}} ⊂ *Sº* with *f*(x̄_{*k* + 1}) < *f*(x̄_{*k*})/2, *k* = 1, 2,.... For such a sequence, when *k* is sufficiently large, x̄_{*k*} can be regarded as an approximate solution of the system (SNEI).
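Since each successful iteration at least halves the objective value, the number of iterations needed to reach a prescribed tolerance ε grows only logarithmically in 1/ε:

```latex
f(\bar{x}_k) < \frac{f(\bar{x}_1)}{2^{\,k-1}} \le \varepsilon
\qquad \text{once} \qquad
k \;\ge\; 1 + \log_2\!\bigl(f(\bar{x}_1)/\varepsilon\bigr).
```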

Let *x** ∈ S and *ε* > 0, *x** is called a *ε*-approximate solution of the system (SNEI) if *x** ∈ S and f(*x**) __<__ *ε*.

The corresponding filled function algorithm for the global optimization problem (COP) is described as follows. The algorithm is referred to as FFCOP (the filled function method for problem (COP)).

**Algorithm FFCOP**

Step 0: Choose small positive numbers *ε*, *λ*, a large positive number *q ^{U}*, and an initial value *q*_{0} for the parameter *q* (e.g. *ε* = 10^{-8}, *λ* = 10^{-5}, *q ^{U}* = 10^{5} and *q*_{0} = 100). Choose a positive integer *K* (e.g. *K* = 2*n*) and let the directions *e _{l}*, *l* = 1,..., *K*, be the coordinate directions. Choose an initial point *x*_{0} ∈ *S*. Set *k* := 0.

If *f*(*x*_{0}) < *ε*, then let x̄ := *x*_{0} and go to Step 6; otherwise, let *q* := *q*_{0} and go to Step 1.

Step 1: Find a local minimizer x̄_{*k*} of problem (COP) by local search methods starting from *x _{k}*. If *f*(x̄_{*k*}) < *ε*, go to Step 6.

Step 2: Construct the filled function *F*(*x*, x̄_{*k*}, *q*) as in (3), where *H _{r, a}*(*t*) is defined by (1). Set *l* := 1 and *u* := 1.

Step 3: (a) If *l* > *K*, set *q* := 10*q* and go to Step 5; otherwise, go to (b).

(b) If *u* > *λ*, set *x* := x̄_{*k*} + *u e _{l}* and go to (c); otherwise, set *l* := *l* + 1, *u* := 1 and go to (a).

(c) If *x* ∈ *S*, go to (d); otherwise, decrease *u* and go to (b).

(d) If *f*(*x*) < *f*(x̄_{*k*})/2, then set *x _{k + 1}* := *x*, *k* := *k* + 1 and go to Step 1; otherwise, go to Step 4.

Step 4: Search for a local minimizer of the filled function problem (5) starting from *x*. Once a point *x*′ ∈ *Sº* with *f*(*x*′) < *f*(x̄_{*k*})/2 is obtained in the process of searching, set *x _{k + 1}* := *x*′, *k* := *k* + 1 and go to Step 1; otherwise, continue the process. Let x̂ be an obtained local minimizer of problem (5). If x̂ satisfies *f*(x̂) < *f*(x̄_{*k*})/2 and x̂ ∈ *Sº*, then set *x _{k + 1}* := x̂, *k* := *k* + 1 and go to Step 1; otherwise, decrease *u* and go to Step 3(b).

Step 5: If *q* < *q ^{U}*, go to Step 2.

Step 6: Let *x _{s}* := x̄_{*k*} and stop.
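A much-simplified sketch of the loop in Steps 1-5 for an unconstrained instance (*I* = ∅) follows. A naive coordinate search plays the role of the paper's quasi-Newton/SQP local solvers, the filled function form is an assumed stand-in rather than the paper's formula (3), and the test function is hypothetical.

```python
import numpy as np

def coord_search(g, x0, step=0.5, tol=1e-6):
    """Naive coordinate-descent local minimizer (a stand-in for the
    quasi-Newton / SQP local solvers used in the paper)."""
    x = np.array(x0, dtype=float)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                if g(y) < g(x):
                    x, improved = y, True
        if not improved:
            step /= 2.0
    return x

def ffcop_sketch(f, x0, q0=100.0, q_max=1e5, eps=1e-8, r=0.1, a=1.0):
    """Alternate local descent on f with descent on an assumed filled
    function F(x) = -||x - xs|| + q * H(f(x) - f(xs)/2)."""
    def H(t):  # C^1 transition: 0 for t <= -r, a for t >= 0
        if t <= -r:
            return 0.0
        if t >= 0.0:
            return a
        u = (t + r) / r
        return a * (3.0 * u**2 - 2.0 * u**3)

    xs = coord_search(f, x0)              # Step 1
    q = q0
    while f(xs) > eps and q < q_max:
        half = f(xs) / 2.0
        def F(x, xs=xs, half=half, q=q):  # Step 2: filled function at xs
            return -np.linalg.norm(x - xs) + q * H(f(x) - half)
        y = coord_search(F, xs + 0.25)    # Steps 3-4: try to escape the basin
        if f(y) < half:
            xs = coord_search(f, y)       # back to Step 1 from a better point
        else:
            q *= 10.0                     # Step 3(a)/5: enlarge q and retry
    return xs

# Hypothetical 1-D test: a local minimizer near x = -0.97, the global one
# (with objective value 0) at x = 2.
f = lambda x: float((x[0] - 2.0)**2 * ((x[0] + 1.0)**2 + 0.1))
xs = ffcop_sketch(f, np.array([-3.0]))
print(round(float(xs[0]), 2))  # 2.0
```

Starting from the left-hand basin, the sketch first stalls at the shallow local minimizer and then escapes it through the filled function phase, ending at the global minimizer where the objective vanishes.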

From Theorems 2.7-2.10, it can be seen that if *λ* is small enough, *q ^{U}* is large enough, and the direction set {*e*_{1},..., *e _{K}*} is large enough, then *x _{s}* can be obtained by Algorithm FFCOP within finitely many steps.

**4 Numerical experiment**

In this section, several sets of numerical experiments are presented to illustrate the efficiency of Algorithm FFCOP. All the numerical experiments are implemented in Matlab 2010b. In our programs, the local minimizers of problem (FFP) and problem (COP) are obtained by the quasi-Newton method and the SQP method, respectively. || ∇*f*(*x*) || ≤ 10^{-8} is used as the termination condition.

The data of the problems are shown in Table 1: the problem number in column 1, the source of the problem in column 2, and the numbers of equalities (*m*_{E}), inequalities (*m*_{I}) and variables (*n*) in the last columns.

Initial points are the same as in the cited references.

The problem number (column 1), the number of iterations (Iter, column 2), the approximate solution to (SNEI) (*x _{end}*, column 4) and the final objective function value obtained (*f*(*x _{end}*), column 5) are shown in Tables 2-4.

The first three problems involve nonlinear inequalities, Problem 4 involves square inequalities, and the last six problems involve linear inequalities. From Tables 2-3, it can be seen that the numbers of iterations for Problems 1, 2 and 3 are 1, 1 and 3, respectively, while the numbers of iterations for the same problems in [1] are 4, 11 and 8. Moreover, from Table 4, it can be seen that our filled function method is also effective for problems with square inequalities and linear inequalities.

**5 Conclusions**

In this paper, a filled function *F*(*x*, *x**, *q*) with one parameter has been constructed for solving nonlinear systems of equalities and inequalities, and it has been proved to satisfy the basic characteristics of the filled function definition. Promising computational results have been observed in our numerical experiments. In the future, the filled function method may be applied to other problems, such as nonlinear feasibility problems with expensive functions.

**Acknowledgments. ** We are greatly indebted to the anonymous referees and Professor J.M. Martínez as the Editor of our paper for their very careful and valuable comments that helped improve this manuscript.

**References**

[1] F.S. Bai, M. Mammadov, Z.Y. Wu and Y.J. Yang, *A filled function method for constrained nonlinear equations*. Pac. J. Optim., **4** (2008), 9-18.

[2] J. Barhen, V. Protopopescu and D. Reister, *TRUST: A deterministic algorithm for global optimization*. Science, **276** (1997), 1094-1097.

[3] A.R. Conn, N.I.M. Gould and P.L. Toint, *Trust region methods*. SIAM, Philadelphia, USA (2000).

[4] J.W. Daniel, *Newton's method for nonlinear inequalities*. Numer. Math., **21** (1973), 381-387.

[5] J.E. Dennis and R.B. Schnabel, *Numerical methods for unconstrained optimization and nonlinear equations*. SIAM, Philadelphia, USA (1996).

[6] J.E. Dennis Jr., M. El-Alem and K. Williamson, *A trust-region approach to nonlinear systems of equalities and inequalities*. SIAM J. Optim., **9** (1999), 291-315.

[7] I.I. Dikin, *Solution of systems of equalities and inequalities by the method of interior points*. Cybernetics and Systems Analysis, **40** (2004), 625-628.

[8] C.A. Floudas et al., *Handbook of Test Problems in Local and Global Optimization, Nonconvex Optimization and its Applications*. Kluwer Academic Publishers, Dordrecht, **33** (1999).

[9] R.P. Ge, *A filled function method for finding a global minimizer of a function of several variables*. Math. Program., **46** (1990), 191-204.

[10] R.P. Ge and Y.F. Qin, *A class of filled functions for finding global minimizers of a function of several variables*. J. Optim. Theory Appl., **54** (1987), 241-252.

[11] C.T. Kelley, *Iterative methods for linear and nonlinear equations*. SIAM, Philadelphia, USA (1995).

[12] A.V. Levy and A. Montalvo, *The tunneling algorithm for the global minimization of functions*. SIAM J. Sci. Stat. Comput., **6** (1985), 15-27.

[13] Y. Lin, Y.J. Yang and M. Mammadov, *A new filled function method for nonlinear equations*. Appl. Math. Comput., **210** (2009), 411-421.

[14] Y. Lin and Y. Yang, *Filled function method for nonlinear equations*. J. Comput. Appl. Math., **234** (2010), 695-702.

[15] M. Macconi, B. Morini and M. Porcelli, *Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities*. Appl. Numer. Math., **59** (2009), 859-876.

[16] B. Morini and M. Porcelli, *TRESNEI, a Matlab trust-region solver for systems of nonlinear equalities and inequalities*. Comput. Optim. Appl., (2010), doi: 10.1007/s10589-010-9327-5.

[17] U.M. Garcia-Palomares, *A global quadratic algorithm for solving a system of mixed equalities and inequalities*. Math. Program., **21** (1981), 290-300.

[18] B.T. Polyak, *Gradient methods for solving equations and inequalities*. USSR Comput. Math., **4** (1964), 17-32.

[19] B.N. Pshenichnyi, *Newton's method for the solution of systems of equalities and inequalities*. Mat. Zametki, **8**(5) (1970), 635-640, [Russian]; English translation: Math. Notes, **8** (1970), 827-830.

[20] A.H.G. Rinnooy Kan and G.T. Timmer, *Stochastic global optimization methods, part I: clustering methods*. Math. Program., **39** (1987), 27-56.

[21] A.H.G. Rinnooy Kan and G.T. Timmer, *Stochastic global optimization methods, part II: multi-level methods*. Math. Program., **39** (1987), 57-78.

[22] S.M. Robinson, *Extension of Newton's method to nonlinear functions with values in a cone*. Numer. Math., **19** (1972), 341-347.

[23] D. Bini and B. Mourrain, *Polynomial test suite*. Available at http://www-sop.inria.fr/saga/POL/.

[24] Z.Y. Wu, M. Mammadov, F.S. Bai and Y.J. Yang, *A filled function method for nonlinear equations*. Appl. Math. Comput., **189** (2007), 1196-1204.

[25] Z.Y. Wu, F.S. Bai, H.W.J. Lee and Y.J. Yang, *A filled function method for constrained global optimization*. J. Glob. Optim., (2007), doi: 10.1007/s10898-007-9152-2.

[26] L. Yang, Y.P. Chen and X.J. Tong, *Smoothing Newton-like method for the solution of nonlinear systems of equalities and inequalities*. Numer. Math. Theor. Meth. Appl., **2** (2009), 224-236.

[27] L.Y. Yuan, Z. Wan, J.J. Zhang and B. Sun, *A filled function method for solving nonlinear complementarity problems*. J. Ind. Manag. Optim., **5** (2009), 911-928.

Received: 10/IX/11.

Accepted: 03/X/11.

#CAM-410/11.

*This work was supported by the Natural Science Foundation of China (No.71171150).

^{**}Corresponding author.