A global linearization approach to solve nonlinear nonsmooth constrained programming problems

A.M. Vaziri(I); A.V. Kamyad(I); A. Jajarmi(II); S. Effati(I)

(I) Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran. E-mails: a_mvaziri@yahoo.com / avkamyad@yahoo.com / effati911@yahoo.com

(II) Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran. E-mail: jajarmi@stu-mail.um.ac.ir

ABSTRACT

In this paper we introduce a new approach to solve constrained nonlinear non-smooth programming problems with any desirable accuracy, even when the objective function is non-smooth. In this approach, for any given desirable accuracy, all the nonlinear functions of the original problem (in the objective function and in the constraints) are approximated by piecewise linear functions. We then present an efficient algorithm to find the global solution of the latter problem. The obtained solution has the desirable accuracy and the error is completely controllable. One of the main advantages of our approach is that it can be extended to problems with non-smooth structure by introducing a novel definition of global weak differentiation in the sense of the L1-norm. Finally, some numerical examples are given to show the efficiency of the proposed approach for solving constrained nonlinear non-smooth programming problems approximately.

Mathematical subject classification: 90C30, 49M37, 49M25.

Key words: nonlinear programming problem, non-smooth analysis, equicontinuity, uniform continuity.

1 Introduction

Practitioners frequently need to solve global optimization problems in many fields such as engineering design, molecular biology, neural network training and social science, so global optimization has become a popular computational task for researchers and practitioners. There are some interesting recent papers on solving nonlinear programming problems [3], non-smooth global optimization [6, 10], solving a class of non-differentiable programming problems based on a neural network method [11] and the controllability of time-varying systems [4].

One of the efficient approaches for solving nonlinear programming problems is to linearize the nonlinear functions after partitioning the domain into very small sub-domains. However, many realistic problems cannot be adequately linearized throughout their domain, so approximating nonlinear problems efficiently remains a focus of current research. Two other aspects that should be considered are non-convexity and non-smoothness, since our ability to obtain the global solution of nonlinear, non-convex and non-smooth problems (when it exists) is still limited. Therefore an efficient approach which is applicable in the presence of non-convex and non-smooth functions should be investigated (see [1, 2, 5]).

In this paper we introduce a new approach for the approximate solution of nonlinear non-smooth programming problems which imposes no limitation on the convexity or smoothness of the nonlinear functions. In this approach, any given nonlinear function is approximated by a piecewise linear function with controlled error. In this manner, the difference between the global solution of the approximated problem and that of the main problem is less than or equal to a desirable upper bound ε > 0. We also present an efficient algorithm to find the global solution of the approximated problem. One of the main advantages of our approach is that it can be extended to problems with non-smooth functions by introducing a novel definition of global weak differentiation in the sense of the L1-norm. The paper is organized as follows:

In Section 2 we explain our approach for one-dimensional nonlinear programming problems. In Section 3 we deal with the extension of our approach to n-dimensional nonlinear programming problems. In Section 4 the approach is extended to non-smooth nonlinear programming problems by introducing the definition of global weak differentiation. In Section 5 some illustrative examples are given to show the effectiveness of the proposed approach. Suggestions and conclusions are included in Section 6.

2 Proposed approach for one dimensional problem

Consider the following unconstrained nonlinear minimization problem:

$$\min_{x \in [a,b]} f(x) \qquad (1)$$

where f : [a, b] → R is a nonlinear smooth function. We may approximate the nonlinear function f(x) by a piecewise linear function defined on [a, b]. Let us mention the following definitions.

Definition 2.1. Let P_n([a, b]) be a partition of the interval [a, b] of the form:

$$P_n([a,b]) = \{ a = x_0 < x_1 < \cdots < x_n = b \} \qquad (2)$$

where h = (b − a)/n and x_i = x_0 + ih. The norm of the partition is defined by:

$$\| P_n([a,b]) \| = \max_{1 \le i \le n} (x_i - x_{i-1}) = h.$$

It is easy to show that ||P_n([a, b])|| → 0 as n → ∞.

Definition 2.2. The function f_i(x, s_i) is defined as follows:

$$f_i(x, s_i) = f(s_i) + f'(s_i)(x - s_i), \qquad x \in [x_{i-1}, x_i] \qquad (3)$$

where s_i ∈ (x_{i-1}, x_i) is an arbitrary point. The function f_i(x, s_i) is called the linear parametric approximation of f(x) on [x_{i-1}, x_i] at the point s_i ∈ (x_{i-1}, x_i). (In the usual linear expansion the point s_i is fixed, but here we assume s_i is a free point in [x_{i-1}, x_i].)

Now, we define g_n(x) as the parametric linear approximation of f(x) on [a, b], associated with the partition P_n, as follows:

$$g_n(x) = \sum_{i=1}^{n} f_i(x, s_i)\, \chi_{[x_{i-1}, x_i)}(x) \qquad (4)$$

where χ_A is the characteristic function defined as below:

$$\chi_A(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A. \end{cases}$$
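For concreteness, the construction in (3)-(4) can be sketched numerically. The following Python fragment is a minimal illustration only; the choice of s_i as the cell midpoint and the central-difference fallback for f' are our own assumptions, not prescribed by the paper.

```python
import numpy as np

def linear_parametric_approx(f, a, b, n, df=None):
    """Build g_n(x), the piecewise linear parametric approximation (4).

    Each piece is f_i(x, s_i) = f(s_i) + f'(s_i)(x - s_i) as in (3); s_i is
    taken as the midpoint of [x_{i-1}, x_i] (the paper leaves s_i free) and
    f' falls back to a central difference when no derivative is supplied.
    """
    x = np.linspace(a, b, n + 1)                 # partition points x_0, ..., x_n
    s = 0.5 * (x[:-1] + x[1:])                   # one free point s_i per cell
    if df is None:
        eps = 1e-6
        df = lambda t: (f(t + eps) - f(t - eps)) / (2 * eps)
    values, slopes = f(s), df(s)

    def g_n(t):
        t = np.atleast_1d(t)
        i = np.clip(np.searchsorted(x, t, side='right') - 1, 0, n - 1)
        return values[i] + slopes[i] * (t - s[i])    # f_i(t, s_i) on cell i
    return g_n

# usage: approximate f(x) = x sin(3x) on [0, 2] with n = 16 cells
f = lambda t: t * np.sin(3 * t)
g = linear_parametric_approx(f, 0.0, 2.0, 16)
t = np.linspace(0.0, 2.0, 1001)
print(np.max(np.abs(f(t) - g(t))))               # sup-norm gap shrinks as n grows
```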

The following theorems show that g_n(x) converges uniformly to the original nonlinear function f(x) when ||P_n([a, b])|| → 0. In other words, we show that

$$\lim_{n \to \infty} \sup_{x \in [a,b]} |f(x) - g_n(x)| = 0.$$

Lemma 2.3. Let P_n([a, b]) be an arbitrary regular partition of [a, b]. If f(x) is a continuous function on [a, b] and x, s ∈ [x_{i-1}, x_i] are arbitrary points, then

$$|f(x) - f_i(x, s)| \to 0 \quad \text{as} \quad \| P_n([a,b]) \| \to 0.$$

Proof. The proof is an immediate consequence of the definition. This lemma shows that g_n → f point-wise on [a, b].

Definition 2.4. A family F of complex functions f defined on a set A in a metric space X is said to be equicontinuous on A if for every ε > 0 there exists δ > 0 such that |f(x) - f(y)| < ε whenever d(x, y) < δ, x ∈ A, y ∈ A, f ∈ F. Here d(x, y) denotes the metric of X (see [7]).

Since {gn(x)} is a sequence of linear functions it is trivial that this sequence is equicontinuous.

Theorem 2.1. Let {f_n} be an equicontinuous sequence of functions on a compact set A such that {f_n} converges point-wise on A. Then {f_n} converges uniformly on A.

Proof. Since {f_n} is an equicontinuous sequence of functions on A, for every ε > 0 there exists δ > 0 such that

$$|f_n(x) - f_n(y)| < \varepsilon \quad \text{whenever} \quad d(x, y) < \delta, \qquad n = 1, 2, \ldots$$

For each x ∈ A consider the open ball B(x, δ). Since A is compact, this open covering of A has a finite sub-covering. Thus, there exists a finite number of points x_1, x_2, ..., x_r in A such that A ⊆ ∪_{i=1}^{r} B(x_i, δ). Therefore for each x ∈ A there exists x_i ∈ A, i ∈ {1, 2, ..., r}, such that d(x, x_i) < δ.

We know {f_n} is a point-wise convergent sequence, so there exists a natural number N such that for each n > N, m > N we have:

$$|f_n(x_i) - f_m(x_i)| < \varepsilon, \qquad i = 1, 2, \ldots, r.$$

Hence, for every x ∈ A and n, m > N,

$$|f_n(x) - f_m(x)| \le |f_n(x) - f_n(x_i)| + |f_n(x_i) - f_m(x_i)| + |f_m(x_i) - f_m(x)| < 3\varepsilon.$$

Then according to Theorem 7.8 in [7] (the Cauchy criterion for uniform convergence) the sequence {f_n} converges uniformly on A and the proof is completed.

Theorem 2.2. Let g_n(x) be the piecewise linear approximation of f(x) on [a, b] as in (4). Then:

$$\lim_{n \to \infty} \sup_{x \in [a,b]} |g_n(x) - f(x)| = 0.$$

Proof. The proof is an immediate consequence of Lemma 2.3 and Theorem 2.1.

Now, we introduce a novel definition of the global error of approximating f(x) by the linear parametric function g_n(x) in the sense of the L1-norm, which is a suitable criterion for the goodness of fit.

Definition 2.5. Let f(x) be a nonlinear smooth function defined on [a, b] and let g_n(x) defined in (4) be a parametric linear approximation of f(x). The global error for the approximation of the function f(x) by the function g_n(x) in the sense of the L1-norm is defined as follows:

$$E_n = \int_a^b |f(x) - g_n(x)| \, dx. \qquad (5)$$

It is easy to show that E_n tends to zero when ||P_n([a, b])|| → 0.

This definition is used to construct a partition fine enough to match a desirable accuracy. Such a partition can be obtained by the following iterative algorithm.

Step 1. Select an acceptable upper bound U_ε for the desirable global error of approximation and set n = 1.

Step 2. Substitute n by 2n and determine E_n as in (5).

Step 3. If E_n > U_ε go to Step 2; otherwise end the process.
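A minimal numerical sketch of this doubling loop follows; the trapezoidal quadrature used for E_n in (5) and the helper linear_parametric_approx from the earlier sketch are our own choices, not prescribed by the paper.

```python
import numpy as np

def find_partition(f, a, b, U_eps, n_quad=20001):
    """Double n until the global L1 error E_n of (5) drops to U_eps or below."""
    t = np.linspace(a, b, n_quad)                  # quadrature grid for (5)
    n = 1
    while True:
        n *= 2                                     # Step 2: n <- 2n
        g = linear_parametric_approx(f, a, b, n)   # sketch from earlier in Section 2
        E_n = np.trapz(np.abs(f(t) - g(t)), t)     # E_n = int_a^b |f - g_n| dx
        if E_n <= U_eps:                           # Step 3: stop when fine enough
            return n, E_n

f = lambda t: t * np.sin(3 * t)
print(find_partition(f, 0.0, 2.0, U_eps=1e-6))
```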

The value of n achieved by the above algorithm indicates the number of sub-intervals in a partition matched with the desirable accuracy. Let f(x) in problem (1) be replaced with its piecewise linear approximation g_n(x). We then have the following minimization problem:

$$\min_{x \in [a,b]} g_n(x) \qquad (6)$$

whose solution is an approximation of the solution of problem (1). We want this approximate solution to have a given desirable accuracy, so the partition should be chosen fine enough. But how fine should the partition be? This question is answered in the next section.

2.1 Error analysis for one dimensional problem

Assume that the global optimum solutions of (6) and (1) occur at x = α and x = β respectively. This means that:

$$g_n(\alpha) = \min_{x \in [a,b]} g_n(x), \qquad f(\beta) = \min_{x \in [a,b]} f(x).$$

Now it is desirable to find an appropriate partition such that for any given ε > 0 the following inequality holds:

$$|g_n(\alpha) - f(\beta)| \le \varepsilon. \qquad (7)$$

The following theorems are proved to show how this goal is achieved.

Theorem 2.3. Consider the nonlinear real function f(x) and its piecewise linear approximation g_n(x) defined in (4). Then, for each x ∈ [a, b] and ε > 0 such that ε << b − a, we have:

$$|f(x) - g_n(x)| \le \frac{E_n}{\varepsilon}$$

where E_n is the global error of fitting defined in (5).

Proof. We know that [a, b] = [a, b) ∪ {b}. Thus the above inequality is proved separately on [a, b) and on {b} as follows.

First consider [a, b). For each x ∈ [a, b) there exists ε_1 > 0 such that [x, x + ε_1] ⊆ [a, b]. Therefore we have:

$$\int_x^{x+\varepsilon_1} |f(t) - g_n(t)| \, dt \le \int_a^b |f(t) - g_n(t)| \, dt.$$

According to (5) the right-hand side of the above inequality is E_n. Additionally, if ε_1 is chosen such that ε_1 << b − a, the left-hand side of the above inequality can be approximated using the rectangular rule. Therefore we have:

$$\varepsilon_1 |f(x) - g_n(x)| \le E_n \quad \Longrightarrow \quad |f(x) - g_n(x)| \le \frac{E_n}{\varepsilon_1}.$$

Now consider {b}. For x = b there exists ε_2 > 0 such that [x − ε_2, x] ⊆ [a, b]. Therefore we have:

$$\int_{x-\varepsilon_2}^{x} |f(t) - g_n(t)| \, dt \le E_n.$$

If ε_2 is chosen such that ε_2 << b − a, the left-hand side of the above inequality is approximated in the same manner, which yields:

$$|f(x) - g_n(x)| \le \frac{E_n}{\varepsilon_2}.$$

Let ε < min{ε_1, ε_2}. According to the above discussion, for any x ∈ [a, b) ∪ {b} = [a, b] we have:

$$|f(x) - g_n(x)| \le \frac{E_n}{\varepsilon}.$$

Thus the proof is completed.

Theorem 2.4. Let f(x) be a nonlinear function. If for each ε > 0 we have E_n < ε², then (7) is satisfied.

Proof. Let ε << b − a. According to Theorem 2.3 we have:

$$g_n(x) - \frac{E_n}{\varepsilon} \le f(x) \le g_n(x) + \frac{E_n}{\varepsilon}, \qquad x \in [a, b].$$

First, consider the right inequality, i.e.:

$$f(x) \le g_n(x) + \frac{E_n}{\varepsilon}.$$

According to the definition of f(β) we have:

$$f(\beta) \le f(x) \le g_n(x) + \frac{E_n}{\varepsilon}, \qquad x \in [a, b].$$

Let x = α, so we have:

$$f(\beta) - g_n(\alpha) \le \frac{E_n}{\varepsilon}.$$

Now consider the left inequality, i.e.:

$$g_n(x) - \frac{E_n}{\varepsilon} \le f(x).$$

According to the definition of g_n(α) we have:

$$g_n(\alpha) \le g_n(x) \le f(x) + \frac{E_n}{\varepsilon}, \qquad x \in [a, b].$$

Setting x = β, we have:

$$g_n(\alpha) - f(\beta) \le \frac{E_n}{\varepsilon}.$$

Let n be chosen such that E_n < ε². Then the above inequalities are transformed into the following one:

$$|g_n(\alpha) - f(\beta)| \le \frac{E_n}{\varepsilon} < \varepsilon$$

and the proof is complete.

2.2 Description of the algorithm for the one-dimensional problem

According to the previous section, in the first step of our algorithm for finding the optimum solution of a nonlinear constrained programming problem with a desirable accuracy ε, we must find an appropriate partition of [a, b]. Then the function f(x) must be approximated by the parametric linear function g_n(x).

At the next step the global optimum solution of problem (6) must be calculated, which is an accurate approximation of the global optimum solution of problem (1). Here an efficient algorithm to solve problem (6) is presented.

In each sub-interval of the form [x_{i-1}, x_i] we have the following optimization problem:

$$\min_{x \in [x_{i-1}, x_i]} f_i(x, s_i) \qquad (8)$$

where f_i(x, s_i) is the parametric linear approximation of f(x) defined in (3). Since f_i(x, s_i) has an affine form a_i x + b_i (with a_i = f'(s_i) and b_i = f(s_i) − s_i f'(s_i)), based on the sign of a_i the global minimum of f_i(x, s_i) occurs at an extreme point of its validity domain, i.e., on {x_{i-1}, x_i}. Thus the optimization problem (8) is transferred to the following one:

$$\min \{ f_i(x_{i-1}, s_i), \, f_i(x_i, s_i) \}. \qquad (9)$$

Here we define α_i, i = 1, ..., n, as the global solution of problem (9). So α_i can be formulated as follows:

$$\alpha_i = \min \{ f_i(x_{i-1}, s_i), \, f_i(x_i, s_i) \}.$$

Therefore the optimization problem (6) is converted to the following one:

$$\min_{1 \le i \le n} \alpha_i.$$
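This endpoint enumeration is straightforward to implement. Below is a minimal sketch under the same assumptions as the earlier fragments (midpoint choice of s_i, central-difference derivative when none is given); it returns the approximate minimizer and minimum of (6).

```python
import numpy as np

def minimize_piecewise(f, a, b, n, df=None):
    """Solve (6): evaluate each affine piece f_i at its cell endpoints,
    take alpha_i as the smaller value, and return the overall minimum."""
    x = np.linspace(a, b, n + 1)
    s = 0.5 * (x[:-1] + x[1:])                    # free points s_i (assumed midpoints)
    if df is None:
        eps = 1e-6
        df = lambda t: (f(t + eps) - f(t - eps)) / (2 * eps)
    left  = f(s) + df(s) * (x[:-1] - s)           # f_i(x_{i-1}, s_i)
    right = f(s) + df(s) * (x[1:]  - s)           # f_i(x_i,     s_i)
    alpha = np.minimum(left, right)               # alpha_i, the solution of (9)
    i = int(np.argmin(alpha))                     # best cell index
    x_star = x[i] if left[i] <= right[i] else x[i + 1]
    return x_star, alpha[i]

f = lambda t: t * np.sin(3 * t)
print(minimize_piecewise(f, 0.0, 2.0, 128))       # approximate argmin and min
```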

3 Extension of the proposed approach for n dimensional problems

Consider the following nonlinear minimization problem:

$$\min_{x \in A} f(x) \qquad (10)$$

where A = [a_1, b_1] × · · · × [a_n, b_n] ⊆ R^n and f(·) : A → R is a nonlinear smooth function. Here we introduce a piecewise linear parametric approximation of f(x) which is the extension of Definition 2.2.

Definition 3.1. Consider the nonlinear smooth function f(·) : A → R where A = [a_1, b_1] × · · · × [a_n, b_n]. Also consider P_{n_i}([a_i, b_i]) as a regular partition of [a_i, b_i], i = 1, ..., n, as follows:

$$P_{n_i}([a_i, b_i]) = \{ a_i = x^i_0 < x^i_1 < \cdots < x^i_{n_i} = b_i \}, \qquad x^i_{k_i} = a_i + k_i h_i, \quad h_i = \frac{b_i - a_i}{n_i}$$

where k_i = 0, 1, ..., n_i and i = 1, ..., n.

Therefore A is partitioned into N cells, where N = n_1 × · · · × n_n. Let us denote the kth cell by E_k, k = 1, ..., N, and let s^k = (s^k_1, ..., s^k_n) be an arbitrary point of E_k. Now f_k(x) is defined as the linear parametric approximation of f(x) for x ∈ E_k as follows:

$$f_k(x) = f(s^k) + \nabla f(s^k)^T (x - s^k) \qquad (11)$$

where x ∈ E_k, k = 1, ..., N.

Now g_N(x) is defined as a piecewise linear approximation of f(x) as follows:

$$g_N(x) = \sum_{k=1}^{N} f_k(x)\, \chi_{E_k}(x).$$

By the same arguments as in the one-dimensional case we have g_N → f uniformly on A, or equivalently

$$\lim_{N \to \infty} \sup_{x \in A} |f(x) - g_N(x)| = 0.$$

Now a definition of the global error between the nonlinear function f(x) and its piecewise linear approximation g_N(x) in the sense of the L1-norm is introduced, which is the extension of Definition 2.5.

Definition 3.2. Consider the nonlinear smooth function f(x) and its piecewise linear approximation g_N(x). We define the global error of approximation in the sense of the L1-norm to be E_N as follows:

$$E_N = \int_A |f(x) - g_N(x)| \, dx. \qquad (12)$$

Remark 3.3. The iterative algorithm presented in Section 2 can be used to find the appropriate number of partitions. As before, this number is increased until the approximation achieves a desirable accuracy.

Therefore the following minimization problem must be solved:

$$\min_{x \in A} g_N(x).$$

The solution of this optimization problem is an approximate solution of the original problem (10). Since we want to achieve a given desirable accuracy, the partition should again be chosen fine enough. Therefore the method which has been explained in Section 2.1 is extended.

3.1 Error analysis for n dimensional problems

Assume that the global minima of g_N(x) and f(x) on A occur at x = α and x = β respectively. An appropriate partition must then be found such that:

$$|g_N(\alpha) - f(\beta)| \le \varepsilon$$

where ε is a given desirable error.

Since the above inequality must be satisfied, the argument of Section 2.1 is repeated in n dimensions. In this way, we find N such that E_N < ε^{n+1} (E_N is defined in (12)).

3.2 Description of the algorithm for n dimensional problems

According to the manner explained in the previous sections, the following independent linear optimization problems are defined:

$$\min_{x \in E_k} f_k(x), \qquad k = 1, \ldots, N \qquad (13)$$

where f_k(x) is the linear parametric approximation of f(x) on E_k defined in (11). Since f_k(x) has the affine form

$$f_k(x) = a_k^T x + b_k, \qquad a_k = \nabla f(s^k), \quad b_k = f(s^k) - \nabla f(s^k)^T s^k,$$

based on the signs of the components of a_k the global minimum of f_k(x) occurs at one of the 2^n extreme points (vertices) of its validity domain E_k.

Therefore the optimization problem (13) is transferred to the following one:

$$\min \{ f_k(v) : v \ \text{a vertex of} \ E_k \}. \qquad (14)$$

Here we define α_k, k = 1, ..., N, as the global solution of problem (14). Thus the optimization problem (13) is converted to the following simpler one:

$$\min_{1 \le k \le N} \alpha_k.$$
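A compact sketch of this cell-and-vertex enumeration in n dimensions (our own illustration: the gradient in (11) is replaced by finite differences and s^k is taken as the cell center):

```python
import itertools
import numpy as np

def minimize_piecewise_nd(f, bounds, parts):
    """Minimize the piecewise linear model over the box A = prod_i [a_i, b_i].

    Each cell E_k contributes alpha_k, the minimum of its affine piece f_k
    over the 2^n cell vertices as in (14); the answer is min_k alpha_k.
    """
    grids = [np.linspace(a, b, m + 1) for (a, b), m in zip(bounds, parts)]
    eps, best_val, best_x = 1e-6, np.inf, None
    for idx in itertools.product(*(range(m) for m in parts)):   # all cells E_k
        lo = np.array([g[i] for g, i in zip(grids, idx)])
        hi = np.array([g[i + 1] for g, i in zip(grids, idx)])
        s = 0.5 * (lo + hi)                        # cell point s^k (assumed center)
        grad = np.array([(f(s + eps * e) - f(s - eps * e)) / (2 * eps)
                         for e in np.eye(len(s))])
        for corner in itertools.product(*zip(lo, hi)):          # 2^n vertices
            v = np.array(corner)
            val = f(s) + grad @ (v - s)            # affine piece f_k(v) as in (11)
            if val < best_val:
                best_val, best_x = val, v
    return best_x, best_val

# usage on a 2-D box: f(x, y) = (x - 1)^2 + (y + 0.5)^2 on [-2, 2] x [-2, 2]
f = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 0.5) ** 2
print(minimize_piecewise_nd(f, [(-2, 2), (-2, 2)], [16, 16]))
```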

4 Extension to nonlinear non-smooth problems

In general it is reasonable to allow the objective function to be non-smooth. Therefore we define a kind of generalized differentiation for non-smooth functions in the sense of the L1-norm. This kind of differentiation coincides with the usual differentiation for smooth functions, as the following theorem shows.

Theorem 4.1. Consider the nonlinear smooth function f : A → R where A = [a_1, b_1] × · · · × [a_n, b_n]. Then the optimal solution of the following optimization problem is p(·) = f'(·):

$$\min_{p(\cdot)} \int_A \Big| f(x) - f(s) - \sum_{i=1}^{n} p_i(s)(x_i - s_i) \Big| \, dx \qquad (15)$$

where s = (s_1, s_2, ..., s_n) ∈ A is an arbitrary point and p(·) = (p_1(·), ..., p_n(·)) is a vector function.

Proof. See [9].
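As a rough numerical check of this statement, the following sketch minimizes the L1 linearization error over a small interval around a fixed s for a scalar smooth function; the recovered slope approaches f'(s). The cell size, quadrature grid, and the scipy-based scalar minimization are our own choices, and the objective mirrors our reading of (15), not a computation given in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weak_slope(f, s, h=1e-2, m=2001):
    """Slope p minimizing int_{s-h}^{s+h} |f(x) - f(s) - p (x - s)| dx,
    evaluated by trapezoidal quadrature and bounded scalar minimization."""
    x = np.linspace(s - h, s + h, m)
    err = lambda p: np.trapz(np.abs(f(x) - f(s) - p * (x - s)), x)
    return minimize_scalar(err, bounds=(-1e3, 1e3), method='bounded').x

f = lambda t: t ** 2
print(weak_slope(f, 0.7))   # close to f'(0.7) = 1.4 for this smooth f
```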

Now based on Theorem 4.1 the following definition can be stated for non-smooth functions.

Definition 4.1. Let f : A → R be a non-smooth function where A = [a_1, b_1] × · · · × [a_n, b_n]. The global weak derivative of f with respect to x in the sense of the L1-norm is defined as the function p(·) that is the optimal solution of the minimization problem shown in (15).

5 Examples

In this section we illustrate the performance of our method on some examples.

Example 5.1. Consider the following nonlinear minimization problem:

It is desirable to solve this problem with accuracy ε = 10^{-3}.

Based on our proposed approach we approximate f(x) by a piecewise linear function with global error less than (10^{-3})^2. An appropriate number of partitions matched with the desirable accuracy is obtained as n = 128. Figure 1 shows f(x) and its sufficiently accurate piecewise linear approximation.


Table 5.1 compares the approximate and exact solutions of this example. The comparison shows the effectiveness of the proposed approach in solving this problem with the desirable accuracy.

Example 5.2. Consider the following minimization problem (see Schuldt [8]):

Here it is desirable to solve the above problem with accuracy ε = 10^{-3}. Thus we approximate f(x, y) by a piecewise linear function with global error less than (10^{-3})^3. Table 5.2 compares the solution obtained by our proposed approach with the exact solution of this problem. It can be seen that the proposed approach is effective in solving the problem with the desirable accuracy.

Example 5.3. In this example we consider a nonlinear non-smooth minimization problem as follows:

$$\min_{x \in [-1,1]} f(x) = |x| e^{-|x|}.$$

It is desirable to solve it with accuracy ε = 10^{-5}.

Since the objective function is non-smooth, we find the global weak derivative of f(x) = |x|e^{-|x|}, x ∈ [−1, 1], which is the optimal solution of the optimization problem (15) for this f.

The optimal solution is shown in Figure 2.


Now we find a piecewise linear approximation of the non-smooth function f(x) = |x|e^{-|x|} on [−1, 1] with global error less than (10^{-5})^2. Therefore the number of partitions should be chosen as n ≥ 512. Figure 3 shows f(x) and its sufficiently accurate piecewise linear approximation with n = 512.
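A sketch of the whole pipeline on this example, reusing minimize_piecewise from Section 2.2 above; since f is non-smooth at x = 0 we supply the closed-form weak derivative sign(x)(1 − |x|)e^{-|x|}, valid away from the kink. This substitution is our illustration, not the paper's computation.

```python
import numpy as np

f  = lambda t: np.abs(t) * np.exp(-np.abs(t))
# weak derivative of f away from the kink at x = 0 (closed form)
df = lambda t: np.sign(t) * (1.0 - np.abs(t)) * np.exp(-np.abs(t))

x_star, f_star = minimize_piecewise(f, -1.0, 1.0, 512, df=df)
print(x_star, f_star)       # expected: x close to 0, minimum close to 0
```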


Table 5.3 compares the approximate and exact solutions of the last example. The comparison shows the effectiveness of the proposed approach in the presence of non-smooth functions.

6 Conclusion

In this paper we introduced a new approach for the approximate solution of a wide class of constrained nonlinear programming problems. The main advantage of this approach is that we obtain an approximation of the optimum solution of the problem with any desirable accuracy. Moreover, the approach can be extended to problems with non-smooth dynamics by introducing a novel definition of global weak differentiation in the sense of the L1 and Lp norms. In this paper we allow f to be a non-smooth function, so it may have finitely or infinitely many points where the gradient of f does not exist. Remarkably, we need not know where these points are located, and the set of points where the functions are non-smooth may even be infinite.

Received: 28/IV/10.

Accepted: 16/VIII/10.

#CAM-209/10.

  • [1] M.S. Bazaraa, J.J. Jarvis and H.D. Sherali, Linear Programming and Network Flows, 2nd Edition, John Wiley and Sons, New York (1990).
  • [2] M.S. Bazaraa, H.D. Sherali and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd Edition, John Wiley and Sons, New York (1993).
  • [3] C. Still and T. Westerlund, A linear programming-based optimization algorithm for solving nonlinear programming problems, European Journal of Operational Research, Article in press.
  • [4] A.V. Kamyad and H.H. Mehne, A linear programming approach to the controllability of time-varying systems, IUST - Int. J. Eng. Sci., 14(4) (2003).
  • [5] D.G. Luenberger, Linear and Nonlinear Programming, Stanford University, California (1984).
  • [6] Y. Peng, H. Feng and Q. Li, A filter-variable-metric method for nonsmooth convex constrained optimization, Applied Mathematics and Computation, 208 (2009), 119-128.
  • [7] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, Inc. (1976).
  • [8] S.B. Schuldt, A method of multipliers for mathematical programming problems with equality and inequality constraints, Journal of Optimization Theory and Applications, 17(1/2) (1975), 155-161.
  • [9] A.M. Vaziri, A.V. Kamyad, S. Effati and M. Gachpazan, A parametric linearization approach for solving nonlinear programming problems, Aligarh Journal of Statistics, Article in press.
  • [10] Y. Zhang, L. Zhang and Y. Xu, New filled functions for nonsmooth global optimization, Applied Mathematical Modelling, 33 (2009), 3114-4129.
  • [11] Y. Yang and J. Cao, The optimization technique for solving a class of non-differentiable programming based on neural network method, Nonlinear Analysis, Article in press.
