
A Heuristic Algorithm Based on Line-up Competition and Generalized Pattern Search for Solving Integer and Mixed Integer Non-linear Optimization Problems

Abstract

The global optimization of integer and mixed integer non-linear problems has many applications in engineering. In this paper, a heuristic algorithm is developed using line-up competition and generalized pattern search to solve integer and mixed integer non-linear optimization problems subject to various linear and nonlinear constraints. Owing to its ability to find more than one local or global optimum, the proposed algorithm is particularly beneficial for multi-modal problems. Its performance is demonstrated on several non-convex integer and mixed integer optimization problems, and the results show good agreement with those reported in the literature. In addition, the convergence time is compared with that of the line-up competition algorithm (LCA), demonstrating the efficiency and speed of the proposed algorithm. Meanwhile, the constraints are satisfied after only a few iterations.

Keywords:
Global optimization; integer optimization; mixed integer optimization; multi-modal problems; constrained optimization

1 INTRODUCTION

A vast number of optimization problems deal with integer variables. Due to the complexity of either the objective function or the constraints, these problems can be a real challenge. The presence of nonlinearities in the objective and constraint functions might imply non-convexity in mixed integer nonlinear programming (MINLP) problems, i.e. the potential existence of multiple local solutions. A mixed integer optimization problem can be formulated as:

Min f(x)

S.T:

gi(x) ≤ 0, i = 1, ..., p

hj(x) = 0, j = 1, ..., q

xk ∈ R, k = 1, ..., s

xk ∈ Z, k = s + 1, ..., n

lr ≤ xr ≤ ur, r = 1, ..., n

where n is the number of variables, x is the vector of variables, and (lr, ur) are the bounds of the variable xr.

The solution methods are classified into three major classes: relaxation methods, search heuristics, and pattern search methods (Abramson, 2002).

Relaxation methods, such as outer approximation (Duran and Grossmann, 1986; Fletcher and Leyffer, 1994), generalized Benders decomposition (Geoffrion, 1972), branch and bound (Dakin, 1965; Leyffer, 1998; Kesavan and Barton, 2000), and extended cutting plane methods (Kelley, 1960; Marchand et al., 2002; Wang, 2009), involve solving several sub-problems. The solution process needs the linearization of some sub-problems, which in turn requires the cost function and constraints to be differentiable.

Search heuristics are methods designed to find global optima without using derivative information by systematically searching the solution space (Abramson, 2002). These methods are often based on the principles of natural biological evolution. The algorithms most relevant to solving MINLP are simulated annealing (Gidas, 1985), Tabu search (Glover, 1990; Glover, 1994), and evolutionary algorithms such as the genetic algorithm (Holland, 1962; Costa and Oliveira, 2001; Deep et al., 2009), evolution strategies (Costa and Oliveira, 2001), evolutionary programming (Fogel et al., 1966), ant colony optimization (Schluter et al., 2009), particle swarm optimization (Coelho, 2009), and differential evolution (Ponsich and Coello, 2009; Lin et al., 2004).

Pattern search methods were proposed to minimize a continuous function without any knowledge of its derivatives. The class of generalized pattern search (GPS) methods was introduced for solving unconstrained non-linear programs (Box, 1975) and was later used to optimize mixed integer constrained non-linear optimization problems (Audet and Dennis, 2001).

The line-up competition algorithm (LCA), which belongs to the class of evolutionary algorithms, was proposed to optimize non-linear (Yan and Ma, 2001) and mixed integer non-linear optimization problems (Yan et al., 2004).

In this paper, an algorithm based on line-up competition and generalized pattern search is proposed to optimize integer and mixed integer non-linear problems. Using this algorithm, more than one optimal point can be obtained, which makes it appropriate for multi-modal problems. The present algorithm is simple, easy to implement, and fast. The rest of the paper is organized as follows. In Section 2, the proposed algorithm is described and all the required steps are mathematically formulated. Section 3 is devoted to the numerical implementation of the algorithm. Finally, the performance of the algorithm is tested through several examples in Section 4.

2. ALGORITHM

This section describes the proposed algorithm. First, in subsection 2.1, an overall perspective of the algorithm is presented, and the steps to be followed are explained. In subsections 2.2 to 2.7, these steps are mathematically formulated and their details are discussed.

2.1. Outline of the Present Algorithm

In the present algorithm, a uniform mesh is first generated over the solution space as the initial population. This uniform mesh guarantees that the initial population covers the whole search space, so that no region of the space is lost. In the rest of the paper, each point of the mesh is called a "family". These families are ranked to form a line-up according to the values of their objective functions; i.e., the best family is placed first in the line-up, while the worst is placed last. Based on the position of each family in the line-up, a search space is allocated to it. In the next step, each family produces 2n children in its allocated search space, where n is the number of variables. These children are produced using the generalized pattern search method, which ensures that all directions of the corresponding search space are covered. The members of each family compete with each other, as well as with their father, and the best one survives as the father of the next generation. The algorithm can be described as follows:

  1. Generate mesh points on the search space and compute the value of the objective function for each family.

  2. Rank the mesh points to form a line-up according to their objective function values. For a minimization problem the line-up is an ascending sequence, and a descending one for maximization.

  3. Allocate a search space to each family according to their position in the line-up. The best family (first in the line-up) has the smallest search space, while the worst (final in the line-up) has the biggest search space.

  4. Produce 2n children using the generalized pattern search method. Then, the children compete with each other, as well as with their father, and the best one survives as the next generation's father.

  5. Update the search space according to the first f families. The search space is expanded if there is at least one improvement among the first f families and contracted if there is none. Notice that the value of f is defined by the user and helps to find more than one optimal point.

  6. If the stopping criterion is not satisfied, return to step (3).

The above-mentioned steps are described in mathematical terms in the following subsections.
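The six steps above can also be sketched as a single loop. The sketch below is illustrative only, not the authors' implementation: the population handling, the GPS poll, and the search-space update are passed in as functions, and all names (`poll`, `update_width`, `f_best`) are hypothetical.

```python
def optimize(objective, mesh, poll, update_width, widths,
             f_best=1, max_iter=100, eps=1e-6):
    """Minimal sketch of the main loop: rank the families, let each family
    poll for children, and expand/contract the allocated search widths."""
    fathers = list(mesh)                      # step 1: initial families
    values = [objective(x) for x in fathers]
    for _ in range(max_iter):
        # step 2: form the line-up, best family first (minimization)
        order = sorted(range(len(fathers)), key=lambda j: values[j])
        improved = False
        for rank, j in enumerate(order):
            child = poll(fathers[j], widths[j])   # step 4: best of 2n children
            v = objective(child)
            if v < values[j]:
                fathers[j], values[j] = child, v
                if rank < f_best:                 # improvement in first f?
                    improved = True
        # step 5: expand on improvement, contract otherwise
        widths = [update_width(w, improved) for w in widths]
        if max(widths) < eps:                     # step 6: stopping criterion
            break
    return fathers, values
```

For instance, with a one-dimensional quadratic objective and a poll that returns the best of {x, x+w, x-w}, the loop contracts onto the minimizer.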

2.2. Mesh Generation

To start the optimization process, it is necessary to generate an initial population. In this paper, the initial population is generated using a regular mesh over the search space; that is, deterministically generated points are distributed along each direction corresponding to each variable.

Let's define vector m, whose entries m1 to mn denote the number of mesh points to be generated in (l1, u1) to (ln, un), respectively. The mesh points related to mj are generated in (lj, uj) utilizing the following equation.

where Δj = uj - lj is the interval length, the generated point is the tj-th value in (lj, uj), and INT denotes the integer operator. Notice that the mj control the number of mesh points.

A matrix, M, containing the mesh points is defined, which has n rows and C columns, where C is the total number of mesh points, given by the product C = m1 m2 ⋯ mn.

Each column of M denotes a mesh point in the search space. The entries of this matrix are generated using Eq. 3,

in which aq = 1, ..., mq. Figure 1 shows the mesh generated for n=3, m=[3 2 3]T, l=[0 2 1]T, and u=[1 4 10]T, with the use of Eq. 3.

Figure 1
Generated mesh for n=3, m=[3 2 3]T, l=[0 2 1]T, and u=[1 4 10]T using Eq. 3.

The mesh matrix, M, for this example is as follows:
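The mesh of Figure 1 can be generated programmatically. The sketch below assumes (since Eq. 1 is not reproduced here) that the mj points are evenly spaced over [lj, uj] including both endpoints, which matches the Figure 1 data; `generate_mesh` is an illustrative name.

```python
import numpy as np
from itertools import product

def generate_mesh(m, l, u):
    """Build the n x C mesh matrix M whose columns are the initial families.

    Assumption: the m[j] points in (l[j], u[j]) are evenly spaced and
    include both endpoints, consistent with the Figure 1 example."""
    axes = []
    for mj, lj, uj in zip(m, l, u):
        if mj == 1:
            axes.append([0.5 * (lj + uj)])    # single point: take midpoint
        else:
            axes.append(np.linspace(lj, uj, mj))
    # Cartesian product of per-variable grids -> C = m1*m2*...*mn columns
    cols = list(product(*axes))
    return np.array(cols).T                   # shape (n, C)

# The Figure 1 example: n = 3, m = [3 2 3], l = [0 2 1], u = [1 4 10]
M = generate_mesh([3, 2, 3], [0, 2, 1], [1, 4, 10])
print(M.shape)  # (3, 18)
```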

2.3. Ranking the Families

The columns of the mesh matrix, M, should be ranked based on their objective function values. Considering W = [w1 w2 ... wC] as the vector whose entries are the ranked objective function values, V is the matrix whose columns are the mesh points corresponding to the entries of W.
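The ranking step amounts to sorting the columns of M by objective value. A minimal sketch (the quadratic objective in the usage line is only an illustration):

```python
import numpy as np

def rank_families(M, objective):
    """Rank the columns of the mesh matrix M in ascending objective order
    (a minimization line-up): returns W (ranked values) and V (ranked mesh)."""
    values = np.array([objective(M[:, j]) for j in range(M.shape[1])])
    order = np.argsort(values)            # best family placed first
    return values[order], M[:, order]

# illustrative one-variable mesh with objective f(x) = x^2
W, V = rank_families(np.array([[3.0, 1.0, 2.0]]), lambda x: float(x[0] ** 2))
print(W)  # [1. 4. 9.]
```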

2.4. Allocation of the Search Space

Allocation of the search space is based on the position of each family in the line-up: the best family has the smallest search space and the worst has the biggest one. The search space for the j-th mesh point is a rectangular region. The lower and upper bounds of the i-th variable for the j-th family (mesh point) in the k-th generation are calculated using Eq. 4 and Eq. 5, respectively:

2.5. Production of Children

In each family, 2n children are produced utilizing GPS(2n). Let's define the children generator matrix as D = [In  -In], whose 2n columns are the positive and negative coordinate directions, where In is the n × n identity matrix. Now, the entries of the children matrix of the s-th family in the k-th generation are produced using the following equation:

After generating the children, they compete with each other and with their father. The winner of the competition becomes the father of that family for the next generation: if the father wins, he again stands as the father of the next generation; if one of the children wins, the father is replaced by the best child.
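One family's poll-and-compete step can be sketched as follows, assuming D = [In  -In] as in standard GPS(2n) and a per-variable step vector scaled to the family's allocated search space; the clipping to the variable bounds is an added assumption, not taken from the text.

```python
import numpy as np

def produce_children(father, step, objective, lower, upper):
    """One GPS(2n) poll: create 2n children along the columns of
    D = [I, -I], clip them to the bounds, and return the winner of the
    children-versus-father competition (minimization)."""
    n = father.size
    D = np.hstack([np.eye(n), -np.eye(n)])         # 2n coordinate directions
    children = father[:, None] + step[:, None] * D
    children = np.clip(children, lower[:, None], upper[:, None])
    candidates = np.hstack([father[:, None], children])
    values = [objective(candidates[:, j]) for j in range(candidates.shape[1])]
    return candidates[:, int(np.argmin(values))]   # survivor: best of 2n + 1
```

For example, polling from father (1, 1) with step 0.5 on the sphere function Σxi² yields the survivor (0.5, 1).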

2.6. Update the Search Space

The search space is expanded if there is at least one improvement among the first f families:

where α is the expansion factor, α > 1. The search space is contracted if there is no improvement among the first f families:

where β is the contraction factor, 0 < β < 1.
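Assuming the update acts multiplicatively on the allocated search-space width (the exact form of the update equations is not reproduced in the text), the rule can be sketched as:

```python
def update_width(width, improved, alpha=1.2, beta=0.5):
    """Expand the allocated width when any of the first f families improved
    (alpha > 1), contract it otherwise (0 < beta < 1). The multiplicative
    form is an assumption; the alpha and beta values here are placeholders."""
    return width * alpha if improved else width * beta
```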

2.7. Stopping Criterion

The stopping criterion can be explained by Eq. 10:

where the first quantity in Eq. 10 is the maximum constraint violation of the first f families, ε1 is the convergence tolerance parameter, and ε2 is the constraint violation tolerance.

3. IMPLEMENTATION

3.1. Constraint Handling

Constraint handling in optimization problems is a real challenge. In this paper, a pseudo cost function approach is used: the original objective function is replaced with a pseudo cost function, which is a weighted sum of the original objective function and the constraint violations (Vanderplaats, 1999). The pseudo cost function therefore acts as the objective function of a new unconstrained optimization problem. Eq. 11 shows the mathematical form of a classical pseudo cost function (a static penalty function (Back et al., 1997)),

in which P is the penalty factor.
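A common form of such a static penalty can be sketched as below, under the assumption that Eq. 11 sums the inequality and equality violations; the paper's exact weighting may differ.

```python
def pseudo_cost(x, f, ineq, eq, P):
    """Static-penalty pseudo cost: the original objective plus P times the
    total constraint violation (one common static-penalty form; assumed,
    since Eq. 11 is not reproduced here).

    ineq: callables g with g(x) <= 0 feasible; eq: callables h with h(x) = 0."""
    violation = sum(max(g(x), 0.0) for g in ineq)   # inequality violation
    violation += sum(abs(h(x)) for h in eq)         # equality violation
    return f(x) + P * violation
```

For example, with f(x) = x², the constraint g(x) = x - 1 ≤ 0, and P = 10, the point x = 2 receives the pseudo cost 4 + 10·1 = 14.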

3.2. Algorithm Parameters Choice

There are several parameters in the present algorithm which should be chosen by the user. It is important to use appropriate values for these parameters because they influence the performance of the algorithm. These parameters are the vector m, the expansion factor α, the contraction factor β, f (as described previously), the penalty factor P, and the stopping criterion tolerances ε1 and ε2.

Larger values for the entries of m make it possible to locate the global optimum more reliably; however, they increase the computational time. It is advisable to choose these values according to the cost function, the constraints, and the variable bounds. In other words, for highly non-linear problems, the entries of m might be increased; likewise, a wide bound on the j-th variable calls for a larger mj. In this paper, mj = 2 is used for all examples except Example 12, and the obtained results are satisfactory, so mj = 2 seems suitable; for problems with many variables, such as Example 12, mj = 1 can be used.

The expansion and contraction factors influence the quality of the solution and the computational time. Larger values of both factors provide better results, but the computational time increases; in particular, larger values of β provide a better global search. Based on our computing experience, β ≥ 0.4 is sufficient for finding the global optimal solution of a simple problem, while β ≥ 0.9 is recommended for a difficult one. For both simple and difficult problems, α = 1 is efficient. Note that the word "simple" refers to problems with a small number of local and global minima, while "difficult" refers to noisy problems with several local and global minima. In other words, if there are several minima in the solution space, a small value of β causes a fast contraction of the search space, leading to the loss of some optima.

A larger value of the f parameter provides a better global search and helps to find several global and local optimal points (if they exist), but convergence to a global solution possibly takes much longer. The value of this parameter should therefore be chosen according to the non-linearity of the cost function and constraints.

The penalty factor is another important parameter which can affect the performance of the algorithm. Its value should be large in comparison with the cost function value; a value about 10¹⁰ times the value of the cost function is suitable. In addition, when there are several constraints with different orders of magnitude, all the constraints should be normalized.

Smaller stopping criterion tolerances slow down the convergence; their choice depends on the desired precision.

4. NUMERICAL RESULTS

The performance of the present optimization algorithm is tested on twelve integer and mixed integer non-linear optimization problems taken from the literature. These are standard test problems for mixed integer non-linear programming (Yan et al., 2004; Costa and Oliveira, 2001; Deep et al., 2009). A more thorough list of test problems can be found in (Schluter et al., 2009). Table 1 shows the algorithm parameters for each example.

Table 1
Algorithm parameters for each example.

Example 1: This example is taken from (Yan et al., 2004) and also given in (Costa and Oliveira, 2001; Deep et al., 2009).

Min f(x) = -0.7x3 + 5(x1 - 0.5)² + 0.8

S.T:

-exp(x1 - 0.2) - x2 ≤ 0

x2 + 1.1x3 ≤ -1

x1- 1.2x3 ≤ 0.2

0.2 ≤ x1 ≤ 1

- 2.22554 ≤ x2 ≤ -1

x3 ∈ {0,1}

The global optimum point is xopt = [0.9419 -2.1 1] with f(xopt) = 1.0765. The result is in full agreement with other studies. Figures 2(a) and (b) show the histories of the pseudo cost function value and the convergence parameter, respectively. Referring to this figure, fast convergence to the global optimal point is achieved.

Figure 2
Example 1, (a) pseudo cost function history, (b) Convergence parameter history.

Figure 3 shows the variation of the convergence time with β for several values of α. According to this figure, α=1 and β=0.4 are the best values for fast convergence to the optimal point.

Figure 3
Example 1, convergence time vs. β for three values of α.

Example 2: This example is taken from (Yan et al., 2004).

min f(x) = (x1 - 1)² + (x2 - 2)² + (x3 - 3)² + (x4 - 1)² + (x5 - 2)² + (x6 - 1)² - ln(x7 + 1)

S.T:

x1 + x2 + x3 + x4 + x5 + x6 ≤ 5

x1 + x4 ≤ 1.2

x2 + x5 ≤ 1.8

x3 + x6 ≤ 2.5

x1 + x7 ≤ 1.2

xi > 0, i = 1, 2, 3

xi ∈ {0, 1}, i = 4, ..., 7

The global optimum point is xopt = [0.2 0.8 1.9079 1 1 0 1]T with f(xopt) = 4.5796. The obtained optimum point is in full agreement with that reported in (Yan et al., 2004). The pseudo cost function, convergence parameter, and maximum constraint violation histories of the problem are depicted in Figure 4(a)-(c), respectively, showing fast convergence to the optimum point. The maximum constraint violation is almost zero in all iterations, meaning that the generated mesh covers the search space appropriately.

Figure 4
Example 2, (a) pseudo cost function history, (b) Convergence parameter history, (c) Maximum constraint violation history.

Figure 5 compares the convergence time for several values of α and β. Referring to this figure, α=1 and 0.5 ≤ β ≤ 0.8 are the best values for fast convergence to the optimum point.

Figure 5
Example 2, convergence time vs. β for three values of α.

Example 3: This is a synthesis problem of a process system, taken from (Yan et al., 2004).

Min f(x) = Σi=1,...,4 (ai exp(x(i+4)) - bi ln(1 + xi))

S.T:

x1 + x2 = e

x1 + x2 - x3 - x4 = 0

x5 + x6 ≥ 4

x7 + x8 ≥ 4

4xi ≤ x(i+4), i = 1, ..., 4

xi ≥ 0, i = 1, ..., 4

xi ∈ {0, 1, 2, 3, 4}, i = 5, ..., 8

where a1 = 2.1, a2 = 0.1, a3 = 4.1, a4 = 0.1, b1 = 3, b2 = 1, b3 = 3, b4 = 4, and e = 1. The obtained optimum point is xopt = [0.25 0.75 0.25 0.75 1 3 1 3] with f(xopt) = 4.2498, which is in full agreement with the data coming from (Yan et al., 2004).

Figure 6(a)-(c) shows the pseudo cost function, convergence parameter, and maximum constraint violation histories of the problem, respectively. Since there are 2 equality constraints, some iterations are needed to find the feasible region. Accordingly, all the constraints are satisfied after 26 iterations.

Figure 6
Example 3, (a) pseudo cost function history, (b) Convergence parameter history, (c) Maximum constraint violation history.

Example 4: This example is taken from (Tian et al., 1998).

min f(x) = (x2 - 3)² cos(π x1) + (x2 - 6) sin(π x2) + (x3 + 2)³ exp(-x4)

S.T:

xi ∈ {0, 1, 2, ..., 60}, i = 1, ..., 4

In this problem, all the variables are limited to integer values. There are many local and global optimum solutions. Using an appropriate value of the parameter f, several optimum points can be found. Table 2 shows 10 optimum points and their corresponding optimum values obtained by this algorithm. All the optimum points have the same value, and the algorithm finds them with only one run through 95 iterations (Figure 7), taking only 1.812 s to converge.

Figure 7
Example 4, (a) pseudo cost function history, (b) Convergence parameter history.

Table 2
Example 4 optimal points.

Example 5: This example is taken from (Tian et al., 1998).

This problem contains many global and local optimum points, of which 6 are found after only 45 iterations (Figure 8). Table 3 presents the optimal points and their values.

Figure 8
Example 5, (a) pseudo cost function history, (b) Convergence parameter history.

Table 3
Example 5 optimal points.

Example 6: This example is taken from (Costa and Oliveira, 2001) and also reported in (Deep et al., 2009).

Max f(x) = -5.357854x1² - 0.835689x1x4 - 37.29329x4 + 40792.141

S.T:

85.334407 + 0.0056858 x3x5 + 0.0006262 x2x4- 0.0022053 x1x3 ≤ 92

80.51249 + 0.0071317x3x5 + 0.0029955 x4x5 + 0.0021813 ≤ 110

9.300961 + 0.0047026 x1x3 + 0.0012547 x1x4 + 0.0019085 x1x2 ≤ 25

27 ≤ xi ≤ 45, i = 1, 2, 3

x4 ∈ {78, 79,...,102}

x5 ∈ {33, 34,..., 45}

The global optimum is x1 = 27, x4 = 78, for any combination of x2 and x5. The optimum point is obtained after only 2 iterations, which exhibits fast convergence.

Example 7: This example is taken from (Deep et al., 2009).

min f(x) = x1x7 + 3x2x6 + x3x5 + 7x4

S.T:

x1 + x2 + x3 ≥ 6

x4 + x5 + 6x6 ≥ 8

x1x6 + x2 + 3x5 ≥ 7

4x2x7 +3x4x5 ≥ 25

3x1 + 2x3 + x5 ≥ 7

3x1x3 + 6x4 + 4x5 ≤ 20

4x1 + 2x3 + x6x7 ≤ 15

xi ∈ {0, 1, ..., 4}, i = 1, 2, 3

xi ∈ {0, 1, 2}, i = 4, 5, 6

x7 ∈ {0, 1, ..., 6}

Using the present algorithm, 3 global and 2 local optimal points are obtained after 45 iterations, taking 2.484 s to converge. Table 4 shows the optimum points and their corresponding optimum values.

Table 4
Example 7 optimal points.

Example 8: This example is taken from (Deep et al., 2009).

ui = 25 + (-50 ln(0.01i))^(2/3)

S.T:

0 ≤ x1 ≤ 5

x2 ∈ { 0,1,...,25}

x3 ∈ {1,2,...,100}

The global optimum point is xopt = [ 1.5 25 50]T with f(xopt) = 0. It is in full agreement with the results reported in other studies.

Example 9: This example is taken from (Costa and Oliveira, 2001).

min f(x) = 5x + 7x3 + 6x4 + 7.5x5 + 5.5x6

S.T:

x5 + x6 = 1

z1 = 0.9(1 - exp(-0.5x3))x1

z2 = 0.8(1 - exp(-0.4x4))x2

x1 + x2 - x = 0

z1 + z2 = 10

x3 ≤ 10x5

x4 ≤ 10x6

x1 ≤ 20x5

x2 ≤ 20x6

xi ≥ 0, i = 1, ..., 4

xi ∈ {0, 1}, i = 5, 6

The global optimum point is xopt = [13.4252 0 3.5162 0 1 0]T with f(xopt) = 99.2396. The optimum value of the cost function is reported as f(xopt) = 99.245209 in (Fogel et al., 1966). As seen, the proposed method provides a better value of the cost function than the one reported by Fogel et al.

Example 10: This is an integer cubic problem taken from (Dickman and Gilman, 1989).

max f(x) = - (x1x2x3 + x1x4x5 + x2x4x6 + x6x7x8 + x2x5x7)

S.T:

12 - (2x1 + 2x4 + 8x8) ≤ 0

41 - (11x1 + 7x4 + 13 x6) ≤ 0

60 - ( 6x2 + 9x4x6 + x7) ≤ 0

42 - (3x2 + 5x5 + 7x8) ≤ 0

53 - (6x2x7 + 9x3 + 5x5) ≤ 0

13- (4x3x7 + x5) ≤ 0

2x1 + 4x2 + 7x4 + 3x5 + x7 ≤ 69

9x1x8 + 6x3x5 + 4x3x7 ≤ 47

12x2 + x2x8 + 2x3x6 ≤ 73

x3 + 4x5 + 2x6 + 9x8 ≤ 31

xi ∈ {0, 1, ..., 7}, i = 1, 3, 4, 6, 8

xi ∈ {0, 1, ..., 15}, i = 2, 5, 7

There are eight design variables and ten inequality constraints. The optimum point is xopt = [5 4 1 1 6 3 2 0]T with f(xopt) = -110, which is in full agreement with (Dickman and Gilman, 1989).

Example 11: This example is taken from (Costa and Oliveira, 2001) and also given in (Lin et al., 2004). The problem contains three integer variables and seven continuous ones, subject to 18 inequality constraints.

Where, for the specific problem considered in this paper, M=3, N=2, H=6000, αj=250, βj=0.6, Nju=3, Vjl=250 and Vju=2500. The values of the other parameters are given as follows:

The obtained global optimum point is xopt = [1 1 1 480 720 960 240 120 20 16]T with f(xopt) = 38499.8, which is in full agreement with that reported in the other studies.

Example 12: This example addresses the optimal design of multiproduct batch plants and is taken from (Goyal and Ierapetritou, 2004) and also given in (Kocis and Grossmann, 1988). This MINLP problem consists of 100 variables, of which 60 are binary. The problem contains 217 constraints and a nonlinear objective function. The problem data can be found in (Goyal and Ierapetritou, 2004).

The optimal value of the objective function is 2.68×10⁶, in full agreement with the value reported in (Goyal and Ierapetritou, 2004).

5. CONCLUSION AND SUMMARY

In this paper, an algorithm was proposed for the solution of constrained integer and mixed integer non-linear optimization problems. In this algorithm, a deterministic search over the solution space is performed to find the optimal solution. The performance of the proposed algorithm was tested on several integer and mixed integer (including multi-modal) optimization problems. The obtained results were compared with those reported in the literature, demonstrating efficiency and fast convergence. One of the most important advantages of this method is its ability to find more than one optimal point with only one run of the computer program, which makes the algorithm suitable for multi-modal optimization problems.

REFERENCES

  • Abramson, M. A., (2002). Pattern search algorithms for mixed variable general constrained optimization problems. Dissertation, Houston, Texas.
  • Audet, C., Dennis, J. E., (2001). Pattern search algorithm for mixed variable programming, SIAM J. Optimiz. 11:573-594.
  • Back, T., Fogel, D. B., & Michalewicz, Z. (1997). Handbook of evolutionary computation. IOP Publishing Ltd.
  • Box, G. E. P., (1975). Evolutionary operation: A method for increasing industrial productivity, J. Appl. Statist. 6:81-101.
  • Coelho, L. D. S., (2009). An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications, Reliab. Eng. Syst. Saf. 94:830-837.
  • Costa, L., Oliveira, P., (2001). Evolutionary algorithms approach to the solution of mixed integer non-linear programming problems, Comput. Chem. Eng. 25:257-266.
  • Dakin, R. J., (1965). A tree-search algorithm for mixed integer programming problems, Comput. J. 8:250-255.
  • Deep, K., Singh, K. P., Kansal, M. L., Mohan, C., (2009). A real coded genetic algorithm for solving integer and mixed integer optimization problems, Appl. Math. Comput. 212:505-518.
  • Dickman, B. H., Gilman, M. J., (1989). Technical Note: Monte Carlo Optimization, J. Optim. Theory Appl. 60:149-157.
  • Duran, M. A., Grossmann, I. E., (1986). An outer approximation algorithm for a class of mixed integer nonlinear programs, Math. Prog. 36:307-339.
  • Fletcher, R., Leyffer, S., (1994). Solving mixed integer programs by outer approximation, Math. Prog. 66:327-349.
  • Fogel, L. J., Owens, A. J., Walsh, M. J., (1966). Artificial Intelligence through simulated evolution, John Wiley and Sons (New York).
  • Geoffrion, A. M., (1972). Generalized benders decomposition, J. Optim. Theory Appl. 10:237-260.
  • Gidas, B., (1985). Non-stationary Markov chains and convergence of the annealing algorithm, J. Statist. Phys. 39:73-131.
  • Glover, F., (1990). Tabu search, ORSA J. Comp. 2:4-32.
  • Glover, F., (1994). Tabu search for nonlinear and parametric optimization (with links to genetic algorithms), Discrete Appl. Math. 49:231-255.
  • Goyal, V., Ierapetritou, M. G., (2004). Computational studies using a novel simplicial-approximation based algorithm for MINLP optimization, Comput. Chem. Eng. 28:1771-1780.
  • Holland, J. H., (1962). Outline for a logical theory of adaptive systems, J. ACM 9:297-314.
  • Kelley, J. E., (1960). The cutting plane method for solving convex programs, J. Soc. Indust. and Appl. Math. 8:703-712.
  • Kesavan, P., Barton, P. I., (2000). Generalized branch-and-cut framework for mixed-integer nonlinear optimization problems, Comput. Chem. Eng. 24:1361-1366.
  • Kocis, G. R., Grossmann, I. E., (1988). Global Optimization of Nonconvex Mixed-Integer Nonlinear Programming (MINLP) Problems in Process Synthesis, Ind. Eng. Chem. Res. 27:1407-1421.
  • Leyffer, S., (1998). Integrating SQP and branch-and-bound for mixed integer nonlinear programming, Technical Report NA/182, Dundee University.
  • Lin, Y. C., Hwang, K. S., Wang, F. S., (2004). A mixed-coding scheme of evolutionary algorithms to solve mixed-integer nonlinear programming problems, Comput. Math. Appl. 47:1295-1307.
  • Marchand, H., Martin, A., Weismantel, R., Wolsey, L., (2002). Cutting planes in integer and mixed integer programming, Discrete Appl. Math. 123:397-446.
  • Ponsich, A., Coello, C. A., (2009). Differential Evolution performances for the solution of mixed-integer constrained process engineering problems, Appl. Soft Comput. doi:10.1016/j.asoc.2009.11.030.
  • Schluter, M., Egea, J. A., Banga, J. R., (2009). Extended ant colony optimization for non-convex mixed integer nonlinear programming, Comput. Oper. Res. 36:2217-2229.
  • Tian, P., Ma, J., Zhang, D. M., (1998). Nonlinear integer programming by Darwin and Boltzmann mixed strategy, Eur. J. Oper. Res. 105:224-235.
  • Vanderplaats, G. N., (1999). Numerical optimization techniques for engineering design, Vanderplaats Research & Development, Inc. (Colorado Springs, CO).
  • Wang, L., (2009). Cutting plane algorithms for the inverse mixed integer linear programming problem, Oper. Res. Lett. 37:114-116.
  • Yan, L. X., Ma, D. X., (2001). Global optimization of nonconvex nonlinear programs using line-up competition algorithm, Comput. Chem. Eng. 25:1601-1610.
  • Yan, L., Shen, K., Hu, S., (2004). Solving mixed integer nonlinear programming problems with line-up competition algorithm, Comput. Chem. Eng. 28:2647-2657.

Publication Dates

  • Publication in this collection
    Feb 2016

History

  • Received
    15 July 2015
  • Reviewed
    25 Sept 2015
  • Accepted
    13 Oct 2015