A Heuristic Algorithm Based on Line-up Competition and Generalized Pattern Search for Solving Integer and Mixed Integer Non-linear Optimization Problems

The global optimization of integer and mixed integer non-linear problems has many applications in engineering. In this paper, a heuristic algorithm is developed using line-up competition and generalized pattern search to solve integer and mixed integer non-linear optimization problems subject to various linear or non-linear constraints. Owing to its ability to find more than one local or global optimal point, the proposed algorithm is particularly beneficial for multimodal problems. Its performance is demonstrated on several non-convex integer and mixed integer optimization problems, exhibiting good agreement with the results reported in the literature. In addition, the convergence time is compared with that of the LCA, demonstrating the efficiency and speed of the algorithm. Moreover, the constraints are satisfied after only a few iterations.

The solution methods are classified into three major classes: relaxation methods, search heuristics, and pattern search methods (Abramson, 2002).
Pattern search methods were proposed to minimize a continuous function without any knowledge of its derivative. The class of generalized pattern search (GPS) methods was introduced for solving unconstrained non-linear programming (Box, 1975) and was later used to optimize mixed integer constrained non-linear optimization problems (Audet and Dennis, 2001). The line-up competition algorithm (LCA), which belongs to the class of evolutionary algorithms, was proposed to optimize non-linear (Yan and Ma, 2001) and mixed integer non-linear optimization problems (Yan et al., 2004).
In this paper, an algorithm based on line-up competition and generalized pattern search is proposed to optimize integer and mixed integer non-linear problems. Using this algorithm, more than one optimal point can be obtained, which makes it appropriate for multi-modal problems. The present algorithm is simple, easy to implement, and fast. The rest of the paper is organized as follows. In Section 2, the proposed algorithm is described and all the required steps are mathematically formulated. Section 3 is allotted to the numerical implementation of the algorithm. Finally, the performance of the algorithm is tested through several examples in Section 4.

ALGORITHM
This section of the paper is allocated to the description of the proposed algorithm. First, in Subsection 2.1, an overall perspective of the algorithm is presented, and the steps to be followed are explained. In Subsections 2.2 to 2.7, these steps are mathematically formulated and their details are discussed.

Outline of the Present Algorithm
In the present algorithm, a uniform mesh is first generated over the solution space as the initial population. This uniform mesh guarantees that the initial population covers the whole search space, so that no region of the space is lost. In the rest of the paper, each point of the mesh is called a "family". These families are ranked to form a line-up according to the values of their objective functions, i.e. the best family is placed in the first position of the line-up, while the worst is placed in the final position. Based on its position in the line-up, a search space is allocated to each family. In the next step, each family produces 2n children in its allocated search space, where n is the number of variables. These children are produced using the generalized pattern search method, which ensures that all directions in the corresponding search space are covered. The members of each family compete with one another, as well as with their father, and the best one survives as the father of the next generation. The algorithm can be described as follows:
1. Generate mesh points on the search space and compute the value of the objective function for each family.
2. Rank the mesh points to form a line-up according to their objective function values. For a minimization problem the line-up is an ascending sequence, and vice versa.
3. Allocate a search space to each family according to its position in the line-up. The best family (first in the line-up) has the smallest search space, while the worst (last in the line-up) has the biggest one.
4. Produce 2n children using the generalized pattern search method. The children then compete with one another, as well as with their father, and the best one survives as the next generation's father.
5. Update the search space according to the f first families. The search space is expanded if there is at least one improvement among the f first families and contracted if there is none. Note that the value of f is defined by the user and helps to find more than one optimal point.
6. If the stopping criterion is not satisfied, return to step 3.
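The six steps above can be rendered as a compact loop. The sketch below is only illustrative: it is our own Python reading of the procedure, restricted to integer variables, and the step-size and box-scaling rules are simple stand-ins rather than the paper's exact formulas; all function names are ours.

```python
import itertools

def lineup_search(f, lower, upper, m, alpha=1.0, beta=0.5,
                  f_best=2, eps=1e-6, max_iter=200):
    """Line-up competition + GPS(2n) sketch for integer variables only."""
    n = len(lower)
    # Step 1: uniform initial mesh with m[j] values per variable.
    axes = [list(range(lower[j], upper[j] + 1,
                       max(1, (upper[j] - lower[j]) // max(1, m[j] - 1))))
            for j in range(n)]
    fathers = [list(p) for p in itertools.product(*axes)]
    delta = [float(upper[j] - lower[j]) for j in range(n)]  # search-space widths
    for _ in range(max_iter):
        # Step 2: rank the families (ascending line-up for minimization).
        fathers.sort(key=f)
        best_before = f(fathers[0])
        C = len(fathers)
        for pos, x in enumerate(fathers, start=1):
            # Step 3: box half-width grows with line-up position (stand-in rule).
            # Step 4: GPS(2n) children, +/- one step along every axis.
            children = []
            for j in range(n):
                half = 0.5 * (pos / C) * delta[j]
                step = max(1, int(round(half)))
                for s in (step, -step):
                    c = list(x)
                    c[j] = min(upper[j], max(lower[j], c[j] + s))
                    children.append(c)
            x[:] = min(children + [x], key=f)  # competition with the father
        # Step 5: expand on improvement, contract otherwise.
        improved = min(f(x) for x in fathers) < best_before
        delta = [(alpha if improved else beta) * d for d in delta]
        # Step 6: stop when the f_best best families agree and the mesh is fine.
        fathers.sort(key=f)
        if (abs(f(fathers[0]) - f(fathers[min(f_best, C) - 1])) < eps
                and max(delta) < 1.0):
            break
    return fathers[0]
```

For instance, minimizing (x1 − 3)² + (x2 + 2)² over the integer box [−10, 10]² with m = (2, 2) converges to the point (3, −2).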
The above-mentioned steps are described in mathematical terms in the following subsections.

Mesh Generation
To start the optimization process, it is necessary to generate an initial population. In this paper, the initial population is generated using a regular mesh over the search space; that is, deterministically generated points are distributed along each direction corresponding to each variable. Let us define the vector m, whose entries m_1 to m_n denote the numbers of mesh points to be generated in (l_1, u_1) to (l_n, u_n), respectively. The mesh points related to m_j are generated in (l_j, u_j) using the following equation:

x_j^(t) = l_j + INT( (t − 1)(u_j − l_j) / (m_j − 1) ),   t = 1, ..., m_j

where x_j^(t) is the t-th value in (l_j, u_j), and INT denotes the integer operator. Note that the m_j control the number of mesh points.
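As a concrete reading of this mesh rule, the sketch below distributes m_j evenly spaced values in (l_j, u_j) for each variable, applies the integer operator when the variable is integer-valued, and takes the Cartesian product of the axes as the initial families. The function names and the handling of m_j = 1 are our own choices, not the paper's.

```python
from itertools import product

def mesh_axis(l, u, m, integer=False):
    """m candidate values for one variable, evenly spaced on [l, u];
    the INT operator is applied when the variable is integer-valued."""
    if m == 1:
        vals = [0.5 * (l + u)]  # one point: take the midpoint (our convention)
    else:
        vals = [l + t * (u - l) / (m - 1) for t in range(m)]
    return [int(v) for v in vals] if integer else vals

def build_mesh(lower, upper, m, integer_mask):
    """Cartesian product of the per-variable axes: the initial families."""
    axes = [mesh_axis(l, u, mj, ig)
            for l, u, mj, ig in zip(lower, upper, m, integer_mask)]
    return [list(p) for p in product(*axes)]
```

For a two-variable problem with m = (2, 3), this yields 2 × 3 = 6 mesh points covering the corners and interior of the box.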

Ranking the Families
The columns of the mesh matrix, M, should be ranked based on their objective function values. Let v̂ denote the vector whose entries are the ranked objective function values; the ranked mesh matrix is then the matrix whose columns are the mesh points corresponding to the entries of v̂.
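The ranking step amounts to a plain sort; the following sketch (names are ours) returns the families in line-up order together with their objective values.

```python
def rank_families(mesh, f):
    """Sort families ascending by objective value (a minimization line-up);
    returns the ranked points and their objective values."""
    order = sorted(range(len(mesh)), key=lambda i: f(mesh[i]))
    return [mesh[i] for i in order], [f(mesh[i]) for i in order]
```

For maximization problems the line-up would simply be the descending sequence instead.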

Allocation of the Search Space
Allocation of the search space is based on the position of each family in the line-up: the best family has the smallest search space and the worst has the biggest one. The search space for the j-th mesh point is a rectangular region. The lower and upper boundaries of the i-th variable for the j-th family (mesh point) in the k-th generation are calculated using Eq. (4) and Eq. (5), respectively:

l_ij^(k) = max( l_i , x_ij^(k) − (j/C) Δ_i^(k) / 2 ),   j = 1, ..., C   (4)

u_ij^(k) = min( u_i , x_ij^(k) + (j/C) Δ_i^(k) / 2 ),   j = 1, ..., C   (5)

where C is the number of families and Δ_i^(k) is the current search-space width for the i-th variable.
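The allocation can be sketched as below. Note the caveat: the exact scaling in the paper's Eqs. (4)-(5) is not fully recoverable from the text, so the linear growth of the half-width with line-up position used here is a stand-in that merely respects the stated property (smallest box for the best family, biggest for the worst), clipped to the global bounds.

```python
def family_box(x, pos, n_families, lower, upper, delta):
    """Search box for the family at line-up position pos (1 = best):
    half-width grows linearly with pos (stand-in rule), clipped to
    the global variable bounds."""
    lo, hi = [], []
    for xi, li, ui, di in zip(x, lower, upper, delta):
        half = 0.5 * (pos / n_families) * di
        lo.append(max(li, xi - half))
        hi.append(min(ui, xi + half))
    return lo, hi
```

With four families, the best family's box is a quarter the width of the worst family's box under this rule.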

Production of Children
In each family, 2n children are produced using GPS(2n). Let us define the children generator matrix, D, as

D = [ I_n   −I_n ]

where I_n is the n×n identity matrix. The entries of the children matrix of the s-th family in the k-th generation are produced by stepping from the father along each column of D within the family's allocated search space. After the children are generated, they compete with one another and with their father. The winner of the competition becomes the father of that family for the next generation. Note that if the father wins the competition, he again stands as the father of the next generation; but if one of the children wins, the father is replaced by the best child.

Update the Search Space
The search space will be expanded if there is at least one improvement among the f first families:

Δ_j^(k+1) = α Δ_j^(k),   j = 1, ..., n   (8)

where α is the expansion factor and α > 1. The search space will be contracted if there is no improvement among the f first families:

Δ_j^(k+1) = β Δ_j^(k),   j = 1, ..., n   (9)

where β is the contraction factor and 0 < β < 1.
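The expand-or-contract rule of Eqs. (8)-(9) is a single scalar multiplication of the width vector; a minimal sketch (function name ours):

```python
def update_delta(delta, improved, alpha=1.5, beta=0.5):
    """Scale the search-space widths: expand by alpha (> 1) if any of the
    f first families improved this generation, contract by beta (0 < beta < 1)
    otherwise (Eqs. 8-9)."""
    factor = alpha if improved else beta
    return [factor * d for d in delta]
```

Because the contraction is geometric, a run with no improvements shrinks the search space exponentially fast, which is why the paper warns that a small β can lose optima in multimodal problems.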

Stopping Criterion
The stopping criterion is expressed by Eq. (10): the algorithm stops when the convergence parameter of the f first families falls below ε1 and the maximum constraint violation of the f first families falls below ε2, where ε1 is the convergence tolerance parameter and ε2 is the constraint violation tolerance.
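One concrete reading of this test is sketched below. The convergence parameter is assumed here to be the spread of the objective values of the f first families; the paper's exact measure is not recoverable from the text, so this is only an interpretation.

```python
def should_stop(f_values, violations, eps1, eps2):
    """Stop when the f best objective values agree to within eps1 (assumed
    convergence measure: their spread) and the worst constraint violation
    among them is below eps2."""
    spread = max(f_values) - min(f_values)
    return spread <= eps1 and max(violations) <= eps2
```

Both conditions must hold simultaneously, so a feasible but unconverged line-up (or a converged but infeasible one) keeps the iteration running.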

Constraint Handling
Constraint handling in optimization problems is a real challenge. In this paper, a pseudo cost function approach is used: the original objective function is replaced with a pseudo cost function, which is a weighted sum of the original objective function and the constraint violations (Vanderplaats, 1999). The pseudo cost function thus acts as the objective function of a new unconstrained optimization problem. Eq. (11) gives the mathematical expression of a classical pseudo cost (static penalty) function (Back et al., 1997), in which P is the penalty factor.
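A minimal sketch of such a static penalty follows. The squared-violation form used here is one common choice for Eq. (11), not necessarily the paper's exact expression; the function names are ours.

```python
def pseudo_cost(f, g_list, P):
    """Classical static penalty: F(x) = f(x) + P * sum(max(0, g_i(x))^2)
    for inequality constraints g_i(x) <= 0 (squared form is our assumption)."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + P * violation
    return F
```

Inside the feasible region the pseudo cost equals the original objective, while infeasible points are penalized in proportion to P, so the unconstrained minimizer of F approaches the constrained minimizer of f as P grows.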

Algorithm Parameters Choice
There are several parameters in the present algorithm which must be chosen by the user. It is important to use appropriate values for these parameters because they influence the performance of the algorithm. These parameters are the vector m, the expansion factor α, the contraction factor β, f (as described previously), the penalty factor P, and the stopping criterion tolerances ε1 and ε2.
Larger values for the entries of m make it possible to find the global optimum point more accurately; however, they increase the computational time. It is advisable to choose the entries according to the optimization cost function, constraints, and boundaries. In other words, for highly non-linear problems the entries of m may be increased; likewise, for a wide boundary on the j-th variable, a larger value of m_j is suitable. In this paper, m_j = 2 is used for all examples except Example 12, and the obtained results are satisfactory, so m_j = 2 appears suitable; in problems with many variables, e.g. Example 12, m_j = 1 can be used.
The expansion and contraction factors influence the quality of the solution and the computational time. Larger values for both factors provide better results, but the computational time increases. In particular, larger values of β provide a better global search. Based on our computing experience, for a simple problem β ≥ 0.4 is sufficient for finding the global optimal solution, while for a difficult problem β ≥ 0.9 is recommended. For both simple and difficult problems, α = 1 is efficient. Note that the word "simple" refers to problems with a small number of local and global minima, while the word "difficult" refers to noisy problems with several local and global minima. In other words, if there are several minima in the solution space, a small value of β causes a fast contraction of the search space, leading to the loss of some optima. A larger value of the f parameter provides a better global search and helps to find several global and local optimal points (if they exist), but the convergence time to a global solution may be much longer. The value of this parameter should therefore be chosen according to the non-linearity of the cost function and constraints.
The penalty factor is another important parameter that can affect the performance of the algorithm. Its value should be large enough in comparison with the cost function value; a value about 10^10 times greater than the values of the cost function is suitable. In addition, when there are several constraints with different orders of magnitude, all the constraints should be normalized.
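The recommended normalization can be sketched as below. The paper gives no formula, so dividing each constraint by a positive reference magnitude is our own simple choice; the function name is illustrative.

```python
def normalize_constraints(g_list, scales):
    """Divide each inequality constraint g_i(x) <= 0 by a positive reference
    magnitude so that constraints of different orders of magnitude contribute
    comparably to the penalty term (one simple choice, not the paper's)."""
    return [(lambda x, g=g, s=s: g(x) / s) for g, s in zip(g_list, scales)]
```

After normalization, a violation of 10 units on a constraint scaled by 1000 weighs the same as a violation of 0.01 on a constraint scaled by 1.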
Smaller stopping criterion tolerances slow the convergence; their choice depends on the desired precision.

NUMERICAL RESULTS
The performance of the present optimization algorithm is tested on twelve integer and mixed integer non-linear optimization problems taken from the literature. These are standard test problems for mixed integer non-linear programming (Yan et al., 2004; Costa and Oliveira, 2001; Deep et al., 2009); a more thorough list of test problems can be found in (Schluter et al., 2009). Table 1 shows the algorithm parameters for each example. Example 1: This example is taken from (Yan et al., 2004) and also given in (Costa and Oliveira, 2001; Deep et al., 2009).
The result is in full agreement with other studies (Figure 2). Example 2: This example is taken from (Yan et al., 2004).
The global optimum point is x_opt = (0.2, 0.8, 1.9079, 1, 1, 0, 1). The obtained optimum point is in full agreement with that reported in (Yan et al., 2004). The pseudo cost function, convergence parameter, and maximum constraint violation history of the problem are depicted in Figure 4 (a)-(c), respectively, showing fast convergence to the optimum point. The maximum constraint violation is almost zero in all iterations, meaning that the generated mesh covers the search space appropriately. Example 3: This is a process system synthesis problem, taken from (Yan et al., 2004). The obtained result is in full agreement with the data from (Yan et al., 2004).
Figure 6 (a)-(c) shows the pseudo cost function, convergence parameter, and maximum constraint violation history of the problem, respectively. Since there are two equality constraints, finding the feasible region requires several iterations to pass; accordingly, all the constraints are satisfied after 26 iterations. Example 4: This example is taken from (Tian et al., 1998).

In this problem, all the variables are restricted to integer values. There are many local and global optimum solutions for this problem. Using an appropriate value for the parameter f, several optimum points can be found. Table 2 shows 10 optimum points and their corresponding optimum values obtained by this algorithm. All the optimum points have the same value, and the algorithm finds them in a single run through 95 iterations (Figure 7), taking only 1.812 s to converge. Example 5: This example is taken from (Tian et al., 1998).
The variables are bounded as x_i ∈ {−10, −9, ..., 9, 10}, i = 1, ..., 6. This problem contains many global and local optimum points, of which 6 are found after only 45 iterations (Figure 8). Table 3 presents the optimal points and their values. Example 6: This example is taken from (Costa and Oliveira, 2001) and also reported in (Deep et al., 2009). Using the present algorithm, 3 global and 2 local optimal points are obtained after 45 iterations, taking 2.484 s to converge. Table 4 shows the optimum points and their corresponding optimum values. Example 8: This example is taken from (Deep et al., 2009).
The global optimum point is x_opt = (1.5, 25, 50). It is in full agreement with the results reported in other studies. Example 9: This example is taken from (Costa and Oliveira, 2001).
The global optimum point is x_opt = (13.4252, 0, 3.5162, 0, 1, 0). The optimum value of the cost function is reported as f(x_opt) = 99.245209 in (Fogel et al., 1966). As seen, the proposed method provides a better value for the cost function than the one reported by Fogel et al.
Example 10: This is an integer cubic problem taken from (Dickman and Gilman, 1989). There are eight design variables and ten inequality constraints. The obtained optimum point is in full agreement with (Dickman and Gilman, 1989).
Example 11: This example is taken from (Costa and Oliveira, 2001) and also given in (Lin et al., 2004).The problem contains three integer variables and seven continuous ones subjected to 18 inequality constraints.
For the specific problem considered in this paper, M = 3, N = 2, H = 6000, α_j = 250, β_j = 0.6, N_j^u = 3, V_j^l = 250, and V_j^u = 2500. The obtained optimum is in full agreement with that reported in the other studies.
Example 12: This example addresses the optimal design of multiproduct batch plants and is taken from (Goyal and Ierapetritou, 2004) and also given in (Kocis and Grossmann, 1988). This MINLP problem consists of 100 variables, of which 60 are binary. The problem contains 217 constraints and a non-linear objective function; the problem data can be found in (Goyal and Ierapetritou, 2004). The optimal value of the objective function is 2.68×10^6, in full agreement with the value reported in (Goyal and Ierapetritou, 2004).

CONCLUSION AND SUMMARY
In this paper, an algorithm was proposed for the solution of constrained integer and mixed integer non-linear optimization problems. In this algorithm, a deterministic search over the solution space is performed to find the optimal solution. The performance of the proposed algorithm was tested through several integer and mixed integer (including multi-modal) optimization problems, and the obtained results were compared with those reported in the literature, demonstrating efficiency and fast convergence. One of the most important advantages of this method is its ability to find more than one optimal point with only one run of the computer program, which makes the algorithm suitable for multimodal optimization problems.

Latin American Journal of Solids and Structures 13 (2016) 224-242

Table 1 :
Algorithm parameters for each example.