A NEW METHOD FOR OPTIMIZING A FUNCTION OVER THE EFFICIENT SET

ABSTRACT

In this paper, we present a new deterministic method, called the 2-phase algorithm, to optimize a linear function over the efficient set of a multiple objective integer linear problem. The algorithm proceeds in two phases: the first uses the Student copula to determine the pilot objective, and the second optimizes this objective to obtain the optimal solution. To assess the algorithm's performance, we compare it with two benchmark algorithms from the literature. A detailed didactic example is given to illustrate the different steps. The algorithm is implemented, and the interesting results obtained are discussed.

Keywords:
deterministic optimization; multi-objective optimization; optimization over the efficient set; Student copula

1 INTRODUCTION

Multi-objective optimization problems (MOP) are characterized by the need to simultaneously optimize multiple, often conflicting objectives. In real-world scenarios, decision-makers are often tasked with selecting a solution from a set of non-dominated solutions that lie on the Pareto front. The Pareto front is a key concept in multi-objective optimization, representing the set of all solutions that cannot be improved in one objective without degrading another. However, selecting the "best" solution from this set can be a challenging and subjective task due to the diversity and trade-offs inherent in these problems (e.g., Benson (1978), Sylva & Crema (2004), and Teghem & Kunsch (1986)).

The primary difficulty in multi-objective decision-making lies in choosing between multiple, often conflicting objectives. For example, in engineering, environmental planning, and economics, decision-makers are frequently required to make trade-offs between efficiency, cost, and sustainability. These types of problems are inherently non-convex and cannot be solved by traditional optimization methods, which focus on optimizing a single objective. Therefore, the field of multi-objective optimization aims to provide a set of solutions that represent different trade-offs between the objectives, leaving the decision-maker to choose the most suitable solution based on their preferences or contextual factors.

Despite the success of traditional methods, researchers have continued to explore more sophisticated approaches for optimizing functions over the efficient set, particularly when dealing with complex, large-scale problems. Recent work in this area has focused on optimization algorithms that can work directly over the efficient set, bypassing the need to explicitly compute the entire Pareto front. One of these methods is an approach based on the Student copula, which we introduce in this paper. This approach allows for the optimization of a linear function over the efficient set of a multi-objective integer linear programming (MOILP) problem. The method is based on a two-phase algorithm: the first phase uses the Student copula to determine the pilot objective, and the second phase optimizes this objective to find the optimal solution.

The proposed method is compared with two benchmark algorithms from the literature, providing valuable insights into its performance. Additionally, a detailed example is presented to illustrate the steps involved in the algorithm, demonstrating how it can be implemented and its effectiveness in real-world scenarios. The results obtained through this approach show promising potential, particularly in solving problems where traditional methods might struggle due to their computational complexity.

In summary, multi-objective optimization remains a challenging field with diverse approaches for solving problems in various domains. The methods discussed below represent different ways of navigating the Pareto front, each with its strengths and weaknesses. The new method proposed in this paper, which combines the Student copula with optimization over the efficient set, offers a promising alternative that can provide efficient and practical solutions to multi-objective integer linear programming problems.

Our work is organized as follows:

  • In Section 2, we review the literature and present a table summarizing the characteristics of methods that optimize a function over the efficient set of a multi-objective integer linear program.

  • In Section 3, we recall some basic concepts of optimization over the efficient set and present a few copula concepts.

  • The algorithm is developed in the fourth section, where its convergence is proven.

  • A didactic example is given in Section 5 to illustrate the algorithm.

  • Section 6 presents the proposed method's computational results and compares them with those of Chaabane & Pirlot (2010) and Jorge (2009). The comparison with the method of Boland et al. (2017) is not made because the latter is limited to three objectives.

  • Finally, Section 7 concludes the paper.

2 LITERATURE REVIEW OF OPTIMIZING A FUNCTION OVER THE EFFICIENT SET OF A MULTI-OBJECTIVE PROGRAM

In multi-objective programming problems, selecting a solution from the Pareto front is a complex task due to the diversity of non-dominated solutions. Among the most commonly used methods to assist decision-makers in this selection, the weighted objective method transforms the problem into a single-objective one by assigning weights to each objective based on its relative importance Raiffa (1968). However, this approach can be subjective and does not guarantee that all Pareto-optimal solutions will be explored. Another popular method is the epsilon-constraint method, which involves solving multiple single-objective problems while constraining the other objectives to acceptable levels Chankong & Haimes (1983). This method is more flexible and generates different non-dominated solutions, although it can be computationally expensive. Compromise programming is another approach that seeks to minimize a distance function relative to an ideal solution, allowing the decision-maker to select an acceptable compromise between objectives Yu (1973). Interactive methods, such as adaptive search algorithms, engage the decision-maker throughout the process, allowing them to guide the selection of solutions based on their specific preferences Keeney & Raiffa (1976). Finally, approaches based on simulation optimization and multi-criteria analysis (such as the Analytic Hierarchy Process, AHP) integrate various scenarios and formalize the decision-maker's preferences to identify the best solution Saaty (1980); Stewart & Bezdek (2015). These methods, though varied, share the goal of helping the decision-maker navigate through the multitude of possible solutions while considering their priorities and the problem context.

To avoid this complex decision-making process, researchers have been exploring the optimization of a function over the efficient set since the 1970s.

For continuous variables, numerous algorithms have been proposed to solve the multi-objective linear programming (MOLP) problems (e.g., Benson & Sayin (1994), Ecker & Song (1994), Yamamoto (2004), Liu & Ehrgott (2018)).

Additionally, there has been research addressing problems with discrete variables, such as multiobjective linear integer programming (MOILP) problems.

The first work to optimize a linear function over the efficient set of a MOILP was developed by Abbas & Chaabane (2006): a method for optimizing a linear function over the efficient set of a MOILP. Their algorithm is inspired by the works of Benson & Sayin (1994) and Ecker & Song (1994), adapted to the integer problem. The algorithm is based on exploring the edges incident to a solution and cutting edges instead of solutions, using two types of cuts (type 1 and type 2). Unfortunately, Abbas and Chaabane do not provide any computational results, which makes it difficult to evaluate the performance of their algorithm.

Jorge (2009) proposes a method that operates in criterion space. His method is an adaptation of the approach of Sylva & Crema (2004).

The work of Chaabane & Pirlot (2010) uses two main techniques:

  • Progressively reducing the feasible region by adding constraints to eliminate dominated solutions.

  • Exploring edges adjacent to the current non-dominated solution to find alternative efficient solutions.

Chaabane et al. (2012) approached the problem using the augmented weighted Tchebychev norm to characterize non-dominated solutions. This avoids weakly non-dominated solutions. The feasible region is progressively reduced by adding constraints to eliminate dominated solutions found earlier.

Boland et al. (2017) propose a new method for decomposing the search space that reduces the number of sub-spaces to be explored and limits the time spent computing bounds on the value of the linear function. They compare their algorithm to that of Jorge (2009) and show that it is more efficient and faster; unfortunately, it is limited to three objectives.

For larger-scale instances, Zaidi et al. (2024) proposed an algorithm based on genetic algorithms with an adapted architecture. Using this approach, they were able to process up to 5,000 variables in a reasonable time.

Other variants have also been addressed. For multiple objective integer linear programming stochastic problems (MOILPS), a method was proposed by Chaabane & Mebrek (2014). This method converts the problem into a deterministic one and combines the L-shaped method with that of Chaabane & Pirlot (2010).

Quadratic optimization over a discrete Pareto set of a multi-objective linear fractional program was addressed by Moulai & Mekhilef (2021) and Mahdi & Chaabane (2015). For MOILP in a fuzzy environment, a method was proposed by Menni & Chaabane (2020).

The following table (Table 1) summarizes the characteristics of methods that optimize a function over the efficient set of a multi-objective integer linear program.

Table 1
Characteristics of methods that optimize a function over the set of efficient solutions of MOILP problem.

3 MODELING APPROACH AND BASIC CONCEPTS

3.1 Modeling approach

The (Φ-MOILP) problem is the optimization of a linear function Φ over the efficient set of a multiple objective integer linear problem (MOILP).

The MOILP is formulated as follows:

(P)   min Z_j(x) = c^j x,  j = 1, ..., p,   s.t. x ∈ 𝔻.

where 𝔻 = {x ∈ ℤ₊ⁿ | Ax ≤ b} is the feasible set of the problem, with A ∈ ℝ^(m×n), b ∈ ℝ^m, and C = (c^j, j = 1, ..., p) is a p × n matrix defining a number p ≥ 2 of objective functions. We suppose that the feasible set 𝔻 is non-empty and bounded. As the objective functions are usually conflicting, there does not exist any feasible solution optimizing all the criteria simultaneously; thus, the concept of efficient solution is widely used.

We denote by 𝔼 the set of efficient solutions of (P), and the problem we want to tackle is (P_𝔼), defined as:

(P_𝔼)   min Φ = ϕx,   s.t. x ∈ 𝔼.

where ϕ denotes an n-dimensional vector.

We use the Pareto dominance for this method, as defined by Teghem & Kunsch (1986).

Definition 1. A point x̄ ∈ 𝔻 is an efficient solution if and only if there is no x ∈ 𝔻 such that Z_j(x) ≤ Z_j(x̄) for all j ∈ {1, 2, ..., p} and Z_j(x) < Z_j(x̄) for at least one j. Otherwise, x̄ is not efficient and the corresponding vector (Z₁(x̄), Z₂(x̄), ..., Z_p(x̄)) is said to be dominated.
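In computational terms, Definition 1 reduces to a componentwise comparison of objective vectors. A minimal sketch (the function name `dominates` is illustrative), assuming minimization as in problem (P):

```python
def dominates(z_a, z_b):
    """True if objective vector z_a dominates z_b in the Pareto sense
    (minimization): z_a is componentwise <= z_b and strictly < in at
    least one component."""
    return (all(a <= b for a, b in zip(z_a, z_b))
            and any(a < b for a, b in zip(z_a, z_b)))
```

A point x̄ is then efficient precisely when no feasible y satisfies dominates(Z(y), Z(x̄)).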

We first recall that there exists a test to check whether an arbitrary feasible solution x ∈ 𝔻 is efficient or not.

Theorem 1 (Efficiency test). Consider the single-objective linear programming problem P_ψ(x*):

P_ψ(x*)   Ψ = max eψ,   s.t. Cx = Cx* − Iψ,  x ∈ 𝔻,  ψ_i ≥ 0, i = 1, ..., p.

where e = (1, 1, ..., 1) is a (1 × p) vector, C is the objective matrix, ψ is a (p × 1) vector, I is the (p × p) identity matrix and x* is an arbitrary feasible solution of 𝔻. Then x* ∈ 𝔼 if and only if Ψ is equal to zero in problem P_ψ(x*) (see Benson (1978)).

For the remainder of the paper, we will consider problems of the form:

P_k(𝔻*)   min Z_k(x),   s.t. x ∈ 𝔻* ⊆ 𝔻.

where Z_k is a pilot objective (the objective corresponding to the greatest absolute value of the Student copula; it is defined in Section 4.1) and 𝔻* is a subset of 𝔻.

Let x* be an optimal solution of P_k(𝔻*); if x* is not unique, the goal will be to find a non-dominated solution among the optimal solutions of P_k(𝔻*).

Theorem 2 (Uniqueness check). We consider the following problem:

P_k(x*)   max eψ,   s.t. c^k x = c^k x*,  c^j x = c^j x* − Iψ, j ∈ {1, ..., p}\{k},  x ∈ 𝔻*,  ψ_i ≥ 0, i ∈ {1, ..., p}\{k}.

where

  • I is the identity matrix ((p − 1) × (p − 1)),

  • e = (1, 1, ..., 1) is a (1 × (p − 1)) vector,

  • ψ is a ((p − 1) × 1) vector and x* is an optimal solution of P_k(𝔻*).

An optimal solution of P_k(x*) corresponds to a non-dominated point among the optimal solutions of problem P_k(𝔻*).

Proof. Let (x̄, ψ̄) be an optimal solution of P_k(x*). If eψ̄ = 0, we have:

(c^k x̄ = c^k x*) ∧ (c^j x̄ = c^j x*, ∀j ∈ {1, ..., p}\{k})  ⟹  Cx̄ = Cx*.

Using Theorem 1, x̄ ∈ 𝔼, i.e., x̄ is efficient. If eψ̄ > 0: we suppose, on the contrary, that there exists an optimal solution (x̂, ψ̂) of P_k(x*) dominating x̄. We thus have x̂ ∈ 𝔻* with Z_k(x̂) = Z_k(x̄), Z_j(x̂) ≤ Z_j(x̄) for all j ∈ {1, ..., p}\{k}, with at least one strict inequality Z_i(x̂) < Z_i(x̄) for some i ∈ {1, ..., p}\{k}. Consequently, ψ̂_j ≥ ψ̄_j for all j ∈ {1, ..., p}\{i, k} and ψ̂_i > ψ̄_i, and thus eψ̂ > eψ̄, which would imply that (x̄, ψ̄) is not an optimal solution of P_k(x*). □

3.2 Student copula

Copulas, a concept coined by Sklar (1959), constitute a branch of mathematics that investigates the dependence between random variables. They are multivariate distribution functions that disentangle marginal dependence from joint dependence.

The Student copula, also known as the t-copula, represents the underlying copula of a multivariate Student distribution. It captures extreme positive and negative dependencies. In the bivariate case, it is defined as follows:

C̃_(ρ,v)(U₁, U₂) = ∫_{−∞}^{t_v⁻¹(U₁)} ∫_{−∞}^{t_v⁻¹(U₂)} { Γ[(v + 2)/2] / [ Γ(v/2) π v √(1 − ρ²) ] } · [ 1 + (v₁² − 2ρv₁v₂ + v₂²) / (v(1 − ρ²)) ]^(−(v+2)/2) dv₂ dv₁

where t_v⁻¹ is the inverse of the univariate Student-t cumulative distribution function with v degrees of freedom, ρ ∈ [−1, 1] is the correlation coefficient (if v > 2), and Γ is the Gamma function.
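The double integral above is simply the probability that a bivariate Student vector falls below the two transformed thresholds, which suggests a straightforward Monte Carlo evaluation. The sketch below is illustrative only (it is not the estimator used in the computational study, whose implementation details are not given here) and assumes SciPy is available:

```python
import numpy as np
from scipy import stats

def t_copula_cdf(u1, u2, rho, v, n=200_000, seed=0):
    """Monte Carlo estimate of the bivariate t-copula CDF:
    C(u1, u2) = P(T1 <= t_v^{-1}(u1), T2 <= t_v^{-1}(u2)),
    where (T1, T2) is bivariate Student-t with correlation rho
    and v degrees of freedom."""
    rng = np.random.default_rng(seed)
    shape = np.array([[1.0, rho], [rho, 1.0]])
    sample = stats.multivariate_t.rvs(loc=[0.0, 0.0], shape=shape,
                                      df=v, size=n, random_state=rng)
    q1 = stats.t.ppf(u1, df=v)   # inverse univariate t CDF
    q2 = stats.t.ppf(u2, df=v)
    return float(np.mean((sample[:, 0] <= q1) & (sample[:, 1] <= q2)))
```

For elliptical distributions, C(0.5, 0.5) = 1/4 + arcsin(ρ)/(2π), which gives a quick sanity check of the estimator.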

4 2-PHASE ALGORITHM

4.1 Principle of the algorithm

The algorithm contains two phases:

The first phase is an initialization. Its aim is to determine a particular objective Z_k, with k ∈ {1, ..., p}, called the pilot objective. The t-copula C̃_(ρ,v)(Z_j, Φ) is calculated for all j ∈ {1, ..., p}, and the pilot objective Z_k is the one corresponding to the greatest absolute value of the t-copula, i.e.:

|C̃_(ρ,v)(Z_k, Φ)| = max_{j ∈ {1, ..., p}} |C̃_(ρ,v)(Z_j, Φ)|.

Then, at iteration l = 1, an optimal solution of problem P_k(𝔻) is determined. In case the optimal solution is not unique, Theorem 2 is used to determine an efficient solution among them. This first solution is denoted x¹ for l = 1, and it is also denoted x* because it is an efficient solution.

At each iteration l of the second phase, a set S^l = ∅ is initialized, and we keep in S^l the solutions obtained by solving the problems P_k(𝔻*), where

𝔻* = 𝔻_i^l = {x ∈ 𝔻 | Z_i(x) ≤ Z_i(x*) − ε; Φ(x) ≤ Φ(x*)},  i ∈ {1, ..., p}\{k}.   (1)

𝔻_i^l is the set of feasible solutions improving at least the value of Z_i(x*) without any deterioration of the value of the compromise function Φ(x*).

Remark 1. Of course, if 𝔻_i^l = ∅, there is no solution x_i^l; we store in the set 𝕋 the directions that cannot improve our solution, i.e.:

𝔻_i^l = ∅  ⟹  𝕋 ← 𝕋 ∪ {i}.

Again, in case of multiplicity of the optimal solutions of this problem P_k(𝔻*), Theorem 2 is used to choose x_i^l as a non-dominated solution inside 𝔻*.

Nevertheless, let us note that x_i^l is not necessarily an efficient solution, so the efficiency test of Theorem 1 is applied to each solution of S^l.

Two situations are possible:

  • If S_𝔼^l ≠ ∅ (where S_𝔼^l denotes the set of efficient solutions of S^l), i.e., there exists at least one efficient solution in S^l, we choose the efficient solution x* giving the smallest value of the function Φ, i.e.:

x* ∈ S_𝔼^l  with  Φ(x*) = min_{y ∈ S_𝔼^l} Φ(y).

At the next iteration l + 1, we will consider the optimization of the p − 1 − |𝕋| problems P_k(𝔻*), where 𝔻* = 𝔻_i^{l+1}, i ∈ {1, ..., p}\{k}, i ∉ 𝕋, as defined by relation (1), to determine a new set S^{l+1}.

  • If S_𝔼^l = ∅, we try to find an efficient solution x* which improves the function Φ. A new set S^{l+1} is determined, eliminating all the solutions of S^l from the new set of solutions considered.

So we will determine the new set S^{l+1} = {x_i^{l+1} ; i ∈ {1, ..., p}\{k}, i ∉ 𝕋} by optimizing the problems P_k(𝔻*), where

𝔻* = 𝔻_i^{l+1} = {x ∈ 𝔻 | Z_i(x) ≤ min_{y ∈ S^l} Z_i(y) − ε; Φ(x) ≤ Φ(x*)},  i ∈ {1, ..., p}\{k}, i ∉ 𝕋.

This second phase stops when 𝕋 = {1, ..., p}\{k}, i.e., when all sets 𝔻_i^l become empty.
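In matrix form, passing from 𝔻 to the reduced region 𝔻_i^l of relation (1) amounts to appending two linear cuts (rows) to the constraint system Ax ≤ b. A minimal sketch (the function name is illustrative, and NumPy is an assumed dependency):

```python
import numpy as np

def reduced_region(A, b, C, phi, x_star, i, eps=1e-4):
    """Append to (A, b) the two cuts of relation (1):
    Z_i(x) <= Z_i(x*) - eps   (row c^i, right-hand side c^i x* - eps)
    Phi(x) <= Phi(x*)         (row phi, right-hand side phi x*)."""
    A_new = np.vstack([A, C[i], phi])
    b_new = np.concatenate([b, [C[i] @ x_star - eps, phi @ x_star]])
    return A_new, b_new
```

With the data of the didactic example of Section 5 and x* = (5, 0), taking i as the row of Z₁ yields exactly the two extra constraints −x₁ − 2x₂ ≤ −5.0001 and 2x₁ + 3x₂ ≤ 10 of problem P₂(𝔻₁²).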

The optimality of the solution x* is proved in the next subsection.

Algorithm 1
2-phase Algorithm.

4.2 Convergence and properties

Proposition 1. x* is an optimal solution of P_𝔼.

Proof. Let x* be the last efficient solution found by the algorithm, at iteration l̄. x* is a solution of S^l̄, i.e., an optimal solution of a problem P_k(𝔻_{i*}^l̄), i* ∈ {1, ..., p}\{k}, such that x* ∈ S_𝔼^l̄ with

Φ(x*) = min_{x ∈ S_𝔼^l̄} Φ(x).   (2)

As x* is the last efficient solution found, the sets of solutions 𝔻_i^{l̄+1} = {x ∈ 𝔻 | Z_i(x) ≤ Z_i(x*) − ε; Φ(x) ≤ Φ(x*)}, i ∈ {1, ..., p}\{k}, do not contain any efficient solution.

Let us suppose that the proposition is not true, i.e., there exists another efficient solution x̂ ∈ 𝔼 with

Φ(x̂) < Φ(x*).   (3)

As both solutions x* and x̂ are efficient, there exists at least one objective i ∈ {1, ..., p} such that Z_i(x̂) ≤ Z_i(x*) − ε.

  • If i ≠ k, x̂ would be a solution of 𝔻_i^{l̄+1}, which contradicts the fact that there exists no efficient solution in this set.

  • If i = k, we thus have Z_k(x̂) < Z_k(x*).

As x̂ ∈ 𝔼, there must exist î ∈ {1, ..., p}\{k} such that x̂ ∈ 𝔻_î^l̄.

  • If î = i*, then x̂ ∈ 𝔻_{i*}^l̄ with Z_k(x̂) < Z_k(x*), which contradicts the fact that x* is an optimal solution of P_k(𝔻_{i*}^l̄).

  • If î ≠ i*, as x̂ ∈ 𝔼, this solution must be an optimal solution of P_k(𝔻_î^l̄), so that x̂ ∈ S_𝔼^l̄. Relations (2) and (3) are thus incompatible.

In conclusion, such a solution x̂ does not exist and x* is an optimal solution of P_𝔼. □

Proposition 2. The algorithm is finite.

Proof. As the set 𝔻 is bounded and the considered sets 𝔻_i^l are strictly reduced at each iteration, the algorithm is finite.

In the worst case, the algorithm performs |𝔻| − 1 iterations. □

Corollary 1. The algorithm proposed finds an exact optimal solution of P 𝔼 in a finite number of iterations.

5 NUMERICAL ILLUSTRATION

Consider the following example (see Figure 1)

(P)   min Z₁ = −x₁ − 2x₂,  min Z₂ = −3x₁ + 2x₂,  min Z₃ = x₁ − 2x₂,
      s.t. (𝔻):  x₁ + x₂ ≤ 7,  2x₁ ≤ 11,  2x₂ ≤ 7,  x₁, x₂ ∈ ℤ₊.

Figure 1
Set 𝔻 of feasible solutions, the red points are efficient.

Let the compromise problem be

(P_𝔼)   min Φ = 2x₁ + 3x₂,   s.t. (x₁, x₂) ∈ 𝔼.

Phase 1

We calculate

C̃_(ρ,v)(Z₁, Φ) = −0.99,  C̃_(ρ,v)(Z₂, Φ) = 1,  C̃_(ρ,v)(Z₃, Φ) = −0.98.

|C̃_(ρ,v)(Z₂, Φ)| is the maximum; thus k := 2 and Z₂ is the pilot objective.

We solve the problem P 2(𝔻).

P₂(𝔻)   min Z₂ = −3x₁ + 2x₂,   s.t. (x₁, x₂) ∈ 𝔻.

The result of solving problem P₂(𝔻) is x¹ = x* = (5, 0); Φ(x*) = 10; Z₁(x*) = −5; Z₂(x*) = −15; Z₃(x*) = 5. As it is the unique optimal solution of P₂(𝔻), it is not necessary to use Theorem 2.

Phase 2

Iteration 1. Let l = 2; ε = 0.0001; S² = ∅.

We solve the problem P₂(𝔻*) = P₂(𝔻₁²) (see Figure 2).

P₂(𝔻₁²)   min Z₂ = −3x₁ + 2x₂,   s.t. −x₁ − 2x₂ ≤ −5.0001,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 2
P2(D12), the green point is the optimal solution.

The unique optimal solution of problem P₂(𝔻₁²) is x₁² = (2, 2)ᵀ; Φ(x₁²) = 10; Z₁(x₁²) = −6; Z₂(x₁²) = −2 and Z₃(x₁²) = −2, and thus it is not necessary to use Theorem 2.
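Each phase-2 subproblem is an ordinary integer linear program and can be reproduced with any MILP solver. The sketch below uses scipy.optimize.milp (an assumption about the available tooling; the paper's experiments used CPLEX) to solve P₂(𝔻₁²) and recovers the solution (2, 2) stated above:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# P_2(D_1^2): min Z2 = -3 x1 + 2 x2 over D plus the two cuts
# -x1 - 2 x2 <= -5.0001 (improve Z1) and 2 x1 + 3 x2 <= 10 (keep Phi).
A = np.array([[ 1.0,  1.0],    # x1 + x2 <= 7
              [ 2.0,  0.0],    # 2 x1    <= 11
              [ 0.0,  2.0],    # 2 x2    <= 7
              [-1.0, -2.0],    # cut on Z1
              [ 2.0,  3.0]])   # cut on Phi
b = np.array([7.0, 11.0, 7.0, -5.0001, 10.0])

res = milp(c=[-3.0, 2.0],                       # minimize Z2
           constraints=LinearConstraint(A, -np.inf, b),
           integrality=np.ones(2),              # x1, x2 integer
           bounds=Bounds(0, np.inf))            # x1, x2 >= 0
x_opt = tuple(np.round(res.x).astype(int))      # -> (2, 2), with Z2 = -2
```
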

Then, we solve the problem P₂(𝔻*) = P₂(𝔻₃²) (see Figure 3).

P₂(𝔻₃²)   min Z₂ = −3x₁ + 2x₂,   s.t. x₁ − 2x₂ ≤ 4.9999,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 3
P2(D32), the green point is the optimal solution.

The unique optimal solution of problem P₂(𝔻₃²) is x₃² = (4, 0)ᵀ; Φ(x₃²) = 8; Z₁(x₃²) = −4; Z₂(x₃²) = −12 and Z₃(x₃²) = 4, and thus it is not necessary to use Theorem 2.

We use Theorem 1 to check the efficiency of x₁² and x₃², solving the problems P_ψ(x₁²) and P_ψ(x₃²).

The solution of problem P_ψ(x₁²) is x = (4, 3), with ψ = (4, 4, 0) and Ψ = 8 > 0; thus x₁² is not an efficient solution.

The solution of problem P_ψ(x₃²) is x = (5, 1), with ψ = (3, 1, 1) and Ψ = 5 > 0; thus x₃² is not an efficient solution.
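The two efficiency tests above can also be reproduced with a MILP solver. Since ψ = Cx* − Cx in P_ψ(x*), the test reduces to a single integer program in x: maximize e(Cx* − Cx) subject to Cx ≤ Cx* and x ∈ 𝔻. A sketch for the didactic example, using scipy.optimize.milp (an assumption about the available tooling; the helper name `efficiency_gap` is illustrative):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

C   = np.array([[-1.0, -2.0],   # Z1
                [-3.0,  2.0],   # Z2
                [ 1.0, -2.0]])  # Z3
A_D = np.array([[1.0, 1.0],     # x1 + x2 <= 7
                [2.0, 0.0],     # 2 x1    <= 11
                [0.0, 2.0]])    # 2 x2    <= 7
b_D = np.array([7.0, 11.0, 7.0])

def efficiency_gap(x_star):
    """Optimal value Psi of Benson's test P_psi(x*): substituting
    psi = Cx* - Cx >= 0 leaves min sum_j Z_j(x) s.t. Cx <= Cx*, x in D.
    x* is efficient iff the returned Psi equals 0."""
    z_star = C @ np.asarray(x_star, dtype=float)
    A = np.vstack([A_D, C])                 # x in D and Cx <= Cx*
    b = np.concatenate([b_D, z_star])
    res = milp(c=C.sum(axis=0),             # minimize e.Cx
               constraints=LinearConstraint(A, -np.inf, b),
               integrality=np.ones(2),
               bounds=Bounds(0, np.inf))
    return z_star.sum() - res.fun           # Psi = e.(Cx* - Cx)
```

Here efficiency_gap((2, 2)) returns Ψ = 8 and efficiency_gap((4, 0)) returns Ψ = 5, matching the values above, while an efficient point such as (0, 3) returns 0.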

We are in the case where S_𝔼² = ∅.

Iteration 2. l = 3; ε = 0.0001.

We first solve the problem P₂(𝔻*) = P₂(𝔻₁³) (see Figure 4).

P₂(𝔻₁³)   min Z₂ = −3x₁ + 2x₂,   s.t. −x₁ − 2x₂ ≤ −6.0001,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 4
P₂(𝔻₁³).

This problem P₂(𝔻₁³) is infeasible, so that 𝕋 = {1}.

We then solve the problem P₂(𝔻*) = P₂(𝔻₃³) (see Figure 5).

P₂(𝔻₃³)   min Z₂ = −3x₁ + 2x₂,   s.t. x₁ − 2x₂ ≤ −2.0001,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 5
P2(D33), the green point is the optimal solution.

The unique optimal solution of problem P₂(𝔻₃³) is x₃³ = (1, 2)ᵀ; Φ(x₃³) = 8; Z₁(x₃³) = −5; Z₂(x₃³) = 1 and Z₃(x₃³) = −3, and thus it is not necessary to use Theorem 2.

We use Theorem 1 to check the efficiency of x₃³, solving the problem P_ψ(x₃³).

The solution of problem P_ψ(x₃³) is x = (3, 3), with ψ = (4, 4, 0) and Ψ = 8 > 0; thus x₃³ is not an efficient solution. We are in the case S_𝔼³ = ∅.

Iteration 3. l = 4; ε = 0.0001.

We solve the problem P₂(𝔻*) = P₂(𝔻₃⁴) (see Figure 6).

P₂(𝔻₃⁴)   min Z₂ = −3x₁ + 2x₂,   s.t. x₁ − 2x₂ ≤ −3.0001,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 6
P2(D34), the green point is the optimal solution.

The unique optimal solution of problem P₂(𝔻₃⁴) is x₃⁴ = (0, 2)ᵀ; Φ(x₃⁴) = 6; Z₁(x₃⁴) = −4; Z₂(x₃⁴) = 4 and Z₃(x₃⁴) = −4, and thus it is not necessary to use Theorem 2.

We use Theorem 1 to check the efficiency of x₃⁴, solving the problem P_ψ(x₃⁴).

The solution of problem P_ψ(x₃⁴) is x = (2, 3), with ψ = (4, 4, 0) and Ψ = 8 > 0; thus x₃⁴ is not an efficient solution.

We are in the case S_𝔼⁴ = ∅.

Iteration 4. l = 5; ε = 0.0001.

We solve the problem P₂(𝔻*) = P₂(𝔻₃⁵) (see Figure 7).

P₂(𝔻₃⁵)   min Z₂ = −3x₁ + 2x₂,   s.t. x₁ − 2x₂ ≤ −4.0001,  2x₁ + 3x₂ ≤ 10,  (x₁, x₂) ∈ 𝔻.

Figure 7
P2(D35), the green point is the optimal solution.

The unique optimal solution of problem P₂(𝔻₃⁵) is x₃⁵ = (0, 3)ᵀ; Φ(x₃⁵) = 9; Z₁(x₃⁵) = −6; Z₂(x₃⁵) = 6 and Z₃(x₃⁵) = −6, and thus it is not necessary to use Theorem 2.

We use Theorem 1 to check the efficiency of x₃⁵, solving the problem P_ψ(x₃⁵).

The solution of problem P_ψ(x₃⁵) is x = (0, 3), with ψ = (0, 0, 0) and Ψ = 0; thus x₃⁵ is an efficient solution. We are here in the case where S_𝔼⁵ ≠ ∅, and the new efficient solution x* is (0, 3).

Iteration 5. l = 6; ε = 0.0001.

We solve the problem P₂(𝔻*) = P₂(𝔻₃⁶) (see Figure 8).

P₂(𝔻₃⁶)   min Z₂ = −3x₁ + 2x₂,   s.t. x₁ − 2x₂ ≤ −6.0001,  2x₁ + 3x₂ ≤ 9,  (x₁, x₂) ∈ 𝔻.

Figure 8
P2(D36).

This problem P2(D36) is infeasible, so that 𝕋 = {1, 3}.

The procedure stops because it is impossible to improve any direction.

x* = (0, 3) is the optimal solution of P_𝔼, with Φ(x*) = 9, Z₁(x*) = −6, Z₂(x*) = 6 and Z₃(x*) = −6.
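Because the feasible set of this example contains only 23 integer points, the result can be checked independently by exhaustive enumeration, without running the algorithm. A brute-force sketch (this is a verification device, not the 2-phase algorithm itself):

```python
from itertools import product

# Feasible set D: integer points with x1 + x2 <= 7, 2 x1 <= 11, 2 x2 <= 7.
D = [(x1, x2) for x1, x2 in product(range(6), range(4)) if x1 + x2 <= 7]

def Z(x):                      # the three objectives, all minimized
    x1, x2 = x
    return (-x1 - 2*x2, -3*x1 + 2*x2, x1 - 2*x2)

def phi(x):                    # compromise function
    return 2*x[0] + 3*x[1]

def dominates(za, zb):         # Pareto dominance (minimization)
    return all(a <= b for a, b in zip(za, zb)) and za != zb

# Efficient set E and the minimizer of Phi over it.
E = [x for x in D if not any(dominates(Z(y), Z(x)) for y in D)]
best = min(E, key=phi)         # -> (0, 3) with phi = 9
```

The enumeration confirms both the efficient points highlighted in the example (e.g., (5, 0) is efficient while (2, 2) is not) and the optimal value Φ = 9 at (0, 3).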

6 COMPUTATIONAL STUDY

In this section, we compare our algorithm with those of Chaabane et al. (2012) (run on an Intel Pentium B950 @ 2.10GHz CPU, with a single-thread rating of 945) and Jorge (2009) (run on an Intel Pentium M 1.86GHz CPU, with a single-thread rating of 499). Our algorithm is run on a computer with an Intel Core i3-4158U @ 2.00GHz CPU (single-thread rating of 1075) and 4 GB of RAM. We have used the CPLEX 12.6.1 library for solving the integer linear programming problems. To assess the performance of the algorithm, a total of 195 instances were randomly generated. The components of the matrices A, C, and the vector b were drawn in the ranges [1,30], [-20,20] and [50,150], respectively. For the vector ϕ, the same distribution as for C was used. To avoid infeasible problems, all constraints are of the "≤" type. Since all the coefficients of A are positive, the feasible region is bounded. The number of variables n of the problem instances varied from 10 to 500, the number of constraints m varied from 10 to 300, and the number of objective functions p equals 3, 5, 7, and 8, giving 39 categories; five instances are solved in each category. These benchmarks can be accessed through the following link: https://zenodo.org/records/14938610.

6.1 Computational results of the 2-phase algorithm.

The computational results obtained are summarized in Table 2, which reports the average CPU time (in seconds), the average number of iterations required to solve the problems, and the average convergence metric of the function Φ. The minimum and maximum values of each measure are reported in brackets.

Table 2
Table of computational results.

Convergence metric: The metric used to evaluate the improvement of the objective function Φ is as follows:

R_P = ln( Φ_min(0) / Φ_min(T) ).

The term Φmin(T) in the metric refers to the optimal value of the function Φ achieved at iteration T.

Table 2 shows that the number of iterations remains relatively stable and does not increase much with the problem size or the number of objectives. However, this is not the case for the CPU time, which increases significantly with the problem size and the number of objectives. For higher dimensions, solving such problems becomes difficult due to the multiplicity of objectives and the discrete nature of the variables.

6.2 Comparison between the 2-phase algorithm and the Chaabane et al. and Jorge algorithms.

The 2-phase algorithm tackles the problem differently. Unlike other algorithms, which start with the minimum of the function Φ over the set 𝔻, our algorithm starts with the efficient solution that minimizes the objective closest to Φ. This starting point is a local minimum.

The results obtained show that the 2-phase algorithm can reach 500 variables in a reasonable time, which is not the case for the algorithms of Chaabane et al. and Jorge, which do not exceed 220 and 120 variables, respectively (see Table 3 and Table 4).

Table 3
Table of comparison with the Chaabane et al. algorithm.

Table 4
Table of comparison with Jorge's algorithm.

The comparison of the CPU time results with the Chaabane et al. method shows that the 2-phase algorithm achieves better execution times on most instances, especially when the number of variables is large; it also shows a stable growth curve, whereas the Chaabane et al. method exhibits erratic volatility (see Figure 9, Figure 10 and Figure 11). On the other hand, the Jorge method shows a certain stability in its evolution, but its results are limited to 120 variables, while the 2-phase algorithm reaches 500 variables and is better in most cases (see Figure 12, Figure 13 and Figure 14).

Figure 9
Evolution of the CPU time of the 2-phase and Chaabane algorithms for p=3.

Figure 10
Evolution of the CPU time of the 2-phase and Chaabane algorithms for p=5.

Figure 11
Evolution of the CPU time of the 2-phase and Chaabane algorithms for p=8.

Figure 12
Evolution of the CPU time of the 2-phase and Jorge algorithms for p=3.

Figure 13
Evolution of the CPU time of the 2-phase and Jorge algorithms for p=5.

Figure 14
Evolution of the CPU time of the 2-phase and Jorge algorithms for p=7.

7 CONCLUSION

This paper presented a deterministic algorithm for optimizing a linear function over the efficient set of a multi-objective integer linear programming problem. The proposed algorithm is based on two phases: the specificity of the initial phase consists of selecting the pilot objective, and it produces the first efficient solution. The second phase consists of optimizing the pilot objective at each iteration to search for an efficient solution that improves the function Φ. The aim is to progressively reduce the set of feasible solutions, successively improving at least one objective other than the pilot objective without any deterioration of the function Φ.

Our algorithm was implemented and tested on several problems randomly generated from a discrete uniform distribution, and the results obtained are very encouraging. Our computational results display an outstanding performance on problems of considerable size (up to 300 constraints, 500 variables, and eight objectives). For future research, we suggest the creation of specific benchmarks for this kind of problem and applying these methods to classical combinatorial optimization problems such as TSP and Flowshop.

References

  • ABBAS M & CHAABANE D. 2006. Optimizing a linear function over an integer efficient set. European Journal of Operational Research, 174: 1140-1161.
  • BENSON H. 1978. Existence of Efficient Solutions for Vector Maximization Problems. Journal of Optimization Theory and Applications, 26: 569-580.
  • BENSON H & SAYIN S. 1994. Optimization over the Efficient Set: Four special cases. Journal of Optimization Theory and Applications, 80: 3-17.
  • BOLAND N, CHARKHGARD H & SAVELSBERGH M. 2017. A New Method for Optimizing a Linear Function over the Efficient Set of a Multiobjective Integer Program. European Journal of Operational Research, 260: 904-919.
  • CHAABANE D, BRAHMI B & REMDANI Z. 2012. The augmented weighted Tchebychev norm for optimizing a linear function over an integer efficient set of a multicriteria linear program. International Transactions in Operational Research, 19: 531-545.
  • CHAABANE D & MEBREK F. 2014. Optimization of a linear function over the set of stochastic efficient solutions. Computational Management Science, 11: 157-178.
  • CHAABANE D & PIRLOT M. 2010. A method for optimizing over the efficient set. Journal of Industrial and Management Optimization, 6: 811-823.
  • CHANKONG V & HAIMES YY. 1983. Multi-objective decision making: Theory and methodology. North-Holland.
  • ECKER JG & SONG HG. 1994. Optimizing a Linear Function over an Efficient Set. Journal of Optimization Theory and Applications , 83: 541-563.
  • JORGE JM. 2009. An algorithm for optimizing a linear function over an integer efficient set. European Journal of Operational Research, 195: 98-103.
  • KEENEY RL & RAIFFA H. 1976. Decisions with multiple objectives: Preferences and value trade-offs. Wiley.
  • LIU Z & EHRGOTT M. 2018. Primal and Dual Algorithms for Optimization over the Efficient Set. Optimization: A Journal of Mathematical Programming and Operations Research, 67: 1-26.
  • MAHDI S & CHAABANE D. 2015. A linear fractional optimization over an integer efficient set. RAIRO-Operations Research-Recherche Opérationnelle, 49(2): 265-278.
  • MENNI A & CHAABANE D. 2020. A possibilistic optimization over an integer efficient set within a fuzzy environment. RAIRO Operations Research, 54: 1437-1452.
  • MOULAI M & MEKHILEF A. 2021. Quadratic optimization over a discrete pareto set of a multiobjective linear fractional program. Optimization: A Journal of Mathematical Programming and Operations Research, 70: 1425-1442.
  • RAIFFA H. 1968. Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Addison-Wesley.
  • SAATY TL. 1980. The Analytic Hierarchy Process: Planning, Priority Setting, Resources Allocation. McGraw-Hill.
  • SKLAR M. 1959. Fonctions de répartition à n dimensions et leurs marges. In Annales de l’ISUP, 8: 229-231.
  • STEWART TJ & BEZDEK JC. 2015. Decision making in multi-objective decision problems. Springer.
  • SYLVA J & CREMA A. 2004. A Method for Finding the set of Non-dominated Vectors for Multiple Objective Integer Linear Programs. European Journal of Operational Research , 158: 46-55.
  • TEGHEM J & KUNSCH P. 1986. A Survey of Techniques for Finding Efficient Solutions to Multiobjective Integer Linear Programming. Asia Pacific Journal of Operations Research, 3: 95-108.
  • YAMAMOTO Y. 2004. Optimization over the Efficient Set: Overview. Journal of Global Optimization, 24: 285-317.
  • YU PL. 1973. A class of solutions for group decision problems. Management Science, 19(8): 668-677.
  • ZAIDI A, CHAABANE D, ASLI L, IDIR L & MATOUB S. 2024. A genetics algorithms for optimizing a function over the integer efficient set. Croatian Operational Research Review, 15(1): 75-88.
Funding

The authors declare no funding was received for this work.

Data Availability

The data that support the findings of this study are available from the link: https://zenodo.org/records/14938610.

Edited by

  • Review Process Editor:
    Annibal Parracho Sant’Anna.


Publication Dates

  • Publication in this collection
    19 May 2025
  • Date of issue
    2025

History

  • Received
    23 Sept 2024
  • Accepted
    04 Jan 2025
Sociedade Brasileira de Pesquisa Operacional, Rua Mayrink Veiga, 32 - sala 601 - Centro, 20090-050, Tel.: +55 21 2263-0499 - Rio de Janeiro - RJ - Brazil
E-mail: sobrapo@sobrapo.org.br