ABSTRACT
In this paper, we present a new deterministic method, called the 2-phase algorithm, to optimize a linear function over the efficient set of a multiple objective integer linear problem. The algorithm proceeds in two phases: the first uses the Student copula to determine the pilot objective, and the second optimizes this objective to obtain the optimal solution. To assess the algorithm's performance, we compare it with two benchmark algorithms from the literature. A detailed didactic example illustrates the different steps. The algorithm is implemented, and interesting results are obtained and discussed.
Keywords:
deterministic optimization; multi-objective optimization; optimization over the efficient set; Student copula
1 INTRODUCTION
Multi-objective optimization problems (MOP) are characterized by the need to simultaneously optimize multiple, often conflicting objectives. In real-world scenarios, decision-makers are often tasked with selecting a solution from a set of non-dominated solutions that lie on the Pareto front. The Pareto front is a key concept in multi-objective optimization, representing the set of all solutions that cannot be improved in one objective without degrading another. However, selecting the "best" solution from this set can be a challenging and subjective task due to the diversity and trade-offs inherent in these problems (e.g., Benson (1978), Sylva & Crema (2004), and Teghem & Kunsch (1986)).
The primary difficulty in multi-objective decision-making lies in choosing between multiple, often conflicting objectives. For example, in engineering, environmental planning, and economics, decision-makers are frequently required to make trade-offs between efficiency, cost, and sustainability. Such problems cannot be solved directly by traditional optimization methods, which focus on optimizing a single objective. Therefore, the field of multi-objective optimization aims to provide a set of solutions that represent different trade-offs between the objectives, leaving the decision-maker to choose the most suitable solution based on their preferences or contextual factors.
Despite the success of traditional methods, researchers have continued to explore more sophisticated approaches for optimizing functions over the efficient set, particularly when dealing with complex, large-scale problems. Recent work in this area has focused on optimization algorithms that work directly over the efficient set, bypassing the need to explicitly compute the entire Pareto front. One of these methods is an approach based on the Student copula, which we introduce in this paper. This approach allows for the optimization of a linear function over the efficient set of a multi-objective integer linear programming (MOILP) problem. The method is based on a two-phase algorithm: the first phase uses the Student copula to determine the pilot objective, and the second phase optimizes this objective to find the optimal solution.
The proposed method is compared with two benchmark algorithms from the literature, providing valuable insights into its performance. Additionally, a detailed example is presented to illustrate the steps involved in the algorithm, demonstrating how it can be implemented and its effectiveness in real-world scenarios. The results obtained through this approach show promising potential, particularly in solving problems where traditional methods might struggle due to their computational complexity.
In summary, multi-objective optimization remains a challenging field with diverse approaches for solving problems in various domains. The methods discussed below represent different ways of navigating the Pareto front, each with its strengths and weaknesses. The new method proposed in this paper, which combines the Student copula with optimization over the efficient set, offers a promising alternative that can provide efficient and practical solutions to multi-objective integer linear programming problems.
Our work is organized as follows:
- In Section 2, we present the literature review, together with a table summarizing the characteristics of methods that optimize a function over the efficient set of a multi-objective integer linear program.
- In Section 3, we recall some basic concepts of optimization over the efficient set and present a few copula concepts.
- The algorithm is developed in Section 4, where its convergence is proven.
- A didactic example is given in Section 5 to illustrate the algorithm.
- Section 6 presents the proposed method's computational results and compares them with those of Chaabane & Pirlot (2010) and Jorge (2009). The comparison with the method of Boland et al. (2017) is not made because that method is limited to three objectives.
- Finally, we end with a conclusion.
2 LITERATURE REVIEW OF OPTIMIZATION OF A FUNCTION OVER THE EFFICIENT SET OF A MULTIOBJECTIVE PROGRAM
In multi-objective programming problems, selecting a solution from the Pareto front is a complex task due to the diversity of non-dominated solutions. Among the most commonly used methods to assist decision-makers in this selection, the weighted objective method transforms the problem into a single-objective one by assigning weights to each objective based on its relative importance Raiffa (1968). However, this approach can be subjective and does not guarantee that all Pareto-optimal solutions will be explored. Another popular method is the epsilon-constraint method, which involves solving multiple single-objective problems while constraining the other objectives to acceptable levels Chankong & Haimes (1983). This method is more flexible and generates different non-dominated solutions, although it can be computationally expensive. Compromise programming is another approach that seeks to minimize a distance function relative to an ideal solution, allowing the decision-maker to select an acceptable compromise between objectives Yu (1973). Interactive methods, such as adaptive search algorithms, engage the decision-maker throughout the process, allowing them to guide the selection of solutions based on their specific preferences Keeney & Raiffa (1976). Finally, approaches based on simulation optimization and multi-criteria analysis (such as the Analytic Hierarchy Process, AHP) integrate various scenarios and formalize the decision-maker's preferences to identify the best solution Saaty (1980); Stewart & Bezdek (2015). These methods, though varied, share the goal of helping the decision-maker navigate through the multitude of possible solutions while considering their priorities and the problem context.
As another way to avoid this complex decision-making process, researchers have been exploring the optimization of a function over the efficient set since the 1970s.
For continuous variables, numerous algorithms have been proposed to solve the multi-objective linear programming (MOLP) problems (e.g., Benson & Sayin (1994), Ecker & Song (1994), Yamamoto (2004), Liu & Ehrgott (2018)).
Additionally, there has been research addressing problems with discrete variables, such as multiobjective linear integer programming (MOILP) problems.
The first method for optimizing a linear function over the efficient set of a MOILP was developed by Abbas & Chaabane (2006). Their algorithm is inspired by the works of Benson & Sayin (1994) and Ecker & Song (1994), adapted to the integer case. It is based on exploring the edges incident to a solution and cutting edges instead of solutions, using two types of cuts (type 1 and type 2). Unfortunately, Abbas and Chaabane do not provide any computational results, which makes it difficult to evaluate the performance of their algorithm.
Jorge (2009) proposes a method that operates in criterion space. His method is an adaptation of the approach of Sylva & Crema (2004).
The work of Chaabane & Pirlot (2010) uses two main techniques:
- progressively reducing the feasible region by adding constraints to eliminate dominated solutions;
- exploring edges adjacent to the current non-dominated solution to find alternative efficient solutions.
Chaabane et al. (2012) approached the problem using the augmented weighted Tchebychev norm to characterize non-dominated solutions. This avoids weakly non-dominated solutions. The feasible region is progressively reduced by adding constraints to eliminate dominated solutions found earlier.
Boland et al. (2017) propose a new method for decomposing the search space that reduces the number of sub-spaces that need to be explored and limits the time spent on computing bounds on the value of the linear function. They compare their algorithm to that of Jorge (2009) and show that it is more efficient and faster. Unfortunately, however, it is limited to three objectives.
For larger-scale instances, Zaidi et al. (2024) proposed an algorithm based on genetic algorithms with an adapted architecture. Using this approach, they were able to process up to 5,000 variables in a reasonable time.
Other problem classes have also been addressed. For multiple objective integer linear programming stochastic problems (MOILPS), a method is proposed by Chaabane & Mebrek (2014); it converts the problem into a deterministic one and combines the L-shaped method with the method of Chaabane & Pirlot (2010).
The quadratic optimization over a discrete Pareto set of a multi-objective linear fractional program is addressed by Moulai & Mekhilef (2021) and Mahdi & Chaabane (2015). For MOILP in a fuzzy environment, a method is proposed by Menni & Chaabane (2020).
The following table summarizes the characteristics of methods optimizing the function over the efficient set of multiobjective integer linear program.
Table 1 – Characteristics of methods that optimize a function over the set of efficient solutions of a MOILP problem.
3 MODELING APPROACH AND BASIC CONCEPTS
3.1 Modeling approach
The (Φ-MOILP) problem is the optimization of a linear function Φ over the efficient set of a multiple objective integer linear problem (MOILP).
The MOILP is formulated as follows:
where 𝔻 denotes the feasible set of the problem, with A ∈ ℝ^{m×n}, b ∈ ℝ^m, and C is a p × n matrix defining a number p ≥ 2 of objective functions. We suppose that the feasible set 𝔻 is nonempty and bounded. Since the objective functions are usually conflicting, no feasible solution optimizes all the criteria simultaneously; hence, the concept of efficient solution is widely used.
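For concreteness, and assuming, as the numerical example of Section 5 and the test instances of Section 6 suggest, that the p objectives are minimized over nonnegative integer variables subject to linear inequality constraints, the model can be written as:

$$
(\mathrm{MOILP})\qquad \min\; Z_j(x) = C_j x,\;\; j = 1,\dots,p, \qquad \text{s.t. } x \in \mathbb{D} = \{\, x \in \mathbb{Z}^n : Ax \le b,\; x \ge 0 \,\}.
$$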
We denote by 𝔼 the set of efficient solutions of (MOILP); the problem we want to tackle, P_𝔼, is then the optimization of the linear function Φ over 𝔼,
where ϕ denotes the n-dimensional vector of coefficients of Φ.
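Under the same minimization convention as above, this problem reads:

$$
(P_{\mathbb{E}})\qquad \min\; \Phi(x) = \phi^{\top} x \qquad \text{s.t. } x \in \mathbb{E}.
$$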
We use Pareto dominance for this method, as defined by Teghem & Kunsch (1986).
Definition 1. A point x* ∈ 𝔻 is an efficient solution if and only if there is no x ∈ 𝔻 such that Z_j(x) ≤ Z_j(x*) for all j ∈ {1, 2, ..., p} and Z_j(x) < Z_j(x*) for at least one j. Otherwise, x* is not efficient and the corresponding objective vector is said to be dominated.
We first recall that there exists a test to check whether an arbitrary feasible solution x* ∈ 𝔻 is an efficient solution or not.
Theorem 1 (Efficiency test). Consider the single-objective linear programming problem P_ψ(x*)
where e^⊤ = (1, 1, ..., 1) is a (1 × p) vector, C is the objective matrix, ψ is a (p × 1) vector, I is the (p × p) identity matrix, and x* is an arbitrary feasible solution in 𝔻. Then x* ∈ 𝔼 if and only if the optimal value Ψ of problem P_ψ(x*) equals zero (see Benson (1978)).
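Under the minimization convention assumed here, Benson's test can be written as:

$$
P_{\psi}(x^{*})\qquad \max\; \Psi = e^{\top}\psi \qquad \text{s.t. } Cx + I\psi = Cx^{*},\;\; x \in \mathbb{D},\;\; \psi \ge 0.
$$

With this reading, Ψ = 0 means that no feasible solution is at least as good as x* on every objective and strictly better on some objective, i.e., x* is efficient.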
For the remainder of the paper, we will consider the problem in the form:
where Z_k is the pilot objective (the objective corresponding to the greatest absolute value of the Student copula; the Student copula is presented in Section 3.2 and the selection of the pilot objective in Section 4.1) and 𝔻* is a subset of 𝔻.
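With the same convention, each restricted problem takes the form:

$$
P_k(\mathbb{D}^{*})\qquad \min\; Z_k(x) = C_k x \qquad \text{s.t. } x \in \mathbb{D}^{*} \subseteq \mathbb{D}.
$$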
Let x* be an optimal solution of P_k(𝔻*). If x* is not unique, the goal is to find a non-dominated solution among the optimal solutions of P_k(𝔻*).
Theorem 2 (Unicity check). We consider the following problem
where
- I is the ((p − 1) × (p − 1)) identity matrix,
- e^⊤ = (1, 1, ..., 1) is a (1 × (p − 1)) vector,
- ψ is a (1 × (p − 1)) vector, and x* is an optimal solution of P_k(𝔻*).
An optimal solution of Pk (x ∗) corresponds to a non-dominated point among the optimal solutions of problem P k (𝔻∗).
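A form of P_k(x*) consistent with the components listed above (and with the minimization convention assumed here) keeps the pilot objective at its optimal value and maximizes the total slack over the remaining p − 1 objectives:

$$
P_k(x^{*})\qquad \max\; \Psi = e^{\top}\psi \qquad \text{s.t. } C_j x + \psi_j = C_j x^{*}\;(j \neq k),\;\; C_k x = C_k x^{*},\;\; x \in \mathbb{D}^{*},\;\; \psi \ge 0.
$$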
Proof. Let x̄ be an optimal solution of P_k(x*). If the optimal value Ψ is zero, then, using Theorem 1, x̄ is efficient. If Ψ ≠ 0, suppose on the contrary that there exists an optimal solution of P_k(x*) dominating x̄, i.e., at least as good on every objective and strictly better on at least one. Such a solution would yield a strictly larger objective value in P_k(x*), which would imply that x̄ is not an optimal solution of P_k(x*), a contradiction. □
3.2 Student copula
Copulas, a concept coined by Sklar (1959), constitute a branch of mathematics that investigates the dependency between random variables. They are multivariate distribution functions that disentangle the marginal distributions from the joint dependence structure.
The Student copula, also known as the t-copula, is the copula underlying a multivariate Student distribution. It captures both extreme positive and extreme negative dependencies. We use it in its bivariate form, which is defined as follows:
where t_ν^{-1} is the quantile function (inverse cumulative distribution function) of the univariate Student t distribution with ν degrees of freedom, ρ ∈ [−1, 1] is the correlation coefficient (when ν > 2), and Γ is the Gamma function.
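With the quantities just introduced, the standard bivariate t-copula with parameters ρ and ν reads:

$$
C_{\rho,\nu}(u,v) = \int_{-\infty}^{t_{\nu}^{-1}(u)} \int_{-\infty}^{t_{\nu}^{-1}(v)} \frac{\Gamma\!\left(\tfrac{\nu+2}{2}\right)}{\Gamma\!\left(\tfrac{\nu}{2}\right)\,\pi\,\nu\,\sqrt{1-\rho^{2}}} \left(1 + \frac{s^{2} - 2\rho s t + t^{2}}{\nu\,(1-\rho^{2})}\right)^{-\frac{\nu+2}{2}} \mathrm{d}s\,\mathrm{d}t.
$$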
4 2-PHASE ALGORITHM
4.1 Principle of the algorithm
The algorithm contains two phases :
The first phase is an initialization. Its aim is to determine a particular objective Z_k, with k ∈ {1,..., p}, called the pilot objective. The t-copula is calculated for every j ∈ {1,..., p}, and the pilot objective Z_k is the one corresponding to the greatest absolute value of the t-copula.
Then, at iteration l = 1, an optimal solution of problem P_k(𝔻) is determined. In case this optimal solution is not unique, Theorem 2 is used to determine an efficient solution among the optima. This first solution is denoted x^1 for l = 1, and it is also denoted x* because it is an efficient solution.
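The paper does not spell out how the copula value attached to each objective is computed. The Python sketch below makes one plausible choice explicit: the t-copula correlation parameter ρ_j between the coefficient vector ϕ of Φ and the coefficient row of each Z_j is estimated from Kendall's τ via ρ = sin(πτ/2), a standard relation for elliptical copulas, and the pilot objective is the index with the largest |ρ_j|. The pairing of coefficient vectors (rather than, say, sampled objective values) and the Kendall-τ estimator are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import kendalltau

def pilot_objective(phi, C):
    """Return the index k of the pilot objective Z_k (Phase 1 sketch).

    phi : 1-D array with the coefficients of the compromise function Phi.
    C   : 2-D array (p x n) whose rows are the coefficients of Z_1, ..., Z_p.

    Assumption: the "Student copula value" of objective j is taken to be the
    t-copula correlation parameter rho_j estimated from Kendall's tau between
    phi and C[j], using rho = sin(pi * tau / 2).  The pilot objective is the
    one maximizing |rho_j|, as stated in the text.
    """
    rhos = []
    for row in C:
        tau, _ = kendalltau(phi, row)           # rank correlation between phi and Z_j
        rhos.append(np.sin(np.pi * tau / 2.0))  # implied t-copula correlation parameter
    return int(np.argmax(np.abs(rhos)))
```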
At each iteration l of the second phase, a set S_l is initialized, and we keep in S_l the solutions obtained by solving the problems P_k(𝔻*), where
𝔻* is the set of feasible solutions improving at least the value of Z_i(x*) without any deterioration of the value of the compromise function Φ(x*) (relation (1)).
Remark 1. Of course, if there is no such solution (i.e., the corresponding problem P_k(𝔻*) is infeasible), we store in the set 𝕋 the direction that cannot improve our solution.
Again, in case of multiplicity of optimal solutions of the problem P_k(𝔻*), Theorem 2 is used to choose a non-dominated solution inside 𝔻*.
Nevertheless, let us note that such a solution is not necessarily an efficient solution of the original problem. Hence, the efficiency test of Theorem 1 is applied to each solution of S_l.
Two situations are possible:
- If S_l contains at least one efficient solution, we logically choose as the new x* the efficient solution giving the smallest value of the function Φ. At the next iteration l + 1, we consider the optimization of the p − 1 − |𝕋| problems P_k(𝔻*), with 𝔻* as defined by relation (1), to determine a new set S_{l+1}.
- If no solution of S_l is efficient, we try to find an efficient solution x* which improves the function Φ by determining a new set S_{l+1}, eliminating all the solutions of S_l from the new set of solutions considered. We therefore determine the new set by optimizing the problems P_k(𝔻*) with the solutions already generated excluded.
This second phase stops when 𝕋 = {1,..., p} \ {k}, i.e., when all the sets 𝔻* become empty.
The optimality of solution x ∗ is proved in the next subsection.
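To make the flow of the two phases concrete, the following Python sketch assembles the pieces with the PuLP modelling library and its default CBC solver. It is a simplified reading of Section 4.1, not the authors' implementation: all objectives and Φ are assumed to be minimized over {x ∈ ℤ^n, x ≥ 0 : Ax ≤ b}; the restricted set 𝔻* is assumed to be {x ∈ 𝔻 : Z_i(x) ≤ Z_i(x*) − ε, Φ(x) ≤ Φ(x*)}; the tie-breaking of Theorem 2 and the re-solving step used when no candidate in S_l is efficient are omitted, with a simple safeguard against cycling in their place.

```python
import numpy as np
import pulp

def _solve_min(c, A, b, cuts, n):
    """min c.x over integer x >= 0 with A x <= b and extra cuts (row, rhs): row.x <= rhs.
    Returns the optimal x as an array, or None if the restricted problem is infeasible."""
    prob = pulp.LpProblem("restricted_problem", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{j}", lowBound=0, cat="Integer") for j in range(n)]
    prob += pulp.lpSum(float(c[j]) * x[j] for j in range(n))
    for row, rhs in list(zip(A, b)) + cuts:
        prob += pulp.lpSum(float(row[j]) * x[j] for j in range(n)) <= float(rhs)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":
        return None
    return np.array([v.value() for v in x])

def is_efficient(x_star, C, A, b):
    """Efficiency test of Theorem 1 (Benson), assuming all objectives are minimized:
    x* is efficient iff  max e.psi  s.t.  C x + psi = C x*,  x in D,  psi >= 0  has value 0."""
    p, n = C.shape
    prob = pulp.LpProblem("efficiency_test", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{j}", lowBound=0, cat="Integer") for j in range(n)]
    psi = [pulp.LpVariable(f"psi{i}", lowBound=0) for i in range(p)]
    prob += pulp.lpSum(psi)
    for row, rhs in zip(A, b):
        prob += pulp.lpSum(float(row[j]) * x[j] for j in range(n)) <= float(rhs)
    for i in range(p):
        prob += pulp.lpSum(float(C[i][j]) * x[j] for j in range(n)) + psi[i] == float(C[i] @ x_star)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return abs(pulp.value(prob.objective)) < 1e-6

def two_phase(phi, C, A, b, k, eps=1e-4):
    """Simplified sketch of the 2-phase algorithm; k is the pilot objective from Phase 1."""
    p, n = C.shape
    x_star = _solve_min(C[k], A, b, [], n)   # first efficient solution (Theorem 2 ties ignored)
    seen = {tuple(x_star)}
    T = set()                                # directions that can no longer improve
    while T != set(range(p)) - {k}:
        S = []
        for i in sorted(set(range(p)) - {k} - T):
            cuts = [(C[i], float(C[i] @ x_star) - eps),   # improve Z_i strictly
                    (phi,  float(phi @ x_star))]          # without deteriorating Phi
            x = _solve_min(C[k], A, b, cuts, n)           # optimize the pilot objective on D*
            if x is None:
                T.add(i)                                  # D* is empty: store direction i in T
            else:
                S.append(x)
        efficient = [x for x in S if is_efficient(x, C, A, b)]
        if efficient:
            candidate = min(efficient, key=lambda x: float(phi @ x))  # smallest Phi value
            if tuple(candidate) in seen:
                break            # safeguard; the full algorithm instead excludes previously
            seen.add(tuple(candidate))  # generated solutions before re-solving (Section 4.1)
            x_star = candidate
        elif S:
            break                # candidates exist but none is efficient; re-solving omitted
    return x_star
```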
4.2 Convergence and properties
Proposition 1. x* is an optimal solution of P_𝔼.
Proof. Let x* be the last efficient solution found by the algorithm, at some iteration l̄; it is an optimal solution of a problem P_k(𝔻*) associated with an index i* ∈ {1,..., p} \ {k}.
As x* is the last efficient solution found, the sets of solutions generated afterwards do not contain any efficient solution.
Let us suppose that the proposition is not true, i.e., that there exists another efficient solution x̃ with Φ(x̃) < Φ(x*).
As both solutions x* and x̃ are efficient, there exists at least one objective i ∈ {1,..., p} such that Z_i(x̃) < Z_i(x*).
- If i ≠ k, then x̃ would be a solution of the corresponding restricted set, which contradicts the fact that this set of solutions contains no efficient solution.
- If i = k, we thus have Z_k(x̃) < Z_k(x*).
As x̃ is feasible, there must exist an iteration at which x̃ belongs to one of the restricted sets considered.
- If the corresponding value could be strictly improved, this would contradict the fact that x* is an optimal solution of P_k(𝔻*).
- Otherwise, this solution would have to be an optimal solution of the corresponding problem, so that relations (2) and (3) are not compatible.
In conclusion, such a solution does not exist and x* is an optimal solution of P_𝔼. □
Proposition 2. The algorithm is finite.
Proof. Clearly, since the set 𝔻 is bounded and the sets considered are strictly reduced at each iteration, the algorithm is finite.
In the worst case, the algorithm performs |𝔻| − 1 iterations. □
Corollary 1. The proposed algorithm finds an exact optimal solution of P_𝔼 in a finite number of iterations.
5 NUMERICAL ILLUSTRATION
Consider the following example (see Figure 1)
Let the compromise problem be
Phase 1
We calculate
The value obtained for Z_2 is the maximum in absolute value; thus k := 2 and Z_2 is the pilot objective.
We solve the problem P 2(𝔻).
The result of solving the problem P_2(𝔻) is x^1 = x* = (5, 0)′; Φ(x*) = 10; Z_1(x*) = −5; Z_2(x*) = −15; Z_3(x*) = 5. As it is the unique optimal solution of P_2(𝔻), it is not necessary to use Theorem 2.
Phase 2
Iteration 1. l = 2; ε = 0.0001.
We solve the problem (see Figure 2).
The unique optimal solution of this problem is obtained, and thus it is not necessary to use Theorem 2.
Then, we solve the problem (see Figure 3).
The unique optimal solution of this problem is obtained, and thus it is not necessary to use Theorem 2.
We use Theorem 1 to check the efficiency of these two solutions, solving the corresponding test problems.
The solution of the first test problem is x = (4, 3)′,
with ψ = (4, 4, 0)′ and Ψ = 8; thus the first candidate is not an efficient solution.
The solution of the second test problem is x = (5, 1)′,
with ψ = (3, 1, 1)′ and Ψ = 5; thus the second candidate is not an efficient solution either.
We are therefore in the case where S_l contains no efficient solution.
Iteration 2. l = 3; ε = 0.0001.
We first solve the problem (see Figure 4).
This problem is infeasible, so that 𝕋 = {1}.
We solve the problem (see Figure 5).
The unique optimal solution of this problem is obtained, and thus it is not necessary to use Theorem 2.
We use Theorem 1 to check the efficiency of this solution, solving the corresponding test problem.
The solution of the test problem is x = (3, 3)′,
with ψ = (4, 4, 0)′ and Ψ = 8; thus x^3 is not an efficient solution. We are again in the case where S_l contains no efficient solution.
Iteration 3. l = 4; ε = 0.0001.
We solve the problem (see Figure 6).
The unique optimal solution of this problem is obtained, and thus it is not necessary to use Theorem 2.
We use Theorem 1 to check the efficiency of this solution, solving the corresponding test problem.
The solution of the test problem is x = (2, 3)′,
with ψ = (4, 4, 0)′ and Ψ = 8; thus the candidate is not an efficient solution.
We are again in the case where S_l contains no efficient solution.
Iteration 4. l = 5; ε = 0.0001.
We solve the problem (see Figure 7).
The unique optimal solution of this problem is obtained, and thus it is not necessary to use Theorem 2.
We use Theorem 1 to check the efficiency of this solution, solving the corresponding test problem.
The solution of the test problem is x = (0, 3)′ with ψ = (0, 0, 0)′ and Ψ = 0, and thus it is an efficient solution. We are here in the case where S_l contains an efficient solution, and the new efficient solution is x* = (0, 3).
Iteration 5. l = 6; ε = 0.0001.
We solve the problem (see Figure 8).
This problem is infeasible, so that 𝕋 = {1, 3}.
The procedure stops because no direction can be improved any further.
x* = (0, 3) is the optimal solution of P_𝔼, with Φ(x*) = 9, Z_1(x*) = −6, Z_2(x*) = 6 and Z_3(x*) = −6.
6 COMPUTATIONAL STUDY
In this section, we compare our algorithm to those of Chaabane et al. (2012) (run on an Intel Pentium B950 @ 2.10 GHz CPU, with a CPU single-thread rating of 945) and Jorge (2009) (run on an Intel Pentium M 1.86 GHz CPU, with a CPU single-thread rating of 499). Our algorithm is run on an Intel Core i3-4158U @ 2.00 GHz CPU, with a CPU single-thread rating of 1075 and 4 GB of RAM. We used the CPLEX 12.6.1 library for solving the integer linear programming subproblems. To assess the performance of the algorithm, a total of 195 instances were randomly generated. The components of the matrices A and C and of the vector b were drawn in the ranges [1,30], [-20,20] and [50,150], respectively. For the vector ϕ, the same distribution as for C was used. To avoid infeasible instances, all problem constraints are of the "≤" type. Since all the coefficients of A are positive, the feasible region is bounded. The number of variables n of the problem instances varied from 10 to 500, the number of constraints m varied from 10 to 300, and the number of objective functions p equals 3, 5, 7, and 8, yielding 39 categories; five instances are solved in each category. These benchmarks can be accessed through the following link: https://zenodo.org/records/14938610.
6.1 Computational results of the 2-phase algorithm.
The computational results obtained are summarized in Table 2, which reports the average CPU time (in seconds), the average number of iterations required to solve the problems, and the average convergence metric of the function Φ. The minimum and maximum values of each measure are reported in brackets.
Convergence metric: The metric used to evaluate the improvement of the objective function Φ is as follows:
The term Φ_min(T) in the metric refers to the optimal value of the function Φ achieved at iteration T.
Table 2 shows that the number of iterations remains relatively stable and does not increase very much with the problem size or the objective number. However, this is not the case for CPU time, which increases significantly with the problem size and the number of objectives. For higher dimensions, resolving such problems becomes difficult due to the multiplicity of objectives and the discrete nature of variables.
6.2 Comparison between the 2-phase algorithm and the algorithms of Chaabane et al. and Jorge.
The 2-phase algorithm tackles the problem in a different way. Unlike other algorithms, which start from the minimum of the function Φ over the set 𝔻, our algorithm starts from the efficient solution that minimizes the objective closest to Φ. This starting point is a local minimum.
The results obtained show that the 2-phase algorithm can handle up to 500 variables in a reasonable time, which is not the case for the algorithms of Chaabane and Jorge, which do not exceed 220 and 120 variables, respectively (see Table 3 and Table 4).
The comparison of the CPU times with the Chaabane method shows that the 2-phase algorithm achieves better execution times on most instances, especially when the number of variables is large, and that its CPU-time curve increases steadily, which is not the case for the Chaabane method, whose curve presents an erratic volatility (see Figure 9, Figure 10 and Figure 11). On the other hand, the Jorge method shows a steady evolution, but its results are limited to 120 variables, while the 2-phase algorithm reaches 500 variables and performs better in most cases (see Figure 12, Figure 13 and Figure 14).
7 CONCLUSION
This paper presented a deterministic algorithm for optimizing a linear function over the efficient set of a multi-objective integer linear programming problem. The proposed algorithm is based on two phases: the initial phase selects the pilot objective and produces the first efficient solution; the second phase optimizes the pilot objective at each iteration to search for an efficient solution that improves the function Φ. The aim is to progressively reduce the set of feasible solutions so as to successively improve at least one objective other than the pilot objective, without any deterioration of the function Φ.
Our algorithm is implemented and tested on several problems randomly generated from a discrete uniform distribution, and the results obtained are very encouraging. Our computational results display an outstanding performance on problems of considerable size (up to 300 constraints, 500 variables, and eight objectives). For future research, we suggest the creation of specific benchmarks for this kind of problem and the application of these methods to classical combinatorial optimization problems such as the TSP and the flowshop problem.
References
- ABBAS M & CHAABANE D. 2006. Optimizing a linear function over an integer efficient set. European Journal of Operational Research, 174: 1140-1161.
- BENSON H. 1978. Existence of Efficient Solutions for Vector Maximization Problems. Journal of Optimization Theory and Applications, 26: 569-580.
- BENSON H & SAYIN S. 1994. Optimization over the Efficient Set: Four special cases. Journal of Mathematical Analysis and Applications, 80: 3-17.
- BOLAND N, CHARKHGARD H & SAVELSBERGH M. 2017. A New Method for Optimizing a Linear Function over the Efficient Set of a Multiobjective Integer Program. European Journal of Operational Research, 260: 904-919.
- CHAABANE D, BRAHMI B & REMDANI Z. 2012. The augmented weighted Tchebychev norm for optimizing a linear function over an integer efficient set of a multicriteria linear program. International Transactions in Operational Research, 19: 531-545.
- CHAABANE D & MEBREK F. 2014. Optimization of a linear function over the set of stochastic efficient solutions. Computational Management Science, 11: 157-178.
- CHAABANE D & PIRLOT M. 2010. A method for optimizing over the efficient set. Journal of Industrial and Management Optimization, 6: 811-823.
- CHANKONG V & HAIMES YY. 1983. Multi-objective decision making: Theory and methodology. North-Holland.
- ECKER JG & SONG HG. 1994. Optimizing a Linear Function over an Efficient Set. Journal of Optimization Theory and Applications, 83: 541-563.
- JORGE JM. 2009. An algorithm for optimizing a linear function over an integer efficient set. European Journal of Operational Research, 195: 98-103.
- KEENEY RL & RAIFFA H. 1976. Decisions with multiple objectives: Preferences and value trade-offs. Wiley.
- LIU Z & EHRGOTT M. 2018. Primal and Dual Algorithms for Optimization over the Efficient Set. A Journal of Mathematical Programming and Operations Research, 67: 1-26.
- MAHDI S & CHAABANE D. 2015. A linear fractional optimization over an integer efficient set. RAIRO-Operations Research-Recherche Opérationnelle, 49(2): 265-278.
- MENNI A & CHAABANE D. 2020. A possibilistic optimization over an integer efficient set within a fuzzy environment. RAIRO Operations Research, 54: 1437-1452.
- MOULAI M & MEKHILEF A. 2021. Quadratic optimization over a discrete pareto set of a multiobjective linear fractional program. A Journal of Mathematical Programming and Operations Research, 70: 1425-1442.
- RAIFFA H. 1968. Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Addison-Wesley.
- SAATY TL. 1980. The Analytic Hierarchy Process: Planning, Priority Setting, Resources Allocation. McGraw-Hill.
- SKLAR A. 1959. Fonctions de répartition à n dimensions et leurs marges. Annales de l'ISUP, 8: 229-231.
- STEWART TJ & BEZDEK JC. 2015. Decision making in multi-objective decision problems. Springer.
- SYLVA J & CREMA A. 2004. A Method for Finding the set of Non-dominated Vectors for Multiple Objective Integer Linear Programs. European Journal of Operational Research, 158: 46-55.
- TEGHEM J & KUNSCH P. 1986. A Survey of Techniques for Finding Efficient Solutions to Multiobjective Integer Linear Programming. Asia Pacific Journal of Operations Research, 3: 95-108.
- YAMAMOTO Y. 2004. Optimization over the Efficient Set: Overview. Journal of Global Optimization, 24: 285-317.
- YU PL. 1973. A class of solutions for group decision problems. Management Science, 19(8): 668-677.
- ZAIDI A, CHAABANE D, ASLI L, IDIR L & MATOUB S. 2024. A genetics algorithms for optimizing a function over the integer efficient set. Croatian Operational Research Review, 15(1): 75-88.
Funding
The authors declare no funding was received for this work.
Data Availability
The data that support the findings of this study are available from the link: https://zenodo.org/records/14938610.