Pesquisa Operacional

Print version ISSN 0101-7438

Pesqui. Oper. vol.32 no.3 Rio de Janeiro Sept./Dec. 2012  Epub Nov 30, 2012 

Addressing congestion on single allocation hub-and-spoke networks



Ricardo Saraiva de Camargo*; Gilberto de Miranda Jr.

Industrial Engineering Department, Federal University of Minas Gerais, MG, Brazil.




When considering hub-and-spoke networks with single allocation, the absence of alternative routes makes this kind of system especially vulnerable to congestion effects. In order to improve the design of such networks, congestion costs must be addressed. This article deploys two different techniques for addressing congestion on single allocation hub-and-spoke networks: the Generalized Benders Decomposition and the Outer Approximation method. Both methods are able to solve large-scale instances. Computational experiments show how the adoption of advanced solution strategies, such as Pareto-optimal cut generation on the Master Problem branch-and-bound tree, may be decisive. They also demonstrate that the solution effort is associated not only with the size of the instances, but also with the combination of their installation and congestion costs.

Keywords: single allocation hub location problem, Benders decomposition method, outer-approximation algorithm, large scale optimization.




Hub-and-spoke networks have become an important field of discrete location research. Their relevance is largely explained by their widespread use in cargo and passenger transportation and in telecommunication systems [5,16].

In hub-and-spoke networks, direct transportation of flows between pairs of origin-destination nodes is usually extremely costly. As an alternative, flows from different origins but addressed to the same destination can be consolidated at transshipment nodes, known as hubs, prior to being routed, sometimes via other hubs, towards their destinations. Hubs are then responsible for flow aggregation and redistribution. The bundling of flows at the hubs increases the traffic on inter-hub connections, enabling the use of more efficient and higher-volume carriers, thus resulting in lower per-unit transportation costs [47]. Economies of scale can therefore be achieved by bulk transportation. Furthermore, hub-and-spoke networks allow lower infrastructure costs and greater overall logistics efficiency [37].

Generally, in the design of hub-and-spoke networks, there is a connection between every hub pair; no two non-hub nodes can be serviced by a direct link; an origin-destination demand is routed through one or at most two hubs; and the economies of scale at inter-hub connections are represented by a discount factor (0 < α < 1). Usually the main decisions involve the location of hub facilities, the allocation of origin and destination nodes to hubs (formation of the spokes), the establishment of discounted transportation connections and the routing of flows through the network.

Moreover, according to the characteristics considered, different assumptions may be adopted, including: single or multiple allocation of the non-hub nodes to the installed hubs; the number of hubs to be located may or may not be known beforehand; direct service between non-hub nodes may be enabled; capacity constraints on the amount of traffic an installed hub can handle; consideration of congestion effects at the installed hubs; flow-dependent economies of scale on inter-hub connections; and, furthermore, there may not be a direct connection between every hub pair, implying an incomplete hub network structure [6] or a network topology where the hubs are connected by means of a spanning tree [18]. A general review of different hub-and-spoke network problems can be found in Campbell [14,15], while exhaustive surveys are presented in Campbell et al. [16] and Alumur & Kara [5].

One of these problem variants is the single allocation hub location problem (SAHLP), where each non-hub node is allocated to a single hub only, a fixed cost is incurred each time a node is selected to be a hub, and the path connecting each pair of origin-destination nodes contains one or two hubs. When there are no installation costs but the number p of hubs to be located is known beforehand, the problem is named the single allocation p-hub location problem (SApHLP). As both problems are closely related, they share the same mathematical programming formulations, differing only in the presence or absence of a constraint establishing that p hubs must be located and of a term in the objective function summing the fixed costs of the installed hubs.

Furthermore, while the multiple allocation hub location problem is closely related to the facility location problem [40], the SAHLP is more akin to the quadratic assignment problem [46]. Hence it is harder to handle, requiring different approaches regarding solution techniques and proper formulations.

Among the available mixed integer linear programming formulations [3,14,24,28,46,55], two are worthy of notice: the models of Skorin-Kapov et al. [55] and Ernst & Krishnamoorthy [28]. While the former has the tightest linear programming relaxation (optimality gaps smaller than 1%), yielding integer values for the integer variables most of the time at the expense of computer memory and time, the latter presents a good trade-off between formulation size (fewer variables and constraints) and the computational effort required to solve it.

When solution methods are considered, the SAHLP and the SApHLP do not share the same strategies. Most of the algorithms [2,28,29,38,39,46,51,54] to solve the SApHLP are based on specialized heuristics and branch-and-bound procedures, while those [1,17,21,53,56] addressed to the SAHLP rely on meta-heuristics like genetic and simulated annealing algorithms.

Independently of which hub-and-spoke problem is addressed, one of the main overall advantages of such systems is the exploitation of scale economies. However, this may induce the formation of networks that tend to overload a small number of hubs, resulting in some inter-hub connections being more heavily utilized than others. This is especially true when single allocation is involved, since the flow leaving from and arriving at a non-hub node goes through only one hub. Hence it is unavoidable to take congestion effects into consideration.

A common way of addressing congestion in SAHLP and SApHLP networks is to limit the amount of traffic an installed hub can handle [7,8,20,30,41,44,52]. Unfortunately, capacity constraints do not mimic the explosive nature of congestion: the more flow a hub attracts, the harder the handling process becomes, and the greater the costs. Usually these costs increase extremely rapidly due to queuing and delay effects. Hence, elaborate cost functions are needed, such as the one employed by Elhedhli & Hu [26].

Elhedhli & Hu [26] were the first authors to explicitly consider congestion effects in the objective function of the SApHLP. Using a power-law function widely utilized to estimate delay costs in airport applications [34], they propose a nonlinear formulation in which these convex cost functions, which increase rapidly as more traffic flows through the installed hubs, appear in the objective function. They linearize their model using an infinite set of piecewise-linear tangent hyperplanes, and then solve it by means of a Lagrangian relaxation algorithm. They solve only toy instances (up to 25 nodes) with an average optimality gap close to 1%. The obtained solutions have a more balanced overall distribution of flows through the network than the ones attained by disregarding congestion effects [26].

In this study, instead of linearizing the implied nonlinear formulation, two very efficient algorithms based on the Generalized Benders Decomposition (GBD) method [33] and on the Outer Approximation technique (OA) [23,31,57] are employed to handle the nonlinear SAHLP under congestion. These algorithms differ from those formerly deployed for the multiple allocation variant in Camargo et al. [11]. The main contributions of the present article are the proper derivation of Pareto-optimal Benders cuts as devised by Papadakos [49], the exploitation of the special structure of the formulation to enable the blending of GBD and OA cuts in the same master program, and the implementation of this strategy in a branch-and-cut framework.

Due to the combination of the best features of each method, the proposed scheme is able to solve large instances to optimality. As far as the authors know, there is no previous report of such large problems solved by the OA technique. Furthermore, a fair comparison of both techniques is presented demonstrating the advantages of one method over the other as a function of some of the instance parameters.

The paper is organized as follows. Section 2 provides general notation and definitions for the nonlinear SAHLP under congestion effects. The GBD and OA algorithms are presented in Sections 3 and 4, respectively. Finally, the computational results are shown in Section 6, while the final remarks and future research plans are presented in Section 7.



In this section, the SAHLP under congestion is formulated as a mixed integer nonlinear program (MINLP) using the model of Skorin-Kapov et al. [55] as a starting basis. The formulation requires the following definitions. Let N be the set of demand node locations which exchange flows and let K be the set of candidate nodes to become hubs. Usually K ⊆ N, but it is assumed henceforth that all demand nodes are candidates to host an installed hub, implying K = N. For all node pairs i and j (i, j ∈ N : i ≠ j), wij represents the flow demand from origin node i to destination node j, which is routed through either one or two installed hubs. Normally wij ≠ wji. Let also Oi = Σj∈N wij and Di = Σj∈N wji be the total demand originated from and destined to node i ∈ N, respectively.

Further, let fk be the fixed installation cost of a hub at node k ∈ N and let cijkm be the transportation cost per unit of flow from node i to node j routed via hubs at nodes k and m, that is, the standard transportation cost of route i – k – m – j (i, j, k, m ∈ N). This transportation cost is the composition of three cost segments: cijkm = cik + αckm + cmj, where cik and cmj are the standard transportation costs per unit of flow from node i to hub k and from hub m to node j, and αckm is the discounted standard transportation cost between hubs k and m. The discount factor 0 < α < 1 represents the scale economies on the inter-hub connections. If only one hub is present in any given route, then k = m and no discount factor is applied for the route i – k – m – j.
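As a simple illustration of this cost composition, consider the following sketch (not the authors' code; the cost matrix c and the discount factor alpha are assumed inputs):

```python
def route_cost(c, i, j, k, m, alpha):
    """Per-unit cost of route i - k - m - j: c_ik + alpha*c_km + c_mj.
    When k == m only one hub is visited and no discount applies."""
    if k == m:
        return c[i][k] + c[k][j]
    return c[i][k] + alpha * c[k][m] + c[m][j]
```

For instance, with a symmetric cost matrix, a single-hub route simply adds the two access legs, while a two-hub route applies the discount α only to the inter-hub leg.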

The MINLP uses flow variables xijkm ≥ 0 to represent the fraction of demand wij (i, j ∈ N) that is routed through hubs k and m (k, m ∈ N), in this order; the variables gk to account for the total flow passing through hub k ∈ N; and the binary variables zik ∈ {0,1} to indicate whether node i ∈ N is allocated to hub k ∈ N (zik = 1) or not (zik = 0). When a hub is located at node k ∈ N, then zkk = 1; otherwise zkk = 0.

The congestion cost function is usually defined as a power law τk(gk) = a gk^b that increases rapidly as more traffic goes through hub k ∈ K, where the parameters a > 0 and b > 1 are scalars related to the hub features. The function τk(gk) is increasing on [0, +∞), proper convex and smooth; it is normally used to estimate delay costs in airport applications [34], and it has already been used by Elhedhli & Hu [26] for the same problem. In this research, the adopted power-law function is designed to consider congestion effects only after a given flow threshold Γk (usually set to 70% of the hub nominal capacity) is exceeded. Such a function can be written as τk(gk) = a (max{0, gk − Γk})^b without loss of generality, since Γk can be set to any value at which the congestion effects start to degrade the network economies of scale.
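A minimal sketch of this thresholded power-law function (the default parameter values a = 0.01 and b = 2 follow the experiment reported later in this section; gamma stands for the threshold Γk):

```python
def congestion_cost(g, a=0.01, b=2.0, gamma=0.0):
    """Power-law congestion cost tau_k(g) = a * max(0, g - gamma)**b.
    a > 0 and b > 1 are hub-dependent scalars; gamma is the flow
    threshold (e.g. 70% of nominal capacity) below which no
    congestion cost is charged."""
    return a * max(0.0, g - gamma) ** b
```

Note that the function is identically zero below the threshold and convex everywhere, which is what makes it amenable to the tangent-hyperplane linearizations discussed above.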

In the SAHLP, as each non-hub node is allocated to a single installed hub only, demands wij and wji are sent over the same route, enabling the number of variables xijkm to be cut in half [54]. Furthermore, as the outgoing and incoming flows of a non-hub node must leave and arrive through the same hub, the cost component of local traffic can be written as Σk Σi (Oi + Di) cik zik, thus enabling the redefinition of the costs cijkm as

These three simple manipulations improve the overall performance of the algorithms proposed here. In the remainder of this paper, for the sake of simplicity of presentation, one must consider i, j, k, m ∈ N and i < j. Hence the implied formulation is given as:

The objective function (1) minimizes the total cost associated with demand transportation, congestion effects and hub installation. Constraints (2) assure that every node is allocated to a hub. Constraints (3) guarantee that routes beginning at origin node i, passing first through hub k, and finishing at destination node j only exist if node i is allocated to hub k. Likewise, constraints (4) guarantee that routes beginning at origin i and passing through hub m just before finishing at destination j only exist if node j is allocated to hub m. Constraints (5) account for the total hub traffic, avoiding the double computation of the local traffic component. Constraints (6) allow a node i to be allocated to hub k only if hub k is installed, while (7), (8) and (9) are the non-negativity and integrality constraints of variables xijkm, gk and zik, respectively.

Although there are other formulations for the SAHLP [14,24,28], the formulation of Skorin-Kapov et al. [55] is chosen because it provides the tightest linear programming bound. This is a key feature, since nonlinear terms tend to weaken any mixed-integer programming formulation, enlarging the integrality gap as the nonlinearities become dominant. The chosen formulation also has a very interesting property: for a fixed feasible vector z, the formulation is decomposable for each ij pair, recalling that the variables gk can be recovered by replacing xijkk by zik zjk.

The literature offers three different ways of assessing the total hub flow gk. For some applications regarding public service networks, such as postal, health care and public security systems, where the most overloading tasks are the sorting and assembling of flows at the first hub of a given path, Ernst & Krishnamoorthy [30] propose constraints (10). These constraints assume that the hubs may only be overloaded by the incoming flows originated at the non-hub nodes directly allocated to them.

For those applications where the aforementioned assumption does not hold, Ernst & Krishnamoorthy [30] further suggest that the total hub flow gk can be determined by adding the demands destined to the non-hub nodes directly allocated to a given hub, yielding constraints (11).

As observed by Labbé et al. [41], constraints (11) compute the total hub flow gk incorrectly. For those non-hub nodes which are allocated to the same hub, the flows originated from and destined to them are accounted twice: once in the term Oi and again in the term Di. Labbé et al. [41] correct this misconception by proposing constraints (5).


Figure 1


Constraints (5) and (10) induce very different optimal networks, as well as requiring distinct computational efforts. In order to illustrate these differences, a small experiment was carried out by optimally solving, via CPLEX 12, instance AP20.2, with 20 nodes and α = 0.2, of the well-known Australian Post (AP) standard data set [28,30]. The adopted power-law congestion cost function had parameters a = 0.01 and b = 2. Figure 1 shows how the network design and the required solution time were affected after constraints (10) were exchanged for (5). The values attained when no congestion is accounted for are also shown. Constraints (11) were not considered in this experiment, since they wrongly compute the total hub flow.



In Figure 1(a), the total load (gk) of each installed hub is represented by a bar above the hub's index. When congestion effects are disregarded, only two hubs (1 and 10) are installed at the optimal solution. When congestion effects are accounted for by computing the total hub flow through constraints (10) or (5), the optimal networks have three (hubs 3, 10 and 11) and six (hubs 1, 3, 8, 9, 10 and 11) hubs, respectively. In other words, they have very distinct optimal infrastructures.

When the computational efforts are observed, the differences are more pronounced. Figure 1(b) presents, in logarithmic scale, the required solution time for the three considered situations. As can be seen, when constraints (5) are utilized, the required solution time is 204 times greater than when using constraints (10). Constraints (5) greatly harden the problem, because the term xijkk is a linearization of the product zik zjk, which greatly affects the attained lower bounds during the branch-and-bound search.

Constraints (5) and (10) have different design purposes, being suitable for distinct applications. For those applications in which constraints (10) are more adequate, Camargo et al. [13] have devised a new technique that hybridizes the OA [23,31] and Benders decomposition [9] methods. The proposed technique blends OA and classical Benders cuts in a straightforward manner, since there is no coupling of the g and x variables in constraints (10). Their algorithm is capable of solving instances of up to 200 nodes in reasonable time. Note that previous articles in the literature solved only small problems of up to 25 nodes [26,27].

Unfortunately, for those practical situations where one needs to account for the total hub flow using constraints (5), which are more general, the technique already proposed by Camargo et al. [13] cannot be directly applied, due to the coupling of the g and x variables. Furthermore, as far as the authors know, there are no other studies in the literature that address congestion effects deploying constraints (5).

In the light of the aforementioned reasons, this article proposes a new hybrid OA/Benders algorithm that exploits the structure of the formulation and optimally solves instances of up to 100 nodes. Further, the method clearly outperforms an implemented GBD [33] algorithm, a well-known technique for solving MINLPs.

Moreover, whenever two formulations - one tight and large, and another weak and small - are available for a given problem, the proposed scheme can be applied, provided the global utilization of the common resources is accounted for using the smaller model and is computed independently of the large-scale system variables. This kind of manipulation simplifies the handling of the nonlinearities, by confining them to a kind of sandbox, while using the large-scale system to improve the formulation bounds.

For the sake of presenting and understanding the required concepts of the hybrid technique, the next section is dedicated to the application of the GBD method to the proposed model.



The Benders decomposition algorithm [9] is a partition method for solving mixed-integer linear and nonlinear programming problems. In general terms, the algorithm relies on a projection problem manipulation, followed by solution strategies of dualization, outer linearization and relaxation. The complicating variables (integer variables) of the original problem are projected out, resulting in an equivalent model with fewer variables, but many more constraints. When attaining optimality, a large number of these constraints will not be binding, suggesting a strategy of relaxation that ignores all but a few of them.

Benders proposes a procedure to add these constraints on demand by using two levels of coordination. At a higher level, known as the master problem (MP), a relaxed version of the original problem containing the set of integer variables and its associated constraints is responsible for fixing the values of these integer variables and for providing a lower bound (LB) for the problem. At a lower level, known as the subproblem (SP), the dual of the original problem with the values of the integer variables temporarily fixed by the MP is responsible for rendering a new constraint, known as a Benders cut, to be added to the MP and for generating an upper bound (UB) for the problem. The algorithm iterates, solving the MP and the SP one at a time, until the UB and the LB converge towards an optimal solution, if one exists.
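In schematic form, this coordination can be sketched as the following generic skeleton (an illustration only; solve_master and solve_subproblem are hypothetical placeholder routines standing for the MP and the dual SP, not the paper's implementation):

```python
def benders_loop(solve_master, solve_subproblem, eps=1e-6, max_iters=100):
    """Generic Benders coordination: the MP supplies a candidate z and a
    lower bound; the SP evaluates z, supplies an upper bound and a cut."""
    LB, UB = float("-inf"), float("inf")
    cuts = []
    for _ in range(max_iters):
        z, LB = solve_master(cuts)        # relaxed MP over current cuts
        ub_z, cut = solve_subproblem(z)   # dual SP at fixed z
        UB = min(UB, ub_z)
        if UB - LB <= eps:                # bounds have converged
            break
        cuts.append(cut)
    return LB, UB
```

The relaxation strategy described above corresponds to starting with an empty cut list and appending one cut per iteration until the bounds meet.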

Since the pioneering work of Geoffrion & Graves [32], the Benders decomposition method has been successfully deployed in solving large-scale problems: e.g., the uncapacitated network design problem with undirected arcs [43], the locomotive and car assignment problem [19] and, more recently, the multiple allocation hub location problem [10–12].

In Sections 3.1, 3.2 and 3.3, formal descriptions of the MP, the associated SP, implementation issues and solution strategies are presented.

3.1 Benders master program

Projecting the problem (1)-(9) onto the space of the z variables results in the equivalent problem:

where Z = {z ∈ {0, 1} | constraints (2) and (6) hold} and ϕ(z) is the following SP:


Since the constraints defining z are enough to ensure feasibility, the SP is bounded. Further, as ϕ(z) has a convex and differentiable objective function and linear constraints, its Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality, hence amenable to dualization techniques. So, associating vectors of dual variables u, v and β to constraints (3), (4) and (5), respectively, and because there is no duality gap, ϕ(z) can be rewritten as:

Since the supremum is the least upper bound, and with the help of the variable η > 0, the whole problem (1)-(9) is then equivalent to the following MP:

Because a large number of the constraints of the MP (12)-(14) will not be binding when optimality is attained, the GBD algorithm solves the MP through a strategy of relaxation that ignores all but a few of the constraints (13). These constraints are then added to the MP, via an iterative procedure, as needed. Thus, for a given iteration t, where z = zt, and after the solution of the associated SP and the recovery of the optimal values of ut, vt and βt, the optimal value of ϕ(zt) is given by:

Further, constraints (13) can be rewritten for iteration t in the form:

Therefore, eliminating the minimum in (16) by using (15), the relaxed Benders master program (RMP) is stated as:

where T is the maximum number of iterations required to attain the optimal solution. In the next section, the resolution of the SP ϕ(z) is presented.

3.2 Benders subproblem

For a hub structure zt fixed by the MP (17)-(19) at iteration t, the SP ϕ(zt) is given by:

In the SP (20)-(25), there are two sets of variables, x and g, coupled by the constraints (21). After dualizing these constraints by the associated dual multipliers β, a decomposable problem is implied:

subject to constraints (22)-(25).

The problem d(β) is decomposable into two smaller SPs and a constant: a linear problem, dL(β), having only the xijkm variables; a nonlinear problem, dNL(β), having just the gk variables; and a fixed term. Further, the SP dNL(β) is convex and differentiable, its Karush-Kuhn-Tucker conditions being therefore necessary and sufficient for optimality.

Hence, for each k, given a structure zt fixed by the RMP at iteration t, the optimal solution βt minimizes dNL(β), or:

In order to determine the optimal value of βt, the value of gt has to be computed first. As the variables xijkm can also be stated as the product zik zjm for any RMP solution zt, the optimal value of gt can be obtained by rewriting equation (21) in the following way:

This small and apparently innocuous detail makes it possible to easily calculate the value of gt, thus avoiding the use of nonlinear programming methods to evaluate ϕ(zt). Moreover, equation (27) has an additional advantage, since it also allows the decomposition of the SP dL(β) into smaller problems, one for each i – j pair. Hence the optimal values of ut and vt can now be computed by solving the linear programming duals of these smaller problems:
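The flow-accounting step can be sketched as follows (a minimal illustration, not the authors' code; assign[i] is assumed to give the hub of node i, and a local flow, whose origin and destination share a hub, is counted only once, in line with the correction embodied in constraints (5)):

```python
def hub_flows(w, assign):
    """Total flow g_k through each open hub for a fixed single allocation.
    w[i][j] is the demand from node i to node j and assign[i] is the hub
    node i is allocated to.  Each flow crosses the hub of its origin and
    the hub of its destination, but is counted only once when both
    endpoints share the same hub (local traffic)."""
    g = {k: 0.0 for k in set(assign)}
    n = len(w)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            g[assign[i]] += w[i][j]          # leaves through the origin's hub
            if assign[j] != assign[i]:
                g[assign[j]] += w[i][j]      # arrives through a second hub
    return g
```

Since every quantity here depends only on z (through assign) and the data w, this evaluation requires no nonlinear programming, as stated above.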

3.3 Implementation and solution strategies

The efficiency of the GBD algorithm depends mainly on the number of iterations required to attain global convergence. This number is intimately related to the quality of the assembled Benders cuts: strong cuts usually mean fewer iterations. In order to obtain strong cuts, the solution of the SP has to be done judiciously, since the Benders algorithm is very sensitive to the selection of the dual variables. If care is not taken, poor behavior of the algorithm may be expected.

Magnanti & Wong [42] propose a methodology for tackling this issue and hence accelerating the Benders algorithm. They notice that when the dual SP has multiple optimal solutions, different Benders cuts can be generated. Instead of adding them all to the MP, they propose solving an additional SP, very similar to (28)-(32), in order to find the strongest cut, the one that dominates all the others in the sense of Pareto-optimality. Magnanti & Wong define the strength of a cut relative to another according to the following definition:

Definition 1. A cut is said to be Pareto-optimal if it is not dominated by any other cut. Let U = {(u, v) : constraints (28)-(32) are satisfied} be the set of feasible values for the dual variables u and v. A Benders cut (18) corresponding to (u1, v1) ∈ U dominates the one corresponding to (u2, v2) ∈ U if:

with strict inequality for at least one point z ∈ Z. In this case, it is said that (u1, v1) dominates (u2, v2), and the corresponding cut is called Pareto-optimal.

In order to build up the pareto-optimal cut generation SP, they use the notion of core points:

Definition 2. A point z0 is a core point if it belongs to the relative interior of the convex hull of Z, i.e., z0 ∈ ri(Zc), where ri(·) and Zc denote the relative interior and the convex hull of the set Z, respectively.

In the original algorithm of Magnanti & Wong, there are two different SPs to solve at each iteration: one associated with the current MP solution zt, SP (28)-(32), and another related to an initially given fixed core point z0. This additional SP differs slightly from the original SP, having the form:

where (β, u, v) attains the optimal value of the objective function of the SP associated with the MP solution zt. This Magnanti & Wong SP (33)-(35) generates Pareto-optimal cuts that speed up the convergence of the method. In order to achieve good performance, one must weigh the computational burden of finding these Pareto-optimal cuts against the number of iterations they might save.

More recently, Papadakos [49] shows that it is possible to deal with a different version of the Magnanti & Wong SP that decreases the computational effort for cut generation by disregarding constraint (34). As a drawback, this enhancement only works if a different core point z0 is used at each iteration [49].

Even though, as pointed out by Mercier et al. [45], there are no practical methods to find good core points, Papadakos [49] proves that, given a valid initial core point z0 ∈ ri(Zc) and z ∈ Z, any convex combination of z0 and z is also a core point. This successful approximation scheme is given by:

where 0 < λ < 1. As also observed by Papadakos [49] and Mercier et al. [45], λ = 1/2 gives the best results empirically.
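This convex-combination update can be sketched directly (a minimal illustration; z0 and z are assumed to be allocation matrices of matching shape, with z0 fractional and z a master solution):

```python
def update_core_point(z0, z, lam=0.5):
    """Papadakos-style core-point update: a convex combination of the
    current core point z0 and the latest master solution z; lam = 1/2
    is the value reported to work best empirically."""
    return [[lam * z0[i][k] + (1.0 - lam) * z[i][k]
             for k in range(len(z0[i]))] for i in range(len(z0))]
```

Starting from a strictly interior z0, every entry of the updated point stays strictly between 0 and 1, so the point remains in the relative interior.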

Furthermore, the starting core point that demonstrated the best overall performance during the computational experiments is taken as:

where n = |N|.

Proposition 1. The point described by (36) and (37) is a valid core point, i.e., z0 ∈ ri(Zc).

Proof. In order to prove Proposition 1, it is necessary to show that z0 can be expressed as a convex combination of at least two integer feasible solutions. A point z is a convex combination of several other points zr, r = 1, ..., m, if:

It is possible to verify that z0 respects the convex combination definition by making z1 equal to the solution where all nodes are hubs and zr, r = 2, ..., (n + 1), equal to the n possible solutions having a single installed hub, and by setting λ1 = 0.5 – 0.5/(n – 1) and λr = 0.5/(n – 1), r = 2, ..., (n + 1).
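The multipliers used in the proof can be checked numerically to be nonnegative and to sum to one (λ1 plus n copies of λr, one per single-hub solution):

```python
def proof_multipliers(n):
    """Multipliers from the proof of Proposition 1: lambda_1 for the
    all-hubs solution and lambda_r, r = 2..n+1, one for each of the
    n single-hub solutions."""
    lam1 = 0.5 - 0.5 / (n - 1)
    lamr = 0.5 / (n - 1)
    return lam1, lamr
```

For example, with n = 5, λ1 = 0.375 and each λr = 0.125, so λ1 + 5 · λr = 1, as a convex combination requires.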

A sketch of the implemented algorithm is detailed in Algorithm 1 where UB, LB, ε, and ϑ* are the upper bound, lower bound, stopping precision, the objective function optimal value of the RMP, and the objective function value of the current solution, respectively.



Usually, the solution time of the MP (line 6) is much higher than that of the SP because of the integrality constraints. A common strategy to shorten it is to reduce the number of MPs solved by embedding the Pareto-optimal cut generation procedure in a standard branch-and-cut framework, Algorithm 2.



Before starting the branch-and-cut, a cut pool Ω is set up. Some cuts can be added early in the tree to avoid the exploration of too many infeasible branch-and-bound nodes; however, adding too many unnecessary cuts can slow down the LP relaxation at each node. A good starting cut pool is obtained by solving the LP relaxation of the MP and adding five to ten cuts. In Algorithm 2, solving a node ρ means solving the LP relaxation of the MP, augmented with the branching constraints of ρ and the cuts in pool Ω. The adopted algorithm has been implemented in the branch-and-cut framework of CPLEX 9.1 using the standard callback classes.

where δ is a scalar parameter set to 1.10. Line 9 only allows the inclusion of cuts that might improve the overall lower bound of the algorithm.
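Since Algorithm 2 itself appears only in the figure, the exact test of line 9 cannot be reproduced here; one plausible reading, stated purely as an assumption, is that a cut is generated at a node only when the subproblem value sufficiently exceeds the node's current estimate:

```python
def should_add_cut(phi_z, eta, delta=1.10):
    """Hypothetical reading of the line-9 filter: add a Benders cut only
    when the subproblem value phi(z) exceeds the node's estimate eta by
    the factor delta.  Only delta = 1.10 is stated in the text; the
    predicate itself is an assumption, not the paper's exact rule."""
    return phi_z > delta * eta
```

Such a filter keeps the cut pool small by discarding cuts whose violation is too mild to move the overall lower bound.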

Although the GBD algorithm successfully solves the SAHLP under congestion (see the computational results in Section 6), a specialized OA procedure is also presented in the next section. Both techniques are very competitive and clearly outperformed CPLEX 9.1 in the tests carried out.



The OA method is a simple but effective technique based on a cutting-plane approach for solving MINLPs [23,31,57]. The OA technique has been applied to the structural optimization of flow-sheet processes [35], to the synthesis of heat exchanger networks [4,50], to general engineering processes [35] and to logistics applications [22,36]. A general survey of the technique can be found in Grossmann & Kravanja [35].

The method is a coordination technique between a MP and a SP, akin in essence to the GBD. Nevertheless, the OA MP is written on the space of all the problem variables, as opposed to the GBD projection onto the space of the complicating ones. This feature enhances the strength of the associated cutting planes and ensures convergence in fewer iterations than observed for the GBD. As a drawback, the solution of the MP requires a greater computational effort, which worsens as the problem size increases.

In order to understand the development of the OA technique for the SAHLP under congestion, a general overview of the method is required. Given a MINLP in its most basic algebraic representation, where x and z are the sets of continuous and discrete variables, respectively,

f(z, x) and gj(z, x) are two continuously differentiable functions, and Z and X are polyhedral sets:

It is possible to reduce this problem to a pure nonlinear program by choosing a fixed vector z = zh, zh ∈ Z, for some iteration h, yielding the following nonlinear SP:

When solved, the above SP (42)-(44) permits inferring the gradients of the functions f(z, x) and gj(z, x), ∀j, at (zh, xh). If no further feasibility constraints are required, then a straightforward manipulation makes problem (38)-(41) equivalent to a mixed integer program (MIP):

where H is the maximum number of iterations and ξ is an under-estimator variable for the objective function.

Problem (45)-(50) is known as the OA MP. Constraints (46) and (47) are responsible for performing the OA of the objective function and of the feasible region, respectively. When the functions g(z, x) are proper convex and a constraint qualification holds for every solution of (42)-(44), constraints (47) are necessary and sufficient to outer approximate the feasible region.
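The iteration between the SP (42)-(44) and the MP (45)-(50) can be sketched on a tiny convex MINLP. The sketch below is illustrative only: the toy problem, the closed-form SP solution and the enumeration/ternary-search master are assumptions made so that the loop is self-contained; they are not part of the paper's formulation.

```python
def outer_approximation(max_iter=20, tol=1e-6):
    """OA loop for the toy MINLP:
       min (x - 1.5)**2 + 0.5*z   s.t.  x >= z,  x in [0, 10],  z in {0,...,5}."""
    f = lambda x: (x - 1.5) ** 2              # nonlinear objective term
    df = lambda x: 2.0 * (x - 1.5)            # its derivative
    cuts, UB, z_h = [], float("inf"), 5       # linearization points; start at z = 5
    for _ in range(max_iter):
        # nonlinear SP for fixed z = z_h: project 1.5 onto [z_h, 10] (closed form)
        x_h = min(max(float(z_h), 1.5), 10.0)
        UB = min(UB, f(x_h) + 0.5 * z_h)      # every SP solution is feasible
        cuts.append(x_h)
        # MILP master: min xi + 0.5*z, xi >= f(x_h) + df(x_h)*(x - x_h), x >= z;
        # solved here by enumerating z and minimizing the convex piecewise-linear
        # envelope of the cuts over x with a ternary search
        envelope = lambda x: max(f(xh) + df(xh) * (x - xh) for xh in cuts)
        best = None
        for z in range(6):
            lo, hi = float(z), 10.0
            for _ in range(200):
                m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                if envelope(m1) < envelope(m2):
                    hi = m2
                else:
                    lo = m1
            val = envelope(lo) + 0.5 * z
            if best is None or val < best[0]:
                best = (val, z)
        LB, z_h = best                        # master optimum: new lower bound
        if UB - LB <= tol:
            break
    return z_h, UB
```

On this toy instance the loop converges in two iterations to z = 0, illustrating how the master lower bound and the SP upper bound meet.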



The LBs attained by the OA technique are greater than or equal to those obtained by the GBD, resulting in fewer iterations until convergence [23]. However, in order to obtain these better LBs, the OA RMP has a larger number of variables and constraints than the GBD RMP. Consequently, the largest instance that the OA technique is capable of solving is much smaller than the largest one the GBD can handle.

One way of circumventing the limitations of the OA RMP in handling large instances is to exploit its constraint matrix structure. When the RMP is a MILP, it can sometimes be efficiently solved by means of a Benders decomposition algorithm. This is especially true when the constraints of the problem being solved have a stair-case structure, see Figure 2(a), like the ones of location-transportation problems.

Unfortunately, this is not the case when capacity limits or congestion control are enforced by constraints that measure the level of utilization of the installed infrastructure. Normally, these constraints destroy the stair-case structure, see Figure 2(b), usually degrading the performance of the employed algorithms.

Nevertheless, whenever the MINLP can be reformulated by adding new variables in order to separate the nonlinearities from the large scale system, thus regaining a stair-case structure, see Figure 2(c), a hybrid strategy combining OA/Benders cuts can be efficiently used.

This strategy may allow the parallel solution of the SPs of the OA and Benders methods. In other words, both OA and Benders cuts can be added to the RMP at the same time. Moreover, assuming that the number of additional variables is much smaller than the number of variables of the large-scale system, the time required to solve the RMP may be shorter than in the standalone application of the OA or the GBD, because this RMP has better bounds than the GBD RMP and a smaller size than the OA RMP.
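The hybrid loop described above can be sketched on a toy problem in which a Benders cut and an OA cut are generated in parallel at each iteration and both are added to the same master. The value function q(z), the congestion term tau(z) and all names below are assumptions of this sketch, not the paper's formulation.

```python
from concurrent.futures import ThreadPoolExecutor

def benders_sp(z):
    # LP value function q(z) = 6 - 2z, solved in closed form; returns its
    # value and the Benders cut  eta >= 6 - 2z  as (intercept, slope)
    return 6 - 2 * z, (6.0, -2.0)

def oa_sp(z):
    # convex congestion term tau(z) = 0.5*z**2; returns its value and the
    # OA cut  xi >= tau(z_t) + tau'(z_t)*(z - z_t)  as (intercept, slope)
    val, slope = 0.5 * z ** 2, float(z)
    return val, (val - slope * z, slope)

def hybrid(max_iter=10, tol=1e-9):
    """Toy hybrid master: min over z in {0,1,2} of  z + eta + xi, where eta
    under-estimates q(z) via Benders cuts and xi under-estimates tau(z) via
    OA cuts; both SPs are solved in parallel at every iteration."""
    b_cuts, o_cuts, z_t, UB = [], [], 0, float("inf")
    lower = lambda z: (z + max(i + s * z for i, s in b_cuts)
                         + max(i + s * z for i, s in o_cuts))
    for _ in range(max_iter):
        with ThreadPoolExecutor(max_workers=2) as ex:   # SPs solved in parallel
            fb, fo = ex.submit(benders_sp, z_t), ex.submit(oa_sp, z_t)
        (q, bc), (t, oc) = fb.result(), fo.result()
        UB = min(UB, z_t + q + t)                       # feasible value
        b_cuts.append(bc); o_cuts.append(oc)            # both cut families added
        z_t = min(range(3), key=lower)                  # master by enumeration
        if UB - lower(z_t) <= tol:
            return z_t, UB
    return z_t, UB
```

Here the master converges to z = 1 in three iterations; the point is only that the two cut families accumulate in the same relaxed master, in any order.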

In the case of formulation (1)-(9), the objective function is separable into its linear and nonlinear terms. Hence applying the OA technique only requires replacing τk(gk) by ξk for each k in the objective function and adding constraints of the form ξk ≥ τk(g_k^h) + τ'k(g_k^h)(gk − g_k^h). These are responsible for the OA of the function τk(gk). The equivalent formulation of the OA MP can then be given as:

Constraints (47) are not present in formulation (51)-(53), because the coupling constraints are linear, making it unnecessary to perform an OA of the feasible region. Furthermore, as expected, this formulation is still too large to be solved efficiently. The large number of x variables may represent a computational burden during the solution of the OA MP. One way of addressing this drawback is to project these variables out, provided that some manipulations are carried out. The idea here is to enable the solution of the OA MP by means of a Benders decomposition algorithm.
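The linearization of the congestion term can be illustrated numerically. The exact form of τk is not reproduced in this excerpt; the sketch below assumes the power-law form τ(g) = a·g^b suggested by the parameters a and b used in Section 6, so it is an assumption, not the paper's definition.

```python
def oa_cut(a, b, g_h):
    """Linearization of the (assumed) convex congestion function
    tau(g) = a * g**b, a > 0, b >= 1, at the point g_h, returned as
    (intercept, slope) of the cut  xi >= tau(g_h) + tau'(g_h)*(g - g_h)."""
    slope = a * b * g_h ** (b - 1)
    return a * g_h ** b - slope * g_h, slope

# by convexity the cut under-estimates tau everywhere and is tight at g_h
a, b = 0.0001, 2                       # values used in the first experiments
intercept, slope = oa_cut(a, b, g_h=50.0)
assert abs(intercept + slope * 50.0 - a * 50.0 ** b) < 1e-9
for g in (0.0, 25.0, 100.0, 500.0):
    assert intercept + slope * g <= a * g ** b + 1e-9
```

Each cut of this kind, added for a visited point g_k^h, tightens the under-estimator ξk from below without ever cutting off a feasible congestion level.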

Observing the role of the variables xijkk in constraints (5), it is possible to replace them using additional variables yijk ≥ 0 and some coupling constraints on y, z and x. Equation (5), responsible for computing the gk variables, is then reformulated as follows:

where the values of yijk can be computed using the following coupling constraints:

Moreover, in order to avoid the degradation of linear programming bounds, constraints (3) and (4) are now rewritten as:

These manipulations allow the decomposition of the OA MP using the standard Benders decomposition scheme. It is now possible to project the x variables out, making the OA MP more manageable. For fixed values y^t and z^t at some iteration t, the Benders primal SP is given by:

Associating the dual variables u and v with the corresponding constraints of the primal SP, the following dual Benders SP is obtained for a given i-j pair:

Using (57) and the help of an auxiliary variable η ≥ 0, Benders cuts can be derived, where u^t and v^t are the optimal values of the dual variables at iteration t. Once again, no further feasibility constraints are required, since the dual Benders SP is always bounded. Moreover, it is also possible to eliminate the variables gk from the OA MP using equations (54). Therefore the resulting reformulated OA MP is now written as:

In formulation (62)-(73), τk(·) is calculated using equation (54) for the pair (z^h, y^h), while the associated quantity is computed recalling equation (26). It is important to remark that the final form of the OA MP enables the parallel solution of the SPs involving the pair (g, β) and the (u, v) variables. Whenever a new solution (z, y) is available, new Benders cuts (64) or new OA cuts (65) can be added to the MP in any order.

Moreover, all the results concerning Pareto-optimal cuts and core points can be reused here, since a starting core point in the y space can be readily obtained. Even when using Pareto-optimal cuts, however, the solution of (62)-(73) does not imply the generation of cuts as strong as those obtained by the classical OA MP, i.e., the one without the x variables projected out. Nevertheless, it provides a good trade-off between the strength of the cuts and the computational effort of solving the MP.
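The reuse of Pareto-optimal cuts follows the idea of Magnanti & Wong [42]: among alternative optimal dual solutions, keep the one maximizing the cut value at a core point. A minimal sketch on a two-variable toy SP (the tiny LP, its data and the function name are all illustrative assumptions):

```python
def mw_cut(a, b, y, y0):
    """Toy Benders SP:  min x  s.t.  x >= a - y[0],  x >= b - y[1],  x >= 0.
    Its dual:  max u1*(a - y[0]) + u2*(b - y[1]),  u1 + u2 <= 1,  u >= 0.
    Among the optimal dual vertices, return the one maximizing the cut value
    at the core point y0 (Magnanti-Wong Pareto-optimal cut selection)."""
    r1, r2 = a - y[0], b - y[1]
    v = max(r1, r2, 0.0)                        # optimal value of primal and dual
    verts = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
    opt = [u for u in verts if abs(u[0] * r1 + u[1] * r2 - v) < 1e-12]
    return max(opt, key=lambda u: u[0] * (a - y0[0]) + u[1] * (b - y0[1]))
```

With a = b and y = (2, 2) the dual is degenerate, and the core point decides which of the two equally valid cuts η ≥ u1(a − y1) + u2(b − y2) is kept, which is exactly the situation in which Pareto-optimal cut selection pays off.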

A sketch of the implemented algorithm is detailed in Algorithm 3, in which the objective function optimal values of the OA MP and the OA SP are compared.



At lines 3 and 13 of Algorithm 3, the OA MP is solved relaxing and imposing the integrality constraints (73), respectively. In lines 3 through 12, Benders and OA cuts are added to the OA MP while the difference between the under-estimator variable and the value of the SP objective function is greater than a threshold δ.
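This two-phase scheme, cut generation first on the LP relaxation (hot-start cycles) and then with integrality enforced, can be sketched on a one-dimensional toy problem. The master objective and the congestion term below are illustrative assumptions, chosen only to show the δ-based stopping rule of each phase.

```python
def cut_loop(delta=1e-6, max_cycles=30):
    """Generate cuts first on the relaxed master, then on the integer master,
    stopping each phase when the under-estimator xi is within delta of the
    SP value at the current point."""
    tau = lambda g: 0.05 * g ** 2            # assumed convex congestion term
    dtau = lambda g: 0.10 * g
    cuts = []                                # OA cuts as (intercept, slope)

    def master(relaxed):
        # min over g of (10 - g) + xi with xi >= every cut; g in [0, 10],
        # integrality on g enforced only in the second phase
        grid = [i / 10 for i in range(101)] if relaxed else list(range(11))
        phi = lambda g: (10 - g) + max((a + s * g for a, s in cuts), default=0.0)
        g = min(grid, key=phi)
        return g, phi(g)

    lp_cycles = it_cycles = 0
    for relaxed in (True, False):
        for _ in range(max_cycles):
            g, bound = master(relaxed)
            xi = bound - (10 - g)            # current under-estimator value at g
            if tau(g) - xi <= delta:         # SP value reached: phase done
                break
            cuts.append((tau(g) - dtau(g) * g, dtau(g)))
            if relaxed:
                lp_cycles += 1
            else:
                it_cycles += 1
    return g, lp_cycles, it_cycles
```

Most cuts are accumulated cheaply during the relaxed hot-start cycles, so the integer phase starts with a tight under-estimator and needs few, if any, additional cuts.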

In a similar way to Algorithm 1, the OA MP resolution time is usually much larger than the SP time, due to the integrality constraints. Therefore, the generation of Benders and OA cuts is embedded in a standard branch-and-cut framework, akin to Algorithm 2. The next section presents computational experiments comparing the deployed solution strategies.


6 COMPUTATIONAL EXPERIMENTS
In order to assess the performance of the proposed algorithms and the impact of the single allocation feature of this class of problems under hub congestion, three sets of computational experiments were designed.

The first one demonstrates how the hub infrastructure changes when congestion costs are considered, and that, depending on how the congestion cost function parameters are set, it is possible to solve large instances to optimality using the GBD algorithm (Algorithm 1).

The second set verifies how a given instance becomes harder to solve as the congestion and installation costs vary, showing that instances become harder when the two effects are combined. It is thus important to deploy a more elaborate GBD: Algorithm 2. This experiment also shows that the computational burden is explained not only by the instance size, but by the parameter settings too.

Finally, when the combination of the congestion and installation cost parameters is stressed, a further approach is required. Thus a comparison of the OA Algorithm 3 and the GBD Algorithm 2 is presented in the final set of experiments.

In all the experiments, the well-known Australian Post (AP) standard data set introduced by [28,30] is used. The names of the instances are coded as APn.α, where n is the number of nodes and α = {2,4,6,8} represents the selected discount factors 0.2, 0.4, 0.6 and 0.8, respectively. These instances have sizes ranging from 10 to 200 nodes, Euclidean distances representing the transportation costs, and installation costs for the first 50 nodes only.

As K = N is considered in the experiments, hub fixed costs were generated for all instances using a Gaussian distribution with mean equal to fo and variance set to 40%, to mimic how installation costs vary in realistic problems. Here fo represents the scaled difference in objective value between a scenario in which a virtual hub is located at the center of mass and a scenario in which all nodes are hubs, as introduced by [25]. Further, as done by [25], the nodes with higher flows are assigned the higher fixed costs, in general hardening the selection of potential hubs. For other articles considering this test bed and the methodological generation of hub set-up costs, refer to [10-12,25,26,48].

The nominal capacity of each hub k was generated by taking the total demand of the node (Ok + Dk) plus a random fraction (U[15%, 50%]) of the total demand, recalling that the hub total traffic does not drop linearly with the number of installed hubs, equation (5). The algorithms were implemented in C++ using the IBM CPLEX 11 Concert Technology under a Linux operating system. The experiments were run on a regular desktop PC with an Intel Core 2 Quad 3.2 GHz processor and 8 GB of RAM, with a time limit of 72,000 seconds.
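The generation rules above can be sketched as follows. Two ambiguities are resolved by assumption here: "variance set to 40%" is read as a standard deviation of 0.4·fo, and the U[15%, 50%] fraction is applied to the node's own demand; the function name is hypothetical.

```python
import random

def generate_hub_data(O, D, f_o, seed=0):
    """Sketch of the instance-generation rules described in the text.
    O[k] and D[k] are the total outbound and inbound flows of node k;
    f_o is the reference fixed cost."""
    rng = random.Random(seed)
    n = len(O)
    # Gaussian fixed costs around f_o ...
    costs = sorted(rng.gauss(f_o, 0.4 * f_o) for _ in range(n))
    # ... assigned so that nodes with higher total flow get higher costs
    by_flow = sorted(range(n), key=lambda k: O[k] + D[k])
    fixed = [0.0] * n
    for rank, k in enumerate(by_flow):
        fixed[k] = costs[rank]
    # nominal capacity: the node's own demand plus a U[15%, 50%] fraction of it
    cap = [(O[k] + D[k]) * (1.0 + rng.uniform(0.15, 0.50)) for k in range(n)]
    return fixed, cap
```

Sorting the sampled costs against the flow ranking reproduces the hardening effect mentioned in the text: the most attractive (high-flow) hub candidates are also the most expensive ones.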

In the first set of experiments, the parameters a and b were set to 0.0001 and 2, respectively. The obtained results are presented in Table 1. The entry LP reports the number of cycles in which the GBD MP was solved disregarding the integrality constraints (hot-start cycles). The entry Integer shows the number of additional integer cycles needed to prove optimality. Columns Solution Time and Installed Hubs show the total computational time required and the number of installed hubs.



An interesting feature of this class of hub-and-spoke problems can be observed in Table 1 when compared to the results reported in Camargo et al. [11]: single allocation systems are much more affected by congestion effects than multiple allocation ones. This is expected, since, in the present case, there are no alternative paths for a given origin-destination pair once the network is designed. This enhanced sensitivity can be deduced from Table 1, where an addition of 1.35 hubs on average was required to handle the congestion effects, even for very small congestion cost parameters.

In this sense, the economies of scale play a fundamental role in the network design, since they make it possible to absorb the extra congestion costs before installing additional infrastructure. Further, the computational burden to prove optimality grows as the efficiency of the economies of scale in counterbalancing the congestion effects is reduced. Figure 3 shows this trend by plotting, in log scale, the solution time of instances AP40, AP50 and AP70 under congestion.



Observing the rows with small economies of scale, APn.8, the number of additional hubs required to manage the congestion costs is significant, raising the following question: if the network is unable to provide large discounts on inter-hub connections, due to the lack of transportation technology or other environmental reasons, is it a good strategy to design such networks using the single allocation approach? Future research may then address the economies of scale as a function of the total flow on the inter-hub connections and the associated impact on the network design under hub congestion.

For the second set of experiments, the instance AP10.2 was selected in order to show how a given instance becomes harder to solve as the congestion costs are increased. The parameter b was set to 2, while the congestion cost function parameter a was scaled from 0.0001 to 100. Instances with a low congestion cost parameter a tend to have low solution times. When the congestion costs are increased, the computational effort grows. However, if the congestion costs are very high, the computational burden of solving the associated instance does not worsen in the same manner. This can be seen in Figure 4, as the computational effort stabilizes after a given threshold of the parameter a is surpassed. In other words, all the economies of scale were preserved, at the cost of an additional investment in the network infrastructure.



This phenomenon suggests that the inherent difficulty of this class of problems is associated with components other than the congestion cost parameters, such as the hub installation costs. In order to investigate the role of these costs, the same instance AP10.2 was used, but with the parameters a and b fixed to 0.001 and 2, respectively. A scaling factor multiplying the installation costs was adopted, ranging from 0.5 to 20.

As in the previous case, Figure 5 shows that after a given threshold of the increased hub installation costs, the computing time to prove optimality stabilizes. Hence, from Figures 4 and 5, it is possible to infer that the problem under study becomes harder to solve if a given instance presents both high hub installation costs and a very aggressive congestion cost function.



In the next experiments, for fixed parameters a = 0.001 and b = 2, a comparison of the computing times and numbers of integer cycles needed to attain optimality with Algorithms 1 and 2 is depicted in the fourth, fifth, seventh and eighth columns, respectively, of Table 2. Observe that different installation scaling factors were adopted, as indicated in the second column, making these instances harder to solve than the ones in Table 1.



Algorithm 1 requires 7 times more computing time and 11 times more integer cycles than Algorithm 2, on average. It is also interesting to remark that some computing times in Table 2 are comparable to the ones associated with the instances of 150 and 200 nodes of Table 1, clearly indicating that the complexity of this problem is not only a matter of instance size.

Furthermore, observing the evolution of the upper and lower bounds of both algorithms during the solution process of instance AP40.6 of Table 2 (refer to Figure 6), the main advantage of Algorithm 2 over Algorithm 1 is its ability to reduce the tail-off effect. In order to close the last 1% of the optimality gap, Algorithm 2 takes close to 1,000 seconds, or 42% of its total time, against 3,600 seconds, or 87%, for Algorithm 1. A similar behavior occurs in all the other instances.

The tail-off effect is more pronounced when the instances combine the two characteristics pointed out above, large hub installation costs and an aggressive congestion cost function, making the deployment of Algorithm 2 an interesting solution approach. However, when this combination is stressed even further, a different strategy becomes necessary, as demonstrated in the third and final set of experiments.

In these final experiments, the congestion cost parameter a is set to 0.01, and installation cost scaling factors were adopted as indicated in the second column of Table 3. Recall that in the first two sets of experiments, parameter a was set to 0.0001 and 0.001, respectively. In other words, the combined effect of high congestion and installation costs makes these instances even harder to solve.



Table 3 presents the comparison results of Algorithms 2 and 3. Algorithm 3, the OA technique, demonstrates a superior performance on these very hard instances, being on average 3.3 times faster and taking 3 times fewer integer cycles to prove optimality than Algorithm 2. Notice that Algorithm 2 fails to prove optimality in four instances, ending with an average optimality gap close to 1% (instances AP70 and AP100).

Although Algorithm 3 demonstrates a remarkable performance in tackling hard instances, the effort of solving each OA MP is still very high, even after projecting the x variables out to reduce its dimension. In this sense, this technique is preferable only if large optimality gaps are detected in the first integer cycles of Algorithm 2, as can be seen in Table 4.



In order to clarify the former statement, a direct comparison of the three implemented algorithms is presented in Table 4. In this experiment, instances AP10.2 and AP20.4 were selected, with the hub installation scaling factor and the congestion parameter a varied according to the second and third columns, respectively.

The boldfaced entries in Table 4 represent the minimum computational effort for the instances solved. The deployment of Algorithm 3 is more interesting than the others when the optimality gaps of the first integer cycles of Algorithms 1 and 2 are greater than 40% on average for the tested problems.

A final comment is necessary here. To the authors' knowledge, there is no report in the literature of OA methods being able to solve such large scale problems. Notice that instances AP100.2 and AP100.8 have 10,000 integer variables and 49,500,000 continuous variables. Therefore, solving the OA master programs by means of the Benders decomposition method is a promising technique whenever the problem structure is amenable and the GBD is not a good alternative.


7 CONCLUSIONS
Addressing congestion effects by explicitly considering them as costs in the objective function yields a more realistic modeling approach, especially when suitable nonlinear functions are selected. Once the rapid growth of the congestion costs is portrayed, a flow load balancing over the installed infrastructure is induced, leading to a superior network design.

This research presented two different techniques for solving large scale instances of the single allocation hub location problem under congestion: the GBD technique and the OA method. Both are capable of solving instances of up to 100 nodes, with the GBD (Algorithm 1) also able to manage larger instances of 150 and 200 nodes.

Finally, the complexity of this class of problems is not only associated with the size of the instances. When an aggressive congestion cost function and large hub installation costs are combined, the computational effort to solve these instances may become remarkable. In these cases, the addition of Pareto-optimal cuts in the MP branch-and-bound tree is an interesting way to reduce the tail-off effect, improving the overall performance of both of the portrayed methods, GBD and OA. In general, on one hand the GBD method is more scalable with instance size; on the other hand, the OA technique is less affected by the tail-off effect.


ACKNOWLEDGMENTS
The authors' research has been funded by Conselho Nacional de Pesquisa - CNPq, project 480757/2008-9.


REFERENCES
[1] ABDINNOUR-HELM S. 1998. A hybrid heuristic for the uncapacitated hub location problem. European Journal of Operational Research, 106(2-3): 489-499.

[2] ABDINNOUR-HELM S. 2001. Using simulated annealing to solve the p-hub median problem. International Journal of Physical Distribution & Logistics Management, 31(3): 203-220.

[3] ABDINNOUR-HELM S & VENKATARAMANAN MA. 1998. Solution approaches to hub location problems. Annals of Operations Research, 78: 31-50.

[4] ADJIMAN CS, ANDROULAKIS IP & FLOUDAS CA. 1997. Global optimization of MINLP problems in process synthesis and design. Computers & Chemical Engineering, 21(Suppl. S): S445-S450.

[5] ALUMUR SA & KARA BY. 2008. Network hub location problems: The state of the art. European Journal of Operational Research, 190: 1-21.

[6] ALUMUR SA, KARA BY & KARASAN OE. 2009. The design of single allocation incomplete hub networks. Transportation Research Part B, 43: 936-951.

[7] AYKIN T. 1994. Lagrangian relaxation based approaches to capacitated hub-and-spoke network design problem. European Journal of Operational Research, 79: 501-523.

[8] AYKIN T. 1995. Network policies for hub-and-spoke systems with applications to the air transportation system. Transportation Science, 29: 201-221.

[9] BENDERS JF. 1962. Partitioning procedures for solving mixed-variables programming problems. Numerische Mathematik, 4: 238-252.

[10] CAMARGO RS, MIRANDA JR G DE & LUNA HP. 2008. Benders decomposition for the uncapacitated multiple allocation hub location problem. Computers and Operations Research, 35: 1047-1064.

[11] CAMARGO RS, MIRANDA JR G DE, FERREIRA R & LUNA HP. 2009. Multiple allocation hub-and-spoke network design under hub congestion. Computers and Operations Research, 36: 3097-3106.

[12] CAMARGO RS, MIRANDA JR G DE & LUNA HP. 2009. Benders decomposition for hub location problems with economies of scale. Transportation Science, 43: 86-97.

[13] CAMARGO RS, MIRANDA JR G DE & FERREIRA RPM. 2011. A hybrid outer-approximation/Benders decomposition algorithm for the single allocation hub location problem under congestion. Operations Research Letters, 39(5): 329-337.

[14] CAMPBELL JF. 1994. Integer programming formulations of discrete hub location problems. European Journal of Operational Research, 72: 387-405.

[15] CAMPBELL JF. 1994. A survey of network hub location. Studies in Locational Analysis, 6: 31-49.

[16] CAMPBELL JF, ERNST AT & KRISHNAMOORTHY M. 2002. Hub location problems. In: Drezner Z and Hamacher HW (editors), Facility Location: Applications and Theory, chapter 12, pages 373-407. Springer, 1st edition.

[17] CHEN JF. 2007. A hybrid heuristic for the uncapacitated single allocation hub location problem. Omega, 35: 211-220.

[18] CONTRERAS I, FERNANDEZ E & MARÍN A. 2010. The tree of hubs location problem. European Journal of Operational Research, 202: 390-400.

[19] CORDEAU JF, SOUMIS F & DESROSIERS J. 2001. Simultaneous assignment of locomotives and cars to passenger trains. Operations Research, 49(4): 531-548.

[20] COSTA M DA G, CAPTIVO ME & CLÍMACO J. 2008. Capacitated single allocation hub location problem: a bi-criteria approach. Computers and Operations Research, 35(11): 3671-3695.

[21] CUNHA CB & SILVA MR. 2007. A genetic algorithm for the problem of configuring a hub-and-spoke network for a LTL trucking company in Brazil. European Journal of Operational Research, 179: 747-758.

[22] DESAI J & SEN S. 2010. A global optimization algorithm for reliable network design. European Journal of Operational Research, 200(1): 1-8.

[23] DURAN MA & GROSSMANN IE. 1986. An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming, 36: 307-339.

[24] EBERY J. 2001. Solving large single allocation p-hub problems with two or three hubs. European Journal of Operational Research, 128(2): 447-458.

[25] EBERY J, KRISHNAMOORTHY M, ERNST A & BOLAND N. 2000. The capacitated multiple allocation hub location problem: Formulations and algorithms. European Journal of Operational Research, 120: 614-631.

[26] ELHEDHLI S & HU FX. 2005. Hub-and-spoke network design with congestion. Computers and Operations Research, 32: 1615-1632.

[27] ELHEDHLI S & WU H. 2010. A Lagrangean heuristic for hub-and-spoke system design with capacity selection and congestion. INFORMS Journal on Computing, 22(2): 282-296.

[28] ERNST AT & KRISHNAMOORTHY M. 1996. Efficient algorithms for the uncapacitated single allocation p-hub median problem. Location Science, 4: 139-154.

[29] ERNST AT & KRISHNAMOORTHY M. 1998. An exact solution approach based on shortest-paths for p-hub median problems. INFORMS Journal on Computing, 10: 149-162.

[30] ERNST AT & KRISHNAMOORTHY M. 1999. Solution algorithms for the capacitated single allocation hub location problem. Annals of Operations Research, 86: 141-159.

[31] FLETCHER R & LEYFFER S. 1994. Solving mixed integer nonlinear programs by outer approximation. Mathematical Programming, 66: 327-349.

[32] GEOFFRION AM & GRAVES GW. 1974. Multicommodity distribution system design by Benders decomposition. Management Science, 20: 822-844.

[33] GEOFFRION AM. 1972. Generalized Benders decomposition. Journal of Optimization Theory and Applications, 10(4): 237-260.

[34] GILLEN D & LEVINSON D. 1999. The full cost of air travel in the California corridor. Transportation Research Record: Journal of the Transportation Research Board, 1662: 1-9.

[35] GROSSMANN IE & KRAVANJA Z. 1995. Mixed-integer nonlinear programming techniques for process systems engineering. Computers & Chemical Engineering, 19: 189-204.

[36] HUANG SM, BATTA R & NAGI R. 2005. Distribution network design: Selection and sizing of congested connections. Naval Research Logistics, 52(8): 701-712.

[37] KARA BY & TANSEL BC. 2001. The latest arrival hub location problem. Management Science, 47: 1408-1420.

[38] KLINCEWICZ JG. 1991. Heuristics for the p-hub location problem. European Journal of Operational Research, 53: 25-37.

[39] KLINCEWICZ JG. 1992. Avoiding local optima in the p-hub location problem using tabu search and grasp. Annals of Operations Research, 40: 283-302.

[40] KLINCEWICZ JG. 1996. A dual algorithm for the uncapacitated hub location problem. Location Science, 4(3): 173-184.

[41] LABBÉ M, YAMAN H & GOURDIN E. 2005. A branch and cut algorithm for hub location problems with single assignment. Mathematical Programming: Series A, 102: 371-405.

[42] MAGNANTI TL & WONG RT. 1981. Accelerating Benders decomposition: Algorithmic enhancement and model selection criteria. Operations Research, 29(3): 464-483.

[43] MAGNANTI TL, MIRCHANDANI P & WONG RT. 1986. Tailoring Benders decomposition for uncapacitated network design. Mathematical Programming Study, 26: 112-154.

[44] MARIANOV V & SERRA D. 2003. Location models for airline hubs behaving as M/D/c queues. Computers and Operations Research, 30(7): 983-1003.

[45] MERCIER A, CORDEAU JF & SOUMIS F. 2005. A computational study of Benders decomposition for the integrated aircraft routing and crew scheduling problem. Computers and Operations Research, 32(6): 1451-1476.

[46] O'KELLY ME. 1987. A quadratic integer program for the location of interacting hub facilities. European Journal of Operational Research, 32: 393-404.

[47] O'KELLY ME. 1998. A geographer's analysis of hub-and-spoke networks. Journal of Transport Geography, 6(3): 171-186.

[48] O'KELLY ME & BRYAN DL. 1998. Hub location with flow economies of scale. Transportation Research Part B, 32(8): 605-616.

[49] PAPADAKOS N. 2008. Practical enhancements to the Magnanti-Wong method. Operations Research Letters, 36: 444-449.

[50] PAULES GE & FLOUDAS CA. 1992. Stochastic programming in process synthesis: A two-stage model with MINLP recourse for multiperiod heat-integrated distillation sequences. Computers & Chemical Engineering, 16(3): 189-210.

[51] PIRKUL H & SCHILLING DA. 1998. An efficient procedure for designing single allocation hub and spoke systems. Management Science, 44(12): 235-242.

[52] RODRIGUEZ V, ALVAREZ MJ & BARCOS L. 2007. Hub location under capacity constraints. Transportation Research Part E, 43: 495-505.

[53] SILVA MR & CUNHA CB. 2009. New simple and efficient heuristics for the uncapacitated single allocation hub location problem. Computers and Operations Research, 36(12): 3152-3165.

[54] SKORIN-KAPOV D & SKORIN-KAPOV J. 1994. On tabu search for the location of interacting hub facilities. European Journal of Operational Research, 73: 501-508.

[55] SKORIN-KAPOV D, SKORIN-KAPOV J & O'KELLY M. 1996. Tight linear programming relaxations of uncapacitated p-hub median problems. European Journal of Operational Research, 94: 582-593.

[56] TOPCUOGLU H, CORUT F, ERMIS M & YILMAZ G. 2005. Solving the uncapacitated hub location problem using genetic algorithms. Computers and Operations Research, 32(4): 967-984.

[57] YUAN X, ZHANG S, PIBOULEAU L & DOMENECH S. 1988. Une méthode d'optimisation non linéaire en variables mixtes pour la conception de procédés. RAIRO Recherche Opérationnelle, 22: 331.



* Corresponding author
