
Fortran subroutines for network flow optimization using an interior point algorithm

L. F. Portugal (I); M. G. C. Resende (II)*; G. Veiga (III); J. Patrício (IV); J. J. Júdice (V)

(I) Dep. de Ciências da Terra, Universidade de Coimbra, Coimbra, Portugal

(II) Internet and Network Systems Research Center, AT&T Labs Research, Florham Park, NJ, USA, mgcr@research.att.com

(III) HTF Software, Rio de Janeiro, RJ, Brazil, gveiga@gmail.com

(IV) Instituto Politécnico de Tomar, Tomar, Portugal, and Instituto de Telecomunicações, Polo de Coimbra, Portugal, Joao.Patricio@aim.estt.ipt.pt

(V) Dep. de Matemática, Universidade de Coimbra, Coimbra, and Instituto de Telecomunicações, Polo de Coimbra, Portugal, Joaquim.Judice@co.it.pt

ABSTRACT

We describe Fortran subroutines for network flow optimization using an interior point network flow algorithm; together with a Fortran language driver, these subroutines make up PDNET. The algorithm is described in detail and its implementation is outlined. Usage of the package is described and some computational experiments are reported. Source code for the software can be downloaded from http://www.research.att.com/~mgcr/pdnet.

Keywords: optimization; network flow problems; interior point method; conjugate gradient method; FORTRAN subroutines.

RESUMO

É apresentado o sistema PDNET, um conjunto de subrotinas em Fortran para a otimização de fluxos lineares em redes utilizando um algoritmo de pontos interiores. O algoritmo e a sua implementação são descritos com algum detalhe. A utilização do sistema é explicada e são apresentados alguns resultados computacionais. O código fonte está disponível em http://www.research.att.com/~mgcr/pdnet.

Palavras-chave: otimização; problemas de fluxo em rede; método de ponto interior; método do gradiente conjugado; subrotinas FORTRAN.

1. Introduction

Given a directed graph G = (V, E), where V is a set of m vertices and E a set of n edges, let (i,j) denote a directed edge from vertex i to vertex j. The minimum cost network flow problem can be formulated as

    minimize  Σ_{(i,j) ∈ E} c_ij x_ij

subject to

    Σ_{(i,j) ∈ E} x_ij - Σ_{(j,i) ∈ E} x_ji = b_i,   for all i ∈ V,
    l_ij <= x_ij <= u_ij,                            for all (i,j) ∈ E,          (2)

where x_ij denotes the flow on edge (i,j) and c_ij is the cost of passing one unit of flow on edge (i,j). For each vertex i ∈ V, b_i denotes the flow produced or consumed at vertex i. If b_i > 0, vertex i is a source. If b_i < 0, vertex i is a sink. Otherwise (b_i = 0), vertex i is a transshipment vertex.

For each edge (i,j) ∈ E, l_ij (u_ij) denotes the lower (upper) bound on the flow on edge (i,j). Most often, the problem data are assumed to be integer. In matrix notation, the above network flow problem can be formulated as a primal linear program of the form

    minimize  c^T x
    subject to  A x = b,   x + s = u,   x >= 0,  s >= 0,                          (3)

where c is the n-dimensional vector whose elements are the costs c_ij, A is the m×n vertex-edge incidence matrix of the graph G = (V, E), i.e. for each edge (i,j) in E there is an associated column in matrix A with exactly two nonzero entries: an entry +1 in row i and an entry -1 in row j; b, x, and u are defined as above, and s is an n-dimensional vector of upper bound slacks. Furthermore, an appropriate variable change allows us to assume, without loss of generality, that the lower bounds are zero. The dual of (3) is

    maximize  b^T y - u^T w
    subject to  A^T y - w + z = c,   z >= 0,  w >= 0,

where y is the m-dimensional vector of dual variables, and w and z are n-dimensional vectors of dual slacks.
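
Recall the optimality conditions for this primal-dual pair: a primal feasible (x, s) and a dual feasible (y, w, z) are optimal exactly when the complementarity products vanish,

    x_ij z_ij = 0   and   s_ij w_ij = 0,   for all (i,j) ∈ E.

The interior point method recalled in Section 2 perturbs these products to a common positive value μ (the central path) and drives μ toward zero.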

If graph G is disconnected and has p connected components, there are exactly p redundant flow conservation constraints, which are sometimes removed from the problem formulation. Without loss of generality, we rule out trivially infeasible problems by assuming

    Σ_{i ∈ V_k} b_i = 0,   k = 1, ..., p,

where V_k is the set of vertices of the k-th connected component of G.

If the flow x_ij is required to be integer, (2) is replaced with

    x_ij ∈ { l_ij, l_ij + 1, ..., u_ij },   for all (i,j) ∈ E.

In the description of the algorithm, we assume, without loss of generality, that l_ij = 0 for all (i,j) ∈ E and that c ≠ 0. A simple change of variables is done in the subroutines to transform the original problem into an equivalent one with l_ij = 0 for all (i,j) ∈ E. The flow is transformed back to the original problem upon termination. The case where c = 0 is a simple feasibility problem, and is handled by solving a maximum flow problem [1].
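
The change of variables is the usual shift x' = x - l: capacities become u - l and the net flow at the edge endpoints absorbs the fixed amount l. The sketch below illustrates the idea on a plain edge-list representation; the array names are illustrative and do not correspond to PDNET's internal routine pdnet_transform().

    ! Minimal sketch (not PDNET's actual code) of shifting lower bounds to zero
    ! on an edge-list representation of a minimum cost flow instance.
    subroutine shift_lower_bounds(nnodes, nedges, tail, head, low, cap, b)
      implicit none
      integer, intent(in)    :: nnodes, nedges
      integer, intent(in)    :: tail(nedges), head(nedges)
      integer, intent(inout) :: low(nedges), cap(nedges), b(nnodes)
      integer :: e
      do e = 1, nedges
        ! the fixed flow low(e) leaves tail(e) and enters head(e)
        b(tail(e)) = b(tail(e)) - low(e)
        b(head(e)) = b(head(e)) + low(e)
        cap(e) = cap(e) - low(e)     ! new capacity u' = u - l
        low(e) = 0
      end do
      ! on termination, the original flow is recovered as x = x' + l; the
      ! objective value differs only by the constant sum of cost*low over edges
    end subroutine shift_lower_bounds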

Before concluding this introduction, we present some notation and outline the remainder of the paper. We denote the i-th column of A by A_i, the i-th row of A by A^i, and the submatrix of A formed by the columns with indices in a set S by A_S. Given a vector x ∈ R^n, we denote by X the n×n diagonal matrix having the elements of x on its diagonal. The Euclidean (2-)norm is denoted by || · ||.

This paper describes Fortran subroutines used in an implementation of PDNET, an interior point network flow method introduced in Portugal, Resende, Veiga & Júdice [12]. The paper is organized as follows. In Section 2 we review the truncated primal-infeasible dual-feasible interior point method for linear programming. The implementation of this algorithm for network flow problems is described in Section 3. Section 4 describes the subroutines and their usage. Computational results, comparing PDNET with the network optimizer of CPLEX 10, are reported in Section 5. Concluding remarks are made in Section 6.

2. Truncated primal-infeasible dual-feasible interior point algorithm

In this section, we recall the interior point algorithm implemented in PDNET. Let

    S_+ = { (x, y, s, w, z) : A^T y - w + z = c,  x + s = u,  (x, s, w, z) > 0 }.

The truncated primal-infeasible dual-feasible interior point (TPIDF) algorithm [12] starts with any solution (x^0, y^0, s^0, w^0, z^0) ∈ S_+. At iteration k, the Newton direction (Δx^k, Δy^k, Δs^k, Δw^k, Δz^k) is obtained as the solution of the linear system of equations

    A Δx^k = b - A x^k,
    Δx^k + Δs^k = 0,
    A^T Δy^k - Δw^k + Δz^k = 0,                                    (5)
    Z^k Δx^k + X^k Δz^k = μ_k e - X^k Z^k e,
    W^k Δs^k + S^k Δw^k = μ_k e - S^k W^k e,

where e is a vector of ones of appropriate order. Since the system is solved only approximately, the computed direction satisfies the first equation of (5) up to a residual r^k, whose size is controlled by the tolerance parameter β_1 = 0.1. The centering (primal-dual) parameter μ_k > 0 is updated at each iteration.

Primal and dual steps are taken in the direction (Δx^k, Δy^k, Δs^k, Δw^k, Δz^k) to compute the new iterates according to

    x^{k+1} = x^k + α_p Δx^k,    s^{k+1} = s^k + α_p Δs^k,
    y^{k+1} = y^k + α_d Δy^k,    w^{k+1} = w^k + α_d Δw^k,    z^{k+1} = z^k + α_d Δz^k,

where α_p and α_d are step-sizes in the primal and dual spaces, respectively, given by the ratio tests

    α_p = min { 1, ρ min { -x_i^k / Δx_i^k : Δx_i^k < 0 ;  -s_i^k / Δs_i^k : Δs_i^k < 0 } },
    α_d = min { 1, ρ min { -w_i^k / Δw_i^k : Δw_i^k < 0 ;  -z_i^k / Δz_i^k : Δz_i^k < 0 } },

where ρ = 0.9995.

The solution of the linear system (5) is obtained in two steps. First, we compute the Δy^k component of the direction as the solution of the system of normal equations

    A Θ^k A^T Δy^k = b - A x^k - A Θ^k [ μ_k ((X^k)^{-1} - (S^k)^{-1}) e + w^k - z^k ],      (10)

where Θ^k is the positive diagonal matrix given by

    Θ^k = ( (X^k)^{-1} Z^k + (S^k)^{-1} W^k )^{-1}.                                          (11)

The remaining components of the direction are then recovered, without solving any additional linear system, by

    Δx^k = Θ^k [ A^T Δy^k + μ_k ((X^k)^{-1} - (S^k)^{-1}) e + w^k - z^k ],
    Δs^k = -Δx^k,
    Δz^k = μ_k (X^k)^{-1} e - z^k - (X^k)^{-1} Z^k Δx^k,
    Δw^k = μ_k (S^k)^{-1} e - w^k + (S^k)^{-1} W^k Δx^k.
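
For the reader's convenience, (10) and the recovery formulas can be obtained from (5) by block elimination: setting Δs^k = -Δx^k and solving the last two blocks of (5) for Δz^k and Δw^k gives the expressions above; substituting them into A^T Δy^k - Δw^k + Δz^k = 0 yields the expression for Δx^k, and substituting that expression into A Δx^k = b - A x^k gives (10). Note also that, since Θ^k is a positive diagonal matrix,

    p^T (A Θ^k A^T) p = || (Θ^k)^{1/2} A^T p ||^2 >= 0   for every p,

so A Θ^k A^T is symmetric positive semidefinite (positive definite once the redundant flow conservation rows are removed), which is what allows (10) to be solved by the preconditioned conjugate gradient method of Section 3.1.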

3. Implementation

We discuss how the truncated primal-infeasible dual-feasible algorithm can be used for solving network flow problems. For ease of discussion, we assume, without loss of generality, that the graph is connected. However, disconnected graphs are handled by PDNET.

3.1 Computing the Newton direction

Since the exact solution of (10) can be computationally expensive, a preconditioned conjugate gradient (PCG) algorithm is used to compute approximately an interior point search direction at each iteration. The PCG algorithm solves the linear system

    M^{-1} (A Θ^k A^T) Δy^k = M^{-1} b̄^k,

where M is a positive definite matrix, Θ^k is given by (11), and b̄^k denotes the right-hand side of (10). The aim is to make the preconditioned matrix

    M^{-1} (A Θ^k A^T)

less ill-conditioned than A Θ^k A^T, and to improve the efficiency of the conjugate gradient algorithm by reducing the number of iterations it takes to find a feasible direction.

Pseudo-code for the preconditioned conjugate gradient algorithm implemented in PDNET is presented in Figure 1. The matrix-vector multiplications in line 7 are of the form A Θ^k A^T p_i, and can be carried out without forming A Θ^k A^T explicitly. PDNET uses as its initial direction Δy_0 the direction Δy produced in the previous call to the conjugate gradient algorithm, i.e. during the previous interior point iteration. The first time pcg is called, we assume Δy_0 = (0, ..., 0).
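
Each such product exploits the structure of the vertex-edge incidence matrix: A^T p has one component per edge, equal to the difference of the potentials of its endpoints, and multiplying by A scatters the scaled values back to the endpoints. A minimal sketch on an edge-list representation (array names are illustrative, not PDNET's):

    ! Sketch of q = A*Theta*A'*p for a network incidence matrix A, computed
    ! edge by edge without forming A*Theta*A' explicitly.
    subroutine atheta_at_times(nnodes, nedges, tail, head, theta, p, q)
      implicit none
      integer, intent(in)           :: nnodes, nedges
      integer, intent(in)           :: tail(nedges), head(nedges)
      double precision, intent(in)  :: theta(nedges), p(nnodes)
      double precision, intent(out) :: q(nnodes)
      double precision :: t
      integer :: e
      q = 0.0d0
      do e = 1, nedges
        ! (A'p)_e = p_tail - p_head, scaled by theta_e, scattered back by A
        t = theta(e) * (p(tail(e)) - p(head(e)))
        q(tail(e)) = q(tail(e)) + t
        q(head(e)) = q(head(e)) - t
      end do
    end subroutine atheta_at_times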

The preconditioned residual is computed in lines 3 and 11, where the system of linear equations

    M z^{i+1} = r^{i+1}

is solved. PDNET uses primal-dual variants of the diagonal and spanning tree preconditioners described in [15,16].

The diagonal preconditioner, M = diag(A Θ^k A^T), can be constructed in O(n) operations, and makes the computation of the preconditioned residual of the conjugate gradient possible with m divisions. This preconditioner has been shown to be effective during the initial interior point iterations [11,14,15,16,18].
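
The i-th diagonal entry of A Θ^k A^T is simply the sum of Θ^k_e over the edges e incident to vertex i, so the diagonal preconditioner can be accumulated in a single pass over the edge list, as the following sketch (with illustrative array names) shows:

    ! Sketch: accumulate diag(A*Theta*A') in one pass over the edges.
    subroutine diag_precond(nnodes, nedges, tail, head, theta, d)
      implicit none
      integer, intent(in)           :: nnodes, nedges
      integer, intent(in)           :: tail(nedges), head(nedges)
      double precision, intent(in)  :: theta(nedges)
      double precision, intent(out) :: d(nnodes)
      integer :: e
      d = 0.0d0
      do e = 1, nedges
        ! each edge contributes theta_e to the diagonal entry of both endpoints
        d(tail(e)) = d(tail(e)) + theta(e)
        d(head(e)) = d(head(e)) + theta(e)
      end do
    end subroutine diag_precond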

In the spanning tree preconditioner [16], one identifies a maximal spanning tree of the graph, using as edge weights the diagonal elements of the current scaling matrix, i.e. the components of

    Θ^k e,

where e is the n-vector of ones. An exact maximal spanning tree is computed with the Fibonacci heap variant of Prim's algorithm [13], as described in [1]. At the k-th interior point iteration, let τ_k = {t_1, ..., t_q} be the indices of the edges of the maximal spanning tree. The spanning tree preconditioner is

    M = A_{τ_k} Θ^k_{τ_k} A_{τ_k}^T,

where A_{τ_k} is the submatrix of A with columns indexed by τ_k and Θ^k_{τ_k} is the corresponding diagonal submatrix of Θ^k. For simplicity of notation, we include in A_{τ_k} the linearly dependent rows corresponding to the redundant flow conservation constraints. At each conjugate gradient iteration, the preconditioned residual system

    M z^{i+1} = r^{i+1}

is solved with the variables corresponding to the redundant constraints set to zero. Since A_{τ_k} can be ordered into a block diagonal form with triangular diagonal blocks, the preconditioned residuals can be computed in O(m) operations.
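
A sketch of how these solves are organized: with the redundant rows removed and the tree edges ordered from the leaves toward the root, M z = A_{τ_k} Θ^k_{τ_k} A_{τ_k}^T z = r^{i+1} is solved by the three-step procedure

    A_{τ_k} v = r^{i+1}            (forward solve, leaf to root),
    t = (Θ^k_{τ_k})^{-1} v         (one division per tree edge),
    A_{τ_k}^T z = t                (backward solve, root to leaf),

each step requiring O(m) operations.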

A heuristic is used to select the preconditioner. The initial selection is the diagonal preconditioner, since it tends to outperform the other preconditioners during the initial interior point iterations. The number of conjugate gradient iterations taken at each interior point iteration is monitored. If this number exceeds a given threshold, the current computation of the direction is discarded, and a new conjugate gradient computation is done with the spanning tree preconditioner; once this switch occurs, the diagonal preconditioner is not used again. The diagonal preconditioner is, in any case, limited to at most 30 interior point iterations: if it is still in effect at iteration 30, the spanning tree preconditioner is triggered at iteration 31. Also, as a safeguard, a hard limit of 1000 conjugate gradient iterations is imposed.

To determine when the approximate direction Δy^i produced by the conjugate gradient algorithm is satisfactory, one can compute the angle θ^i between (A Θ^k A^T) Δy^i and the right-hand side b̄^k of (10), and stop when

    | 1 - cos θ^i | < ε_cos^k,

where ε_cos^k is the tolerance at interior point iteration k [8,15]. PDNET starts with a loose tolerance and tightens it after each interior point iteration as follows:

    ε_cos^{k+1} = Δ_cos ε_cos^k,

where, in PDNET, Δ_cos = 0.95. The exact computation of cos θ^i has the complexity of one conjugate gradient iteration and therefore should not be carried out at every conjugate gradient iteration. Since (A Θ^k A^T) Δy^i is approximately equal to b̄^k - r^i, where r^i is the estimate of the residual at the i-th conjugate gradient iteration, the cosine can be estimated by

    cos θ^i ≈ | (b̄^k)^T (b̄^k - r^i) | / ( || b̄^k || · || b̄^k - r^i || ).

Since, on network linear programs, the preconditioned conjugate gradient method finds good directions in few iterations, this estimate is quite accurate in practice. Since it is inexpensive, it is computed at each conjugate gradient iteration.
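
The test itself amounts to a few vector operations per conjugate gradient iteration, as in the following sketch (bbar is the right-hand side of (10) and r the current residual estimate; the function name is illustrative):

    ! Sketch of the inexpensive cosine estimate used to decide when the PCG
    ! direction is good enough.
    double precision function cos_estimate(m, bbar, r)
      implicit none
      integer, intent(in)          :: m
      double precision, intent(in) :: bbar(m), r(m)
      double precision :: v(m)
      v = bbar - r                  ! approximates (A*Theta*A')*dy
      cos_estimate = abs(dot_product(bbar, v)) / &
                     (sqrt(dot_product(bbar, bbar)) * sqrt(dot_product(v, v)))
    end function cos_estimate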

3.2 Stopping criteria for interior point method

In [15], two stopping criteria for the interior point method were used. The first, called the primal-basic (PB) stopping rule, uses the spanning tree computed for the tree preconditioner. If the network flow problem has a unique solution, the edges of the tree converge to the optimal basic sequence of the problem. Let τ be the index set of the edges of the tree, and let F be the index set of the edges fixed at their upper bounds. A tentative primal basic solution x* is built by setting x*_i = u_i for i ∈ F, x*_i = 0 for i ∉ τ ∪ F, and computing the remaining components x*_τ from the linear system

    A_τ x*_τ = b - A_F u_F.

If this solution is such that 0 <= x*_τ <= u_τ, then x* is a feasible basic solution. Furthermore, if the data is integer, then x* has only integer components. Optimality of x* can be verified by computing a lower bound on the optimal objective function value. This can be done with a strategy introduced independently in [15] and [10,17].

A tentative optimal dual solution y* (having a possibly better objective value than the current dual interior point solution y^k) can be found by orthogonally projecting y^k onto the supporting affine space of the dual face complementary to x*. In an attempt to preserve dual feasibility, we compute y* as the solution of the least squares problem

    minimize { || y - y^k ||  :  A_τ^T y = c_τ }.

Resende & Veiga [15] describe an O(m) operation procedure to compute this projection. A feasible dual solution (y*, z*, w*) is then built by adjusting the dual slacks: for each edge i, the reduced cost c_i - A_i^T y* is assigned to z*_i if it is nonnegative (with w*_i = 0), and to -w*_i otherwise (with z*_i = 0).

If c^T x* - b^T y* + u^T w* = 0, then (x*, s*) and (y*, w*, z*) are optimal primal and dual solutions, respectively. If the data is integer and 0 < c^T x* - b^T y* + u^T w* < 1, then (x*, s*) is a primal optimal (integer) solution.

To apply the second stopping procedure of [15], called the maximum flow (MF) stopping criterion, an indicator function is needed to partition the edge set into active edges and inactive edges (fixed at their upper or lower bounds). In PDNET, the indicator used is the so-called primal-dual indicator, studied by Gay [5] and El-Bakry, Tapia & Zhang [4]. Let ξ be a small tolerance. Edge i is classified as inactive at its lower bound if

    x_i^k / z_i^k < ξ.

Edge i is classified as inactive at its upper bound if

    s_i^k / w_i^k < ξ.

The remaining edges are set active. In PDNET, ξ is initially set to 10^-3 and this tolerance is tightened each time the MF test is triggered, according to ξ_new = ξ_old × Δ_ξ, where, in PDNET, Δ_ξ = 0.95.

We select a tentative optimal dual face as a maximum weighted spanning forest limited to the active edges as determined by the indicator. The edge weights used in PDNET are those of the scaling matrix Θk.

As in the PB stopping rule, we project the current dual interior solution y^k orthogonally onto this face. Once the projected dual solution y* is at hand, we attempt to find a feasible flow x* complementary to y*. A refined tentative optimal face is selected by redefining the set of active edges as those whose reduced cost with respect to y* is smaller, in absolute value, than a small tolerance ε_r (ε_r = 10^-8 in PDNET). The method attempts to build a primal feasible solution x*, complementary to the tentative dual optimal solution, by setting each inactive edge to its lower or its upper bound according to the sign of its reduced cost.

By considering only the active edges, a restricted network is built. Flow on this network must satisfy the flow balance constraints (16), i.e. the conservation equations with the right-hand side adjusted for the flow already fixed on the inactive edges, together with the capacity bounds on the active edges. Clearly, from the flow balance constraints (16), if a feasible flow for the restricted network exists, it defines, together with the flows fixed on the inactive edges, a primal feasible solution complementary to y*. A feasible flow for the restricted network can be determined by solving a maximum flow problem on an augmented network, obtained from the restricted network by adding an artificial source σ, an artificial sink π, and artificial edges incident to them. In addition, for each active edge (i,j) there is an associated capacity u_ij. The additional edges are Σ = {(σ,i) : i ∈ V^+}, where V^+ is the set of vertices with positive adjusted net flow, each such edge having capacity equal to that net flow, and Π = {(i,π) : i ∈ V^-}, where V^- is the set of vertices with negative adjusted net flow, each such edge having capacity equal to the absolute value of that net flow. It can be shown that the restricted network has a feasible flow if and only if the maximum flow from σ to π saturates all the artificial edges, in which case the restriction of a maximal flow on the augmented network to the original edges is a feasible flow for the restricted network [15]. Therefore, finding a feasible flow for the restricted network involves the solution of a maximum flow problem. Furthermore, if the data is integer, this feasible flow is integer, as we can select a maximum flow algorithm that provides an integer solution.
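
In symbols, writing E_act for the set of active edges and introducing, only for concreteness, the adjusted injections

    b'_i = b_i - Σ_{(i,j) inactive at its upper bound} u_ij + Σ_{(j,i) inactive at its upper bound} u_ji,   i ∈ V,

the restricted problem asks for a flow x with

    Σ_{(i,j) ∈ E_act} x_ij - Σ_{(j,i) ∈ E_act} x_ji = b'_i,   i ∈ V,      0 <= x_ij <= u_ij,  (i,j) ∈ E_act.

Adding a source σ with an edge (σ,i) of capacity b'_i for every vertex i with b'_i > 0, and a sink π with an edge (i,π) of capacity -b'_i for every vertex i with b'_i < 0, such a flow exists if and only if the maximum σ-π flow has value Σ_{i : b'_i > 0} b'_i, i.e. saturates every edge leaving σ.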

Since this stopping criterion involves the solution of a maximum flow problem, it should not be triggered until the interior point algorithm is near the optimal solution. The criterion is triggered at iteration k when μ_k < ε_μ occurs for the first time. The choice ε_μ = 1 used in PDNET is appropriate for the set of test problems considered here; in a more general purpose implementation, a scale invariant criterion is desirable. All subsequent iterations test this stopping rule. In PDNET, the implementation by Goldfarb & Grigoriadis [6] of Dinic's algorithm is used to solve the maximum flow problems.

3.3 Other implementation issues

To conclude this section, we make some remarks on other important implementation issues of the primal-infeasible, dual-feasible algorithm, namely the starting solution, the adjustment of parameter μk, and the primal and dual stepsizes.

Recall that the algorithm starts with any solution (x^0, s^0, y^0, w^0, z^0) satisfying conditions (17) and (18), but that it does not have to satisfy A x^0 = b. Additionally, it is desirable that the initial point also satisfy the remaining equations that define the central path (5), i.e. conditions (19) and (20), for some μ > 0. For each (i,j) ∈ E, the starting solution used in PDNET is parameterized by a scalar ν_ij, with 0 < ν_ij < 1, and by μ > 0. It is easy to verify that this starting solution satisfies (17-18) as well as (20).
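
One concrete family of starting points of this general form (given here only as an illustration; the exact choice made in PDNET is that of [12]) is, for a fixed y^0 and μ > 0,

    x^0_ij = ν_ij u_ij,    s^0_ij = (1 - ν_ij) u_ij,    z^0_ij = μ / x^0_ij,    w^0_ij = μ / s^0_ij,

so that x^0 + s^0 = u and all complementarity products equal μ by construction, while the dual feasibility condition A^T y^0 - w^0 + z^0 = c reduces, for each edge, to the scalar equation

    μ / (ν_ij u_ij) - μ / ((1 - ν_ij) u_ij) = c_ij - y^0_i + y^0_j,

whose left-hand side decreases monotonically from +∞ to -∞ as ν_ij goes from 0 to 1, so it has a unique root ν_ij in (0,1).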

Condition (19) is satisfied if, for each (i,j) ∈ E, ν_ij is chosen appropriately as a function of μ, of u_ij, and of the reduced cost of edge (i,j) computed from some initial guess y^0 of the dual vector y. In PDNET, specific default choices are made for the initial guess y^0 and for the parameter μ. The primal-dual parameter has an initial value μ_0 = β_1 μ, where in PDNET β_1 = 0.1. Subsequently, for iterations k > 1, μ_k is computed by the update rule of the algorithm in [12].

The step-size damping parameters for the primal and the dual steps are both set to 0.995 throughout the iterations, slightly more conservative than the value suggested by [9].

4. Fortran subroutines

The current implementation of PDNET consists of a collection of core subroutines and additional subsidiary modules written in Fortran [7]. The software distribution associated with this paper provides fully functional utilities by including Fortran reference implementations of the main program, default parameter settings, and additional routines for data input and output. In this section, we describe the usage of the PDNET framework, with comments on user extensions. Specific instructions for building and installing PDNET from the source code are provided with the software distribution. Table 1 lists all modules provided in the PDNET software distribution.

We adopted several design and programming style guidelines in implementing the PDNET routines in Fortran. We required that all arrays and variables be passed as subroutine arguments, avoiding the use of COMMON blocks. The resulting code is expected to compile and run without modification under most software development and computing environments.

4.1 PDNET core subroutines

The PDNET core subroutines are invoked via a single interface provided by subroutine pdnet(). Following the specifications listed in Table 2, the calling program must provide data via the input arguments and allocate the internal and output arrays appropriately, as described in Subsection 4.3 and as illustrated in file fdriver.f90.

We provide reference implementations of the PDNET main program, which also serve as guides for developing custom applications that invoke the PDNET core subroutines. In Subsection 4.2, we discuss in detail the input and output routines used in the reference implementations. Subsection 4.4 discusses the setting of parameters in PDNET. In addition, the core subroutines call an external function for maximum flow computation, which is provided in a subsidiary module whose interface is discussed in Subsection 4.5.

4.2 Data input

Programs invoking the PDNET subroutines must supply an instance of the minimum-cost flow problem. As presented in Table 3, the data structure includes the description of the underlying graph, node net flow values, arc costs, capacities, and lower bounds. All node and arc attributes are integer valued.

The reference implementations of the PDNET main program read network data from an input file by invoking functions available in the Fortran module pdnet_read.f90. These functions build the PDNET input data structures from data files in the DIMACS format [2] containing instances of minimum cost flow problems. As illustrated in the PDNET module fdriver.f90, we provide specialized interfaces for Fortran programs that use dynamic memory allocation.
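
For reference, a small minimum cost flow instance in this format looks as follows (an illustrative file, not the test.min instance used in Section 4.6): lines starting with c are comments, the p line gives the numbers of nodes and arcs, n lines give the nonzero node net flows, and each a line lists an arc as tail, head, lower bound, capacity, and cost.

    c  illustrative DIMACS minimum cost flow file
    p min 4 5
    n 1 10
    n 4 -10
    a 1 2 0 10 1
    a 1 3 0 10 2
    a 2 3 0 10 1
    a 2 4 0 10 3
    a 3 4 0 10 1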

4.3 Memory allocation

In PDNET, total memory utilization is carefully managed by allocating each individual array to a temporary vector passed as argument to internal PDNET functions. Furthermore, input and output arrays, presented in Table 4, are passed as arguments to subroutine pdnet() and must be allocated by the calling procedure.

4.4 Parameter setting

PDNET execution is controlled by a variety of parameters, selecting features of the underlying algorithm and of the interface with the calling program. A subset of these parameters is exposed to the calling procedure via a PDNET core subroutine. The remaining parameters are set at compile time, with default values assigned inside module pdnet_default.f90.

The run time parameters listed in Tables 5 and 6 are set with Fortran function pdnet_setintparm(). The integer parameters are assigned to components of vector info and double precision parameters are assigned to vector dinfo.

4.5 Maximum flow computation

PDNET includes a maximum flow computation module, called pdnet_maxflow, featuring the implementation by Goldfarb and Grigoriadis [6] of Dinic's algorithm, which is used to check the maximum flow stopping criterion. Furthermore, a modification of this module, called pdnet_feasible, is called by the module pdnet_checkfeas() after reading the network file to compute a maximum flow on the network, thereby checking for infeasibility.

4.6 Running PDNET

Module fdriver inputs the network structure and passes control to subroutine pdnet(). This subroutine starts by reading the control parameters from the vector info with pdnet_getinfo(). Correctness of the input is checked with pdnet_check(), and the data structure is created with pdnet_datastruct(). Internal parameters for the methods are set with pdnet_default(), and additional structures are created with pdnet_buildstruct(). Subroutines pdnet_transform() and pdnet_perturb() are called in order to shift the lower bounds to zero and to transform the data into double precision, respectively. Subroutines pdnet_probdata() and pdnet_checkfeas() check the problem characteristics and verify whether the network has enough capacity to transport the proposed amount of commodity. The primal-dual main loop is then started: the right-hand side of the search direction system is computed by pdnet_comprhs(); the maximum spanning tree is computed by pdnet_heap(), and its optimality is tested by pdnet_optcheck(). Under certain conditions (μ < 1), the maxflow stopping criterion is invoked by subroutine pdnet_checkmaxflow(). If required, a preconditioner switch takes place, followed by a call to pdnet_precconjgrd() to solve the Newton direction linear system with the chosen preconditioner. A summary of the iteration is printed by pdnet_printout(). Primal and dual updates are made by pdnet_updatesol(), and the stopping criteria are checked before returning to the start of the iteration loop.

We now present a usage example. Consider the problem of finding the minimum cost flow on the network represented in Figure 2. In this figure, the integers next to the nodes represent each node's produced (positive value) or consumed (negative value) net flow, and the integers next to the arcs are their unit costs. Furthermore, the capacity of each arc is bounded between 0 and 10. Figure 3 shows the DIMACS format representation of this problem, stored in file test.min, and Figures 4 and 5 show the beginning and the end of the printout produced by PDNET, respectively.





5. Computational results

A preliminary version of PDNET was tested extensively with results reported in [12]. In this section, we report on a limited experiment with the version of PDNET in the software distribution.

The experiments were done on a PC with an Intel Pentium IV processor running at 2 GHz and 2 GB of main memory. The operating system is Ubuntu Linux 7.10 (kernel 2.6.22). The code was compiled with the Intel Fortran compiler version 10.0 using the -O3 flag. CPU times in seconds were computed by calling the Fortran intrinsic cpu_time(). The test problems are instances of the classes mesh, grid and netgen_lo of minimum-cost network flow problems, taken from the First DIMACS Implementation Challenge [3]. The specifications of the mesh instances generated are presented in Table 7. The specifications used in the GRIDGEN generator to build the grid problems are displayed in Table 8. Finally, the instances of the test set netgen_lo were generated according to the guidelines stated in the computational study of Resende and Veiga [15], using the NETGEN generator with the specifications presented in Table 9. All these generators can be downloaded from the FTP site dimacs.rutgers.edu.

Instances of increasing dimension were considered; the three netgen_lo instances correspond to the three parameter settings given in Table 9. In Table 10, the number of iterations (IT) and the CPU time in seconds (CPU) of PDNET and of the CPLEX 10 network optimizer are compared. The reported results show that PDNET tends to outperform CPLEX on the larger instances of the mesh and netgen_lo sets, but fails to do so on the grid set. For test set netgen_lo the difference is quite noticeable on the largest instance: we observed that, for this problem, about 90% of the CPU time and of the iteration count of CPLEX 10 was spent computing a feasible solution. The results of this experiment, and those reported in [12], show that this code is quite competitive with CPLEX and other efficient network flow codes for large-scale problems.

6. Concluding remarks

In this paper, we describe a Fortran implementation of PDNET, a primal-infeasible dual-feasible interior point method for solving large-scale linear network flow problems. The subroutines are described, and directions for usage are given. A number of technical features of the implementation, which enable the user to control several aspects of the program execution, are also presented. Some computational experience with a number of test problems from the DIMACS collection is reported. These results illustrate the efficiency of PDNET for the solution of linear network flow problems. Source code for the software is available for download at http://www.research.att.com/~mgcr/pdnet.


Received January 2007; accepted April 2008

  • (1) Ahuja, R.K.; Magnanti, T.L. & Orlin, J.B. (1993). Network Flows. Prentice Hall, Englewood Cliffs, NJ.
  • (2) DIMACS (1991). The first DIMACS international algorithm implementation challenge: Problem definitions and specifications. World-Wide Web document.
  • (3) DIMACS (1991). The first DIMACS international algorithm implementation challenge: The benchmark experiments. Technical report, DIMACS, New Brunswick, NJ.
  • (4) El-Bakry, A.S.; Tapia, R.A. & Zhang, Y. (1994). A study on the use of indicators for identifying zero variables for interior-point methods. SIAM Review, 36, 45-72.
  • (5) Gay, D.M. (1989). Stopping tests that compute optimal solutions for interior-point linear programming algorithms. Technical report, AT&T Bell Laboratories, Murray Hill, NJ.
  • (6) Goldfarb, D. & Grigoriadis, M.D. (1988). A computational comparison of the Dinic and network simplex methods for maximum flow. Annals of Operations Research, 13, 83-123.
  • (7) International Organization for Standardization (1997). Information technology - Programming languages - Fortran - Part 1: Base language. ISO/IEC 1539-1:1997, International Organization for Standardization, Geneva, Switzerland.
  • (8) Karmarkar, N.K. & Ramakrishnan, K.G. (1991). Computational results of an interior point algorithm for large scale linear programming. Mathematical Programming, 52, 555-586.
  • (9) McShane, K.A.; Monma, C.L. & Shanno, D.F. (1989). An implementation of a primal-dual interior point method for linear programming. ORSA Journal on Computing, 1, 70-83.
  • (10) Mehrotra, S. & Ye, Y. (1993). Finding an interior point in the optimal face of linear programs. Mathematical Programming, 62, 497-516.
  • (11) Portugal, L.; Bastos, F.; Júdice, J.; Paixão, J. & Terlaky, T. (1996). An investigation of interior point algorithms for the linear transportation problem. SIAM J. Sci. Computing, 17, 1202-1223.
  • (12) Portugal, L.F.; Resende, M.G.C.; Veiga, G. & Júdice, J.J. (2000). A truncated primal-infeasible dual-feasible network interior point method. Networks, 35, 91-108.
  • (13) Prim, R.C. (1957). Shortest connection networks and some generalizations. Bell System Technical Journal, 36, 1389-1401.
  • (14) Resende, M.G.C. & Veiga, G. (1993). Computing the projection in an interior point algorithm: An experimental comparison. Investigación Operativa, 3, 81-92.
  • (15) Resende, M.G.C. & Veiga, G. (1993). An efficient implementation of a network interior point method. In: Network Flows and Matching: First DIMACS Implementation Challenge [edited by David S. Johnson and Catherine C. McGeoch], volume 12 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 299-348. American Mathematical Society.
  • (16) Resende, M.G.C. & Veiga, G. (1993). An implementation of the dual affine scaling algorithm for minimum cost flow on bipartite uncapacitated networks. SIAM Journal on Optimization, 3, 516-537.
  • (17) Ye, Y. (1992). On the finite convergence of interior-point algorithms for linear programming. Mathematical Programming, 57, 325-335.
  • (18) Yeh, Quey-Jen (1989). A reduced dual affine scaling algorithm for solving assignment and transportation problems. PhD thesis, Columbia University, New York, NY.
  • * Corresponding author