Scielo RSS <![CDATA[Pesquisa Operacional]]> http://www.scielo.br/rss.php?pid=0101-743820180001&lang=pt vol. 38 num. 1 lang. pt
<![CDATA[<strong>A GENERIC TACTICAL PLANNING MODEL TO SUPPLY A BIOREFINERY WITH BIOMASS</strong>]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100001&lng=pt&nrm=iso&tlng=pt ABSTRACT The supply chains that bring biomass to biorefineries play a critical role in biofuel production. Optimization models can help decision makers design more efficient chains and minimize the cost of the biomass delivered to the refineries. This article, based on a French national research project on biomass logistics, considers one refinery, able to process several crops and several parts of the same crop, over a one-year horizon divided into days or weeks. A network model and a data model are first developed to let the decision maker describe the supply chain structure and its data without affecting the underlying mathematical model. The latter is a mixed integer linear program which combines, for the first time, various features that are either original or tackled separately in the literature. Knowing the refinery demands, it determines the activity levels in the network (amounts harvested, baled, transported, stored, etc.) and the required equipment, in order to minimize a total cost comprising harvesting, transport, and storage costs. Numerical evaluations based on real data show that the proposed model can optimize large supply chains in reasonable running times.
<![CDATA[STEPWISE SELECTION OF VARIABLES IN DEA USING CONTRIBUTION LOADS]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100031&lng=pt&nrm=iso&tlng=pt ABSTRACT In this paper, we propose a new methodology for variable selection in Data Envelopment Analysis (DEA).
The methodology is based on an internal measure which evaluates the contribution of each variable to the calculation of the efficiency scores of the DMUs. To apply the proposed method, an algorithm known as “ADEA” was developed and implemented in R. Step by step, the algorithm maximizes the load of the variable (input or output) which contributes least to the calculation of the efficiency scores, redistributing the weights of the variables without altering the efficiency scores of the DMUs. Once the weights have been redistributed, if the lowest contribution does not reach a previously given critical value, a variable with minimum contribution is removed from the model and the DEA is solved again. The algorithm stops either when all variables reach a given contribution load or when no more variables can be removed. In this way, and contrary to usual practice, the algorithm provides a clear stopping rule. In both cases, the efficiencies obtained from the DEA can be considered suitable and correctly interpreted in terms of the remaining variables, together with their contribution loads; moreover, the algorithm provides a sequence of alternative nested models - potential solutions - that can be evaluated according to an external criterion. To illustrate the procedure, we applied the proposed methodology to obtain a research ranking of Spanish public universities. In this case, at each step of the algorithm, the critical value is obtained from a simulation study.
<![CDATA[PERFORMANCE COMPARISON OF SCENARIO-GENERATION METHODS APPLIED TO A STOCHASTIC OPTIMIZATION ASSET-LIABILITY MANAGEMENT MODEL]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100053&lng=pt&nrm=iso&tlng=pt ABSTRACT In this paper, we provide an empirical discussion of the differences among some scenario tree-generation approaches for stochastic programming. We consider the classical Monte Carlo sampling and Moment matching methods.
Moreover, we test the Resampled average approximation, which is an adaptation of Monte Carlo sampling, and Monte Carlo with a naive allocation strategy as the benchmark. We test the empirical effects of each approach on the stability of the problem objective function and initial portfolio allocation, using a multistage stochastic chance-constrained asset-liability management (ALM) model as the application. The Moment matching and Resampled average approximation methods are more stable than the other two strategies.
<![CDATA[RESTAURANT RESERVATION MANAGEMENT CONSIDERING TABLE COMBINATION]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100073&lng=pt&nrm=iso&tlng=pt ABSTRACT This paper presents a case study of table reservation practice for a restaurant business within Walt Disney World. A unique feature is the consideration of table combination to capture revenue potential from different party sizes at different time periods. For example, a large party can be served by combining two or more small tables. A mixed integer programming (MIP) model is developed to make the reservation recommendation. We propose a rolling-horizon reservation policy in which the value of a particular table is periodically evaluated and updated. This is a typical revenue management method in the airline and other industries, the essence of which is to compare the future expected revenue with a currently offered price. Using historical data, numerical tests show a significant revenue improvement potential from our proposed model.
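The abstract above describes table combination only informally. As a minimal sketch of the feasibility question behind it - which tables to merge for a given party - here is a brute-force illustration in Python; the table names, capacities, and the minimum-waste objective are invented for illustration, and brute force stands in for the paper's MIP model.

```python
from itertools import combinations

def best_combination(tables, party_size, max_tables=3):
    """Pick the table combination that seats the party with the fewest
    spare seats, trying singles first, then pairs, up to `max_tables`.
    `tables` maps a table id to its seat capacity; returns None if no
    combination can seat the party."""
    best, best_waste = None, None
    for r in range(1, max_tables + 1):
        for combo in combinations(tables, r):
            capacity = sum(tables[t] for t in combo)
            waste = capacity - party_size
            if waste >= 0 and (best_waste is None or waste < best_waste):
                best, best_waste = combo, waste
    return best

# Hypothetical floor layout: two 2-tops, a 4-top, and a 6-top.
tables = {"T1": 2, "T2": 2, "T3": 4, "T4": 6}
print(best_combination(tables, 7))  # combines the 2-seat and 6-seat tables
```

A real reservation system would additionally restrict which tables are physically adjacent and weigh each combination by the expected revenue it displaces, as in the rolling-horizon policy described above.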
<![CDATA[DETERMINING PRODUCTION AND INVENTORY PARAMETERS: AN INTEGRATED SIMULATION AND MAVT APPROACH WITH TRADEOFF ELICITATION]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100087&lng=pt&nrm=iso&tlng=pt ABSTRACT This study puts forward a multi-attribute decision model that determines good production and inventory parameter settings in make-to-stock (MTS) environments controlled by the Conwip (constant work-in-process) method. The model uses discrete-event simulation to evaluate the performance of the system with respect to work-in-process and finished goods inventory. Based on Multi-Attribute Value Theory (MAVT), a compromise solution is found by taking into account the decision-maker’s preferences and tradeoff judgments regarding the attributes of cycle time, throughput rate, holding cost, and stockout cost. A numerical application, which includes a sensitivity analysis, is given to illustrate the use of the proposed multi-attribute model.
<![CDATA[A PROBABILISTIC APPROACH TO THE INEQUALITY ADJUSTMENT OF THE HUMAN DEVELOPMENT INDEX]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100099&lng=pt&nrm=iso&tlng=pt ABSTRACT The Composition of Probabilistic Preferences is here applied to combine eight indicators of human development. The four components of the HDI are considered together with indices of inequality in longevity, schooling, and income, and the UNDP Gender Development Index. To compose the probabilities of preference by the different criteria, a pessimistic and conservative point of view is taken. Under this point of view, the global score of each option employs the probability of not presenting the lowest evaluation according to any criterion, which increases the importance assigned in the computations to proximity to the frontier of worst performances. Alternative rankings of the 188 countries ranked in the 2015 and 2016 Human Development Reports are obtained.
Capacities allowing for interactions among up to eight components are employed. Representative profiles of ordered classes are derived from the rankings. Classifications of the countries into previously determined classes are also performed.
<![CDATA[DEVELOPMENT OF A TECHNOLOGICAL INDEX FOR THE ASSESSMENT OF THE BEEF PRODUCTION SYSTEMS OF THE VERMELHO RIVER BASIN IN GOIÁS, BRAZIL]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100117&lng=pt&nrm=iso&tlng=pt ABSTRACT This study analyzed the productive strategies and technology of beef producers in the Vermelho river basin in Goiás, Brazil. The data, obtained using questionnaires, were used to develop a technological index applicable to the local beef production systems. A set of 60 properties was selected to provide a representative sample of the relief and soil quality within the study area. The data were analyzed using multiple correspondence analysis, cluster analysis, and beta regression procedures. The variables that most contributed to the definition of the technological level were identified. The variables and the production units each formed three clusters, corresponding to three levels of technology: low, mid, and high. The data were used to calculate a predictive index for the analysis and mapping of the technology used in the study area. High cattle densities were found in systems with low technology, indicating low productivity and profitability, and reduced environmental sustainability.
<![CDATA[MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100135&lng=pt&nrm=iso&tlng=pt ABSTRACT In this paper we optimize the preparation time of a tubes machine. The tubes are rigid tubes made by gluing strips of paper that come packed in paper reels, and some reels may be reused between the production of one tube and another.
We present a mathematical model for minimizing reel changes and movements, together with implementations, in the WxDev C++ IDE (integrated development environment), of the Nearest Neighbor heuristic, an improved nearest neighbor (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration. The results obtained by simulation improve on the solution currently used by the company.
<![CDATA[APPLYING THE TODIM FUZZY METHOD TO THE VALUATION OF BRAZILIAN BANKS]]> http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382018000100153&lng=pt&nrm=iso&tlng=pt ABSTRACT We propose the use of multicriteria decision analysis for the valuation of Brazilian banks. In order to model uncertainties, we combine multicriteria decision analysis with fuzzy mathematics. Specifically, we modify the Tomada de Decisão Interativa Multicritério (TODIM) method to incorporate fuzzy numbers, resulting in a methodology we call TODIM Fuzzy. A six-year data set of audited financial statements is used to illustrate the methodology in analyzing the six largest banks (by net worth) operating in Brazil. Several sensitivity analyses are conducted to illustrate how small changes in the inputs can alter the results obtained with the original TODIM and with TODIM Fuzzy. The numerical illustrations display consistent and robust results when the scores of the six banks are computed and the institutions are compared among themselves.
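For readers unfamiliar with TODIM, the method scores each alternative by summing prospect-theory-style dominance degrees over all rivals and criteria, with losses attenuated by a factor theta. The sketch below implements the classic crisp TODIM - not the fuzzy extension developed in the paper - and the alternatives, weights, and benefit-criterion values are invented for illustration.

```python
import math

def todim_scores(X, weights, theta=1.0):
    """Classic (crisp) TODIM. X[i][c] is the value of alternative i on
    benefit criterion c; `weights` are the criterion weights; `theta`
    attenuates losses. Returns global scores normalized to [0, 1]."""
    w_ref = max(weights)
    wr = [w / w_ref for w in weights]   # weights relative to the reference criterion
    sw = sum(wr)
    m, n = len(X), len(weights)

    def phi(i, j, c):
        # Partial dominance of i over j on criterion c: a gain grows with
        # sqrt of the weighted difference; a loss is amplified and divided by theta.
        d = X[i][c] - X[j][c]
        if d > 0:
            return math.sqrt(wr[c] * d / sw)
        if d == 0:
            return 0.0
        return -math.sqrt(sw * (-d) / wr[c]) / theta

    delta = [sum(phi(i, j, c) for j in range(m) for c in range(n))
             for i in range(m)]
    lo, hi = min(delta), max(delta)
    return [(d - lo) / (hi - lo) for d in delta]

# Three hypothetical banks scored on two normalized benefit criteria.
scores = todim_scores([[0.8, 0.7], [0.5, 0.9], [0.3, 0.2]], [0.6, 0.4])
print([round(s, 3) for s in scores])
```

The fuzzy variant proposed in the paper replaces the crisp values in X with fuzzy numbers and adapts the gain/loss comparisons accordingly; the dominance-summing skeleton stays the same.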