ABSTRACT Many industries have complex manufacturing lines, which complicates production planning. One possible solution to this problem is the optimization of the production line. In this work, mixed integer linear programming models are proposed that represent the production organization of the production line of a hair cosmetics factory, characterized as a hybrid flow shop problem. The proposed models are solved using CPLEX® Optimization Studio for real cases of the studied industrial production line. The objective was the minimization of the production makespan and the completion time. The innovation of this work is the application of the mixed integer linear programming method to complex cases, which are often found in industry and are usually solved by heuristic methods, simulations, or approximations of analytical methods.
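As a hedged illustration of the modeling style involved (not the paper's actual formulation), the sketch below builds a much-simplified makespan-minimization MILP for a single stage of identical parallel machines, one building block of a hybrid flow shop. The job data, machine count, and the use of the open-source PuLP modeler (instead of CPLEX®) are all assumptions for illustration.

```python
# Minimal sketch (not the paper's model): makespan minimization for a
# single stage of identical parallel machines, the building block of a
# hybrid flow shop. Uses the open-source PuLP modeler; the paper used CPLEX.
import pulp

processing_times = [4, 7, 2, 5, 6]   # hypothetical job durations
machines = range(2)                  # hypothetical machine count
jobs = range(len(processing_times))

prob = pulp.LpProblem("parallel_machine_makespan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (jobs, machines), cat="Binary")
cmax = pulp.LpVariable("Cmax", lowBound=0)

prob += cmax                          # objective: minimize the makespan
for j in jobs:                        # each job goes on exactly one machine
    prob += pulp.lpSum(x[j][m] for m in machines) == 1
for m in machines:                    # each machine's load bounds the makespan
    prob += pulp.lpSum(processing_times[j] * x[j][m] for j in jobs) <= cmax

prob.solve()
print("makespan:", pulp.value(cmax))
```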
ABSTRACT We improve the shift-scheduling process by using nonstationary queueing models to evaluate schedules and two heuristics to generate them. First, we improved the fitness function and the initial-population generation method of a benchmark genetic algorithm from the literature. We also proposed a simple local search heuristic. The improved genetic algorithm found solutions that obey the delay-probability constraint more often. The proposed local search heuristic also finds feasible solutions at a much lower computational expense, especially under low arrival rates. Unlike a genetic algorithm, the local search heuristic does not rely on random choices. Furthermore, it finds one final solution from one initial solution, rather than from a population of solutions. The heuristic works with a single well-defined goal, making it simple and straightforward to implement. Nevertheless, its code is simple enough to accept changes and cope with multiple objectives.
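A minimal, hypothetical sketch of the kind of deterministic local search described (first-improvement descent from a single initial solution) follows; the neighborhood move, feasibility rule, and staffing cost below are illustrative placeholders, not the authors' actual heuristic or its queueing-based evaluation.

```python
# Generic local-search skeleton: deterministic, single start, single goal.
# The neighborhood move and the cost evaluation are placeholders here.
def local_search(initial, neighbors, cost):
    """Descend from `initial` until no neighbor improves the cost."""
    current = initial
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):   # deterministic order, no randomness
            if cost(candidate) < cost(current):
                current = candidate
                improved = True
                break                          # first-improvement strategy
    return current

# Toy usage: minimize total staffing subject to a simple demand lower bound.
schedule0 = [3, 3, 3, 3]                       # agents per shift (hypothetical)
demand    = [2, 3, 1, 2]                       # minimum agents required

def neighbors(s):
    for i in range(len(s)):
        if s[i] > demand[i]:                   # only moves that stay feasible
            yield s[:i] + [s[i] - 1] + s[i+1:]

print(local_search(schedule0, neighbors, sum))  # -> [2, 3, 1, 2]
```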
ABSTRACT In this paper we develop a generic mixed bi-parametric barrier-penalty method, based upon generic barrier and penalty algorithms, for constrained nonlinear programming problems. When the feasible set is defined by equality and inequality functional constraints, it is possible to provide explicit barrier and penalty functions. In such a case, the continuity and differentiability properties of the constraint and objective functions can be inherited by the penalized function. The main contribution of this work is a constructive proof of the global convergence of the sequence generated by the proposed mixed method. The proof uses, separately, the main global convergence results for barrier and penalty methods. Finally, for a simple nonlinear problem, we deduce the mixed barrier-penalty function explicitly and illustrate all the functions defined in this work. We also implement MATLAB code to generate the iterates of the mixed method.
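For concreteness, one classical mixed barrier-penalty construction (logarithmic barrier on the inequalities, quadratic penalty on the equalities) is shown below; the paper's exact bi-parametric function may differ.

```latex
% One standard mixed barrier-penalty function for
%   min f(x)  s.t.  g_i(x) <= 0 (i = 1..m),  h_j(x) = 0 (j = 1..p):
\[
  Q(x;\mu,\rho) \;=\; f(x)
  \;-\; \mu \sum_{i=1}^{m} \log\bigl(-g_i(x)\bigr)
  \;+\; \frac{\rho}{2} \sum_{j=1}^{p} h_j(x)^2 ,
  \qquad \mu \to 0^{+},\; \rho \to \infty .
\]
```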
ABSTRACT In this paper, we propose a new ranked-data control chart that uses a repetitive sampling criterion to improve the detection of shifts in the process mean. For comparison purposes, the average run length (ARL) of the proposed control chart, based on repetitive extreme ranked set sampling, is computed using both exact and estimated parameters. The results show that the ARL is negatively affected by parameter estimation. Moreover, the performance of the proposed control chart is evaluated and compared with similar control charts obtained using different sampling schemes, such as simple random sampling, ranked set sampling, extreme ranked set sampling, and repetitive ranked set sampling. The results show that the ranked-data-based control chart outperforms the classical control chart in terms of the ARL.
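As a baseline illustration of how an ARL can be estimated by simulation, the sketch below computes the ARL of a plain X-bar chart under simple random sampling; it is not the proposed repetitive ranked-set chart, and the subgroup size, control limits, and shift values are assumptions.

```python
# Monte Carlo estimate of the average run length (ARL) for a basic
# X-bar chart under simple random sampling -- a baseline only, not the
# proposed repetitive ranked-set chart. shift = 0 gives the in-control ARL.
import numpy as np

def arl(shift=0.0, n=5, k=3.0, reps=2_000, rng=np.random.default_rng(1)):
    runs = []
    for _ in range(reps):
        t = 0
        while True:
            t += 1
            xbar = rng.normal(shift, 1.0 / np.sqrt(n))  # subgroup mean
            if abs(xbar) > k / np.sqrt(n):              # outside +/- k-sigma limits
                break
        runs.append(t)
    return np.mean(runs)

print(arl(shift=0.0))   # ~370 for a 3-sigma chart
print(arl(shift=1.0))   # much shorter: the shift is detected quickly
```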
ABSTRACT When analyzed from the perspective of one input and one output, the classic Data Envelopment Analysis (DEA) model (known as BCC after its developers Banker, Charnes, and Cooper) presents an efficient frontier with “downward” concavity (convex), therefore delivering variable returns to scale. However, these returns show a decrease in marginal productivity as the number of inputs increases; that is, the frontier presents decreasing global returns to scale. Both the convex frontier (DEA BCC) and the concave frontier (“upward” concavity) present average productivities that vary along the curve; thus, the local returns to scale are variable. It is claimed that the two formats are complementary, and therefore both should be covered in the literature. Thus, this article proposes an algorithm capable of modeling an efficient frontier, for one input or one output, with increasing global returns to scale, whereby an increase in input causes an increase in marginal productivity.
ABSTRACT This study analyses the importance of Brazilian ports based on the flow of non-containerized cargo in 2014, considering both national and foreign trade. For that, we study a non-traditional centrality measure, called layer centrality, which evaluates the importance of ports based on how well they are connected to influential ports. This measure was originally proposed in 2011 and applied to a simple unweighted network; herein we extend it to weighted graphs. For comparison purposes, we also apply three traditional measures, namely degree, eigenvector, and flow betweenness centralities. Our findings show that the most impactful ports are the private terminals Ponta da Madeira and Tubarão, although public ports, particularly Santos, are usually impactful for national trade. Moreover, we analysed the map of public ports and suggest a suitable location for a new public port.
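The traditional measures are available in standard libraries; the sketch below applies them to a toy weighted network. The port names and flow values are invented, layer centrality has no off-the-shelf routine and is omitted, and networkx's current-flow betweenness is used here as a stand-in for flow betweenness.

```python
# Traditional centrality measures on a toy weighted network; a rough
# stand-in for the analysis described, not the paper's actual data.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([           # hypothetical cargo flows
    ("Santos", "Tubarao", 8.0),
    ("Santos", "PontaDaMadeira", 5.0),
    ("Tubarao", "PontaDaMadeira", 3.0),
    ("Santos", "Itaqui", 1.0),
])

print(nx.degree_centrality(G))
print(nx.eigenvector_centrality(G, weight="weight"))
print(nx.current_flow_betweenness_centrality(G, weight="weight"))
```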
ABSTRACT The Half-Normal distribution has been intensively extended in recent years. A review of the literature showed that at least 10 extensions of the Half-Normal distribution were introduced between 2008 and 2016. These extensions generalize the behavior of the density and hazard functions, which are otherwise restricted to being monotonically decreasing and monotonically increasing, respectively. In this paper we propose a new extension, called the transmuted Half-Normal distribution, using the quadratic rank transmutation map introduced by Shaw & Buckley (2009). A comprehensive account of the mathematical properties of the new distribution is presented. We provide explicit expressions for the moments, moment-generating function, Shannon entropy, mean deviations, Bonferroni and Lorenz curves, order statistics, and reliability. The estimation of the parameters is implemented by the maximum likelihood method. The bias and accuracy of the estimators are assessed by Monte Carlo simulations. The proposed distribution allows us to incorporate covariates directly in the mean and consequently to quantify their influence on the average of the response variable. Experiments with two real data sets show the usefulness of the proposed distribution and its value as a good alternative to several extensions of the Half-Normal distribution for data modeling with and without covariates.
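For reference, the quadratic rank transmutation map applied to a baseline CDF G, and its specialization to the Half-Normal baseline, take the following form (the paper's parameterization may differ in notation).

```latex
% Quadratic rank transmutation map (Shaw & Buckley) applied to a
% baseline CDF G, with |\lambda| <= 1:
\[
  F(x) \;=\; (1+\lambda)\,G(x) \;-\; \lambda\,G(x)^2 .
\]
% With the Half-Normal baseline G(x) = erf(x/(\sigma*sqrt(2))), x >= 0:
\[
  F(x) \;=\; (1+\lambda)\,\operatorname{erf}\!\Bigl(\tfrac{x}{\sigma\sqrt{2}}\Bigr)
  \;-\; \lambda\,\operatorname{erf}^{2}\!\Bigl(\tfrac{x}{\sigma\sqrt{2}}\Bigr),
  \qquad x \ge 0 .
\]
```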
ABSTRACT Previous studies have shown that the mean queue length of a GI/G/1 system is significantly influenced by the skewness of inter-arrival times, but not by the skewness of service times. These results are limited because all the distributions considered in previous studies were positively skewed. To address this limitation, this paper investigates the effects of the skewness of inter-arrival and service times on the probability distribution of waiting times when a negatively skewed distribution is used to model inter-arrival and service times. Following a series of experiments on a GI/G/1 queue using discrete-event simulation, the results show that the lowest mean waiting time and the lowest variance of waiting times are attained with a combination of positive inter-arrival skewness and negative service skewness. The results also show an interesting effect of the skewness of service times on the probability of no delay in environments with a higher utilization factor.
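A minimal stand-in for such an experiment is the Lindley recursion for GI/G/1 waiting times, sketched below with illustrative (positively skewed) Gamma inputs; the actual study also uses negatively skewed distributions.

```python
# Waiting times in a GI/G/1 queue via the Lindley recursion
#   W_{n+1} = max(0, W_n + S_n - A_{n+1});
# a minimal stand-in for the discrete-event simulation described.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
arrivals = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean 1.0 inter-arrival
services = rng.gamma(shape=4.0, scale=0.2, size=n)   # mean 0.8 -> rho = 0.8

w = np.zeros(n)
for i in range(1, n):
    w[i] = max(0.0, w[i - 1] + services[i - 1] - arrivals[i])

print("mean wait:", w.mean(), "variance:", w.var(), "P(no delay):", (w == 0).mean())
```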
ABSTRACT The Flexible and Interactive Tradeoff (FITradeoff) method is a multicriteria decision-making/aiding (MCDM/A) method that uses partial information about the decision maker's (DM's) preferences in order to build a recommendation. It preserves the strong axiomatic structure of the traditional tradeoff procedure, with an interactive and flexible process that demands less cognitive effort from the DM. Although FITradeoff has already been applied to several practical decision situations, no previous study has tested the performance of the method with respect to expected theoretical benefits such as reductions in time and effort. In this context, this paper presents the results of a simulation experiment that analyzes the behavior of FITradeoff, developed by de Almeida et al. (2016), in a wide range of scenarios. Computational constraints were identified in the simulation process, such as the number of simulations for each iteration; despite the memory limitation of the software used, the total number of simulations performed was greater than what is commonly found in the literature. We investigate how the performance of FITradeoff is affected by changes in the number of criteria, the number of alternatives, and the weight pattern, allowing a deeper understanding of the method and its main features. This work thus fills a gap in the literature by analyzing the method across a wide array of scenarios through simulation, bringing a better understanding of it and helping to validate it.
ABSTRACT The objective of this study was to evaluate the chances of commitment in a group decision-making process modeled by a multicriteria method based on game theory, and to evaluate that method in relation to the players' satisfaction and sense of justice. We hypothesized that mathematical methods may favor group decision making, reflected in higher levels of sense of justice and satisfaction and, consequently, greater chances of commitment to the agreements made. By means of 75 simulations, with five volunteers each, the hypotheses of this study were confirmed: commitment in group decision making can be increased by the use of a method to support group decision making and, consequently, the sense of justice and satisfaction are greater when agreements are supported by some sort of mathematical method.
ABSTRACT Computers and Intractability: A Guide to the Theory of NP-Completeness, by Michael R. Garey and David S. Johnson, was published 40 years ago (1979). Despite its age, it is widely considered by the computational complexity community to be its most important book. NP-completeness is perhaps the single most important concept to come out of theoretical computer science. The book was written in the late 1970s, when problems solvable in polynomial time were linked to the concepts of efficient solvability and tractability, and the complexity class NP was defined to capture the concept of a good characterization. Besides his contributions to the theory of NP-completeness, David S. Johnson also made important contributions to approximation algorithms and the experimental analysis of algorithms. This paper summarizes many of Johnson's contributions to these areas and is an homage to his memory.
ABSTRACT This study considers a compositional statistical model under a Bayesian approach, using Markov Chain Monte Carlo (MCMC) simulation methods, applied to road traffic victims on federal roads of Brazil in a specified period of time. The main motivation of the present study is a database with information on the injury severity of each person involved in an accident on a federal highway in Brazil from January 2018 to April 2019, reported by the federal highway police office of Brazil. Four types of events associated with each injured person (uninjured, minor injury, serious injury, and death) are grouped for each state of Brazil in each month, characterizing compositional multivariate data. Such data require specific modeling and inference approaches that differ from the traditional use of multivariate models assuming multivariate normal distributions. The proportions of events associated with the accidents (uninjured, minor injuries, serious injuries, and deaths) are treated as a sample of vectors of proportions summing to one, together with covariates, such as pavement conditions in each state, regions of Brazil, months, and years, that may affect the severity of the injury of each person involved in an accident. From the obtained results, it is observed that the proportions of serious accidents and deaths are affected by some covariates, such as the different regions of Brazil and the years.
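One common device for such data (not necessarily the exact model used in the paper) is a log-ratio transform that maps the simplex of proportions to Euclidean space, where covariates can enter a standard regression structure; for example, the additive log-ratio transform.

```latex
% For a compositional vector p = (p_1, ..., p_D) with p_d > 0 and
% sum_d p_d = 1 (here D = 4: uninjured, minor, serious, death), the
% additive log-ratio (alr) transform maps the simplex to R^{D-1}:
\[
  \operatorname{alr}(p) \;=\;
  \Bigl( \log\tfrac{p_1}{p_D},\; \log\tfrac{p_2}{p_D},\; \dots,\;
         \log\tfrac{p_{D-1}}{p_D} \Bigr).
\]
```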
ABSTRACT This paper proposes a new ordinal method to rank alternatives with multiple criteria and decision makers (DMs). It is a group decision ordinal method called SAPEVO-M, an acronym for Simple Aggregation of Preferences Expressed by Ordinal Vectors - Group Decision Making. The SAPEVO-M method allows for the aggregation of DM preference rankings into a consensus ranking and expresses the DMs' degrees of importance in the form of a rank order. It was developed for dealing with purely ordinal criteria and is also applicable to situations in which ordinal and cardinal criteria are intermixed. A free version of the method was made available on the internet.
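SAPEVO-M's exact procedure is not reproduced here; the sketch below is only a simplified, hypothetical analogue (weighted Borda scoring) of the general idea of aggregating DM preference rankings while accounting for DM importance degrees.

```python
# Generic weighted rank aggregation (Borda-style). This is NOT the
# SAPEVO-M procedure itself, only a simplified illustrative analogue.
rankings = {                       # best-to-worst orderings per DM
    "DM1": ["A", "B", "C"],
    "DM2": ["B", "A", "C"],
    "DM3": ["A", "C", "B"],
}
dm_weight = {"DM1": 3, "DM2": 2, "DM3": 1}   # ordinal importance as weights

score = {}
for dm, order in rankings.items():
    for pos, alt in enumerate(order):        # Borda points: n-1, ..., 0
        score[alt] = score.get(alt, 0) + dm_weight[dm] * (len(order) - 1 - pos)

consensus = sorted(score, key=score.get, reverse=True)
print(consensus, score)            # -> ['A', 'B', 'C'] {'A': 10, 'B': 7, 'C': 1}
```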
ABSTRACT In this paper we address a reel allocation problem that arises in a corrugated cardboard packaging company. Given a production plan that incorporates customer orders and a set of reels in stock, the decision is how to allocate these reels to the corrugator to produce the ordered corrugated cardboard sheets, aiming to minimize the number of reels used, any partial consumption of reels, the unusable leftovers, and the number of production stoppages. A mixed-integer mathematical model is proposed, and computational tests were performed using instances generated from real data. The computational results show that the proposed mathematical model is consistent with the reality of the studied company and brings interesting insights that can be used in decision making.
ABSTRACT This article presents an analysis to determine the location of the operational base of an ambulance dispatch service by applying the barycenter calculation, using the Emergency Medical Assistance Service (SAMU) as the object of study. The research followed the quantitative modeling method, through applied research, so that the results could be used to solve the following problem: how to determine the location of an operational base according to the service history? Thus, the objective of this paper is to evaluate the applicability of Operational Research, through the barycenter calculation method, as a tool to determine the best location for the operational base. To this end, we analyzed the history of 19,711 calls made from 2010 to 2016, using the software “R-Studio” for quantitative treatment, which resulted in the proposition of a viable and reliable solution to the problem, validating the proposed method.
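The barycenter calculation itself reduces to a weighted centroid of the historical call locations; the sketch below illustrates it with invented coordinates and call counts standing in for the SAMU history.

```python
# Barycenter (weighted centroid) of historical call locations: a
# minimal sketch of the location calculation described, with made-up
# coordinates and call counts standing in for the SAMU history.
calls = [  # (latitude, longitude, number_of_calls) -- hypothetical
    (-23.55, -46.63, 120),
    (-23.60, -46.70, 80),
    (-23.50, -46.60, 50),
]

total = sum(w for _, _, w in calls)
lat = sum(la * w for la, _, w in calls) / total
lon = sum(lo * w for _, lo, w in calls) / total
print(f"operational base at ({lat:.4f}, {lon:.4f})")
```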
ABSTRACT Expert agreement is a key issue in the consensus-building process, and appropriate methods and tools are needed to support an efficient group decision-making process. This study presents a combination of the Analytic Hierarchy Process (AHP) with the Delphi method as a useful support for group decision-making processes aimed at consensus building. A new methodology that includes stability and exclusion analyses, as well as new coefficients of concordance and consistency with a statistical approach, is proposed and explored through a case study of consensus building in the group decision-making process.
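For context, two standard AHP ingredients that such a methodology builds on are the priority vector (principal eigenvector of a pairwise comparison matrix) and Saaty's consistency ratio, sketched below with a hypothetical comparison matrix.

```python
# Standard AHP ingredients: the priority vector from a pairwise
# comparison matrix (principal eigenvector) and Saaty's consistency
# ratio. The matrix below is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # pairwise comparison matrix
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)         # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("priorities:", w.round(3), "CR:", round(ci / ri, 3))
```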