Scielo RSS <![CDATA[Pesquisa Operacional]]> vol. 36 num. 3 lang. en <![CDATA[BALANCING AMBULANCE CREW WORKLOADS VIA A TIERED DISPATCH POLICY]]> ABSTRACT The mission of an Emergency Medical Services (EMS) system is to provide timely and effective treatment to anyone in need of urgent medical care throughout its jurisdiction. The default dispatch policy, widely accepted by many EMS providers, is to send the nearest ambulance to every medical emergency. However, sending the nearest ambulance is not always optimal: it often imposes heavy workloads on ambulance crews posted in high-demand zones while reducing available coverage, or it requires ambulance relocations to keep high-demand zones adequately covered. In this paper we propose a tiered dispatch policy to balance ambulance crew workloads while still meeting fast response times for priority 1 calls. We use a tabu search algorithm to determine the initial ambulance locations and a simulation model to evaluate the impact of the tiered dispatch policy on ambulance crew workloads, on coverage rates for priority 1-3 calls, and on the survival rate for out-of-hospital cardiac arrests. We present computational statistics and demonstrate the efficacy of the tiered dispatch policy using real-world data. <![CDATA[MODELING AND SOLVING A RICH VEHICLE ROUTING PROBLEM FOR THE DELIVERY OF GOODS IN URBAN AREAS]]> ABSTRACT This work addresses a vehicle routing problem that aims to represent delivery operations of large volumes of products in dense urban areas. Inspired by a case study at a drinks producer and distributor, we propose a mathematical programming model and solution approaches that take into account the costs of owned and chartered vehicles, multiple deliverymen, customer time windows, vehicle-customer compatibility, time restrictions on the circulation of large vehicles in city centers, and multiple daily trips. 
Results on instances based on real data provided by the company highlight the practical applicability of some of the proposed methods. <![CDATA[A MODEL-BASED HEURISTIC FOR THE IRREGULAR STRIP PACKING PROBLEM]]> ABSTRACT The irregular strip packing problem is a common variant of cutting and packing problems. Only a few exact methods have been proposed in the literature to solve this problem, whereas several heuristics are available. Despite the number of proposed heuristics, only a few methods that combine exact and heuristic approaches can be found in the literature. In this paper, a matheuristic is proposed to solve the irregular strip packing problem. The method has three phases, in which exact mixed integer programming models from the literature are used to solve the sub-problems. The results show that the matheuristic is less dependent on the instance size and finds equal or better solutions in 87.5% of the cases, in shorter computational times, compared with the results of other models in the literature. Furthermore, the matheuristic is faster than other heuristics from the literature. <![CDATA[A PROBABILISTIC APPROACH APPLIED TO THE CLASSIFICATION OF COURSES BY MULTIPLE EVALUATORS]]> ABSTRACT How can the perceived influence of a course on its alumni's skills be measured? This paper describes the use of CPP-TRI as a tool to address this problem. The method was applied here in the context of an M.Sc. course evaluation. Levels of impact and importance previously determined for different features provide the framework for the analysis. Classifications by different groups of evaluators are combined. Taking into account the subjectivity in the assessments, CPP-TRI treats them as realizations of random variables. The combination of the evaluations is performed by computing joint probabilities, which avoids assigning weights to evaluators. 
Interval classifications between a hostile and a benevolent limit are provided, offering the educational evaluator a deeper understanding of the results. An additional study on the classification of the features is also performed, in which a total of sixteen features are sorted. <![CDATA[USE OF CONTINUED ITERATION ON THE REDUCTION OF ITERATIONS OF THE INTERIOR POINT METHOD]]> ABSTRACT Interior point methods have been widely used to solve large-scale linear programming problems. The predictor-corrector method stands out among all variations of interior point methods due to its efficiency and fast convergence. In each iteration it is necessary to solve two linear systems to determine the predictor-corrector direction. Solving these systems is the step that requires the most processing time, and therefore it should be done efficiently. The most common approach is Cholesky factorization, which, however, demands a high computational effort in each iteration. Thus, seeking to reduce this effort, the continued iteration is proposed. This technique consists of determining a new direction through a projection of the search direction, and it was implemented in the PCx code. The computational results for medium- and large-scale problems indicate a good performance of the proposed approach in comparison with the predictor-corrector method. <![CDATA[EMPIRICAL COMPARISON OF THE MULTIDIMENSIONAL MODELS OF ITEM RESPONSE THEORY IN E-COMMERCE]]> ABSTRACT The measurement of latent traits in the organizational field, such as quality, effectiveness and learning, has been conducted in several formats using a wide variety of quantitative methods, including Item Response Theory, whose use in organizational studies has consistently increased. 
The purpose of this article is to compare the hierarchical and non-hierarchical structures of three multidimensional models of Item Response Theory, based on interface quality measurement in e-commerce sites. We compared the multiple unidimensional, compensatory multidimensional and bifactor models, and also developed and applied 75 items to a sample of 441 e-commerce websites. As a result, we present a discussion of the latent construct, quality in e-commerce, and of its multidimensional configuration used to fit and compare the three multidimensional models. <![CDATA[A NON-ARCHIMEDEAN DEA MODEL TO ASSESS GROUP COMPARISONS]]> ABSTRACT We consider the use of the non-Archimedean infinitesimal epsilon in DEA-CCR models. The application of interest is the performance measurement of the Brazilian Agricultural Research Corporation research centers. We characterize an assurance region for the non-Archimedean element and suggest a value for it. Types of DMUs are compared using fractional regression models and quasi-maximum likelihood inference. We conclude that the research centers aimed at studying specific agricultural products are dominant. The classic DEA-CCR performance measures and the solution provided by the non-Archimedean model have a Spearman correlation over 90%. <![CDATA[THE COMPOSED ZERO TRUNCATED LINDLEY-POISSON DISTRIBUTION]]> ABSTRACT In this paper, a new compounding distribution, named the zero-truncated Lindley-Poisson distribution, is introduced. Expressions for its probability density function, cumulative distribution function, survival function, failure rate function and quantiles are provided. The parameter estimates were obtained by six methods: maximum likelihood (MLE), ordinary least squares (OLS), weighted least squares (WLS), maximum product of spacings (MPS), Cramér-von Mises (CM) and Anderson-Darling (AD), and intensive simulation studies were conducted to evaluate the performance of the parameter estimation. 
Some generalizations are also proposed. An application to a real data set is given and shows that the composed zero-truncated Lindley-Poisson distribution provides a better fit than the Lindley distribution and three of its generalizations. The paper is motivated by this real-data application, and we hope the model will attract wider applicability in survival and reliability analysis. <![CDATA[DISCOVERING AND LABELLING OF TEMPORAL GRANULARITY PATTERNS IN ELECTRIC POWER DEMAND WITH A BRAZILIAN CASE STUDY]]> ABSTRACT Clustering is commonly used to group data in order to represent the behaviour of a system as accurately as possible by obtaining patterns and profiles. In this paper, clustering is applied with partitioning techniques, specifically Partitioning Around Medoids (PAM), to analyse load curves from a city in São Paulo state, south-eastern Brazil. A top-down approach in time granularity is performed to detect and label profiles which could be affected by seasonal trends and daily/hourly time blocks. Time-granularity patterns are useful to support the improvement of activities related to distribution, transmission and scheduling of energy supply. Results indicated four main patterns, which were post-processed in hourly blocks using shades of grey to help the end user understand demand thresholds according to the meaning of the dark grey, light grey and white colours. A particular load-curve behaviour, different from the classical behaviour of urban cities, was identified for the studied city.
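The PAM technique named in the final abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation: it is a simplified k-medoids iteration (alternate nearest-medoid assignment with an in-cluster medoid update), and the synthetic 24-point "daily load curves" are assumptions made purely for demonstration.

```python
import numpy as np

def pam(X, k, max_iter=100, seed=0):
    """Simplified Partitioning Around Medoids (k-medoids).

    Alternates between assigning each observation to its nearest
    medoid and re-selecting, within each cluster, the member with
    the smallest total distance to the other members.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    # Pairwise Euclidean distance matrix between curves.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        improved = False
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size == 0:
                continue
            # Candidate medoid: cluster member minimizing the
            # summed distance to all members of that cluster.
            costs = D[np.ix_(members, members)].sum(axis=0)
            best = members[np.argmin(costs)]
            if best != medoids[j]:
                medoids[j] = best
                improved = True
        if not improved:
            break
    return medoids, np.argmin(D[:, medoids], axis=1)

# Two artificial groups of hourly load curves (24 values each):
# a low-demand profile and a high-demand profile.
rng = np.random.default_rng(1)
low = 1.0 + 0.1 * rng.random((5, 24))
high = 10.0 + 0.1 * rng.random((5, 24))
X = np.vstack([low, high])
medoids, labels = pam(X, k=2)
```

With two well-separated profiles the ten curves split cleanly into two clusters, each represented by an actual observed curve (its medoid); on real data the resulting hourly blocks could then be thresholded into the grey shades described in the abstract.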