SciELO RSS <![CDATA[Pesquisa Operacional]]> vol. 35 num. 1

<![CDATA[A GRASP ALGORITHM FOR THE CONTAINER LOADING PROBLEM WITH MULTI-DROP CONSTRAINTS]]>
This paper studies a variant of the container loading problem in which, in addition to the classical geometric constraints of packing problems, other conditions arising in practical problems are imposed: the multi-drop constraints. When multi-drop constraints are added, the relevant boxes must be available, without rearranging others, when each drop-off point is reached. We first present a review of the different types of multi-drop constraints that appear in the literature. We then propose a GRASP algorithm that handles the different types of multi-drop constraints and also includes other realistic constraints, such as full support of the boxes and load-bearing strength. The computational results validate the proposed algorithm, which outperforms the existing procedures dealing with multi-drop conditions and is also able to obtain good results for more standard versions of the container loading problem.

<![CDATA[DAILY SCHEDULING OF SMALL HYDRO POWER PLANTS DISPATCH WITH MODIFIED PARTICLE SWARM OPTIMIZATION]]>
This paper presents a new approach for short-term hydro power scheduling of reservoirs using an algorithm based on Particle Swarm Optimization (PSO). PSO is a population-based algorithm designed to find good solutions to optimization problems; its characteristics have encouraged its adoption for a variety of problems in different fields. In this paper the authors consider an optimization problem related to the daily scheduling of small hydro power dispatch. The goal is to construct a feasible solution that maximizes the cascade electricity production while respecting the environmental constraints and the water balance. The paper proposes an improved Particle Swarm Optimization (PSO) algorithm, which takes advantage of the method's simplicity and ease of implementation.
The algorithm was successfully applied to the optimization of the daily scheduling strategies of small hydro power plants, considering maximum water utilization and all constraints related to simultaneous water uses. Extensive computational tests and comparisons with other heuristic methods showed the effectiveness of the proposed approach.

<![CDATA[CREDIT SCORING MODELING WITH STATE-DEPENDENT SAMPLE SELECTION: A COMPARISON STUDY WITH THE USUAL LOGISTIC MODELING]]>
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model, a logistic regression with state-dependent sample selection model and a bounded logistic regression model via a large simulation study. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian retail bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity among the naive logistic regression models, the logistic regression with state-dependent sample selection models and the bounded logistic regression models. However, there is a difference between the distributions of the estimated default probabilities from these three statistical modeling techniques, with the naive logistic regression models and the bounded logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples, which are common in practice.
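The predictive-quality measures named in the credit scoring abstract above (sensitivity, specificity, accuracy) are all derived from a binary confusion matrix. A minimal sketch, using made-up labels rather than any data from the paper:

```python
def classification_measures(actual, predicted):
    """Compute sensitivity, specificity and accuracy from binary labels
    (1 = default, 0 = non-default in a credit scoring setting)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),        # true positive rate
        "specificity": tn / (tn + fp),        # true negative rate
        "accuracy": (tp + tn) / len(actual),  # overall hit rate
    }

# Illustrative labels only (not from the paper's bank portfolio).
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
m = classification_measures(actual, predicted)  # sensitivity 0.75, accuracy 0.7
```

Note that with balanced samples these measures can agree across models even when the estimated default probabilities differ in distribution, which is exactly the contrast the abstract reports.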
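The velocity and position update at the core of standard PSO, the method adapted in the hydro scheduling paper above, can be sketched as follows. This is a generic textbook implementation, not the paper's modified algorithm; the objective, bounds and parameter values are illustrative assumptions:

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal PSO maximizer; returns the best position and value found."""
    random.seed(42)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # classic update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective with its maximum (0) at the origin.
best, best_val = pso(lambda x: -sum(xi * xi for xi in x), dim=2)
```

In the scheduling application, `objective` would be the cascade electricity production and infeasible positions (violating environmental or water-balance constraints) would be repaired or penalized, which the clipping step above only hints at.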
<![CDATA[A NEW LOOK AT THE BOWL PHENOMENON]]>
An interesting empirical result in the assembly line literature states that slightly unbalanced assembly lines (in the format of a bowl, with central stations less loaded than the external ones) present higher throughputs than perfectly balanced ones. This effect is known as the bowl phenomenon. In this study, we analyze the presence of this phenomenon in assembly lines with integer task times. For this purpose, we modify existing models for the simple assembly line balancing problem and the assembly line worker assignment and balancing problem in order to generate configurations exhibiting the desired format. These configurations are implemented in a stochastic simulation model, which is run for a large set of recently introduced instances. The results are analyzed, and the findings indicate, for the first time, the existence of the bowl phenomenon in a large set of configurations (corresponding to the wide range of instances tested) and also the possibility of reproducing the phenomenon in lines with a heterogeneous workforce.

<![CDATA[ARTIFICIAL NEURAL NETWORK AND WAVELET DECOMPOSITION IN THE FORECAST OF GLOBAL HORIZONTAL SOLAR RADIATION]]>
This paper proposes a method (denoted WD-ANN) that combines Artificial Neural Networks (ANN) and Wavelet Decomposition (WD) to generate short-term forecasts of global horizontal solar radiation, which is essential information for evaluating the electrical power generated from the conversion of solar energy into electrical energy. The WD-ANN method consists of two basic steps: first, a level-p decomposition of the time series of interest is performed, generating p + 1 orthonormal wavelet components; second, the p + 1 orthonormal wavelet components (generated in step 1) are fed simultaneously into an ANN in order to generate the short-term forecasts.
The results showed that the proposed method (WD-ANN) substantially improved performance over the (traditional) ANN method.

<![CDATA[SPECIAL LOTTERY DRAWINGS - ANALYSIS OF AN UNCONVENTIONAL INVESTMENT OPPORTUNITY]]>
Quina Loto is one of the most popular lottery games in Brazil. Prizes are paid as a percentage of each drawing's revenues. After deductions and taxes, 34% of the revenues are destined for the payment of prizes on each drawing: 29% is divided among the winners and 5% is saved to contribute to the major prize of the special drawing held annually on June 24th. Due to the low expected return on investment, lotteries are widely regarded as a bad investment decision. Experience, however, shows that the special drawing might be an exception. In this paper we provide a thorough analysis of a theoretical investment in the special drawing of 2013, considering players' behavior, earnings from lesser prizes, the effect of one's own investment on the jackpot, the probabilities of sharing the prizes, and outcome-covering methods. Finally, we compare our conclusions against the result of the lottery on June 24th, 2013.

<![CDATA[A NONLINEAR FEASIBILITY PROBLEM HEURISTIC]]>
In this work we consider a region S ⊂ ℝⁿ given by a finite number of nonlinear smooth convex inequalities and having nonempty interior. We assume that a point x0 is given which is close, in a certain norm, to the analytic center of S, and that a new nonlinear smooth convex inequality is added to those defining S (the perturbed region). It is constructively shown how to obtain a shift of the right-hand side of this inequality such that the point x0 is still close (in the same norm) to the analytic center of the shifted region. Starting from this point and using the theoretical results shown, we develop a heuristic that allows us to obtain an approximate analytic center of the perturbed region. We then present a procedure to solve the nonlinear feasibility problem.
The procedure was implemented, and we performed some numerical tests for the (random) quadratic case.

<![CDATA[ON HYPOTHESIS TESTS FOR COVARIANCE MATRICES UNDER MULTIVARIATE NORMALITY]]>
In this paper we propose a new statistical test for the covariance matrix of one population under the assumption of multivariate normality. In general, the proposed test and the likelihood-ratio test resulted in larger estimated powers than VMAX for the bivariate and trivariate cases. VMAX was not sensitive to general changes in the covariance (correlation) structure. The advantage of the new test is that it is based on the comparison of all elements of the covariance matrix postulated under the null hypothesis with their respective maximum likelihood sample estimates; therefore, it does not compress the information of the covariance matrix into a single scalar, such as the determinant or the trace. Because it is based on the maximum likelihood estimates and the Fisher information matrix, it can also be used for data coming from distributions other than the multivariate normal.

<![CDATA[URBAN SOLID WASTE MANAGEMENT BY PROCESS MAPPING AND SIMULATION]]>
Adequate management of urban solid waste (USW) is beneficial to the environment, to cities and to the people who depend on the income generated from collecting this waste material to survive. Due to the complexity of the processes involved in sorting recyclable USW, management tools that enable systemic studies become valuable assets for evaluating the effects that changes may have on the organization. This paper presents a modeling study, using process mapping and discrete-event simulation, in which a USW collection process was evaluated. The study was conducted in conjunction with the USW trash gatherers' association of Itajubá, state of Minas Gerais, Brazil. The lack of standardized processes made developing the simulation model difficult.
To mitigate this problem, three conceptual modeling techniques were used (SIPOC, flowcharts and IDEF-SIM). Data were collected via observation, interviews and questionnaires. The validated conceptual model obtained in this study can be used to better understand the process. Finally, the model was used to evaluate process improvement scenarios.

<![CDATA[A BIVARIATE GENERALIZED EXPONENTIAL DISTRIBUTION DERIVED FROM COPULA FUNCTIONS IN THE PRESENCE OF CENSORED DATA AND COVARIATES]]>
In this paper, we introduce a Bayesian analysis for a bivariate generalized exponential distribution derived from copula functions in the presence of censored data and covariates. The generalized exponential distribution can be a good alternative for analyzing lifetime data in comparison to the usual parametric lifetime distributions, such as the Weibull or Gamma distributions. We use standard existing MCMC (Markov Chain Monte Carlo) methods to simulate samples from the joint posterior of interest. Two examples are introduced to illustrate the proposed methodology: one with simulated bivariate lifetime data and one with a real lifetime data set.

<![CDATA[ANALYZING PERCEPTIONS ABOUT THE INFLUENCE OF A MASTER COURSE OVER THE PROFESSIONAL SKILLS OF ITS ALUMNI: A MULTICRITERIA APPROACH]]>
This work proposes and applies an approach to evaluate perceptions about the effects of a Professional Master's Degree in Engineering on the professional performance of its alumni. The proposal is based on a trichotomous outranking multicriteria method called ELECTRE TRI (Mousseau et al., 2000). The proposal was applied by capturing the perceptions of students and professors of a Brazilian Professional Master's course in Production Engineering, as well as of these students' supervisors and of coordinators of similar Professional Master's courses. The set of criteria was based on a literature review and refined by the opinions of specialists.
The participants of the survey were asked about their perceptions of the degree of importance of each criterion and of the level of influence of the course on the alumni's skills, for each criterion considered in the survey. The results obtained from each group were compared. Additionally, a sensitivity analysis was carried out on the results.
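The category-assignment idea behind ELECTRE TRI, the method used in the last abstract above, can be illustrated with a heavily simplified, concordance-only sketch: an alternative is assigned to the highest category whose lower boundary profile it outranks. Real ELECTRE TRI also uses indifference/preference thresholds, discordance and a veto mechanism; the weights, profiles and cutting level below are illustrative assumptions, not values from the study:

```python
def concordance(alternative, profile, weights):
    """Fraction of criterion weight supporting 'alternative outranks profile'
    (all criteria assumed to be maximized; no thresholds)."""
    total = sum(weights)
    agree = sum(w for a, b, w in zip(alternative, profile, weights) if a >= b)
    return agree / total

def assign_category(alternative, profiles, weights, cut=0.6):
    """Assign to the highest category whose lower boundary profile the
    alternative outranks; profiles are ordered from worst to best."""
    category = 0
    for k, profile in enumerate(profiles, start=1):
        if concordance(alternative, profile, weights) >= cut:
            category = k
    return category

# Two boundary profiles define three ordered categories: 0, 1, 2
# (e.g. low / medium / high influence of the course on a skill).
profiles = [[4, 4, 4], [7, 7, 7]]
weights = [0.5, 0.3, 0.2]
cat = assign_category([8, 6, 9], profiles, weights)  # outranks both profiles -> 2
```

Here [8, 6, 9] reaches concordance 1.0 against the lower profile and 0.7 against the upper one, both at or above the 0.6 cutting level, so it lands in the top category.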