
A practical guide for operational validation of discrete simulation models

Fabiano LealI; Rafael Florêncio da Silva CostaI; José Arnaldo Barra MontevechiI; Dagoberto Alves de AlmeidaI; Fernando Augusto Silva MarinsII,*

IFederal University of Itajubá, Institute of Production Engineering and Management

IISão Paulo State University, School of Engineering, Guaratinguetá Campus. *Corresponding author. E-mail: fmarins@feg.unesp.br

ABSTRACT

As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of the simulation practitioners. By analyzing the current scientific literature, it is observed that the operational validation description presented in many papers does not agree on the importance designated to this process and about its applied techniques, subjective or objective. With the expectation of orienting professionals, researchers and students in simulation, this article aims to elaborate a practical guide through the compilation of statistical techniques in the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated by using two study objects, which represent two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.

Keywords: discrete event simulation, operational validation, practical guide.

1 INTRODUCTION

Law (2006) claims that the use of simulation models can replace experimentation that would occur directly in real systems, whether they exist or are still in the design phase. In these systems, nonsimulated experiments may not be economically viable, or may even be impossible to perform. According to Chwif & Medina (2007), a simulation model is able to capture complex and important characteristics of real systems, which allows it to reproduce, in a computer, the behavior that such systems would present when subjected to the same boundary conditions. The simulation model is used, particularly, as an instrument to answer questions like "what if...?".

The literature presents a number of works that have applied discrete events simulation in several areas. It is possible to cite works such as those by Garza-Reyes et al. (2010); Kouskouras & Georgiou (2007); Kumar & Sridharan (2007); Leal (2003); Montevechi et al. (2007); Mosqueda et al. (2009); Raja & Rao (2007); Van Volsem et al. (2007). These works present, in different forms and different levels of detail, the process of validating models.

Harrel et al. (2004) state that, by nature, the process of building a model is prone to error, since the modeler must translate the real system into a conceptual model which, in turn, must be translated into a computational model. In this sense, verification and validation (V&V) processes have been developed in order to minimize these errors and make models useful for the purposes for which they were created.

Validation, as defined by Law (2006), is the process for determining whether the simulation model is an accurate enough representation of the system, for a particular purpose of the study. According to the author, a valid model can be used for decision making, and the ease or difficulty of the validation process depends on the complexity of the system being modeled.

When referring to the validation of simulation, two types can be considered: conceptual model validation and validation of the computational model (also known as operational validation). The validation of the conceptual model will be presented in this article's bibliographic review, but will not be the focus of the study. The validation of the computational model, the focus of this study, aims to ensure that, within the limits of the model and with certain reliability, the model can be used for the experiments that characterize the scenario being analyzed. This model is called operational.

Especially in the case of validation, it is common to publish papers in which this stage is poorly detailed, or even omitted. This paper will discuss this question.

Some questions are answered in this research. They are:

1. Does the literature regarding the subject show divergences in the form used for operational validation of models of discrete event simulation? What methods are being used in operational validation?

2. Is it possible to structure a guide of operational validation applied to discrete event simulation models?

The first question proposed is answered by analyzing the literature, as will be seen in the following item.

The second question posed in this article is answered by presenting a practical guide for operational validation of simulation models, tested on two objects of study. These objects of study correspond to two productive cells from different companies. In one of the presented models, there was a confirmatory experiment requested by the industrial director of the company analyzed, thus improving the acceptance of the model by the managers of the simulated system.

This research aims, therefore, to elaborate a guide through the compilation of statistical techniques for operational validation of discrete simulation models. It is worth mentioning that this guide is applicable to cases in which historical data of the real system are available to those responsible for the simulation project. Besides the benefit in assisting researchers in modeling and simulation areas, this guide will have a relevant teaching application by leading the reader in the operational validation according to some characteristics of the model.

It is important to highlight that this paper does not intend to defend that the tests organized in the practical guide will ensure that the model is absolutely valid. Box et al. (2009) even claimed that all models are wrong, but some models are useful. Authors such as Robinson (2007) and Balci (1997) defend the idea that one cannot fully validate a model. What is possible, according to these authors, is to increase the credibility of this model for those who must make the decisions.

This article is structured into five sections. Section 2 presents the concepts associated with the validation of discrete simulation models, as well as a brief description of how some papers in the literature performed operational validation. In Section 3 some techniques of operational validation are presented and discussed and in Section 4 the practical guide elaborated and its application are presented through two objects of study. Section 5 presents the conclusions of the work.

2 VALIDATION OF DISCRETE SIMULATION MODELS

Authors such as Chwif (1999) and Montevechi et al. (2007) present a sequence of steps for a simulation project, countering the false idea that simulation consists only of the model's computer programming. Figure 1, proposed by Montevechi et al. (2007), presents the steps of a simulation project, highlighting the generation of the conceptual, computational and operational models.


The positioning of verification and validation activities, before the experimental phase, can be observed in Figure 1. These activities might ensure that the model actually represents the real system, within the limits that were set for the model. Although Figure 1 shows the activities of verification and validation at specific points, these actions occur throughout the simulation project. The actions taken by those responsible for building the model are directly related to V&V activities, from designing the conceptual model to planning scenarios to be tested. The positioning of these activities in Figure 1 aims to highlight moments of the project in which the verification and validation should be properly discussed and documented.

The flowchart in Figure 1 highlights the presence of three models that characterize their respective stages. The conceptual model must be validated before being converted into a computational model. Once validated, the computational model is said to be operational (operational model or experimental model), and is able to support the experiments. The procedures chosen for the definition and execution of experiments (e.g. DOE) must have proper validation mechanisms (residual analysis). Thus, the validated computational model is not separated here into operational or experimental model, since in practice they represent the same model.

The conceptual model, properly validated, is used in the construction, verification and validation of the computational model. The use of the conceptual model in the verification and validation of the computational model is made by observing the flows of entry and exit, the conversion points of entities, the resources and the controls used, etc. The use of the conceptual model for these purposes can be found in works such as Leal et al. (2008) and Leal et al. (2009).

The use of the conceptual model can assist in face-to-face validation (shown in Section 3), along with the use of animation of the simulation. However, the validation of the computational model does not occur only with the use of the conceptual model. Statistical techniques should also be applied in order to show that the computational model, within its domain of applicability, behaves with satisfactory accuracy and consistency with the objectives of the study.

According to Chwif & Medina (2007), two important characteristics of validation should always be taken into consideration:

a) there is no way to 100% validate a model or to ensure it is 100% valid; and

b) there is no guarantee that a model is totally free of bugs. Thus, although the model can be verified for a given circumstance, there is no guarantee that it will work as intended for all circumstances.

The issue of validation involves not only the technical aspect. A complete overview of the process of modeling and simulation also involves art and science (Kleijnen, 1995). In respect to the model, authors such as Harrel & Tumay (1997) emphasize that it should be valid in the sense of satisfactorily representing reality and only including elements that influence the problem to be solved.

Balci (2003) made some considerations about the process of model verification and validation. According to the author, the results of verification and validation should not be considered as a binary variable, where the model being simulated is absolutely correct or absolutely incorrect. A simulation model is built in compliance with the objectives of the modeling and simulation projects, and its credibility is judged according to these goals. The author also writes that, the validation of simulation models, as well as verification, is a difficult process that requires creativity and insight.

Some validation techniques are more subjective, using the experience of specialists in the modeled system to validate the simulation model. However, this type of validation hardly ensures the results obtained for simulated experiments. Whenever possible, statistical analysis of the data obtained from the simulation and of the historical (real) data of the simulated system is necessary, thus making it an operational model.

Operational validation is understood as validation of the computational model (Fig. 1), defining it as suitable for the carrying out of experiments, within the limits set in the simulation project's objectives. This differentiation is important, as another validation is carried out in the conceptual model in a previous step to obtain the computational model, as shown in Figure 1.

It is possible to observe in the literature articles that use discrete simulation models and only mention the operational validation techniques used, with no further details of the validation procedures (Garza-Reyes et al., 2010; Mosqueda et al., 2009). Other articles carried out experiments and optimizations by using simulation models without providing information about the operational validation process (Ahmed & Alkhamis, 2009; Ekren & Ornek, 2008; Ekren et al., 2010; Meade et al., 2006; Sandanayake et al., 2008).

Authors such as Bekker & Viviers (2008), Longo (2010), Nazzal et al. (2006) and Kouskouras & Georgiou (2007) validated operational models by using more subjective techniques, such as face-to-face validation, the Turing test and animation, available in some simulation software packages. Section 3 of this article will discuss some of these techniques.

Another form of operational validation is the comparison of simulation and real system results, using the average and standard deviation of the results (Potter et al., 2007).

Finally, there are authors (Abdulmalek & Rajgopal, 2007; Lam & Lau, 2004; Jordan et al., 2009; Mahfouz et al., 2010) who carry out operational validation also through some form of statistical test, such as building confidence intervals.

Given this diversity of validation forms used by the authors in the area, this article aims to develop a practical guide for operational validation in a more objective way. Some of the techniques mentioned above will be presented in the following item.

3 OPERATIONAL VALIDATION TECHNIQUES

According to Banks et al. (2005), validation can be performed through a series of tests, some of them subjective and others objective. According to these authors, subjective tests often involve people who know some or many aspects of the system, making judgments about the model and its outputs. The so-called objective tests always require data about the system's behavior as well as corresponding data produced by the model, in order to compare some aspect of the system with the same aspect of the model, through one or more statistical tests.

Although this article focuses on the statistical validation of the computational model, it is not intended to devalue other validation techniques. Kleijnen (1995) states in his article that analysts and users of a simulation model must be convinced of its validity, not only by statistical techniques, but also by the application of other procedures, such as the use of animation to assist face-to-face validation or the use of Turing tests. As such, according to the same author, simulation will continue to be both an art and a science.

Some techniques for validation of models are presented by Sargent (2009). The author attributes these techniques to computational model validation. It is noteworthy that some of these techniques are more subjective and others more objective, according to the statement of Banks et al. (2005), presented at the beginning of this item.

• Animation: the operational performance of the model is graphically shown as the model evolves over time;

• Validation by events: the events of the simulation model are compared to those of the real system, in order to determine similarities;

• Face-to-face validation: individuals who know the system are asked if the model behavior is reasonable;

• Internal validity: different replicas of a stochastic model are made to determine the amount of variability of the model. A large variability can lead to questioning of the results;

• Operational Graphics: values of several performance measures, such as the queue size and the percentage of busy servers, are shown graphically as the model evolves over time;

• Sensitivity analysis: this technique consists of changing values of inputs and internal parameters of a model to determine the effect on the behavior and output of the model. The same relationships should occur between model and real system;

• Predictive validation: the model is used to predict the behavior of the system; these predictions are then compared with the behavior of the system;

• Turing test: consists in presenting data generated by the real system and by the simulated model, randomly, to individuals who know the operations of the system being modeled. These individuals are then asked whether they can distinguish the outputs of the simulated model from those of the actual system;

• Validation by historical data: if there are historical data (data collected in the system), part of the data is used to build the model and the rest of the data is used to determine whether the model behaves like the real system.

In the literature concerning statistical procedures, authors such as Chung (2004) highlight some tests, such as the F test. In his opinion, only one version of the test is required for simulation practitioners. This version is represented by equation (1):

$$F = \frac{s_{\max}^{2}}{s_{\min}^{2}} \qquad (1)$$

In Equation 1:

• $s_{\max}^{2}$ is the variance of the data set of highest variance;

• $s_{\min}^{2}$ is the variance of the data set of lowest variance.

In this case, we have the following Null Hypothesis: the variances of both sets of data (real system and model) are similar. The Alternative Hypothesis points to no similarity between the variances of the two sets. It is necessary to select the significance level α and to determine the critical value for F. After calculating the value of F by equation (1), the Null Hypothesis is rejected if this value exceeds the critical value (taken from a table of the F-Snedecor distribution).
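As an illustration only (the studies in this article used Minitab®), the sketch below performs this F-test in Python with numpy and scipy; the real and model arrays are hypothetical samples, not data from the paper.

    import numpy as np
    from scipy.stats import f

    # Hypothetical output samples (e.g. parts produced per day)
    real = np.array([102.0, 98.0, 105.0, 99.0, 101.0, 97.0, 103.0, 100.0])
    model = np.array([100.0, 101.0, 99.0, 102.0, 98.0, 100.0, 103.0, 99.0])

    alpha = 0.05  # significance level

    # Equation (1): the set with the highest variance goes in the numerator
    num, den = (real, model) if real.var(ddof=1) >= model.var(ddof=1) else (model, real)
    F = num.var(ddof=1) / den.var(ddof=1)

    # Critical value from the F-Snedecor distribution
    F_crit = f.ppf(1 - alpha, len(num) - 1, len(den) - 1)

    # H0 (similar variances) is rejected if F exceeds the critical value
    print("reject H0" if F > F_crit else "fail to reject H0: variances similar")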

As for the t-test, it is used when the data are normal (can be represented by a normal distribution). Another requirement for using this test is the condition of similarity of the variances between the sets. This test will determine if there is a statistically significant difference between the averages of the sets analyzed, given a significance level α. The t-test has the following Null Hypothesis: the averages of both sets are equal. The Alternative Hypothesis points to the non-equality between the averages of the two sets.
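Under the same assumptions (hypothetical samples, scipy available), this t-test for similar variances reduces to a single call:

    import numpy as np
    from scipy.stats import ttest_ind

    real = np.array([102.0, 98.0, 105.0, 99.0, 101.0, 97.0, 103.0, 100.0])
    model = np.array([100.0, 101.0, 99.0, 102.0, 98.0, 100.0, 103.0, 99.0])

    # H0: the averages of the two sets are equal (variances assumed similar)
    t_stat, p_value = ttest_ind(real, model, equal_var=True)
    print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
    # A p-value above the significance level means H0 is not rejected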

The Smith-Satterthwaite method is used for cases in which data of the system and the model are normal, but with different variance. This test considers the differences in variances by adjusting the degrees of freedom for a critical t value (taken from a table of t-Student distribution).

The application of this method begins by calculating the number of degrees of freedom, using equation (2):

$$df = \frac{\left(s_{1}^{2}/n_{1} + s_{2}^{2}/n_{2}\right)^{2}}{\dfrac{\left(s_{1}^{2}/n_{1}\right)^{2}}{n_{1}-1} + \dfrac{\left(s_{2}^{2}/n_{2}\right)^{2}}{n_{2}-1}} \qquad (2)$$

where $df$ represents the number of degrees of freedom, $s_{1}^{2}$ the variance of the first set, $s_{2}^{2}$ the variance of the second set, $n_{1}$ the number of elements of the first set and $n_{2}$ the number of elements of the second set.

Once the number of degrees of freedom is determined, a t-test is executed. It is important to highlight that, until this point, the data sets evaluated were considered parametric. For Montgomery & Runger (2003), parametric methods are traditionally procedures based on a particular parametric family of distributions, which in this case is normal.
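Returning to the Smith-Satterthwaite case, the adjustment can be sketched as follows (hypothetical samples; passing equal_var=False to scipy's ttest_ind applies the same Welch/Smith-Satterthwaite correction automatically):

    import numpy as np
    from scipy.stats import ttest_ind

    # Hypothetical samples with visibly different spreads
    real = np.array([92.0, 118.0, 105.0, 99.0, 111.0, 87.0, 103.0, 120.0])
    model = np.array([100.0, 101.0, 99.0, 102.0, 98.0, 100.0, 103.0, 99.0])

    s1, s2 = real.var(ddof=1), model.var(ddof=1)
    n1, n2 = len(real), len(model)

    # Smith-Satterthwaite degrees of freedom, equation (2)
    df = (s1 / n1 + s2 / n2) ** 2 / (
        (s1 / n1) ** 2 / (n1 - 1) + (s2 / n2) ** 2 / (n2 - 1)
    )
    print(f"adjusted degrees of freedom: {df:.1f}")

    # equal_var=False performs the t-test with this adjustment
    t_stat, p_value = ttest_ind(real, model, equal_var=False)
    print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")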

According to Montgomery & Runger (2003), when the distribution under study is not characterized as normal, non-parametric methods become very useful. Thus, in situations in which both sets of data, or either one of them, are not considered normal, a non-parametric test is used, such as the Mann-Whitney U test. In nonparametric methods, no assumption is made about the population distribution, except that it is continuous.

In the Mann-Whitney U test, given two independent samples, the null hypothesis tested is whether the medians of the two samples are equal. It is said, then, that this test is the nonparametric alternative to the t-test for difference of means.
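A minimal sketch of this nonparametric route, assuming scipy and hypothetical samples for which normality was rejected:

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Hypothetical samples (one clearly non-normal outlier in the real set)
    real = np.array([102.0, 98.0, 131.0, 99.0, 101.0, 97.0, 103.0, 100.0])
    model = np.array([100.0, 101.0, 99.0, 102.0, 98.0, 100.0, 103.0, 99.0])

    # H0: the medians of the two samples are equal
    u_stat, p_value = mannwhitneyu(real, model, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p-value = {p_value:.3f}")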

The choice of parametric or nonparametric methods is preceded by the assumption that the distribution is continuous. However, some response variables may be characterized as discrete. In this case, authors like Bisgaard & Fuller (1994), Lewis et al. (2000) and Montgomery (2001) recommend performing a transformation to stabilize the variance before applying statistical techniques for the operational validation. According to these researchers, when counts are used as an experimental response, the assumption of constant variance is violated. These authors indicate, in their works, transformation functions appropriate for each case.
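For instance, for a Poisson-like count response, the square-root function suggested by Bisgaard & Fuller (1994) could be applied as in the sketch below (hypothetical data; the transformed sets then enter the normality, F and t tests of the guide):

    import numpy as np

    # Hypothetical daily counts (a discrete, Poisson-like response)
    real_counts = np.array([12, 9, 15, 11, 10, 14, 13, 8])
    model_counts = np.array([11, 10, 13, 12, 9, 14, 10, 11])

    # Square root stabilizes the variance of Poisson counts
    real_t = np.sqrt(real_counts)
    model_t = np.sqrt(model_counts)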

Those responsible for a simulation project may be confronted with the situation in which the simulation model was not statistically validated, but was accepted by subjective techniques. In this case, it is recommended to investigate the causes of this statistical non-validation. According to Chung (2004), there are some possibilities that may explain this fact:

a) non-stationary system - in some systems, it may be necessary to collect the input data and validate them at different times. If the distributions of input data differ over time, the process is considered non-stationary. Systems can be non-stationary due to seasonal or cyclical changes. Two approaches can be used: one is the incorporation of the non-stationary components of the system in the model, such as the inclusion of input data distributions that vary over time; the other is data collection and validation at the same point in time. The second approach provides a model valid only for the conditions under which the input data were obtained;

b) poor input data - data in insufficient quantity and/or precision. In the case of using historical data, the situation in reality may have changed, and

c) invalid assumptions - assumptions that are over-simplified or even poorly modeled.

4 DEVELOPMENT AND IMPLEMENTATION OF THE PROPOSED GUIDE

Through analysis of literature, especially in the areas of applied statistics and discrete event simulation, a guide was elaborated in the form of a diagram, with the purpose of guiding those interested in simulation projects in the stage of operational validation of the model. This guide is illustrated in Figure 2.

The first step of this guide suggests that participants of the simulation project organize the information obtained from the simulation model and from the real system into two sets. This information comes from an output variable, such as: total pieces produced, lead time, number of people in line, waiting time, etc. The choice of the variable (or variables) is directly related to the objectives of the project.

Based on this step, the guide suggests a sequence of statistical tests, with some decisions to make concerning the nature of the distribution (discrete or continuous), the normality (or non-normality) of the data, and the equality (or inequality) of variances. These tests and decisions are scattered throughout the literature, as presented in Section 3. The objective of this guide is to organize these tests and to assist researchers, students and professionals in the stage of operational validation of the simulation model.
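To make this sequence concrete, the sketch below chains the guide's decisions in Python with scipy. It is a free interpretation, not code from the paper: the function name operational_validation is hypothetical, and the Shapiro-Wilk test stands in for the Anderson-Darling test used later in this article, since scipy's anderson routine does not return a p-value.

    import numpy as np
    from scipy.stats import shapiro, f, ttest_ind, mannwhitneyu

    def operational_validation(real, model, alpha=0.05, discrete=False):
        """Return the p-value of the final comparison test of the guide."""
        real = np.asarray(real, dtype=float)
        model = np.asarray(model, dtype=float)

        # Steps 2-3: stabilize the variance of a discrete (count) response
        if discrete:
            real, model = np.sqrt(real), np.sqrt(model)

        # Step 4: normality test on both sets
        _, p_real = shapiro(real)
        _, p_model = shapiro(model)

        # Step 5: nonparametric route (H0: equal medians)
        if min(p_real, p_model) <= alpha:
            _, p = mannwhitneyu(real, model, alternative="two-sided")
            return p

        # Steps 6-7: F test for equality of variances, equation (1)
        num, den = (real, model) if real.var(ddof=1) >= model.var(ddof=1) else (model, real)
        F = num.var(ddof=1) / den.var(ddof=1)
        equal_var = F <= f.ppf(1 - alpha, len(num) - 1, len(den) - 1)

        # Steps 8-9: t test for equal means, with the Smith-Satterthwaite
        # adjustment when the variances are not equal
        _, p = ttest_ind(real, model, equal_var=equal_var)
        return p

A p-value above the chosen significance level at the end of this flow is what, in the two objects of study below, supports the statistical validation of the model.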

To illustrate the applicability of the guide, two objects of study are presented. The first object of study is intended to illustrate the operational validation in which results of the simulation model and the real system have similar variances.

This object of study is a manufacturing cell of a multinational company in the auto parts sector, responsible for the first processing of the product. The cell has six machines, two ovens for heat treatment and nine employees divided into three production shifts, with three employees per shift. Each of these employees not only operates two machines or two ovens, but is also responsible for transporting raw materials and products into and out of the cell. These employees also inspect the goods and prepare each machine.

Initially, a conceptual model was built to represent the flow of cell production process and to help in building the computational model. After the validation of the conceptual model by experts, it was converted into a computational model using Promodel® simulator. Figure 3 shows the screen of the computational model generated after building 16 successive versions of the model. Building in versions allows more efficient model verification (Chwif & Medina, 2007; Banks, 1998; Kleijnen, 1995).


To accomplish the first step of the guide shown in Figure 2 (preparation of "real system" and "model" sets), the total of parts produced per day was selected as the simulation model's output variable. Thus, the model was run for sixteen days of production, with ten replicates each day. The value for each day of the output variable was determined using the average from the ten replicates. The same amount of output data from the simulated model was extracted from the real production history. The data of the simulation results and historical production are presented in Table 1.


Having defined the sets, the first decision-making represented in this guide is analyzed. This decision-making is step number two of the guide (Fig. 2).

The result obtained in the simulation model (parts produced per day) could be interpreted as a count in an interval of time, characterizing a typical case of Poisson distribution (Montgomery & Runger, 2003). Therefore, this distribution would be characterized as discrete.

However, when performing a test of adherence to a normal distribution, both sets (real and simulated) were accepted as fitting this distribution. As shown in Figure 4, the data can be represented by a normal distribution, since the p-value was higher than 0.05 (significance level). This can be explained as follows: replicas were made and their average was used for validation. This fact is also supported by the central limit theorem, which states that, whenever a random experiment is replicated, a random variable equal to the average result of the replicas tends to have a normal distribution as the number of replicas becomes large (Montgomery & Runger, 2003). Thus, step number two of the guide points to a continuous distribution.
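This effect of averaging replicates can be illustrated numerically; the sketch below (entirely hypothetical data, assuming numpy and scipy) draws ten Poisson replicates for each of sixteen "days" and tests the normality of the daily averages:

    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(seed=2011)

    # Hypothetical: 16 "days", each summarized by the mean of 10 replicates
    replicates = rng.poisson(lam=100, size=(16, 10))
    daily_avg = replicates.mean(axis=1)

    # By the central limit theorem the averages tend toward normality,
    # so the normality test typically yields a high p-value
    _, p_value = shapiro(daily_avg)
    print(f"normality p-value on the averages: {p_value:.3f}")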


If the number of replicas were small, the nature of the distribution (number of parts produced per day) could be considered discrete, indicating step number three of the guide. This step suggests a transformation function, aiming to stabilize the variance. This issue was presented in Section 3 of this article. In the case of a Poisson distribution, Bisgaard & Fuller (1994) suggest the square root of the data as the transformation function.

It is not within the scope of this paper to discuss transformation functions. As such, the following reading is suggested: Bisgaard & Fuller (1994); Lewis et al. (2000) and Montgomery (2001).

After verifying that simulated and real sets represent continuous distributions, or after the transformation function (steps 2 and 3 of the guide), step number 4 is performed. In this step, there is a test of adherence for normal distribution.

In this first object of study, data from two sets could be represented by a normal distribution, according to the normality test performed (Anderson-Darling), shown in Figure 4. If one or both sets of data do not characterize a normal distribution, we use a nonparametric test (step number five of the guide) as presented in Section 3 of this article. One of these tests is the Mann-Whitney U test. In this, the objective is to test the null hypothesis, which is that the two sets have equal medians. The operational validation of the model, in the case of sets of non-parametric data, ends in step 5. Further details can be found in Montgomery & Runger (2003).

Since, in the first object of study, the sets followed normal distributions, an F-test was performed, as indicated in step number 6 of the guide. The purpose of the F-test is to test the hypothesis that both data sets (real and simulated) have equal variances. Figure 5 shows the result of the F-test on the first object of study.


It was verified, according to Figure 5, that both data sets (real and simulated) have equal variances (p-value of 0.443). From this observation (step 7 of the guide), it is possible to perform a t test for two independent samples (step 9 of the guide), which tests the hypothesis that there is no statistical difference between the two sets of data analyzed (real and simulated).

After the t-test, it is possible to affirm that the computational model of the real system is statistically validated, since the p-value is 0.141. The computational model is thus able to support experiments on the analyzed output variable. After this, the operational model of the manufacturing cell under study is obtained.

To illustrate the use of step 8, we present the second object of study. In the second application of the operational validation guide, it is intended to illustrate the operational validation in which results of the simulation model and the real system do not have similar variances.

The second study object is a cell for assembling optical transponders from Padtec S/A, a Brazilian technological company. Transponders mounted by this cell represent about 40% of the company's sales revenue, while the second product in revenue represents 20%. This cell has specific optical equipment for the testing of the transponders, work benches and computers. Two employees perform activities of the production process, allocated on a single production shift.

Similar to the first object of study, the conceptual model was built and validated by the engineers responsible for production. Then, this conceptual model was implemented in the Promodel® simulator. Twelve models were built, in increasing order of complexity, until a model suitable for the operational validation process was achieved. Figure 6 shows the screen of the computational model, built with the Promodel® software.


Just as with the first object of study, the computational model of this object of study was built in stages, thus facilitating the verification. Moreover, deterministic values were initially simulated in order to ensure that the logic of the model was correct. The debugging tool of Promodel® was used to indicate possible programming errors. Counters were also placed along the model for spot checks.

After verification, face-to-face validation was performed. By means of graphic animations, experts were able to evaluate the system's behavior. In this more subjective test, the computational model was validated.

After this face-to-face validation, the guide of operational validation was applied. To this end, the model was run for thirteen weeks, with ten replicas each. The weekly amount of parts produced was measured by the average of these values (considering the ten replicas). At the same time, values for total transponders mounted per week in the real system were obtained from the company's ERP system for the same period. Table 2 shows these results.

After completing the first step of the guide, an adherence test for the normal distribution was carried out, and it was found that the data can be fitted by this distribution. As shown in Figure 7, a finding of continuous and normal distribution leads the analysis to step 6 of the guide.


Thus, step 6 of the guide was carried out. It was found, through the F-test, that both data sets (real and simulated) do not have equal variances, with a p-value lower than 0.05. This information does not invalidate the model. It allows for decision making in step 7 of the guide, leading the validation to step 8.

Step 8 can be accomplished either in algebraic form, as indicated in this article, or via software. Minitab®, for example, allows the user to select the option of non-equality of variances before the t-test.

It is noteworthy that this result was discussed with the process engineers, and the great variability observed in the production history is consistent with the production strategy adopted by the company in question: there are weeks with a large volume of transponders mounted and other weeks with a low volume. In a given week, for example, there may have been a late arrival of imported materials, which may then have arrived early in the following week, thus resulting in a large production. As the simulation model is a simplification of reality, it cannot incorporate this great variability.

Immediately after step 8 (applied to the case of non-equal variances), one can then perform a t-test for two independent samples, which tests the equality of averages for simulated and real data, taking into account the non-equality of variances. As the test showed a p-value equal to 0.533 (greater than 0.05), the hypothesis of the simulation model's validity is accepted.

After this test, it is possible to state that the computer model is statistically validated. There is, then, the operational or experimental model of the manufacturing cell under study. The validity of this simulation model was confirmed by the Padtec team of industrial engineering after running an experimental confirmation of the real system. This experimental confirmation was requested by the cell's team of experts, since this team expressed interest in the system reaching the real results identified by the simulation model for a given scenario.

This experiment consisted in eliminating an activity of the manufacturing cell's production flow (just like it was done in a scenario of the simulation model). This scenario was chosen because the simulation model showed a 40% increase in monthly production of the cell by eliminating this activity. This confirmation experiment revealed a real monthly production very close to that predicted by the simulation model. Experiments of this nature increase the credibility of the model for those who will actually make the decisions with the help of the model.

5 CONCLUSIONS

This paper presented two research questions. The first asked which forms of operational validation of discrete simulation models are found in the literature. Through a summarized literature review in items 2 and 3 of this article, it was possible to verify that, in the scientific literature (specifically in the area of modeling and simulation), there are articles that use more subjective forms of operational validation, while others use more objective forms (statistical techniques). Moreover, among these articles, some (as shown) provided no details of the operational validation. This survey underlines the need for a practical guide for operational validation.

The second question presented in this paper suggests the creation of a practical guide for operational validation. This guide was created and presented in item 4 of this article. This item also presented the application of the guide in two objects of study. Each step of the guide was presented and discussed within the situations encountered on the objects of study.

Although the techniques used in the guide already exist in the literature, they are scattered throughout publications in the field of statistics. The sequencing of these techniques, according to the decisions to be made by those responsible for the modeling, composes this practical guide. The expected effect of the creation of this guide is not restricted to aiding simulation projects in companies, but extends to scholarly works. Several academic studies have been supervised by the authors of this paper using the guide. From these supervisions, it is possible to conclude that there is a great increase in interest, on the part of students (undergraduate and graduate), in ensuring the operational validation of their models in a documented way in their works.

The proposed use of the operational validation guide does not suggest the elimination of subjective techniques for validating the computational model. As presented in this article, techniques such as face-to-face validation and Turing test are used in articles published in journals and congresses. The combination of these subjective techniques with the use of the guide here presented may improve this difficult process of operational validation, considered by some authors (cited in this article) as a mixture of art and science.

ACKNOWLEDGEMENTS

The authors acknowledge the support from FAPEMIG and CNPQ, the covenant between UNIFEI and UNESP, and Padtec.

Received July 2009 / Accepted August 2010

REFERENCES

  • [1] ABDULMALEK FA & RAJGOPAL J. 2007. Analyzing the benefits of lean manufacturing and value stream mapping via simulation: A process sector case study. International Journal of Production Economics, 107: 223-236.
  • [2] AHMED MA & ALKHAMIS TM. 2009. Simulation optimization for an emergency department healthcare unit in Kuwait. European Journal of Operational Research, 198: 936-942.
  • [3] BALCI O. 1997. Verification, validation and accreditation of simulation models. In: Proceedings of the 1997 Winter Simulation Conference, Atlanta, GA, USA.
  • [4] BALCI O. 2003. Verification, validation, and certification of modeling and simulation applications. In: Proceedings of the 2003 Winter Simulation Conference, New Orleans, Louisiana, USA.
  • [5] BANKS J. 1998. Handbook of simulation: Principles, Methodology, Advances, Applications, and Practice. Ed. John Wiley & Sons, Inc., 864p.
  • [6] BANKS J, CARSON JS, NELSON BL & NICOL DM. 2005. Discrete-event Simulation. 4 ed. Upper Saddle River, NJ: Prentice-Hall.
  • [7] BEKKER J & VIVIERS L. 2008. Using computer simulation to determine operations policies for a mechanised car park. Simulation Modelling Practice and Theory, 16: 613-625.
  • [8] BISGAARD S & FULLER H. 1994. Analysis of Factorial Experiments with defects or defectives as the response. Report n. 119, Center for Quality and Productivity Improvement University of Wisconsin.
  • [9] BOX GEP, LUCEÑO A & PANIAGUA-QUIÑONES MC. 2009. Statistical control by monitoring and adjustment. 2 ed. New Jersey: John Wiley & Sons.
  • [10] CHUNG CA. 2004. Simulation Modeling Handbook: a practical approach. Washington D.C: CRC press, 608p.
  • [11] CHWIF L. 1999. Redução de modelos de simulação de eventos discretos na sua concepção: uma abordagem causal. 1999. 151 f. Tese (Doutorado em Engenharia Mecânica) - Escola Politécnica, Universidade de São Paulo, São Paulo.
  • [12] CHWIF L & MEDINA AC. 2007. Modelagem e Simulação de Eventos Discretos: Teoria e Aplicações. São Paulo: Ed. dos Autores, 254p.
  • [13] EKREN BY & ORNEK AM. 2008. A simulation based experimental design to analyze factors affecting production flow time. Simulation Modelling Practice and Theory, 16: 278-293.
  • [14] EKREN BY, HERAGU SS, KRISHNAMURTHY A & MALMBORG CJ. 2010. Simulation based experimental design to identify factors affecting performance of AVS/RS. Computers & Industrial Engineering, 58: 175-185.
  • [15] GARZA-REYES JA, ELDRIDGE S, BARBER KD & SORIANO-MEIER H. 2010. Overall equipment effectiveness (OEE) and process capability (PC) measures: a relationship analysis. International Journal of Quality & Reliability Management, 27(1): 48-62.
  • [16] HARREL C, GHOSH BK & BOWDEN RO. 2004. Simulation Using Promodel. 2 ed. New York: McGraw-Hill.
  • [17] HARREL C & TUMAY K. 1997. Simulation Made Easy. Engineering & Management press, 311p.
  • [18] JORDAN JD, MELOUK SH & FAAS PD. 2009. Analyzing production modifications of a c-130 engine repair facility using simulation. In: Proceedings of the Winter Simulation Conference, Austin, Texas, USA.
  • [19] KLEIJNEN JPC. 1995. Theory and Methodology: Verification and validation of simulation models. European Journal of Operational Research, 82: 145-162.
  • [20] KOUSKOURAS KG & GEORGIOU AC. 2007. A discrete event simulation model in the case of managing a software project. European Journal of Operational Research, 181: 374-389.
  • [21] KUMAR S & SRIDHARAN R. 2007. Simulation modeling and analysis of tool sharing and part scheduling decisions in single-stage multimachine flexible manufacturing systems. Robotics and Computer-Integrated Manufacturing, 23: 361-370.
  • [22] LAM K & LAU RSM. 2004. A simulation approach to restructuring call centers. Business Process Management Journal, 10(4): 481-494.
  • [23] LAW AM. 2006. How to build valid and credible simulation models. In: Proceedings of the 2006 Winter Simulation Conference, Monterey, CA, USA.
  • [24] LEAL F. 2003. Um diagnóstico do processo de atendimento a clientes em uma agência bancária através de mapeamento do processo e simulação computacional. 224 f. Dissertação (Mestrado em Engenharia de Produção) - Universidade Federal de Itajubá, Itajubá, MG.
  • [25] LEAL F, ALMEIDA DA DE & MONTEVECHI JAB. 2008. Uma Proposta de Técnica de Modelagem Conceitual para a Simulação através de elementos do IDEF. In: Anais do XL Simpósio Brasileiro de Pesquisa Operacional, João Pessoa, PB.
  • [26] LEAL F, OLIVEIRA MLM DE, ALMEIDA DA DE & MONTEVECHI JAB. 2009. Desenvolvimento e aplicação de uma técnica de modelagem conceitual de processos em projetos de simulação: o IDEF-SIM. In: Anais do XXIX Encontro Nacional de Engenharia de Produção, Salvador, BA.
  • [27] LEWIS SL, MONTGOMERY DC & MYERS RH. 2000. The analysis of designed experiments with non-normal responses. Quality Engineering, 12(2): 225-243.
  • [28] LONGO F. 2010. Design and integration of the containers inspection activities in the container terminal operations. International Journal of Production Economics, article in press.
  • [29] MAHFOUZ A, HASSAN SA & ARISHA A. 2010. Practical simulation application: Evaluation of process control parameters in Twisted-Pair Cables manufacturing system. Simulation Modelling Practice and Theory, article in press.
  • [30] MEADE DJ, KUMAR S & HOUSHYAR A. 2006. Financial analysis of a theoretical lean manufacturing implementation using hybrid simulation modeling. Journal of Manufacturing Systems, 2(25): 137-152.
  • [31] MONTEVECHI JAB, PINHO AF DE, LEAL F & MARINS FAS. 2007. Application of design of experiments on the simulation of a process in an automotive industry. In: Proceedings of the 2007 Winter Simulation Conference, Washington, DC, USA.
  • [32] MONTGOMERY DC. 2001. Design and analysis of experiments. 5 ed. New York: John Wiley & Sons.
  • [33] MONTGOMERY DC & RUNGER GC. 2003. Estatística Aplicada e Probabilidade para Engenheiros. 2 ed. Editora LTC, 476p.
  • [34] MOSQUEDA MRP, TOLLNER EW, BOYHAN GE, LI C & McCLENDON RW. 2009. Simulating onion packinghouse product flow for performance evaluation and education. Biosystems Engineering, 102: 135-142.
  • [35] NAZZAL D, MOLLAGHASEMI M & ANDERSON D. 2006. A simulation-based evaluation of the cost of cycle time reduction in Agere Systems wafer fabrication facility - a case study. International Journal of Production Economics, 100: 300-313.
  • [36] POTTER A, YANG B & LALWANI C. 2007. A simulation study of despatch bay performance in the steel processing industry. European Journal of Operational Research, 179: 567-578.
  • [37] RAJA R & RAO KS. 2007. Performance evaluation through simulation modeling in a cotton spinning system. Simulation Modelling Practice and Theory, 15: 1163-1172.
  • [38] ROBINSON S. 2007. A statistical process control approach to selecting a warm-up period for a discrete-event simulation. European Journal of Operational Research, 176: 332-346.
  • [39] SANDANAYAKE YG, ODUOZA CF & PROVERBS DG. 2008. A systematic modelling and simulation approach for JIT performance optimisation. Robotics and Computer-Integrated Manufacturing, 24: 735-743.
  • [40] SARGENT RG. 2009. Verification and validation of simulation models. In: Proceedings of the Winter Simulation Conference, Austin, Texas, USA.
  • [41] VAN VOLSEM S, DULLAERT W & VAN LANDEGHEM H. 2007. An Evolutionary Algorithm and discrete event simulation for optimizing inspection strategies for multi-stage processes. European Journal of Operational Research, 179: 621-633.