
DESIGN OF EXPERIMENTS USED IN COMPUTER TRIALS: A SUPPORTIVE METHOD FOR PRODUCT DEVELOPMENT

ABSTRACT

The study proposes the development of a framework that exploits Design and Analysis of Computer Experiments (DACE) more efficiently. The need for rapid product development cycles creates demand for tools that aid decision making. One such tool is Computer Aided Engineering (CAE), which relies on the Finite Element Method and can require long execution times. The philosophy of Design of Experiments and the construction of empirical models have therefore been used together with physical simulations. After a theoretical review, two cases were applied to product development and the results were analyzed. A theoretical framework based on the experience gained from these empirical studies is proposed. Although many studies explore the mathematical and statistical aspects of the subject, there is a lack of studies on the practical and operational issues involved in applying DACE.

Keywords:
simulation; design of experiments; metamodel

1 INTRODUCTION

This article discusses the theme "Design of Experiments (DoE)" when applied together with physical simulation. In this scenario, DoE is called DACE, which stands for Design and Analysis of Computer Experiments. In the literature, DoE is defined as a combination of treatments that allows building relationships between the effects of a set of controlled factors, called independent variables, on one or more measured variables, called dependent variables (Box et al., 1978; Montgomery, 2012). In product development, the relationships between product systems and subsystems can be established using such mathematical expressions.

Sometimes, the cost of this process (in terms of physical prototype builds and test setup) is very large. An alternative for minimizing costs is to use computational physics simulations, also known as virtual prototyping or Computer Aided Engineering (CAE) (Stinstra & den Hertog, 2008; Stinstra, 2006; Sheng et al., 2015). Although it provides clear advantages in terms of time and cost, virtual prototyping can also require considerable time to generate a response (Bates et al., 2006). An approach to solving this problem is the use of meta-models. A meta-model is an interpolator, obtained from an experimental plan, that emulates the physical behavior of the product simulated by virtual prototyping. The advantage is a much shorter product development time.

However, there are differences between classic DoE and DACE (Kleijnen, 2005), the main one being the absence of random variation between two replicate runs of one treatment: two simulations performed with the factors at the same levels produce the same response values. With regard to objectives, DACE seeks a basic understanding of the system and does not test hypotheses about factors, unlike traditional DoE, which emphasizes hypothesis testing. The goal in these cases is to seek robust configurations of the factors rather than optimal settings. Computational experiments can easily have more than 10 input variables and a large number of dependent variables, which may be prohibitive in traditional DoE. Another difference is the complexity of the relationship between the input and output variables; while classic DoE typically yields low-order polynomials, computational experiments (DACE) can result in non-linear relationships.

With regard to the literature review, most studies are concerned with the mathematical properties of DACE and of the interpolating models fitted to the data after the experiment (Beers & Kleijnen, 2004; Fang et al., 2000; Kamiński, 2015; Pronzato & Müller, 2012; Sacks et al., 1989; Simpson et al., 1997; Stinstra & den Hertog, 2008; Stinstra et al., 2003). This work aims to contribute to the development of the operational aspects of implementing metamodeling strategies in the context of DACE.

This paper has five sections. Section 1 presents the introduction. Section 2 presents the theoretical foundations of this work. Section 3 presents two cases of DACE application in the manufacturing industry. Based on these cases, Section 4 proposes a framework for experimentation in virtual environments. Finally, Section 5 presents the conclusions.

2 DESIGN AND ANALYSIS OF COMPUTER EXPERIMENTS: STRATEGY AND METAMODEL BUILDING

2.1 One-shot and sequential strategy

According to Shen et al. (2018), the significance of computer experiments in scientific and technological development has been recognized and well studied in recent years. Yang & El-Haik (2008) show that the use of DoE as a technique for building empirical models represents significant progress in the product development process, ensuring the robustness of the developed product and the efficiency of the process as a whole. Because computer simulations can be time-consuming, computer experiments are conducted to produce surrogate models. Unfortunately, their application is not widespread among companies around the world (Araujo et al., 1996; Arvidsson et al., 2003; Fujita & Matsuo, 2005). One reason for this limited usage is the obligation to build multiple physical prototypes with features specifically selected for each treatment.

The approach to overcoming this difficulty is to use computational physics simulations, known as virtual prototyping, defined as a computer code that connects the input variables to the output variables, thus producing approximate numerical solutions of the equations that govern the physical phenomena. In these cases, the meta-model nomenclature is usual (Huang et al., 2007). The metamodel refers to the interpolated models fitted over experimental data from several runs of computer experiments. The goal is to build empirical models emulating the behavior of the original code, thereby allowing the study of a large number of scenarios within a shorter time.

However, according to Shen et al. (2018), variables in many computer experiments have physical meanings according to the background scenario and obey dimensional constraints. Even though the derived emulator fits the data well, it may generate predictions that do not satisfy the physical constraints when interpolating and extrapolating.

Using classical DoE in computational experiments as a sampling plan is not the best option, as it can result in low prediction efficiency owing to lack of fit relative to the DACE approach (Bates et al., 2006; Beers & Kleijnen, 2004; Jin et al., 2001; Kleijnen, 2005; Lehman et al., 2004; Sacks et al., 1989; Simpson et al., 1997, 1998; Welch et al., 1992). For this reason, specific experimental designs have been developed for the computing environment. In the existing literature (Rosenbaum & Schulz, 2012; Yang & Xue, 2015), two main strategies are proposed to deal with the particularities of computational experiments. The first, known as one-shot, involves running a single experiment and fitting a single metamodel that represents the entire sample space. In this approach, it is important for the experimental design to explore all regions, because no information is known about the functional form of the relationship between the response and the x-factors.

In the one-shot strategy, three entities are defined (Kleijnen & Sargent, 2000): 1) the problem, 2) the simulation model, and 3) the metamodel (as seen in Figure 1).

Figure 1:
Components of the "one-shot" strategy for metamodel building (Kleijnen & Sargent, 2000).

With these three elements well defined, 10 steps must be performed serially to build a reliable metamodel. These 10 steps comprise the standard framework for the use of the one-shot strategy (Kleijnen & Sargent, 2000). The steps are as follows: 1) determine the purpose of the metamodel; 2) identify the inputs and their characteristics (noise or signal factors); 3) define the design region, where the X levels, N and M, are specified; 4) identify the outputs and their characteristics; 5) specify the accuracy required of the metamodel; 6) specify the metrics used to determine the validity of the metamodel; 7) specify the structure and form of the metamodel; 8) specify the experimental design to be used; 9) fit the metamodel; and 10) determine the validity of the metamodel using the metric defined in step 6.
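A minimal sketch of steps 3, 8, 9, and 10 can be written in a few lines of Python. Here the function `simulator`, the design size, and the quadratic model form are all illustrative assumptions standing in for an expensive simulation code and whatever metamodel structure step 7 would select:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical stand-in for an expensive simulation run (e.g. a FEM code)
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2

def latin_hypercube(n, d, rng):
    # Step 8: a simple space-filling design (one stratified value per column bin)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for k in range(d):
        u[:, k] = rng.permutation(u[:, k])
    return u

# Steps 3 and 8: design region [0, 1]^2 with n = 20 runs
X = latin_hypercube(20, 2, rng)
y = simulator(X)                       # step 4: collect the outputs

# Steps 7 and 9: fit an assumed quadratic metamodel by least squares
def features(X):
    return np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Step 10: validate on fresh points with the RMSE metric chosen in step 6
Xv = rng.random((50, 2))
rmse = np.sqrt(np.mean((features(Xv) @ beta - simulator(Xv)) ** 2))
print(round(rmse, 3))
```

In a real application the validation budget and the required accuracy (step 5) would be fixed before the experiment, not after.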

The one-shot strategy results in a metamodel constructed from a single experiment that covers the entire sample space. In this strategy, it is important for the experimental design to explore all regions, because no information is known about the functional form of the relationship between the response and the x-factors. Its application is therefore simplified, since the only decisions to be made about the experimental design are the number of runs and the point-spreading criterion to be used. However, this approach has a clear disadvantage when a single run of the simulation code (model) is expensive or consumes a lot of computational resources (Keys & Rees, 2004).

The lack of a priori information causes the one-shot strategy to place equal importance on all regions of the sample space, which sometimes results in a large sampling plan. To overcome this characteristic, a previously studied strategy for metamodel construction is the sequential approach (Box et al., 1978; Long et al., 2015; Yang & Xue, 2015). Unlike the one-shot approach, this approach constructs metamodels through several sequential experiments in which new points are added to an initial design according to criteria specified a priori. Figure 2 shows a simplified flow chart of the steps for using this strategy, as there seems to be no consensus regarding the number and sequence of steps (Pan et al., 2014).

Figure 2:
General steps for sequential strategy.

Unlike the one-shot strategy, in the sequential strategy the metamodels are constructed through several sequential experiments in which new points are added to an initial design according to some criterion specified a priori. In this way, it seeks to increase the efficiency of the whole process by evaluating the simulation model only in regions of interest in the sample space, or by using the information available at a given time t to plan the data collection at time t + 1.
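The sequential loop described above can be sketched as follows. The infill criterion used here, a space-filling maximin rule over random candidates, and the toy `simulator` function are illustrative assumptions; the strategy itself leaves the criterion open:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    # Hypothetical stand-in for an expensive simulation run
    return np.sin(2 * np.pi * x[:, 0]) * x[:, 1]

# Initial design: a few points in [0, 1]^2
X = rng.random((5, 2))
y = simulator(X)

for _ in range(10):  # sequential stage: one new point per iteration
    candidates = rng.random((200, 2))
    # Assumed infill rule: pick the candidate farthest from the current
    # design (a space-filling "maximin" criterion for exploration)
    dists = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
    best = candidates[np.argmax(dists.min(axis=1))]
    X = np.vstack([X, best])
    y = np.append(y, simulator(best[None, :]))

print(X.shape)  # (15, 2)
```

Criteria that exploit the information at time t (for example, adding points where the current metamodel is least accurate) would replace the maximin rule inside the loop.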

Although some authors propose frameworks for the construction of metamodels using the sequential strategy (Pan et al., 2014; Tabatabaei et al., 2015), there does not seem to be a consensus in the literature, in contrast to the one-shot approach. For this reason, this paper proposes a new sequential strategy, presented in Section 4. This proposed sequential strategy starts from the general framework proposed by Kleijnen & Sargent (2000).

2.2 Metamodel building

In general, when used in the development of new products, the objectives of a meta-model under the one-shot strategy are the same as under the sequential strategy, namely: a) predicting the value of y (the dependent variable) for a specific set of independent variables X; and b) optimizing the responses y. However, the literature shows that the second objective (optimization) is much more common than the first (prediction) for the sequential strategy (Peng et al., 2014; Tabatabaei et al., 2015). This is related to the fact that the sequential strategy is preferred for simulation models that have high running costs (Crombecq et al., 2011; Song et al., 2012; Peng et al., 2014).

Applications can be found in the literature in composite models built from several submodels (Barton, 1997), in large-scale structural optimization (Peng et al., 2014), and in aerodynamic optimization and CFD (Computational Fluid Dynamics) (Rosenbaum & Schulz, 2012). However, the number of practical examples of the sequential strategy is much smaller than that found for the one-shot strategy, because research on the sequential strategy is relatively recent.

Mathematical forms for metamodels are widely discussed in the literature (Jin et al., 2001; Li et al., 2010; Villa-Vialaneix et al., 2012). Kriging models are considered by many authors to be the most widely used for computational experiments (Gaul et al., 1991; Ankenman et al., 2010; Beers & Kleijnen, 2004; Couckuyt et al., 2012; Kleijnen & Van Beers, 2005; Xiao & Chen, 2007; Shao et al., 2012; Simpson et al., 1998; Zakerifar et al., 2011; Sheng et al., 2015; Zhang et al., 2015). The resulting equation for an ordinary Kriging model can be written as follows (Johnson et al., 2008):

\hat{y}(x) = \hat{\mu} + r(x, \hat{\theta})^{T} R^{-1}(X, \hat{\theta}) \left( y - \hat{\mu}\mathbf{1}_{n} \right) \quad (1)

The first term on the right in Equation (1) is the average response over the experimental space. The parameters θ_k of the vector θ̂ represent the correlation of the response y between two values (levels) of the k-th factor. R represents the correlation function used. The metrics to evaluate metamodel quality arise from the comparative analysis of the simulated values y (results of the physical simulation) and the values predicted by the metamodel, ŷ. Bias errors (Li et al., 2010) represent the systematic difference between the simulated data and the metamodel; they do not depend on the value of the predicted variable, i.e., they are constant over the entire sample space. Precision errors (Hussain et al., 2002; Jin et al., 2001), unlike bias errors, are not constant for all values of the input variables. To use the value of the errors in the comparison and evaluation of the metamodel, the average error, RMSE (root mean square error), is frequently estimated using Jackknife estimation (Duchesne & MacGregor, 2001; Beers & Kleijnen, 2004).
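A minimal ordinary Kriging predictor implementing Equation (1) with a Gaussian correlation function might look as follows. The θ values are fixed by hand here for illustration (in practice they are estimated, e.g. by maximum likelihood), and the small nugget term is a numerical-stability assumption:

```python
import numpy as np

def corr(A, B, theta):
    # Gaussian correlation: R_ij = exp(-sum_k theta_k * (A_ik - B_jk)^2)
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=2))

def fit_ordinary_kriging(X, y, theta):
    # R^{-1}(X, theta) from Equation (1); a tiny nugget keeps R invertible
    R = corr(X, X, theta) + 1e-10 * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # generalized LS estimate of mu
    w = Rinv @ (y - mu)                            # R^{-1} (y - mu * 1_n), precomputed
    return mu, w

def predict(Xnew, X, theta, mu, w):
    # Equation (1): y_hat(x) = mu + r(x, theta)^T R^{-1} (y - mu * 1_n)
    return mu + corr(Xnew, X, theta) @ w

rng = np.random.default_rng(2)
X = rng.random((12, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]     # deterministic "simulation" output
theta = np.array([25.0, 25.0])        # assumed values; normally estimated by MLE
mu, w = fit_ordinary_kriging(X, y, theta)

# The metamodel interpolates: predictions at the design points reproduce y,
# reflecting the absence of random variation in deterministic simulations
print(np.allclose(predict(X, X, theta, mu, w), y, atol=1e-6))  # True
```

The interpolation property shown in the last line is exactly why replicate runs add no information in DACE.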

2.3 Criteria for using designs for computer experiments

There is a difference between the one-shot and sequential strategies in the specification of the design of experiments. In the one-shot strategy, the practitioner first selects the number of points and defines the point-spreading criterion to be optimized. After this, the experimental design cannot be changed. In the sequential strategy, an initial set of points is determined (following some space-filling criterion, as in the one-shot strategy) at which the simulation model will be evaluated, together with the criterion that will determine in which region of the sample space the next point(s) will be placed. There are three types of designs commonly used in computational experiments: MaxMin, Uniform, and Minimum Potential.

Although computational designs have several interesting properties, the distribution of the points within the sample space follows some criterion that usually requires almost no initial information. The MaxMin design, proposed by Johnson et al. (1990), seeks to maximize the minimum distance between pairs of experimental points. As a result, the points are distributed as dispersedly as possible within the space. The disadvantage of this type of design is that, since the smallest distance between two points is maximized, the points tend to lie on the boundary of the sample space (Morris & Mitchell, 1995).
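The MaxMin criterion is simple to compute: a design is scored by its smallest pairwise distance. The sketch below improves that score by crude random search; real design generators use exchange or optimization algorithms, so the search loop is only illustrative:

```python
import numpy as np

def min_pairwise_distance(X):
    # MaxMin criterion: the smallest Euclidean distance between any two points
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return d[np.triu_indices(len(X), k=1)].min()

rng = np.random.default_rng(3)
best, best_crit = None, -np.inf
for _ in range(500):                 # crude random search over candidate designs
    X = rng.random((10, 2))          # 10 points in the unit square
    crit = min_pairwise_distance(X)
    if crit > best_crit:
        best, best_crit = X, crit

print(round(best_crit, 3))           # maximin value of the best design found
```

Inspecting `best` typically shows points pushed toward the edges of the square, which is the boundary tendency noted above.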

The Uniform Design was proposed by Fang (1980) and Wang and Fang (1981). This design seeks to minimize the non-uniformity of the distribution of points in the sample space. For this purpose, a measure known as the discrepancy is calculated. Details of this type of computational experiment can be found in Fang et al. (2000).
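As an illustration, the centered L2 discrepancy of Fang et al. (2000) can be computed directly from its closed form. The comparison between a centered grid and random points below is only a sanity check on the measure, not a design recommendation:

```python
import numpy as np

def centered_l2_discrepancy(X):
    # Centered L2 discrepancy of points X in [0, 1]^d (closed-form expression)
    n, d = X.shape
    a = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1 + 0.5 * a - 0.5 * a ** 2, axis=1).sum()
    pair = np.prod(
        1 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :]
        - 0.5 * np.abs(X[:, None, :] - X[None, :, :]),
        axis=2,
    )
    term3 = pair.sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(4)
random_pts = rng.random((16, 2))
grid = np.array([[(i + 0.5) / 4, (j + 0.5) / 4]
                 for i in range(4) for j in range(4)])

cd_grid = centered_l2_discrepancy(grid)      # evenly spread points
cd_rand = centered_l2_discrepancy(random_pts)
print(round(cd_grid, 4), round(cd_rand, 4))
```

Lower values indicate a more uniform spread, which is what a Uniform Design optimizes over candidate point sets.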

The Minimum Potential design, proposed by Audze and Eglais in 1977, is based on the physical analogy of a system of points carrying electric charges of the same sign. When the points are very close, the charges generate repulsive forces, creating electrical potential energy; when they are far apart, attractive forces, analogous to springs, create elastic potential energy. The system is in equilibrium when the potential energy is minimal (Bates, Sienz & Langley, 2006). An important characteristic of these designs is that they are almost orthogonal: even for small sample sizes, the Pearson correlation between two factors is always small.
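The Audze-Eglais potential is usually written as the sum over point pairs of the inverse squared distances; minimizing it spreads the points out. The random-search minimization below is an illustrative stand-in for the optimization used by real design generators:

```python
import numpy as np

def audze_eglais_potential(X):
    # "Potential energy" of the design: sum over point pairs of 1 / distance^2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    iu = np.triu_indices(len(X), k=1)
    return (1.0 / d2[iu]).sum()

rng = np.random.default_rng(5)
best, best_u = None, np.inf
for _ in range(300):                 # crude random search, for illustration only
    X = rng.random((8, 2))
    u = audze_eglais_potential(X)
    if u < best_u:
        best, best_u = X, u

# Near-orthogonality check: Pearson correlation between the two columns
r = np.corrcoef(best[:, 0], best[:, 1])[0, 1]
print(round(best_u, 2), round(abs(r), 3))
```

With a proper optimizer and a Latin hypercube structure, the resulting correlation between factors is typically very small, which is the near-orthogonality property mentioned above.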

The quality of the model is verified by the difference between the value predicted by the model and the observed value (Li et al., 2010). These errors represent the systematic difference between the metamodel and the simulated data, and they are usually summarized by the RMSE (root mean square error). Another alternative is to fit the model several times, each time leaving out one of the points of the original set, and then predict the value of the omitted point with the fitted model. This technique is known as Jackknife prediction (Bischl et al., 2012; Duchesne & MacGregor, 2001).
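The Jackknife (leave-one-out) procedure can be sketched generically for any metamodel; the linear least-squares model used below is an assumed stand-in for whatever metamodel is being validated:

```python
import numpy as np

def loo_rmse(X, y, fit, predict):
    # Jackknife: refit the metamodel without point i, then predict point i
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = fit(X[mask], y[mask])
        errs.append(predict(model, X[i:i + 1])[0] - y[i])
    return np.sqrt(np.mean(np.square(errs)))

# Demo metamodel: plain linear least squares (an illustrative assumption)
def fit(X, y):
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

rng = np.random.default_rng(6)
X = rng.random((20, 2))
y = 1 + 2 * X[:, 0] - X[:, 1]            # exactly linear "simulation" output
print(round(loo_rmse(X, y, fit, predict), 6))  # ~0: the model class matches
```

A large leave-one-out RMSE relative to the range of y signals that the metamodel structure or the design needs revision.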

According to Bursztyn & Steinberg (2006), a design for DACE should enable metamodel predictions with small error, but alphabetical optimality metrics based on the covariance of the parameter estimates are not applicable because of the absence of experimental error. This statement is supported by several other authors (Allen et al., 2003; Kleijnen, 2005). The orthogonality of the experimental matrix, a criterion of considerable importance in classic DoE, has only relative importance in designs for computer experiments. However, this statement does not seem to be a consensus, given the number of authors who still include this criterion as important to evaluate (Fang et al., 2000; He & Tang, 2013; Lin et al., 2010; Liu & Fang, 2006; Meng, 2006; Viana, 2013).

With regard to DACE, the literature indicates the use of the space-filling criterion. Given that almost no information about the metamodel form is available a priori, responses must be evaluated throughout the complete sample space (Janouchová & Kučerová, 2013; Jin et al., 2001; Liefvendahl & Stocki, 2006; Pronzato & Müller, 2012; Stinstra, 2006; Viana, 2013). Another criterion when using DACE is to prevent two levels of an unimportant factor from collapsing into one, which is known as a small-discrepancy design (Husslage et al., 2011). For readers interested in exploring these criteria, a vast amount of literature is available (Bates et al., 2006; Fang et al., 2000; Fuerle & Sienz, 2011; Johnson et al., 1990; Morris & Mitchell, 1995; Stinstra et al., 2003).
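The space-filling and low-discrepancy criteria above can be illustrated with a Latin hypercube design, which by construction never places two runs at the same level of any factor; the dimension and sample size below are arbitrary, and the centered $L_2$-discrepancy (Fang et al., 2000) is used as the uniformity measure:

```python
import numpy as np
from scipy.stats import qmc

# A Latin hypercube stratifies each factor so that no two runs share a
# level in any dimension (the non-collapsing property sought in DACE).
sampler = qmc.LatinHypercube(d=3, seed=42)
lhs = sampler.random(n=20)                           # 20 runs in [0, 1)^3

random = np.random.default_rng(42).random((20, 3))   # plain random sample

# Centered L2-discrepancy: lower values indicate a more uniform design.
print("LHS discrepancy:   ", qmc.discrepancy(lhs))
print("Random discrepancy:", qmc.discrepancy(random))
```

For most seeds the Latin hypercube yields a lower discrepancy than plain random sampling, though this is not guaranteed for every realization.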

To improve understanding of the issues discussed on experimentation within the computing environment, Table 1 presents a comparative analysis of the main features of classic DoE and the computational experiment, DACE. It is possible to infer from the literature review that DACE is still relatively little known by users of DoE. It is also observed that there are few empirical studies on the subject and that practical contributions reported in specialized scientific journals are limited. In the next section, practical applications of computational experiments will be presented.

Table 1
Criteria for using DoE and DACE.

3 EMPIRICAL STUDIES WITH COMPUTER EXPERIMENTS

This section deals with the application of computational experiments, DACE, in product development projects through two instances, executed between 2013 and 2015 at a home appliances company. All applications followed the one-shot strategy because of the limitations of the computational package already in use at the selected company (SAS JMP). Moreover, the 10 steps presented by Kleijnen & Sargent (2000) were followed for each problem.

3.1 Design robustness evaluated with metamodels

The first application of the methodology of computational experiments presents a structural robustness study of a washing machine. Figure 3 illustrates the problem to which DACE was applied. This figure shows a schematic section of the washing machine highlighting the main components of the system that allow water extraction. In the washing process, there is a heterogeneous distribution of the mass moment of inertia around the rotating shaft. The clothes accumulated in specific basket regions, also called unbalanced load, generate a force orthogonal to the spin axis, directed outward from the product. To resist the static effect, the washer components must be made of materials stiff enough that the displacement of the components does not cause contact between the basket and the tub during the spin cycle, a phenomenon known as gap closure. A gap "g" between the tub and basket is designed such that the product can absorb eventual displacements of the parts.

Figure 3
Schematics of a washing machine (Jung et al., 2008).

To simulate the dynamics of this system, MSC Adams software, which deals with multibody physics, was used. Ioriatti (2007) shows how this computational package can be applied to vertical-axis washing machines. In this application, the inertial loads from MSC Adams are introduced into the structural computational package Ansys, which calculates the deflections of all the components using the finite element method. Thus, the final value of the gap "g" between the basket and the tub is obtained.

Considering the long duration of each physical simulation, which lasts about 20 hours for each load condition, a computational experiment was proposed to enable the construction of a metamodel that allows the evaluation of the value “g” in a shorter time. The experiment has an output variable that is the total combined deflection of the basket and tub with three input variables: the amount of unbalanced load in kilograms, the height of the unbalanced load in millimeters, and the amount of balanced load in kilograms.
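A design for the three input variables above can be generated on the unit cube and rescaled to physical units before being handed to the simulator; the variable bounds below are hypothetical placeholders, not the values used in the study:

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=1)
unit_design = sampler.random(n=15)              # 15 runs in [0, 1)^3

# Hypothetical bounds: [unbalanced load (kg), load height (mm), balanced load (kg)]
lower = [0.2, 0.0, 1.0]
upper = [2.0, 500.0, 8.0]
design = qmc.scale(unit_design, lower, upper)   # each row = one simulation run

for run, (unb, height, bal) in enumerate(design, start=1):
    print(f"run {run:2d}: unbalanced={unb:.2f} kg, "
          f"height={height:.0f} mm, balanced={bal:.2f} kg")
```

Each row of `design` would correspond to one 20-hour finite-element run, which is why the number of runs must be kept small.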

To summarize the results, Table 2 shows the estimated values of θ (Equation 1) for several combinations of experimental designs (MaxMin and uniform type) and correlation functions (Gaussian and cubic). Using the values of the Jackknife errors, it was possible to estimate the root mean square error of each combination. Table 3 shows these values, which indicate a clear advantage in combining the uniform criterion with the Gaussian correlation function. The error for this alternative is, on average, 50% lower than for the other options.

Table 2
Theta estimates for two different space-filling designs and two correlation functions.

Table 3
Comparison for Jackknife Root Mean Square errors for different designs and correlation functions.

3.2 Structural optimization of a planetary carrier using metamodels

The second case of DACE application refers to a cost reduction for the parts that form the planetary gearbox of a washing machine. The gearbox consists of four distinct parts, as shown in Figure 4. It is the ratio between the pitch diameters of the sun and planetary gears that generates the torque amplification, which is applied to the clothing to provide the mechanical action that removes dirt. Finally, there is the planetary carrier, which interfaces with the planetary gears, as well as an output shaft that delivers the amplified torque. The planetary carrier transforms the translatory movement of the planetary gears into rotation of the output shaft (Liu et al., 2016).

Figure 4
Parts of a planetary gearbox (Liu et al., 2016).

Figure 5 shows a three-dimensional representation of a planetary carrier in a CAD system. In this study, the carrier is made from sintered steel by powder metallurgy. While it is desirable that it not have a large mass, which would increase part cost, it is necessary to ensure a certain stiffness so that the resistive torque of the load does not cause deflection in the interface region with the gears. Another important design constraint for the part is the mechanical stress at the base of the interface pin.

Figure 5
Tridimensional representation (CAD) of a planetary carrier.

To avoid the high prototyping costs that might result from the development activity, it was decided to construct metamodels that enable the prediction of the deflection at the edge of the interface pins, the alternating mechanical stress (fatigue) at the pin base, and the mass of the piece, given the geometrical configuration of the parts and the noise factors that may affect the system. An optimization algorithm that allows the identification of the best design at the lowest possible cost was then applied to these metamodels.
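The optimization step described above can be sketched as a constrained minimization over the fitted metamodels. The surrogate functions below are toy placeholders standing in for the real metamodels of mass, deflection, and stress, and the limits are hypothetical; only the overall structure (minimize mass subject to stiffness and fatigue constraints) reflects the case:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the fitted metamodels (x = 5 normalized geometry factors).
mass       = lambda x: 1.0 + 0.5 * np.sum(x)      # mass grows with part size
deflection = lambda x: 2.0 / (1.0 + np.sum(x))    # larger part -> stiffer
stress     = lambda x: 3.0 / (1.0 + x[0] + x[1])  # stress at the pin base

# Minimize mass while keeping deflection and stress below hypothetical limits.
constraints = [
    {"type": "ineq", "fun": lambda x: 1.2 - deflection(x)},  # deflection <= 1.2
    {"type": "ineq", "fun": lambda x: 2.0 - stress(x)},      # stress <= 2.0
]
bounds = [(0.0, 1.0)] * 5
result = minimize(mass, x0=np.full(5, 0.5), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("optimal geometry:", np.round(result.x, 3))
print("minimum mass:", round(result.fun, 3))
```

Because each metamodel evaluation is nearly instantaneous, the optimizer can explore thousands of candidate geometries at negligible cost compared with direct finite-element runs.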

The chosen configuration must be robust to the variation of resistive torque caused by the movement of the clothing in the basket. Although some authors (Akcabay et al., 2014; Mac Namara et al., 2012) have proposed mathematical models to simulate the movement of the laundry load during washing and, thus, the resistive torque, the real value of the resistive torque was used here, measured with a torque wrench coupled to the input shaft; the measured value could vary between 7.5 and 25 N·m (newton-meters).

In this experiment, resistive torque is the only noise variable considered. All other variables (X1 to X5, see Table 3) are part of the planetary drive geometry design. For each experimental run, the mechanical stress at the base of the pin was estimated using the finite element method. The quality of fit of the metamodels was assessed using the root mean square error determined by the Jackknife technique (Duchesne & MacGregor, 2001; Beers & Kleijnen, 2004). The fit was done using the Gaussian correlation function. The experimental design was of the uniform type, with 20 runs. Table 4 shows a greyscale map of the generated design. The value of each variable in each run, standardized between -1 and 1, is presented within each cell. The level of darkness represents how near each variable is to the border of the inferential space, * representing the positive border (+1) and ** representing the negative border (-1). The analysis of Table 4 shows that the points were well distributed within the sample space.
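The coded values in Table 4 follow the usual DoE standardization of each factor onto [-1, +1]; a short sketch with arbitrary example bounds (the bounds and sample values below are illustrative, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(3)
lower = np.array([0.2, 0.0, 1.0])     # hypothetical physical lower bounds
upper = np.array([2.0, 500.0, 8.0])   # hypothetical physical upper bounds
design = rng.uniform(lower, upper, size=(20, 3))  # 20 runs in physical units

# Standard DoE coding: map each factor linearly onto [-1, +1], so that
# -1 is the lower design bound and +1 the upper design bound.
coded = 2 * (design - lower) / (upper - lower) - 1
print(coded.round(2))
```

Coding puts all factors on a common scale, which makes the greyscale map of Table 4 directly comparable across variables.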

Table 4
Distribution of points inside inferential space.

Table 5 shows the errors between the simulated and predicted values of the mechanical stress. At the end of the table, the root mean square error is presented.

Table 5
Jackknife error between simulated and predicted values of mechanical cyclic stress.

It can be seen that run 13 has a considerably larger error for the cyclic stress variable than the others. The simulated value for this run was 268.68 MPa and the predicted value was 209.79 MPa (21.9% lower). To assess whether this point has a statistically different error from the others, Figure 6 shows a graph in which the y-axis represents the simulated values and the x-axis represents the Jackknife values estimated through the metamodels.

Figure 6
Relationship between simulated values (y-axis) and jackknife predicted (x-axis).

The red line was fitted by least squares. The region in dark red is the 95% confidence interval, and the region in light red is the 95% prediction interval of the linear regression. The point outside the confidence and prediction intervals represents run 13. In addition to this analysis, a statistic known as Cook's distance was calculated for every point. According to Montgomery and Runger (2013), this method assesses how influential a specific point is on the model, and it is given by Equation (2)

$$D_i = \frac{(\hat{\beta}_{(i)} - \hat{\beta})' \mathbf{X}'\mathbf{X} (\hat{\beta}_{(i)} - \hat{\beta})}{p\,\hat{\sigma}^2} \qquad (2)$$

The squared distance $D_i$ measures the difference between the usual estimate of the β vector obtained with all n observations and the estimate obtained with the i-th point removed. A value greater than 1 indicates that the point is influential for the model.

Figure 7 shows the Cook's distance values for all runs, considering the cyclic stress. It is clear that run 13 produced a Jackknife error different from all the other points and is thus very influential in the model (its Cook's distance is 3.81). The point was therefore discarded and all models were adjusted again. Upon applying the procedure based on Cook's distance to the new fit, it was found that point 4 had become very influential, with a distance of 2.3086. Therefore, this point was also removed from the data and the fit was performed a third time. A root mean square error of 3.124 was obtained, about 80% less than that of the metamodel with all points.

Figure 7
Values of Cook’s Distance for all design points.

4 PROPOSED FRAMEWORK TO METAMODEL BUILDING

Figure 8 shows the proposed framework for constructing metamodels, which complements the framework by Kleijnen & Sargent (2000) described in Section 2.1. At the bottom right of some boxes, numbers relate each step to the 10 original steps discussed in Section 2.1. The boxes without numbers were added after the case studies; they are used to evaluate the quality of the metamodel and to decide whether to drop one or more points from the original data set.

Figure 8
Proposed framework for constructing metamodels.

5 CONCLUSION

This article presented some theoretical characteristics of computational experiments, called DACE, which were compared with the classic DoE. Computational experiments aim to build metamodels (models of models) to reduce the time consumed by the computing resources of engineering software. The methodology is based on classic experimental design, DoE, adapted to the virtual environment.

The importance of developing this methodology stems from the fact that product development is today a strategic activity for companies operating in the global market. Understanding consumer needs and translating them into engineering language are crucial tasks in the development process and should direct the determination of product specifications. In this context, DoE can be used to build mathematical relationships between the response variables of a system (which are related to consumer needs) and its input variables, thereby allowing the optimization of these responses and the determination of settings that are robust to the various conditions of use. However, the use of DoE is not widespread in companies, mainly because of the costs involved in its application, which arise from the need to build physical prototypes and conduct actual tests.

The use of computer software, instead of physical experimentation, for determining the relationship between independent and dependent variables gained strength from the 1990s onward, but such software may require a considerable amount of time to run, depending on the complexity of the physical model employed. The DACE methodology can be employed to build metamodels in the computing environment, allowing an input-variable setting to be evaluated faster than with the engineering software alone. The application of DACE differs from that of classic experiments because the computing environment lacks random variation (the condition for which classical DoE was developed). Therefore, specific experimental designs and metamodeling techniques are necessary.

The literature on the subject has devoted considerable effort to the study of mathematical aspects but has failed to discuss the operational aspects that may be crucial to successful implementation. Therefore, this work aimed to address the lack of empirical studies on the application of computational experiments in product development.

Two cases of DACE used in the construction of metamodels were applied in the product development of a white-goods company. The challenge was to obtain metamodels that minimize the difference between the results obtained by computer programs and those obtained by the mathematical model built from the computational experimental design. Results derived from different experimental designs were compared to show readers that the use of such methodologies demands technical and statistical knowledge, making clear the difficulties that may be encountered in the use of computational designs, DACE. For example, in the first case, in which two types of experimental designs and two correlation functions were compared, the combination of the uniform-type design with the Gaussian correlation function produced the best results for the application. In the second case, focused on cost reduction of a part of the planetary gear unit of a type of washing machine, the metamodel could be improved based on a given metric; in the analyzed case, the metric was Cook's distance, which improved the metamodel's predictive ability.

A DACE application framework was proposed to complement the standard framework put forward by researchers on the subject. It is important to note that the use of computational experiments differs from classic DoE and requires prior knowledge of basic statistics and of the studied system. Various management aspects, such as the criticality of the system studied, the project phase, and the time available to perform the studies, should be assessed when using DACE. The application of computational experiments, based on judicious human judgment along with the techniques presented in this work, facilitates savings in material resources, reduction of product development time, and improvement of the quality of the product development process.

REFERENCES

  • 1
    AKCABAY DT, DOWLING DR & SCHULTZ WW. 2014. Clothes washing simulations. Computers and Fluids, (100): 79-94.
  • 2
    ALLEN TT, YU L & SCHMITZ J. 2003. An experimental design criterion for minimizing meta-model prediction errors applied to die casting process design. Journal of the Royal Statistical Society: Series C (Applied Statistics), 52(1): 103-117.
  • 3
    ANKENMAN B, NELSON BL & STAUM J. 2010. Stochastic kriging for simulation metamodeling. Operations research, 58(2): 371-382.
  • 4
ARAUJO CS, BENEDETTO-NETO H, CAMPELLO AC, SEGRE FM & WRIGHT IC. 1996. The utilization of product development methods: A survey of UK industry. Journal of Engineering Design, 7(3): 265-277.
  • 5
    ARVIDSSON M, GREMYR I & JOHANSSON P. 2003. Use and knowledge of robust design methodology: a survey of Swedish industry. Journal of Engineering Design, 14(2): 129-143.
  • 6
    BARTON RR. 1997. Design of experiments for fitting subsystem metamodels. In: Proceedings of the Winter Simulation Conference.
  • 7
    BATES RA, KENETT RS, STEINBERG DM & WYNN HP. 2006. Achieving robust design from computer simulations. Quality Technology & Quantitative Management, 3(2): 161-177.
  • 8
    BEERS WCM & KLEIJNEN JPC. 2004. Kriging interpolation in simulation: a survey. In: Proceedings of the Winter Simulation Conference.
  • 9
    BISCHL B, MERSMANN O, TRAUTMANN H & WEIHS C. 2012. Resampling methods for meta-model validation with recommendations for evolutionary computation. Evolutionary Computation, 20(2): 249-275.
  • 10
BOX GEP, HUNTER WG & HUNTER JS. 1978. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. John Wiley and Sons. [s.l.].
  • 11
    BURSZTYN D & STEINBERG DM. 2006. Comparison of designs for computer experiments. Journal of Statistical Planning and Inference, 136(3): 1103-1119.
  • 12
    COUCKUYT I, FORRESTER A, GORISSEN D, DE TURCK F & DHAENE T. 2012. Blind Kriging: Implementation and performance analysis. Advances in Engineering Software, 49: 1-13.
  • 13
    CROMBECQ K, LAERMANS E & DHAENE T. 2011. Efficient space-filling and noncollapsing sequential design strategies for simulation-based modeling. European Journal of Operational Research, 214(3): 683-696.
  • 14
    DUCHESNE C & MACGREGOR JF. 2001. Jackknife and bootstrap methods in the identification of dynamic models. Journal of Process Control, 11(5): 553-564.
  • 15
    ELLEKJÆR MR & BISGAARD S. 1998. The use of experimental design in the development of new products. International Journal of Quality Science, 3(3): 254-274.
  • 16
FANG KT, MA CX & WINKER P. 2000. Centered $L_2$-discrepancy of random sampling and Latin hypercube design, and construction of uniform designs. Mathematics of Computation, 71(237): 275-297.
  • 17
    FUJITA K & MATSUO T. 2005. Utilisation of Product Development Tools and Methods. International Conference on Engineering Design ICED05, pp. 1-14.
  • 18
    GAUL L, KLEIN P & PLENGE M. 1991. Simulation of wave propagation in irregular soil domains by BEM and associated small scale experiments. Engineering Analysis with Boundary Elements, 8(4): 200-205.
  • 19
    HE Y & TANG B. 2013. Strong orthogonal arrays and associated Latin hypercubes for computer experiments. Biometrika, 100(1): 254-260.
  • 20
    HUANG T, KONG CW, GUO HL, BALDWIN A & LI H. 2007. A virtual prototyping system for simulating construction processes. Automation in Construction, 16(5): 576-585.
  • 21
    HUSSAIN MF, BARTON RR & JOSHI SB. 2002. Metamodeling: Radial basis functions, versus polynomials. European Journal of Operational Research, 138: 142-154.
  • 22
IORIATTI AS. 2007. A Study of Vertical Axis Washer Dynamics Using Multicomponent Systems. Master thesis. University of São Paulo, Brazil.
  • 23
    JANOUCHOVÁ E & KUČEROVÁ A. 2013. Competitive comparison of optimal designs of experiments for sampling-based sensitivity analysis. Computers and Structures, 124: 47-60.
  • 24
    JIN R, CHEN W & SIMPSON TW. 2001. Comparative studies of metamodelling techniques under multiple modelling criteria. Structural and Multidisciplinary Optimization, 23(1): 1-13.
  • 25
    JOHNSON ME, MOORE LM & YLVISAKER D. 1990. Minimax and maximin distance designs. Journal of Statistical Planning and Inference, 26(2): 131-148.
  • 26
    JUNG CH, KIM CS & CHOI YH. 2008. A dynamic model and numerical study on the liquid balancer used in an automatic washing machine. Journal of Mechanical Science and Technology, 22(9): 1843-1852.
  • 27
    KAMIŃSKI B. 2015. A method for the updating of stochastic kriging metamodels. European Journal of Operational Research, 247: 859-866.
  • 28
    KEYS AC & REES LP. 2004. A sequential-design metamodeling strategy for simulation optimization. Computers and Operations Research, 31(11): 1911-1932.
  • 29
    KLEIJNEN JPC. 2005. An overview of the design and analysis of simulation experiments for sensitivity analysis. European Journal of Operational Research, 164(2): 287-300.
  • 30
    KLEIJNEN JPC & SARGENT RG. 2000. A methodology for fitting and validating metamodels in simulation. European Journal of Operational Research, 120(1): 14-29.
  • 31
    KLEIJNEN JPC & VAN BEERS WCM. 2005. Robustness of Kriging when interpolating in random simulation with heterogeneous variances: Some experiments. European Journal of Operational Research, 165(3): 826-834.
  • 32
    LEHMAN JS, SANTNER TJ & NOTZ WI. 2004. Designing computer experiments to determine robust control variables. Statistica Sinica, 14(1): 571-590.
  • 33
    LI YF, NG SH, XIE M & GOH TN. 2010. A systematic comparison of metamodeling techniques for simulation optimization in Decision Support Systems. Applied Soft Computing Journal, 10(4): 1257-1273.
  • 34
    LIEFVENDAHL M & STOCKI R. 2006. A study on algorithms for optimization of Latin hypercubes. Journal of Statistical Planning and Inference, 136(9): 3231-3247.
  • 35
    LIN CD, BINGHAM D, SITTER RR & TANG B. 2010. A new and flexible method for constructing designs for computer experiments. Annals of Statistics, 38(3): 1460-1477.
  • 36
    LIU L, LIANG X & ZUO MJ. 2016. Vibration signal modeling of a planetary gear set with transmission path effect analysis. Measurement, 85: 20-31.
  • 37
    LIU M & FANG K. 2006. A case study in the application of supersaturated designs to computer experiments. Acta Mathematica Scientia, 26(4): 595-602.
  • 38
    LONG T, WU D, GUO X, WANG GG & LIU L. 2015. Efficient adaptive response surface method using intelligent space exploration strategy. Structural and Multidisciplinary Optimization, 51(6): 1335-1362.
  • 39
    MAC NAMARA C, GABRIELE A, AMADOR C & BAKALIS S. 2012. Dynamics of textile motion in a front-loading domestic washing machine. Chemical Engineering Science, 75: 14-27.
  • 40
    MENG J. 2006. Integrated robust design using computer experiments and optimization of a diesel HPCR injector. Ph.D. thesis. Florida State University.
  • 41
    MONTGOMERY DC. 2012. Introduction to Statistical Quality Control. Jefferson City, MO: Wiley.
  • 42
    MORRIS MD & MITCHELL TJ. 1995. Exploratory designs for computational experiments. Journal of Statistical Planning and Inference, 43(3): 381-402.
  • 43
PAN G, YE P, WANG P & YANG Z. 2014. A sequential optimization sampling method for metamodels with radial basis functions. The Scientific World Journal.
  • 44
    PENG L, LIU L, LONG T & YANG W. 2014. An efficient truss structure optimization framework based on CAD/CAE integration and sequential radial basis function metamodel. Structural and Multidisciplinary Optimization, 50(2): 329-346.
  • 45
    PHILLIPS R, NEAILEY K & BROUGHTON T. 1999. A comparative study of six stage-gate approaches to product development. Integrated manufacturing systems, 10(5): 289-297.
  • 46
    PRONZATO L & MÜLLER WG. 2012. Design of computer experiments: Space filling and beyond. Statistics and Computing, 22(3): 681-701.
  • 47
    ROSENBAUM B & SCHULZ V. 2012. Comparing sampling strategies for aerodynamic Kriging surrogate models. ZAMM Zeitschrift fur Angewandte Mathematik und Mechanik, 92(11-12): 852-868.
  • 48
    SACKS J, WELCH WJ, MITCHELL TJ & WYNN HP. 1989. Design and analysis of computer experiments (with discussion). Journal of Statistical Science, 4(4): 409-423.
  • 49
    SHAO W, DENG H, MA Y & WEI Z. 2012. Extended Gaussian Kriging for computer experiments in engineering design. Engineering with Computers, 28: 161-178.
  • 50
    SHEN W, LIN DKJ & CHANG CJ. 2018. Design and analysis of computer experiment via dimensional analysis. Quality Engineering, 30(2): 311-328.
  • 51
    SHENG M, LI G, SHAH S, LAMB AR & BORDAS SPA. 2015. Enriched finite elements for branching cracks in deformable porous media. Engineering Analysis with Boundary Elements, 50: 435-446.
  • 52
    SIMPSON TW, MAUERY T, KORTE J & MISTREE F. 1998. Comparison of response surface and kriging models for multidisciplinary design optimization. pp. 1-11. American Institute of Aeronautics and Astronautics.
  • 53
SIMPSON TW, PEPLINSKI JD, KOCH PN & ALLEN JK. 1997. On the use of statistics in design and the implications for deterministic computer experiments. Proceedings of DETC'97, 1997 ASME Design Engineering Technical Conferences.
  • 54
    SONG D, HAN J & LIU G. 2012. Active model-based predictive control and experimental investigation on unmanned helicopters in full flight envelope. IEEE Transactions on Control Systems Technology, 21(4): 1502-1509.
  • 55
STINSTRA E. 2006. The meta-model approach for simulation-based design optimization. Open Access publications from Tilburg University.
  • 56
    STINSTRA E & DEN HERTOG D. 2008. Robust optimization using computer experiments. European Journal of Operational Research, 191(3): 816-837.
  • 57
    STINSTRA E, STEHOUWER P, DEN HERTOG D & VESTJENS A. 2003. Constrained maximin designs for computer experiments. Technometrics, 45(4): 340-346.
  • 58
    TABATABAEI M, HAKANEN J, HARTIKAINEN M, MIETTINEN K & SINDHYA K. 2015. A survey on handling computationally expensive multiobjective optimization problems using surrogates: non-nature inspired methods. Structural and Multidisciplinary Optimization, pp. 1-25.
  • 59
VIANA FAC. 2013. Things You Wanted to Know About the Latin Hypercube Design and Were Afraid to Ask. 10th World Congress on Structural and Multidisciplinary Optimization.
  • 60
    VILLA-VIALANEIX N, FOLLADOR M, RATTO M & LEIP A. 2012. A comparison of eight metamodeling techniques for the simulation of N2O fluxes and N leaching from corn crops. Environmental Modelling and Software, 34: 51-66.
  • 61
    WANG W, CHEN RB & HSU CL. 2011. Using adaptive multi-accurate function evaluations in a surrogate-assisted method for computer experiments. Journal of computational and applied mathematics, 235(10): 3151-3162.
  • 62
    WELCH WJ, BUCK RJ, WYNN HP, SACKS J, MITCHELL TJ & MORRIS MD. 1992. Screening, predicting, and computer experiments. Technometrics, 34(1): 15-25.
  • 63
    XIAO H & CHEN Z. 2007. Numerical experiments of preconditioned Krylov subspace methods solving the dense non-symmetric systems arising from BEM. Engineering Analysis with Boundary Elements, 31: 1013-1023.
  • 64
    YANG K & EL-HAIK B. 2008. Design for Six Sigma A Roadmap for Product Development. McGraw-Hill Education. 2. ed. [s.l.].
  • 65
    YANG Q & XUE D. 2015. Comparative study on influencing factors in adaptive metamodeling. Engineering with Computers, 31(3): 561-577.
  • 66
ZAKERIFAR M, BILES WE & EVANS GW. 2011. Kriging metamodeling in multiple-objective simulation optimization. Simulation, 87(10): 843-856.
  • 67
ZHANG LW, LI DM & LIEW KM. 2015. An element-free computational framework for elastodynamic problems based on the IMLS-Ritz method. Engineering Analysis with Boundary Elements.

Publication Dates

  • Publication in this collection
    23 Sept 2019
  • Date of issue
    May-Aug 2019

History

  • Received
    29 Oct 2018
  • Accepted
    25 May 2019
Sociedade Brasileira de Pesquisa Operacional Rua Mayrink Veiga, 32 - sala 601 - Centro, 20090-050 Rio de Janeiro RJ - Brasil, Tel.: +55 21 2263-0499, Fax: +55 21 2263-0501 - Rio de Janeiro - RJ - Brazil
E-mail: sobrapo@sobrapo.org.br