Evaluation of capabilities of different global sensitivity analysis techniques for building energy simulation: experiment on design variables

Abstract

The objective of this study is to investigate the capabilities of different global sensitivity analysis methods applied to building performance simulation, i.e. the Morris, Monte Carlo, Design of Experiments, and Sobol methods. A single-zone commercial building located in Florianópolis, southern Brazil, was used as a case study. Fifteen inputs related to design variables were considered, such as thermal properties of the building envelope, solar orientation, and fenestration characteristics. The performance measures were the annual heating and cooling loads. It was found that each method provides different visual capabilities and measures of interpretation but, in general, there was little difference in the most and least influential variables identified. For the heating loads, the thermal transmittances were the most influential variables, while for the cooling loads the solar absorptances stood out. The Morris method proved to be the most feasible method due to its simplicity and low computational cost. However, as building simulation models remain complex and non-linear, variance-based methods such as Sobol are still necessary for general purposes.

Keywords:
Building simulation; Sensitivity analysis; Morris method; Monte Carlo; Sobol; Design of experiments

Introduction

In the built environment field of research, buildings play a key role in understanding and mitigating most of the global issues faced by countries today, mainly curbing the growth of energy consumption and fostering investment in renewable energy. In Brazil, for instance, the latest Brazilian Energy Balance (EMPRESA…, 2019) has shown that electricity consumption in buildings represents 50.6% of all consumption in the country, and residential buildings are the largest consumer within the buildings sector, accounting for 25.6%.

In order to study the behaviour of a building and to develop performance evaluation methods, technologies and strategies to reach some energy efficiency level, many aspects must be considered, such as the architectural design; the materials and components of the envelope; the geometry and size of the rooms; the solar orientation; the building systems (plug loads, heating, ventilation and cooling); the operational routines; and others. All these variables are normally handled by some performance evaluation method, which assesses their effects on one or more performance criteria (i.e. output or dependent variables). According to Meacham et al. (2005), in the traditional prescriptive-based methods, the performance objectives are somehow implicit in the technical recommendations from standards and building codes and in the cumulative experience from practice. The so-called performance-based methods consider the performance objectives explicitly, by developing a framework of goals, expectations and desires, which depends on developing a reference model, acquiring data, proposing design alternatives and evaluating their impacts on the performance criteria.

In this sense, to ensure that performance-based methods can be applied properly, building simulation programmes are used intensively for many purposes. One can use building simulation to analyse a conditioned building and determine its energy requirements; to analyse the applicability of natural ventilation or even natural lighting; to size heating, ventilation and air-conditioning equipment; to conduct calibration, optimisation and decision-making studies on design alternatives for a specific case; to understand the role of some specific phenomena, such as heat or moisture transfer in some parts of the building; to perform studies on renewable energy systems; etc.

As stated by Ioannou and Itard (2015), contemporary buildings are becoming more complex and subject to higher sustainability requirements, which makes building simulation a real necessity. The simulation is needed to consider, simultaneously, the weather information, the surrounding environment, the envelope of the building, the material properties, the use and occupation, and the impact of different design choices and characteristics (WANG; MATHEW; PANG, 2012). This integrated capability, i.e. the capacity to analyse all systems together as a complete numerical model, is one of the most important aspects of building simulation tools (HONG; CHOU; BONG, 2000).

However, the actual purpose of a building simulation is not to reach an extremely accurate result, but rather to perform the right type of computational experiment using a proper tool (AUGENBROE, 2011), which constitutes an art and a research field in itself. In this sense, there are many building simulation applications that could be considered incomplete, or at least rushed in terms of the results achieved. Østergård, Jensen and Maagaard (2016) reviewed research areas and corresponding challenges of using building simulation in the design process. The authors divided the literature into groups such as searching for an ideal design concept based on recommendations and experience (proactive simulation); exploring design options using a large number of simulations (statistical methods); using many interdependent performance objectives (holistic design); searching for an ideal solution using automated processes (optimisation); among others.

Following the interpretation of Østergård, Jensen and Maagaard (2016), this paper focuses on the statistical methods, which depend on many considerations regarding the model, the inputs, the simulation process, the performance criteria and the data analysis. In this sense, sensitivity analysis methods can provide much useful information in the performance-based design of buildings by quantifying the input-output relationships of a model and helping to prioritise some inputs over others in the design optimisation process (PETERSEN; KRISTENSEN; KNUDSEN, 2019; TIAN et al., 2017).

According to Borgonovo and Plischke (2016, p. 870), sensitivity analysis “[…] is the set of methods that allow us to understand key insights of scientific codes […]”, i.e. of mathematical modelling (such as building simulation models). From the building performance evaluation literature, Tian (2013) described some sensitivity analysis methods, such as local methods, based on one-at-a-time samples, and global methods, such as regression-based methods, the Morris method, variance-based methods and meta-modelling approaches.

Nguyen and Reiter (2015) compared nine sensitivity analysis methods by applying them to three mathematical models and two case studies of building performance simulation. The methods applied were: regression-based sensitivity indices such as the Pearson product-moment correlation coefficient (PEAR), standardised regression coefficient (SRC), standardised rank regression coefficient (SRRC), Spearman coefficient (SPEA), partial correlation coefficient (PCC) and partial rank correlation coefficient (PRCC); variance-based methods such as the Sobol indices and the Fourier amplitude sensitivity test (FAST); and the so-called screening-based method, i.e. the Morris method. Some design variables were chosen as inputs for the EnergyPlus simulation model in two cases, i.e. a detached house and an apartment. The SimLab computer programme was used to conduct the sensitivity analyses. The authors indicated that FAST and Sobol gave similar results but are very computationally expensive, and that SRC and PCC gave more reliable results than their rank transformations. The authors also recommended the Morris method for excluding unimportant variables, but not for ranking the most influential ones, as it differed from the other approaches.

Yang et al. (2016) also compared four sensitivity analysis methods: SRC, Morris, FAST and the treed Gaussian process (TGP), a meta-modelling-based sensitivity analysis. The authors applied the methods to two commercial buildings in China with different aspect ratios. Some packages in the R language were used to perform the sensitivity analyses, and the EnergyPlus programme was used to perform the simulations. The authors recommended the TGP method due to its low computational cost and good accuracy, although “accuracy” is not easily defined in terms of a mathematical model. However, according to the authors, at least two methods should be applied, which could be the TGP and the SRC method.

Other studies have assessed the performance of a building by using more than one sensitivity analysis method. Tian et al. (2017) used the Monte Carlo approach along with SRC indices and Sobol sensitivity indices to assess influential design variables of an office building in China. Petersen, Kristensen and Knudsen (2019) compared the one-at-a-time approach, the Morris method and the Sobol method to assess the performance of an office building in Denmark. Silva and Ghisi (2020) used local sensitivity methods and the Morris method to develop a framework for the performance evaluation of buildings in Brazil, considering design variables of a low-income house.

This study addresses some gaps: the need to use different sensitivity measures for building performance simulation; the need to understand an adequate combination of methods for performance evaluation purposes; and the quantification of the ranking differences among sensitivity analysis methods. Thus, the objective of this study is to analyse the capabilities of different sensitivity analysis methods applied to an experiment dealing with design variables in building performance simulation.

Different methods for different purposes

A model is a simplification of “reality”, i.e. an attempt to describe some real phenomenon through a mathematical or computational expression in order to reach some objective. As nature offers many objects for investigation, several models have been built to predict all kinds of phenomena, from the understanding of specific chemical reactions to the prediction of future global weather conditions.

Complex models, such as the case of the building performance simulation programmes, can only be assessed through “experimentation”. That is why it is necessary to understand the different methods that exist for mathematical experimentation, such as the sensitivity and uncertainty analysis methods that rely on “perturbing” the model by modifying its input variables while observing the changes that occur in the output variables.

To understand the methods explained in the following sections, a simple model will be considered. Let Y = f(X) be the model, where Y is the output and X = [x1, ..., xi, ..., xk] is the vector of inputs.

Morris method

The Morris method, or Elementary Effects method, was developed with the motivation of making numerical experiments more efficient (MORRIS, 1991). According to the author, it is common for dozens or even hundreds of variables to be involved in a numerical model, which makes the experiment more complex and more time-consuming to evaluate properly.

This method is used to discover input variables that could be considered negligible, linear or additive, or non-linear, or which cause higher-order effects. The experiment begins with the definition of an experimental region ω, which is a regular k-dimensional grid with p levels, where each variable xi can assume a value in the interval [0, 1]. This interval enables variables with different orders of magnitude to be considered simultaneously, in a normalised way. Later on, they can be easily transformed from the unitary hypercube back to their original units. The independent variables are denoted by xi, where i varies in {1, 2, ..., k}, over p levels in the region ω. For one value of X, the ith elementary effect is defined by Equation 1.

$d_i(X) = \dfrac{Y(x_1, \ldots, x_{i-1}, x_i + \Delta, x_{i+1}, \ldots, x_k) - Y(X)}{\Delta}$  Eq. 1

Where:

Δ is a predetermined multiple of 1/(p − 1), taking a value between 1/(p − 1) and 1 − 1/(p − 1);

p is the number of levels in ω;

x is each selected value in ω;

Y is the analysed function/model, which uses X as input variables; and

di(X) is the elementary effect of the ith variable on the function Y.

The sensitivity indices calculated by the Morris method are the mean (µ) and standard deviation (σ) of the elementary effects. Campolongo, Cariboni and Saltelli (2007) developed a new measure (µ*), based on the absolute values of the elementary effects, to avoid the cancellation of negative and positive values. The main arguments of the Morris method are the number of trajectories of the search process (r), the number of inputs (k) and the number of discrete levels (p), which makes the sample size equal to r × (k + 1).
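Explicitly, writing di(X(j)) for the elementary effect of the ith input obtained in the jth trajectory, the three measures are computed as (MORRIS, 1991; CAMPOLONGO; CARIBONI; SALTELLI, 2007):

$\mu_i = \frac{1}{r}\sum_{j=1}^{r} d_i(X^{(j)}) \qquad \mu_i^{*} = \frac{1}{r}\sum_{j=1}^{r} \left| d_i(X^{(j)}) \right| \qquad \sigma_i = \sqrt{\frac{1}{r-1}\sum_{j=1}^{r} \left( d_i(X^{(j)}) - \mu_i \right)^{2}}$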

According to the literature, the Morris method requires the discretisation and normalisation of the input variables, i.e. the inputs should have a known amplitude and a discrete distribution and, for sampling purposes, the input space must be normalised. Another aspect is that the Morris measures should be interpreted as qualitative rather than quantitative indices. This implies that the ranking provided by the Morris method is more important than the specific values of the measures.
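As an illustration of how such a screening can be set up in R (a minimal sketch with a toy model standing in for the building simulation; the variable names, ranges and coefficients are assumptions, not those of this study), the “morris” function of the “sensitivity” package generates the trajectories and returns the elementary effects:

library(sensitivity)

# Toy model standing in for the building simulation: inputs arrive normalised
# to [0, 1] and are rescaled internally (hypothetical ranges and coefficients)
toy_model <- function(X) {
  u_roof  <- 0.8 + X[, 1] * (3.2 - 0.8)   # thermal transmittance, W/m2K
  a_roof  <- 0.2 + X[, 2] * (0.8 - 0.2)   # solar absorptance
  azimuth <- X[, 3] * 360                 # solar orientation, degrees
  10 * u_roof + 25 * a_roof + 5 * sin(pi * azimuth / 180) + u_roof * a_roof
}

# Morris design: r trajectories, k factors and p discrete levels
m <- morris(model = toy_model,
            factors = c("Uroof", "aroof", "Azimuth"),
            r = 20,
            design = list(type = "oat", levels = 8, grid.jump = 4),
            binf = 0, bsup = 1)

# mu* and sigma of the elementary effects, used for ranking and non-linearity
mu.star <- apply(abs(m$ee), 2, mean)
sigma   <- apply(m$ee, 2, sd)
print(m)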

Heiselberg et al. (2009) used the Morris method to develop a sensitivity analysis framework. The authors applied it to the performance evaluation of an office building in Denmark to assess the cooling and heating energy consumption using the BE06 programme. The sensitivity analysis revealed some influential variables for the heating period, such as the control of the lighting systems and the airflow rates. Nembrini, Samberger and Labelle (2014) used the Morris method to assess influential variables of shape, geometry and systems in multi-family building models.

Corrado and Mechri (2009) determined the influence of 129 design variables of an Italian residential building by using the Morris method. McLeod, Hopfe and Kwan (2013) analysed the energy consumption of a building in the United Kingdom by using the Morris method, considering future climate predictions. Silva, Almeida and Ghisi (2016) assessed design variables of a low-income house in four different climates in Brazil by using the Morris method. The sensitivity analysis helped to identify differences in the ranking of influential variables for each climate and each constructive system.

Monte Carlo methods

The Monte Carlo approach is a well-known statistical procedure used for assessing uncertainties in complex models. The essence of the Monte Carlo approach is to estimate the probability distribution of an output by randomly sampling the inputs of a model. This approach to sensitivity analysis depends mainly on the sampling method, the sample size, the probability distribution of the input variables, the sensitivity measures and the convergence of the output results.

The literature shows that there are many sampling procedures, such as simple random, stratified random, Latin hypercube, quasi-random, and others (MACDONALD, 2009). The main advantage is that, with the same sample of inputs, many sensitivity indices can be calculated once the outputs have been computed (e.g. SRC, SRRC, PCC, PRCC, SPEA, etc.). Moreover, the sample size is not tied to the number of input variables, which can make a computational experiment less expensive. The probability distributions considered for the input variables can be discrete or continuous. Among the continuous distributions, the literature usually considers normal, uniform, or other parametric distributions for describing the parameter sample space.
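A minimal sketch of this workflow in R (using the “lhs” package for sampling and the “sensitivity” package for the measures; the variables, ranges and the response standing in for the simulated loads are assumptions) could be:

library(lhs)          # Latin hypercube sampling
library(sensitivity)  # src() and pcc() measures

set.seed(42)
n <- 500  # sample size, chosen independently of the number of inputs

# Uniform Latin hypercube sample for three hypothetical design variables
u <- randomLHS(n, 3)
X <- data.frame(Uroof   = 0.8 + u[, 1] * (3.2 - 0.8),
                aroof   = 0.2 + u[, 2] * (0.8 - 0.2),
                Azimuth = u[, 3] * 360)

# Hypothetical response standing in for the simulated cooling loads
y <- 10 * X$Uroof + 25 * X$aroof + 5 * sin(pi * X$Azimuth / 180) + rnorm(n)

# Standardised regression coefficients and partial correlation coefficients;
# rank = TRUE would return SRRC/PRCC instead, for non-linear but monotonic models
print(src(X, y, rank = FALSE))
print(pcc(X, y, rank = FALSE))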

Hopfe and Hensen (2011) used the Monte Carlo method along with stepwise regression coefficients to analyse the influence of the physical properties of an office building in the Netherlands on the thermal comfort of users. Yıldız and Arsan (2011) used the SRRC measure to evaluate influential variables in the energy consumption of a house in Turkey. Hygh et al. (2012) used the SRC to analyse influential variables in the energy consumption of an apartment building in the United States. Silva and Ghisi (2013) used the Monte Carlo approach with the SRC and PCC measures to analyse the influence of design variables on the thermal and energy performance of a low-income house in Florianópolis, Brazil. Encinas and De Herde (2013) used the Monte Carlo method with the Spearman coefficient to analyse the thermal discomfort of users in a building in Chile, considering variables related to ventilation, air infiltration and internal loads. Mahar et al. (2020) used the Monte Carlo approach to assess design variables in a residential building in Pakistan. The authors used twenty-one design variables, sampled with Latin hypercube sampling, considering either discrete or continuous distributions depending on the variable. The SRC measure was used to quantify the sensitivity, enabling the identification of the most influential variables for that climate.

The authors agree with the advantages of the Monte Carlo approach, especially in dealing with design variables. However, there is a fundamental issue with this approach, which is its dependence on the linearity of the model. The rank-transformed indices can be used to overcome this issue (i.e. SRRC and PRCC instead of SRC and PCC, respectively).

Variance-based methods

The variance-based methods rely on the “variance” statistical measure. They can provide good sensitivity estimates for both non-linear and non-additive models. The variance decomposition of the output can be written as Equation 2.

$V(Y) = \sum_{i=1}^{k} V_i + \sum_{i=1}^{k} \sum_{j>i}^{k} V_{ij} + \ldots + V_{1,2,\ldots,k}$  Eq. 2

Where:

Vi is the variance due to each variable;

Vij is the variance due to the second-order interaction effects, and so on, up to V1,2,...,k, which is the variance of the kth-order interaction.

As the variance calculation of a model can be considered a “model-free” approach, i.e. with no application limitations, there are several methods to assess it. The most traditional is the Design of Experiments, in which the experiment relies on full factorial multivariate sampling: all levels are combined, and the sample size increases exponentially (N = p^k). This is the main disadvantage of this method for building simulation purposes, as the sample size can become impracticable for many inputs or many levels of variation.

For the Design of Experiments, the sum of squares, the mean squares or the F-value could be used as a measure of sensitivity, for each variable or interaction effect.
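To illustrate (a minimal sketch with a hypothetical two-level design and a made-up response, not the actual experiment of this study), the “fac.design” function of the “DoE.base” package builds the full factorial sample, and the standard “aov” function returns the sums of squares and F-values used as sensitivity measures:

library(DoE.base)

set.seed(1)

# Full factorial design: p = 2 levels for k = 3 variables, hence N = 2^3 = 8 runs
plan <- fac.design(factor.names = list(Uroof = c(0.8, 3.2),
                                       aroof = c(0.2, 0.8),
                                       Uwall = c(0.6, 4.4)))

# Convert the factor columns back to numeric values
X <- as.data.frame(lapply(plan, function(v) as.numeric(as.character(v))))

# Hypothetical response standing in for the simulated heating loads
X$heat <- 10 * X$Uroof + 4 * X$Uwall - 6 * X$aroof +
          2 * X$Uroof * X$aroof + rnorm(nrow(X), sd = 0.5)

# ANOVA with main effects and second-order interactions; the F-value of each
# term is used as the sensitivity measure
fit <- aov(heat ~ (Uroof + aroof + Uwall)^2, data = X)
summary(fit)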

The Sobol (1990) method estimates the first-order and total-order effects of the input variables by approximating integrals with the Monte Carlo approach. The Sobol sample does not require a full factorial design, but only a sample large enough to enable a good estimation. The original method involves many equations, which can be consulted in the original work. The method requires two matrices of samples, and the procedure includes resampling.

Some authors have proposed extensions and improvements to the Sobol method. Mara and Joseph (2008) proposed an extension based on the random balance design with a permutation of matrices. This approach allows the estimation of the first-order indices. Saltelli et al. (2010) developed another way to compute the first and total order indices by using the Jansen (1999) estimator. This approach is based on the expected variance rather than the reduction of the expected variance and uses Monte Carlo quasi-random samples.
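A minimal sketch of the Saltelli/Jansen estimator as implemented in the “soboljansen” function of the “sensitivity” package (with a toy model and hypothetical variables; two independent sample matrices are required, and the total cost is on the order of N × (k + 2) model evaluations):

library(sensitivity)

set.seed(123)
k <- 3     # number of inputs
N <- 1000  # base sample size per matrix

# Two independent Monte Carlo matrices on the unit hypercube
X1 <- data.frame(matrix(runif(N * k), ncol = k))
X2 <- data.frame(matrix(runif(N * k), ncol = k))
names(X1) <- names(X2) <- c("Uroof", "aroof", "Azimuth")

# Toy model standing in for the building simulation (hypothetical coefficients)
toy_model <- function(X) {
  10 * X[, 1] + 25 * X[, 2] + 5 * sin(pi * X[, 3]) + 8 * X[, 1] * X[, 2]
}

# First-order (Si) and total-order (St) Sobol indices with bootstrap intervals
sob <- soboljansen(model = toy_model, X1 = X1, X2 = X2, nboot = 100)
print(sob)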

Langner et al. (2012) used a fractional factorial design (an attempt to reduce the full factorial sample) to analyse some design variables of three commercial buildings in the United States through simulation in EnergyPlus. Jaffal, Inard and Ghiaus (2009) used the design of experiments approach to assess envelope variables of a building using TRNSYS simulation and a polynomial estimation of the heating demand. Santos, Porto and Silva (2020) analysed different design variables of a low-income house in different climatic zones in Brazil. The experiment relied on a full factorial design and variance-based measures, which enabled the identification of influential variables in the thermal comfort and thermal performance of the houses.

Nguyen and Reiter (2015) used the Sobol method, in comparison with other sensitivity analysis techniques, to assess the energy performance of two buildings in Vietnam, using the EnergyPlus simulation programme and the SimLab tool (JOINT…, 2013) to calculate the sensitivity indices. Kristensen and Petersen (2016) used the Sobol method to assess the impacts of design variables on the annual energy needs of a residential reference building. Some features of the method were compared with the Morris and local approaches, especially regarding the probability distribution of the inputs. Menberg, Heo and Choudhary (2016) also compared the Morris method with the Sobol and Monte Carlo approaches for analysing design variables of an educational building in Cambridge. The heating energy needs were calculated, and design and operational inputs were assessed using the TRNSYS simulation programme.

The variance-based methods have many advantages: they do not depend on the linearity of the model, they are capable of capturing the influence over the whole amplitude of the inputs, they can quantify interaction effects, and they can determine grouped sensitivity measures.

Method

An experiment considering the building design characteristics as independent variables was performed in order to compare the capabilities of different sensitivity analysis methods in building simulation. The method is structured as follows:

  1. the selection of the sensitivity analysis methods for application;

  2. description of the building simulation model;

  3. description of the climate considered for the experiment;

  4. specific settings of the input variables for the experiment; and

  5. data treatment, comparison and visualisation methods.

Sensitivity analysis methods

Table 1 shows the sensitivity analysis methods considered in this work, along with the corresponding sensitivity measure, the sampling procedure and a bibliographic reference.

The methods shown in Table 1 were implemented through scripts in the R language for performing each sensitivity analysis. Some predefined functions published by other authors were used to write the scripts:

  1. the Morris method was implemented using the “morris” function from the R package “sensitivity”, developed by Pujol, Iooss and Janon (2015). The theoretical model was adopted from Morris (1991) and Campolongo, Cariboni and Saltelli (2007);

  2. the Monte Carlo method was implemented using Latin hypercube sampling with the “LHS” function from the R package “pse”, developed by Chalom, Mandai and Prado (2017), with the Bratley and Fox (1988) model;

  3. the sensitivity indices used for evaluating the Monte Carlo samples were also taken from the “sensitivity” package. For the standardised regression coefficients, the function “src” was used, and for the partial correlation coefficients, the function “pcc” was used; both functions use the theoretical model from Saltelli et al. (2008);

  4. for the design of experiments (DoE), the function “fac.design” from the “DoE.base” package, developed by Groemping, Amarov and Xu (2020) using theory from Collings (2016), was used. The analysis of variance was performed using the “aov” function from the existing R package “stats”;

  5. the Sobol {a} method was implemented using the function “sobolmara” from the “sensitivity” package; the theoretical model from Mara and Joseph (2008) was considered; and

  6. the Sobol {b} method was implemented using the function “soboljansen” from the “sensitivity” package; the theoretical models from Jansen (1999) and Saltelli et al. (2010) were considered.

These preexisting R functions and packages were incorporated into scripts developed especially for this work to facilitate their use for building simulation purposes in EnergyPlus™. It is important to mention that all the scripts/functions were split to separate the two major processes: the sample generation and the sensitivity analysis itself. This is necessary because the simulation procedures usually demand much time to run. In this sense, one can generate all the samples for all sensitivity analysis methods in the “.idf” file format for EnergyPlus™ without running the simulations immediately; this can be done later.
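This decoupled pattern is natively supported by the “sensitivity” package: calling a constructor with model = NULL only generates the sample, and the indices are computed later with tell() once the external simulations have finished. A minimal sketch of the idea (the file names and the post-processing step are assumptions, not the actual scripts of this study):

library(sensitivity)

# 1) Generate the design only; no model is evaluated at this point
des <- morris(model = NULL, factors = 15, r = 20,
              design = list(type = "oat", levels = 8, grid.jump = 4))

# des$X holds the normalised sample; each row would be rescaled to the physical
# ranges of Table 3 and written to one ".idf" file for EnergyPlus
write.csv(des$X, "morris_sample.csv", row.names = FALSE)

# 2) ... run the EnergyPlus simulations externally, possibly much later ...

# 3) Read the annual loads extracted from the ".csv" simulation outputs
#    ("loads.csv" and its column name are hypothetical) and compute the indices
y <- read.csv("loads.csv")$heating_load
tell(des, y)
print(des)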

Figure 1 shows a flowchart of the method, which exemplifies the step-by-step procedure followed in each sensitivity analysis method. It should be noted that other R scripts were created to perform complementary actions, such as one script to create the many “.idf” files according to an input sample of independent variables, and another to calculate the dependent variable from each “.csv” file output by each EnergyPlus™ simulation.

Building model and simulation settings

As the main objective of this study is to compare the capabilities of the different sensitivity analysis methods, the building model itself was simplified: a single-zone building was chosen for the case study, according to Figure 2. The advantage of a single-zone approach is that the simulation time is reduced and there is no need for additional aggregation procedures for calculation of equivalent performance criteria between zones.

Table 1
Sensitivity analysis methods and details

Figure 1
Flowchart of the method - processes followed for each sensitivity analysis method

Figure 2
Floor plan of the simulation model of a commercial building

Table 2
Thermal properties for the base model

The base building model has walls with a double layer of concrete and an air gap; a roof with ceramic tiles, air gap, concrete, air gap and plaster ceiling; and a floor with concrete and ceramic tiles. The windows are composed of single 6 mm clear glass. The thermal transmittance and thermal capacity of each element, calculated according to Brazilian technical standard methods (ABNT, 2005), are shown in Table 2. The building was modelled in the EnergyPlus™ simulation programme to perform dynamic simulation of the building throughout the whole year.

The contact with the ground was considered by using the “Ground:Domain” object of EnergyPlus™, which uses an implicit finite difference model to determine the ground temperatures. The ground temperature profile from Kusuda and Achenbach (1965) was used, correlated with the local monthly average ground temperatures in the shallower layer (approximately 0.5 m below the surface); this shallow-layer information was taken from the TRY weather file of Florianópolis, detailed in the next section.

A night ventilation strategy was considered, along with some air infiltration through the windows during the day. The base model considered an occupancy density of 0.2 person/m², 16 W/m² of lighting loads and 10.7 W/m² of equipment loads, which are average values for office buildings according to NBR 16401-1 (ABNT, 2008). These loads varied on an hourly basis according to the occupancy and the equipment and lighting usage; these schedules are shown in Figure 3.

The simulation experiment was performed to estimate the thermal loads over the entire year. The “Ideal Loads Air System” object from EnergyPlus™ was used to perform this calculation. Thermostat setpoints of 25 °C for cooling and 19 °C for heating were set in this ideal system. The cooling system was available throughout the year on weekdays from 7 a.m. to 6 p.m.; the heating system was available only on weekdays from June to September (the cold season), from 7 a.m. to 6 p.m.

Climate

The simulation experiments were performed for the climate of Florianópolis, Santa Catarina, southern Brazil, according to the Test Reference Year (TRY) weather file determined by Goulart, Lamberts and Firmino (1998). Figure 4 shows the global and direct solar radiation, dry bulb and dew point temperatures and relative humidity for this weather file; an average day for each month is shown.

Figure 3
Schedules for occupancy, equipment and lighting usage for the base model

Figure 4
Weather variables for Florianópolis, southern Brazil (data from the TRY weather file (GOULART; LAMBERTS; FIRMINO, 1998))

According to the Köppen-Geiger classification (ALVARES et al., 2013), Florianópolis has a humid subtropical climate, typical of the coastal region of southern Brazil, with well-defined summer and winter and no dry season. The lowest dry bulb temperatures occur from June to August and the highest in January and February. For some months (August to December and March), the average diffuse solar radiation is higher than the direct radiation, indicating high cloud coverage. The monthly average relative humidity varies from 82% to 89%.

Simulation experiment

The simulation experiment is a sensitivity analysis of the building's design variables. The aim was to identify important variables influencing the cooling and heating loads of the building and to discard unimportant ones.

The design variables were selected based on the envelope of the building and are shown in Table 3. Fifteen independent variables (k) were considered, such as the thermal transmittance and thermal capacity of the building components, some thermal and optical properties of opaque and transparent surfaces, among other variables. These variables are responsible for controlling the thermal gains and losses of the building, which affects energy consumption.

Table 3 also shows the lower and upper numeric limits for each independent variable. For this study, the variables were treated as discrete within these intervals, as required by the Morris method and the Design of Experiments. In this sense, to enable an adequate comparison among methods, all variables were discretised into levels of variation (see Table 4). Up to eight levels were considered for each independent variable, depending on the sensitivity analysis method applied.
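As an illustration of how a normalised sample can be mapped onto such discrete levels (a minimal sketch; the range and number of levels below are only an example, see Table 3 for the actual values):

# Map values in [0, 1] onto p equally spaced discrete levels of a variable
to_levels <- function(u, lower, upper, p) {
  levels <- seq(lower, upper, length.out = p)
  levels[pmin(p, floor(u * p) + 1)]   # level index 1..p from the normalised value
}

# Example: a thermal transmittance between 0.8 and 3.2 W/m2K discretised into 8 levels
to_levels(c(0.00, 0.13, 0.49, 0.99), lower = 0.8, upper = 3.2, p = 8)
# approximately 0.80, 1.14, 1.83, 3.20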

Finally, Table 4 shows the settings for each sensitivity analysis method. The design of experiments method was handled somewhat differently: as it requires a full factorial combination of the variables' levels, only two levels of variation were considered (the actual lower and upper limits of each variable), and the two least important variables for each dependent variable (cooling and heating loads) were excluded a priori from the factorial sampling to avoid an impractical sample size and simulation time.

Thus, seven methods were tested in this analysis, with sample sizes ranging from 320 to 8,500 runs per method.

Table 3
Independent variables for the simulation experiment

Table 4
Sensitivity analysis methods and settings used for the first analysis, along with the sample size required

Results

The results are discussed along with the specific tables and graphs for each method.

Overview of the output data

This section shows the results of the comparison of the sensitivity analysis methods for the simulation experiment. Figure 5 shows the histograms of the heating loads for each sensitivity analysis method, while Table 5 shows the descriptive statistics. Similarly, Figure 6 and Table 6 show the same information for the cooling loads. The seven methods presented in Table 4 were grouped into six histograms, as both Monte Carlo approaches (with SRC and PCC indices) were simulated and calculated from the same sample.

The Design of Experiments (DoE) sample reached a larger amplitude in both heating and cooling loads due to the factorial combination among the extreme levels (p) of the independent variables. In contrast, the smaller samples, such as the Morris {a} case, produced the lowest amplitude among the results. This is an interesting point regarding the stratification capabilities of the sampling strategies: the random samples lead to narrower results, while the DoE leads to wider amplitudes, i.e. more extreme cases were simulated and evaluated.

Another aspect is that the heating loads are much lower than the cooling loads, which is a characteristic of the climate evaluated. In this case, a decision-maker could prioritise the cooling requirements above heating requirements in this commercial building.

Figure 5
Histograms of the heating loads of the simulation experiment for all methods

Table 5
Descriptive statistics for heating loads for all methods

Figure 6
Histograms of the cooling loads of the simulation experiment for all methods

Table 6
Descriptive statistics for cooling loads for all methods

According to Tables 5 and 6, the coefficient of variation of the data (Coef.var) is very similar between methods, except for the Design of Experiments, which obtained higher values: 58.0% for heating loads and 47.3% for cooling loads.

Capabilities of each method

In this section, the results and capabilities of each sensitivity analysis method are presented and discussed.

Morris method

Looking at each sensitivity analysis method in turn, Figure 7 shows the mean and standard deviation of the Morris {a} method for the heating and cooling loads. For the heating loads, the solar orientation (“Azimuth”) showed the highest non-linearity, and the thermal transmittance of the roof (“Uroof”) showed the greatest influence. Both means (µ and µ*) can be analysed together to conclude which variable is the most important and what its sign is; e.g. the Uroof variable is directly proportional to the heating loads, while the aroof variable is inversely proportional (negative sign).

For the cooling loads, the solar absorptance of the roof seemed to be the most influential variable, and the thermal capacity of the roof showed the highest non-linear effect. Most of the input variables showed little influence compared with the few most influential ones in this case.

The Morris {b} approach, which differs from Morris {a} only in the sample size, showed similar results and is not reported separately due to space limitations. However, due to the different sample sizes, some variables changed position in the ranking of the most influential variables, as reported in Figure 14.

Figure 7
Mean (µ and µ*) and standard deviation (σ) of the elementary effects for the Morris {a} method for heating (a) and cooling loads (b)

The Morris graphs are interesting as one can see some features:

  1. non-linearity effects: variables that stand out from the others. For instance, for the heating loads and the µ mean, the influences of Uroof and Uwall are similar, but their non-linearities are very different. This indicates that, from the lower to the upper level of the Uroof variable (i.e. from 0.8 to 3.2 W/m²K, according to Table 3), the influence on the heating loads is not linear. We cannot visualise this “non-linearity” directly, but we have a qualitative measure of it (the σ measure);

  2. another aspect that is interesting to observe is that many variables are very close to each other, forming a cloud of points in the µ×σ graphs. Those variables have similar influence and non-linearity. By applying some statistical criteria (usually subjective), these variables can be considered negligible and be discarded in further decisions; and

  3. positive and negative values: the µ measure can indicate important information related to the proportionality between input and output, i.e. the sign of the effect. For the cooling loads, one can see that the aroof and Croof variables are influential, but the higher the aroof, the higher the cooling loads; and the higher the Croof, the lower the cooling loads. This aspect is not evident in the variance-based measures themselves.

Monte Carlo methods

The Monte Carlo approaches demand different visualisation strategies to understand the data; in this case, a scatterplot is useful as the sensitivity measures are based on linear relations. Figure 8 shows the scatterplot for the heating loads and Figure 9 shows the same for the cooling loads.

The trend lines connect the mean value of each level of the inputs; in this sense, the more horizontal is the line, the lower effect the input has in the output. The thermal transmittance of the Venetian blinds (“tvenetian”) and the window shading (“Lshading”) had horizontal trend lines, which indicates that they have little effect on the heating loads. For the cooling loads, it is harder to conclude something based only on the visual inspection; however, it seems that the solar absorptances have a greater influence on the cooling loads than the other variables.

It should be emphasised that the sample is the same for both Monte Carlo {a} and Monte Carlo {b} methods, differing only in the calculation of the sensitivity measures. Some findings are as follows:

  1. sample size: usually, the sample size required for a Monte Carlo sensitivity analysis is greater than for the Morris method. This makes more data available in each stratum of the sample, i.e. there is more information when the data are grouped according to the levels of variation, in the case of a discrete distribution;

  2. trend lines: it is possible to create scatterplots to visualise the data for each input variable and output variable, along with trend lines. In Figures 8 and 9, the trend line is a non-linear curve that attempts to follow the overall behaviour of the data. Some variables showed a linear behaviour for the heating loads (e.g. Uwall) and others a very non-linear relationship (e.g. Azimuth and Croof). However, this approach does not give a measure of linearity or non-linearity for each variable, but only for the whole model. In this case, the adjusted R² was 0.885 for the heating loads and 0.815 for the cooling loads; and

  3. influential variables: Uwall and Uroof were the most influential variables on the heating loads according to visual inspection. However, it is not clear which one is the most influential. The awall and aroof variables seemed the most influential on the cooling loads.

Figure 8
Scatterplots for the Monte Carlo samples of heating loads and the design variables

Figure 9
Scatterplots for the Monte Carlo samples of cooling loads and the design variables

Figure 10 shows the SRC and PCC indices for each performance criterion. Due to the high R², the rank is the same for both SRC and PCC. The advantage is that the coefficients are standardised between -1 and +1, which facilitates a direct comparison.

Design of experiment method

In the case of the design of experiments, one can group the dependent variables according to the most influential variables before visualising them. In this case, Uroof, Uwall, aroof and awall were the most influential variables on the heating loads. For the cooling loads, the most influential variables were aroof, awall, Uroof and Croof.

Figure 11 shows the boxplot combined with a violin plot for the heating and cooling loads grouped according to the four most important variables in the design of experiments method. In the case of heating loads, one can see a larger amplitude in the case of Uroof equal to 3.2 W/m²K, Uwall equal to 4.4 W/m²K, aroof equal to 0.20 and awall equal to 0.2, i.e. this combination of variables can lead to higher uncertainties in the performance of the building, indicating that some fifth or sixth variable is also influential in this particular group. For the other groups, a smaller amplitude was found. Combinations of low thermal transmittances can lead to similar results, as seen in the first two groups.

For the cooling loads, the best combination of variables was the lower solar absorptances for both walls and roof. The amplitudes also varied in this case, and the last group, with thermal capacity of the roof equal to 40 kJ/m²K, reached the highest amplitude and the highest mean.

Theoretically, the boxplot grouping of variables can only be properly performed in the case of a full factorial design, as one can guarantee that all levels have values in each group. This is not true for the other methods, as they rely on random or quasi-random samples.

Figure 12 shows the F-value sensitivity measure for the design of experiments approach. One can see the magnitude of the influence of the absorptance and transmittance variables compared to the others.

Sobol variance-based methods

Figure 13 shows the sensitivity measure of the Sobol approaches. These methods are not well established in terms of visual features. Scatterplots or boxplots cannot be properly applied as the sample is not generated specifically for these purposes.

One can see that the order of influence is very similar to that of the Design of Experiments. However, there is some difference between the first and total order indices. For example, for the cooling loads, the first-order index (Si) showed a ranking of aroof, awall, Cwall and Croof, while the total-order index (St) showed aroof, awall, Croof and Uroof. This indicates that there are some important interaction effects involving the Uroof variable that make it more important when all effects are considered.

Ranking of the influential variables

Figure 14 shows the rank of the ten most influential variables according to each sensitivity analysis method for both heating and cooling loads. This represents the final comparison between the methods, where one can verify whether the methods agreed or disagreed in the rank order. For both the heating and cooling loads, all sensitivity analysis methods agreed on the first and second most influential variables. The thermal transmittances of the roof and walls were the most influential for the heating loads, while the solar absorptances of the roof and walls were the most influential for the cooling loads.

However, for the other rank positions, the methods led to different results. In the case of heating loads, for the 4th most influential variable, most of the methods indicated the solar absorptance of the walls (awall), while the Morris {b} method indicated the thermal capacity of the roof (Croof). It seems that, for the 4th and 7th positions of influence, the methods changed the order of importance among themselves.

Figure 10
Sensitivity measures of the input variables for Monte Carlo {a} with Standardised Regression Coefficients (SRC) and {b} Partial Correlation Coefficient (PCC) for Heating Loads and Cooling Loads

Figure 11
Boxplot and violin plot for the design of experiments sample grouped by the four most influent variables, for heating (a) and cooling loads (b)

Figure 12
F-value of the input data of the Design of Experiments for heating and cooling loads

Figure 13
Bar plot for the Sobol {a} Si and Sobol {b} St sensitivity indices for heating loads and cooling loads

Figure 14
Ranking of the ten most influent design variables (or terms) in the heating loads and cooling loads for each sensitivity analysis method

As for the cooling loads, for the 3rd and 6th positions, the importance differed slightly among the methods. For the 3rd most influential variable, most of the methods indicated the thermal capacity of the roof (Croof), while the design of experiments indicated the term "Uroof:aroof", which is a second-order effect between the thermal transmittance of the roof (Uroof) and its solar absorptance (aroof).

One way to draw a conclusion is to take one of the methods as more suitable; in this case, the design of experiments and the Sobol {b} with total indices were considered as references. The design of experiments is capable of separating the first and second-order effects, while the Sobol {b} considers the total sensitivity indices at once. For the cooling loads, the DoE method showed the term “Uroof:aroof” as the 3rd most influential, i.e. this second-order effect was more important than the Croof variable (in the 4th position). The Sobol {b} St showed the Croof variable as the 3rd most influential, because all of the first-order effects of the aroof variable would already be included in the total index St. In this case, the 4th most influential variable was Uroof. Without the DoE approach, one could not tell why this happened, but it can be seen that the Uroof total index St includes a major second-order effect between Uroof and aroof (more important than the variable alone).

Some general comments:

  1. some difference in the order of importance between methods was expected, as each method was calculated from a different sample and none of them considered the whole sample domain at once;

  2. despite the differences in some rank positions, the first and second most influential variables were the same for all methods, which indicates that all of them could be used for the factor prioritisation setting of Saltelli et al. (2008), i.e. if these variables were fixed at their “true” values, most of the variance in the model output would be reduced;

  3. both Monte Carlo approaches, with SRCs or PCCs, presented the same order of influence, i.e. the regression-based and the correlation-based measures indicated the same variable ranking (a minimal sketch of both measures is given after this list);

  4. the DoE and the Sobol {b}, analysed together, can properly indicate the total effects as well as the separate first and second-order effects among variables, which could not be seen with either method alone; and

  5. despite the expected non-linearity of some variables (as can be seen in the plots of each method), the Monte Carlo approaches mostly agreed with the variance-based approaches such as DoE and Sobol {b}.
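Illustrating point 3 above, a minimal sketch of how the two Monte Carlo measures can be computed from a sample is shown below (assuming NumPy; the sample matrix X, with one column per design variable, and the load vector y are placeholders for the Latin Hypercube sample and the simulated loads):

```python
import numpy as np

def src(X, y):
    """Standardised Regression Coefficients: least-squares coefficients of the
    z-scored inputs against the z-scored output."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(yz)), Xz])
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]  # drop the intercept

def pcc(X, y):
    """Partial Correlation Coefficients: correlation between the residuals of
    X_i and of y after both are regressed on the remaining inputs."""
    n, k = X.shape
    out = np.empty(k)
    for i in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
        rx = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
        ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
        out[i] = np.corrcoef(rx, ry)[0, 1]
    return out

# Ranking by absolute value of either measure gives the order of influence:
# order = np.argsort(-np.abs(src(X, y)))
```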

If capabilities such as efficiency, low computational cost and the ability to reveal differences in importance and non-linearity are required, the Morris method is a good choice for this type of experiment involving envelope design variables in building simulation. It allows the influence of the variables to be understood quite accurately compared with more advanced approaches such as Sobol {b}, without losing the capability of factor prioritisation. One must highlight, however, that the order of influence is qualitative in the case of the Morris method. Even so, its low computational cost (640 runs for Morris {b} compared with 8,500 for Sobol {b}) makes its use worthwhile.
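As a rough consistency check on those run counts (a sketch based on the standard cost formulas; the number of trajectories r and the base sample size N are inferred here, not reported in this section), the Morris method requires $r\,(k+1)$ model runs and the Saltelli scheme for first and total-order Sobol indices requires about $N\,(k+2)$ runs. With $k = 15$ design variables:

$$40 \times (15 + 1) = 640 \qquad \text{and} \qquad 500 \times (15 + 2) = 8\,500,$$

i.e. the reported counts are consistent with $r = 40$ trajectories and a base sample of $N = 500$.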

Figures 15 and 16 show a different way of visualising the quantitative results of the sensitivity measures of each method. In this case, the measures were normalised from their original units in a vector manner to enable comparison. Figure 15 shows the normalised indices for the heating loads; the variables were filtered and ordered by importance according to the Sobol {b} method (the last column). By looking at the quantitative values relative to the first and second most influential variables, one can understand why the methods sometimes swapped the order of some variables: the values are too close. Any difference in the sampling strategy or in the strata coverage of the variable domain would change the results, especially after the 5th most important variable.
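The “vector manner” normalisation is assumed here to mean dividing each method's vector of absolute sensitivity measures by its Euclidean norm, so that indices expressed in different units become comparable; a minimal sketch:

```python
import numpy as np

def normalise(measures):
    """Scale a method's sensitivity measures by the Euclidean (L2) norm of the
    vector, so that e.g. mu* from Morris, |SRC| from Monte Carlo, F-values from
    DoE and St from Sobol can be plotted on a common scale per variable."""
    v = np.abs(np.asarray(measures, dtype=float))
    return v / np.linalg.norm(v)
```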

One can see that the spread of values among the variables is smaller for the Monte Carlo methods than for the variance-based methods, especially in the case of DoE. In DoE, Uroof had a normalised index of 0.753, while Uwall had 0.497 (a difference of 0.256). This difference is almost as large as the gap between the 1st and the 4th variable in the Monte Carlo {a} method.

Figure 15
Normalised sensitivity indices of all methods for heating loads

Figure 16
Normalised sensitivity indices of all methods for cooling loads

In Figure 16, the difference between the variance-based approaches and the other approaches is also noticeable. In DoE and Sobol {b}, the difference between the first and second most influential variables is larger than in the Monte Carlo approaches. The issue of the Uwall variable appears again, i.e. the Sobol {b} indicated that Uwall is the 5th most influential variable by computing the total index, while the other methods underestimated it based on the first order alone.

Conclusions

This study focused on the comparison of global sensitivity analysis methods applied to building performance simulation experiments on design variables. The study was performed on a single-zone commercial building, with 15 design variables as inputs and seven different sensitivity analysis approaches, i.e. the Morris method (two settings, {a} and {b}), Monte Carlo SRC, Monte Carlo PCC, Design of Experiments, and Sobol (two settings, {a} and {b}). The rankings of influence of the inputs on the heating and cooling loads were compared and discussed.

Beyond the specific results, some general conclusions can be drawn:

  1. regarding design-variable experiments, all methods agreed in showing the most influential variables for both performance criteria, with some disagreements among the methods. From the 1st to the 5th most influential variable, the methods agreed in general. The first two most influential variables were the same across all methods and performance criteria;

  2. there was no difference in the ranking between the Monte Carlo approaches with SRC or PCC, which is expected when the model has a high coefficient of determination - which was the case. The Morris settings differed in the number of trajectories and levels, and also showed a small difference in the ranking;

  3. the variance-based methods were considered the most accurate because of their model-free character. In this sense, they were chosen as the reference for the comparison. When Sobol {b} and DoE are compared, one can notice that the Sobol total index captured the essence of the total-order effects, while the DoE experiment exposed the second-order effects. To exemplify, in the DoE experiment the second-order effect “aroof:Uroof” was 3rd in the ranking, while Uroof alone was 8th; in the Sobol {b} St experiment, Uroof became 4th in the ranking due to the computation of the total effects;

  4. the methods agreed in showing the most influential and the least influential variables. For practical purposes, one could choose the method that is easiest to apply, has low computational cost and is easy to understand. The Morris method, in this sense, proved to be a feasible method that could, alone, indicate with precision the most important variables in this type of experiment. However, variance-based methods may still be necessary to address highly non-linear and complex models, and should be used when the computational cost allows; and

  5. each method offers different capabilities and opportunities for interpreting the output data. The Morris method showed great visualisation of the qualitative ranking; the Monte Carlo approach can show the overall behaviour of the data through scatterplots and trend lines; DoE can be used to create groups of important variables, which can help a designer make decisions more rationally; and the Sobol method can show the difference between first and total-order effects, being the most accurate of all - but with poor visual features.

This study is limited by the fact that only one building was studied, with a single zone and simple geometry, for only one climate and two performance criteria. However, many sensitivity analysis scenarios were considered, which somewhat justifies the simplicity of the computer model itself.

Further work should study different building typologies and their behaviour under the different sensitivity analysis methods applied to design variables. It is also important to evaluate the effects of different sample sizes, different probability distributions, or different experiments, such as those focusing on uncertainties (i.e. small variations in the input variables).


Acknowledgements

The authors acknowledge the CNPq, FINEP, UFSC and UFMS. This study was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

References

  • ALVARES, C. A. et al. Köppen's climate classification map for Brazil. Meteorologische Zeitschrift, v. 22, n. 6, p. 711-728, 2013.
  • ASSOCIAÇÃO BRASILEIRA DE NORMAS TÉCNICAS. NBR 15220-2: desempenho térmico de edificações: parte 2: método de cálculo da transmitância térmica, da capacidade térmica, do atraso térmico e do fator solar de elementos e componentes de edificações. Rio de Janeiro, 2005.
  • ASSOCIAÇÃO BRASILEIRA DE NORMAS TÉCNICAS. NBR 16401-1: instalações de ar-condicionado: sistemas centrais e unitários: parte 1: projeto das instalações. Rio de Janeiro, 2008.
  • AUGENBROE, G. The role of simulation in performance based building. In: LAMBERTS, R.; HENSEN, J. L. (org.). Building performance simulation for design and operation. New York: Spon Press, 2011.
  • BORGONOVO, E.; PLISCHKE, E. Sensitivity analysis: a review of recent advances. European Journal of Operational Research, v. 248, n. 3, p. 869-887, 2016.
  • BRATLEY, P.; FOX, B. L. ALGORITHM 659: implementing Sobol's quasi-random sequence generator. ACM Transactions on Mathematical Software, v. 14, n. 1, p. 88-100, 1988.
  • CAMPOLONGO, F.; CARIBONI, J.; SALTELLI, A. An effective screening design for sensitivity analysis of large models. Environmental Modelling & Software, v. 22, n. 10, p. 1509-1518, 2007.
  • CHALOM, A.; MANDAI, C. Y.; PRADO, P. I. K. L. R Package "pse": parameter space exploration with Latin Hypercubes. Comprehensive R Archive Network - CRAN, 2017. Available: https://cran.r-project.org/web/packages/ Accessed in: 10 dec. 2020.
  • COLLINGS, B. J. Generating the intrablock and interblock subgroups for confounding in general factorial experiments. The Annals of Statistics, v. 12, n. 4, p. 1500-1509, 2016.
  • CORRADO, V.; MECHRI, H. E. Uncertainty and sensitivity analysis for building energy rating. Journal of Building Physics, v. 33, n. 2, p. 125-156, 2009.
  • EMPRESA DE PESQUISA ENERGÉTICA. Brazilian Energy Balance 2019. Rio de Janeiro: EPE, 2019.
  • ENCINAS, F.; DE HERDE, A. Sensitivity analysis in building performance simulation for summer comfort assessment of apartments from the real estate market. Energy and Buildings, v. 65, p. 55-65, 2013.
  • GOULART, S.; LAMBERTS, R.; FIRMINO, S. Dados climáticos para projeto e avaliação energética de edificações para 14 cidades brasileiras. Florianópolis: Núcleo de Pesquisa em Construção/UFSC, 1998.
  • GROEMPING, U.; AMAROV, B.; XU, H. R Package 'DoE.base': full factorials, orthogonal arrays and base utilities for DoE packages. Comprehensive R Archive Network - CRAN, 2020. Available: https://cran.r-project.org/web/packages/ Accessed in: 10 dec. 2020.
  • HEISELBERG, P. et al. Application of sensitivity analysis in design of sustainable buildings. Renewable Energy, v. 34, n. 9, p. 2030-2036, 2009.
  • HONG, T.; CHOU, S.; BONG, T. Building simulation: an overview of developments and information sources. Building and Environment, v. 35, n. 4, p. 347-361, 2000.
  • HOPFE, C. J.; HENSEN, J. L. M. Uncertainty analysis in building performance simulation for design support. Energy and Buildings , v. 43, n. 10, p. 2798-2805, 2011.
  • HYGH, J. S. et al. Multivariate regression as an energy assessment tool in early building design. Building and Environment , v. 57, p. 165-175, 2012.
  • IOANNOU, A.; ITARD, L. C. M. Energy Performance and comfort in residential buildings: sensitivity for building parameters and occupancy. Energy and Buildings , v. 92, p. 216-233, 2015.
  • JAFFAL, I.; INARD, C.; GHIAUS, C. Fast method to predict building heating demand based on the design of experiments. Energy and Buildings , v. 41, n. 6, p. 669-677, 2009.
  • JANSEN, M. J. W. Analysis of variance designs for model output. Computer Physics Communications, v. 117, n. 1-2, p. 35-43, 1999.
  • JOINT RESEARCH CENTRE. European Commission. Simlab 2.2. 2013. Available: https://ipsc.jrc.ec.europa.eu Accessed in: 21 jun. 2013.
  • KRISTENSEN, M. H.; PETERSEN, S. Choosing the appropriate sensitivity analysis method for building energy model-based investigations. Energy and Buildings, v. 130, p. 166-176, 2016.
  • KUSUDA, T.; ACHENBACH, P. R. Earth temperatures and thermal diffusivity at selected stations in the United States. ASHRAE Transactions, v. 71, n. 1, 1965.
  • LANGNER, M. R. et al. An investigation of design parameters that affect commercial high-rise office building energy consumption and demand. Journal of Building Performance Simulation, v. 5, n. 5, p. 313-328, 2012.
  • MACDONALD, I. A. Comparison of sampling techniques on the performance of Monte Carlo based sensitivity analysis. In: BUILDING SIMULATION, Glasgow, 2009. Proceedings … Glasgow, 2009.
  • MAHAR, W. A. et al. Sensitivity analysis of passive design strategies for residential buildings in cold semi-arid climates. Sustainability, v. 12, n. 3, p. 1-22, 2020.
  • MARA, T. A.; JOSEPH, O. R. Comparison of some efficient methods to evaluate the main effect of computer model factors. Journal of Statistical Computation and Simulation, v. 78, n. 2, p. 167-178, 2008.
  • MCLEOD, R. S.; HOPFE, C. J.; KWAN, A. An investigation into future performance and overheating risks in Passivhaus dwellings. Building and Environment , v. 70, p. 189-209, 2013.
  • MEACHAM, B.; BOWEN, R.; TRAW, J.; MOORE, A. Performance-based building regulation: current situation and future needs. Building Research and Information, v. 33, n. 2, p. 91-106, 2005.
  • MENBERG, K.; HEO, Y.; CHOUDHARY, R. Sensitivity analysis methods for building energy models: Comparing computational costs and extractable information. Energy and Buildings, v. 133, p. 433-445, 2016.
  • MORRIS, M. D. Factorial sampling plans for preliminary computational experiments. Technometrics, v. 33, n. 2, p. 161, 1991.
  • NEMBRINI, J.; SAMBERGER, S.; LABELLE, G. Parametric scripting for early design performance simulation. Energy and Buildings, v. 68, p. 786-798, 2014.
  • NGUYEN, A.-T.; REITER, S. A performance comparison of sensitivity analysis methods for building energy models. Building Simulation , v. 8, n. 6, p. 651-664, 2015.
  • ØSTERGÅRD, T.; JENSEN, R. L.; MAAGAARD, S. E. Building simulations supporting decision making in early design: a review. Renewable and Sustainable Energy Reviews, v. 61, p. 187-201, 2016.
  • PETERSEN, S.; KRISTENSEN, M. H.; KNUDSEN, M. D. Prerequisites for reliable sensitivity analysis of a high fidelity building energy model. Energy and Buildings, v. 183, p. 1-16, 2019.
  • PUJOL, G.; IOOSS, B.; JANON, A. R Package "sensitivity": Global Sensitivity Analysis of Model Outputs. Comprehensive R Archive Network - CRAN, 2020. Available: https://cran.r-project.org/web/packages/ Accessed in: 10 dec. 2020.
  • SALTELLI, A. et al. Global sensitivity analysis: the primer. John Wiley and Sons, 2008.
  • SALTELLI, A. et al. Variance based sensitivity analysis of model output: design and estimator for the total sensitivity index. Computer Physics Communications , v. 181, n. 2, p. 259-270, 2010.
  • SANTOS, T. L. dos; PORTO, F. H. F. dos S.; SILVA, A. S. Análise da correlação entre conforto e desempenho térmico em habitações de interesse social por simulação computacional. Ambiente Construído, Porto Alegre, v. 20, n. 2, p. 211-229, abr./jun. 2020.
  • SILVA, A. S.; ALMEIDA, L. S. S.; GHISI, E. Decision-making process for improving thermal and energy performance of residential buildings: A case study of constructive systems in Brazil. Energy and Buildings, v. 128, p. 270-286, 2016.
  • SILVA, A. S.; GHISI, E. Análise de sensibilidade global dos parâmetros termofísicos de uma edificação residencial de acordo com o método de simulação do RTQ-R. Ambiente Construído , Porto Alegre, v. 13, n. 4, p. 135-148, out./dez. 2013.
  • SILVA, A. S.; GHISI, E. Estimating the sensitivity of design variables in the thermal and energy performance of buildings through a systematic procedure. Journal of Cleaner Production, v. 244, p. 118753, 2020.
  • SOBOL, I. M. Sensitivity estimates for non-linear mathematical models. Matem. Modelirovanie, v. 2, n. 1, p. 112-118, 1990.
  • TIAN, W. A review of sensitivity analysis methods in building energy analysis. Renewable and Sustainable Energy Reviews , v. 20, p. 411-419, 2013.
  • TIAN, W. et al. Building energy assessment based on a sequential sensitivity analysis approach. Procedia Engineering, v. 205, p. 1042-1048, 2017.
  • WANG, L.; MATHEW, P.; PANG, X. Uncertainties in energy consumption introduced by building operations and weather for a medium-size office building. Energy and Buildings, v. 53, p. 152-158, 2012.
  • YANG, S. et al. Comparison of sensitivity analysis methods in building energy assessment. Procedia Engineering , v. 146, p. 174-181, 2016.
  • YILDIZ, Y.; ARSAN, Z. D. Identification of the building parameters that influence heating and cooling energy loads for apartment buildings in hot-humid climates. Energy, v. 36, n. 7, p. 4287-4296, 2011.
  • 1
    FISCHER, S. R. A. The design of experiments. Hafner Pub, 1935.
  • Publication Dates

    • Publication in this collection
      05 Mar 2021
    • Date of issue
      Apr-Jun 2021

    History

    • Received
      21 July 2020
    • Accepted
      12 Dec 2020