
ARTIFICIAL NEURAL NETWORK AND WAVELET DECOMPOSITION IN THE FORECAST OF GLOBAL HORIZONTAL SOLAR RADIATION

Abstract

This paper proposes a method (denoted WD-ANN) that combines Artificial Neural Networks (ANN) and Wavelet Decomposition (WD) to generate short-term forecasts of global horizontal solar radiation, essential information for evaluating the electrical power obtained from the conversion of solar energy into electrical energy. The WD-ANN method consists of two basic steps: first, a level-p wavelet decomposition of the time series of interest is performed, generating p + 1 orthonormal wavelet components; second, these p + 1 wavelet components (generated in step 1) are inserted simultaneously into an ANN in order to generate short-term forecasts. The results show that the proposed WD-ANN method substantially improves performance over the traditional ANN method.

Keywords: wavelet decomposition; artificial neural networks; forecasts


1 INTRODUCTION

The conversion of solar energy into electrical energy is one of the most promising alternatives for generating electricity in a clean and renewable way. It can be done through large generating plants connected to the transmission system or by small generation units for isolated systems. The Sun delivers approximately 1.5 × 10¹⁸ kWh of energy to the Earth's atmosphere each year, but only a fraction of this energy reaches the Earth's surface, due to the reflection and absorption of sunlight by the atmosphere. One problem with renewable sources such as wind and solar energy is that their production depends on meteorological factors. In the case of solar energy in particular, the alternation of day and night, the seasons, the passage of clouds and rainy periods cause great variability and discontinuities in the production of electricity. There is also the need for devices capable of storing energy during the day in order to make it available during the night, such as battery banks or salt tanks (Wittmann et al., 2008). Thus, the safe and economical integration of alternative sources in the operation of the electric system depends on accurate predictions of energy production, so that operators can make decisions about the maintenance and dispatch of the generating units that feed the system.

Among the techniques employed in solar radiation forecasting, the following can be highlighted: ARIMA models (Perdomo et al., 2010), artificial neural networks (ANN) (Zervas et al., 2008; Yona & Senjyu, 2009; Deng et al., 2010; Yanling et al., 2012; Zhang & Behera, 2012), neuro-fuzzy systems (ANFIS) and the Kalman Filter (Chaabene & Ammar, 2008), and different ways of combining orthonormal wavelet bases and ANN (Cao et al., 2009; Zhou et al., 2011).

Wavelets have been used in the time series literature in combination with other types of predictive models, yielding significant gains in terms of modeling. In this context, wavelet theory provides an auxiliary pre-processing procedure for the series in question, which can generally be accomplished in two ways: by decomposition or by noise shrinkage of the time series to be modeled. Several studies have highlighted the gains from combining wavelet decomposition and/or shrinkage with neural networks, among which it is possible to mention: Krishna et al. (2011), who applied the combination to model river flow; Liu et al. (2010), Catalão et al. (2011) and Teixeira Junior et al. (2011), who modeled wind time series; Teixeira Junior et al. (2012), who worked with series of direct solar radiation; and Minu et al. (2010), who studied the time series of the number of terrorist attacks in the world.

In this article, a method (denoted WD-ANN) is proposed to generate short-term forecasts of global horizontal solar radiation, essential information for evaluating the electrical power generated from the conversion of solar energy into electrical energy. In summary, the forecasts of the WD-ANN method are obtained from the combined use of an ANN and a wavelet decomposition of level p. More specifically, it starts with the level-p wavelet decomposition (Faria et al., 2009; Teixeira Junior et al., 2011; Perdomo et al., 2010) of the time series of global horizontal solar radiation, generating p + 1 orthonormal wavelet components. These wavelet components are then used as the set of input patterns of an ANN, which is structured to generate short-term forecasts of global horizontal solar radiation.

In the computational experiments, the hourly time series of average global horizontal solar radiation (W/m²) obtained from the solarimetric stations of the SONDA Project INPE/CPTEC¹ (Pereira et al., 2006) were used, for 10 locations in Brazil: Brasília, Caicó, Campo Grande, Cuiabá, Florianópolis, Joinville, Natal, Palmas, Petrolina and São Martinho. The analysis is reported in detail only for Cuiabá. All time series cover exactly one year, but a different year in each location.

The paper is organized into six sections. Sections 2 and 3 introduce theoretical aspects of wavelet theory and neural networks, respectively. The WD-ANN method is detailed in Section 4. The computational experiments and their main results are presented in Section 5. Section 6 presents the conclusions of the research.

2 WAVELET THEORY

2.1 Hilbert Space, Orthonormal Basis and Fourier Series

According to Kubrusly (2001), a Hilbert space H is any linear space equipped with an inner product and complete with respect to the norm induced by that inner product. The collection $l^2$ of all square-summable infinite sequences of complex numbers, that is, $l^2 := \{f : \mathbb{Z} \to \mathbb{C} : \sum_{t \in \mathbb{Z}} |f(t)|^2 < \infty\}$, equipped with an inner product $\langle \cdot ; \cdot \rangle : l^2 \times l^2 \to \mathbb{C}$, or simply the pair $(l^2, \langle \cdot ; \cdot \rangle)$, is a particular case of a Hilbert space (Kubrusly, 2001). According to Kubrusly & Levan (2002), a subset $\{h_n\}_{n \in \mathbb{N}}$ of a Hilbert space H is an orthonormal basis of H if, and only if, it satisfies axioms (i), (ii) and (iii) below.

  1. (i) orthogonality: $\langle h_n, h_m \rangle = 0$ whenever $n \neq m$, where $n, m \in \mathbb{N}$;

  2. (ii) normality: $\|h_n\| = 1$, for all $n \in \mathbb{N}$;

  3. (iii) completeness: $\langle x, h_n \rangle = 0$ for all $n \in \mathbb{N}$ if, and only if, $x = 0$.

According to the Fourier series theorem (Kubrusly, 2001), if the subset $\{h_n\}_{n \in \mathbb{N}}$ is an orthonormal basis of H, then every $x \in H$ admits the single expansion in terms of this orthonormal basis given in (1):

$$x = \sum_{n \in \mathbb{N}} \langle x, h_n \rangle \, h_n. \qquad (1)$$

The expansion in (1) is called a Fourier series.

2.2 Wavelet Function

Consider a Hilbert space $(l^2, \langle \cdot ; \cdot \rangle)$. An element $\omega(\cdot) \in l^2$ is called a wavelet function if, and only if, the functions $\omega_{m,n}(\cdot) := 2^{m/2}\, \omega(2^m(\cdot) - n)$, with $n, m \in \mathbb{Z}$, form an orthonormal basis for the Hilbert space $(l^2, \langle \cdot ; \cdot \rangle)$. According to Levan & Kubrusly (2003), any function $f(\cdot)$ in $(l^2, \langle \cdot ; \cdot \rangle)$ admits the Fourier series expansion in terms of an orthonormal wavelet basis of $l^2$, as in (2):

$$f(\cdot) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \langle f, \omega_{m,n} \rangle \, \omega_{m,n}(\cdot), \qquad (2)$$

where m is called the scaling parameter and n the translation parameter (Ogden, 1997).

According to Levan & Kubrusly (2003), the projection of $f(\cdot)$ onto $\omega_{m,n}(\cdot)$ can be interpreted as a detail variation of $f(\cdot)$ at scaling m and translation n. According to Mallat (1998), the closed subspace (Kubrusly, 2001) $W_m(\omega) := \overline{\operatorname{span}}\{\omega_{m,n}(\cdot)\}_{n \in \mathbb{Z}}$ of $(l^2, \langle \cdot ; \cdot \rangle)$ is called the details subspace (at scaling m). In turn, the projection of $f(\cdot)$ onto the (closed) details subspace $W_m(\omega)$, denoted by $D_m f(\cdot)$, is defined by the partial sum shown in (3):

$$D_m f(\cdot) = \sum_{n \in \mathbb{Z}} \langle f, \omega_{m,n} \rangle \, \omega_{m,n}(\cdot). \qquad (3)$$

According to Levan & Kubrusly (2003), the projection $D_m f(\cdot)$ can be interpreted as the detail component of $f(\cdot)$, at scaling m, in $(W_m(\omega), \langle \cdot ; \cdot \rangle)$. As a result, given identity (1), it follows that $f(\cdot)$ can be interpreted as the sum of all its detail components $D_m f(\cdot)$, over all integer scalings m, in the closed subspace $\big(\bigoplus_{m \in \mathbb{Z}} W_m(\omega), \langle \cdot ; \cdot \rangle\big)$ of $(l^2, \langle \cdot ; \cdot \rangle)$. Tautologically, it follows that $\big(\bigoplus_{m \in \mathbb{Z}} W_m(\omega), \langle \cdot ; \cdot \rangle\big) = (l^2, \langle \cdot ; \cdot \rangle)$.

On the other hand, an element $\phi(\cdot) \in l^2$ is called a wavelet scaling function (or simply a scaling function) if, and only if, the functions $\phi_{m,n}(\cdot) := 2^{m/2}\, \phi(2^m(\cdot) - n)$, with $n, m \in \mathbb{Z}$, are such that $\langle \phi_{m',n'}(\cdot), \phi_{j,k}(\cdot) \rangle = 0$ whenever $m' = j$ and $n' \neq k$, and $\langle \phi_{m',n'}(\cdot), \phi_{j,k}(\cdot) \rangle \neq 0$ otherwise. According to Mallat (1998), the closed subspace $V_m(\phi) := \overline{\operatorname{span}}\{\phi_{m,n}(\cdot)\}_{n \in \mathbb{Z}}$ of $(l^2, \langle \cdot ; \cdot \rangle)$ is called the approximation subspace (at scaling m). The projection of $f(\cdot)$ onto the (closed) approximation subspace $V_m(\phi)$, denoted by $A_m f(\cdot)$, is defined by the sum described in (4):

$$A_m f(\cdot) = \sum_{n \in \mathbb{Z}} \langle f, \phi_{m,n} \rangle \, \phi_{m,n}(\cdot). \qquad (4)$$

According to Mallat (1998), $A_m f(\cdot)$ can be interpreted as the approximation component of $f(\cdot)$, at scaling m, in the subspace $(V_m(\phi), \langle \cdot ; \cdot \rangle)$ of $(l^2, \langle \cdot ; \cdot \rangle)$.

2.3 Wavelet Transform

A wavelet transform on $(l^2, \langle \cdot ; \cdot \rangle)$ is the inner product between a function $f(\cdot) \in l^2$ and a wavelet function $\omega_{m,n}(\cdot) \in W_m(\omega)$ or a scaling function $\phi_{m,n}(\cdot) \in V_m(\phi)$, with $(m, n) \in \mathbb{Z} \times \mathbb{Z}$. According to Mallat (1998), the wavelet transforms can be classified and grouped into two distinct sets: detail coefficients, denoted by $d_{m,n}$, and approximation coefficients, denoted by $a_{m,n}$. For each ordered pair $(m, n) \in \mathbb{Z} \times \mathbb{Z}$, the wavelet transforms $d_{m,n}$ and $a_{m,n}$ are defined, respectively, by $d_{m,n} := \langle f, \omega_{m,n} \rangle$ and $a_{m,n} := \langle f, \phi_{m,n} \rangle$.
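As a concrete illustration of these coefficients, the sketch below computes a single level of approximation and detail coefficients with the PyWavelets library (an assumption of this sketch; the paper's experiments use MATLAB). The signal is synthetic, and db38 is the wavelet adopted later in Section 5.

```python
# Minimal sketch: one level of wavelet transforms a_{m,n} and d_{m,n} computed
# with PyWavelets on a synthetic signal. Library, signal and mode are illustrative.
import numpy as np
import pywt

f = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)

# Approximation (scaling-function) and detail (wavelet-function) coefficients
a, d = pywt.dwt(f, 'db38', mode='periodization')
print(a.shape, d.shape)  # each set holds half as many coefficients as f has samples
```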

2.4 Wavelet Expansion

According to Levan & Kubrusly (2003), a chain $\{V_m(\phi)\}_{m \in \mathbb{Z}}$ of approximation subspaces of $(l^2, \langle \cdot ; \cdot \rangle)$ is called a wavelet multiresolution analysis (or simply a wavelet MRA), with scaling function $\phi(\cdot) \in l^2$, if the following conditions hold:

  1. (a) $V_m(\phi) \subset V_{m+1}(\phi)$, for all $m \in \mathbb{Z}$;

  2. (b) $\bigcap_{m \in \mathbb{Z}} V_m(\phi) = \{0\}$;

  3. (c) $\overline{\bigcup_{m \in \mathbb{Z}} V_m(\phi)} = l^2$;

  4. (d) $v \in V_m(\phi) \Leftrightarrow Dv \in V_{m+1}(\phi)$, for all $m \in \mathbb{Z}$, where D denotes the dyadic dilation operator; and

  5. (e) $\{\phi_{m,n}(\cdot)\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $V_m(\phi)$, for all $m \in \mathbb{Z}$.

In Kubrusly & Levan (2002), it is shown that the $l^2$ space can be orthogonally expanded as $l^2 = \bigoplus_{m \in \mathbb{Z}} W_m(\omega)$, and in Levan & Kubrusly (2003), it is shown, using the axioms of a wavelet MRA, that the identity $V_{m+1}(\phi) = V_m(\phi) \oplus W_m(\omega)$ holds for all $m \in \mathbb{Z}$. Based on these identities and on the Theorem of Orthogonal Structures (Kubrusly, 2001), it is shown in Kubrusly & Levan (2002) that, for every $m_0 \in \mathbb{Z}$, the $l^2$ space can be orthogonally expanded as in (5):

$$l^2 = V_{m_0}(\phi) \oplus \Big( \bigoplus_{m = m_0}^{\infty} W_m(\omega) \Big). \qquad (5)$$

As a result, it follows that $f(\cdot)$ has a (single) orthogonal decomposition on the Hilbert space $(l^2, \langle \cdot ; \cdot \rangle)$, as in (6):

$$f(\cdot) = A_{m_0} f(\cdot) + \sum_{m = m_0}^{\infty} D_m f(\cdot). \qquad (6)$$

Given the definitions of the wavelet components $A_{m_0} f(\cdot)$ and $D_m f(\cdot)$ and the identities (1) and (6), it follows that the Fourier series of the function $f(\cdot)$, on the Hilbert space $(l^2, \langle \cdot ; \cdot \rangle)$, in terms of the orthonormal wavelet basis, is given by (7):

$$f(\cdot) = \sum_{n \in \mathbb{Z}} a_{m_0,n}\, \phi_{m_0,n}(\cdot) + \sum_{m = m_0}^{\infty} \sum_{n \in \mathbb{Z}} d_{m,n}\, \omega_{m,n}(\cdot), \qquad (7)$$

where $a_{m_0,n} := \langle f, \phi_{m_0,n} \rangle$ and $d_{m,n} := \langle f, \omega_{m,n} \rangle$, with $m_0 \leq m < +\infty$ and $m_0 \in \mathbb{Z}$.
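A quick numerical check of the orthogonal expansion in (5)-(7) can be done with a discrete wavelet transform: decomposing a series and inverting the transform recovers it up to floating-point error. The sketch assumes PyWavelets; the series, wavelet and level are illustrative.

```python
# Sketch: the multilevel DWT coefficients {a_{m0,n}, d_{m,n}} reconstruct the
# original series, mirroring identity (7). All choices below are illustrative.
import numpy as np
import pywt

f = np.random.randn(512)
coeffs = pywt.wavedec(f, 'db4', mode='periodization', level=3)   # [cA3, cD3, cD2, cD1]
f_rec = pywt.waverec(coeffs, 'db4', mode='periodization')
print(np.allclose(f, f_rec))  # True: f equals the sum of its wavelet components
```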

3 ARTIFICIAL NEURAL NETWORKS

According to Haykin (2001), Artificial Neural Networks (ANN) are distributed parallel systems composed of simple processing units called artificial neurons. They are arranged in one or more layers interconnected by a large number of connections (synapses), which are generally unidirectional and carry weights that balance the inputs received by each neuron. The most common architecture of an ANN is the multilayer perceptron with three layers (input, hidden and output), as shown in Figure 1(a).

Figure 1
Multilayer perceptron artificial neural network. (a) ANN's Architecture; (b) Phases of backpropagation algorithm.

Analogously to the processing in the human brain (where synapses are reinforced or weakened), the weights of the connections between layers are adjusted during the learning process of an ANN. The first layer of the ANN is the input layer, the only one exposed to the input variables. This layer transmits the values of the input variables to the neurons of the hidden layer so that they can extract the relevant features (or patterns) of the input signals and transmit the results to the output layer. The number of neurons in each layer is defined empirically. The ANN's training consists of an iterative process to obtain the weights of the connections between processing units.

The main training algorithm is backpropagation, in which the weight adjustment occurs through an optimization process with two phases: forward and backward, as shown in Figure 1(b). In the forward phase, the response provided by the network for a given input pattern is calculated. In the backward phase, the deviation (error) between the desired response (target) and the response provided by the ANN is used to adjust the weights of the connections.

During the neural network training, the various input patterns and their corresponding desired outputs are presented to the ANN, and the weights of the synapses are corrected iteratively by the gradient descent algorithm in order to minimize the sum of squared errors (Haykin, 2001).
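To make the two training phases concrete, the sketch below runs a few iterations of forward/backward passes for a one-hidden-layer MLP with hyperbolic tangent activations and a linear output, minimizing the sum of squared errors by plain gradient descent. It is an illustration under assumed settings (synthetic data, arbitrary sizes and learning rate), not the Levenberg-Marquardt algorithm actually used in Section 5.

```python
# Sketch of backpropagation: the forward phase computes the network response,
# the backward phase propagates the error and adjusts the weights (gradient
# descent on the sum of squared errors). Data, sizes and rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # 100 input patterns, window of 10 past values
y = rng.normal(size=(100, 1))         # desired outputs (targets)

W1, b1 = 0.1 * rng.normal(size=(10, 19)), np.zeros(19)   # hidden layer (tanh)
W2, b2 = 0.1 * rng.normal(size=(19, 1)), np.zeros(1)     # output layer (linear)
lr = 0.01

for _ in range(200):
    # forward phase
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                    # deviation between response and target

    # backward phase
    grad_W2 = h.T @ err
    grad_b2 = err.sum(axis=0)
    delta_h = (err @ W2.T) * (1.0 - h ** 2)   # derivative of tanh
    grad_W1 = X.T @ delta_h
    grad_b1 = delta_h.sum(axis=0)

    W2 -= lr * grad_W2 / len(X); b2 -= lr * grad_b2 / len(X)
    W1 -= lr * grad_W1 / len(X); b1 -= lr * grad_b1 / len(X)

print(float(np.mean(err ** 2)))        # mean squared error after training
```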

Time series forecasting with an ANN starts with the assembly of the training patterns (input/output pairs), which depends on the choice of the time window size L (for the past values of the series and for the explanatory variables) and of the forecast horizon h. In an autoregressive process (linear or nonlinear), for example, the input pattern is formed only by past values of the series itself.

In turn, the desired output pattern is the value of the time series observed at the forecasting horizon. Figure 2 illustrates how the training set is generally constructed for a forecast based on the last four past values. Note that the construction of the network's training patterns consists of moving the input and output windows along the entire time series. Thereby, each pair of windows (input/output) serves as a training pattern and must be presented repeatedly until the learning algorithm converges.

Figure 2
Setting of the training set.
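A minimal sketch of the window construction illustrated in Figure 2, under the assumption of a one-step-ahead forecast with a window of four past values; the series and the helper name are hypothetical.

```python
# Sketch: slide an input window of L past values and an output window at
# horizon h along the series to form the training patterns. Illustrative only.
import numpy as np

def make_patterns(series, L=4, h=1):
    """Return (inputs, targets) pairs for forecasting series[t + h - 1] from L lags."""
    X, y = [], []
    for t in range(L, len(series) - h + 1):
        X.append(series[t - L:t])      # input window: L past values
        y.append(series[t + h - 1])    # output window: value at the forecast horizon
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)
X, y = make_patterns(series, L=4, h=1)
print(X[0], y[0])                      # [0. 1. 2. 3.] 4.0
```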

4 COMBINATION OF ARTIFICIAL NEURAL NETWORKS AND WAVELET DECOMPOSITION

The combination of an ANN and wavelet decomposition (WD) may be performed in many different ways. For instance, the wavelet decomposition can be applied to the time series, each resulting component series can be modeled by a traditional ANN, and the forecasts of the component series can then be added to obtain the forecast of the original time series. Another option is to use wavelet functions (normalized in the range [0, 1]) as activation functions of the neurons of a traditional ANN and to feed it with input patterns decomposed through WD.

In this article, however, a combining method (denoted WD-ANN) was chosen in which the wavelet components of the time series are the input patterns of a feedforward MLP ANN whose output provides the time series forecast (according to the diagram of Fig. 3). Basically, the proposed approach can be divided into steps (1) [described in Section 4.1] and (2) [described in Section 4.2]:

  1. (1) To perform the wavelet decomposition of level p (Reis & Silva, 2004; Lei & Ran, 2008; Teixeira Junior et al., 2011) of a time series f(.); and

  2. (2) To use the wavelet components of f(.) (derived from step 1) as inputs of an ANN in order to generate the time series forecasts.

Figure 3
Combination of wavelet decomposition + ANN.

4.1 Wavelet decomposition of level p

Let $f(\cdot)$ be a time series in $(l^2, \langle \cdot ; \cdot \rangle)$ and let $\{\phi_{m_0,n}(\cdot)\}_{n \in \mathbb{Z}} \cup \{\omega_{m,n}(\cdot)\}_{m \geq m_0,\, n \in \mathbb{Z}}$ be an orthonormal wavelet basis of the Hilbert space $(l^2, \langle \cdot ; \cdot \rangle)$. Based on identity (7), the wavelet decomposition of level p (Teixeira Junior et al., 2011) of $f(\cdot)$, where p is a natural number in the interval $1 < p < \infty$, is represented by the (approximated) Fourier series described in (8):

$$f(\cdot) \approx \sum_{n = 0}^{n_{m_0}} a_{m_0,n}\, \phi_{m_0,n}(\cdot) + \sum_{m = m_0}^{m_0 + (p-1)} \sum_{n = 0}^{n_m} d_{m,n}\, \omega_{m,n}(\cdot). \qquad (8)$$

The optimal values of the parameters $m_0$, $n_{m_0}$ and $n_m$ are those that minimize the Euclidean distance (Kubrusly, 2001) between the time series $f(\cdot)$ and its approximation in (8). The wavelet components $A_{m_0} f(\cdot) := \sum_{n = 0}^{n_{m_0}} a_{m_0,n}\, \phi_{m_0,n}(\cdot)$ and $D_m f(\cdot) := \sum_{n = 0}^{n_m} d_{m,n}\, \omega_{m,n}(\cdot)$ are classified, respectively, as the approximation component (at scale $m_0$) and the detail component (at scale m) of the time series $f(\cdot)$ in $(l^2, \langle \cdot ; \cdot \rangle)$. Given expansion (8), it follows that the time series can be expanded orthogonally on $(l^2, \langle \cdot ; \cdot \rangle)$, as in (9):

$$f(\cdot) \approx A_{m_0} f(\cdot) + \sum_{m = m_0}^{m_0 + (p-1)} D_m f(\cdot), \qquad (9)$$

where $A_{m_0} f(\cdot)$ is taken for a fixed integer $m_0$, and $D_m f(\cdot)$ is taken for integers m in the interval $m_0 \leq m \leq m_0 + (p - 1)$, p being the level of the wavelet decomposition.
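The sketch below shows one way to obtain the p + 1 wavelet components of (9) in practice, assuming PyWavelets: the series is decomposed to level p and each group of coefficients is inverted separately, so that the components add back to the series. Wavelet, level and signal are illustrative.

```python
# Sketch of the level-p decomposition (8)-(9): one approximation component and
# p detail components whose sum recovers the series. Choices are illustrative.
import numpy as np
import pywt

def wavelet_components(f, wavelet='db38', p=2):
    """Return the p + 1 additive components [A_{m0}, D_{m0}, ..., D_{m0+p-1}] of f."""
    coeffs = pywt.wavedec(f, wavelet, mode='periodization', level=p)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet, mode='periodization'))
    return comps

f = np.random.randn(1024)
comps = wavelet_components(f, p=2)
print(len(comps), np.allclose(sum(comps), f))   # 3 components; their sum equals f
```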

4.2 Submission of the wavelet components to the ANN

Take a feedforward MLP ANN. The set of temporal signals arising from the p + 1 wavelet components of a time series [Section 4.1] constitutes the set of input patterns for the training process of the feedforward MLP ANN.

Considering a window of L past values, the time series forecast (the output of the ANN) for each time t' (in the training, validation and test samples) is obtained from the set of input patterns described in (10):

$$\big\{ A_{m_0} f(t' - l),\; D_m f(t' - l) \;:\; l = 1, \ldots, L;\; m = m_0, \ldots, m_0 + (p - 1) \big\}. \qquad (10)$$
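Putting steps (1) and (2) together, the sketch below feeds a window of L past values of each of the p + 1 wavelet components into a feedforward MLP that forecasts the series one step ahead. It assumes PyWavelets and scikit-learn's MLPRegressor as stand-ins for the MATLAB toolchain of Section 5; the series, wavelet (db4 instead of db38) and hyperparameters are illustrative.

```python
# Sketch of the WD-ANN pipeline: level-p decomposition, lagged components as
# input patterns, MLP forecast one step ahead. All settings are illustrative.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_components(f, wavelet='db4', p=2):
    """Level-p decomposition of f into p + 1 additive components (Section 4.1)."""
    coeffs = pywt.wavedec(f, wavelet, mode='periodization', level=p)
    return [pywt.waverec([c if j == i else np.zeros_like(c)
                          for j, c in enumerate(coeffs)],
                         wavelet, mode='periodization')
            for i in range(len(coeffs))]

def wd_ann_patterns(components, series, L=10):
    """Input pattern at time t': the last L values of every component (Section 4.2)."""
    X = np.array([np.concatenate([c[t - L:t] for c in components])
                  for t in range(L, len(series))])
    y = np.asarray(series[L:])
    return X, y

f = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
X, y = wd_ann_patterns(wavelet_components(f, p=2), f, L=10)

model = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                     max_iter=2000, random_state=0)
model.fit(X[:800], y[:800])               # training sample
print(model.score(X[800:], y[800:]))      # R^2 on the held-out patterns
```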

5 COMPUTATIONAL EXPERIMENT

In the computational experiments, the hourly time series of global horizontal solar radiation during the period from January to December were considered. The daily profiles of solar radiation at the ten locations, for different years, are shown in Figure 4.

Figure 4
Daily profiles of solar radiation.

The sample used in the ANN's training contains 7008 observations of solar radiation, while the following 876 observations belong to the validation sample and the last 876 to the test sample. The training of the ANN was performed in the MATLAB software. In all simulations, the input patterns were normalized by the premnmx transformation and the training algorithm used was Levenberg-Marquardt.
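A minimal sketch of this setup, assuming the split sizes quoted above and a premnmx-style rescaling of the inputs to [-1, 1]; the series is synthetic, and fitting the scaling on the training sample only is an assumption of the sketch, not something stated in the paper.

```python
# Sketch: 7008/876/876 split and min-max normalization to [-1, 1] (the range
# produced by MATLAB's premnmx), fitted on the training sample. Illustrative.
import numpy as np

series = np.random.rand(8760)                               # one year of hourly values
train, valid, test = series[:7008], series[7008:7884], series[7884:]

lo, hi = train.min(), train.max()
scale = lambda x: 2.0 * (x - lo) / (hi - lo) - 1.0
train_n, valid_n, test_n = scale(train), scale(valid), scale(test)
print(train_n.min(), train_n.max())                         # -1.0 1.0
```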

The ANN (feedforward MLP) with the best fit to the series of global horizontal solar radiation was chosen. The yearly average and standard deviation of the ten series are presented in Table 1. The standard deviation provides a measure of the yearly variability of the global horizontal solar radiation.

Table 1
Mean and standard deviation of the global horizontal solar radiation

This paper reports the detailed results for Cuiabá, whose radiation time series in each month is illustrated in Figure 5.

Figure 5
(a), (b) Global horizontal solar radiation at Cuiabá.

For the time series from Cuiabá, the best identified ANN [Section 5.1] has the following topological structure: input window size equal to 10; one hidden layer composed of 19 artificial neurons with hyperbolic tangent activation function; and one neuron in the output layer with linear activation function (Haykin, 2001).

Then, Cuiabá's series of global horizontal solar radiation underwent a wavelet decomposition of level two (i.e., three wavelet components). For this, the orthonormal Daubechies wavelet basis of order 38 (or simply db38) was considered (Daubechies, 1988). After this pre-processing of the time series, the best ANN with wavelet inputs [Section 5.2] has the following topological structure: input window size equal to 10; one hidden layer composed of 12 artificial neurons with hyperbolic tangent activation function; and one neuron in the output layer with linear activation function (Haykin, 2001).

5.1 Results of traditional ANN for Cuiabá's time series

Figure 6 shows the scatter plots between the time series of global horizontal solar radiation and its forecasts, for the validation and test samples, using a traditional MLP network. Note that the closer the points lie to the 45° line, the greater the correlation between the time series of solar radiation and its respective one-step-ahead forecasts, for the validation and test samples, and consequently, the better the forecasts.

Figure 6
Scatter plot between observed and forecasted values by ANN method.

5.2 Results of ANN with wavelet entrance for Cuiabá's time series

Figure 7 presents the db38 wavelet components resulting from the level-two wavelet decomposition of the time series of global horizontal solar radiation.

Figure 7
Wavelet components of normalized time series of global horizontal solar radiation.

It is noteworthy that the wavelet decompositions of the signals in the training, validation and test samples were performed individually. Figure 8 shows the scatter plots of the observations of global horizontal solar radiation and their forecasts by the ANN with wavelet inputs, for the validation and test samples.

Figure 8
Scatter plot between observed and forecasted values by WD-ANN method.

5.3 Modeling for the 10 time series

The results for the 10 modeled time series are shown in Table 2. It presents the best wavelet family chosen for each WD-ANN model, and the best window length and number of neurons in the hidden layer for each time series. For both models (ANN and WD-ANN), the Root Mean Square Error (RMSE) and the coefficient of determination R² were calculated for the training, validation and test periods. Almost all statistics for these periods show lower values of RMSE and higher values of R² for the WD-ANN models when compared to the ANN models and the naïve predictor.

Table 2
Types of ANN, RMSE and R² for each time series' modeling.
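For reference, the two statistics reported in Table 2 can be computed as in the sketch below; the observed/forecast vectors are illustrative, not values from the paper.

```python
# Sketch: RMSE and coefficient of determination R^2 for a pair of observed and
# forecast vectors. The numbers are illustrative.
import numpy as np

def rmse(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    return np.sqrt(np.mean((obs - pred) ** 2))

def r2(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([100.0, 250.0, 400.0, 320.0])
pred = np.array([110.0, 240.0, 390.0, 335.0])
print(rmse(obs, pred), r2(obs, pred))
```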

6 CONCLUSIONS

In this paper, a method (denoted WD-ANN) was proposed as an alternative approach that combines a feedforward MLP ANN with wavelet decomposition to generate short-term forecasts of global horizontal solar radiation.

The forecasts derived from the WD-ANN method showed a significantly higher correlation with the observations of the global horizontal solar radiation time series when compared with the forecasts arising from the traditional ANN (i.e., without considering the wavelet signals as input patterns). The WD-ANN method also showed lower values of RMSE for almost all periods of interest.

Finally, it should be noted that, to achieve a proper and efficient modeling, it is important to consider how the time series of interest is presented to the predictive method (for instance, an ANN). In other words, the choice of the predictor's preprocessing of the input data (e.g., wavelet decomposition) is as important as the choice of the predictor itself. In this perspective, the WD-ANN method addresses both aspects, making it more generic and sophisticated for time series modeling.

  • 1
    CAO S, WENG W, CHEN J, LIU W, YU G & CAO J. 2009. Forecast of Solar Irradiance Using Chaos Optimization Neural Networks. Power and Energy Engineering Conference, Asia-Pacific, 21-31, Mar.
  • 2
    CATALÃO JPS, POUSINHO HMI & MENDES VMF. 2011. Short-Term Wind Power Forecasting in Portugal by Neural Networks and Wavelet Transform. Renewable Energy, 36: 1245-1251.
  • 3
CHAABEN EM & AMMAR BM. 2008. Neuro-Fuzzy Dynamic Model with Kalman Filter to Forecast Irradiance and Temperature for Solar Energy Systems. Renewable Energy, 33(7): 1435-1443.
  • 4
    DAUBECHIES I. 1988. Orthonormal Bases of Compactly Supported Wavelet. Communications Pure and Applied Math, 41(7): 909-996.
  • 5
DENG F, SU G, LIU C & WANG Z. 2010. Global Solar Radiation Modeling Using The Artificial Neural Network Technique. Power and Energy Engineering Conference, Chengdu, Asia-Pacific, 28-31, Mar.
  • 6
FARIA DL, CASTRO R, PHILIPPAR TC & GUSMÃO A. 2009. Wavelet Pre-Filtering in Wind Speed Prediction. Power Engineering, Energy and Electrical Drives, POWERENG, International Conference, Lisboa, Portugal, 19-20, Mar.
  • 7
    HAYKIN SS. 2001. Redes Neurais Princípios e Aplicações, 2a. edição. Porto Alegre.
  • 8
    KRISHNA B, SATYAJI RAO YR & NAYAK PC. 2011. Time Series Modeling of River Flow Using Wavelet Neural Networks. Journal of Water Resource and Protection, 3: 50-59.
  • 9
    KUBRUSLY CS. 2001. Elements of Operator Theory. Boston: Birkhäuser.
  • 10
    KUBRUSLY CS & LEVAN N. 2002. Dual-Shift Decomposition of Hilbert Space. Semigroups of Operators: Theory and Application 2, 145-157.
  • 11
LEI C & RAN L. 2008. Short-Term Wind Speed Forecasting Model for Wind Farm Based on Wavelet Decomposition. DRPT2008, Nanjing, China, 6-9, Apr.
  • 12
    LEVAN N & KUBRUSLY CS. 2003. A Wavelet "Time-Shift-Detail" Decomposition. Mathematics and Computers in Simulation, 63(2): 73-78.
  • 13
    LIU H, TIAN HQ, CHEN C & LI Y. 2010. A Hybrid Statistical Method to Predict Wind Speed and Wind Power. Renewable Energy, 35: 1857-1861.
  • 14
MALLAT S. 1998. A Wavelet Tour of Signal Processing. Academic Press, San Diego.
  • 15
    MINU KK, LINEESH MC & JESSY JOHN C. 2010. Wavelet Neural Networks for Nonlinear Time Series Analysis. Applied Mathematical Sciences, 4(50): 2485-2495.
  • 16
    OGDEN RT. 1997. Essential Wavelet for Statistical Applications and Data Analysis. Birkhäuser, Boston.
  • 17
    PERDOMO R, BANGUERO E & GORDILLO G. 2010. Statistical Modeling for Global Solar Radiation Forecasting in Bogotá. Photovoltaic Specialists Conference (PVSC), Honolulu, HI, 20-25, Jun.
  • 18
    PEREIRA EB, MARTINS FR, ABREU, SL & RUTHER R. 2006. Atlas Brasileiro de Energia Solar. São José dos Campos: INPE.
  • 19
    REIS AR & SILVA APA. 2004. Aplicação da Transformada Wavelet Discreta na Previsão de Carga de Curto Prazo via Redes Neurais. Revista Controle & Automação, 15(1): 101-108.
  • 20
    TEIXEIRA JUNIOR LA, PESSANHA JFM & SOUZA RC. 2011. Análise Wavelet e Redes Neurais Artificiais na Previsão da Velocidade de Vento. In: XLIII Simpósio Brasileiro de Pesquisa Operacional, Ubatuba, São Paulo, 15-18, Aug.
  • 21
    TEIXEIRA JUNIOR LA, PESSANHA JFM, MENEZES ML, CASSIANO KM & SOUZA RC. 2012. Redes Neurais Artificiais e Decomposição Wavelet na Previsão da Radiação Solar Direta. In: Simpósio Brasileiro de Pesquisa Operacional, Rio de Janeiro, 24-28, Sep.
  • 22
    WITTMANN M, BREITKREUZ H, SCHROEDTER-HOMSCHEIDT S & ECK M. 2008. Case Studies on the Use of Solar Irradiance Forecast for Optimized Operation Strategies of Solar Thermal Power Plants. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 1(1): 18-27.
  • 23
    YANLING G, CHANGZHENG C & BO Z. 2012. Blind Source Separation for Forecast of Solar Irradiance. International Conference on Intelligent System Design and Engineering Application, Sanya, Hainan, 6-7, Jan.
  • 24
    YONA A & SENJYU T. 2009. One-Day-Ahead 24-Hours Thermal Energy Collection Forecasting Based on Time Series Analysis Technique for Solar Heat Energy Utilization System. Transmission & Distribution Conference & Exposition: Asia and Pacific, Seoul, 26-30, Oct.
  • 25
    ZERVAS PL, SARIMVEIS H, PALYVOS JA & MARKATOS NCG. 2008. Prediction of Daily Global Solar Irradiance on Horizontal Surfaces Based on Neural-Network Techniques. Renewable Energy, 33(8): 1796-1803.
  • 26
ZHANG N & BEHERA PK. 2012. Solar Radiation Prediction Based on Recurrent Neural Networks Trained by Levenberg-Marquardt Backpropagation Learning Algorithm. Innovative Smart Grid Technologies (ISGT). Washington, DC, 16-20, Jan.
  • 27
ZHOU H, SUN W, LIU D, ZHAO J & YANG N. 2011. The Research of Daily Total Solar-Radiation and Prediction Method of Photovoltaic Generation Based on Wavelet-Neural Network. Power and Energy Engineering Conference (APPEEC), Wuhan, 25-28, Mar.
  • 1
    These time series can be found in <http://sonda.ccst.inpe.br/infos/index.html>.

Publication Dates

  • Publication in this collection
    Jan-Apr 2015

History

  • Received
    23 Jan 2013
  • Accepted
    07 May 2014