
Comparison of VaR Models to the Brazilian Stock Market Under the Hypothesis of Serial Independence in Higher Orders: Are GARCH Models Really Indispensable?

ABSTRACT

Our objective in this article was to verify which models for Value at Risk (VaR), among those that do not consider conditional volatility (Extreme Value Theory and traditional Historical Simulation) and those that do (GARCH and IGARCH), are adequate for the main index of the Brazilian stock market, the IBOVESPA. For this purpose, backtests of adherence and of first- and higher-order independence were implemented for the four models mentioned, over forecast horizons of 1 and 10 days. The contribution lies in criteria more rigorous than those used in the literature for validating VaR models, since we performed backtests of higher-order violation independence over forecast horizons of 10 days. The results show that only GARCH-family models were adequate. It is therefore recommended that entities of the National Financial System that hold relevant positions in the Brazilian stock market use internal risk models based on conditional volatility, in order to minimize the occurrence of violation clusters.

Keywords:
Value at Risk; Clusters of Violations; IBOVESPA

RESUMO

O objetivo neste artigo foi verificar quais modelos para o VaR, dentre aqueles que não consideram a volatilidade condicional (Teoria dos Valores Extremos e a tradicional Simulação Histórica), e os que a consideram (GARCH e IGARCH), são adequados para o principal índice do mercado de ações brasileiro, o IBOVESPA. Para isso, foram considerados testes de aderência, independência de primeira ordem e de ordens superiores sobre os quatro modelos citados, para horizontes de projeção de 1 e de 10 dias. A contribuição encontra-se nos critérios mais rigorosos que os utilizados pela literatura para adequação de modelos VaR, incluindo testes de independência de ordens superiores e horizontes de previsão de 10 dias. Os resultados mostram que somente modelos da família GARCH foram adequados. Sugere-se então às entidades do Sistema Financeiro Nacional que tenham aplicações relevantes no mercado de ações brasileiro a utilização de modelos internos de risco que considerem a volatilidade condicional, de modo a minimizar a ocorrência de clusters de violações.

Palavras-chave:
Valor em Risco; Clusters de violações; IBOVESPA

1. INTRODUCTION

The objective of this article was to verify which models for Value at Risk (VaR), among those that consider the conditional volatility of returns and those that do not, are suitable for the main index of the Brazilian stock market, the IBOVESPA. Herein, conditional volatility is understood as the conditional variance of IBOVESPA returns. The term conditional variance indicates that this variance, at a given instant in time, can be modeled as a variable dependent on covariates such as the variances of past instants.

The literature on the subject has directed its efforts to testing different models for the VaR, considering, in addition to adherence (unconditional coverage), the independence of their violations (conditional coverage). The latter has become an important concern not only for managers of financial institutions but also for regulatory bodies in the international environment, since the occurrence of clusters of violations (large unprovisioned losses occurring in succession) can lead to the bankruptcy of these institutions and the risk of a systemic financial market crisis (Christoffersen & Pelletier, 2004). For other entities of the National Financial System, such as investment funds, pension funds and insurance companies, which hold significant portions of their investments in the stock market, the use of internal stock risk assessment models is important to ensure the solvency, competitiveness and sustainability of their business, as demonstrated by Chan (2010) in a study on internal risk models and regulatory capital in the context of the Brazilian insurance market.

Similarly, the choice of an adequate internal risk model is highly relevant for all entities of the National Financial System that hold relevant positions in the stock market, as well as for the regulatory environment. Thus, the literature on the subject has presented comparisons of the performance of different models for the VaR, considering unconditional and conditional coverage backtests. However, this same literature has deemed certain models suitable without using backtests to verify whether violations of orders higher than 1 are independent. Some international examples are Berkowitz and O’Brien (2001), Bali (2003) and Tolikas (2008), and, more recently in Brazil, Godeiro (2014).

According to Berkowitz, Christoffersen and Pelletier (2008), it is standard practice in financial institutions to use Historical Simulation methods to calculate the VaR. According to Tolikas (2008), these models are preferred because financial institutions tend to favor VaR models that generate estimates with low variability, so that they are not forced to sell assets or change their investment strategies often. However, the use of traditional methods such as Historical Simulation (HS) ignores a long history of studies in the literature on the conditional distribution of financial asset returns (Christoffersen & Pelletier, 2004). Moreover, such models have not been able to accurately predict volatility shocks such as those of the subprime financial crisis in 2008 and the Greek crisis in 2010.

The models that do not consider conditional volatility used in this work were Historical Simulation and Extreme Value Theory (EVT). Those that consider the conditional volatility of asset returns were the GARCH and IGARCH models. All models were estimated with projection horizons of 1 and 10 days on a series of daily log-returns of the IBOVESPA for the period from January 2, 2002 to July 11, 2017, totaling 3,845 observations. We then performed tests of unconditional and conditional coverage, including the possibility of dependence of violations of orders higher than 1, which has not been taken into account by the Brazilian Central Bank, the body that regulates the calculation of the VaR and the performance of backtesting in Brazil.

The results show that only the models that consider conditional volatility (GARCH and IGARCH) with an asymmetric Student’s t-distribution did not reject the null hypotheses of adherence and of first- and higher-order independence, for forecast horizons of both 1 and 10 days for the Brazilian stock market. Given these results, we suggest that entities of the National Financial System that hold relevant positions in the stock market, but which do not yet include the possibility of dependence of orders greater than 1 in their backtesting, review their internal risk models from this perspective, especially if their models do not consider the conditional volatility of their asset portfolio returns.

This work is divided into five sections, including this introduction. The second section presents a review of the literature on the subject. The third section presents the calculation methodologies for the VaR estimation and for the implementation of the adherence and independence tests used. In the fourth section, empirical results obtained by applying the methods studied in section three to IBOVESPA log-returns are presented and discussed. The fifth section presents conclusions and recommendations.

2. THEORETICAL FRAMEWORK

The risk analysis literature defines the VaR as the largest potential loss of a position or portfolio that can occur with a certain probability α over a defined time horizon (Tardivo, 2002).

According to Russon and Tobin (2008), there are three main methodological categories for the calculation of the VaR: the historical, the parametric and the simulated, the latter performed through Monte Carlo simulations. An example of historical VaR is the Historical Simulation method, while methods such as RiskMetrics and ARMA-GARCH are examples of parametric VaR. VaR models estimated by EVT are examples of semi-parametric VaR models, presented in detail by Bali (2003), Tolikas (2008) and Morettin (2011).

The literature on VaR has focused on the comparison between different methods for its calculation, taking as reference for the comparisons the results obtained by applying tests of adherence and independence to the observed violations. Some examples are found in the studies by Tolikas (2008), Ferreira (2013) and Godeiro (2014), among others.

Considering that in moments of financial crisis the distribution of asset returns has heavier tails than the normal distribution, studies such as those by Bali (2003) and Tolikas (2008) use EVT to model the tails of the returns and compare the VaR performance with that of GARCH-family methods and of traditional ones such as Historical Simulation. The results obtained by Tolikas (2008) show a better performance of EVT at coverage levels as high as 99.9% in times of crisis, compared with traditional methods.

Applications of the VaR in the Brazilian context can be found in Ferreira (2013). This author uses 35 Brazilian financial series of log-returns, including five exchange-rate series against the Brazilian Real (BRL) and three interest rate curves with ten vertices each. The author used the following models to calculate the VaR: IGARCH(1,1), the GARCH(m,n) family with innovations following normal and Student’s t distributions, and Historical Simulation. To evaluate these models, he implemented Kupiec’s (1995) test, Christoffersen’s (1998) test and an independence test based on violation durations (Christoffersen & Pelletier, 2004). A major disadvantage of this latter test, for empirical observations, is that log-return samples of significant size often generate small series of durations, impairing the consistency of the results obtained.

Another application of the VaR in the Brazilian context can be found in Godeiro (2014), who calculates the VaR of three distinct portfolios through models of the GARCH(m,n) family, with innovations following normal and Student’s t distributions, and by means of Monte Carlo simulations. Each portfolio consists of five shares traded on the São Paulo Stock Exchange (B3). The author also uses Kupiec’s (1995) and Christoffersen’s (1998) backtests for the adherence and independence assumptions of the violations associated with the estimated VaR models.

The performance of the models tested in all the above studies only takes into account adherence and independence for the 1-day VaR, despite the obligation imposed by regulators to calculate the 10-day VaR. In addition, in the studies comparing VaR models that consider conditional volatility (GARCH and IGARCH) with models that do not (Historical Simulation and EVT, for example), independence tests of orders greater than 1 are not performed.

To understand and implement the adherence and independence tests, we used the works by Kupiec (1995), Christoffersen (1998) and Berkowitz et al. (2008). Kupiec (1995) presents an adherence test for VaR models, testing whether the percentage of violations is statistically equal to the theoretical probability of violations in the model; Christoffersen (1998) proposes a joint test of adherence and first-order independence of the violations by means of Markov chains; and Berkowitz et al. (2008) propose a test for dependence of orders greater than 1, by means of a Ljung-Box (LB) test on the autocorrelations of the violations centered around their mean. Next, we present the methodology applied in the study.

3. METHODOLOGY

To test the suitability of the VaR models with horizons of 1 and 10 days, we calculated the log-returns from the daily IBOVESPA closing series, from January 2, 2002 to July 11, 2017, available in the Economática® database. The data used allowed us to obtain a series of 3,845 log-returns. The VaR of 1 and 10 days was then calculated by means of the IGARCH(1,1), Historical Simulation, GARCH(m,n) and EVT models, all considering the investment of one monetary unit of capital (C=1), after which the adherence and independence backtests of Kupiec (1995), Christoffersen (1998) and the LB test proposed by Berkowitz et al. (2008) were applied to the out-of-sample log-return observations.

The IGARCH(1,1), Historical Simulation and GARCH(m,n) models were estimated with moving windows of daily IBOVESPA log-return observations of sizes T=250, 500, 1000 and 1500, in order to identify the impact of sample size on the quality of the estimated models. Thus, given the series of 3,845 log-return observations, for the 1-day VaR we performed 3,595 estimates for T=250, 3,345 for T=500, 2,845 for T=1000 and 2,345 for T=1500. For the 10-day VaR, we conducted 3,586, 3,336, 2,836 and 2,336 estimates for T=250, 500, 1000 and 1500, respectively. In the VaR models calculated through EVT, moving windows of T=2100 were used, because this model depends on larger samples for consistent estimates of its parameters. Thus, each EVT model generated 1,745 and 1,736 estimates for the VaR of 1 and 10 days, respectively.

For all VaR estimates, we used the log-returns within the sample, defined by $r_t = \log(P_t / P_{t-1})$, where t is the index of the period in days and $P_t$ the asset price in period t. For the 1-day backtests, we used the out-of-sample log-returns $r_{t+1}$, while for the 10-day backtests, we used the out-of-sample accumulated log-returns defined by $\sum_{j=1}^{10} r_{t+j}$. All procedures were performed in R.
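As an illustration, a minimal R sketch of this data preparation step is shown below; the file and column names are hypothetical, since the series was obtained from the Economática® database.

```r
# Minimal sketch of the data preparation (hypothetical file/column names):
# daily log-returns and the 10-day accumulated out-of-sample returns.
prices <- read.csv("ibovespa_close.csv")$close      # hypothetical input file
r <- diff(log(prices))                               # r_t = log(P_t / P_{t-1})

# accumulated out-of-sample log-returns: sum of r_{t+1}, ..., r_{t+10}
cum10 <- sapply(seq_len(length(r) - 10),
                function(t) sum(r[(t + 1):(t + 10)]))
```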

3.1 Estimation of the VaR by the IGARCH(1,1) Model

The method initially known as RiskMetrics corresponds to the estimation of an IGARCH(1,1) (Integrated GARCH) model, which assumes that the returns on an asset or portfolio of assets follow a normal distribution and have a conditional variance described by equation 1 (Morettin, 2011). However, distributions that consider heavier tails than the normal and also asymmetry of the log-returns may be considered.

$$\sigma_t^2 = \lambda \sigma_{t-1}^2 + (1 - \lambda) r_{t-1}^2; \quad t = 1, \ldots, T; \; 0 < \lambda < 1 \qquad (1)$$

In which $\sigma_t^2$ is the conditional variance of the return on an asset in period t and T is the number of observations. Setting $\sigma_1^2 = \mathrm{Var}(r_t)$, which corresponds to the unconditional variance of the returns, 999 processes were simulated in R for $\lambda = 0.001, 0.002, \ldots, 0.999$, in order to obtain the respective mean squared error (MSE) of each fit, described by the following equation:

$$MSE = \frac{\sum_{t=1}^{T} (r_t^2 - \sigma_t^2)^2}{T} \qquad (2)$$

The parameter λ which minimizes the MSE will be used in equation 1 to make estimates of the conditional variance of returns. The estimation of VaR for k periods ahead is done by means of the following equation:

$$VaR[k] = [q(p)]\, \sqrt{k}\, \hat{\sigma}_t\, C \qquad (3)$$

Where k is the number of days ahead for the VaR calculation, $q(p)$ is the p-quantile of the probability distribution used, in which $p = 1 - \alpha$, and $\hat{\sigma}_t$ is the square root of the conditional variance estimated at time t. The p-quantiles were obtained for $\alpha = 1\%,\ 0.5\%,\ 0.25\%$ and $0.1\%$, for the normal and Student’s t distributions. For the latter, the number v of degrees of freedom was obtained by maximizing the log-likelihood function of the standard Student’s t distribution adjusted to the series of log-returns. This function is given by $l(v, \mu, \sigma \,|\, r)$, represented in equation 4, in which T is the number of observations in the sample, r is the vector of log-returns, μ is the location parameter, σ the scale parameter and $\log\Gamma(\cdot)$ represents the natural logarithm of the Gamma function.

$$l(v, \mu, \sigma \,|\, r) = T\left[\log\Gamma\!\left(\frac{v+1}{2}\right) - \log\Gamma\!\left(\frac{v}{2}\right) - \log\sigma - \frac{1}{2}\ln(\pi v)\right] - \frac{(v+1)}{2}\sum_{t=1}^{T}\log\left[1 + \frac{1}{v}\left(\frac{r_t - \mu}{\sigma}\right)^{2}\right] \qquad (4)$$
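A minimal R sketch of this estimation step is given below, assuming `r` holds a moving window of T log-returns. The normal quantile is used for illustration (the quantile of a fitted Student’s t could be substituted), and the lower-tail quantile is taken so that the VaR of a long position is negative.

```r
# Sketch of the IGARCH(1,1)/RiskMetrics estimation (equations 1-3).
ewma_var <- function(r, lambda) {
  s2 <- numeric(length(r))
  s2[1] <- var(r)                                   # sigma_1^2 = unconditional variance
  for (t in 2:length(r)) s2[t] <- lambda * s2[t - 1] + (1 - lambda) * r[t - 1]^2
  s2
}

# choose lambda on the grid 0.001, ..., 0.999 by minimizing the MSE of eq. 2
lambdas <- seq(0.001, 0.999, by = 0.001)
mse <- sapply(lambdas, function(l) mean((r^2 - ewma_var(r, l))^2))
lambda_star <- lambdas[which.min(mse)]

sigma_t <- sqrt(tail(ewma_var(r, lambda_star), 1))  # conditional std. deviation at time T

# VaR of 1 and 10 days (eq. 3) at the 99% coverage level (alpha = 1%), C = 1
alpha <- 0.01
VaR_1  <- qnorm(alpha) * sqrt(1)  * sigma_t
VaR_10 <- qnorm(alpha) * sqrt(10) * sigma_t
```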

3.2 Estimation of the VaR by Historical Simulation

According to Berkowitz et al. (2008), the VaR by Historical Simulation is calculated simply by taking the empirical p-quantile of the last T observed days and multiplying it by the square root of the number of days in the projection horizon (k). Thus, the VaR calculated by the Historical Simulation method, with coverage level p and time horizon k, is given by equation 5:

$$VaR_p[k] = C\, q(p)\, \sqrt{k} \qquad (5)$$

The estimation of quantiles is a non-parametric alternative for the calculation of the VaR (Morettin, 2011), that is, no assumption is made about the probability distribution of the log-returns, only that it will remain the same during the forecast period. The estimator $q(p)$ of the p-quantile is a consistent estimator for the parameter $Q(p)$ and is given by equation 6:

$$q(p) = \begin{cases} r_{(j)}, & \text{if } p = p_j = \dfrac{j - 0.5}{n}, \; j = 1, \ldots, n \\ (1 - f_j)\, r_{(j)} + f_j\, r_{(j+1)}, & \text{if } p_j < p < p_{j+1} \\ r_{(1)}, & \text{if } 0 < p < p_1 \\ r_{(n)}, & \text{if } p_n < p < 1 \end{cases} \qquad (6)$$

where $f_j = (p - p_j)/(p_{j+1} - p_j)$.

The Historical Simulation method assumes that the frequency distribution of the log-returns will remain the same in the forecast horizon, because it does not consider the possibility of conditional volatility of log-returns.
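A minimal R sketch of the Historical Simulation VaR follows, assuming `r` is a moving window of log-returns; `quantile()` with `type = 5` uses the knots $p_j = (j - 0.5)/n$, matching equation 6, and the lower-tail quantile is taken so that the VaR of a long position is negative.

```r
# Historical Simulation VaR (eq. 5): empirical quantile times sqrt(k)
hs_var <- function(r, alpha = 0.01, k = 1, C = 1) {
  C * quantile(r, probs = alpha, type = 5, names = FALSE) * sqrt(k)
}

hs_var(r, alpha = 0.0025, k = 10)   # 10-day VaR at the 99.75% coverage level
```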

3.3 Estimation of the VaR by GARCH(m,n) models

Without necessarily imposing the hypothesis of normality of the returns of the assets under consideration, a VaR model estimated by the GARCH method, first proposed by Bollerslev (1986), estimates the conditional variance of an asset’s returns as a function of past returns and past conditional variances.

We estimate the parameters of a GARCH(m,n) model for the returns on an asset or portfolio of assets by means of the following system of equations:

$$r_t = \varepsilon_t \sqrt{h_t}; \quad \varepsilon_t \sim WN(0, 1) \qquad (7)$$

$$h_t = \omega + \sum_{i=1}^{m} \alpha_i r_{t-i}^2 + \sum_{j=1}^{n} \beta_j h_{t-j} \qquad (8)$$

Equation 8 is subject to the following restrictions:

$$\omega > 0; \quad \alpha_i \geq 0, \; i = 1, \ldots, m-1; \quad \alpha_m \neq 0; \quad \beta_j \geq 0, \; j = 1, \ldots, n-1; \quad \beta_n \neq 0$$

Where ω, $\alpha_i$ and $\beta_j$ are the model parameters to be estimated, $h_t$ is the conditional variance of the returns in period t and $\varepsilon_t$ is white noise (WN) with mean 0 and variance 1. In addition, a condition for stationarity of the log-returns is that $\sum_{i=1}^{q}(\alpha_i + \beta_i) < 1$, in which $q = \max(m, n)$. Based on this model, the conditional variance forecast for horizon k is given by:

$$\hat{h}_t[k] = E[h_{t+k} \,|\, F_t] \qquad (9)$$

In which $F_t$ is the information set (filtration) available in period t. In turn, assuming $r_t = \varepsilon_t \sqrt{h_t}$, the conditional standard errors of the forecasts, $e_t[k]$, are calculated as follows:

$$e_t[k] = \sqrt{\hat{h}_t[k]} \qquad (10)$$

The cumulative forecast variance k steps ahead is given by the following:

$$V_t[k] = \hat{h}_t[k] + \hat{h}_t[k-1] + \hat{h}_t[k-2] + \cdots + \hat{h}_t[1] \qquad (11)$$

The standard errors of the cumulative returns forecasts are obtained as follows:

$$e_t[k]^{*} = \sqrt{V_t[k]} \qquad (12)$$

In this way, we are able to calculate conditional confidence intervals for the forecasts. Considering an interval with probability p, for the VaR calculation of a long position we calculated the lower limit of the confidence interval, $P(r_{t+k} < LI_{t+k}) = p$. Thus, the VaR[k] is calculated as follows:

$$VaR[k] = C\left(q(p)\, e_t[k]^{*}\right) \qquad (13)$$

We estimated 25 combinations of the GARCH(m,n) family, $\{(m,n)\} = \{(1,1), \ldots, (5,5)\}$, assuming that the white noise term $\varepsilon_t$ follows an asymmetric Student’s t-distribution. The criteria for model selection were the joint analysis of the Bayesian Information Criterion (BIC), the mean VaR, and the adherence and independence backtests of the violations.
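One possible R implementation of this step uses the rugarch package, sketched below; the paper does not name the package actually used, so the calls and the skewed Student’s t label ("sstd") are assumptions, with `r` an assumed moving window of log-returns.

```r
# Sketch of a GARCH(1,1) VaR with skewed Student's t innovations (assumed rugarch API).
library(rugarch)

spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                   distribution.model = "sstd")
fit <- ugarchfit(spec, data = r)
infocriteria(fit)                          # reports the BIC used for model selection

fc <- ugarchforecast(fit, n.ahead = 10)
h_hat  <- sigma(fc)^2                      # \hat{h}_t[1], ..., \hat{h}_t[10] (eq. 9)
e_star <- sqrt(sum(h_hat))                 # eqs. 11-12: cumulative standard error, k = 10

alpha <- 0.001                             # 99.9% coverage level
q_p <- qdist("sstd", p = alpha, mu = 0, sigma = 1,
             skew = coef(fit)["skew"], shape = coef(fit)["shape"])
VaR_10 <- 1 * q_p * e_star                 # eq. 13 with C = 1
```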

3.4 Estimation of the VaR by Extreme Value Theory (EVT)

It is assumed that the log-returns $r_t$ are independently and identically distributed, with cumulative distribution function F(x). In the EVT-based model, we are interested in studying the behavior of the tails of the probability distribution of the log-returns. For a more detailed review of EVT, see Tsay (2010).

In order to model the tails of the distribution of log-returns, we will use the Generalized Extreme Value Distribution (GEVD), whose distribution function is given by:

$$F(r_{n,i}) = \begin{cases} e^{-\left[1 + \xi\left(\frac{r_{n,i} - \mu}{\sigma}\right)\right]^{-1/\xi}}, & \text{if } \xi \neq 0 \\ e^{-e^{-\left(\frac{r_{n,i} - \mu}{\sigma}\right)}}, & \text{if } \xi = 0 \end{cases} \qquad (14)$$

defined on $\left\{r_{n,i} : 1 + \xi\left(\frac{r_{n,i} - \mu}{\sigma}\right) > 0\right\}$ if $\xi \neq 0$, for $-\infty < \mu < +\infty$, $-\infty < \xi < +\infty$ and $\sigma > 0$.

The distribution family is determined by the parameter ξ: if ξ=0 we obtain the Gumbel (Type I) family, if ξ>0 the Fréchet (Type II) family, and if ξ<0 the Weibull (Type III) family.

Assuming that we have T log-returns available, $\{r_j\}_{j=1}^{T}$, we divide the data into g sub-samples of identical size n, that is, $T = gn$, such that:

$$\{r_t\}_{t=1}^{T} = \{\,\underbrace{r_1, \ldots, r_n}_{\text{sub-sample } 1} \mid \underbrace{r_{n+1}, \ldots, r_{2n}}_{\text{sub-sample } 2} \mid \underbrace{r_{2n+1}, \ldots, r_{3n}}_{\text{sub-sample } 3} \mid \cdots \mid \underbrace{r_{(g-1)n+1}, \ldots, r_{gn}}_{\text{sub-sample } g}\,\} \qquad (15)$$

Given the relationship $g = T/n$, depending on the choices of T and n, g may not be an integer. The solution is to exclude a minimum number of the earliest observations of the series. The minimum number of excluded observations (NE) needed to make g an integer is given by equation 16:

$$NE = T - \lfloor g \rfloor \cdot n \qquad (16)$$

In which $\lfloor g \rfloor$ is the largest integer less than or equal to g.

With $r_{n,i}$ being the minimum log-return observed in sub-sample i multiplied by -1, where the subscript n denotes the size of the sub-sample, the series of positive values of the minima is given by:

$$\{r_{n,i}\} = \left\{-\min_{j}\{r_{(i-1)n+j}\}\right\}, \quad i = 1, \ldots, g, \; j = 1, \ldots, n \qquad (17)$$

The estimates of the scale parameter σ, location parameter μ and shape parameter ξ can be obtained by the maximum likelihood method. When ξ≠0, the log-likelihood function is given by equation 18:

$$l(\sigma_n, \mu_n, \xi_n \,|\, r_{n,1}, \ldots, r_{n,g}) = -g \ln\sigma_n - \left(1 + \frac{1}{\xi_n}\right)\sum_{i=1}^{g}\ln\left[1 + \xi_n\left(\frac{r_{n,i} - \mu_n}{\sigma_n}\right)\right] - \sum_{i=1}^{g}\left[1 + \xi_n\left(\frac{r_{n,i} - \mu_n}{\sigma_n}\right)\right]^{-1/\xi_n} \qquad (18)$$

For ξ=0, we have:

$$l(\sigma_n, \mu_n \,|\, r_{n,1}, \ldots, r_{n,g}) = -g\ln\sigma_n - \sum_{i=1}^{g}\frac{r_{n,i} - \mu_n}{\sigma_n} - \sum_{i=1}^{g} e^{-\frac{r_{n,i} - \mu_n}{\sigma_n}} \qquad (19)$$

Nonlinear optimization procedures should be used to find the estimators $(\hat{\sigma}_n, \hat{\mu}_n, \hat{\xi}_n)$ that maximize the value of the respective likelihood functions above:

$$(\hat{\sigma}_n, \hat{\mu}_n, \hat{\xi}_n) = \underset{\sigma_n, \mu_n, \xi_n}{\arg\max}\; l(\sigma_n, \mu_n, \xi_n \,|\, r_{n,1}, \ldots, r_{n,g}) \qquad (20)$$

Estimates of the parameters of this distribution were performed using the evd library of R.

Value at Risk, with time horizon k and coverage level p, will be given by:

$$VaR_p[k] = \begin{cases} C\, k^{\hat{\xi}_n}\left\{\hat{\mu}_n - \dfrac{\hat{\sigma}_n}{\hat{\xi}_n}\left[1 - \left[-n\ln(p)\right]^{-\hat{\xi}_n}\right]\right\}, & \text{if } \hat{\xi}_n \neq 0 \\ C\, k^{\hat{\xi}_n}\left\{\hat{\mu}_n - \hat{\sigma}_n \ln\left[-n\ln(p)\right]\right\}, & \text{if } \hat{\xi}_n = 0 \end{cases} \qquad (21)$$

In the present article, VaR models with moving windows and variations in n were estimated. Based on the graphical behavior of the volatility of the series of 3,845 daily log-returns, the subprime crisis began to affect the IBOVESPA at the end of November 2007. Thus, so that the first estimation window would include the crisis period, we set T=2100, associated with the date of June 23, 2010. In this way, 1,745 estimates were made for the 1-day VaR with moving windows of size T=2100, and 1,736 estimates were conducted for the 10-day VaR with windows of the same size. The models were estimated with n=5, 10 and 21. The choice of n=5 is associated with the number of working days in a week, and n=21 with the number of working days in a month. The value n=10 was an intermediate choice between the two.
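The block-minima construction and the maximization of the log-likelihood in equation 18 can be sketched in R as below; the paper reports using the evd package, so this hand-coded optimization with `optim()` is only an illustrative alternative, with `r` an assumed window of T=2100 log-returns.

```r
# Sketch of the EVT estimation (eqs. 15-21) for one moving window `r` and n = 21.
n <- 21
g <- floor(length(r) / n)
r_trim <- tail(r, g * n)                              # drop the first NE observations (eq. 16)
minima <- -apply(matrix(r_trim, nrow = n), 2, min)    # eq. 17: sub-sample minima times -1

# negative of the GEV log-likelihood for xi != 0 (eq. 18)
negll <- function(par, x) {
  mu <- par[1]; sigma <- par[2]; xi <- par[3]
  if (sigma <= 0) return(Inf)
  z <- 1 + xi * (x - mu) / sigma
  if (any(z <= 0)) return(Inf)
  length(x) * log(sigma) + (1 + 1 / xi) * sum(log(z)) + sum(z^(-1 / xi))
}
fit <- optim(c(mean(minima), sd(minima), 0.1), negll, x = minima)
mu <- fit$par[1]; sigma <- fit$par[2]; xi <- fit$par[3]

# eq. 21 (case xi != 0) for coverage p and horizon k, with C = 1; since the minima were
# multiplied by -1 (eq. 17), this quantity is a loss magnitude, and the VaR of a long
# position is reported with a negative sign.
p <- 0.999; k <- 1
VaR <- -(k^xi) * (mu - (sigma / xi) * (1 - (-n * log(p))^(-xi)))
```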

3.5 Kupiec’s (1995Kupiec, P. (1995). Techniques for verifying the accuracy of risk measurement models. The Journal of Derivatives, 3(2), 73-84.) Test

Kupiec’s (1995) test, also known as the proportion of failures (POF) test, aims to verify whether the proportion of violations relative to the total number of observations of a VaR model is adherent to the significance level chosen for the calculation of this risk measure. Formally, we are interested in testing the hypothesis of unconditional coverage (adherence). A violation $I_t(\alpha)$ occurs when the ex-post return is lower (higher) than the ex-ante Value at Risk at a given time t, considering a long (short) position. Assuming that $I_t(\alpha) \sim \mathrm{Bernoulli}(\alpha)$, for a long position we will have:

$$I_t(\alpha) = \begin{cases} 1, & \text{if } r_t < VaR_{t|t-k}(\alpha) \\ 0, & \text{otherwise} \end{cases} \qquad (22)$$

Under the null hypothesis, the number of violations V in a given time interval [1, T] follows a binomial distribution with parameters (T, α), such that:

$$V = \sum_{t=1}^{T} I_t(\alpha) \sim \mathrm{Bin}(T, \alpha) \qquad (23)$$

The null and alternative hypotheses of Kupiec’s test are $H_0: \alpha = \hat{\alpha}$ and $H_1: \alpha \neq \hat{\alpha}$, where $\hat{\alpha} = V/T$.

The test statistic is obtained by the likelihood ratio test between the null hypothesis and the alternative, described by:

$$LR_K = -2\ln\left(\frac{\alpha^{V} (1-\alpha)^{(T-V)}}{\left(\frac{V}{T}\right)^{V}\left[1 - \frac{V}{T}\right]^{(T-V)}}\right) \qquad (24)$$

By the properties of logarithms, we can rewrite it as:

$$LR_K = -2\left(\ln\left[\alpha^{V} (1-\alpha)^{(T-V)}\right] - \ln\left\{\left(\frac{V}{T}\right)^{V}\left[1 - \frac{V}{T}\right]^{(T-V)}\right\}\right) \qquad (25)$$

In which $LR_K$ asymptotically follows a $\chi^2$ distribution with 1 degree of freedom.
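A direct R implementation of this test is sketched below, where `viol` is the 0/1 series of violations $I_t(\alpha)$ defined in equation 22.

```r
# Kupiec (1995) proportion-of-failures test (eqs. 23-25)
kupiec_test <- function(viol, alpha) {
  T <- length(viol)
  V <- sum(viol)
  lr <- -2 * log((alpha^V * (1 - alpha)^(T - V)) /
                 ((V / T)^V * (1 - V / T)^(T - V)))   # eq. 24
  c(LR_K = lr, p_value = 1 - pchisq(lr, df = 1))
}
```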

3.6 Christoffersen’s (1998Christoffersen, P. F. (1998). Evaluating interval forecasts. International Economic Review, 39, 841-862.) Test

Christoffersen (1998) assumed that, under the alternative hypothesis of VaR inefficiency, the process of violations can be modeled by a Markov chain with transition matrix defined by:

$$H_1: \hat{\Pi} = \begin{bmatrix} \pi_{00} & \pi_{01} \\ \pi_{10} & \pi_{11} \end{bmatrix} \qquad (26)$$

In which:

$$\pi_{ij} = P[I_t(\alpha) = j \,|\, I_{t-1}(\alpha) = i] \qquad (27)$$

The Markov chain postulates a first-order, AR(1)-type dependence structure for the process of violations. The null hypothesis of Christoffersen’s test is defined by:

$$H_0: \Pi_\alpha = \begin{bmatrix} 1-\alpha & \alpha \\ 1-\alpha & \alpha \end{bmatrix} \qquad (28)$$

Under the null hypothesis, whatever the state of the process in t-1, the probability of a violation occurring at time t is equal to α, the level of significance used for the VaR calculation. Therefore, the probability of occurrence or non-occurrence of a violation at time t is independent of the occurrence or not of a violation in time t-1, so that equations 29 to 32 are valid:

$$P[I_t(\alpha) = 1 \,|\, I_{t-1}(\alpha) = 1] = P[I_t(\alpha) = 1] = \alpha \qquad (29)$$

$$P[I_t(\alpha) = 1 \,|\, I_{t-1}(\alpha) = 0] = P[I_t(\alpha) = 1] = \alpha \qquad (30)$$

$$P[I_t(\alpha) = 0 \,|\, I_{t-1}(\alpha) = 1] = P[I_t(\alpha) = 0] = 1 - \alpha \qquad (31)$$

$$P[I_t(\alpha) = 0 \,|\, I_{t-1}(\alpha) = 0] = P[I_t(\alpha) = 0] = 1 - \alpha \qquad (32)$$

A likelihood ratio test, denoted by $LR_{CC}$, allows us to jointly test the hypotheses of adherence and independence associated with Christoffersen’s (1998) test:

$$LR_{CC} = -2\left\{\log L\left[\Pi_\alpha, I_1(\alpha), \ldots, I_T(\alpha)\right] - \log L\left[\hat{\Pi}, I_1(\alpha), \ldots, I_T(\alpha)\right]\right\} \xrightarrow{d} \chi^2(2) \qquad (33)$$

The statistic $LR_{CC}$ presented in equation 33 asymptotically follows a chi-square distribution with 2 degrees of freedom. In this equation, $\hat{\Pi}$ is the transition matrix of the process of violations under the alternative hypothesis:

$$\hat{\Pi} = \begin{bmatrix} \dfrac{n_{00}}{n_{00}+n_{01}} & \dfrac{n_{01}}{n_{00}+n_{01}} \\[2ex] \dfrac{n_{10}}{n_{10}+n_{11}} & \dfrac{n_{11}}{n_{10}+n_{11}} \end{bmatrix} \qquad (34)$$

in which $n_{ij}$ is the number of times we observe $I_t(\alpha) = j$ and $I_{t-1}(\alpha) = i$.

The likelihood function associated with the alternative hypothesis $\hat{\Pi}$ is:

$$L\left[\hat{\Pi}, I_1(\alpha), \ldots, I_T(\alpha)\right] = (1 - \hat{\pi}_{01})^{n_{00}}\, \hat{\pi}_{01}^{\,n_{01}}\, (1 - \hat{\pi}_{11})^{n_{10}}\, \hat{\pi}_{11}^{\,n_{11}} \qquad (35)$$

Similarly, the likelihood function associated with the null hypothesis $\Pi_\alpha$ is:

$$L\left[\Pi_\alpha; I_1(\alpha), \ldots, I_T(\alpha)\right] = (1 - \alpha)^{n_0}\, \alpha^{n_1} \qquad (36)$$

In which $n_0 = n_{00} + n_{10}$ and $n_1 = n_{01} + n_{11}$.
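A sketch of this joint test in R follows, again taking `viol` as the 0/1 series of violations; the convention 0·log(0) = 0 is used when a transition count is zero.

```r
# Christoffersen (1998) joint test of adherence and first-order independence (eqs. 26-36)
christoffersen_test <- function(viol, alpha) {
  xlogy <- function(x, y) ifelse(x == 0, 0, x * log(y))   # 0 * log(0) = 0 convention
  n00 <- sum(viol[-length(viol)] == 0 & viol[-1] == 0)
  n01 <- sum(viol[-length(viol)] == 0 & viol[-1] == 1)
  n10 <- sum(viol[-length(viol)] == 1 & viol[-1] == 0)
  n11 <- sum(viol[-length(viol)] == 1 & viol[-1] == 1)
  pi01 <- n01 / (n00 + n01); pi11 <- n11 / (n10 + n11)
  logL_H1 <- xlogy(n00, 1 - pi01) + xlogy(n01, pi01) +
             xlogy(n10, 1 - pi11) + xlogy(n11, pi11)              # eq. 35
  logL_H0 <- xlogy(n00 + n10, 1 - alpha) + xlogy(n01 + n11, alpha) # eq. 36
  lr <- -2 * (logL_H0 - logL_H1)                                   # eq. 33
  c(LR_CC = lr, p_value = 1 - pchisq(lr, df = 2))
}
```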

3.7 Berkowitz-Christoffersen-Pelletier’s (2008) Test

Independence tests based on Markov chains have the limitation of only evaluating the presence of first-order dependence in the violations. To circumvent this limitation, it is possible to use an LB test on the VaR violations, as proposed by Berkowitz et al. (2008). These authors define $H_{it}(\alpha)$ as the indicator of a violation of the i-th VaR model at time t, centered around its expected value α. In this way, we have $H_{it}(\alpha) = I_t(\alpha) - \alpha$, such that:

$$H_{it}(\alpha) = \begin{cases} 1 - \alpha, & \text{if } r_t < VaR_{t|t-k}(p) \\ -\alpha, & \text{otherwise} \end{cases} \qquad (37)$$

Berkowitz et al. (2008) start from the fact that the hypothesis of conditional coverage (adherence and independence) is satisfied when the process $H_{it}(\alpha)$ is a martingale difference; for more details, see Morettin (2011). Thus, a range of tests for the martingale difference hypothesis can be applied to VaR models for a given significance level α. The null hypothesis of the LB test is that the stochastic process $\{H_{it}(\alpha)\}$ is a martingale difference.

According to the LB test, the statistic associated with the nullity of the first K empirical autocorrelations of the process of centered violations is described by equation 38:

$$LB(K) = T(T+2)\sum_{k=1}^{K}\frac{\hat{\rho}_k^2}{T - k} \xrightarrow{d} \chi^2(K) \qquad (38)$$

In which $\hat{\rho}_k$ is the empirical autocorrelation of order k of the process. Each k-th empirical autocorrelation $\hat{\rho}_k$ of the process $\{H_{it}(\alpha)\}$ was calculated as follows:

$$\hat{\rho}_k = \frac{\sum_{t=k+1}^{T}\left[H_{it}(\alpha) - \bar{H}_i\right]\left[H_{i,t-k}(\alpha) - \bar{H}_i\right]}{\sum_{t=1}^{T}\left[H_{it}(\alpha) - \bar{H}_i\right]^2}, \quad \text{where } \bar{H}_i = \frac{1}{T}\sum_{t=1}^{T}H_{it}(\alpha) \qquad (39)$$

In the present study, the LB test was performed for all the models to jointly test different orders of autocorrelations of the violations, having performed ten tests per model, so that K (eq. 38) was set from 1 to 10.
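The corresponding check in R can be sketched with the base `Box.test()` function, applied to the centered violations, as below.

```r
# Ljung-Box test on the centered violations H_t(alpha) = I_t(alpha) - alpha (eqs. 37-38),
# repeated for K = 1, ..., 10 as in the paper. Box.test() is part of base R (stats).
H <- viol - alpha
lb_pvalues <- sapply(1:10, function(K) Box.test(H, lag = K, type = "Ljung-Box")$p.value)
```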

4. RESULTS

Given the objectives of this work, we considered suitable only the VaR models that did not reject Kupiec’s (1995) null hypothesis of adherence, the joint null hypothesis of adherence and first-order independence of the Markov chain test (Christoffersen, 1998), and the null hypothesis of independence (not only of first but also of higher orders) of the LB test proposed by Berkowitz et al. (2008).

Table 1 presents the backtest results for the 1- and 10-day VaR[k] estimates (k=1 and k=10) for all the models considered in the article, for coverage levels of 99%, 99.5%, 99.75% and 99.9%. We present the p-values of the Kupiec and Markov chain adherence and independence tests. For the LB test, we present the values of K (eq. 38) at which the null hypothesis was rejected. Table 2 presents, for all the estimated models and coverage levels, the respective mean VaR, standard deviation of the VaR, number of violations (V), and the aggregate, maximum and average violations.

Table 1
Results of Backtesting for Different Estimated VaR Models

Table 2
Mean VaR and Average, Aggregate and Maximum Violations for Different Models

The Historical Simulation models, for both 1 and 10 days, were not adequate for any of the coverage levels and moving window sizes considered. It is interesting to note, however, that if the adequacy criterion did not include the test for higher-order dependence (LB), this model with T=500 would be suitable for 99% and 99.5%, with T=1000 for 99.5% and 99.75%, and with T=1500 for 99%, 99.5% and 99.75%, all for the 1-day horizon. These results for Kupiec’s (1995) and Christoffersen’s (1998) tests are similar to those found in Tolikas (2008) and show the importance of testing for higher-order dependence, as is done in this article. Figure 1 illustrates the viscosity of this model, since it takes time to respond to volatility shocks.

Figure 1.
VaR of 1 (a) and 10 (b) days by Historical Simulation for T=250.

It can be seen in Figure 2 that the EVT models are even more viscous than the Historical Simulation models. Another characteristic of the Extreme Value models is that the estimated quantiles are quite sensitive to the size of the sub-sample intervals used to obtain the minima for estimating the parameters of the GEV distribution. The models estimated with the three interval sizes used (n=5, 10 and 21) were adequate only for the 1-day horizon and for coverage levels of 99.75% and 99.9%, according to the three tests used. Among the EVT models, the model with n=21 for 1 day had the lowest mean VaR for these two coverage levels, of -0.0636 and -0.0818, respectively. We also verified that, among all the analyzed models, the EVT models most often presented higher mean VaR, lower aggregate violation and lower maximum violation. For the coverage level of 99.75%, this EVT model had a higher average violation than the Historical Simulation, GARCH and IGARCH models. For the coverage level of 99.9%, it presented the lowest average violation in most cases, except against the Historical Simulation models with moving windows of T=500 and T=1000.

Figure 2.
VaR of 1 (a) and 10 (b) days by EVT for n=5.

The IGARCH(1,1) models with asymmetric Student’s t-distribution for 1 day and T=1000 and 1500 are suitable for coverage levels of 99.5%, 99.75% and 99.9%. With T=500, the model is only suitable for the coverage level of 99.9%, and, with T=250, only 99.5%. We also observed a reduction of the mean VaR with the increase of T. Among the models suitable for 99.5%, we observe smaller standard deviation, maximum and average violations with T=250 and smaller number of violations, mean VaR and aggregate violation with T=1500. For 99.75%, we observe smaller standard deviation and average violation with T=1000 and smaller number of violations, mean VaR, aggregate and maximum violations with T=1500. For 99.9%, we observe a smaller average violation with T=500, lower standard deviation with T=1000 and lower number of violations, mean VaR, aggregate and maximum violations with T=1500. For 10 days, only the models with T=1000 and 1500 are suitable for the coverage level of 99.9%, with the lowest mean VaR being observed with T=1500 and the lowest standard deviation and aggregate, maximum and average violations with T=1000.

Among all the estimated GARCH(m,n) combinations, the GARCH(1,1) model presented the lowest BIC and the best backtest results, for both the 1- and the 10-day VaR. For the 1-day VaR, this model did not reject the null hypotheses of Kupiec’s and Christoffersen’s tests for all coverage levels and moving windows used. However, with T=250 the model is only suitable for 99%; with T=500 and 1000, it is suitable for 99% and 99.9%; and with T=1500, it is suitable only for 99.9%. Among the models suitable for 99%, the model with T=1000 presented the lowest mean VaR, standard deviation, number of violations, and average and aggregate violations. For 99.9%, the lowest maximum and average violations were observed with T=500, the lowest mean VaR and standard deviation with T=1000, and the lowest number of violations and aggregate violation with T=1500. For 10 days, the estimates with all window sizes are suitable for 99.9%, with the lowest mean VaR observed with T=1000, the lowest standard deviation with T=250, and the lowest aggregate, maximum and average violations with T=1500.

From the results of the backtests carried out, we verified that the VaR models of the GARCH(m,n) and IGARCH(1,1) families, which consider conditional volatility as well as asymmetric distributions with heavier tails than the normal, perform better than traditional models such as Historical Simulation and EVT. The rapid response of these models to volatility shocks can be seen in Figures 3 and 4. Although the EVT models, along with the GARCH and IGARCH models, proved suitable for 1-day horizons and the higher coverage levels, if we further tighten the suitability criteria by requiring adherence and independence for horizons of 1 and 10 days simultaneously, the range of adequately modeled coverage levels reduces to 99.9%, which is achieved exclusively by the GARCH and IGARCH models. In addition, the GARCH and IGARCH models perform better than the Historical Simulation models because they have, in general, a lower mean VaR.

Figure 3.
VaR of 1 (a) and 10 (b) days by IGARCH(1,1) with asymmetric Student’s t-distribution and T=250.

Figure 4.
VaR of 1 (a) and 10 (b) days by GARCH(1,1) for T=250.

5. CONCLUSIONS

In this research, four risk models (Historical Simulation, EVT, IGARCH(1,1) and GARCH(1,1)) were estimated for the daily log-returns series of the IBOVESPA and the VaR measure was extracted from each model, with the objective of verifying which of them are suitable for the Brazilian stock market, in investment horizons of 1 and 10 days.

In spite of the common practice among a large number of banks of using methods such as Historical Simulation for their VaR, the results show that only models that consider conditional volatility, such as GARCH and IGARCH, were adequate, taking into account not only the criteria of adherence and first-order independence widely used in the literature for comparing market risk models, but also independence of higher orders, for forecast horizons of 1 and 10 days.

Given these results, we suggest that entities of the National Financial System that invest their resources in portfolios with a significant percentage in shares traded on the stock exchange reassess their internal risk models, including the possibility of dependence of orders greater than 1 among VaR violations in their backtesting. This becomes especially important if VaR models that do not take conditional volatility into account are still used, as is the case of the Historical Simulation and EVT models. The objective would be to improve the risk models currently used by these entities, in order to reduce the occurrence of significant, unexpected and successive losses, which may undermine financial stability and the proper functioning of markets.

In this sense, although less operationally convenient, migration to GARCH-family models by entities of the National Financial System that hold relevant positions in the Brazilian stock market may become essential for the calculation of their VaR and may bring managerial benefits in the form of lower average values for this risk measure compared to the Historical Simulation and EVT models. Such action would reduce the opportunity costs of these entities, allowing greater leverage and the execution of financial operations with potentially higher returns, which would favor better performance and greater competitiveness of these entities in their markets, while at the same time ensuring a healthier financial system, since more robust loss forecasts reduce the chances of systemic crises.

REFERENCES

  • Bali, T. (2003). An extreme value approach to estimating volatility and Value at Risk. The Journal of Business, 76(1), 83-108.
  • Berkowitz, J., & O’Brien, J. (2001). How accurate are the value-at-risk models at commercial banks. Journal of Finance, 57, 1093-111.
  • Berkowitz, J., Christoffersen, P. F., & Pelletier, D. (2008). Evaluating Value-at-Risk Models with Desk-Level Data. Management Science, 57(12), 2213-2227.
  • Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31, 307-326.
  • Brooks, C. (2014). Introductory Econometrics for Finance. Cambridge University Press.
  • Chan, B. L. (2010). Risco de subscrição frente às regras de solvência do mercado segurador brasileiro (Tese de Doutorado). Faculdade de Economia, Administração e Contabilidade, Universidade de São Paulo, São Paulo.
  • Christoffersen, P. F. (1998). Evaluating interval forecasts. International Economic Review, 39, 841-862.
  • Christoffersen, P. F., & Pelletier, D. (2004). Backtesting Value at Risk: A Duration-Based Approach. Journal of Financial Econometrics, 2(1), 84-108.
  • Ferreira, C. A. (2013). Avaliação de modelos de risco através de backtesting (Dissertação de Mestrado). Instituto de Matemática Pura e Aplicada, Rio de Janeiro.
  • Godeiro, L. L. (2014). Estimating the VaR (Value-at-Risk) of Brazilian stock portfolios via GARCH family models and via Monte Carlo Simulation. Journal of Applied Finance and Banking, 4(4), 143-170.
  • Kupiec, P. (1995). Techniques for verifying the accuracy of risk measurement models. The Journal of Derivatives, 3(2), 73-84.
  • Morettin, P. A. (2011). Econometria Financeira: um curso em séries temporais financeiras (2nd ed.). São Paulo: Blucher.
  • Russon, M. G, & Tobin, P. J. (2008). The Intuition and Methodology of Value at Risk. Review of Business, 29(1), 39-50.
  • Tardivo, G. (2002). Value at Risk (VaR): The new benchmark for managing market risk. Journal of Financial Management & Analysis, 15(1), 16-26.
  • Tolikas, K. (2008). Value-at-risk and extreme value distributions for financial returns. The Journal of Risk, 10(3), 31-77.
  • Tsay, R. S. (2010). Analysis of Financial Time Series (3rd ed.). Wiley & Sons.
Financial Support

    PIBIC-CNPq-UNIFESP, 2017-2018.

Publication Dates

  • Publication in this collection
    31 Jan 2020
  • Date of issue
    Nov-Dec 2019

History

  • Received
    05 Sept 2018
  • Reviewed
    06 Dec 2018
  • Accepted
    07 Feb 2019