Time Varying Higher Moments with the racd package.

The Autoregressive Conditional Density (ACD) model of Hansen (1994) extended GARCH models to include time variation in the higher moment parameters. It was a somewhat natural extension to the premise of time variation in the conditional mean and variance, though it probably raised more questions than it, or subsequent research, has been able to answer. In my mind, some of these questions are:

  • What is the rationale and source for time variation in the skew and shape parameters of a conditional density?
  • Can one single source of randomness adequately model both the odd and even moment variation?
  • What is the impact on the coefficients of the lower moments?
  • What are the implications of the time varying density for the standardized residuals on long run relationships, simulation methods and the consistency of the parameters?
  • Do the marginal benefits, in applied work, outweigh the costs of the difficult estimation?

The first question is quite open and difficult to answer. In a GARCH model, most of the variation in the underlying hypothesized density, its expansion and contraction with the news arrival, is captured by the conditional variance dynamics. Given a suitably chosen skewed and shaped distribution, extreme events can also be accommodated within this framework as can changes in the asymmetry of the distribution. If we believe that there is value in allowing higher moment parameters to vary, which we can at times infer from the estimation and forecast performance, is this the result of some underlying inadequacy of the GARCH model or does the ‘true’ and unknown data generating process actually contain such dynamics (and why)?

For the second question, stochastic volatility models have already provided one way to test this, at least for GARCH models, though at considerable computational cost. For ACD models, this is probably still an open research question.

For the third question, some evidence was presented in Harvey and Siddique (1999), where the inclusion of time varying skewness affected the persistence of the conditional variance and caused some of the asymmetries in the variance to disappear (through a reduced asymmetry coefficient in the variance dynamics). I would also expect, for standard GARCH models with very high persistence, that the inclusion of time variation in the shape parameter (which acts more directly on the higher even moments) would lead to a reduction in such persistence, as some of the extreme variation from shocks could be better accommodated. I also wonder whether the inclusion of time variation in the skew parameter would have an impact on the first moment parameters, particularly when their dynamics include threshold effects.

The fourth question is very difficult to answer, particularly as regards the consistency of the parameters. There is little theoretical work on this in the ACD literature, and only a few footnotes reporting simulations which confirm that the parameters of the ACD models tested had the standard \( \sqrt{N} \) consistency. I remain very skeptical on this point, considering the ‘3-dimensional’ nature of ACD simulation. That is, for each period \( t \) generated in a simulation, the density of the standardized residuals varies, unlike GARCH simulation where a long path amounts to sampling from a single standardized distribution.
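
The point can be made concrete with a toy sketch (the skew path below is hypothetical, chosen purely for illustration; rdist is the distribution sampler exported by rugarch): each simulated period requires its own draw from a differently parameterized density, so even a very long single path is never a sample from one fixed standardized distribution.

library(rugarch)
set.seed(1)
n = 1000
# a hypothetical time varying skew path, for illustration only
skew_t = 1 + 0.2 * sin(2 * pi * (1:n)/250)
# one draw per period, each from a differently parameterized density
z = sapply(1:n, function(i) rdist('snorm', 1, mu = 0, sigma = 1, skew = skew_t[i]))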

The final question is the one most recent papers have focused on, and an extensive review of the literature is available in the vignette of the racd package. The results are mixed, though one would not necessarily conclude this from reading the abstract of any one paper, where there is a natural tendency to portray the results in the best possible light.

Following this rather long introduction, the remainder of the article provides an introduction to the usage of the racd package, which shares many features with the rugarch package (since it extends it). If you have some unique insights into these models, or would like to add something or comment, feel free to drop me an email. The racd package is currently hosted in the teatime repository on r-forge and there are no plans at present to release it to CRAN (see r-downloads).

The Model Specification

The model specification follows closely the implementation in the rugarch package, with the exception of the ‘distribution.model’ option, which is now a list with a number of choices for defining the conditional skew and shape dynamics. For the conditional variance equation, the GARCH models included are the standard GARCH (‘sGARCH’), the component GARCH (‘csGARCH’) and, for intraday data, the multiplicative component GARCH (‘mcsGARCH’). The reason for including only these models has to do with the calculation of their long run properties and persistence, which do not require simulation even in the presence of time variation in the higher moments.
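
To see why, note that for the sGARCH(1,1) the variance recursion is \( \sigma_t^2 = \omega + \alpha_1\varepsilon_{t-1}^2 + \beta_1\sigma_{t-1}^2 \) and, since \( E[z_t^2] = 1 \) for any admissible skew and shape values, the persistence remains \( \alpha_1 + \beta_1 \) irrespective of the higher moment dynamics; models such as the eGARCH instead involve distributional expectations (e.g. \( E|z_t| \)) which no longer reduce to a single closed form value when the shape parameter varies over time.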

# setup
library(racd)
library(rugarch)
library(parallel)
library(xts)
data(sp500ret)
plot2xts = function(x, ...) {
    plot(x, y = NULL, type = 'l', auto.grid = FALSE, major.ticks = 'auto',
        minor.ticks = FALSE, major.format = FALSE, bar.col = 'grey',
        candle.col = 'white', ann = TRUE, axes = TRUE, cex.main = 0.9,
        cex.axis = 0.9, ...)
    grid()
}
args(acdspec)
## function (variance.model = list(model = 'sGARCH', garchOrder = c(1,
## 1), external.regressors = NULL, variance.targeting = FALSE),
## mean.model = list(armaOrder = c(1, 1), include.mean = TRUE,
## archm = FALSE, arfima = FALSE, external.regressors = NULL),
## distribution.model = list(model = 'snorm', skewOrder = c(1,
## 1, 1), skewshock = 1, skewshocktype = 1, skewmodel = 'quad',
## skew.regressors = NULL, shapeOrder = c(0, 1, 1), shapeshock = 1,
## shapeshocktype = 1, shapemodel = 'quad', shape.regressors = NULL,
## exp.rate = 1), start.pars = list(), fixed.pars = list())

The distribution.model list contains the details of the conditional higher moment specification:

  • model. This is the conditional distribution, with the same choice of skewed and shaped distributions as in the rugarch package.
  • skewOrder (skewmodel) and shapeOrder (shapemodel). These denote the order of the skew and shape models under the different parameterizations available, which are described in the package’s vignette. Not all models have 3 parameters; for example, the ‘xar’ shape model has 2 parameters whilst the ‘xar’ skew model has 3 (since it is based on the signed rather than the absolute shocks).
  • skewshocktype and shapeshocktype. A value of 1 denotes the use of squared shocks while any other value denotes absolute value shocks.
  • skewshock and shapeshock. A value of 1 denotes the use of the standardized residuals as shocks while any other value denotes the use of the residuals.
  • skew.regressors and shape.regressors. A matrix of regressors for the skew and shape dynamics. This should be considered experimental at present, until further testing.
  • exp.rate. The scaling value for the exponential transformation of the unbounded shape dynamics (the skew dynamics use a logistic transformation without extra parameters); a sketch of these two maps follows this list.
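
A minimal sketch of the two bounding maps (the functional forms below are assumptions for illustration only; the exact parameterizations used by the package are given in the vignette):

# assumed forms: a logistic map of the unbounded skew dynamics into the
# admissible region (lower, upper), and a rate-scaled exponential map
# above a lower bound for the shape (the role played by exp.rate)
logmap = function(x, lower, upper) lower + (upper - lower)/(1 + exp(-x))
expmap = function(x, lower, rate = 1) lower + exp(rate * x)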

For the test example, I’ll use the following specification:

spec = acdspec(mean.model = list(armaOrder = c(0, 1)),
    variance.model = list(variance.targeting = TRUE),
    distribution.model = list(model = 'nig', skewOrder = c(1, 1, 1),
        shapeOrder = c(1, 1, 1), skewmodel = 'quad', shapemodel = 'pwl'))

The Estimation

As described in the vignette, estimation in ACD models is highly nonlinear and there is no guarantee of a global optimum. For this reason, it is suggested that the model be estimated from different starting parameters, which is why the routine includes a number of solvers prefixed by ‘ms’ to denote a multistart strategy. The included solvers are optim (and msoptim), ucminf (and msucminf), solnp (and mssolnp), nlminb (and msnlminb) and a local implementation of cmaes (a bound constrained global solver). For the unconstrained solvers (optim and ucminf), the parameters are transformed into a bounded domain via the logistic map. Parallel evaluation in the multistart solvers is enabled by passing a cluster object from the parallel package, as the following example illustrates:

data(sp500ret)
cl = makePSOCKcluster(10)
fit = acdfit(spec, sp500ret, solver = 'msoptim', solver.control = list(restarts = 10),
cluster = cl)
##
## *---------------------------------*
## *          ACD Model Fit          *
## *---------------------------------*
##
## Conditional Variance Dynamics
## -----------------------------------
## GARCH Model  : sGARCH(1,1)
## Mean Model   : ARFIMA(0,0,1)
## Distribution : nig
##
## Conditional Skew Dynamics
## -----------------------------------
## ACD Skew Model   : quad(1,1,1)
## Shock type:  Std.Residuals
##
## Conditional Shape Dynamics
## -----------------------------------
## ACD Shape Model  : pwl(1,1,1)
## Shock type:  Std.Residuals
##
## Optimal Parameters
## ------------------------------------
##           Estimate  Std. Error  t value Pr(>|t|)
## mu        0.000579    0.000109   5.3235 0.000000
## ma1       0.009191    0.014274   0.6439 0.519643
## alpha1    0.089522    0.008213  10.8999 0.000000
## beta1     0.902153    0.009429  95.6765 0.000000
## skcons   -0.080024    0.031859  -2.5118 0.012011
## skalpha1  0.284640    0.055745   5.1061 0.000000
## skgamma1  0.027768    0.013604   2.0412 0.041233
## skbeta1   0.570018    0.093124   6.1210 0.000000
## shcons    0.443515    0.193872   2.2877 0.022157
## shalpha1  0.014926    0.018471   0.8081 0.419033
## shgamma1  0.783159    0.119464   6.5556 0.000000
## shbeta1   0.657811    0.088193   7.4588 0.000000
## omega     0.000001          NA       NA       NA
##
## Robust Standard Errors:
##           Estimate  Std. Error  t value Pr(>|t|)
## mu        0.000579    0.000107  5.43342 0.000000
## ma1       0.009191    0.014895  0.61709 0.537178
## alpha1    0.089522    0.010646  8.40916 0.000000
## beta1     0.902153    0.012237 73.72243 0.000000
## skcons   -0.080024    0.030544 -2.61994 0.008795
## skalpha1  0.284640    0.060626  4.69498 0.000003
## skgamma1  0.027768    0.015036  1.84676 0.064782
## skbeta1   0.570018    0.087479  6.51609 0.000000
## shcons    0.443515    0.318005  1.39468 0.163112
## shalpha1  0.014926    0.009775  1.52695 0.126773
## shgamma1  0.783159    0.144317  5.42667 0.000000
## shbeta1   0.657811    0.146672  4.48492 0.000007
## omega     0.000001          NA       NA       NA
##
## LogLikelihood : 18154
##
## Information Criteria
## ------------------------------------
##                  ACD   GARCH
## Akaike       -6.5697 -6.5540
## Bayes        -6.5554 -6.5480
## Shibata      -6.5698 -6.5540
## Hannan-Quinn -6.5647 -6.5519
##
## [truncated remaining output]
##
## Elapsed time : 1.818

As can be inferred from the robust standard errors, the majority of the higher moment parameters are significant, and the information criteria indicate an improvement over the non-time-varying GARCH equivalent model.
In addition to the standard extractor functions such as ‘fitted’, ‘sigma’, ‘residuals’, ‘quantile’ and ‘pit’, the methods ‘shape’ and ‘skew’ extract the conditional time varying skew and shape as xts vectors, with an option to return either the ‘transformed’ (default TRUE) or unbounded values. Additionally, the methods ‘skewness’ and ‘kurtosis’ return the implied conditional time varying skewness and excess kurtosis xts vectors. The following figure provides a visual illustration of the estimated dynamics:

par(mfrow = c(3, 2), mai = c(0.75, 0.75, 0.3, 0.3))
plot2xts(fitted(fit), col = 'steelblue', main = 'Conditional Mean')
plot2xts(abs(as.xts(sp500ret)), col = 'grey', main = 'Conditional Sigma')
lines(sigma(fit), col = 'steelblue')
plot2xts(skew(fit, transform = FALSE), col = 'grey', main = 'Skew')
lines(skew(fit), col = 'steelblue')
legend('topleft', c('Unbounded', 'Bounded'), col = c('grey', 'steelblue'), lty = c(1,1), bty = 'n', cex = 0.8)
plot2xts(shape(fit, transform = FALSE), col = 'grey', main = 'Shape', ylim = c(0,10))
lines(shape(fit), col = 'steelblue')
legend('topleft', c('Unbounded', 'Bounded'), col = c('grey', 'steelblue'), lty = c(1,1), bty = 'n', cex = 0.8)
plot2xts(skewness(fit), col = 'steelblue', main = 'Skewness')
plot2xts(kurtosis(fit), col = 'steelblue', main = 'Kurtosis (ex)')

[Figure racd_1: estimated conditional mean, sigma, skew, shape, skewness and excess kurtosis]
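
As a quick sanity check of these extractors (a sketch using rugarch’s dskewness distribution helper), the implied skewness at any in-sample point should match the distribution skewness evaluated at the fitted skew and shape values:

# compare the extracted implied skewness at an arbitrary in-sample index
# with the nig distribution skewness at the fitted skew/shape values
i = 100
as.numeric(skewness(fit)[i])
dskewness('nig', skew = as.numeric(skew(fit)[i]), shape = as.numeric(shape(fit)[i]))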

Forecasting and Filtering

Forecasting in the racd package can only be done on an estimated object (of class ACDfit), unlike rugarch where a specification object can also be used; for 1-ahead forecasting it is possible to use the acdfilter method instead. For n-ahead (n>1) forecasts of the higher moment dynamics there is no closed form solution, as explained in the vignette, so these are generated via simulation. There is nothing particularly interesting to note about the acdforecast method here, so I will go directly to the rolling forecast and backtesting method (acdroll). This has a number of extra options compared to the ugarchroll method, which are explained in detail in the vignette; suffice it to say, these extra options are related to restrictions on the dynamics to facilitate convergence.

roll = acdroll(spec, sp500ret, n.start = 2000, refit.every = 100,
    refit.window = 'recursive', solver = 'msoptim',
    solver.control = list(restarts = 10), cluster = cl, fixARMA = TRUE,
    fixGARCH = TRUE, fixUBShape = TRUE, UBShapeAdd = 3, compareGARCH = 'LL')
# check convergence(roll); if it is not zero, resubmit via the 'resume' method.
gspec = ugarchspec(mean.model = list(armaOrder = c(0, 1)), variance.model = list(variance.targeting = TRUE), distribution = 'nig')
rollg = ugarchroll(gspec, sp500ret, n.start = 2000, refit.every = 100, refit.window = 'recursive')
show(roll)
##
## *-------------------------------------*
## *              ACD Roll               *
## *-------------------------------------*
## No.Refits        : 36
## Refit Horizon    : 100
## No.Forecasts     : 3523
## GARCH Model      : sGARCH(1,1)
## Mean Model       : ARFIMA(0,0,1)
##
## ACD Skew Model   : pwl(1,1,1)
## ACD Shape Model  : pwl(1,1,1)
## Distribution     : nig
##
## Forecast Density:
##               Mu  Sigma   Skew  Shape Shape(GIG) Realized
## 1995-02-03 2e-04 0.0055 0.0501 0.6855          0   0.0123
## 1995-02-06 2e-04 0.0060 0.3130 0.1493          0   0.0052
## 1995-02-07 2e-04 0.0060 0.3292 0.4318          0  -0.0007
## 1995-02-08 3e-04 0.0059 0.2219 0.8460          0   0.0008
## 1995-02-09 3e-04 0.0058 0.1515 0.9846          0  -0.0021
## 1995-02-10 3e-04 0.0057 0.0580 1.0217          0   0.0026
##
## ..........................
##                 Mu  Sigma    Skew  Shape Shape(GIG) Realized
## 2009-01-23  0.0008 0.0273 -0.1161 1.2626          0   0.0054
## 2009-01-26  0.0003 0.0264 -0.1010 1.6388          0   0.0055
## 2009-01-27  0.0003 0.0255 -0.0890 1.7417          0   0.0109
## 2009-01-28  0.0001 0.0248 -0.0623 1.6254          0   0.0330
## 2009-01-29 -0.0004 0.0253  0.0364 0.6087          0  -0.0337
## 2009-01-30  0.0013 0.0258 -0.1027 1.3595          0  -0.0231
##
## Elapsed: 31.52 mins
##
vartable = rbind(
    as.data.frame(VaRTest(alpha = 0.01, actual = roll@forecast$VaR[, 'realized'],
        VaR = roll@forecast$VaR[, 'alpha(1%)'])[c(1, 2, 11, 12)], row.names = 'ACD(1%)'),
    as.data.frame(VaRTest(alpha = 0.05, actual = roll@forecast$VaR[, 'realized'],
        VaR = roll@forecast$VaR[, 'alpha(5%)'])[c(1, 2, 11, 12)], row.names = 'ACD(5%)'),
    as.data.frame(VaRTest(alpha = 0.01, actual = rollg@forecast$VaR[, 'realized'],
        VaR = rollg@forecast$VaR[, 'alpha(1%)'])[c(1, 2, 11, 12)], row.names = 'GARCH(1%)'),
    as.data.frame(VaRTest(alpha = 0.05, actual = rollg@forecast$VaR[, 'realized'],
        VaR = rollg@forecast$VaR[, 'alpha(5%)'])[c(1, 2, 11, 12)], row.names = 'GARCH(5%)'))
print(vartable, digits = 3)
##           expected.exceed actual.exceed cc.LRp       cc.Decision
## ACD(1%)                35            28  0.357 Fail to Reject H0
## ACD(5%)               176           203  1.000 Fail to Reject H0
## GARCH(1%)              35            26  0.215 Fail to Reject H0
## GARCH(5%)             176           191  0.571 Fail to Reject H0
print(rbind(
    as.data.frame(BerkowitzTest(qnorm(as.numeric(pit(roll))), tail.test = TRUE,
        alpha = 0.01)[c(1, 2, 4, 6)], row.names = 'ACD'),
    as.data.frame(BerkowitzTest(qnorm(as.numeric(pit(rollg))), tail.test = TRUE,
        alpha = 0.01)[c(1, 2, 4, 6)], row.names = 'GARCH')), digits = 4)
##          uLL    rLL     LRp            Decision
## ACD   -163.2 -164.3 0.35633 fail to reject NULL
## GARCH -157.8 -160.3 0.07761 fail to reject NULL
HL = cbind(HLTest(as.numeric(pit(roll)))$statistic, HLTest(as.numeric(pit(rollg)))$statistic)
colnames(HL) = c('ACD', 'GARCH')
print(HL, digits = 4)
##            ACD  GARCH
## M(1,1)  0.2349  5.787
## M(2,2)  6.2283 19.640
## M(3,3)  9.6166 26.599
## M(4,4) 11.2113 28.929
## M(1,2)  0.8564  4.306
## M(2,1)  8.4187 24.476
## W      18.3033 11.897

At the 1% and 5% coverage levels, neither the ACD nor the GARCH model can be rejected (the null being the correct number, and independence, of the VaR exceedances), with higher p-values for the ACD model. In terms of the goodness of fit of the tail of the density, using the tail test of Berkowitz (2001) at the 1% quantile, the ACD model generates a considerably higher p-value than the GARCH model, which can be rejected at the 10% level of significance. Note that the ‘pit’ method returns the probability integral transformation of the realized data given the conditional forecast density. Another test which uses the ‘pit’ method is that of Hong and Li (2005), a Portmanteau type test which evaluates the goodness of fit of the whole density. While both models are rejected as providing a ‘correct fit’ (W statistic), the moment based statistics (M statistics) suggest that the ACD model fits considerably better, as can be inferred from the lower values (the statistics are distributed as N(0,1)).
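
Since the statistics are asymptotically N(0,1) and the test rejects for large values, approximate upper tail p-values can be read off directly (a quick sketch):

# approximate upper tail p-values for the Hong-Li statistics above
print(round(1 - pnorm(HL), 4))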

Simulation

Unlike GARCH models, where there is one call to the random number generator of size n.sim, in ACD models there are n.sim calls of size 1 because of the time variation in the conditional density of the standardized residuals. Simulation may be carried out either from a fitted object (using acdsim) or from a specification object (using acdpath); the latter is not enabled for the mcsGARCH model. The following example provides a short illustration of the method and shows how to obtain equivalence between simulation from the fit and from the spec:

sim1 = acdsim(fit, n.sim = 250, m.sim = 5, rseed = 100:104)
# re-define the spec without variance targeting (only used in the estimation routine)
spec = acdspec(mean.model = list(armaOrder = c(0, 1)),
    variance.model = list(variance.targeting = FALSE),
    distribution.model = list(model = 'nig', skewOrder = c(1, 1, 1),
        shapeOrder = c(1, 1, 1), skewmodel = 'quad', shapemodel = 'pwl'))
setfixed(spec) <- as.list(coef(fit))
sim2 = acdpath(spec, n.sim = 250, m.sim = 5, rseed = 100:104,
    prereturns = tail(sp500ret[, 1], 1),
    presigma = as.numeric(tail(sigma(fit), 1)),
    preshape = as.numeric(tail(shape(fit), 1)),
    preskew = as.numeric(tail(skew(fit), 1)))
# check
c(all.equal(fitted(sim1), fitted(sim2)), all.equal(sigma(sim1), sigma(sim2)), all.equal(skew(sim1), skew(sim2)), all.equal(shape(sim1), shape(sim2)))
## [1] TRUE TRUE TRUE TRUE
S = skewness(sim1)
K = kurtosis(sim1)
R = fitted(sim1)
V = sigma(sim1)
par(mfrow = c(2, 2), mai = c(0.75, 0.75, 0.3, 0.3), cex.axis = 0.8)
matplot(S, type = 'l', col = 1:5, main = 'Simulated Skewness', xlab = '')
matplot(K, type = 'l', col = 1:5, main = 'Simulated Kurtosis (ex)', xlab = '')
matplot(apply(R, 2, 'cumsum'), type = 'l', col = 1:5, main = 'Simulated Paths',
ylab = '', xlab = '')
matplot(V, type = 'l', col = 1:5, main = 'Simulated Sigma', xlab = '')

[Figure racd_2: simulated skewness, kurtosis, paths and sigma]

Conclusion

Time varying higher moments within a GARCH modelling framework (ACD) provide a natural extension to time variation in the conditional mean and variance. Whether the marginal benefits of their inclusion justify the complexity of their estimation remains an open empirical question, which hopefully the racd package will enable to be researched in greater depth and with more transparency.

References

Berkowitz, J. (2001). Testing density forecasts, with applications to risk management. Journal of Business & Economic Statistics, 19(4), 465–474.

Hansen, B. E. (1994). Autoregressive conditional density estimation. International Economic Review, 35(3), 705–730.

Harvey, C. R., & Siddique, A. (1999). Autoregressive conditional skewness. Journal of Financial and Quantitative Analysis, 34(4), 465–497.

Hong, Y., & Li, H. (2005). Nonparametric specification testing for continuous-time models with applications to term structure of interest rates. Review of Financial Studies, 18(1), 37–84.

High Frequency GARCH: The multiplicative component GARCH (mcsGARCH) model

The interest in high frequency trading and models has grown exponentially in the last decade. While I have some doubts about the validity of any signals emerging from all the noise at higher and higher frequencies, I have nevertheless decided to look at the statistical modelling of intraday returns using GARCH models. Unlike daily and lower frequency returns, intraday data has certain characteristics which make the use of standard modelling approaches invalid. For the purpose of this article, I will use 1-min returns on Citigroup for the period 02-Jan-2008 to 29-Feb-2008, available to download here with permission from QuantQuote. The dataset is supplied dividend and split adjusted and with outliers removed. The intraday time range considered is 09:30 to 16:00, the official NYSE trading hours, and as in most studies on intraday data modelling, the first return of the day is removed. Daily data for the symbol is downloaded from Yahoo finance.

library(rugarch)
Sys.setenv(TZ = 'GMT')
library(quantmod)
R_i = read.csv('C_2008_1minret.csv')
R_i = xts(R_i[, 2], as.POSIXct(R_i[, 1]))
C = quantmod::getSymbols('C', from = '2000-01-01',auto.assign=FALSE)
C = quantmod::adjustOHLC(C, use.Adjusted = TRUE)
R_d = TTR::ROC(Cl(C), na.pad = FALSE)

Consider the correlogram of the absolute 1-min returns for Citigroup during the sample period in question:

par(cex.main = 0.85, col.main = 'black')
acf(abs(as.numeric(R_i)), lag.max = 4000, main = '1-min absolute returns\nCitigroup (2008 Jan-Feb)', cex.lab = 0.8)

[Figure mcsgarch_1: ACF of 1-min absolute returns, Citigroup Jan-Feb 2008]

The regular pattern is quite clear, repeating approximately every 390 periods (1 trading day) and showing an increase in volatility around the opening and closing times. GARCH, and more generally ARMA type models, can only accommodate an exponential decay, not the type of pattern seen here. Several approaches have been suggested in the literature to de-seasonalize the absolute returns, such as the flexible Fourier method of Andersen and Bollerslev (1997) and the periodic GARCH model of Bollerslev and Ghysels (1996). Unfortunately I have found none of these, or closely related models, particularly easy to work with. More recently, Engle and Sokalska (2012) (henceforth ES2012) introduced the multiplicative component GARCH model as a parsimonious alternative, which I have now included in the rugarch package (ver 1.01-6). This article discusses its implementation, challenges and the specific details of working with this model, which provides a rather simple but powerful way to use GARCH with regularly spaced intraday returns.

The Model

Consider the continuously compounded return \( r_{t,i} \), where \( t \) denotes the day and \( i \) the regularly spaced time interval at which the return was calculated. Under this model, the conditional variance is a multiplicative product of daily, diurnal and stochastic (intraday) components, so that the return process may be represented as:
\[ \begin{gathered}
r_{t,i} = \mu_{t,i} + \varepsilon_{t,i} \\
\varepsilon_{t,i} = \left( q_{t,i}\, \sigma_t\, s_i \right) z_{t,i}
\end{gathered} \]
where \( q_{t,i} \) is the stochastic intraday volatility, \( \sigma_t \) a daily, exogenously determined forecast volatility, \( s_i \) the diurnal volatility in each regularly spaced interval \( i \), and \( z_{t,i} \) the i.i.d. (0,1) standardized innovation which conditionally follows some appropriately chosen distribution. In ES2012, the forecast volatility \( \sigma_t \) is derived externally from a multifactor risk model, but it is just as possible to generate such forecasts from a daily GARCH model. The seasonal (diurnal) part of the process is defined as:
\[ s_i = \frac{1}{T}\sum_{t = 1}^{T} \left( \varepsilon_{t,i}^2 / \sigma_t^2 \right). \]

Dividing the residuals by the diurnal and daily volatility gives the normalized residuals (\( \bar\varepsilon \)):
\[ \bar{\varepsilon}_{t,i} = \varepsilon_{t,i} / \left( \sigma_t s_i \right) \]
which may then be used to generate the stochastic component of volatility \( q_{t,i} \) with GARCH dynamics. In the rugarch package, unlike the paper of ES2012, the conditional mean and variance equations (and hence the diurnal component on the residuals from the conditional mean filtration) are estimated jointly. Furthermore, and unlike ES2012, it is possible to include ARMAX dynamics in the conditional mean, though because of the complexity of the model and its use of time indices, ARCH-in-mean is not currently allowed. Finally, as an additional departure from ES2012, the diurnal component in the rugarch package is estimated using the median rather than the mean (since version 1.2-3), providing a more robust alternative given the type and length of data typically used. The next sections provide a demonstration of the model using the Citigroup dataset.
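
As an aside, the diurnal estimator can be sketched in a few standalone lines (an illustration of the formula for \( s_i \) above, not the package’s internal code; ‘eps’ is assumed to be an xts series of conditional mean residuals and ‘sigma_d’ the daily sigma aligned to each observation’s day):

diurnal_component = function(eps, sigma_d, use.median = TRUE) {
    # squared residuals normalized by the squared daily volatility
    z2 = as.numeric(eps)^2/sigma_d^2
    # group by the intraday time bucket i
    bucket = format(index(eps), '%H:%M:%S')
    # aggregate within each bucket (rugarch >= 1.2-3 uses the median)
    f = if (use.median) median else mean
    tapply(z2, bucket, f)
}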

Estimation

Unlike all other GARCH models implemented in rugarch, the mcsGARCH model requires the user to pass an xts object of the forecast daily variance of the data for the period under consideration.

# Find the unique days in the intraday sample
n = length(unique(format(index(R_i), '%Y-%m-%d')))
# define a daily spec
spec_d = ugarchspec(mean.model = list(armaOrder = c(1, 1)), variance.model = list(model = 'eGARCH', garchOrder = c(2, 1)), distribution = 'nig')
# use the ugarchroll method to create a rolling forecast for the period in
# question:
roll = ugarchroll(spec_d, data = R_d['/2008-02-29'], forecast.length = n, refit.every = 5, refit.window = 'moving', moving.size = 2000, calculate.VaR = FALSE)
# extract the sigma forecast
df = as.data.frame(roll)
f_sigma = as.xts(df[, 'Sigma', drop = FALSE])
# now estimate the intraday model
spec = ugarchspec(mean.model = list(armaOrder = c(1, 1), include.mean = TRUE), variance.model = list(model = 'mcsGARCH'), distribution = 'nig')
# DailyVar is the required xts object of the forecast daily variance
fit = ugarchfit(data = R_i, spec = spec, DailyVar = f_sigma^2)

The following plots show the decomposition of the volatility into its various components. The regular pattern of the Total volatility would have been impossible to capture using standard GARCH models. Note that although the volatility series are stored as xts objects, they cannot be properly plotted using the standard plot.xts function which is why I make use of the axis function with a numeric series.

ep <- axTicksByTime(fit@model$DiurnalVar)
par(mfrow = c(4, 1), mar = c(2.5, 2.5, 2, 1))
plot(as.numeric(fit@model$DiurnalVar^0.5), type = 'l', main = 'Sigma[Diurnal]', col = 'tomato1', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(as.numeric(fit@model$DailyVar^0.5), type = 'l', main = 'Sigma[Daily-Forecast]', col = 'tomato2', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(fit@fit$q, type = 'l', main = 'Sigma[Stochastic]', col = 'tomato3', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(as.numeric(sigma(fit)), type = 'l', main = 'Sigma[Total]', col = 'tomato4', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()

[Figure mcsgarch_2: diurnal, daily forecast, stochastic and total sigma components]

Forecasting

The biggest challenge in writing code for the forecast was dealing with the alignment and matching of times, particularly future dates/times, since the model depends on the diurnal component which is time specific. As a key component of the forecast routine, I wrote a little function which creates a sequence of times/dates, similar to seq.POSIXt, but with the extra option of defining the time interval which dictates the start and end of the trading day. For example, given the opening and closing times of the NYSE, 09:30 to 16:00, I would like to be able to create a set of n future periods starting from T0 within only this interval, and excluding weekends. The function is defined as:
ftseq(T0, length.out, by, interval, exclude.weekends = TRUE)
where T0 is a POSIXct date/time for the starting period, length.out the number of periods ahead to consider, by the difftime (e.g. “mins”), and interval a character vector of the start and end times of the trading day; T0 must belong to this interval and be a multiple of by.

# create the interval
interval = format(seq(as.POSIXct('2008-01-02 09:31:00'), as.POSIXct('2008-01-02 16:00:00'), by = 'min'), '%H:%M:%S')
#
ForcTime = ftseq(T0 = as.POSIXct('2008-02-29 16:00:00'), length.out = 390 * 2 + 1, by = 'mins', interval = interval)
tail(ForcTime, 25)
##  [1] '2008-03-04 15:37:00 GMT' '2008-03-04 15:38:00 GMT'
##  [3] '2008-03-04 15:39:00 GMT' '2008-03-04 15:40:00 GMT'
##  [5] '2008-03-04 15:41:00 GMT' '2008-03-04 15:42:00 GMT'
##  [7] '2008-03-04 15:43:00 GMT' '2008-03-04 15:44:00 GMT'
##  [9] '2008-03-04 15:45:00 GMT' '2008-03-04 15:46:00 GMT'
## [11] '2008-03-04 15:47:00 GMT' '2008-03-04 15:48:00 GMT'
## [13] '2008-03-04 15:49:00 GMT' '2008-03-04 15:50:00 GMT'
## [15] '2008-03-04 15:51:00 GMT' '2008-03-04 15:52:00 GMT'
## [17] '2008-03-04 15:53:00 GMT' '2008-03-04 15:54:00 GMT'
## [19] '2008-03-04 15:55:00 GMT' '2008-03-04 15:56:00 GMT'
## [21] '2008-03-04 15:57:00 GMT' '2008-03-04 15:58:00 GMT'
## [23] '2008-03-04 15:59:00 GMT' '2008-03-04 16:00:00 GMT'
## [25] '2008-03-05 09:31:00 GMT'

As can be seen, the first time is immediately after T0 (T0 itself is not included), the sequence only runs within the defined interval, and weekends are optionally skipped (the default). This comes in very handy in the forecast routine.

Like the estimation method, the forecast routine also requires that you supply the forecast daily volatility for the period under consideration. However, because of the possible use of out.sample in the estimation routine, it is not known beforehand whether this will eventually be needed, since there may be enough intraday data in the out.sample period, and a combination of n.ahead+n.roll chosen, such that this user supplied forecast is not required. If you do not supply it and it is needed, the routine will check and let you know with an error message. Finally, the presence of the diurnal component complicates the long run unconditional forecast of the underlying variance, so the uncvariance and related methods will always return the value for the component variance rather than the actual total variance (unlike the csGARCH model, for which both are returned).
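
As a quick illustration of the last point, using the fitted object from the estimation section, uncvariance returns the long run variance of the stochastic component \( q_{t,i} \) alone:

# long run variance of the stochastic component q only; the diurnal and
# daily components are not embedded in this number
uncvariance(fit)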

fit2 = ugarchfit(data = R_i, spec = spec, DailyVar = f_sigma^2, out.sample = 300)
# won't supply DailyVar to get an error
forc = ugarchforecast(fit2, n.ahead = 10, n.roll = 299)
## Error: DailyVar requires forecasts for: 2008-03-03 ...resubmit.

Notice the error, which indicates that we need the forecast for 2008-03-03. Since we don’t have it, we estimate the daily model up to the cutoff and generate the 1-ahead forecast with ugarchforecast:

fit_d = ugarchfit(spec_d, data = R_d['2002/2008-02-29'])
forc_d = ugarchforecast(fit_d, n.ahead = 1)
f_sigma = xts(as.numeric(sigma(forc_d)), as.POSIXct('2008-03-03'))
# intraday forecast
forc = ugarchforecast(fit2, n.ahead = 10, n.roll = 299, DailyVar = f_sigma^2)
show(forc)
##
## *------------------------------------*
## *       GARCH Model Forecast         *
## *------------------------------------*
## Model: mcsGARCH
## Horizon: 10
## Roll Steps: 299
## Out of Sample: 10
##
## 0-roll forecast [T0=2008-02-29 11:00:00]:
##          Series Sigma[Total] Sigma[Stochastic]
## T+1  -7.681e-06    0.0015132            0.8702
## T+2  -7.664e-06    0.0057046            0.8718
## T+3  -7.657e-06    0.0055551            0.8734
## T+4  -7.654e-06    0.0058834            0.8750
## T+5  -7.653e-06    0.0063295            0.8766
## T+6  -7.653e-06    0.0013036            0.8781
## T+7  -7.653e-06    0.0012846            0.8797
## T+8  -7.653e-06    0.0011227            0.8812
## T+9  -7.652e-06    0.0008177            0.8827
## T+10 -7.652e-06    0.0009259            0.8842

Note that plot methods for this model are not yet fully implemented for reasons described previously.

Simulation

Unlike standard GARCH simulation, the interval time is important in intraday GARCH since we are generating paths which follow very specific, regularly sampled time points. Additionally, the simulated or forecast daily variance again needs to be supplied for the simulation period under consideration. This is an xts object, and can optionally have m.sim columns so that each independent simulation is based on residuals adjusted by an independent simulation of the daily variance. The following example simulates 10,000 points at 1-min intervals into the future and illustrates the effect of the seasonality component:

T0 = tail(index(R_i), 1)
# model$dtime contains the set of unique interval points in the dataset
# (and available from all rugarch objects for the mcsGARCH model)
# model$dvalues contains the diurnal component for each interval
ftime = ftseq(T0, length.out = 10000, by = fit@model$modeldata$period, interval = fit@model$dtime)
dtime = unique(format(ftime, '%Y-%m-%d'))
#
sim_d = ugarchsim(fit_d, n.sim = length(dtime), m.sim = 1)
var_sim = xts(as.matrix(sigma(sim_d)^2), as.POSIXct(dtime))
sim = ugarchsim(fit, n.sim = 10000, n.start = 0, m.sim = 1, DailyVar = var_sim, rseed = 10)
#
ep <- axTicksByTime(sim@simulation$DiurnalVar)
par(mfrow = c(4, 1), mar = c(2.5, 2.5, 2, 1))
plot(as.numeric(sim@simulation$DiurnalVar^0.5), type = 'l', main = 'Sigma[Diurnal]', col = 'tomato1', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(as.numeric(sim@simulation$DailyVar^0.5), type = 'l', main = 'Sigma[Daily-Simulated]', col = 'tomato2', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(sim@simulation$qSim[, 1], type = 'l', main = 'Sigma[Stochastic]', col = 'tomato3', xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()
plot(as.numeric(sigma(sim)), type = 'l', main = 'Sigma[Total]', col = 'tomato4',
xaxt = 'n', ylab = 'sigma', xlab = ' ')
axis(1, at = ep, labels = names(ep), tick = TRUE)
grid()

[Figure mcsgarch_3: simulated diurnal, daily, stochastic and total sigma components]

A rolling backtest and Value at Risk

The ugarchroll function is quite useful for testing a model’s adequacy in a backtest application, and the code below illustrates this for the mcsGARCH model for the data and period under consideration.

n = length(index(R_d['2008-01-01/2008-03-01']))
spec_d = ugarchspec(mean.model = list(armaOrder = c(1, 1)), variance.model = list(model = 'sGARCH'), distribution = 'std')
roll = ugarchroll(spec_d, data = R_d['/2008-02-29'], forecast.length = n, refit.every = 5, refit.window = 'moving', moving.size = 2000, calculate.VaR = FALSE)
df = as.data.frame(roll)
f_sigma = as.xts(df[, 'Sigma', drop = FALSE])
spec = ugarchspec(mean.model = list(armaOrder = c(1, 1), include.mean = TRUE), variance.model = list(model = 'mcsGARCH'), distribution = 'std')
roll = ugarchroll(spec, data = R_i, DailyVar = f_sigma^2, forecast.length = 3000, refit.every = 390, refit.window = 'moving', moving.size = 3000, calculate.VaR = TRUE)
# Generate the 1% VaR report
report(roll)
## VaR Backtest Report
## ===========================================
## Model: mcsGARCH-std
## Backtest Length: 3000
## ==========================================
## alpha:             1%
## Expected Exceed:   30
## Actual VaR Exceed: 33
## Actual %:          1.1%
##
## Unconditional Coverage (Kupiec)
## Null-Hypothesis: Correct Exceedances
## LR.uc Statistic: 0.294
## LR.uc Critical:  3.841
## LR.uc p-value:   0.588
## Reject Null:     NO
##
## Conditional Coverage (Christoffersen)
## Null-Hypothesis: Correct Exceedances and
##                  Independence of Failures
## LR.cc Statistic: 1.028
## LR.cc Critical:  5.991
## LR.cc p-value:   0.598
## Reject Null:     NO

Not bad at all. Who says GARCH models are not good!?

The VaRplot function has been adjusted to work nicely with intraday data, as shown below. The spikes observed in the VaR are the result of the seasonal component around the opening of trading.

D = as.POSIXct(rownames(roll@forecast$VaR))
VaRplot(0.01, actual = xts(roll@forecast$VaR[, 3], D), VaR = xts(roll@forecast$VaR[,1], D))

[Figure mcsgarch_4: intraday 1% VaR exceedances]

Further Developments

It is quite ‘easy’ to add additional GARCH flavors to the multiplicative model, such as the eGARCH, GJR etc., and I might do so in due course, time permitting. Another possible direction for expansion would be to treat the diurnal effect separately for each day of the week; I estimate that this would not be too hard to implement, with only a marginal slowdown in estimation as a result of the increased lookup time for matching times and days (a sketch of the idea follows).
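
Purely as a sketch of that idea (this is not implemented in rugarch), the diurnal buckets would be keyed jointly on weekday and intraday time:

# hypothetical day-of-week diurnal buckets: '%u' is the weekday (1-7), so
# each weekday gets its own seasonal curve; squared returns are used here
# as a crude proxy for the properly normalized residuals
bucket = format(index(R_i), '%u %H:%M:%S')
s_di = tapply(as.numeric(R_i)^2, bucket, median)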

Finally, this model is not ‘plug-and-play’: it requires some thought in the use of times & dates, and in the preparation of the intraday returns. The garbage-in garbage-out rule clearly applies.

Please remember that questions about this or other issues with rugarch should be addressed to the r-sig-finance mailing list.

References

Andersen, T. G., & Bollerslev, T. (1997). Intraday periodicity and volatility persistence in financial markets. Journal of Empirical Finance, 4(2), 115–158.

Bollerslev, T., & Ghysels, E. (1996). Periodic autoregressive conditional heteroscedasticity. Journal of Business & Economic Statistics, 14(2), 139–151.

Engle, R. F., & Sokalska, M. E. (2012). Forecasting intraday volatility in the US equity market. Multiplicative component GARCH. Journal of Financial Econometrics, 10(1), 54–83.

What’s new in rugarch (ver 1.01-5)

Since the last release of rugarch on CRAN (ver 1.0-16), there have been many changes and new features in the development version of the package (ver 1.01-5). First, development of the package (and the svn) has moved from r-forge to google code. Second, the package now features exclusive use of xts based time series for input data, and also outputs some of the results as xts. The sections that follow highlight some of the key changes to the package which I hope will make it easier to work with.

Extractor Methods

sigma, fitted and residuals

The main extractor method is no longer as.data.frame; instead, sigma and fitted will now extract the conditional sigma (GARCH) and mean (ARFIMA) values from all objects. The old extractor methods are now mostly deprecated, with the exception of certain classes where it still makes sense to use them (i.e. uGARCHroll, uGARCHdistribution, uGARCHboot).

library(rugarch)
data(sp500ret)
fit = ugarchfit(ugarchspec(), sp500ret, out.sample = 10)
c(is(sigma(fit)), is(fitted(fit)), is(residuals(fit)))
## [1] 'xts' 'xts' 'xts'
##
plot(xts(fit@model$modeldata$data, fit@model$modeldata$index), auto.grid = FALSE,
minor.ticks = FALSE, main = 'S&P500 Conditional Mean')
lines(fitted(fit), col = 2)
grid()

[Figure new_garch_1: S&P500 conditional mean]

plot(xts(abs(fit@model$modeldata$data), fit@model$modeldata$index), auto.grid = FALSE,
minor.ticks = FALSE, main = 'S&P500 Conditional Sigma', col = 'grey')
lines(sigma(fit), col = 'steelblue')
grid()

[Figure new_garch_2: S&P500 conditional sigma]

Apart from getting the nice xts charts, rugarch can now handle any type of time-formatted data (which can be coerced to xts) including intraday.

A key change has also been made to the output of the forecast class (uGARCHforecast), which merits particular attention. Because rugarch allows rolling and unconditional forecasts to be combined, it is rather challenging to meaningfully output the results with simple extractor methods such as sigma and fitted. The output in this case is an n.ahead by (n.roll+1) matrix, where the column headings are the T+0 dates (which always exist within a forecast framework), whilst the rows are labelled T+1, T+2, …, T+n.ahead. The following example illustrates this, and shows some simple solutions for portraying n.ahead dates which are completely in the future (i.e. not in the available out of sample testing period).

f = ugarchforecast(fit, n.ahead = 25, n.roll = 9)
sf = sigma(f)
head(sf)
##     2009-01-15 2009-01-16 2009-01-20 2009-01-21 2009-01-22 2009-01-23  2009-01-26 2009-01-27 2009-01-28 2009-01-29
## T+1    0.02176    0.02078    0.02587    0.02730    0.02646    0.02519     0.02400    0.02301    0.02388    0.02487
## T+2    0.02171    0.02073    0.02580    0.02722    0.02638    0.02512     0.02393    0.02295    0.02382    0.02480
## T+3    0.02166    0.02069    0.02573    0.02714    0.02630    0.02505     0.02387    0.02289    0.02375    0.02473
## T+4    0.02160    0.02064    0.02565    0.02706    0.02623    0.02498     0.02380    0.02283    0.02369    0.02466
## T+5    0.02155    0.02059    0.02558    0.02697    0.02615    0.02491     0.02374    0.02277    0.02363    0.02459
## T+6    0.02150    0.02054    0.02550    0.02689    0.02607    0.02484     0.02368    0.02271    0.02356    0.02452

Therefore, the column headings are the T+0 dates, i.e. the dates at which the forecasts were made. There are 10 columns since the n.roll option is zero based. Creating an xts representation of only the rolling output is very simple:

print(sf[1, ])
## 2009-01-15 2009-01-16 2009-01-20 2009-01-21 2009-01-22 2009-01-23 2009-01-26 2009-01-27 2009-01-28 2009-01-29
##    0.02176    0.02078    0.02587    0.02730    0.02646    0.02519    0.02400    0.02301    0.02388    0.02487
##
print(f.roll <- as.xts(sf[1, ]))
##               [,1]
## 2009-01-15 0.02176
## 2009-01-16 0.02078
## 2009-01-20 0.02587
## 2009-01-21 0.02730
## 2009-01-22 0.02646
## 2009-01-23 0.02519
## 2009-01-26 0.02400
## 2009-01-27 0.02301
## 2009-01-28 0.02388
## 2009-01-29 0.02487

This gives the T+0 dates. If you want to generate the actual T+1 rolling dates, use the move function on the T+0 dates, which effectively moves the dates 1 period ahead and appends an extra day at the end (this date is not in the sample set and is therefore ‘generated’):

print(f.roll1 <- xts(sf[1, ], move(as.POSIXct(colnames(sf)), by = 1)))
##               [,1]
## 2009-01-16 0.02176
## 2009-01-20 0.02078
## 2009-01-21 0.02587
## 2009-01-22 0.02730
## 2009-01-23 0.02646
## 2009-01-26 0.02519
## 2009-01-27 0.02400
## 2009-01-28 0.02301
## 2009-01-29 0.02388
## 2009-01-30 0.02487

For the n.ahead forecasts, I have included in the rugarch package another simple date function called generatefwd, which can be used to generate future non-weekend dates. To use it, note that the model slot of any rugarch class object created by a method with a data argument retains that data, together with its format and periodicity, which can be used as follows:

args(generatefwd)
## function (T0, length.out = 1, by = 'days')
## NULL
print(fit@model$modeldata$period)
## Time difference of 1 days
##
i = 1
DT0 = generatefwd(T0 = as.POSIXct(colnames(sf)[i]), length.out = 25, by = fit@model$modeldata$period)
print(fT0 <- xts(sf[, i], DT0))
##               [,1]
## 2009-01-16 0.02176
## 2009-01-19 0.02171
## 2009-01-20 0.02166
## 2009-01-21 0.02160
## 2009-01-22 0.02155
## 2009-01-23 0.02150
## 2009-01-26 0.02145
## 2009-01-27 0.02139
## 2009-01-28 0.02134
## 2009-01-29 0.02129
## 2009-01-30 0.02124
## 2009-02-02 0.02119
## 2009-02-03 0.02114
## 2009-02-04 0.02109
## 2009-02-05 0.02104
## 2009-02-06 0.02099
## 2009-02-09 0.02094
## 2009-02-10 0.02089
## 2009-02-11 0.02084
## 2009-02-12 0.02079
## 2009-02-13 0.02075
## 2009-02-16 0.02070
## 2009-02-17 0.02065
## 2009-02-18 0.02060
## 2009-02-19 0.02055

The same applies to the fitted method. This concludes the part on the uGARCHforecast class and the changes which have been made. More details can always be found in the documentation (if you are wondering how to obtain documentation for an S4 class object such as uGARCHforecast, append ‘-class’ to the name and enclose the whole thing in quotes, e.g. help(‘uGARCHforecast-class’)).

quantile and pit

The quantile method extracts the conditional quantiles of a rugarch object, subject to an extra option (probs) as in the S3 class method in stats, while pit calculates and returns the probability integral transformation of a fitted, filtered or rolling object (objects guaranteed to have ‘realized’ data to work with).

head(quantile(fit, c(0.01, 0.025, 0.05)))
##             q[0.01] q[0.025]  q[0.05]
## 1987-03-10 -0.02711 -0.02276 -0.01901
## 1987-03-11 -0.02673 -0.02247 -0.01881
## 1987-03-12 -0.02549 -0.02141 -0.01791
## 1987-03-13 -0.02449 -0.02058 -0.01722
## 1987-03-16 -0.02350 -0.01972 -0.01647
## 1987-03-17 -0.02271 -0.01903 -0.01586
##
head(pit(fit))
##               pit
## 1987-03-10 0.7581
## 1987-03-11 0.4251
## 1987-03-12 0.5973
## 1987-03-13 0.3227
## 1987-03-16 0.2729
## 1987-03-17 0.9175

These should prove particularly useful in functions which require the quantiles or PIT transformation, such as the risk (VaRTest and VaRDurTest) and misspecification tests (BerkowitzTest and HLTest).
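
For example, a minimal in-sample sketch using the fitted object from above (out of sample forecasts would normally be used for a proper test):

# realized in-sample values aligned to the fitted period
actual = fit@model$modeldata$data[1:length(fitted(fit))]
# conditional coverage test using the 1% conditional quantile as the VaR
print(VaRTest(0.01, actual = actual, VaR = as.numeric(quantile(fit, 0.01)))$cc.LRp)
# Berkowitz tail test on the PIT values
print(BerkowitzTest(qnorm(as.numeric(pit(fit))), tail.test = TRUE, alpha = 0.01)$LRp)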

Distribution Functions

rugarch exports a set of functions for working with certain measures of the conditional distributions included in the package. Some of these functions have recently been re-written to take advantage of vectorization, whilst others have been re-written to use analytical representations (e.g. the skewness and kurtosis measures for the sstd and jsu distributions). The latest release also includes some functions which provide a graphical visualization of the higher moment features of the different distributions, as demonstrated below:

distplot('nig')
## Warning: skew lower bound below admissible region...adjusting to
## distribution lower bound.

[Figure new_garch_3: distplot for the nig distribution]

The distplot function, available for all skewed and/or shaped distributions, provides a 3D visualization of the interaction of the skew and shape parameters in determining the skewness and kurtosis. In cases where the distribution is only skewed or only shaped, a simpler 2D plot is created, as in the case of the std distribution:

distplot('std')

[Figure new_garch_4: distplot for the std distribution]

Another interesting plot is that of a distribution’s ‘authorized domain’, which shows the region of skewness-kurtosis pairs for which a density exists. This is related to the Hamburger moment problem and the maximum attainable skewness (S) given kurtosis (K) (see Widder (1946)). The skdomain function returns the values needed to visualize these relationships, as demonstrated below:

# plot the first one
XYnig = skdomain('nig', legend = FALSE)
XYsstd = skdomain('sstd', plot = FALSE)
XYhyp = skdomain('ghyp', plot = FALSE, lambda = 1)
XYjsu = skdomain('jsu', plot = FALSE)
# the values returned are the bottom half of the domain (which is
# symmetric)
lines(XYjsu$Kurtosis, XYjsu$Skewness, col = 3, lty = 2)
lines(XYjsu$Kurtosis, -XYjsu$Skewness, col = 3, lty = 2)
lines(XYsstd$Kurtosis, XYsstd$Skewness, col = 4, lty = 3)
lines(XYsstd$Kurtosis, -XYsstd$Skewness, col = 4, lty = 3)
lines(XYhyp$Kurtosis, XYhyp$Skewness, col = 5, lty = 4)
lines(XYhyp$Kurtosis, -XYhyp$Skewness, col = 5, lty = 4)
legend('topleft', c('MAX', 'NIG', 'JSU', 'SSTD', 'HYP'), col = c(2, 'steelblue', 3, 4, 5), lty = c(1, 1, 2, 3, 4), bty = 'n')

[Figure new_garch_5: skewness-kurtosis authorized domains]

From the plot, it is clear that the skew-student has the widest possible combination of skewness and kurtosis for values of kurtosis less than ~6, whilst the NIG has the widest combination for values greater than ~8. It is interesting to note that the Hyperbolic distribution is not defined for values of kurtosis greater than ~9.

Model Reduction

The reduce function eliminates non-significant coefficients from a model (subject to a pvalue cut-off argument) by fixing them to zero (in rugarch this is equivalent to eliminating them) and re-estimating the model with the remaining parameters, as the following example illustrates:

spec = ugarchspec(mean.model = list(armaOrder = c(4, 4)), variance.model = list(garchOrder = c(2, 2)))
fit = ugarchfit(spec, sp500ret[1:1000, ])
round(fit@fit$robust.matcoef, 4)
##         Estimate  Std. Error  t value Pr(>|t|)
## mu        0.0005      0.0004    1.250   0.2111
## ar1       0.2250      0.0164   13.677   0.0000
## ar2       1.6474      0.0175   94.046   0.0000
## ar3      -0.0661      0.0188   -3.517   0.0004
## ar4      -0.8323      0.0164  -50.636   0.0000
## ma1      -0.2328      0.0065  -35.825   0.0000
## ma2      -1.6836      0.0039 -430.423   0.0000
## ma3       0.0980      0.0098    9.953   0.0000
## ma4       0.8426      0.0075  112.333   0.0000
## omega     0.0000      0.0000    1.527   0.1267
## alpha1    0.2153      0.1265    1.702   0.0888
## alpha2    0.0000      0.0000    0.000   1.0000
## beta1     0.3107      0.1149    2.705   0.0068
## beta2     0.3954      0.0819    4.828   0.0000
##
# eliminate the non-significant coefficients (here mu, omega and alpha2 at the 10% cut-off)
fit = reduce(fit, pvalue = 0.1)
print(fit)
##
## *---------------------------------*
## *          GARCH Model Fit        *
## *---------------------------------*
##
## Conditional Variance Dynamics
## -----------------------------------
## GARCH Model  : sGARCH(2,2)
## Mean Model   : ARFIMA(4,0,4)
## Distribution : norm
##
## Optimal Parameters
## ------------------------------------
##         Estimate  Std. Error   t value Pr(>|t|)
## mu       0.00000          NA        NA       NA
## ar1      0.23153    0.006583   35.1698 0.000000
## ar2      1.63558    0.018294   89.4050 0.000000
## ar3     -0.07264    0.018277   -3.9743 0.000071
## ar4     -0.82110    0.014031  -58.5206 0.000000
## ma1     -0.23883    0.009659  -24.7253 0.000000
## ma2     -1.68338    0.005401 -311.6821 0.000000
## ma3      0.11434    0.017035    6.7121 0.000000
## ma4      0.83198    0.013140   63.3179 0.000000
## omega    0.00000          NA        NA       NA
## alpha1   0.14532    0.008997   16.1530 0.000000
## alpha2   0.00000          NA        NA       NA
## beta1    0.26406    0.086397    3.0564 0.002240
## beta2    0.58961    0.088875    6.6342 0.000000
##
## Robust Standard Errors:
##         Estimate  Std. Error  t value Pr(>|t|)
## mu       0.00000          NA       NA       NA
## ar1      0.23153    0.061687   3.7533 0.000175
## ar2      1.63558    0.093553  17.4829 0.000000
## ar3     -0.07264    0.024012  -3.0251 0.002485
## ar4     -0.82110    0.028395 -28.9168 0.000000
## ma1     -0.23883    0.037806  -6.3172 0.000000
## ma2     -1.68338    0.021478 -78.3771 0.000000
## ma3      0.11434    0.066989   1.7068 0.087851
## ma4      0.83198    0.052093  15.9711 0.000000
## omega    0.00000          NA       NA       NA
## alpha1   0.14532    0.057150   2.5428 0.010996
## alpha2   0.00000          NA       NA       NA
## beta1    0.26406    0.129725   2.0356 0.041793
## beta2    0.58961    0.103879   5.6759 0.000000
##
## LogLikelihood : 3091
## [truncated remaining output]

References

Hansen, P. R., Lunde, A., & Nason, J. M. (2011). The model confidence set. Econometrica, 79(2), 453-497.
Widder, D. V. (1946). The Laplace Transform. Princeton University Press, Princeton, New Jersey.

Does anything NOT beat the GARCH(1,1)?

In their paper on GARCH model comparison, Hansen and Lunde (2005) present evidence that, among 330 different models, and using daily data on the DM/$ rate and IBM stock returns, no model does significantly better at predicting volatility (based on a realized measure) than the GARCH(1,1), for an out of sample period of about 250 days (between 1992/1993 for the currency and 1999/2000 for the stock). In this demonstration I instead ask whether anything does NOT beat the GARCH(1,1), using a range of operational tests such as VaR exceedances, and model comparison via the model confidence set of Hansen, Lunde and Nason (2011) on a loss function based on the conditional quantiles. Using the S&P 500 index, and the latest 1500 days as the out of sample forecast period, I find that there are few models which do not beat the GARCH(1,1)-Normal.

Data Setup

Given that it is the most widely cited and followed equity index in the world, the S&P 500 is a natural choice for testing the hypothesis, and for the purposes of this demonstration the log returns of the SPY index tracker were used. Starting on 23-01-2007, a rolling forecast and re-estimation scheme was adopted, with a moving window size of 1500 periods and a re-estimation period of 50. The data was obtained from Yahoo finance and adjusted for dividends and splits.

library(rugarch)
library(parallel)
library(shape)
library(xts)
library(quantmod)
getSymbols('SPY', from = '1997-01-01')
SPY = adjustOHLC(SPY, use.Adjusted = TRUE)
R = ROC(Cl(SPY), type = 'continuous', na.pad = FALSE)

The following plot highlights the importance of the forecast period under consideration, which includes the credit crisis and provides an exceptional testing ground for risk model backtesting.

plot(Cl(SPY), col = 'black', screens = 1, blocks = list(start.time = paste(head(tail(index(SPY),1500), 1)), end.time = paste(index(tail(SPY, 1))), col = 'whitesmoke'), main = 'SPY', minor.ticks = FALSE)

[Figure beat_garch_1: SPY close with the out of sample period highlighted]

Model Comparison

For the backtest the following GARCH models were chosen:

  • GARCH
  • GJR
  • EGARCH
  • APARCH
  • component GARCH
  • AVGARCH
  • NGARCH
  • NAGARCH

with four distributions: the Normal, the Skew-Student (Fernandez and Steel version), the Normal Inverse Gaussian (NIG) and Johnson’s SU (JSU). The conditional mean was based on an ARMA(2,1) model, while the GARCH order was set at (1,1) and (2,1), giving a total of 64 model combinations. To create the rolling forecasts, the ugarchroll function was used, which has been completely re-written in the latest version of rugarch and now also includes a resume method for resuming from non-converged estimation windows.

model = c('sGARCH', 'gjrGARCH', 'eGARCH', 'apARCH', 'csGARCH', 'fGARCH', 'fGARCH', 'fGARCH')
submodel = c(NA, NA, NA, NA, NA, 'AVGARCH', 'NGARCH', 'NAGARCH')
spec1 = vector(mode = 'list', length = 16)
for (i in 1:8) spec1[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(model = model[i], submodel = if (i > 5) submodel[i] else NULL))
for (i in 9:16) spec1[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(garchOrder = c(2, 1), model = model[i - 8], submodel = if ((i - 8) > 5) submodel[i - 8] else NULL))
spec2 = vector(mode = 'list', length = 16)
for (i in 1:8) spec2[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(model = model[i], submodel = if (i > 5) submodel[i] else NULL), distribution = 'sstd')
for (i in 9:16) spec2[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(garchOrder = c(2, 1), model = model[i - 8], submodel = if ((i - 8) > 5) submodel[i - 8] else NULL), distribution = 'sstd')
spec3 = vector(mode = 'list', length = 16)
for (i in 1:8) spec3[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(model = model[i], submodel = if (i > 5) submodel[i] else NULL), distribution = 'nig')
for (i in 9:16) spec3[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(garchOrder = c(2, 1), model = model[i - 8], submodel = if ((i - 8) > 5) submodel[i - 8] else NULL), distribution = 'nig')
spec4 = vector(mode = 'list', length = 16)
for (i in 1:8) spec4[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(model = model[i], submodel = if (i > 5) submodel[i] else NULL), distribution = 'jsu')
for (i in 9:16) spec4[[i]] = ugarchspec(mean.model = list(armaOrder = c(2, 1)), variance.model = list(garchOrder = c(2, 1), model = model[i - 8], submodel = if ((i - 8) > 5) submodel[i - 8] else NULL), distribution = 'jsu')
spec = c(spec1, spec2, spec3, spec4)
cluster = makePSOCKcluster(15)
clusterExport(cluster, c('spec', 'R'))
clusterEvalQ(cluster, library(rugarch))
# Out of sample estimation
n = length(spec)
fitlist = vector(mode = 'list', length = n)
for (i in 1:n) {
    tmp = ugarchroll(spec[[i]], R, n.ahead = 1, forecast.length = 1500,
        refit.every = 50, refit.window = 'moving', window.size = 1500,
        solver = 'hybrid', calculate.VaR = FALSE, cluster = cluster,
        keep.coef = FALSE)
    # resume any estimation windows which failed to converge
    if (!is.null(tmp@model$noncidx)) {
        tmp = resume(tmp, solver = 'solnp', fit.control = list(scale = 1),
            solver.control = list(tol = 1e-07, delta = 1e-06), cluster = cluster)
    }
    # store the forecast density, or NA if still non-converged after resuming
    if (!is.null(tmp@model$noncidx)) {
        fitlist[[i]] = NA
    } else {
        fitlist[[i]] = as.data.frame(tmp, which = 'density')
    }
}

The as.data.frame method with option "density" returns a data.frame containing the forecast conditional mean, sigma, any skew or shape parameters for each estimation/forecast window, and the realized data for the period. Because I have used a time based data series, the returned data.frame has the forecast dates as its rownames.
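For example, the first ten rows for the first model can be inspected as follows (assuming that model converged and was not flagged as NA):

# Inspect the first ten density forecasts of the first model (sGARCH(1,1)-Normal)
head(fitlist[[1]], 10)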

Date         Mu       Sigma    Skew  Shape  Shape(GIG)  Realized
23/01/2007   0.00066  0.00496  0     0      0            0.00295
24/01/2007   0.00058  0.00487  0     0      0            0.00800
25/01/2007   0.00025  0.00517  0     0      0           -0.01182
26/01/2007   0.00074  0.00604  0     0      0           -0.00088
29/01/2007   0.00087  0.00587  0     0      0           -0.00056
30/01/2007   0.00086  0.00571  0     0      0            0.00518
31/01/2007   0.00063  0.00567  0     0      0            0.00666
01/02/2007   0.00033  0.00575  0     0      0            0.00599
02/02/2007   0.00010  0.00580  0     0      0            0.00141
05/02/2007   0.00010  0.00564  0     0      0            0.00024

In order to compare the models, the conditional coverage test for VaR exceedances of Christoffersen (1998) and the tail test of Berkowitz (2001) were used. The following code calculates the quantiles and applies the relevant tests described above.

vmodels = c('sGARCH(1,1)', 'gjrGARCH(1,1)', 'eGARCH(1,1)', 'apARCH(1,1)',
    'csGARCH(1,1)', 'AVGARCH(1,1)', 'NGARCH(1,1)', 'NAGARCH(1,1)',
    'sGARCH(2,1)', 'gjrGARCH(2,1)', 'eGARCH(2,1)', 'apARCH(2,1)',
    'csGARCH(2,1)', 'AVGARCH(2,1)', 'NGARCH(2,1)', 'NAGARCH(2,1)')
modelnames = c(paste(vmodels, '-N', sep = ''), paste(vmodels, '-sstd', sep = ''), paste(vmodels, '-nig', sep = ''), paste(vmodels, '-jsu', sep = ''))
q1 = q5 = px = matrix(NA, ncol = 64, nrow = 1500)
dist = c(rep('norm', 16), rep('sstd', 16), rep('nig', 16), rep('jsu', 16))
# use apply since nig and gh distributions are not yet vectorized
for (i in 1:64) {
q1[, i] = as.numeric(apply(fitlist[[i]], 1, function(x) qdist(dist[i], 0.01, mu = x['Mu'], sigma = x['Sigma'], skew = x['Skew'], shape = x['Shape'])))
q5[, i] = as.numeric(apply(fitlist[[i]], 1, function(x) qdist(dist[i], 0.05, mu = x['Mu'], sigma = x['Sigma'], skew = x['Skew'], shape = x['Shape'])))
px[, i] = as.numeric(apply(fitlist[[i]], 1, function(x) pdist(dist[i], x['Realized'], mu = x['Mu'], sigma = x['Sigma'], skew = x['Skew'], shape = x['Shape'])))
}
VaR1cc = apply(q1, 2, function(x) VaRTest(0.01, actual = fitlist[[1]][, 'Realized'], VaR = x)$cc.LRp)
VaR5cc = apply(q5, 2, function(x) VaRTest(0.05, actual = fitlist[[1]][, 'Realized'], VaR = x)$cc.LRp)
BT1 = apply(px, 2, function(x) BerkowitzTest(qnorm(x), tail.test = TRUE, alpha = 0.01)$LRp)
BT5 = apply(px, 2, function(x) BerkowitzTest(qnorm(x), tail.test = TRUE, alpha = 0.05)$LRp)
VTable = cbind(VaR1cc, VaR5cc, BT1, BT5)
rownames(VTable) = modelnames
colnames(VTable) = c('VaR.CC(1%)', 'VaR.CC(5%)', 'BT(1%)', 'BT(5%)')

From a summary look at the head and tail of the table, it is quite obvious that the GARCH(1,1)-Normal model is NOT hard to beat. More generally, the (1,1)-Normal models fare worst, which is not surprising since financial markets, particularly in the crisis prone period under study, cannot be well modelled without fat tailed and asymmetric distributions. A more interesting result is that the (2,1)-Non Normal models beat the (1,1)-Non Normal models on average. Given the near universal adoption of the (1,1) order, this is a little surprising. It should however be noted that 1500 points were used for each estimation window; with less data, parameter uncertainty is likely to induce noise in a higher order GARCH model.
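The Top 10 and Bottom 10 tables below were obtained by ranking the models on the test p-values; a simple sketch of such a ranking follows (the exact ordering criterion behind the tables is an assumption):

# Rank models on the VaR conditional coverage p-values (illustrative criterion)
ord = order(VTable[, 'VaR.CC(5%)'], VTable[, 'VaR.CC(1%)'], decreasing = TRUE)
head(VTable[ord, ], 10)  # Top 10
tail(VTable[ord, ], 10)  # Bottom 10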

Top 10

Model               VaR.CC(1%)  VaR.CC(5%)  BT(1%)  BT(5%)
AVGARCH(2,1)-sstd   0.01        0.27        0.01    0.01
AVGARCH(2,1)-nig    0.17        0.18        0.12    0.03
NAGARCH(2,1)-sstd   0.11        0.18        0.10    0.02
gjrGARCH(2,1)-sstd  0.60        0.14        0.41    0.07
NAGARCH(2,1)-nig    0.25        0.14        0.13    0.04
NAGARCH(2,1)-jsu    0.36        0.14        0.23    0.05
gjrGARCH(2,1)-nig   0.60        0.14        0.45    0.13
gjrGARCH(1,1)-jsu   0.60        0.14        0.48    0.11
gjrGARCH(2,1)-jsu   0.60        0.14        0.55    0.12
gjrGARCH(1,1)-nig   0.60        0.14        0.39    0.13

Bottom 10

Model           VaR.CC(1%)  VaR.CC(5%)  BT(1%)  BT(5%)
sGARCH(2,1)-N   0           0           0       0
NGARCH(2,1)-N   0           0           0       0
NGARCH(1,1)-N   0           0           0       0
csGARCH(1,1)-N  0           0           0       0
csGARCH(2,1)-N  0           0           0       0
apARCH(2,1)-N   0           0           0       0
sGARCH(1,1)-N   0           0           0       0
apARCH(1,1)-N   0           0           0       0
eGARCH(1,1)-N   0           0           0       0
eGARCH(2,1)-N   0           0           0       0

The following plots provide a clearer (hopefully!) visual representation of the results.
[Figure: beat_garch_2 — barplots of VaR and Berkowitz test p-values for the non-Normal models]

The models with the Normal conditional distribution are excluded, since none of them pass the 5% significance cutoff. All other barplots start at 5%, so that any bar which shows up indicates acceptance of the null hypothesis (for both tests, the null is phrased so that acceptance indicates a correctly specified model). A careful examination of all the models indicates that the GJR (1,1) and (2,1) models, under any of the 3 skewed and shaped conditional distributions, pass all 4 tests. The SSTD distribution fares worst among the three, with the NIG and JSU appearing to provide an equally good fit. The final plot shows the VaR performance of the GARCH(1,1)-Normal and the GJR(2,1)-NIG for the period.
[Figure: beat_garch_3 — VaR performance of the GARCH(1,1)-Normal and GJR(2,1)-NIG models]
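Plots of this type can be generated with rugarch's VaRplot function; a minimal sketch for the GJR(2,1)-NIG model (the conversion to xts assumes the data.frame rownames are standard ISO dates):

# VaR exceedance plot at the 1% coverage level for the GJR(2,1)-NIG model
library(xts)
j = which(modelnames == 'gjrGARCH(2,1)-nig')
D = as.Date(rownames(fitlist[[1]]))
VaRplot(0.01, actual = xts(fitlist[[1]][, 'Realized'], D), VaR = xts(q1[, j], D))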

MCS Test on Loss Function

It is often said that the VaR exceedance tests are rather crude and require a lot of data in order to offer any significant insight into model differences. An alternative method for evaluating competing models is based on the comparison of some relevant loss function using a test such as the Model Confidence Set (MCS) of Hansen, Lunde and Nason (2011). In keeping with the focus on the tail of the distribution, a VaR loss function at the 1% and 5% coverage levels was used, described in González-Rivera et al. (2004) and defined as:
$${Q_{loss}} \equiv {P^{-1}}\sum\limits_{t = R}^T {\left( {\alpha - 1\left( {{r_{t + 1}} < VaR_{t + 1}^\alpha } \right)} \right)} \left( {{r_{t + 1}} - VaR_{t + 1}^\alpha } \right)$$
where \( P = T - R \) is the out-of-sample forecast horizon, \( T \) the total horizon to include in estimation, and \( R \) the start of the out-of-sample forecast. This is an asymmetric loss function, linearly penalizing exceedances more heavily by \( (1-\alpha) \). The function is not yet exported in rugarch but can still be called as follows:

LossV5 = apply(q5, 2, function(x) rugarch:::.varloss(0.05, fitlist[[1]][, 'Realized'], x))
LossV1 = apply(q1, 2, function(x) rugarch:::.varloss(0.01, fitlist[[1]][, 'Realized'], x))
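For reference, the loss itself is simple enough to write directly; a minimal sketch which should be equivalent, up to implementation details, to the internal function used above:

# Per-observation VaR loss of Gonzalez-Rivera et al. (2004): exceedances are
# weighted by (1 - alpha), non-exceedances by alpha (a sketch, not the
# internal rugarch implementation)
varloss = function(alpha, actual, VaR) (alpha - (actual < VaR)) * (actual - VaR)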

The MCS function is also not yet included in rugarch, but may be in a future version. The barplots below show the result of running the MCS test using a stationary bootstrap with 5000 replications.
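In the meantime, one externally available implementation is the MCSprocedure function in the MCS package on CRAN; a sketch (the function and its arguments belong to that package, not rugarch, and its bootstrap scheme may differ from the one used here):

# Model Confidence Set on the 1% VaR loss series, 95% confidence level
library(MCS)
colnames(LossV1) = modelnames
mcs1 = MCSprocedure(Loss = LossV1, alpha = 0.05, B = 5000, statistic = 'Tmax')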
[Figure: beat_garch_4 — MCS test results for the VaR loss function]

[Figure: beat_garch_5 — MCS test results for the VaR loss function]

While no models are excluded from the model confidence set at the 95% significance level, the ranking of the top and bottom models is somewhat preserved, with the (2,1) NIG and JSU models at the top. Based on this test, the eGARCH(2,1)-NIG appears to be the superior model at both coverage rates, with the GARCH(1,1)-Normal at the very bottom, having a 7% and 20% probability of belonging to the set at the 1% and 5% coverage rates respectively.

Conclusion

The normality assumption does not realistically capture the observed market dynamics, and neither does the GARCH(1,1)-N model: few models failed to beat it in this application. In fact, higher order GARCH models were shown to provide significant outperformance on a range of tail related measures, and distributions such as the NIG and JSU appeared to provide the most realistic representation of the observed dynamics.

References

Berkowitz, J. (2001). Testing density forecasts, with applications to risk management. Journal of Business & Economic Statistics, 19(4), 465-474.
Christoffersen, P. F. (1998). Evaluating interval forecasts. International Economic Review, 39(4), 841-862.
González-Rivera, G., Lee, T. H., & Mishra, S. (2004). Forecasting volatility: A reality check based on option pricing, utility function, value-at-risk, and predictive likelihood. International Journal of Forecasting, 20(4), 629-645.
Hansen, P. R., & Lunde, A. (2005). A forecast comparison of volatility models: Does anything beat a GARCH(1,1)? Journal of Applied Econometrics, 20(7), 873-889.
Hansen, P. R., Lunde, A., & Nason, J. M. (2011). The model confidence set. Econometrica, 79(2), 453-497.

The GARCH-DCC Model and 2-stage DCC(MVT) estimation.

This short demonstration illustrates the use of the DCC model and its methods using the rmgarch package, and in particular an alternative method for 2-stage DCC estimation in the presence of the MVT distribution shape (nuisance) parameter. The theoretical background and representation of the model is detailed in the package’s vignette. The dataset and period used are purely for illustration purposes.

library(rmgarch)
library(parallel)
data(dji30retw)
Dat = dji30retw[, 1:10, drop = FALSE]
# define a DCC specification object: 2-stage estimation should usually use the
# Normal distribution for the first stage (see below for the MVT alternative)
xspec = ugarchspec(mean.model = list(armaOrder = c(1, 1)), variance.model = list(garchOrder = c(1,1), model = 'eGARCH'), distribution.model = 'norm')
uspec = multispec(replicate(10, xspec))
spec1 = dccspec(uspec = uspec, dccOrder = c(1, 1), distribution = 'mvnorm')
spec1a = dccspec(uspec = uspec, dccOrder = c(1, 1), model='aDCC', distribution = 'mvnorm')
spec2 = dccspec(uspec = uspec, dccOrder = c(1, 1), distribution = 'mvlaplace')
spec2a = dccspec(uspec = uspec, dccOrder = c(1, 1), model='aDCC', distribution = 'mvlaplace')

Since multiple DCC models are being estimated on the same dataset with the same first stage dynamics, we can estimate the first stage once and pass it to the dccfit routine (rather than re-estimating it every time dccfit is called):

cl = makePSOCKcluster(10)
multf = multifit(uspec, Dat, cluster = cl)

Next, the second stage of the DCC model is estimated.

fit1 = dccfit(spec1, data = Dat, fit.control = list(eval.se = TRUE), fit = multf, cluster = cl)
fit1a = dccfit(spec1a, data = Dat, fit.control = list(eval.se = TRUE), fit = multf, cluster = cl)
fit2 = dccfit(spec2, data = Dat, fit.control = list(eval.se = TRUE), fit = multf, cluster = cl)
fit2a = dccfit(spec2a, data = Dat, fit.control = list(eval.se = TRUE), fit = multf, cluster = cl)

To fit the DCC (MVT) model in practice, one must either treat the first stage as QML, or jointly estimate the common shape parameter in a single stage. The example below uses an alternative, iterative approach to approximate the common shape parameter.

# First Estimate a QML first stage model (multf already estimated). Then
# estimate the second stage shape parameter.
spec3 = dccspec(uspec = uspec, dccOrder = c(1, 1), distribution = 'mvt')
fit3 = dccfit(spec3, data = Dat, fit.control = list(eval.se = FALSE), fit = multf)
# obtain the multivariate shape parameter:
mvt.shape = rshape(fit3)
# Plug that into a fixed first stage model and iterate :
mvt.l = rep(0, 6)
mvt.s = rep(0, 6)
mvt.l[1] = likelihood(fit3)
mvt.s[1] = mvt.shape
for (i in 1:5) {
xspec = ugarchspec(mean.model = list(armaOrder = c(1, 1)), variance.model = list(garchOrder = c(1,1), model = 'eGARCH'), distribution.model = 'std', fixed.pars = list(shape = mvt.shape))
spec3 = dccspec(uspec = multispec(replicate(10, xspec)), dccOrder = c(1,1), distribution = 'mvt')
fit3 = dccfit(spec3, data = Dat, solver = 'solnp', fit.control = list(eval.se = FALSE))
mvt.shape = rshape(fit3)
mvt.l[i + 1] = likelihood(fit3)
mvt.s[i + 1] = mvt.shape
}
# Finally, once more, fixing the second stage shape parameter, and
# evaluating the standard errors
xspec = ugarchspec(mean.model = list(armaOrder = c(1, 1)), variance.model = list(garchOrder = c(1,1), model = 'eGARCH'), distribution.model = 'std', fixed.pars = list(shape = mvt.shape))
spec3 = dccspec(uspec = multispec(replicate(10, xspec)), dccOrder = c(1, 1), distribution = 'mvt', fixed.pars = list(shape = mvt.shape))
fit3 = dccfit(spec3, data = Dat, solver = 'solnp', fit.control = list(eval.se = TRUE), cluster = cl)

The plot of the change in the likelihood and shape parameter shows that only a few iterations are necessary to converge to a stable value.
[Figure: dccmvt_1 — log-likelihood and shape parameter across the iterations]
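The plot can be reproduced from the saved values along these lines:

# Trace the log-likelihood and shape parameter across the fixed-point iterations
par(mfrow = c(1, 2))
plot(mvt.l, type = 'b', xlab = 'iteration', ylab = 'log-likelihood')
plot(mvt.s, type = 'b', xlab = 'iteration', ylab = 'shape')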

Since for the Student distribution the excess kurtosis is \( 6/(\nu - 4) \), the estimated shape parameter implies an excess kurtosis of about 1.06, as a quick check confirms:
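# Implied excess kurtosis from the estimated MVT shape parameter (about 9.64)
6 / (rshape(fit3) - 4)

The exercise is now repeated for the asymmetric DCC (MVT) model.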

xspec = ugarchspec(mean.model = list(armaOrder = c(1,1)), variance.model = list(garchOrder = c(1,1), model = "eGARCH"),  distribution.model = "norm")
spec3a  = dccspec(uspec = multispec( replicate(10, xspec) ), dccOrder = c(1,1), distribution = "mvt", model="aDCC")
fit3a = dccfit(spec3a, data = Dat, fit.control = list(eval.se=FALSE), fit = multf)
# obtain the multivariate shape parameter:
mvtx.shape = rshape(fit3a)
# Plug that into a fixed first stage model and iterate :
mvtx.l = rep(0, 6)
mvtx.s = rep(0, 6)
mvtx.l[1] = likelihood(fit3a)
mvtx.s[1] = mvtx.shape
for(i in 1:5){
xspec = ugarchspec(mean.model = list(armaOrder = c(1,1)), variance.model = list(garchOrder = c(1,1), model = "eGARCH"),  distribution.model = "std", fixed.pars = list(shape=mvtx.shape))
spec3a = dccspec(uspec = multispec( replicate(10, xspec) ), dccOrder = c(1,1), model="aDCC", distribution = "mvt")
fit3a = dccfit(spec3a, data = Dat, solver = "solnp", fit.control = list(eval.se=FALSE))
mvtx.shape = rshape(fit3a)
mvtx.l[i+1] = likelihood(fit3a)
mvtx.s[i+1] = mvtx.shape
}
# Finally, once more, fixing the second stage shape parameter, and evaluating the standard errors
xspec = ugarchspec(mean.model = list(armaOrder = c(1,1)), variance.model = list(garchOrder = c(1,1), model = "eGARCH"),  distribution.model = "std", fixed.pars = list(shape=mvtx.shape))
spec3a = dccspec(uspec = multispec( replicate(10, xspec) ), dccOrder = c(1,1), model="aDCC", distribution = "mvt", fixed.pars=list(shape=mvtx.shape))
fit3a = dccfit(spec3a, data = Dat, solver = "solnp", fit.control = list(eval.se=TRUE), cluster = cl)

The table below displays the summary of the estimated models, where the stars next to the coefficients indicate the level of significance (*** 1%, ** 5%, * 10%). The asymmetry parameter is everywhere insignificant and, not surprisingly, the MVT distribution provides the best overall fit, even after accounting for the one extra parameter estimated.

##           DCC-MVN   aDCC-MVN    DCC-MVL   aDCC-MVL      DCC-MVT     aDCC-MVT
## a      0.00784*** 0.00639*** 0.00618***  0.0055***   0.00665***    0.00623***
## b      0.97119*** 0.96956*** 0.97624*** 0.97468***   0.97841***    0.97784***
## g                    0.00439               0.00237                 0.00134
## shape                                                9.63947***    9.72587***
## LogLik      22812      22814      22858      22859        23188         23188
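The log-likelihood row of the table, at least, is straightforward to reproduce:

# Collect the log-likelihoods of the six estimated DCC models
fits = list(`DCC-MVN` = fit1, `aDCC-MVN` = fit1a, `DCC-MVL` = fit2,
    `aDCC-MVL` = fit2a, `DCC-MVT` = fit3, `aDCC-MVT` = fit3a)
sapply(fits, likelihood)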

To complete the demonstration, the plots below illustrate some of the dynamic correlations from the different models:
[Figure: dccmvt_2 — dynamic conditional correlations from the different models]
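The correlations themselves are extracted with the rcor method; for example, for the first pair of series under the DCC-MVT model:

# rcor returns an N x N x T array of conditional correlations
R1 = rcor(fit3)
plot(R1[1, 2, ], type = 'l', xlab = 'week',
    ylab = paste(colnames(Dat)[1], '-', colnames(Dat)[2], 'correlation'))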

…and remember to terminate the cluster object:

stopCluster(cl)