PerformanceAnalytics (version 2.0.4)

VaR: calculate various Value at Risk (VaR) measures

Description

Calculates Value-at-Risk (VaR) for univariate, component, and marginal cases using a variety of analytical methods.

Usage

VaR(
  R = NULL,
  p = 0.95,
  ...,
  method = c("modified", "gaussian", "historical", "kernel"),
  clean = c("none", "boudt", "geltner", "locScaleRob"),
  portfolio_method = c("single", "component", "marginal"),
  weights = NULL,
  mu = NULL,
  sigma = NULL,
  m3 = NULL,
  m4 = NULL,
  invert = TRUE,
  SE = FALSE,
  SE.control = NULL
)

Arguments

R

an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns

p

confidence level for calculation, default p=.95

...

any other passthru parameters

method

one of "modified","gaussian","historical", "kernel", see Details.

clean

method for data cleaning through Return.clean. Current options are "none", "boudt", "geltner", or "locScaleRob".

portfolio_method

one of "single","component","marginal" defining whether to do univariate, component, or marginal calc, see Details.

weights

portfolio weighting vector, default NULL, see Details

mu

If univariate, mu is the mean of the series. Otherwise mu is the vector of means of the return series, default NULL, see Details

sigma

If univariate, sigma is the variance of the series. Otherwise sigma is the covariance matrix of the return series, default NULL, see Details

m3

If univariate, m3 is the skewness of the series. Otherwise m3 is the coskewness matrix (or vector with unique coskewness values) of the returns series, default NULL, see Details

m4

If univariate, m4 is the excess kurtosis of the series. Otherwise m4 is the cokurtosis matrix (or vector with unique cokurtosis values) of the return series, default NULL, see Details

invert

TRUE/FALSE whether to invert the VaR measure, see Details.

SE

TRUE/FALSE whether to output the standard errors of the estimates of the risk measures, default FALSE.

SE.control

Control parameters for the computation of standard errors. These should be specified using the RPESE.control function.

Background

This function provides several estimation methods for the Value at Risk (typically written as VaR) of a return series and the Component VaR of a portfolio. Take care to capitalize VaR in the commonly accepted manner, to avoid confusion with var (variance) and VAR (vector auto-regression). VaR is an industry standard for measuring downside risk. For a return series, VaR is defined as a high quantile (e.g. the 95% quantile) of the negative value of the returns. This quantile needs to be estimated. With a sufficiently large data set, you may choose to utilize the empirical quantile calculated using quantile. More efficient estimates of VaR are obtained if a (correct) assumption is made on the return distribution, such as the normal distribution. If your return series is skewed and/or has excess kurtosis, Cornish-Fisher estimates of VaR can be more appropriate. For the VaR of a portfolio, it is also of interest to decompose total portfolio VaR into the risk contributions of each of the portfolio components. For the above-mentioned VaR estimators, such a decomposition is possible in a financially meaningful way.

Univariate VaR estimation methods

The VaR at a probability level \(p\) (e.g. 95%) is the \(p\)-quantile of the negative returns, or equivalently, the negative value of the \(c=1-p\) quantile of the returns. For a set of returns with a sufficiently long history, the per-period Value at Risk is simply the empirical quantile of the negative period returns:

$$VaR=q_{p}$$

where \(q_{p}\) is the empirical \(p\)-quantile of the negative return series (e.g. the 99% quantile when \(p=0.99\)).

This method is also sometimes called “historical VaR”, as it is by definition ex post analysis of the return distribution, and may be accessed with method="historical".
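
As a quick check of the conventions, historical VaR can be reproduced approximately from quantile(); this is an illustrative sketch, not the package's exact implementation, and small differences are possible depending on the quantile interpolation rule used internally (the default invert=TRUE reports VaR as a negative number):

library(PerformanceAnalytics)
data(edhec)
r <- as.numeric(edhec[, 1])   # Convertible Arbitrage returns

# empirical 5% quantile of the returns for p = 0.95
quantile(r, probs = 0.05)

# should be close to the package's historical estimate (negative by default)
VaR(edhec[, 1], p = 0.95, method = "historical")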

When you don't have a sufficiently long set of returns to use non-parametric or historical VaR, or wish to more closely model an ideal distribution, it is common to use a parametric estimate based on the distribution. J.P. Morgan's RiskMetrics parametric mean-VaR was published in 1994, and this methodology for estimating parametric mean-VaR has become what most literature generally refers to as “VaR” and what we have implemented as VaR. See Return to RiskMetrics: Evolution of a Standard (https://www.msci.com/documents/10199/dbb975aa-5dc2-4441-aa2d-ae34ab5f0945).

Parametric mean-VaR does a better job of accounting for the tails of the distribution by more precisely estimating the shape of the distribution's tails near the risk quantile. The most common estimate assumes a normal (or Gaussian) distribution \(R\sim N(\mu,\sigma)\) for the return series. In this case, estimation of VaR requires the mean return \(\bar{R}\), the return distribution, and the variance of the returns \(\sigma\). In the most common case, parametric VaR is thus calculated by

$$\sigma=variance(R)$$

$$VaR=-\bar{R} - \sqrt{\sigma} \cdot z_{c} $$

where \(z_{c}\) is the \(c\)-quantile of the standard normal distribution, given in R by qnorm(c). This estimator may be accessed with method="gaussian".
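
The Gaussian formula above can be verified by hand from the sample mean and standard deviation; the sketch below is illustrative, and small differences from method="gaussian" are possible if the internal moment estimators differ slightly (the default invert=TRUE reports VaR as a negative number):

library(PerformanceAnalytics)
data(edhec)
r <- as.numeric(edhec[, 1])
p <- 0.95
z_c <- qnorm(1 - p)                    # c-quantile of the standard normal, about -1.645

hand_VaR <- -(mean(r) + sd(r) * z_c)   # positive loss number, as in the formula above
-hand_VaR                              # negated to match the default invert=TRUE output
VaR(edhec[, 1], p = p, method = "gaussian")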

Other forms of parametric mean-VaR estimation utilize a different distribution for the distribution of losses to better account for the possible fat-tailed nature of downside risk. The now-archived package VaR contained methods for simulating and estimating lognormal and generalized Pareto distributions to overcome some of the problems with nonparametric or parametric mean-VaR calculations on a limited sample size or on potentially fat-tailed distributions. There was also a VaR.backtest function to apply simulation methods to create a more robust estimate of the potential distribution of losses. Less commonly, a covariance matrix of multiple risk factors may be applied. This functionality may be incorporated here in the future.

The limitations of mean Value-at-Risk are well covered in the literature. The limitations of traditional mean-VaR are all related to the use of a symmetrical distribution function. Use of simulations, resampling, or Pareto distributions all help in making a more accurate prediction, but they are still flawed for assets with significantly non-normal (skewed or kurtotic) distributions. Zangari (1996) and Favre and Galeano (2002) provide a modified VaR calculation that takes the higher moments of non-normal distributions (skewness, kurtosis) into account through the use of a Cornish-Fisher expansion, and collapses to standard (traditional) mean-VaR if the return stream follows a normal distribution. This measure is now widely cited and used in the literature, and is usually referred to as “Modified VaR” or “Modified Cornish-Fisher VaR”. They arrive at their modified VaR calculation in the following manner:

$$z_{cf}=z_{c}+\frac{(z_{c}^{2}-1)S}{6}+\frac{(z_{c}^{3}-3z_{c})K}{24}-\frac{(2z_{c}^{3}-5z_{c})S^{2}}{36}$$

$$\mbox{Cornish-Fisher } VaR = -\bar{R} - \sqrt{\sigma} \cdot z_{cf}$$

where \(S\) is the skewness of \(R\) and \(K\) is the excess kurtosis of \(R\).

Cornish-Fisher VaR collapses to traditional mean-VaR when returns are normally distributed; the legacy traditional and Cornish-Fisher VaR functions in this package are therefore implemented as wrappers for this VaR function. The Cornish-Fisher expansion also naturally encompasses much of the variability in returns that could be uncovered by more computationally intensive techniques such as resampling or Monte-Carlo simulation. This is the default method for the VaR function, and may be accessed by setting method="modified".
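
The Cornish-Fisher adjustment can likewise be sketched by hand from sample skewness and excess kurtosis; the simple moment estimators below are illustrative and may differ slightly from those used internally by method="modified":

library(PerformanceAnalytics)
data(edhec)
r <- as.numeric(edhec[, 1])
p <- 0.95
z <- qnorm(1 - p)

# simple sample skewness and excess kurtosis
S <- mean((r - mean(r))^3) / sd(r)^3
K <- mean((r - mean(r))^4) / sd(r)^4 - 3

# Cornish-Fisher quantile and modified VaR, following the formulas above
z_cf <- z + (z^2 - 1) * S / 6 + (z^3 - 3 * z) * K / 24 - (2 * z^3 - 5 * z) * S^2 / 36
hand_mVaR <- -(mean(r) + sd(r) * z_cf)

-hand_mVaR                              # default invert=TRUE sign convention
VaR(edhec[, 1], p = p, method = "modified")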

Favre and Galeano also utilize modified VaR in a modified Sharpe Ratio as the return/risk measure for their portfolio optimization analysis, see SharpeRatio.modified for more information.

Component VaR

By setting portfolio_method="component" you may calculate the risk contribution of each element of the portfolio. The return from the function in this case will be a list with three components: the univariate portfolio VaR, the scalar contribution of each component to the portfolio VaR (these will sum to the portfolio VaR), and a percentage risk contribution (which will sum to 100%).

Both the numerical and percentage component contributions to VaR may contain both positive and negative contributions. A negative contribution to Component VaR indicates a portfolio risk diversifier: increasing that position's weight will reduce the overall portfolio VaR.

If a weighting vector is not passed in via weights, the function will assume an equal weighted (neutral) portfolio.
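
A minimal sketch of a component calculation: no weights are supplied, so an equal-weighted portfolio is assumed, and the returned list is inspected with str() (element names may vary slightly across package versions, so they are not hard-coded here); the scalar contributions should sum to the portfolio VaR and the percentage contributions to 100%:

library(PerformanceAnalytics)
data(edhec)

# component decomposition of modified VaR for an equal-weighted portfolio
comp <- VaR(edhec, p = 0.95, method = "modified", portfolio_method = "component")

# three pieces: the portfolio VaR, the per-asset contributions,
# and the percentage contributions
str(comp)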

Multiple risk decomposition approaches have been suggested in the literature. A naive approach is to set the risk contribution equal to the stand-alone risk. This approach is overly simplistic and neglects important diversification effects of the units being exposed differently to the underlying risk factors. An alternative approach is to measure the VaR contribution as the weight of the position in the portfolio times the partial derivative of the portfolio VaR with respect to the component weight. $$C_i \mbox{VaR} = w_i \frac{ \partial \mbox{VaR} }{\partial w_i}.$$ Because the portfolio VaR is homogeneous of degree one in the position sizes, Euler's theorem implies that the portfolio VaR is the sum of these risk contributions. Gourieroux (2000) shows that for VaR, this mathematical decomposition of portfolio risk has a financial meaning. It equals the negative value of the asset's expected contribution to the portfolio return when the portfolio return equals the negative portfolio VaR:

$$C_i \mbox{VaR} = -E\left[ w_i r_{i} | r_{p} = - \mbox{VaR}\right]$$

For the decomposition of Gaussian VaR, the estimated mean and covariance matrix are needed. For the decomposition of modified VaR, estimates of the coskewness and cokurtosis matrices are also needed. If \(r\) denotes the \(N \times 1\) return vector and \(\mu\) is the mean vector, then the \(N \times N^2\) co-skewness matrix is $$ m_{3} = E\left[ (r - \mu)(r - \mu)' \otimes (r - \mu)'\right]$$ The \(N \times N^3\) co-kurtosis matrix is

$$ m_{4} = E\left[ (r - \mu)(r - \mu)' \otimes (r - \mu)'\otimes (r - \mu)' \right] $$

where \(\otimes\) stands for the Kronecker product. The matrices can be estimated through the functions skewness.MM and kurtosis.MM. More efficient estimators have been proposed by Martellini and Ziemann (2010) and will be implemented in the future.
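
To make the dimensions concrete, the sample co-skewness and co-kurtosis matrices can be built directly from the definitions above using Kronecker products; comoments() below is a naive illustrative helper (not a package function), looping over observations for clarity rather than speed:

# naive sample co-moment estimators following the definitions above
comoments <- function(R) {
  X  <- as.matrix(R)
  Xc <- sweep(X, 2, colMeans(X))          # centered returns, T x N
  N  <- ncol(Xc)
  m3 <- matrix(0, N, N^2)                 # N x N^2 co-skewness matrix
  m4 <- matrix(0, N, N^3)                 # N x N^3 co-kurtosis matrix
  for (t in seq_len(nrow(Xc))) {
    x  <- Xc[t, , drop = FALSE]           # 1 x N row of centered returns
    m3 <- m3 + t(x) %*% kronecker(x, x)
    m4 <- m4 + t(x) %*% kronecker(kronecker(x, x), x)
  }
  list(m3 = m3 / nrow(Xc), m4 = m4 / nrow(Xc))
}

library(PerformanceAnalytics)
data(edhec)
cm <- comoments(edhec[, 1:4])
dim(cm$m3)   # 4 x 16
dim(cm$m4)   # 4 x 64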

As discussed among others in Cont, Deguest and Scandolo (2007), it is important that the estimation of the VaR measure is robust to single outliers. This is especially the case for modified VaR and its decomposition, since they use higher order moments. By default, the portfolio moments are estimated by their sample counterparts. If clean="boudt" then the most extreme \(1-p\) fraction of observations are winsorized if they are detected as being outliers. For more information, see Boudt, Peterson and Croux (2008) and Return.clean. If your data consist of returns for highly illiquid assets, then clean="geltner" may be more appropriate to reduce distortion caused by autocorrelation, see Return.Geltner for details.
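
The effect of cleaning is easiest to see side by side; a short illustrative comparison of raw and cleaned modified VaR for a single series (clean="boudt" may require the robustbase package to be installed):

library(PerformanceAnalytics)
data(edhec)

# outlier-robust cleaning mainly affects the higher-moment (modified) estimator
VaR(edhec[, 1], p = 0.95, method = "modified", clean = "none")
VaR(edhec[, 1], p = 0.95, method = "modified", clean = "boudt")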

Epperlein and Smillie (2006) introduced a non-parametric kernel estimator for component risk contributions, which is available via method="kernel" and portfolio_method="component".
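
A minimal call for the kernel-based component contributions (an illustrative sketch; as noted above, method="kernel" is intended for use with portfolio_method="component", and the default equal weights are used here):

library(PerformanceAnalytics)
data(edhec)

# kernel estimator of the component risk contributions (Epperlein and Smillie, 2006)
VaR(edhec, p = 0.95, method = "kernel", portfolio_method = "component")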

Marginal VaR

Different papers call this different things. In the Denton and Jayaraman paper referenced here, this calculation is called Incremental VaR. We have chosen the more common usage of calling the difference in VaR between the portfolio without the instrument and the portfolio with the instrument the “difference at the margin”, thus the name Marginal VaR. This is incredibly confusing, and hasn't been resolved in the literature at this time.

Simon Keel and David Ardia (2009) attempt to reconcile some of the definitional issues and address some of the shortcomings of this measure in their working paper titled “Generalized Marginal Risk”. Hopefully their improved Marginal Risk measures may be included here in the future.

References

Boudt, Kris, Peterson, Brian, and Christophe Croux. 2008. Estimation and decomposition of downside risk for portfolios with non-normal returns. The Journal of Risk, vol. 11, 79-103.

Cont, Rama, Deguest, Romain and Giacomo Scandolo. Robustness and sensitivity analysis of risk measurement procedures. Financial Engineering Report No. 2007-06, Columbia University Center for Financial Engineering.

Denton, M. and Jayaraman, J.D. Incremental, Marginal, and Component VaR. SunGard, 2004.

Epperlein, E., Smillie, A. Cracking VaR with kernels. RISK, 2006, vol. 19, 70-74.

Gourieroux, Christian, Laurent, Jean-Paul and Olivier Scaillet. Sensitivity analysis of value at risk. Journal of Empirical Finance, 2000, Vol. 7, 225-245.

Keel, Simon and Ardia, David. Generalized marginal risk. Aeris CAPITAL discussion paper.

Laurent Favre and Jose-Antonio Galeano. Mean-Modified Value-at-Risk Optimization with Hedge Funds. The Journal of Alternative Investments, Fall 2002, vol. 5.

Martellini, L. and Ziemann, V., 2010. Improved estimates of higher-order comoments and implications for portfolio selection. Review of Financial Studies, 23(4):1467-1502.

Return to RiskMetrics: Evolution of a Standard https://www.msci.com/documents/10199/dbb975aa-5dc2-4441-aa2d-ae34ab5f0945

Zangari, Peter. A VaR Methodology for Portfolios that include Options. 1996. RiskMetrics Monitor, First Quarter, 4-12.

Rockafellar, Terry and Uryasev, Stanislav. Optimization of Conditional VaR. The Journal of Risk, 2000, vol. 2, 21-41.

Dowd, Kevin. Measuring Market Risk, John Wiley and Sons, 2010.

Jorion, Philippe. Value at Risk: The New Benchmark for Managing Financial Risk. 3rd Edition, McGraw-Hill, 2006.

Hallerbach, Winfried. "Decomposing Portfolio Value-at-Risk: A General Analysis", 2003. The Journal of Risk, vol. 5(2).

Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and Value-at-Risk: Their Estimation Error, Decomposition, and Optimization", Bank of Japan.

See Also

SharpeRatio.modified, chart.VaRSensitivity, Return.clean

Examples

if(!( Sys.info()[['sysname']]=="Windows") ){
# if on Windows, cut and paste this example

    data(edhec)

    # first do the standard historical VaR calc
    VaR(edhec, p=.95, method="historical")

    # now use Gaussian
    VaR(edhec, p=.95, method="gaussian")

    # now use the modified Cornish-Fisher calc to take the non-normal distribution into account
    VaR(edhec, p=.95, method="modified")

    # now use p=.99
    VaR(edhec, p=.99)
    # or the equivalent alpha=.01
    VaR(edhec, p=.01)

    # now with outliers squished
    VaR(edhec, clean="boudt")

    # add Component VaR for the equal weighted portfolio
    VaR(edhec, clean="boudt", portfolio_method="component")

} # end Windows check
