
qrmtools (version 0.0-4)

VaR_bounds: Best and Worst Value-at-Risk for Given Margins

Description

Compute the best and worst Value-at-Risk for given marginal distributions.

Usage

crude_VaR_bounds(alpha, qF, ...)
VaR_bounds_hom(alpha, d, method=c("Wang", "Wang.Par", "dual"),
               interval=NULL, tol=NULL, ...)
dual_bound(s, d, pF, tol=.Machine$double.eps^0.25, ...)

rearrange(X, tol=0, tol.type=c("relative", "absolute"), max.ra=Inf,
          method=c("worst", "best"), sample=TRUE, is.sorted=FALSE, trace=FALSE)
RA(alpha, qF, N, abstol=0, max.ra=Inf, method=c("worst", "best"), sample=TRUE)
ARA(alpha, qF, N.exp=seq(8, 19, by=1), reltol=c(0, 0.01),
    max.ra=10*length(qF), method=c("worst", "best"), sample=TRUE)

Arguments

alpha
Value-at-Risk confidence level (e.g., 0.99).
d
Dimension (number of risk factors; $\ge 2$).
qF
The marginal quantile function (in the homogeneous case) or a d-list containing the marginal quantile functions (in the general case).
method
A character string. For VaR_bounds_hom(), one of "Wang", "Wang.Par" or "dual" (see the Details section). For rearrange(), RA() and ARA(), either "worst" (the default; for computing the worst, i.e., largest, Value-at-Risk) or "best" (for computing the best, i.e., smallest, Value-at-Risk).
interval
Initial interval (a numeric(2)) for computing the worst Value-at-Risk. If not provided, method-specific defaults are chosen: for method="Wang", an interval strictly contained in $(0,(1-\alpha)/d)$; for method="Wang.Par", the initial interval derived by Hofert et al. (2015); for method="dual", an initial interval (such as the crude bounds returned by crude_VaR_bounds()) should be supplied; see the Details section and the examples below.
tol
For VaR_bounds_hom(), the x-tolerance passed to the underlying uniroot(); if NULL (the default), method-specific defaults are used (see the Details section). For rearrange(), the convergence tolerance applied to the change in the minimal (for method="worst") or maximal (for method="best") row sums, interpreted according to tol.type.
tol.type
A character string indicating the type of convergence tolerance function to be used ("relative" for relative tolerance and "absolute" for absolute tolerance).
s
Dual bound evaluation point.
pF
The marginal loss distribution function.
X
An (N, d)-matrix of quantiles (to be rearranged). If is.sorted=TRUE, the columns of X are assumed to be sorted in increasing order.
max.ra
Maximal number of (considered) column rearrangements of the underlying matrix of quantiles (can be set to Inf).
N
The number of discretization points.
N.exp
The exponents (a vector) defining the numbers of discretization points ($N=2^{N.exp}$) over which the algorithm iterates to find the smallest $N$ for which the desired accuracy (specified by reltol) is attained; for each $N$, at most max.ra-many column rearrangements are considered.
abstol
Absolute convergence tolerance $\epsilon$ to determine the individual convergence, i.e., the change in the computed minimal (for method="worst") or maximal (for method="best") row sums for the lower bound $\underline{s}_N$ and the upper bound $\overline{s}_N$; see the Details section.
reltol
A vector of length one or two containing the relative convergence tolerances. If reltol is of length two, the first component is the individual relative tolerance (used to determine convergence of the minimal or maximal row sums) and the second component is the relative (joint) tolerance between the computed lower and upper bound; see the Details section.
sample
A logical indicating whether each column of the two underlying matrices of quantiles (see Step 3 of the Rearrangement Algorithm in Embrechts et al. (2013)) is randomly permuted before the rearrangements begin.
is.sorted
A logical indicating whether the columns of X are sorted in increasing order.
trace
A logical indicating whether the underlying matrix is printed after each rearrangement step. See vignette("VaR_bounds", package="qrmtools") for how to interpret the output.
...
Additional arguments passed to the underlying functions: for crude_VaR_bounds(), to the marginal quantile function(s) qF; for VaR_bounds_hom(), to the chosen method (e.g., qF for method="Wang", theta for method="Wang.Par" and pF for method="dual"; see the examples below); for dual_bound(), to the marginal distribution function pF.

Value

  • crude_VaR_bounds() returns crude lower and upper bounds for Value-at-Risk at confidence level $\alpha$ for any $d$-dimensional model with marginal quantile functions specified by qF.

    VaR_bounds_hom() returns the best and worst Value-at-Risk at confidence level $\alpha$ for $d$ risks with equal marginal distribution, specified via the arguments passed through ... (for example, qF for method="Wang" or theta for method="Wang.Par").

    dual_bound() returns the value of the dual bound $D(s)$ as given in Embrechts, Puccetti, Rüschendorf (2013, Eq. (12)).

    rearrange() returns a list containing the computed bound, the tolerance reached, a logical indicating convergence, the optimal (minimal or maximal) row sums after each column rearrangement and the rearranged matrix.

    RA() returns a list containing, in particular, the computed bounds (a bivariate vector with $\underline{s}_N$ and $\overline{s}_N$), the number of column rearrangements used and the tolerances reached for both bounds, the minimal (for method="worst") or maximal (for method="best") row sums after each column rearrangement (component m.row.sums, a list with elements low and up) and the rearranged matrices (component X.rearranged, a list with elements low and up).

    ARA() returns a list as RA() does, additionally containing the number of discretization points used (component N.used); see the examples below.

Details

For d=2, VaR_bounds_hom() uses the method of Embrechts et al. (2013, Proposition 2). For method="Wang" and method="Wang.Par", the method presented in McNeil et al. (2015, Prop. 8.32) is implemented; it goes back to Embrechts et al. (2014, Prop. 3.1; note that the published version of this paper contains typos for both bounds). This approach requires one uniroot() and, for the generic method="Wang", one integrate(). The critical part for the generic method="Wang" is the lower endpoint of the initial interval for uniroot(). If the (marginal) distribution function has a finite first moment, this can be taken as 0. However, if it has an infinite first moment, the lower endpoint has to be positive (but must lie below the unknown root). Note that the upper endpoint $(1-\alpha)/d$ also happens to be a root, so one needs a proper initial interval that contains the root and is strictly contained in $(0,(1-\alpha)/d)$. In the case of Pareto margins, Hofert et al. (2015) derived such an initial interval (which is used by method="Wang.Par"). Also note that the smaller default tolerances chosen for uniroot() in case of method="Wang" and method="Wang.Par" are crucial for obtaining reliable Value-at-Risk values; see Hofert et al. (2015).
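A minimal sketch of both calls, assuming qrmtools is loaded (for qPar() and VaR_bounds_hom()); the interval endpoints for method="Wang" are illustrative choices, not package defaults:

alpha <- 0.99; d <- 8
## Pareto margins: the tailored initial interval of Hofert et al. (2015)
## also covers the infinite-first-moment case theta <= 1
VaR_bounds_hom(alpha, d=d, method="Wang.Par", theta=0.9)
## Generic method with an explicit initial interval strictly contained in
## (0, (1-alpha)/d); the endpoints below are illustrative
qF <- function(p) qPar(p, theta=2)
VaR_bounds_hom(alpha, d=d, method="Wang", qF=qF,
               interval=c(1e-10, (1-alpha)/d*(1-1e-6)))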

For method="dual" for computing the worst Value-at-Risk, the method of Embrechts et al. (2013, Proposition 4) is implemented. It requires two (nested) calls of uniroot() and one of integrate(). For the inner root-finding procedure to find a root, the lower endpoint of the provided initial interval has to be sufficiently large.
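As a small sketch, the dual bound itself can be evaluated on a grid (assuming pPar() from qrmtools; the Pareto parameter and the grid are illustrative choices); roughly, method="dual" searches for the point $s$ at which $D(s)$ reaches $1-\alpha$, see the vignette for the precise construction:

pF <- function(q) pPar(q, theta=2) # Par(2) distribution function
s <- 10^seq(1, 3, length.out=65) # illustrative grid of evaluation points
D <- sapply(s, function(s.) dual_bound(s., d=8, pF=pF)) # D(s) on the grid
plot(s, D, type="l", log="x", xlab="s", ylab="D(s)") # visualize D(s)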

Note that these approaches for computing the Value-at-Risk bounds in the homogeneous case are numerically non-trivial; see the source code and vignette("VaR_bounds", package="qrmtools") for more details. As a rule of thumb, use method="Wang" if you have to (i.e., if the margins are not Pareto) and method="Wang.Par" if you can (i.e., if the margins are Pareto). It is not recommended to use (the numerically even more challenging) method="dual".

Concerning the inhomogeneous case, rearrange() is an auxiliary function called by RA() and ARA(). After a column of X has been rearranged, the tolerance between the minimal (for the worst Value-at-Risk) or maximal (for the best Value-at-Risk) row sum after this rearrangement and the one $d$ steps before (so typically from when that column was rearranged the last time) is computed and convergence is determined. For performance reasons, no argument checking is done, and rearrange() can change in future versions to (further) improve run time. Overall, it should only be used by experts.
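A minimal sketch of a direct call (expert use only; the discretization mirrors the worst-Value-at-Risk setup of the examples below, with illustrative Pareto margins):

alpha <- 0.99; d <- 3; N <- 2^8
p <- alpha + (1-alpha)*(0:(N-1))/N # probabilities in [alpha, 1)
qF <- function(p) qPar(p, theta=2) # Par(2) quantile function
X <- sapply(rep(list(qF), d), function(qF) qF(p)) # (N, d)-matrix of quantiles
res <- rearrange(X) # defaults: method="worst", relative tolerance tol=0
str(res) # inspect the components of the returned list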

For the Rearrangement Algorithm RA(), convergence of $\underline{s}_N$ and $\overline{s}_N$ is determined if the minimal (for the worst Value-at-Risk) or maximal (for the best Value-at-Risk) row sum satisfies the specified abstol (so $\le\epsilon$) after at most max.ra-many column rearrangements. This is different from Embrechts et al. (2013), who use $<\epsilon$ and only check for convergence after an iteration through all columns of the underlying matrix of quantiles has been completed.

For the Adaptive Rearrangement Algorithm ARA(), convergence of $\underline{s}_N$ and $\overline{s}_N$ is determined if, after at most max.ra-many column rearrangements, the individual relative tolerance reltol[1] is satisfied and the relative (joint) tolerance between both bounds is at most reltol[2].

Note that both RA() and ARA() need to evaluate the 0-quantile (for the lower bound for the best Value-at-Risk) and the 1-quantile (for the upper bound for the worst Value-at-Risk). As the algorithms can only handle finite values, the 0-quantile and the 1-quantile need to be adjusted if infinite: instead of the 0-quantile, the $\alpha/(2N)$-quantile is computed, and instead of the 1-quantile, the $\alpha+(1-\alpha)(1-1/(2N))$-quantile is computed (if the 0-quantile or the 1-quantile is finite, no adjustment is made).
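For example, with alpha=0.99 and N=2^8 discretization points, the adjusted probability levels are:

alpha <- 0.99; N <- 2^8
alpha/(2*N) # evaluated instead of level 0 if the 0-quantile is infinite
alpha + (1-alpha)*(1-1/(2*N)) # evaluated instead of level 1 if the 1-quantile is infinite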

As a rule of thumb (see the examples below, vignette("VaR_bounds", package="qrmtools") and Hofert et al. (2015) for the reasons), it is recommended to use ARA() instead of RA().

On the theoretical side, let us stress the following: rearrange(), RA() and ARA() compute $\underline{s}_N$ and $\overline{s}_N$, which are treated, from a practical point of view, as bounds for the worst (i.e., largest) or the best (i.e., smallest) Value-at-Risk (whichever is chosen via method), but which are not known to be such bounds from a theoretical point of view. Calling them bounds for the worst or best Value-at-Risk is thus theoretically not correct (unless proven) but practical. The literature therefore speaks of $(\underline{s}_N, \overline{s}_N)$ as the rearrangement range (rather than an interval containing the worst or best Value-at-Risk).

References

Embrechts, P., Puccetti, G., Rüschendorf, L., Wang, R., Beleraj, A. (2014). An Academic Response to Basel 3.5. Risks 2(1), 25--48.

Embrechts, P., Puccetti, G., Rüschendorf, L. (2013). Model uncertainty and VaR aggregation. Journal of Banking & Finance 37, 2750--2764.

McNeil, A. J., Frey, R., and Embrechts, P. (2015). Quantitative Risk Management: Concepts, Techniques, Tools. Princeton University Press.

Hofert, M., Memartoluie, A., Saunders, D., Wirjanto, T. (2015). Improved Algorithms for Computing Worst Value-at-Risk: Numerical Challenges and the Adaptive Rearrangement Algorithm. See http://arxiv.org/abs/1505.02281.

See Also

vignette("VaR_bounds", package="qrmtools") for more example calls, numerical challenges encountered and a comparison of the different methods for computing the worst (i.e., largest) Value-at-Risk.

Examples

require(qrmtools)

## Pareto setup
alpha <- 0.99 # VaR confidence level
th <- 2 # Pareto parameter theta
qF <- function(p, theta=th) qPar(p, theta=theta) # Pareto quantile function
pF <- function(q, theta=th) pPar(q, theta=theta) # Pareto distribution function


## d=2: Compute best/worst VaR explicitly (hom. case) and compare with (A)RA ###

d <- 2 # dimension

## Explicit
VaRbounds <- VaR_bounds_hom(alpha, d=d, qF=qF) # (best VaR, worst VaR)

## Adaptive Rearrangement Algorithm (ARA)
set.seed(271) # set seed (for reproducibility)
ARAbest  <- ARA(alpha, qF=rep(list(qF), d), method="best")
ARAworst <- ARA(alpha, qF=rep(list(qF), d))

## Rearrangement Algorithm (RA) with N as in ARA()
RAbest  <- RA(alpha, qF=rep(list(qF), d), N=ARAbest$N.used, method="best")
RAworst <- RA(alpha, qF=rep(list(qF), d), N=ARAworst$N.used)

## Compare
stopifnot(all.equal(c(ARAbest$bounds[1], ARAbest$bounds[2],
                      RAbest$bounds[1],  RAbest$bounds[2]),
                    rep(VaRbounds[1], 4), tolerance=0.004, check.names=FALSE))
stopifnot(all.equal(c(ARAworst$bounds[1], ARAworst$bounds[2],
                      RAworst$bounds[1],  RAworst$bounds[2]),
                    rep(VaRbounds[2], 4), tolerance=0.003, check.names=FALSE))


## d=8: Compute best/worst VaR (hom. case) and compare with (A)RA ##############

d <- 8 # dimension

## Compute VaR bounds with various methods
I <- crude_VaR_bounds(alpha, qF=rep(list(qF), d)) # crude bound
VaR.W     <- VaR_bounds_hom(alpha, d=d, method="Wang", qF=qF)
VaR.W.Par <- VaR_bounds_hom(alpha, d=d, method="Wang.Par", theta=th)
VaR.dual  <- VaR_bounds_hom(alpha, d=d, method="dual", interval=I, pF=pF)

## Adaptive Rearrangement Algorithm (ARA) (with different relative tolerances)
set.seed(271) # set seed (for reproducibility)
ARAbest  <- ARA(alpha, qF=rep(list(qF), d), reltol=c(0.001, 0.01), method="best")
ARAworst <- ARA(alpha, qF=rep(list(qF), d), reltol=c(0.001, 0.01))

## Rearrangement Algorithm (RA) with N as in ARA and abstol (roughly) chosen as in ARA
RAbest  <- RA(alpha, qF=rep(list(qF), d), N=ARAbest$N.used,
              abstol=mean(c(tail(abs(diff(ARAbest$m.row.sums$low)), n=1),
                            tail(abs(diff(ARAbest$m.row.sums$up)), n=1))),
              method="best")
RAworst <- RA(alpha, qF=rep(list(qF), d), N=ARAworst$N.used,
              abstol=mean(c(tail(abs(diff(ARAworst$m.row.sums$low)), n=1),
                            tail(abs(diff(ARAworst$m.row.sums$up)), n=1))))

## Compare
stopifnot(all.equal(c(VaR.W[1], ARAbest$bounds, RAbest$bounds),
                    rep(VaR.W.Par[1],5), tolerance=0.004, check.names=FALSE))
stopifnot(all.equal(c(VaR.W[2], VaR.dual[2], ARAworst$bounds, RAworst$bounds),
                    rep(VaR.W.Par[2],6), tolerance=0.003, check.names=FALSE))


## Using (some of) the additional results computed by (A)RA() ##################

xlim <- c(1, max(sapply(RAworst$m.row.sums, length)))
ylim <- range(RAworst$m.row.sums)
plot(RAworst$m.row.sums[[2]], type="l", xlim=xlim, ylim=ylim,
     xlab="Number of rearranged columns",
     ylab="Minimal row sum per rearranged column",
     main=substitute("Worst VaR minimal row sums ("*alpha==a.*","~d==d.*" and Par("*
                     th.*"))", list(a.=alpha, d.=d, th.=th)))
lines(1:length(RAworst$m.row.sums[[1]]), RAworst$m.row.sums[[1]], col="royalblue3")
legend("bottomright", bty="n", lty=rep(1,2),
       col=c("black", "royalblue3"), legend=c("upper bound", "lower bound"))
## => One should use ARA() instead of RA()


## "Reproducing" examples from Embrechts et al. (2013) #########################

## "Reproducing" Table 1 (but seed and eps are unknown)
## Left-hand side of Table 1
N <- 50
qPar <- rep(list(qF), 3)
p <- alpha + (1-alpha)*(0:(N-1))/N # for 'worst' (= largest) VaR
X <- sapply(qPar, function(qF) qF(p))
cbind(X, rowSums(X))
## Right-hand side of Table 1
set.seed(271)
res <- RA(alpha, qF=qPar, N=N)
row.sum <- rowSums(res$X.rearranged$low)
cbind(res$X.rearranged$low, row.sum)[order(row.sum),]

## "Reproducing" Table 3 for alpha=0.99 (but seed is unknown)
N <- 2e4 # we use a smaller N here to save run time
eps <- 0.1 # absolute tolerance
xi <- c(1.19, 1.17, 1.01, 1.39, 1.23, 1.22, 0.85, 0.98)
beta <- c(774, 254, 233, 412, 107, 243, 314, 124)
qF.lst <- lapply(1:8, function(j){ function(p) qGPD(p, xi=xi[j], beta=beta[j])})
set.seed(271)
res.best <- RA(0.99, qF=qF.lst, N=N, abstol=eps, method="best")
print(format(res.best$bounds, scientific=TRUE), quote=FALSE) # close to first value of 1st row
res.worst <- RA(0.99, qF=qF.lst, N=N, abstol=eps)
print(format(res.worst$bounds, scientific=TRUE), quote=FALSE) # close to last value of 1st row
