If the hyperparameters and starting values are not specified, a warning message is printed and default values are set. In particular, if hyperparam
is missing, the following defaults are used:
hyperparam <- list()
hyperparam$a.unif <- 0                # uniform hyperprior: lower bound
hyperparam$b.unif <- 0.5              # uniform hyperprior: upper bound
hyperparam$a.beta <- c(0.8, 0.8)      # beta hyperprior: first shape parameters
hyperparam$b.beta <- c(5, 5)          # beta hyperprior: second shape parameters
mu.pois <- hyperparam$mu.pois <- 4    # mean of the Poisson prior
mu.nbinom <- hyperparam$mu.nbinom <- 4  # mean of the negative binomial prior
var.nbinom <- 8                       # variance of the negative binomial prior
pnb <- hyperparam$pnb <- mu.nbinom / var.nbinom              # prob parameter
rnb <- hyperparam$rnb <- mu.nbinom^2 / (var.nbinom - mu.nbinom)  # size parameter
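As a quick sanity check of the negative binomial parameterisation above, the default prob (pnb) and size (rnb) values recover the target mean 4 and variance 8 under R's size/prob parameterisation (a sketch; the variable names simply mirror the defaults):

```r
# Defaults from the documentation: prior mean 4 and variance 8
mu.nbinom  <- 4
var.nbinom <- 8

# prob (pnb) and size (rnb) of the negative binomial prior
pnb <- mu.nbinom / var.nbinom                   # 0.5
rnb <- mu.nbinom^2 / (var.nbinom - mu.nbinom)   # 4

# R's size/prob parameterisation (as in dnbinom) gives
# mean = size * (1 - prob) / prob and variance = size * (1 - prob) / prob^2
stopifnot(rnb * (1 - pnb) / pnb   == mu.nbinom)
stopifnot(rnb * (1 - pnb) / pnb^2 == var.nbinom)
```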
The routine returns an estimate of the Pickands dependence function using the Bernstein polynomial approximation proposed in Marcon et al. (2016).
The method is based on a preliminary empirical estimate of the Pickands dependence function.
If such an estimate is not provided, it is computed by the routine; in this case, one of the available empirical methods can be selected.
est = "ht" refers to the Hall–Tajvidi estimator (Hall and Tajvidi 2000).
With est = "cfg" the method proposed by Caperaa et al. (1997) is used; note that in the multivariate case the adjusted version of Gudendorf and Segers (2011) is applied.
Finally, with est = "md" the estimate is based on the madogram defined in Marcon et al. (2016).
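For intuition, a minimal bivariate sketch of the Caperaa et al. (1997) "cfg" estimator, written directly from its textbook definition rather than taken from the package internals (the function name and simulated data are ours):

```r
# CFG estimator of the Pickands dependence function at a point t in (0, 1),
# for a bivariate sample (e1, e2) with unit exponential margins.
# Textbook sketch, not the package's internal implementation.
A_cfg <- function(t, e1, e2) {
  xi <- pmin(e1 / (1 - t), e2 / t)   # xi is approximately Exp(A(t))
  # E[log xi] = -log A(t) - gamma, with gamma the Euler-Mascheroni constant
  exp(-0.5772156649 - mean(log(xi)))
}

set.seed(1)
e1 <- rexp(5000); e2 <- rexp(5000)   # independent margins => A(t) = 1
A_cfg(0.5, e1, e2)                   # approximately 1 under independence
```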
Each row of the \((m \times d)\) design matrix x
is a point in the unit d-dimensional simplex,
\(
S_d := \left\{ (w_1,\ldots, w_d) \in [0,1]^{d}: \sum_{i=1}^{d} w_i = 1 \right\}.
\)
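For example, a valid design matrix x can be built by normalising arbitrary non-negative values so that each row lies in \(S_d\) (a sketch; the grid itself is illustrative):

```r
# Build an (m x d) design matrix whose rows lie on the unit simplex S_d:
# non-negative entries summing to 1 (here d = 3, m = 100; values arbitrary).
set.seed(42)
d <- 3; m <- 100
x <- matrix(rexp(m * d), nrow = m, ncol = d)
x <- x / rowSums(x)   # normalise each row to sum to 1

stopifnot(all(abs(rowSums(x) - 1) < 1e-12), all(x >= 0), all(x <= 1))
```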
With this "regularization" method, the final estimate satisfies the necessary conditions for being a Pickands dependence function. The estimator takes the Bernstein polynomial form
\(A(\mathbf{w}) = \sum_{\boldsymbol{\alpha} \in \Gamma_k} \beta_{\boldsymbol{\alpha}} b_{\boldsymbol{\alpha}} (\mathbf{w};k).\)
The estimates are obtained by solving a quadratic optimization problem subject to the following constraints:
\(A(e_i)=1,\ i=1,\ldots,d; \quad \max(w_1,\ldots,w_d)\leq A(\mathbf{w}) \leq 1;\) and convexity of \(A\).
The order k of the polynomial controls the smoothness of the estimate: the higher k
is, the smoother the final estimate is.
Higher values are better with strong dependence (e.g. k = 23), whereas small values (e.g. k = 6 or k = 10) are enough with mild or weak dependence.
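To see both the Bernstein representation and the role of k, here is a self-contained bivariate sketch; the symmetric logistic model and the coefficient choice \(\beta_j = A(j/k)\) are illustrative assumptions, not the routine's constrained fitting procedure:

```r
# Bivariate sketch of A(w) = sum_j beta_j * b_j(w; k), where b_j is the
# binomial (Bernstein) basis. Coefficients beta_j = A_true(j/k) are chosen
# only to illustrate the approximation; the routine estimates them by
# constrained quadratic programming instead.
A_true <- function(w, a = 0.5)             # symmetric logistic Pickands fn
  ((1 - w)^(1 / a) + w^(1 / a))^a

bernstein_A <- function(w, k, A = A_true) {
  j <- 0:k
  basis <- sapply(w, function(wi) dbinom(j, size = k, prob = wi))
  colSums(A(j / k) * basis)                # sum_j beta_j b_j(w; k)
}

w <- seq(0, 1, by = 0.01)
max(abs(bernstein_A(w, k = 23) - A_true(w)))  # small; shrinks as k grows
```

Increasing k from 6 to 23 visibly tightens the approximation, which matches the guidance above: large k for strong dependence, small k for weak dependence.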
An empirical transformation of the marginals is performed when margin = "emp".
A maximum-likelihood fit of the GEV distributions is implemented when margin = "est".
Otherwise, the marginals are taken to follow the corresponding parametric theoretical distributions (margin = "exp", "frechet", "gumbel").
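A minimal sketch of what an empirical marginal transformation looks like, in the spirit of margin = "emp": a rank transform to approximately uniform margins, then to unit exponential margins. This mirrors the idea only; it is not the package's exact code.

```r
# Rank-based empirical transformation of the margins (illustrative sketch).
set.seed(7)
x <- cbind(rnorm(200), rgamma(200, shape = 2))   # arbitrary raw margins
u <- apply(x, 2, function(col) rank(col) / (length(col) + 1))
e <- -log(u)                                     # unit exponential margins

stopifnot(all(u > 0 & u < 1), all(e > 0))
```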
If bounds = TRUE
, a modification can be implemented to satisfy
\(\max(w,1-w) \leq A_n(w) \leq 1,\ \forall\, 0 \leq w \leq 1\):
\( A_n(w) = \min\{1, \max\{A_n(w), w, 1-w\}\}. \)
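Applied pointwise on a grid, this correction is a direct transcription of the displayed formula (the toy raw estimate below is illustrative):

```r
# bounds = TRUE correction: clamp a raw estimate An into the valid band
# max(w, 1 - w) <= A_n(w) <= 1 on a grid of w values.
w  <- seq(0, 1, by = 0.01)
An <- rep(0.6, length(w))                 # toy raw estimate, partly invalid
An_corrected <- pmin(1, pmax(An, w, 1 - w))

stopifnot(all(An_corrected >= pmax(w, 1 - w)), all(An_corrected <= 1))
```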