gpd(y, data, th, qu, phi = ~1, xi = ~1, penalty = "gaussian",
    prior = "gaussian", method = "optimize", cov = "observed", start = NULL,
    priorParameters = NULL, maxit = 10000, trace = NULL,
    iter = 10500, burn = 500, thin = 1, jump.cov, jump.const, verbose = TRUE)
## S3 method for class 'gpd':
print(x, digits=max(3, getOption("digits") - 3), ...)
## S3 method for class 'gpd':
show(x, digits=max(3, getOption("digits") - 3), ...)
## S3 method for class 'gpd':
summary(object, nsim=1000, alpha=0.05, ...)
## S3 method for class 'gpd':
coef(object, ...)
## S3 method for class 'gpd':
plot(x, main=rep(NULL, 4), xlab=rep(NULL, 4), nsim=1000, alpha=0.05, ...)
## S3 method for class 'gpd':
AIC(object, ..., k=2)
## S3 method for class 'gpd':
coefficients(object, ...)
## S3 method for class 'bgpd':
print(x, print.seed=FALSE, ...)
## S3 method for class 'bgpd':
summary(object, ...)
## S3 method for class 'bgpd':
plot(x, which.plots=1:3, density.adjust=2, print.seed=FALSE, ...)
## S3 method for class 'bgpd':
coef(object, ...)
## S3 method for class 'bgpd':
coefficients(object, ...)
## S3 method for class 'summary.gpd':
print(x, digits=3, ...)
## S3 method for class 'summary.bgpd':
print(x, ...)
## S3 method for class 'summary.gpd':
show(x, digits=3, ...)
Arguments

data: A data frame containing y and any covariates.

y: The variable to be modelled; either a numeric vector or the name of a variable in data.

th: The threshold above which the generalized Pareto distribution is fitted.

qu: An alternative to th, specifying the quantile of y to use as the threshold.

phi: Formula for the log of the scale parameter. Defaults to phi = ~1 - i.e. no covariates.

xi: Formula for the shape parameter. Defaults to xi = ~1 - i.e. no covariates.

penalty: How to penalize the likelihood. Defaults to penalty = "gaussian". If penalty is "gaussian" or "lasso" then the parameters for the penalization are specified through the priorParameters argument.

prior: Just an alternative way of specifying the penalty, so only one or neither of penalty and prior should be given. If method = "simulate", prior must be "gaussian".

method: Either "optimize" (the default), in which case penalized likelihood estimation is performed using optim, or "simulate", in which case a Markov chain is simulated from the posterior distribution.

cov: How to compute the covariance matrix of the point estimates. Defaults to cov = "observed", in which case the observed information matrix is used, as given in Appendix A of Davison and Hinkley. The only other option is to use the approximation returned by optim.

start: Starting values for the parameters. If not provided, an exponential distribution is assumed as the starting point.

maxit: The maximum number of iterations allowed to optim. Defaults to maxit = 10000.

trace: Whether to report progress. If method = "optimize", the argument is passed into optim -- see the help for that function. If method = "simulate", the argument determines after how many steps progress is reported.

iter: The number of simulations to perform when method = "simulate". Defaults to iter = 10500.

burn: The number of initial simulations to discard as burn-in. Defaults to burn = 500. Only used when method = "simulate".

thin: The amount by which to thin the Markov chain. Defaults to thin = 1. Only used when method = "simulate".

jump.cov: Covariance matrix for the proposal distribution of the Metropolis algorithm. Only used when method = "simulate".

jump.const: Constant used to scale the proposal distribution of the Metropolis algorithm, controlling the acceptance rate. Only used when method = "simulate".

verbose: Whether to print messages to screen. Defaults to verbose = TRUE.

x: An object of class gpd, bgpd, summary.gpd or summary.bgpd, returned by gpd or summary.gpd.

object: An object of class gpd or bgpd, returned by gpd.

main: Plot titles for the plots produced by plot.gpd. Should be a vector of length 4.

xlab: x-axis labels for the plots produced by plot.gpd. Should be a vector of length 4.

nsim: The number of samples simulated from the fitted model when constructing confidence regions. Defaults to nsim = 1000.

alpha: The plots display pointwise (1 - alpha)% confidence regions. Defaults to alpha = 0.05.

k: The penalty per parameter in the AIC. Defaults to k = 2.

print.seed: Whether to print the seed used by the simulation. Defaults to print.seed = FALSE.

which.plots: Which of the diagnostic plots to display. Defaults to which.plots = 1:3.

density.adjust: Passed into density. Controls the amount of smoothing of the kernel density estimate. Defaults to density.adjust = 2.

Value

If method = "optimize", an object of class gpd. This includes convergence information from optim relating to whether or not the optimizer converged, and the call to gpd that produced the object.

If method = "simulate", an object of class bgpd, including the call to gpd that produced the object.

Details

Much of the code is based on the gpd.fit function in the ismev package and is due to Stuart Coles.

When a summary or plot is performed, a simulation envelope for the data is produced, based on quantiles of the fitted model. This represents a pointwise (1 - alpha)% simulated confidence interval. Since the ordered observations are correlated, if any observation falls outside the envelope, it is likely that a chain of observations will fall outside it too. Therefore, if the number of observations outside the envelope is a little more than alpha%, that does not immediately imply a serious shortcoming of the fitted model.
When method = "optimize"
, the plot
function produces diagnostic plots for
the fitted generalized Pareto model. A PP-plot, QQ-plot,
histogram with superimposed generalized Pareto density estimate,
and a return level plot with confidence interval are produced.
The PP-plot and QQ-plot contain simulated pointwise confidence regions.
The region is a (1 - alpha)% region based on nsim
simulated
samples.
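The envelope construction can be sketched by hand: simulate nsim samples of the same size from the fitted model, sort each sample, and take pointwise quantiles of the order statistics. In the sketch below, sigma and xi are illustrative point estimates, not values taken from a real fitted object.

```r
## Hand-rolled pointwise simulation envelope (sketch; sigma and xi are
## assumed point estimates -- in practice take them from the fitted model).
set.seed(1)
sigma <- 1; xi <- 0.1
n <- 200; nsim <- 1000; alpha <- 0.05
qgpd <- function(p, sigma, xi) sigma * ((1 - p)^(-xi) - 1) / xi  # GPD quantile function
sims <- replicate(nsim, sort(qgpd(runif(n), sigma, xi)))         # n x nsim order statistics
env  <- apply(sims, 1, quantile, probs = c(alpha / 2, 1 - alpha / 2))
## env[1, i] and env[2, i] bound the i-th order statistic pointwise
```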
When there are estimated values of xi <= -0.5, the regularity
conditions of the likelihood break down and inference based on approximate
standard errors cannot be performed. In this case, the most fruitful
approach to inference appears to be by the bootstrap. It might be possible
to simulate from the posterior, but finding a good proposal distribution
might be difficult and you should take care to get an acceptance rate
that is reasonably high (around 40% when there are no covariates, lower
otherwise).
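A bootstrap along the lines just suggested might be sketched as follows. Here excess is a hypothetical vector of threshold excesses (stand-in data is simulated for illustration), and refitting with th = 0 treats the excesses as exceedances of zero; this is a sketch, not a recipe from the package.

```r
## Nonparametric bootstrap of the GPD parameter estimates (sketch).
set.seed(1)
excess <- rexp(500)  # stand-in for observed threshold excesses
boot_pars <- t(replicate(200, {
  b <- sample(excess, replace = TRUE)   # resample the excesses
  coef(gpd(b, th = 0))                  # refit and keep the estimates
}))
## Percentile bootstrap intervals for each parameter
apply(boot_pars, 2, quantile, probs = c(0.025, 0.975))
```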
If start is not provided, the maximum penalized likelihood point
estimates are computed and used.
If method = "simulate"
, the simulation is done by a Metropolis
algorithm.
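In outline, a random-walk Metropolis sampler of this kind can be written as below. This is a generic sketch, not the package's internal code: logpost, init and jump.const stand for the log-posterior, the starting values and the proposal scaling.

```r
## Generic random-walk Metropolis sampler (sketch, not the package internals).
metropolis <- function(logpost, init, jump.const, iter) {
  out <- matrix(NA_real_, iter, length(init))
  cur <- init
  lp  <- logpost(cur)
  for (i in seq_len(iter)) {
    prop <- cur + jump.const * rnorm(length(cur))  # random-walk proposal
    lpp  <- logpost(prop)
    if (log(runif(1)) < lpp - lp) {                # accept with prob min(1, ratio)
      cur <- prop
      lp  <- lpp
    }
    out[i, ] <- cur                                # store current state
  }
  out
}
```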
When plotting the object, if the chains have converged on the posterior distributions, the trace plots should look like `fat hairy caterpillars' and their cumulative means should converge rapidly. Moreover, the autocorrelation functions should converge quickly to zero.
When printing or summarizing the object,
posterior means and standard deviations are computed. Posterior means
are also returned by the coef
method. Depending on what you
want to do and what the posterior distributions look like (try using plot.bgpd),
you might want to work with quantiles of the posterior distributions instead.
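For example, given the chains as a matrix with one column per parameter (the matrix below is an illustrative stand-in, not necessarily how a bgpd object stores its chains), posterior quantiles are a one-liner:

```r
## Posterior quantiles from a matrix of MCMC draws (illustrative stand-in
## for the chains stored in a bgpd object).
set.seed(1)
chain <- cbind(phi = rnorm(1000, 0, 0.1), xi = rnorm(1000, 0.1, 0.05))
apply(chain, 2, quantile, probs = c(0.025, 0.5, 0.975))
```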
Examples

x <- rnorm(1000)
mod <- gpd(x, qu = 0.7)
mod
par(mfrow=c(2, 2))
plot(mod)
# Following lines commented out to keep CRAN robots happy
# mod <- gpd(x, qu=.7, method="sim")
# mod
# par(mfrow=c(3, 2))
# plot(mod)