gev(llocation = "identity", lscale = "loge", lshape = "logoff",
    elocation = list(), escale = list(),
    eshape = if (lshape == "logoff") list(offset = 0.5) else
             if (lshape == "elogit") list(min = -0.5, max = 0.5) else list(),
    percentiles = c(95, 99),
    iscale = NULL, ishape = NULL,
    method.init = 1, gshape = c(-0.45, 0.45), tshape0 = 0.001, zero = 3)
egev(llocation = "identity", lscale = "loge", lshape = "logoff",
     elocation = list(), escale = list(),
     eshape = if (lshape == "logoff") list(offset = 0.5) else
              if (lshape == "elogit") list(min = -0.5, max = 0.5) else list(),
     percentiles = c(95, 99),
     iscale = NULL, ishape = NULL,
     method.init = 1, gshape = c(-0.45, 0.45), tshape0 = 0.001, zero = 3)
llocation, lscale, lshape: Parameter link functions applied to the location parameter $\mu$, the (positive) scale parameter $\sigma$, and the shape parameter $\xi$. See Links for more choices.
percentiles: Numeric vector of percentiles used for the fitted values. However, if percentiles=NULL, then the mean $\mu + \sigma (\Gamma(1-\xi)-1) / \xi$ is returned, and this is only defined if $\xi < 1$.
iscale, ishape: Numeric. Initial values for $\sigma$ and $\xi$; a NULL means a value is computed internally. The argument ishape is more important than the other two because they are initialized from the initial $\xi$. If a failure to converge occurs, try assigning ishape some value, or adjust the grid search range gshape.
method.init: Initialization method. Method 2 is similar to the method of moments. If both methods fail, try using ishape.
gshape: Numeric, of length 2. Range of $\xi$ used in a grid search for a good initial value of $\xi$; used only if method.init equals 1.
zero: An integer specifying which linear/additive predictor is modelled as an intercept only. If zero=NULL then all linear/additive predictors are modelled as functions of the explanatory variables.
Value: An object of class "vglmff" (see vglmff-class).
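The fitted values returned under the default percentiles = c(95, 99) come from the GEV quantile function. A minimal sketch of that formula (illustrative only, not the VGAM internals; the name qgev.sketch is hypothetical and $\xi \neq 0$ is assumed):

```r
# GEV quantile function for xi != 0 (a sketch, not the VGAM implementation):
#   q_p = mu + sigma * ((-log p)^(-xi) - 1) / xi
qgev.sketch <- function(p, mu = 0, sigma = 1, xi = 0.1)
  mu + sigma * ((-log(p))^(-xi) - 1) / xi

qgev.sketch(c(0.95, 0.99))  # the default percentiles = c(95, 99)
```

Plugging a returned quantile back into the GEV distribution function recovers the original probability, which is a quick sanity check on the formula.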
The object is used by modelling functions such as vglm and vgam. Only gev() handles multivariate (matrix) responses.
In general, egev() is more reliable than gev(). Fitting the GEV by maximum likelihood estimation can be numerically fraught. If $1 + \xi (y-\mu)/ \sigma \leq 0$ then some crude evasive action is taken, but the estimation process can still fail. This is particularly the case if vgam with s is used; then smoothing is best done with vglm with regression splines (bs or ns) because vglm implements half-stepsizing whereas vgam does not. Half-stepsizing helps handle the problem of straying outside the parameter space.
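The support restriction above can be made concrete with a minimal GEV log-density (a sketch only, not the VGAM code; the name dgev.log is hypothetical and $\xi \neq 0$ is assumed), which returns -Inf whenever $1 + \xi (y-\mu)/\sigma \leq 0$:

```r
# Illustrative GEV log-density for xi != 0 (not the VGAM implementation).
# Observations outside the parameter-dependent support get log-density -Inf,
# which is exactly the situation the warning above describes.
dgev.log <- function(y, mu = 0, sigma = 1, xi = 0.1) {
  z   <- 1 + xi * (y - mu) / sigma
  out <- rep(-Inf, length(z))      # default: outside the support
  ok  <- z > 0
  out[ok] <- -log(sigma) - (1/xi + 1) * log(z[ok]) - z[ok]^(-1/xi)
  out
}

dgev.log(c(-20, 0, 5))  # the first point lies outside the support
```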
For the GEV distribution, the $k$th moment about the mean exists if $\xi < 1/k$. Provided they exist, the mean and variance are given by $\mu+\sigma\{ \Gamma(1-\xi)-1\}/ \xi$ and $\sigma^2 \{ \Gamma(1-2\xi) - \Gamma^2(1-\xi) \} / \xi^2$ respectively, where $\Gamma$ is the gamma function.
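These closed forms can be checked numerically: using $E[g(Y)] = \int_0^1 g(Q(p))\,dp$ with the GEV quantile function $Q$, the integrals should agree with the formulas above. A sketch under the assumption $\xi \neq 0$ (not VGAM code):

```r
# Check the closed-form GEV mean and variance against numerical integration
# over the quantile function: E[g(Y)] = integral_0^1 g(Q(p)) dp.
mu <- 0; sigma <- 1; xi <- 0.2            # xi < 1/2, so mean and variance exist
Q  <- function(p) mu + sigma * ((-log(p))^(-xi) - 1) / xi  # GEV quantile fn
m1 <- integrate(Q, 0, 1)$value                             # E[Y]
m2 <- integrate(function(p) Q(p)^2, 0, 1)$value            # E[Y^2]
mean.closed <- mu + sigma * (gamma(1 - xi) - 1) / xi
var.closed  <- sigma^2 * (gamma(1 - 2*xi) - gamma(1 - xi)^2) / xi^2
c(mean.closed, var.closed)
```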
Smith (1985) established that when $\xi > -0.5$,
the maximum likelihood estimators are completely regular.
To have some control over the estimated $\xi$ try using lshape="logoff" and eshape=list(offset=0.5), say, or lshape="elogit" and eshape=list(min=-0.5, max=0.5), say.
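The constraints these two links impose on $\xi$ can be sketched directly (illustrative inverse-link formulas only, not the VGAM internals; the function names are hypothetical):

```r
# logoff with offset = 0.5:  eta = log(xi + 0.5),  so xi > -0.5 always.
# elogit with (min, max) = (-0.5, 0.5):  eta = logit((xi + 0.5)/1),
# so -0.5 < xi < 0.5 always.
logoff.inv <- function(eta, offset = 0.5) exp(eta) - offset
elogit.inv <- function(eta, min = -0.5, max = 0.5)
  min + (max - min) * plogis(eta)

logoff.inv(c(-10, 0, 10))  # stays above -0.5 for any eta
elogit.inv(c(-10, 0, 10))  # stays inside (-0.5, 0.5) for any eta
```

Whatever value the linear predictor $\eta$ takes on the real line, the implied $\xi$ stays inside the chosen range, which keeps the estimate within Smith's regular region $\xi > -0.5$.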
Prescott, P. and Walden, A. T. (1980) Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. Biometrika, 67, 723--724.
Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72, 67--90.
rgev, gumbel, egumbel, guplot, rlplot.egev, gpd, elogit, oxtemp, venice.

# Multivariate example
data(venice)
y = as.matrix(venice[,paste("r", 1:10, sep="")])
fit1 = vgam(y[,1:2] ~ s(year, df=3), gev(zero=2:3), venice, trace=TRUE)
coef(fit1, matrix=TRUE)
fitted(fit1)[1:4,]
par(mfrow=c(1,2), las=1)
plot(fit1, se=TRUE, lcol="blue", scol="forestgreen",
main="Fitted mu(year) function (centered)")
attach(venice)
matplot(year, y[,1:2], ylab="Sea level (cm)", col=1:2,
main="Highest 2 annual sealevels and fitted 95 percentile")
lines(year, fitted(fit1)[,1], lty="dashed", col="blue")
detach(venice)
# Univariate example
data(oxtemp)
(fit = vglm(maxtemp ~ 1, egev, data=oxtemp, trace=TRUE))
fitted(fit)[1:3,]
coef(fit, mat=TRUE)
Coef(fit)
vcov(fit)
vcov(fit, untransform=TRUE)
sqrt(diag(vcov(fit))) # Approximate standard errors
rlplot(fit)