zapoisson(lpobs0 = "logit", llambda = "loge",
          type.fitted = c("mean", "pobs0", "onempobs0"), zero = NULL)
zapoissonff(llambda = "loge", lonempobs0 = "logit",
            type.fitted = c("mean", "pobs0", "onempobs0"), zero = -2)
lpobs0, llambda: Link functions applied to the parameter $p_0$ (called pobs0 here) and to the usual lambda parameter. See Links for more choices.
type.fitted: See CommonVGAMffArguments and fittedvlm for information.
lonempobs0, zero: See CommonVGAMffArguments for more information.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The fitted.values slot of the fitted object, which should be extracted by the generic function fitted, returns the mean $\mu$ (default), which is given by
$\mu = (1 - p_0) \lambda / (1 - \exp(-\lambda))$.
If type.fitted = "pobs0" then $p_0$ is returned.
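As a quick numerical sanity check (not part of the original documentation; it relies only on dzapois(), which accompanies this family), the formula above can be compared with the mean computed directly from the zero-altered Poisson probabilities:

library(VGAM)
lambda <- 1.5; pobs0 <- 0.3        # illustrative parameter values
yvals <- 0:100                     # truncated support; the neglected tail mass is negligible here
mu.direct  <- sum(yvals * dzapois(yvals, lambda = lambda, pobs0 = pobs0))
mu.formula <- (1 - pobs0) * lambda / (1 - exp(-lambda))
c(mu.direct, mu.formula)           # the two values should agree closely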
For one response/species, by default, the two linear/additive predictors for zapoisson() are $(\mathrm{logit}(p_0), \log(\lambda))^T$.
The zapoissonff() family function has a few changes compared to zapoisson(). These are:
(i) the order of the linear/additive predictors is switched so that the Poisson mean comes first;
(ii) argument onempobs0 is now 1 minus the probability of an observed 0, i.e., the probability of the positive Poisson distribution, i.e., onempobs0 is 1 - pobs0;
(iii) argument zero has a new default so that onempobs0 is intercept-only by default.
Now zapoissonff() is generally recommended over zapoisson().
Both functions implement Fisher scoring and can handle multiple responses.
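The following sketch is not part of the original documentation; the simulated data set zdat, its size, and the chosen coefficients are purely illustrative. With zero = NULL supplied to zapoissonff(), the two fits are reparameterizations of the same model, so the fitted means should agree up to convergence tolerance:

library(VGAM)
set.seed(1)
zdat <- data.frame(x2 = runif(500))
zdat <- transform(zdat, y = rzapois(500, lambda = exp(0.5 + x2), pobs0 = 0.2))
fit.a <- vglm(y ~ x2, zapoisson, data = zdat)                 # predictors: (logit(pobs0), log(lambda))
fit.b <- vglm(y ~ x2, zapoissonff(zero = NULL), data = zdat)  # predictors: (log(lambda), logit(onempobs0))
coef(fit.a, matrix = TRUE)  # predictor order and the sign of the pobs0 part differ between the fits
coef(fit.b, matrix = TRUE)
max(abs(fitted(fit.a) - fitted(fit.b)))  # should be essentially zero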
Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. Computational Statistics & Data Analysis, 42, 37--46.
Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics & Data Analysis.
Documentation accompanying the VGAM package contains further information and examples.
rzapois, zipoisson, pospoisson, posnegbinomial, binomialff, rpospois, CommonVGAMffArguments, simulate.vlm.
library(VGAM)
zdata <- data.frame(x2 = runif(nn <- 1000))
zdata <- transform(zdata, pobs0 = logit( -1 + 1*x2, inverse = TRUE),
lambda = loge(-0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y = rzapois(nn, lambda, pobs0 = pobs0))
with(zdata, table(y))
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE)
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE, crit = "coef")  # same model; convergence monitored on the coefficients
head(fitted(fit))
head(predict(fit))
head(predict(fit, untransform = TRUE))
coef(fit, matrix = TRUE)
summary(fit)
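Not part of the original example, but as an illustration of the type.fitted argument shown in the usage above, the fitted values can be switched to the estimated $p_0$:

fit0 <- vglm(y ~ x2, zapoisson(type.fitted = "pobs0"), data = zdata)
head(fitted(fit0))  # estimated probability of an observed zero for the first few rows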
# Another example ------------------------------
# Data from Angers and Biswas (2003)
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
abdata <- subset(abdata, w > 0)
Abdata <- data.frame(yy = with(abdata, rep(y, w)))
fit3 <- vglm(yy ~ 1, zapoisson, data = Abdata, trace = TRUE, crit = "coef")
coef(fit3, matrix = TRUE)
Coef(fit3) # Estimate lambda (they get 0.6997 with SE 0.1520)
head(fitted(fit3), 1)
with(Abdata, mean(yy)) # Compare this with fitted(fit3)
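As a further illustrative (not original) check of the mean formula given earlier, the single fitted value can be reconstructed from the estimated intercepts; the column order assumed below is the default one for zapoisson():

cfit <- coef(fit3, matrix = TRUE)
pobs0.hat  <- plogis(cfit[1, 1])  # assumes column 1 is the logit of pobs0
lambda.hat <- exp(cfit[1, 2])     # assumes column 2 is log(lambda)
(1 - pobs0.hat) * lambda.hat / (1 - exp(-lambda.hat))  # compare with head(fitted(fit3), 1)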