Usage

freund61(la = "loge", lap = "loge", lb = "loge", lbp = "loge",
         ia = NULL, iap = NULL, ib = NULL, ibp = NULL,
         independent = FALSE, zero = NULL)
Arguments

la, lap, lb, lbp: Link functions applied to the (positive) parameters $\alpha$, $\alpha'$, $\beta$ and $\beta'$, respectively (the "p" stands for "prime"). See Links for more choices.

ia, iap, ib, ibp: Optional initial values for the four parameters.

independent: Logical. If TRUE then the parameters are constrained to satisfy
$\alpha=\alpha'$ and $\beta=\beta'$,
which implies that $y_1$ and $y_2$ are independent
and each have an ordinary exponential distribution.

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Details

The marginal distributions are, in general, not exponential. By default, the linear/additive predictors are $\eta_1=\log(\alpha)$, $\eta_2=\log(\alpha')$, $\eta_3=\log(\beta)$, $\eta_4=\log(\beta')$.
A special case is when $\alpha=\alpha'$ and $\beta=\beta'$, which means that $y_1$ and $y_2$ are independent, and both have an ordinary exponential distribution with means $1 / \alpha$ and $1 / \beta$ respectively.
Fisher scoring is used, and the initial values correspond to the MLEs of an intercept model. Consequently, convergence may take only one iteration.
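The special case and the intercept-model initialization can both be checked in base R: under $\alpha=\alpha'$ and $\beta=\beta'$ the responses are independent exponentials with means $1/\alpha$ and $1/\beta$, and the intercept-only MLE of each rate is the reciprocal of the sample mean. A minimal sketch, assuming illustrative rates 2 and 5 (not taken from the examples below):

```r
# Illustrative check (hypothetical rates; not part of the VGAM examples):
# under alpha = alpha' and beta = beta', y1 and y2 are independent
# exponentials with means 1/alpha and 1/beta, and the intercept-only
# MLE of each rate is 1 / mean(y).
set.seed(123)
n <- 10000
alpha <- 2; beta <- 5
y1 <- rexp(n, rate = alpha)   # sample mean should be near 1/alpha = 0.5
y2 <- rexp(n, rate = beta)    # sample mean should be near 1/beta  = 0.2
alpha_hat <- 1 / mean(y1)     # closed-form MLE of alpha
beta_hat  <- 1 / mean(y2)     # closed-form MLE of beta
```

These closed-form estimates are what makes one-iteration convergence plausible for an intercept-only fit.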
See Also

exponential.

Examples

fdata <- data.frame(y1 = rexp(nn <- 1000, rate = exp(1)))
fdata <- transform(fdata, y2 = rexp(nn, rate = exp(2)))
fit1 <- vglm(cbind(y1, y2) ~ 1, fam = freund61, data = fdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
vcov(fit1)
head(fitted(fit1))
summary(fit1)
# y1 and y2 are independent, so fit an independence model
fit2 <- vglm(cbind(y1, y2) ~ 1, freund61(indep = TRUE),
data = fdata, trace = TRUE)
coef(fit2, matrix = TRUE)
constraints(fit2)
pchisq(2 * (logLik(fit1) - logLik(fit2)), # p-value
df = df.residual(fit2) - df.residual(fit1), lower.tail = FALSE)
lrtest(fit1, fit2) # Better alternative
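The manual pchisq() call and lrtest() perform the same likelihood-ratio test. Its arithmetic can be sketched in base R with hypothetical log-likelihoods (the numbers below are illustrative, not from an actual fit); the independence model imposes two constraints ($\alpha=\alpha'$ and $\beta=\beta'$), so the reference distribution is chi-squared with 2 degrees of freedom:

```r
# Likelihood-ratio test by hand (illustrative log-likelihoods assumed):
ll_full  <- -1000.0   # log-likelihood of the unconstrained model (like fit1)
ll_indep <- -1001.2   # log-likelihood of the independence model (like fit2)
stat <- 2 * (ll_full - ll_indep)   # test statistic, here 2.4
df   <- 2                          # two constraints: alpha = alpha', beta = beta'
p_value <- pchisq(stat, df = df, lower.tail = FALSE)
# For df = 2 the chi-squared survival function is exp(-stat/2),
# so p_value equals exp(-1.2) here.
```

A large p-value here would mean the independence model is not rejected, matching the data-generating mechanism in the example above.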