
lms.bcn: LMS Quantile Regression with a Box-Cox Transformation to Normality
Usage

lms.bcn(percentiles = c(25, 50, 75), zero = c(1, 3),
        llambda = "identity", lmu = "identity", lsigma = "loge",
        elambda = list(), emu = list(), esigma = list(),
        dfmu.init = 4, dfsigma.init = 2, ilambda = 1,
        isigma = NULL, expectiles = FALSE)
Arguments

percentiles: A numerical vector with values between 0 and 100, giving the percentiles at which the quantiles are to be estimated; these are returned as the fitted values.

zero: An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only; the values must be from the set {1, 2, 3}. Setting zero = NULL means all three predictors are modelled as functions of the covariates. See CommonVGAMffArguments for more information.

llambda, lmu, lsigma: Parameter link functions applied to the first, second and third linear/additive predictors. See Links for more choices, and CommonVGAMffArguments.

elambda, emu, esigma: Lists of extra arguments for each of the links. See earg in Links for general information, as well as CommonVGAMffArguments.

dfmu.init: Degrees of freedom for the cubic smoothing spline fit applied to obtain an initial estimate of mu; see vsmooth.spline.

dfsigma.init: Degrees of freedom for the cubic smoothing spline fit applied to obtain an initial estimate of sigma; see vsmooth.spline. This argument may be assigned NULL to get an initial value by another method.

ilambda: Initial value for lambda; recycled to the number of observations if necessary.

isigma: Optional initial value for sigma. The default, NULL, means an initial value is computed in the @initialize slot of the family function.

expectiles: Logical. If TRUE then the method is LMS-expectile regression, and expectiles are returned rather than quantiles. The default is LMS quantile regression based on the normal distribution.

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
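For instance (a hedged sketch: BMIdata is the data frame constructed in the examples below, and the other settings simply mirror those examples), the expectiles = TRUE variant plugs into vgam() in exactly the same way:

# Sketch only: with expectiles = TRUE the fitted values are
# expectiles of BMI given age, not quantiles.
fit.lmsexp = vgam(BMI ~ s(age, df = c(4, 2)),
                  lms.bcn(zero = 1, expectiles = TRUE),
                  data = BMIdata, trace = TRUE)
head(fitted(fit.lmsexp))  # Expectiles rather than quantiles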
Warning

The fitted values may diverge over successive iterations; if so, set maxits to the iteration number corresponding to the highest likelihood value. One trick is to fit a simple model and use it to provide initial values for a more complex model; see the examples below.
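As a rough sketch of that advice (hedged: maxit is the corresponding vgam.control() argument, the value 7 is purely illustrative, and BMIdata comes from the examples below), the iteration count of a traced fit can be capped at the iteration with the highest log-likelihood:

# Illustrative only: suppose tracing showed the log-likelihood
# peaked at iteration 7; refit with the iterations capped there.
fit.capped = vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1),
                  data = BMIdata,
                  control = vgam.control(maxit = 7, trace = TRUE))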
Details

The basic idea behind this method is that, for a fixed value of $x$, a Box-Cox transformation of the response $Y$ is applied to obtain standard normality. The three parameters ($\lambda$, $\mu$, $\sigma$, which begin with the letters "L-M-S" respectively, hence the name) are chosen to maximize a penalized log-likelihood (with vgam). Then the appropriate quantiles of the standard normal distribution are back-transformed onto the original scale to get the desired quantiles. The three parameters may vary as smooth functions of $x$.

The Box-Cox power transformation here of $Y$, given $x$, is
$$Z = \frac{(Y/\mu(x))^{\lambda(x)} - 1}{\sigma(x)\,\lambda(x)}$$
for $\lambda(x) \neq 0$ (with the limiting case $Z = \log(Y/\mu(x))/\sigma(x)$ as $\lambda(x) \to 0$), and $Z$ is assumed to have a standard normal distribution.
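To make the back-transformation step concrete, here is a minimal sketch (lms_quantile is a hypothetical helper, not VGAM code) that inverts the transformation above at a standard normal quantile:

# Sketch of the back-transformation: given lambda, mu, sigma at some x,
# the tau-quantile of Y is the inverse Box-Cox transform of z_tau.
lms_quantile = function(tau, lambda, mu, sigma) {
  z = qnorm(tau)  # Standard normal quantile
  if (abs(lambda) > 1e-8)
    mu * (1 + lambda * sigma * z)^(1 / lambda)  # Invert Box-Cox, lambda != 0
  else
    mu * exp(sigma * z)  # Limiting case lambda -> 0
}
lms_quantile(0.75, lambda = 0.5, mu = 25, sigma = 0.1)  # 75th percentile of Y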
Of the three functions, it is often a good idea to allow $\mu(x)$ to be more flexible because the functions $\lambda(x)$ and $\sigma(x)$ usually vary more smoothly with $x$. This is somewhat reflected in the default value of the argument zero, viz. zero = c(1, 3).
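To relax that default (a hedged sketch: the df values are arbitrary and BMIdata is built in the examples below), setting zero = NULL lets all three functions vary smoothly with $x$:

# Sketch: lambda(x), mu(x) and sigma(x) all smooth, with more
# flexibility given to mu(x); harder to fit than the default.
fit.all3 = vgam(BMI ~ s(age, df = c(2, 4, 2)), lms.bcn(zero = NULL),
                data = BMIdata, trace = TRUE)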
References

Green, P. J. and Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. London: Chapman & Hall.

Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295--2315.
Documentation accompanying the VGAM package contains further information and examples.

See Also

lms.bcg, lms.yjn, qtplot.lmscreg, deplot.lmscreg, cdf.lmscreg, alaplace1, amlnormal, denorm, CommonVGAMffArguments.
Examples

# xs.nz is a data frame in the VGAMdata package
mysubset = subset(xs.nz, sex == "M" & ethnic == "1" & Study1)
mysubset = transform(mysubset, BMI = weight / height^2)
BMIdata = mysubset[, c("age", "BMI")]
BMIdata = na.omit(BMIdata)
BMIdata = subset(BMIdata, BMI < 80 & age < 65)  # Delete an outlier
summary(BMIdata)
fit = vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata, trace = TRUE)
head(predict(fit))
head(fitted(fit))
head(BMIdata)
head(cdf(fit)) # Person 56 is probably overweight, given his age
colMeans(c(depvar(fit)) < fitted(fit)) # Sample proportions below the quantiles
# Convergence problems? Try this trick: fit0 is a simpler model used for fit1
fit0 = vgam(BMI ~ s(age, df = 4), lms.bcn(zero = c(1,3)), BMIdata, trace = TRUE)
fit1 = vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata,
            etastart = predict(fit0), trace = TRUE)
# Quantile plot
par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE)
qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles",
       xlim = c(15, 66), las = 1, ylab = "BMI", lwd = 2, lcol = 4)
# Density plot
ygrid = seq(15, 43, len = 100)  # BMI range
par(mfrow = c(1, 1), lwd = 2)
(aa = deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black",
             main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)"))
aa = deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red")
aa = deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue",
            Attach = TRUE)
aa@post$deplot  # Contains the density function values