lqa (x, ...)

lqa.update2 (x, y, family = NULL, penalty = NULL, intercept = TRUE,
    weights = rep (1, nobs), control = lqa.control (),
    initial.beta, mustart, eta.new, gamma1 = 1, ...)

## S3 method for class 'formula':
lqa (formula, data = list (), weights = rep (1, nobs), subset,
    na.action, start = NULL, etastart, mustart, offset, ...)

## S3 method for class 'default':
lqa (x, y, family = gaussian (), penalty = NULL, method = "lqa.update2",
    weights = rep (1, nobs), start = NULL,
    etastart = NULL, mustart = NULL, offset = rep (0, nobs),
    control = lqa.control (), intercept = TRUE,
    standardize = TRUE, ...)
family: the error distribution and link function to be used in the model; see family for details.
penalty: the penalty to be used in the fitting procedure; see penalty for details on penalty functions.
method: the fitting method; the default, method = "lqa.update2", applies the LQA algorithm.
gamma1: parameter used in lqa.update2 to enforce convergence if necessary.
control: a list of parameters for controlling the fitting process; see lqa.control for details.

lqa returns an object of class lqa which inherits from the classes glm and lm.
The generic accessor functions coefficients, fitted.values and residuals can be used to extract various useful features of the object returned by lqa.
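For instance, with the fitted object obj1 from the first example below, the usual accessors can be applied (a brief sketch; the head() calls only truncate the printed output):

class (obj1)              # "lqa" "glm" "lm"
coef (obj1)               # penalized coefficient estimates
head (fitted (obj1))      # fitted mean values
head (residuals (obj1))   # residuals of the fit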
Note that it is highly recommended to include an intercept in the model (i.e. use intercept = TRUE). If you use intercept = FALSE in the classical linear model, then make sure that your y argument is already centered; otherwise the model would not be valid.
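A minimal sketch of that centering step, using the default method's x, y interface with X and y as in the first example below (the lasso tuning parameter 1.5 is arbitrary):

y.centered <- y - mean (y)    # center the response before dropping the intercept
obj0 <- lqa (X, y.centered, family = gaussian (), penalty = lasso (1.5),
    intercept = FALSE)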
An object of class lqa is a list containing at least the following components:

family: the family object used.
penalty: the penalty object used, indicating which penalty has been used.

plus further components describing the fit as computed by the fitting routine (lqa.update2).

Model fitting with lqa is quite similar to the glm() function.
As there, the right hand side of the model formula specifies the form of the linear predictor and hence gives the
link function of the mean of the response, rather than the mean of the response directly.
By default an intercept is included in the model. If it should be removed, use formulae of the form `response ~ 0 + terms' or `response ~ terms - 1'.
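As an illustration of the formula interface (a sketch using a hypothetical data frame dat with columns y, x1, x2; the lasso tuning parameter 1.5 is arbitrary):

## with intercept (default)
fit1 <- lqa (y ~ x1 + x2, data = dat, family = gaussian (), penalty = lasso (1.5))

## two equivalent ways of dropping the intercept (remember to center y, see the note above)
fit2 <- lqa (y ~ 0 + x1 + x2, data = dat, family = gaussian (), penalty = lasso (1.5))
fit3 <- lqa (y ~ x1 + x2 - 1, data = dat, family = gaussian (), penalty = lasso (1.5))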
Also, lqa takes a family argument, which is used to specify the distribution from the exponential family to use, and the link function that is to go with it. The default value is the canonical link.
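For example, a non-canonical link can be requested through the family object (a sketch reusing X and the binary response y2 from the second example below; the fused lasso tuning parameters are arbitrary):

## probit instead of the canonical logit link for binomial data
obj.probit <- lqa (y2 ~ X, family = binomial (link = "probit"),
    penalty = fused.lasso (c (0.0001, 0.2)))
coef (obj.probit)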
See also cv.lqa and penalty.
set.seed (1111)
n <- 200
p <- 5
X <- matrix (rnorm (n * p), ncol = p)
X[,2] <- X[,1] + rnorm (n, sd = 0.1)   # columns 2 and 3 highly correlated with column 1
X[,3] <- X[,1] + rnorm (n, sd = 0.1)
true.beta <- c (1, 2, 0, 0, -1)        # sparse true coefficient vector
y <- drop (X %*% true.beta) + rnorm (n)

## Gaussian model with lasso penalty
obj1 <- lqa (y ~ X, family = gaussian (), penalty = lasso (1.5),
    control = lqa.control ())
obj1$coef
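For a quick comparison with the data-generating coefficients (not part of the original example; it is assumed here that the first entry of obj1$coef is the intercept):

cbind (true = c (0, true.beta), fitted = obj1$coef)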
set.seed (4321)
n <- 25
p <- 5
X <- matrix (rnorm (n * p), ncol = p)
X[,2] <- X[,1] + rnorm (n, sd = 0.1)   # correlated regressors, as above
X[,3] <- X[,1] + rnorm (n, sd = 0.1)
true.beta <- c (1, 2, 0, 0, -1)

family1 <- binomial ()
eta.true <- drop (X %*% true.beta)       # true linear predictor
mu.true <- family1$linkinv (eta.true)    # true success probabilities
y2 <- sapply (mu.true, function (mu) rbinom (1, 1, mu))   # simulate binary responses

## binomial model with fused lasso penalty (two tuning parameters)
obj2 <- lqa (y2 ~ X, family = binomial (),
    penalty = fused.lasso (c (0.0001, 0.2)))
obj2$coef