Fit a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. Can deal with all shapes of data, including very large sparse data matrices. Fits linear, logistic and multinomial, poisson, and Cox regression models.
Usage

glmnet(x, y, family=c("gaussian","binomial","poisson","multinomial","cox","mgaussian"),
    weights, offset=NULL, alpha = 1, nlambda = 100,
    lambda.min.ratio = ifelse(nobs < nvars, 0.01, 0.0001), lambda=NULL,
    standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars + 1,
    pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1, nvars),
    lower.limits = -Inf, upper.limits = Inf, maxit = 100000,
    type.gaussian = ifelse(nvars < 500, "covariance", "naive"),
    type.logistic = c("Newton", "modified.Newton"),
    standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
Arguments

x: input matrix, of dimension nobs x nvars; each row is an observation vector. Can be in sparse matrix format (inheriting from class "sparseMatrix" as in package Matrix; not yet available for family="cox").
y: response variable. Quantitative for family="gaussian", or family="poisson" (non-negative counts). For family="binomial" should be either a factor with two levels, or a two-column matrix of counts or proportions (the second column is treated as the target class; for a factor, the last level in alphabetical order is the target class). For family="multinomial", can be a nc>=2 level factor, or a matrix with nc columns of counts or proportions. For either "binomial" or "multinomial", if y is presented as a vector, it will be coerced into a factor. For family="cox", y should be a two-column matrix with columns named 'time' and 'status'. The latter is a binary variable, with '1' indicating death, and '0' indicating right censored. The function Surv() in package survival produces such a matrix. For family="mgaussian", y is a matrix of quantitative responses.
family: response type (see above).
weights: observation weights. Can be total counts if responses are proportion matrices. Default is 1 for each observation.
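For instance, a minimal sketch of count weights paired with a proportion-matrix response (the data here are simulated purely for illustration):

set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
ntrials <- sample(10:50, 100, replace = TRUE)          # total count behind each row
nsucc <- rbinom(100, size = ntrials, prob = 0.4)       # successes per observation
yprop <- cbind(1 - nsucc/ntrials, nsucc/ntrials)       # proportions; 2nd column is target
fitw <- glmnet(x, yprop, family = "binomial", weights = ntrials)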
offset: a vector of length nobs that is included in the linear predictor (a nobs x nc matrix for the "multinomial" family). Useful for the "poisson" family (e.g. log of exposure time), or for refining a model by starting at a current fit. Default is NULL. If supplied, then values must also be supplied to the predict function.
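A minimal sketch for the poisson case, with log exposure as the offset (data simulated for illustration; predict() takes the offset via its newoffset argument when the fit used one):

set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
exposure <- runif(100, 1, 10)                          # observation time per subject
y <- rpois(100, lambda = exposure * exp(x[, 1]))       # rate depends on x[,1]
fit <- glmnet(x, y, family = "poisson", offset = log(exposure))
predict(fit, newx = x[1:5, ], newoffset = log(exposure[1:5]),
        s = 0.1, type = "response")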
alpha: the elasticnet mixing parameter, with \(0\le\alpha\le 1\). The penalty is defined as $$(1-\alpha)/2||\beta||_2^2+\alpha||\beta||_1.$$ alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.
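For example (data simulated, and alpha values chosen, purely for illustration):

x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit.lasso <- glmnet(x, y, alpha = 1)     # pure lasso penalty (the default)
fit.ridge <- glmnet(x, y, alpha = 0)     # pure ridge penalty
fit.enet  <- glmnet(x, y, alpha = 0.5)   # 50/50 elasticnet mix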
nlambda: the number of lambda values; default is 100.
lambda.min.ratio: smallest value for lambda, as a fraction of lambda.max, the (data-derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for "binomial" and "multinomial" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.
lambda: a user-supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead); supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it is often faster to fit a whole path than to compute a single fit, as in the sketch below.
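A sketch of the recommended pattern (the user-supplied sequence here is illustrative):

x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit.path <- glmnet(x, y)                                # program-chosen path (preferred)
lam <- exp(seq(log(1), log(0.001), length.out = 50))    # decreasing user sequence
fit.user <- glmnet(x, y, lambda = lam)
coef(fit.path, s = 0.1)   # single-lambda results: extract from a full path instead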
standardize: logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family="gaussian".
intercept: should intercept(s) be fitted (default=TRUE) or set to zero (FALSE).
thresh: convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Default value is 1E-7.
dfmax: limit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.
pmax: limit the maximum number of variables ever to be nonzero.
exclude: indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).
penalty.factor: separate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.
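A small sketch of both mechanisms together (simulated data, chosen for illustration):

x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)
pf <- rep(1, 10)
pf[1:2] <- 0    # variables 1 and 2 are never shrunk, so always in the model
fit <- glmnet(x, y, penalty.factor = pf, exclude = 10)  # variable 10 never enters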
lower.limits: vector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars.
upper.limits: vector of upper limits for each coefficient; default Inf. See lower.limits.
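For example, a non-negative lasso needs only one of these arguments (simulated data):

x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)
fit.nn <- glmnet(x, y, lower.limits = 0)   # constrain all coefficients >= 0
min(fit.nn$beta)                           # no coefficient goes below zero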
maxit: maximum number of passes over the data for all lambda values; default is 10^5.
type.gaussian: two algorithm types are supported for (only) family="gaussian". The default when nvars < 500 is type.gaussian="covariance", which saves all inner-products ever computed. This can be much faster than type.gaussian="naive", which loops through nobs every time an inner-product is computed. The latter can be far more efficient for nvars >> nobs situations, or when nvars > 500.
If "Newton"
then the exact hessian is used
(default), while "modified.Newton"
uses an upper-bound on the
hessian, and can be faster.
standardize.response: for the family="mgaussian" family only; allows the user to standardize the response variables.
If "grouped"
then a grouped lasso penalty
is used on the multinomial coefficients for a variable. This ensures
they are all in our out together. The default is "ungrouped"
Value

An object with S3 class "glmnet","*", where "*" is "elnet", "lognet", "multnet", "fishnet" (poisson), "coxnet" or "mrelnet" for the various types of models.
call: the call that produced this object.
a0: intercept sequence of length length(lambda).
For "elnet"
, "lognet"
, "fishnet"
and "coxnet"
models, a nvars x
length(lambda)
matrix of coefficients, stored in sparse column
format ("CsparseMatrix"
). For "multnet"
and "mgaussian"
, a list of nc
such
matrices, one for each class.
lambda: the actual sequence of lambda values used. When alpha=0, the largest lambda reported does not quite give the zero coefficients reported (lambda=Inf would in principle). Instead, the largest lambda for alpha=0.001 is used, and the sequence of lambda values is derived from this.
dev.ratio: the fraction of (null) deviance explained (for "elnet", this is the R-square). The deviance calculations incorporate weights if present in the model. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Hence dev.ratio = 1 - dev/nulldev.
nulldev: null deviance (per observation). This is defined to be 2*(loglike_sat - loglike(Null)); the NULL model refers to the intercept model, except for the Cox, where it is the 0 model.
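These two components tie together via dev.ratio = 1 - dev/nulldev; a quick sketch recovering the deviance from a fitted object (simulated data):

x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit <- glmnet(x, y)
dev <- (1 - fit$dev.ratio) * fit$nulldev   # deviance along the path
all.equal(dev, deviance(fit))              # the deviance() method returns the same values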
df: the number of nonzero coefficients for each value of lambda. For "multnet", this is the number of variables with a nonzero coefficient for any class.
For "multnet"
and "mrelnet"
only. A matrix consisting of the
number of nonzero coefficients per class
dimension of coefficient matrix (ices)
number of observations
total passes over the data summed over all lambda values
a logical variable indicating whether an offset was included in the model
error flag, for warnings and errors (largely for internal debugging).
Details

The sequence of models implied by lambda is fit by coordinate descent. For family="gaussian" this is the lasso sequence if alpha=1, else it is the elasticnet sequence. For the other families, this is a lasso or elasticnet regularization path for the generalized linear model, fit by maximizing the appropriate penalized log-likelihood (partial likelihood for the "cox" model). Sometimes the sequence is truncated before nlambda values of lambda have been used, because of instabilities in the inverse link functions near a saturated fit.

glmnet(...,family="binomial") fits a traditional logistic regression model for the log-odds. glmnet(...,family="multinomial") fits a symmetric multinomial model, where each class is represented by a linear model (on the log-scale). The penalties take care of redundancies. A two-class "multinomial" model will produce the same fit as the corresponding "binomial" model, except the pair of coefficient matrices will be equal in magnitude and opposite in sign, and half the "binomial" values.
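This half-coefficient relationship can be checked directly; a sketch with simulated two-class data (the value of s is arbitrary, and coef() interpolates, so agreement is approximate):

set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
g2 <- sample(1:2, 100, replace = TRUE)
fitb <- glmnet(x, g2, family = "binomial")
fitm <- glmnet(x, g2, family = "multinomial")
b.bin  <- coef(fitb, s = 0.05)        # binomial coefficients
b.mult <- coef(fitm, s = 0.05)        # list of per-class coefficient matrices
cbind(as.vector(b.bin), 2 * as.vector(b.mult[[2]]))  # columns approximately equal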
Note that the objective function for "gaussian" is $$1/2 RSS/nobs + \lambda*penalty,$$ and for the other models it is $$-loglik/nobs + \lambda*penalty.$$ Note also that for "gaussian", glmnet standardizes y to have unit variance (using the 1/n rather than the 1/(n-1) formula) before computing its lambda sequence (and then unstandardizes the resulting coefficients); if you wish to reproduce or compare results with other software, it is best to supply a standardized y. The coefficients for any predictor variables with zero variance are set to zero for all values of lambda.
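A sketch of supplying a pre-standardized y, using the 1/n variance formula the note describes (simulated data):

x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
sdn <- sqrt(mean((y - mean(y))^2))   # 1/n standard deviation, per the note above
fit.s <- glmnet(x, y / sdn)          # lambda sequence is now on this common scale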
The two most recent features in glmnet are the family="mgaussian" family and the type.multinomial="grouped" option for multinomial fitting. The former allows a multi-response gaussian model to be fit, using a "group-lasso" penalty on the coefficients for each variable. Tying the responses together like this is called "multi-task" learning in some domains. The grouped multinomial allows the same penalty for the family="multinomial" model, which is also multi-response. For both of these the penalty on the coefficient vector for variable j is $$(1-\alpha)/2||\beta_j||_2^2+\alpha||\beta_j||_2.$$ When alpha=1 this is a group-lasso penalty, and otherwise it mixes with the quadratic just like elasticnet. A small detail in the Cox model: if death times are tied with censored times, we assume the censored times occurred just before the death times in computing the Breslow approximation; if users prefer the usual convention of after, they can add a small number to all censoring times to achieve this effect.
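A sketch of that censoring-time nudge (the epsilon and the simulated data are illustrative):

set.seed(10101)
N <- 200; p <- 10
x <- matrix(rnorm(N * p), N, p)
ty <- rexp(N, rate = exp(x[, 1] / 3))          # survival times
status <- rbinom(N, size = 1, prob = 0.7)      # 1 = death, 0 = right censored
y <- cbind(time = ty, status = status)
cens <- y[, "status"] == 0
y[cens, "time"] <- y[cens, "time"] + 1e-5      # censored now falls just *after* tied deaths
fit <- glmnet(x, y, family = "cox")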
References

Friedman, J., Hastie, T. and Tibshirani, R. (2010) Regularization Paths for Generalized Linear Models via Coordinate Descent, Journal of Statistical Software, Vol. 33(1), 1-22. http://www.jstatsoft.org/v33/i01/ (preprint: https://web.stanford.edu/~hastie/Papers/glmnet.pdf)
Simon, N., Friedman, J., Hastie, T. and Tibshirani, R. (2011) Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent, Journal of Statistical Software, Vol. 39(5), 1-13. http://www.jstatsoft.org/v39/i05/
Tibshirani, Robert, Bien, J., Friedman, J., Hastie, T., Simon, N., Taylor, J. and Tibshirani, Ryan (2012) Strong Rules for Discarding Predictors in Lasso-type Problems, JRSSB, Vol. 74. Stanford Statistics Technical Report: http://statweb.stanford.edu/~tibs/ftp/strong.pdf
Glmnet Vignette: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
See Also

print, predict, coef and plot methods, and the cv.glmnet function.
Examples
# Gaussian
x=matrix(rnorm(100*20),100,20)
y=rnorm(100)
fit1=glmnet(x,y)
print(fit1)
coef(fit1,s=0.01) # extract coefficients at a single value of lambda
predict(fit1,newx=x[1:10,],s=c(0.01,0.005)) # make predictions
#multivariate gaussian
y=matrix(rnorm(100*3),100,3)
fit1m=glmnet(x,y,family="mgaussian")
plot(fit1m,type.coef="2norm")
#binomial
g2=sample(1:2,100,replace=TRUE)
fit2=glmnet(x,g2,family="binomial")
#multinomial
g4=sample(1:4,100,replace=TRUE)
fit3=glmnet(x,g4,family="multinomial")
fit3a=glmnet(x,g4,family="multinomial",type.multinomial="grouped")
#poisson
N=500; p=20
nzc=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
f = x[,seq(nzc)]%*%beta
mu=exp(f)
y=rpois(N,mu)
fit=glmnet(x,y,family="poisson")
plot(fit)
pfit = predict(fit,x,s=0.001,type="response")
plot(pfit,y)
#Cox
set.seed(10101)
N=1000;p=30
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
fit=glmnet(x,y,family="cox")
plot(fit)
# Sparse
n=10000;p=200
nzc=trunc(p/10)
x=matrix(rnorm(n*p),n,p)
iz=sample(1:(n*p),size=n*p*.85,replace=FALSE)
x[iz]=0
sx=Matrix(x,sparse=TRUE)
inherits(sx,"sparseMatrix")#confirm that it is sparse
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta
eps=rnorm(n)
y=fx+eps
px=exp(fx)
px=px/(1+px)
ly=rbinom(n=length(px),prob=px,size=1)
system.time(fit1 <- glmnet(sx, y))   # sparse x
system.time(fit2n <- glmnet(x, y))   # dense x, for timing comparison