glmnet
fit a GLM with lasso or elasticnet regularization
Fit a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. Can deal with all shapes of data, including very large sparse data matrices. Fits linear, logistic and multinomial, poisson, and Cox regression models.
Keywords
models, regression
Usage
glmnet(x, y, family=c("gaussian","binomial","poisson","multinomial","cox","mgaussian"),
  weights, offset=NULL, alpha = 1, nlambda = 100,
  lambda.min.ratio = ifelse(nobs<nvars,0.01,0.0001), lambda=NULL,
  standardize = TRUE, thresh = 1e-07, dfmax = nvars+1,
  pmax = min(dfmax*2+20, nvars), exclude, penalty.factor = rep(1, nvars),
  maxit = 100000, type.gaussian = ifelse(nvars<500,"covariance","naive"),
  standardize.response = FALSE, type.multinomial = c("ungrouped","grouped"))
Arguments
- x
- input matrix, of dimension nobs x nvars; each row is an observation vector. Can be in sparse matrix format (inherit from class "sparseMatrix" as in package Matrix; not yet available for family="cox")
- y
- response variable. Quantitative for family="gaussian", or family="poisson" (non-negative counts). For family="binomial" should be either a factor with two levels, or a two-column matrix of counts or proportions
- family
- Response type (see above)
- weights
- observation weights. Can be total counts if responses are proportion matrices. Default is 1 for each observation
- offset
- A vector of length nobs that is included in the linear predictor (a nobs x nc matrix for the "multinomial" family). Useful for the "poisson" family (e.g. log of exposure time), or for refining a model by starting at a current fit. Default is NULL
- alpha
- The elasticnet mixing parameter, with $0\le\alpha\le 1$. The penalty is defined as $$(1-\alpha)/2||\beta||_2^2+\alpha||\beta||_1.$$ alpha=1 is the lasso penalty, and alpha=0 the ridge penalty (see the sketch after this list)
- nlambda
- The number of lambda values - default is 100
- lambda.min.ratio
- Smallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars: if nobs > nvars, the default is 0.0001; if nobs < nvars, the default is 0.01
- lambda
- A user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this; if supplied, use a decreasing sequence rather than a single value, since glmnet relies on warm starts along the path
- standardize
- Logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize
- thresh
- Convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Default value is 1E-7
- dfmax
- Limit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired
- pmax
- Limit the maximum number of variables ever to be nonzero
- exclude
- Indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item)
- penalty.factor
- Separate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables
- maxit
- Maximum number of passes over the data for all lambda values; default is 10^5
- type.gaussian
- Two algorithm types are supported for (only) family="gaussian". The default when nvars < 500 is type.gaussian="covariance", which saves all inner-products ever computed. This can be much faster than type.gaussian="naive", which loops through nobs every time an inner-product is computed; the latter can be far more efficient for nvars >> nobs situations, or when nvars > 500
- standardize.response
- This is for the family="mgaussian" family, and allows the user to standardize the response variables
- type.multinomial
- If "grouped" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in or out together. The default is "ungrouped"
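As a quick illustration of how alpha, exclude, and penalty.factor interact, here is a minimal sketch on simulated data (the specific variable choices are illustrative, not from this documentation):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100*20), 100, 20)
y <- rnorm(100)
# alpha=0.5 mixes the ridge and lasso penalties (elasticnet)
fit_en <- glmnet(x, y, alpha=0.5)
# leave variables 1 and 2 unpenalized, and exclude variable 3 entirely
fit_pf <- glmnet(x, y, penalty.factor=c(0,0,rep(1,18)), exclude=3)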
Details
The sequence of models implied by lambda
is fit by coordinate
descent. For family="gaussian"
this is the lasso sequence if
alpha=1
, else it is the elasticnet sequence.
For the other families, this is a lasso or elasticnet regularization path
for fitting the generalized linear regression
paths, by maximizing the appropriate penalized log-likelihood (partial likelihood for the "cox" model). Sometimes the sequence is truncated before nlambda
values of lambda
have been used, because of instabilities in
the inverse link functions near a saturated fit. glmnet(...,family="binomial")
fits a traditional logistic regression model for the
log-odds. glmnet(...,family="multinomial")
fits a symmetric multinomial model, where
each class is represented by a linear model (on the log-scale). The
penalties take care of redundancies. A two-class "multinomial"
model
will produce the same fit as the corresponding "binomial"
model,
except the pair of coefficient matrices will be equal in magnitude and
opposite in sign, and half the "binomial"
values.
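This binomial/multinomial correspondence can be checked directly; the following is a hedged sketch on simulated data (s = 0.05 is an arbitrary choice of lambda):

library(glmnet)
set.seed(2)
x <- matrix(rnorm(100*20), 100, 20)
g2 <- sample(1:2, 100, replace=TRUE)
fitb <- glmnet(x, g2, family="binomial")
fitm <- glmnet(x, g2, family="multinomial")
cb <- coef(fitb, s=0.05)  # binomial log-odds coefficients
cm <- coef(fitm, s=0.05)  # list with one coefficient vector per class
# per the paragraph above, expect cm[["2"]] to be close to cb/2,
# and cm[["1"]] to be close to -cb/2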
Note that the objective function for "gaussian"
is $$1/2
RSS/nobs + \lambda*penalty,$$ and for the other models it is
$$-loglik/nobs + \lambda*penalty.$$ Note also that for
"gaussian"
, glmnet
standardizes y to have unit variance
before computing its lambda sequence (and then unstandardizes the
resulting coefficients); if you wish to reproduce/compare results with other
software, best to supply a standardized y.
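A minimal sketch of supplying a pre-standardized y (note that base R scale() uses an n-1 denominator, which may differ from glmnet's internal variance convention, so the match is approximate):

library(glmnet)
set.seed(4)
x <- matrix(rnorm(100*20), 100, 20)
y <- rnorm(100)
ys <- drop(scale(y))   # center y and scale to (near) unit variance
fit_std <- glmnet(x, ys)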
The latest two features in glmnet are the family="mgaussian"
family and the type.multinomial="grouped"
option for
multinomial fitting. The former allows a multi-response gaussian model
to be fit, using a "group-lasso" penalty on the coefficients for each
variable. Tying the responses together like this is called
"multi-task" learning in some domains. The grouped multinomial allows the same penalty for the
family="multinomial"
model, which is also multi-response. For
both of these the penalty on the coefficient vector for variable j is
$$(1-\alpha)/2||\beta_j||_2^2+\alpha||\beta_j||_2.$$ When
alpha=1
this is a group-lasso penalty, and otherwise it mixes
with quadratic just like elasticnet.
Value
- An object with S3 class "glmnet","*", where "*" is "elnet", "lognet", "multnet", "fishnet" (poisson), "coxnet" or "mrelnet" for the various types of models.
- call
- the call that produced this object
- a0
- Intercept sequence of length length(lambda)
- beta
- For "elnet", "lognet", "fishnet" and "coxnet" models, a nvars x length(lambda) matrix of coefficients, stored in sparse column format ("CsparseMatrix"). For "multnet" and "mgaussian", a list of nc such matrices, one for each class
- lambda
- The actual sequence of lambda values used
- dev.ratio
- The fraction of (null) deviance explained (for "elnet", this is the R-square). The deviance calculations incorporate weights if present in the model. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Hence dev.ratio=1-dev/nulldev
- nulldev
- Null deviance (per observation). This is defined to be 2*(loglike_sat - loglike(Null)); the NULL model refers to the intercept model, except for the Cox, where it is the 0 model
- df
- The number of nonzero coefficients for each value of lambda. For "multnet", this is the number of variables with a nonzero coefficient for any class
- dfmat
- For "multnet" and "mrelnet" only. A matrix consisting of the number of nonzero coefficients per class
- dim
- dimension of coefficient matrix (ices)
- nobs
- number of observations
- npasses
- total passes over the data summed over all lambda values
- offset
- a logical variable indicating whether an offset was included in the model
- jerr
- error flag, for warnings and errors (largely for internal debugging)
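A hedged sketch of inspecting some of these components on a fitted object:

library(glmnet)
set.seed(3)
x <- matrix(rnorm(100*20), 100, 20)
y <- rnorm(100)
fit <- glmnet(x, y)
fit$lambda[1:5]          # head of the lambda sequence actually used
fit$df                   # nonzero coefficients at each lambda
tail(fit$dev.ratio, 1)   # deviance explained at the smallest lambda fitted
dim(fit$beta)            # nvars x length(lambda) sparse coefficient matrix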
References
Friedman, J., Hastie, T. and Tibshirani, R. (2008)
Regularization Paths for Generalized Linear Models via Coordinate
Descent, Journal of Statistical Software, Vol. 33(1), 1-22.
See Also
print, predict, coef and plot methods, and the cv.glmnet function.
Examples
library(glmnet)
# Gaussian
x=matrix(rnorm(100*20),100,20)
y=rnorm(100)
fit1=glmnet(x,y)
print(fit1)
coef(fit1,s=0.01) # extract coefficients at a single value of lambda
predict(fit1,newx=x[1:10,],s=c(0.01,0.005)) # make predictions
#multivariate gaussian
y=matrix(rnorm(100*3),100,3)
fit1m=glmnet(x,y,family="mgaussian")
plot(fit1m,type.coef="2norm")
#binomial
g2=sample(1:2,100,replace=TRUE)
fit2=glmnet(x,g2,family="binomial")
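# (sketch, not in the original example) class predictions at a single lambda
predict(fit2,newx=x[1:5,],type="class",s=0.01)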
#multinomial
g4=sample(1:4,100,replace=TRUE)
fit3=glmnet(x,g4,family="multinomial")
fit3a=glmnet(x,g4,family="multinomial",type.multinomial="grouped")
#poisson
N=500; p=20
nzc=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
f = x[,seq(nzc)]%*%beta
mu=exp(f)
y=rpois(N,mu)
fit=glmnet(x,y,family="poisson")
plot(fit)
pfit = predict(fit,x,s=0.001,type="response")
plot(pfit,y)
#Cox
set.seed(10101)
N=1000;p=30
nzc=p/3
x=matrix(rnorm(N*p),N,p)
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta/3
hx=exp(fx)
ty=rexp(N,hx)
tcens=rbinom(n=N,prob=.3,size=1)# censoring indicator
y=cbind(time=ty,status=1-tcens) # y=Surv(ty,1-tcens) with library(survival)
fit=glmnet(x,y,family="cox")
plot(fit)
# Sparse
n=10000;p=200
nzc=trunc(p/10)
x=matrix(rnorm(n*p),n,p)
iz=sample(1:(n*p),size=n*p*.85,replace=FALSE)
x[iz]=0
sx=Matrix(x,sparse=TRUE)
inherits(sx,"sparseMatrix")#confirm that it is sparse
beta=rnorm(nzc)
fx=x[,seq(nzc)]%*%beta
eps=rnorm(n)
y=fx+eps
px=exp(fx)
px=px/(1+px)
ly=rbinom(n=length(px),prob=px,size=1)
system.time(fit1<-glmnet(sx,y))
system.time(fit2n<-glmnet(x,y))
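# (sketch, not in the original example) ly above is a binomial response;
# sparse x works for family="binomial" as well
system.time(fit3<-glmnet(sx,ly,family="binomial"))
# the sparse and dense gaussian fits should agree up to numerical tolerance
max(abs(coef(fit1)-coef(fit2n)))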