fit an elasticnet model path

Fit a regularization path for the elasticnet at a grid of values for the regularization parameter lambda. Can deal with all shapes of data, including very large sparse data matrices. Fits linear, logistic and multinomial regression models.
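As the description notes, x may be supplied as a sparse "dgCMatrix". A minimal sketch (assumes the glmnet and Matrix packages are installed; `rsparsematrix()` is just a convenient way to simulate sparse data):

```r
library(Matrix)
library(glmnet)

set.seed(1)
x <- rsparsematrix(200, 30, density = 0.1)  # sparse input, class "dgCMatrix"
y <- rnorm(200)

fit <- glmnet(x, y)  # the call is identical to the dense case
dim(fit$beta)        # nvars rows, one column per lambda value
```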

Keywords: models, regression
Usage

glmnet(x, y, family=c("gaussian","binomial","multinomial"), weights, alpha = 1,
  nlambda = 100, lambda.min = ifelse(nobs < nvars, 0.05, 0.0001), lambda,
  standardize = TRUE, thresh = 1e-04, dfmax = nvars + 1,
  pmax = min(dfmax * 1.2, nvars), exclude, penalty.factor = rep(1, nvars),
  maxit = 100, HessianExact = FALSE, type = c("covariance","naive"))
Arguments

x: input matrix, of dimension nobs x nvars; each row is an observation vector. Can be in sparse column format (class "dgCMatrix" as in package Matrix).

y: response variable. Quantitative for family="gaussian". For family="binomial" should be either a factor with two levels, or a two-column matrix of counts or proportions. For family="multinomial", can be a factor with nc>=2 levels, or a matrix with nc columns of counts or proportions.

family: Response type (see above).

weights: observation weights. Can be total counts if responses are proportion matrices. Default is 1 for each observation.

alpha: The elasticnet mixing parameter, with $0<\alpha\le 1$. The penalty is defined as $$(1-\alpha)/2||\beta||_2^2+\alpha||\beta||_1.$$ alpha=1 is the lasso penalty. Currently alpha<0.01 is not reliable, unless you supply your own lambda sequence.

nlambda: The number of lambda values. Default is 100.

lambda.min: Smallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars: if nobs > nvars, the default is 0.0001, close to zero; if nobs < nvars, the default is 0.05.

lambda: A user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min. Supplying a value of lambda overrides this.

standardize: Logical flag for variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE.

thresh: Convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the relative change in any coefficient is less than thresh. Default value is 1E-4.

dfmax: Limit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.

pmax: Limit the maximum number of variables ever to be nonzero.

exclude: Indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).

penalty.factor: Separate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables.

maxit: Maximum number of outer-loop iterations for "binomial" or "multinomial" families. Default is 100.

HessianExact: Only applies to "binomial" or "multinomial" families. If FALSE (the default), an upper-bound approximation is made to the Hessian, which is not recalculated at each outer loop.

type: Two algorithm types are supported, for family="gaussian" only. The default type="covariance" saves all inner-products ever computed, and can be much faster than type="naive". The latter can be more efficient when nvars is much larger than nobs.
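As a sketch of how exclude and penalty.factor interact (hypothetical simulated data; assumes glmnet is installed): a penalty factor of 0 forces a variable into every model along the path, while exclude (equivalently, an infinite penalty factor) keeps a variable out entirely.

```r
library(glmnet)

set.seed(2)
x <- matrix(rnorm(50 * 5), 50, 5)
y <- rnorm(50)

# Never penalize variable 1; exclude variable 5 from the model altogether.
fit <- glmnet(x, y, penalty.factor = c(0, 1, 1, 1, 1), exclude = 5)

b <- as.matrix(fit$beta)
rowSums(b != 0)  # variable 5 is zero at every lambda; variable 1 is not
```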

Details

The sequence of models implied by lambda is fit by coordinate descent. For family="gaussian" this is the lasso sequence if alpha=1, else it is the elasticnet sequence. For family="binomial" or family="multinomial", this is a lasso or elasticnet regularization path for fitting the logistic or multinomial logistic regression paths. Sometimes the sequence is truncated before nlambda values of lambda have been used, because of instabilities in the logistic or multinomial models near a saturated fit.

glmnet(...,family="binomial") fits a traditional logistic regression model for the log-odds. glmnet(...,family="multinomial") fits a symmetric multinomial model, where each class is represented by a linear model (on the log scale). The penalties take care of redundancies. A two-class "multinomial" model will produce the same fit as the corresponding "binomial" model, except the pair of coefficient matrices will be equal in magnitude and opposite in sign, each being half the "binomial" values. Note that the objective function for "gaussian" is $$1/(2*nobs) RSS + \lambda*penalty,$$ and for the logistic models it is $$-1/nobs\; loglik + \lambda*penalty.$$
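The two-class equivalence described above can be checked directly. A sketch with simulated binary data (assumes glmnet is installed; the numerical match holds only up to the convergence threshold, so no exact equality is asserted):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- factor(sample(c("a", "b"), 100, replace = TRUE))

fitb <- glmnet(x, y, family = "binomial")
fitm <- glmnet(x, y, family = "multinomial", lambda = fitb$lambda)

# For "multinomial", beta is a list of coefficient matrices, one per class.
# The two class matrices are equal in magnitude and opposite in sign, and
# each is half the corresponding "binomial" coefficient (up to tolerance):
b.bin  <- as.matrix(fitb$beta)
b.cls2 <- as.matrix(fitm$beta[[2]])
max(abs(b.bin - 2 * b.cls2))  # small
```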


Value

An object with S3 class "glmnet","*", where "*" is "elnet", "lognet" or "multnet" for the three types of models.

  • call: the call that produced this object
  • a0: Intercept sequence of length length(lambda)
  • beta: For "elnet" and "lognet" models, a nvars x length(lambda) matrix of coefficients, stored in sparse column format ("dgCMatrix"). For "multnet", a list of nc such matrices, one for each class.
  • lambda: The actual sequence of lambda values used
  • dev: The fraction of (null) deviance explained (for "elnet", this is the R-square)
  • nulldev: Null deviance (per observation)
  • df: The number of nonzero coefficients for each value of lambda. For "multnet", this is the number of variables with a nonzero coefficient for any class.
  • dfmat: For "multnet" only. A matrix consisting of the number of nonzero coefficients per class.
  • dim: dimension of the coefficient matrix (or matrices)
  • npasses: total passes over the data summed over all lambda values
  • jerr: error flag, for warnings and errors (largely for internal debugging)


References

Friedman, J., Hastie, T. and Tibshirani, R. (2008) Regularization Paths for Generalized Linear Models via Coordinate Descent.

See Also

print, predict and coef methods.

Examples

x = matrix(rnorm(100*20), 100, 20)   # simulated data
y = rnorm(100)
fit1 = glmnet(x, y)                  # default gaussian fit
print(fit1)
coef(fit1, s=0.01)                   # extract coefficients at a single value of lambda
predict(fit1, newx=x[1:10,], s=c(0.01, 0.005))  # make predictions
Documentation reproduced from package glmnet, version 1.1-5, License: GPL-2

Community examples

mayweiwang at Apr 26, 2017 glmnet v2.0-5

```{r}
N = 500; p = 20
nzc = 5
x = matrix(rnorm(N * p), N, p)
beta = rnorm(nzc)
f = x[, seq(nzc)] %*% beta
mu = exp(f)
y = rpois(N, mu)
fit = glmnet(x, y, family = "poisson")
plot(fit)
pfit = predict(fit, x, s = 0.001, type = "response")
plot(pfit, y)
```