lars (version 1.3)

lars: Fits Least Angle Regression, Lasso and Infinitesimal Forward Stagewise regression models

Description

These are all variants of the Lasso, and they provide the entire sequence of coefficients and fits, starting from the zero fit up to the least squares fit.
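
A minimal sketch of this path (the simulated xsim and ysim below are assumptions for illustration, not part of the package): the first point of the path is the zero fit, and the last point reproduces the least squares coefficients.

set.seed(1)
xsim <- matrix(rnorm(100 * 5), 100, 5)                  # small simulated design
ysim <- drop(xsim %*% c(3, 0, 0, 1.5, 0)) + rnorm(100)
fit <- lars(xsim, ysim, type = "lasso")
coef(fit)[1, ]                      # start of the path: all coefficients are zero
coef(fit)[nrow(coef(fit)), ]        # end of the path: the least squares fit
coef(lm(ysim ~ xsim))[-1]           # agrees with lm() (intercept dropped)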

Usage

lars(x, y, type = c("lasso", "lar", "forward.stagewise", "stepwise"),
    trace = FALSE, normalize = TRUE, intercept = TRUE, Gram, eps = 1e-12,
    max.steps, use.Gram = TRUE)

Arguments

x

matrix of predictors

y

response

type

One of "lasso", "lar", "forward.stagewise" or "stepwise". The names can be abbreviated to any unique substring. Default is "lasso".

trace

If TRUE, lars prints out its progress.

normalize

If TRUE, each variable is standardized to have unit L2 norm, otherwise it is left alone. Default is TRUE.

intercept

If TRUE, an intercept is included in the model (and not penalized), otherwise no intercept is included. Default is TRUE.

Gram

The X'X matrix; useful for repeated runs (bootstrap) where a large X'X stays the same.
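
A hedged sketch of reusing a precomputed cross-product across bootstrap replications; the simulated data and the residual bootstrap are assumptions for illustration. It is also assumed that the supplied Gram matrix must match x exactly as lars() uses it internally, so the sketch disables centering and normalization to keep that correspondence obvious.

set.seed(2)
xb <- matrix(rnorm(200 * 20), 200, 20)
yb <- drop(xb %*% rep(c(2, 0), 10)) + rnorm(200)
G  <- t(xb) %*% xb                              # compute X'X once
f0 <- lm(yb ~ xb - 1)                           # residual bootstrap keeps xb (and X'X) fixed
boots <- lapply(1:25, function(b) {
  ystar <- fitted(f0) + sample(resid(f0), replace = TRUE)
  lars(xb, ystar, Gram = G, intercept = FALSE, normalize = FALSE)
})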

eps

An effective zero, with default 1e-12. If lars() stops and reports NAs, consider increasing this slightly.

max.steps

Limit the number of steps taken; the default is 8 * min(m, n-intercept), with m the number of variables, and n the number of samples. For type="lar" or type="stepwise", the maximum number of steps is min(m, n-intercept). For type="lasso" and especially type="forward.stagewise", there can be many more steps, because although no more than min(m, n-intercept) variables can be active during any step, variables are frequently dropped and added as the algorithm proceeds. Although the default usually guarantees that the algorithm has proceeded to the saturated fit, users should check.
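
A small illustration of truncating a forward stagewise path before it saturates; the simulated data and the cap of 20 steps are arbitrary choices for this sketch.

set.seed(3)
xs <- matrix(rnorm(100 * 200), 100, 200)
ys <- rnorm(100)
fs <- lars(xs, ys, type = "forward.stagewise", max.steps = 20)
nrow(coef(fs))                      # number of points on the truncated path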

use.Gram

When the number of variables m is very large, i.e. larger than the number of samples n, you may not want lars to precompute the Gram matrix; in that case set use.Gram=FALSE. Default is use.Gram=TRUE.
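
A minimal sketch with a wide simulated design (an assumption for illustration), where precomputing the 2000 x 2000 Gram matrix would be wasteful:

set.seed(4)
xw <- matrix(rnorm(50 * 2000), 50, 2000)
yw <- rnorm(50)
fw <- lars(xw, yw, type = "lasso", use.Gram = FALSE)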

Value

A "lars" object is returned, for which print, plot, predict, coef and summary methods exist.

Details

LARS is described in detail in Efron, Hastie, Johnstone and Tibshirani (2004). With the "lasso" option, it computes the complete lasso solution path, simultaneously for ALL values of the shrinkage parameter, at the same computational cost as a single least squares fit. A "stepwise" option has recently been added to LARS.

References

Efron, Hastie, Johnstone and Tibshirani (2004) "Least Angle Regression" (with discussion), Annals of Statistics, doi:10.1214/009053604000000067; see also https://hastie.su.domains/Papers/LARS/LeastAngle_2002.pdf.

Hastie, Tibshirani and Friedman (2001) The Elements of Statistical Learning, Springer, New York.

See Also

print, plot, summary and predict methods for "lars" objects, and cv.lars for cross-validation.

Examples

data(diabetes)
par(mfrow=c(2,2))
attach(diabetes)
object <- lars(x,y)
plot(object)
object2 <- lars(x,y,type="lar")
plot(object2)
object3 <- lars(x,y,type="for") # Can use abbreviations
plot(object3)
detach(diabetes)
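
A further sketch: choosing the shrinkage fraction by 10-fold cross-validation with cv.lars (see See Also above). This assumes cv.lars returns a list with components index and cv, and that K = 10 matches its default.

data(diabetes)
attach(diabetes)
cvfit <- cv.lars(x, y, K = 10, type = "lasso")      # 10-fold CV over the fraction grid
best  <- cvfit$index[which.min(cvfit$cv)]           # fraction minimising CV error
fit   <- lars(x, y, type = "lasso")
coef(fit, s = best, mode = "fraction")              # coefficients at the selected fraction
detach(diabetes)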
