lars (version 1.3)

predict.lars: Make predictions or extract coefficients from a fitted lars model

Description

While lars() produces the entire path of solutions, predict.lars allows one to extract a prediction at a particular point along the path.
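For instance, a minimal sketch (using the diabetes data shipped with lars; the value s = 0.5 is an arbitrary illustration) that extracts fitted values half way along the path, measured as a fraction of the L1 norm of the full least squares solution:

library(lars)
data(diabetes)
fit <- lars(diabetes$x, diabetes$y, type = "lasso")
### fitted values at the point whose coefficient L1 norm is half that of
### the full least squares solution
predict(fit, newx = diabetes$x, s = 0.5, type = "fit", mode = "fraction")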

Usage

# S3 method for lars
predict(object, newx, s, type = c("fit", "coefficients"), mode = c("step", 
    "fraction", "norm", "lambda"), ...)
# S3 method for lars
coef(object, ...)

Arguments

object

A fitted lars object

newx

If type="fit", then newx should be the x values at which the fit is required. If type="coefficients", then newx can be omitted.

s

a value, or vector of values, indexing the path. Its values depend on the mode= argument. By default (mode="step"), s should take on values between 0 and p (e.g., a step of 1.3 means 0.3 of the way between steps 1 and 2); see the sketch at the end of this argument list.

type

If type="fit", predict returns the fitted values. If type="coefficients", predict returns the coefficients. Abbreviations allowed.

mode

Mode="step" means the s= argument indexes the lars step number, and the coefficients will be returned corresponding to the values corresponding to step s. If mode="fraction", then s should be a number between 0 and 1, and it refers to the ratio of the L1 norm of the coefficient vector, relative to the norm at the full LS solution. Mode="norm" means s refers to the L1 norm of the coefficient vector. Mode="lambda" uses the lasso regularization parameter for s; for other models it is the maximal correlation (does not make sense for lars/stepwise models). Abbreviations allowed.

...

Any arguments for predict.lars should work for coef.lars.
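The following sketch (again using the bundled diabetes data; the particular values of s are arbitrary illustrations) shows how s is interpreted under each mode:

library(lars)
data(diabetes)
fit <- lars(diabetes$x, diabetes$y, type = "lasso")
coef(fit, s = 3,    mode = "step")      # coefficients after the third LARS step
coef(fit, s = 1.5,  mode = "step")      # half way between steps 1 and 2
coef(fit, s = 0.25, mode = "fraction")  # L1 norm is 25% of the full LS norm
coef(fit, s = 100,  mode = "norm")      # L1 norm of the coefficient vector is 100
coef(fit, s = 500,  mode = "lambda")    # lasso regularization parameter lambda = 500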

Value

Either a vector/matrix of fitted values, or a vector/matrix of coefficients.
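As an illustration of the shapes involved (a sketch on the diabetes data; the values of s are arbitrary):

library(lars)
data(diabetes)
fit <- lars(diabetes$x, diabetes$y, type = "lasso")
dim(coef(fit))                                      # one row per point on the path, one column per predictor
dim(coef(fit, s = c(0.25, 0.5), mode = "fraction")) # one row per requested value of s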

Details

LARS is described in detail in Efron, Hastie, Johnstone and Tibshirani (2004). With the "lasso" option, it computes the complete lasso solution simultaneously for ALL values of the shrinkage parameter at the same computational cost as a least squares fit.
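For example (a sketch on the diabetes data; the lambda values are arbitrary), a single call to lars() stores the whole path, and solutions at any regularization value are then read off without refitting:

library(lars)
data(diabetes)
fit <- lars(diabetes$x, diabetes$y, type = "lasso")    # one fit, entire lasso path
### predictions for the first five observations at three values of lambda,
### all extracted from the stored path
predict(fit, newx = diabetes$x[1:5, ], s = c(500, 50, 5), type = "fit", mode = "lambda")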

References

Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) "Least Angle Regression" (with discussion), Annals of Statistics, 32(2), 407-499, doi:10.1214/009053604000000067.

Hastie, T., Tibshirani, R. and Friedman, J. (2002) Elements of Statistical Learning, Springer, NY.

See Also

print, plot, lars, cv.lars

Examples

library(lars)
data(diabetes)
attach(diabetes)
object <- lars(x,y,type="lasso")
### make predictions at the values in x, at each of the
### steps produced in object
fits <- predict.lars(object, x, type="fit")
### extract the coefficient vector with L1 norm=4.1
coef4.1 <- coef(object, s=4.1, mode="norm") # or
coef4.1 <- predict(object, s=4.1, type="coef", mode="norm")
detach(diabetes)
