
foba (version 0.1)

foba: Greedy variable selection for ridge regression

Description

Variable Selection for Ridge Regression using Forward Greedy, Backward Greedy, and Adaptive Forward-Backward Greedy (FoBa) Methods

Usage

foba(x, y, type = c("foba", "foba.aggressive", "foba.conservative",
     "forward", "backward"), steps = 0, intercept = TRUE, nu = 0.5,
     lambda = 1e-10)

Arguments

x
matrix of predictors
y
response
type
One of "foba", "foba.aggressive", "foba.conservative", "forward", or "backward". The names can be abbreviated to any unique substring. Default is "foba".
steps
Number of greedy (forward + backward) steps to perform. With the default steps=0, this is set to the number of variables for "forward" and "backward", and to twice the number of variables for the foba methods.
intercept
If TRUE, an intercept is included in the model (and not penalized), otherwise no intercept is included. Default is TRUE.
nu
In the range (0,1): controls how likely the algorithm is to take a backward step (more likely when nu is larger). Default is 0.5.
lambda
Regularization parameter for ridge regression. Default is 1e-10.
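
A minimal sketch of a call that sets these arguments explicitly, using synthetic data (the data and values below are illustrative only, not from the package):

library(foba)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)   # 100 observations, 10 predictors
y <- x[, 1] - 2 * x[, 3] + rnorm(100)   # response depends on predictors 1 and 3
fit <- foba(x, y, type = "foba", steps = 20, intercept = TRUE, nu = 0.5, lambda = 1e-10)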

Value

A "foba" object is returned, which contains the following components:
call
The function call that produced this object
type
Which variable selection method is used
path
The variable selection path: the sequence of variable additions and deletions
beta
Ridge regression coefficients for the selected features at each step of the path
meanx
Zero if intercept=FALSE; the column means of x if intercept=TRUE
meany
Zero if intercept=FALSE; the mean of y if intercept=TRUE
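
Assuming fit is an object returned by foba (as in the sketch above), the components can be read off directly:

fit$type     # which selection method was used
fit$path     # sequence of variable additions/deletions
fit$beta     # ridge coefficients at each step along the path
fit$meanx    # column means of x (zero when intercept=FALSE)
fit$meany    # mean of y (zero when intercept=FALSE)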

Details

FoBa for least squares regression is described in [Tong Zhang (2008)]. This implementation supports ridge regression. The "foba" method takes a backward step when the ridge penalized risk increase it causes is less than nu times the ridge penalized risk reduction achieved by the corresponding forward step. The "foba.conservative" method takes a backward step when the risk increase is less than nu times the smallest risk reduction over all previous forward steps. The "foba.aggressive" method takes a backward step when the cumulative risk change of the backward steps is less than nu times the cumulative risk change of the forward steps.
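
As a schematic of the "foba" backward rule only (not the package's internal code; the function and argument names below are made up for illustration):

# Schematic of the "foba" criterion: risk_increase and risk_reduction stand for
# the ridge penalized risk changes of a backward/forward step pair.
take_backward <- function(risk_increase, risk_reduction, nu = 0.5) {
  risk_increase < nu * risk_reduction
}
take_backward(0.02, 0.10)  # TRUE: removing the variable costs less than nu times the gain
take_backward(0.08, 0.10)  # FALSE: the backward step would lose too much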

References

Tong Zhang (2008) "Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations", Rutgers Technical Report (long version).

Tong Zhang (2008) "Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models", NIPS'08 (short version).

See Also

The print.foba and predict.foba methods for "foba" objects

Examples

data(boston)

model.foba <- foba(boston$x, boston$y, steps=20)
print(model.foba)

model.foba.a <- foba(boston$x, boston$y, type="foba.a", steps=20)  # type can be abbreviated
print(model.foba.a)

model.for <- foba(boston$x, boston$y, type="for", steps=20)
print(model.for)

model.back <- foba(boston$x, boston$y, type="back")  # steps defaults to the number of variables
print(model.back)
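
Fitted values or coefficients at a chosen step along the path can then be extracted with predict.foba; this sketch assumes its lars-style interface (newx, a step index k, and type equal to "fit" or "coefficients"):

# Sketch (assumes predict.foba's newx/k/type interface)
pred10 <- predict(model.for, boston$x, k=10, type="fit")           # fitted values at step 10
coef10 <- predict(model.for, boston$x, k=10, type="coefficients")  # coefficients at step 10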

