Performs k-fold cross-validation for rq.pen(). If multiple values of a are specified, a grid-based search is performed to find the best combination of a and lambda.
rq.pen.cv(
x,
y,
tau = 0.5,
lambda = NULL,
penalty = c("LASSO", "Ridge", "ENet", "aLASSO", "SCAD", "MCP"),
a = NULL,
cvFunc = NULL,
nfolds = 10,
foldid = NULL,
nlambda = 100,
groupError = TRUE,
cvSummary = mean,
tauWeights = rep(1, length(tau)),
printProgress = FALSE,
weights = NULL,
...
)
cverr: Matrix of the cross-validation error, summarized by cvSummary (default mean), for each model, tau and a combination, and lambda.
cvse: Matrix of the standard error of cverr for each model, tau and a combination, and lambda.
The rq.pen.seq object fit to the full data.
btr: A data.table of the values of a and lambda that are best as determined by the minimum cross-validation error and by the one standard error rule, which fixes a. In btr the values of lambda and a are selected separately for each quantile.
gtr: A data.table of the single combination of a and lambda that minimizes the cross-validation error across all values of tau.
Cross-validation error results, grouped across all quantiles, for each value of a and lambda.
Original call to the function.
Matrix of predictors.
Vector of responses.
Quantiles to be modeled.
Values of lambda. If NULL, a sequence of nlambda values is generated automatically.
Choice of penalty between LASSO, Ridge, Elastic Net (ENet), Adaptive Lasso (aLASSO), SCAD and MCP.
Second tuning parameter, a. LASSO and Ridge have no second tuning parameter, but for notational consistency a is set to 1 or 0 respectively, matching the elastic net convention. Defaults depend on the choice of penalty.
Loss function for cross-validation. Defaults to quantile loss, but the user can specify their own function.
Number of folds.
Ids for folds. If set will override nfolds.
Number of lambda values; ignored if lambda is set.
If set to FALSE then the reported error is the sum across all individual observations, rather than the error summarized by fold.
Function used to summarize the errors across the folds; default is mean. The user can specify another function, such as median.
Weights for the different tau models.
If set to TRUE prints which partition is being worked on.
Weights for the quantile loss objective function.
Additional arguments passed to rq.pen()
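As a minimal sketch of the foldid argument (base R only; n and k are arbitrary values chosen for illustration, not package defaults), a fold-assignment vector can be constructed by hand and supplied in place of nfolds:

```r
# Sketch: building a foldid vector by hand (base R only).
# If foldid is supplied to rq.pen.cv(), it overrides nfolds.
set.seed(1)                        # for reproducibility
n <- 100                           # number of observations
k <- 5                             # number of folds
foldid <- sample(rep(seq_len(k), length.out = n))
table(foldid)                      # each fold gets n/k observations
```

Each observation is assigned to exactly one fold, and the random permutation keeps the folds balanced in size.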
Ben Sherwood, ben.sherwood@ku.edu
Two cross-validation results are returned. One considers the best combination of a and lambda for each quantile; the second considers the best combination of the tuning parameters across all quantiles. The by-tau results, btr, report for each quantile the values of lambda and a that minimize the average, or whatever function is used for cvSummary, of the fold-level cross-validation errors, where the error for each held-out observation is the quantile loss (or the function supplied as cvFunc).

The other approach is the group tau results, gtr. Consider the case of estimating Q quantiles. Then gtr reports the single combination of lambda and a that minimizes the average, or again whatever function is used for cvSummary, of the fold-level errors summed across the Q quantiles, weighted by tauWeights. If only one quantile is modeled the two results coincide.
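The fold-level error can be illustrated with a small base-R sketch. Here rho_tau is the standard quantile check loss, and resid_fold holds illustrative held-out residuals, not output from the package:

```r
# Sketch of the quantile check loss used as the default CV error.
# rho_tau(u, tau) = u * (tau - 1{u < 0})
rho_tau <- function(u, tau) u * (tau - (u < 0))

# Per-fold CV error for one (tau, a, lambda) combination: mean check
# loss of the held-out residuals (illustrative numbers only).
resid_fold <- c(0.3, -0.2, 0.5, -0.4)
cv_b <- mean(rho_tau(resid_fold, tau = 0.5))
# cvSummary (default mean) then aggregates cv_b across folds;
# gtr additionally sums these errors across quantiles using tauWeights.
```

For tau = 0.5 the check loss reduces to half the absolute error, so median regression cross-validation is equivalent to scaled mean absolute error on the held-out folds.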
x <- matrix(runif(800), ncol = 8)  # 100 observations, 8 predictors
y <- 1 + x[, 1] + x[, 8] + (1 + .5 * x[, 3]) * rnorm(100)
r1 <- rq.pen.cv(x, y)  # LASSO fit for the median
# Elastic net fit for multiple values of a and tau
r2 <- rq.pen.cv(x, y, penalty = "ENet", a = c(0, .5, 1), tau = c(.25, .5, .75))
# Same as above, but more weight given to the median when calculating
# the group cross-validation error
r3 <- rq.pen.cv(x, y, penalty = "ENet", a = c(0, .5, 1), tau = c(.25, .5, .75),
                tauWeights = c(.25, .5, .25))
# Uses median cross-validation error instead of mean
r4 <- rq.pen.cv(x, y, cvSummary = median)
# Cross-validation with no penalty on the first variable
r5 <- rq.pen.cv(x, y, penalty.factor = c(0, rep(1, 7)))