cv.cvplogistic(y, x, penalty = "mcp", approach = "mmcd", nfold = 5,
kappa = 1/2.7, nlambda = 100, lambda.min = 0.01,
epsilon = 1e-3, maxit = 1e+3, seed = 1000)
The regularization parameter kappa controls the concavity of the penalty, with larger values of kappa yielding a more concave penalty. When kappa = 0, both the MCP and SCAD penalties reduce to the Lasso penalty; hence, if zero is specified for kappa, the algorithm returns Lasso solutions.
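To illustrate the kappa = 0 special case, the sketch below (in Python, outside the package) uses one common parameterization of the MCP penalty derivative, p'(t; lambda, kappa) = max(lambda - kappa*t, 0) for t >= 0; the exact internal parameterization of cvplogistic may differ:

```python
# Illustrative sketch, not part of the cvplogistic package.
# In this parameterization the MCP penalty derivative at t >= 0 is
#   p'(t; lambda, kappa) = max(lambda - kappa * t, 0),
# so kappa = 0 gives the constant derivative lambda, i.e. the Lasso.
def mcp_derivative(t, lam, kappa):
    """Derivative of the MCP penalty at t >= 0."""
    return max(lam - kappa * t, 0.0)

lam = 0.5
# With kappa = 0 the derivative equals lambda everywhere (Lasso behavior).
assert all(mcp_derivative(t, lam, 0.0) == lam for t in [0.0, 1.0, 10.0])
# With kappa > 0 the derivative vanishes once t >= lambda / kappa, which is
# what makes the penalty concave and reduces bias on large coefficients.
assert mcp_derivative(2.0, lam, 1 / 2.7) == 0.0
```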
To select a tuning parameter suitable for prediction, we use the k-fold cross-validated area under the ROC curve (CV-AUC) approach. For each fold, CV-AUC computes the predictive AUC on the validation set using the coefficients estimated from the corresponding training set. As cross-validation proceeds, the average predictive AUC across folds is computed, and the lambda corresponding to the maximum average predictive AUC is chosen as the tuning parameter.
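The selection procedure can be sketched generically as follows (in Python with scikit-learn rather than R); an L1-penalized logistic regression stands in for the package's MCP/SCAD solvers, and the hypothetical grid `C_grid` plays the role of the lambda path:

```python
# A minimal sketch of k-fold CV-AUC tuning-parameter selection.
# Assumptions: scikit-learn is available; LogisticRegression with an L1
# penalty substitutes for the concave-penalized solver in cvplogistic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1000)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 1] + rng.normal(size=n) > 0).astype(int)

C_grid = [0.01, 0.1, 1.0, 10.0]  # inverse regularization strengths
kf = KFold(n_splits=5, shuffle=True, random_state=1000)

mean_auc = []
for C in C_grid:
    fold_aucs = []
    for train, test in kf.split(X):
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X[train], y[train])
        # Predictive AUC on the held-out fold, using coefficients
        # estimated from the corresponding training set.
        score = model.decision_function(X[test])
        fold_aucs.append(roc_auc_score(y[test], score))
    mean_auc.append(float(np.mean(fold_aucs)))

# Choose the tuning parameter with the largest average predictive AUC.
best_C = C_grid[int(np.argmax(mean_auc))]
print(best_C, max(mean_auc))
```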
Zou, H., Li, R. (2008). One-step Sparse Estimates in Nonconcave Penalized Likelihood Models. Ann Stat, 36(4), 1509-1533.
Breheny, P., Huang, J. (2011). Coordinate Descent Algorithms for Nonconvex Penalized Regression, with Application to Biological Feature Selection. Ann Appl Stat, 5(1), 232-253.
Jiang, D., Huang, J., Zhang, Y. (2011). The Cross-validated AUC for MCP-Logistic Regression with High-dimensional Data. Stat Methods Med Res, online first, Nov 28, 2011.
cvplogistic, hybrid.logistic, cv.hybrid, path.plot
## Simulate a binary response and a design matrix
set.seed(10000)
n = 100
y = rbinom(n, 1, 0.4)
p = 10
x = matrix(rnorm(n * p), n, p)
## MCP penalty by MMCD algorithm
out=cv.cvplogistic(y, x, "mcp", "mmcd")
## MCP by adaptive rescaling algorithm
## out=cv.cvplogistic(y, x, "mcp", "adaptive")
## MCP by LLA-CD algorithm
## out=cv.cvplogistic(y, x, "mcp", "llacd")
## SCAD penalty
## out=cv.cvplogistic(y, x, "scad")
## Lasso penalty
## out=cv.cvplogistic(y, x, kappa =0)