ncvreg (version 3.12.0)

ncvfit: Direct interface for nonconvex penalized regression (non-pathwise)

Description

This function is intended for users who know exactly what they're doing and want complete control over the fitting process: no standardization is applied, no intercept is included, no path is fit. All of these things are best practices for data analysis, so if you are choosing not to do them, you are on your own -- I cannot guarantee that your results will be meaningful. Some things in particular that you should pay attention to:

  • If your model has an intercept, it is up to you to (un)penalize it properly, typically by setting its corresponding element of penalty.factor to zero.

  • You should provide initial values for the coefficients; in nonconvex optimization, initial values are very important in determining which local solution an algorithm converges to.
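A minimal sketch illustrating both points (the lambda value and the all-zero starting vector are illustrative only): an intercept column is added by hand and left unpenalized via penalty.factor, and explicit initial values are supplied through init.

library(ncvreg)
data(Prostate)
X <- cbind(1, Prostate$X)          # intercept column must be added manually
pf <- c(0, rep(1, ncol(X) - 1))    # zero entry leaves the intercept unpenalized
b0 <- rep(0, ncol(X))              # explicit starting values (all zeros here)
fit <- ncvfit(X, Prostate$y, init = b0, lambda = 0.1, penalty.factor = pf)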

Usage

ncvfit(
  X,
  y,
  init = rep(0, ncol(X)),
  penalty = c("MCP", "SCAD", "lasso"),
  gamma = switch(penalty, SCAD = 3.7, 3),
  alpha = 1,
  lambda,
  eps = 1e-04,
  max.iter = 1000,
  penalty.factor = rep(1, ncol(X)),
  warn = TRUE
)

Arguments

X

Design matrix; no intercept will be added, no standardization will occur

y

Response vector

init

Initial value for beta; default: zero

penalty

Penalty function to be applied, either "MCP" (default), "SCAD", or "lasso"

gamma

Tuning parameter of the MCP/SCAD penalty, as in ncvreg(); default is 3 for MCP and 3.7 for SCAD.

alpha

Tuning parameter controlling the ridge component of the penalty, as in ncvreg(); default is 1 (meaning no ridge penalty)

lambda

Regularization parameter value at which to estimate beta; must be scalar -- for pathwise optimization, see ncvreg() or the warm-start sketch following this argument list

eps

Convergence threshold. The algorithm iterates until the RMSD for the change in linear predictors for each coefficient is less than eps. Default is 1e-4.

max.iter

Maximum number of allowed iterations; if this number is reached, algorithm will terminate prior to convergence. Default: 1000.

penalty.factor

Multiplicative factor for the penalty applied to each coefficient, as in ncvreg(). In particular, note that if you include an intercept, you probably want to set its entry to zero here.

warn

Return warning messages for failures to converge and model saturation? Default is TRUE.
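
Because lambda must be a scalar, fitting over a sequence of lambda values requires an explicit loop. The sketch below shows one way to do this, warm-starting each fit from the previous solution so that init carries information along the path; the lambda grid is illustrative only, and ncvreg() performs pathwise fitting for you.

library(ncvreg)
data(Prostate)
X <- cbind(1, Prostate$X)
y <- Prostate$y
pf <- c(0, rep(1, ncol(X) - 1))
lam <- exp(seq(log(1), log(0.01), length.out = 20))   # decreasing lambda grid (illustrative)
b <- rep(0, ncol(X))                                  # start from zero at the largest lambda
path <- matrix(NA, nrow = ncol(X), ncol = length(lam))
for (i in seq_along(lam)) {
  fit <- ncvfit(X, y, init = b, lambda = lam[i], penalty.factor = pf)
  b <- fit$beta               # warm start: previous solution seeds the next fit
  path[, i] <- b
}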

Value

A list containing:

  • beta: The estimated regression coefficients

  • iter: The number of iterations required to solve for beta

  • loss: The loss (residual sum of squares) at convergence

  • lambda: See above

  • penalty: See above

  • gamma: See above

  • alpha: See above

  • penalty.factor: See above

  • n: Sample size
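
For example, the returned components can be inspected directly (a short sketch; the lambda value is illustrative, and the last line is a consistency check against the documented least-squares loss):

library(ncvreg)
data(Prostate)
X <- cbind(1, Prostate$X)
y <- Prostate$y
fit <- ncvfit(X, y, lambda = 0.1, penalty.factor = c(0, rep(1, ncol(X) - 1)))
fit$beta                           # estimated coefficients
fit$iter                           # iterations used
fit$loss                           # loss at convergence
sum((y - X %*% fit$beta)^2)        # should agree with fit$loss (residual sum of squares)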

Details

At the moment, this function only works for least-squares loss functions. Additional functionality for other loss functions (logistic, Cox) is in development.

Examples

data(Prostate)
X <- cbind(1, Prostate$X)
y <- Prostate$y
fit <- ncvfit(X, y, lambda=0.1, penalty.factor=c(0, rep(1, ncol(X)-1)))
fit$beta
# Compare with:
coef(ncvreg(X, y), 0.1)
# The unstandardized version makes little sense here, as it fails to account
# for differences in the scales of the predictors.
