Usage
hqreg(X, y, method = c("huber", "quantile", "ls"),
gamma, tau = 0.5, alpha = 1, nlambda = 100,
lambda.min = ifelse(nrow(X)>ncol(X), 0.001, 0.05), lambda,
preprocess = c("standardize", "rescale", "none"), screen = c("ASR", "SR", "none"),
max.iter = 10000, eps = 1e-7, dfmax = ncol(X)+1, penalty.factor = rep(1, ncol(X)),
message = FALSE)
Arguments
X: The input matrix.
y: The response vector.
method: The loss function to be used: "huber" (default), "quantile" or "ls" (least squares).
gamma: The tuning parameter of the Huber loss. See Details.
tau: The quantile for the quantile loss. Default is 0.5.
alpha: The elastic-net mixing parameter, with 0 <= alpha <= 1. alpha=1 is the lasso penalty and alpha=0 the ridge penalty.
nlambda: The number of lambda values. Default is 100.
lambda.min: The smallest value for lambda, as a fraction of the largest lambda. Default is 0.001 if nrow(X) > ncol(X), and 0.05 otherwise.
lambda: A user-supplied sequence of lambda values. Typically the program computes its own lambda sequence based on nlambda and lambda.min. Specifying lambda overrides this. See Details.
preprocess: The preprocessing technique applied to X: "standardize" (default), "rescale" or "none". See Details. The coefficients are always returned on the original scale.
screen: The screening rule that discards variables for speed. Either "ASR" (default), "SR" or "none". "SR" stands for the strong rule, and "ASR" for the adaptive strong rule. Using "ASR" typically requires fewer iterations.
max.iter: The maximum number of iterations. Default is 10000.
eps: The convergence threshold: the algorithm stops when the maximum update is smaller than eps times the null deviance. Default is 1E-7.
dfmax: An upper bound on the number of nonzero coefficients; fitting stops when the bound is reached. Useful for very large dimensions.
penalty.factor: A numeric vector of length ncol(X) that is multiplied with lambda to allow differential penalization. Can be 0 for some variables, in which case the variable is always in the model without penalization.
message: If TRUE, prints progress messages. Default is FALSE.
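How alpha and penalty.factor combine can be sketched in plain R, assuming the usual elastic-net penalty form; the helper enet_penalty below is illustrative only and not part of the package:

```r
# Illustrative sketch of an elastic-net penalty with per-variable factors
# (not hqreg's internal code). penalty.factor rescales lambda coefficient
# by coefficient; a factor of 0 leaves that coefficient unpenalized.
enet_penalty <- function(beta, lambda, alpha = 1,
                         penalty.factor = rep(1, length(beta))) {
  lasso <- sum(penalty.factor * abs(beta))       # L1 part
  ridge <- sum(penalty.factor * beta^2) / 2      # L2 part
  lambda * (alpha * lasso + (1 - alpha) * ridge)
}

beta <- c(2, -1, 0.5)
enet_penalty(beta, lambda = 0.1, alpha = 1)                    # pure lasso: 0.1 * 3.5 = 0.35
enet_penalty(beta, lambda = 0.1, penalty.factor = c(0, 1, 1))  # first variable unpenalized: 0.15
```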
"hqreg"
, which is a list containing:nlambda
. An intercept is included.nlambda
containing the number of iterations until
convergence at each value of lambda
.dfmax
.NULL
except when method = "huber"
.NULL
except when method = "quantile"
.nv
is the number of violations.lambda
is fit
using a semismooth Newton coordinate descent algorithm. The objective function is defined
to be $$\frac{1}{n} \sum loss_i + \lambda\textrm{penalty}.$$
For method = "huber",
$$loss(t) = \frac{t^2}{2\gamma} I(|t|\le \gamma) + (|t| - \frac{\gamma}{2}) I(|t|> \gamma);$$
for method = "quantile",
$$loss(t) = t (\tau - I(t<0));$$
for method = "ls",
$$loss(t) = \frac{t^2}{2}.$$
In the model, "t" is replaced by residuals.
The program supports different types of preprocessing techniques. They are applied to
each column of the input matrix X
. Let x be a column of X
. For
preprocess = "standardize"
, the formula is
$$x' = \frac{x-mean(x)}{sd(x)};$$
for preprocess = "rescale"
,
$$x' = \frac{x-min(x)}{max(x)-min(x)}.$$
The models are fit with preprocessed input, then the coefficients are transformed back
to the original scale via some algebra.
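The loss functions and preprocessing formulas above can be written out in plain R; this is an illustrative sketch, not the package's internal implementation:

```r
# The three loss functions from the Details section (t is a residual):
huber_loss <- function(t, gamma) {
  ifelse(abs(t) <= gamma, t^2 / (2 * gamma), abs(t) - gamma / 2)
}
quantile_loss <- function(t, tau) t * (tau - (t < 0))
ls_loss <- function(t) t^2 / 2

# The two preprocessing maps, applied to a column x of X:
standardize <- function(x) (x - mean(x)) / sd(x)
rescale <- function(x) (x - min(x)) / (max(x) - min(x))
```

Note that huber_loss is quadratic for |t| <= gamma and linear beyond, so the two pieces meet at |t| = gamma with matching value and slope.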
See Also
plot.hqreg, cv.hqreg
Examples
X = matrix(rnorm(1000*100), 1000, 100)
beta = rnorm(10)
eps = 4*rnorm(1000)
y = drop(X[,1:10] %*% beta + eps)
# Huber loss
fit1 = hqreg(X, y)
coef(fit1, 0.01)
predict(fit1, X[1:5,], lambda = c(0.02, 0.01))
# Quantile loss
fit2 = hqreg(X, y, method = "quantile", tau = 0.2)
plot(fit2, xvar = "norm")
# Squared loss
fit3 = hqreg(X, y, method = "ls", preprocess = "rescale")
plot(fit3, xvar = "lambda", log.x = TRUE)