Solve the squared hinge loss interval regression problem for one
regularization parameter: w* = argmin_w L(w) + regularization *
||w||_1 where L(w) is the average squared hinge loss with respect
to the targets, and ||w||_1 is the L1-norm of the weight vector
(excluding the first element, which is the un-regularized
intercept or bias term). This function performs no scaling of
input features, and is meant for internal use only! To learn a
regression model, try IntervalRegressionCV or
IntervalRegressionUnregularized.
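
For concreteness, a minimal R sketch of the objective being minimized follows; it assumes the squared hinge loss penalizes predictions that come within margin of either finite interval limit, and it illustrates the formula above rather than reproducing the package's internal code:

squared.hinge <- function(x) ifelse(x > 0, x^2, 0)
interval.objective <- function(w, features, targets, regularization, margin = 1) {
  pred <- as.numeric(features %*% w)
  lower <- targets[, 1]  # assumed: lower interval limits in column 1
  upper <- targets[, 2]  # assumed: upper interval limits in column 2
  ## average squared hinge loss L(w); infinite limits contribute zero.
  too.low <- squared.hinge(lower - pred + margin)
  too.high <- squared.hinge(pred - upper + margin)
  ## L1 penalty excludes the first weight (the un-regularized intercept).
  mean(too.low + too.high) + regularization * sum(abs(w[-1]))
}
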
IntervalRegressionInternal(features, targets, initial.param.vec,
  regularization, threshold = 0.001, max.iterations = 1000,
  weight.vec = NULL, Lipschitz = NULL, verbose = 2, margin = 1,
  biggest.crit = 100)

features: Scaled numeric feature matrix (problems x features). The first
column/feature should be all ones and will not be regularized.
targets: Numeric target matrix (problems x 2) of lower and upper target interval limits.
initial.param.vec: Initial guess for the weight vector (one element per feature).
regularization: Degree of L1-regularization.
threshold: When the stopping criterion gets below this threshold, the algorithm stops and declares the solution as optimal.
max.iterations: Maximum number of iterations; if the algorithm has not found an optimal solution after this many iterations, increase the Lipschitz constant and max.iterations.
weight.vec: A numeric vector of weights for each training example.
Lipschitz: A numeric scalar, or NULL, which means to compute Lipschitz as the mean of the squared L2-norms of the rows of the feature matrix (see the one-line sketch after this argument list).
verbose: Print (cat) messages: for restarts and at the end if verbose >= 1, and for every iteration if verbose >= 2.
margin: Margin size hyper-parameter, default 1.
biggest.crit: Restart FISTA with a bigger Lipschitz constant (smaller step size) if the stopping criterion gets larger than this.
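
The NULL default for Lipschitz corresponds to a computation along these lines (a sketch of the formula stated above, not necessarily the exact internal code):

default.Lipschitz <- function(features) {
  ## mean of the squared L2-norms of the rows of the feature matrix.
  mean(rowSums(features^2))
}
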
Value: Numeric vector of scaled weights w of the affine function f_w(X) = X %*% w, for a scaled feature matrix X with the first column entirely ones.
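
A hedged usage sketch with simulated data follows; it assumes the function is accessible from the penaltyLearning package (the package providing IntervalRegressionCV), and that the two target columns are lower and upper interval limits:

library(penaltyLearning)          # assumed package; adjust if the function lives elsewhere
set.seed(1)
n.problems <- 50
raw <- matrix(rnorm(n.problems * 3), n.problems, 3)
X <- cbind(intercept = 1, scale(raw))        # first column all ones, others scaled
center <- as.numeric(raw %*% c(1, -1, 0.5))  # latent values generating the intervals
targets <- cbind(center - 1, center + 1)     # assumed: lower limit, then upper limit
w.hat <- IntervalRegressionInternal(
  features = X,
  targets = targets,
  initial.param.vec = rep(0, ncol(X)),
  regularization = 0.1,
  verbose = 0)
predictions <- X %*% w.hat                   # affine predictions f_w(X)

In practice the higher-level IntervalRegressionCV interface handles feature scaling and the choice of regularization, as noted above.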