gelnet (version 1.2.1)

gelnet: GELnet for linear regression, binary classification and one-class problems.

Description

Infers the problem type and learns the appropriate GELnet model via coordinate descent.

Usage

gelnet(X, y, l1, l2, nFeats = NULL, a = rep(1, n), d = rep(1, p),
  P = diag(p), m = rep(0, p), max.iter = 100, eps = 1e-05,
  w.init = rep(0, p), b.init = NULL, fix.bias = FALSE, silent = FALSE,
  balanced = FALSE, nonneg = FALSE)

Arguments

X
n-by-p matrix of n samples in p dimensions
y
n-by-1 vector of response values. Must be a numeric vector for regression, a factor with two levels for binary classification, or NULL for a one-class task.
l1
coefficient for the L1-norm penalty
l2
coefficient for the L2-norm penalty
nFeats
alternative parameterization: the desired number of non-zero weights in the model. If not NULL, it takes precedence over l1 and the corresponding L1 penalty is determined automatically (default: NULL); see the sketch after this argument list
a
n-by-1 vector of sample weights (regression only)
d
p-by-1 vector of feature weights
P
p-by-p feature association penalty matrix
m
p-by-1 vector of translation coefficients
max.iter
maximum number of iterations
eps
convergence precision
w.init
initial parameter estimate for the weights
b.init
initial parameter estimate for the bias term
fix.bias
set to TRUE to prevent the bias term from being updated (regression only) (default: FALSE)
silent
set to TRUE to suppress run-time output to stdout (default: FALSE)
balanced
boolean specifying whether the balanced model is being trained (binary classification only) (default: FALSE)
nonneg
set to TRUE to enforce non-negativity constraints on the weights (default: FALSE)
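
As an illustration of the nFeats parameterization referenced above, the following sketch (synthetic data, arbitrary penalty values) requests a model with roughly five non-zero weights instead of tuning l1 directly; model$w assumes the return value described under Value:

  # Synthetic data for illustration only
  set.seed(0)
  X <- matrix(rnorm(20 * 10), 20, 10)
  y <- rnorm(20)

  # nFeats takes precedence over l1, so l1 = 0 here is only a placeholder
  model <- gelnet(X, y, l1 = 0, l2 = 1, nFeats = 5)
  sum(model$w != 0)   # number of non-zero weights (see Value)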

Value

  • A list with two elements:
    w: p-by-1 vector of model weights
    b: the bias term (absent for the one-class model, which has no bias term)

Details

The method determines the problem type from the labels argument y. If y is a numeric vector, then a regression model is trained by optimizing the following objective function: $$\frac{1}{2n} \sum_i a_i (y_i - (w^T x_i + b))^2 + R(w)$$
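
For example, a minimal regression fit might look like the sketch below; the data is synthetic and the penalties l1 = 0.1, l2 = 1 are arbitrary, chosen only for illustration:

  # Regression: y is a numeric vector
  set.seed(1)
  X <- matrix(rnorm(20 * 10), 20, 10)
  y <- rnorm(20)
  model <- gelnet(X, y, l1 = 0.1, l2 = 1)
  model$w   # model weights (see Value)
  model$b   # bias term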

If y is a factor with two levels, then the function returns a binary classification model, obtained by optimizing the following objective function: $$-\frac{1}{n} \sum_i \left[ y_i s_i - \log( 1 + \exp(s_i) ) \right] + R(w)$$ where $$s_i = w^T x_i + b$$ and the two factor levels of y are encoded as 0 and 1.
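
A corresponding binary classification sketch, again with synthetic data and arbitrary penalties, passes y as a two-level factor (the balanced option applies only to this problem type):

  # Binary classification: y is a factor with exactly two levels
  set.seed(2)
  X <- matrix(rnorm(30 * 10), 30, 10)
  y <- factor(rep(c("neg", "pos"), each = 15))
  model <- gelnet(X, y, l1 = 0.1, l2 = 1)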

Finally, if no labels are provided (y is NULL), then a one-class model is constructed using the following objective function: $$-\frac{1}{n} \sum_i \left[ s_i - \log( 1 + \exp(s_i) ) \right] + R(w)$$ where $$s_i = w^T x_i$$
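
For the one-class setting, the labels are simply omitted by passing NULL; as above, the data and penalties are illustrative only:

  # One-class model: no labels, and no bias term in s_i
  set.seed(3)
  X <- matrix(rnorm(25 * 10), 25, 10)
  model <- gelnet(X, NULL, l1 = 0.1, l2 = 1)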

In all cases, the regularizer is defined by $$R(w) = \lambda_1 \sum_j d_j |w_j| + \frac{\lambda_2}{2} (w-m)^T P (w-m)$$
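
The sketch below shows how the pieces of R(w) map onto the d, P, and m arguments. Everything here is illustrative: in practice P typically encodes prior knowledge of feature associations, and for the quadratic penalty to be convex P should be positive semi-definite; the construction below simply guarantees that property for random values:

  set.seed(4)
  n <- 20; p <- 10
  X <- matrix(rnorm(n * p), n, p)
  y <- rnorm(n)
  d <- runif(p, 0.5, 2)              # feature-specific L1 weights
  A <- matrix(runif(p * p), p, p)
  P <- (A + t(A)) / 2 + p * diag(p)  # symmetric, diagonally dominant => positive definite
  m <- rep(0, p)                     # shrink weights toward zero (the default)
  model <- gelnet(X, y, l1 = 0.1, l2 = 1, d = d, P = P, m = m)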

The training itself is performed through cyclical coordinate descent, and the optimization is terminated once the desired tolerance (eps) is achieved or the maximum number of iterations (max.iter) is reached.
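
The stopping behaviour is controlled through eps and max.iter, with silent suppressing progress output; reusing X and y from any of the sketches above, one might tighten the tolerance as follows:

  # Tighter tolerance, higher iteration cap, no run-time output
  model <- gelnet(X, y, l1 = 0.1, l2 = 1, max.iter = 1000, eps = 1e-8, silent = TRUE)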

See Also

gelnet.lin.obj, gelnet.logreg.obj, gelnet.oneclass.obj