gelnet (version 1.2.1)

gelnet.ker: Kernel models for linear regression, binary classification and one-class problems.

Description

Infers the problem type and learns the appropriate kernel model.

Usage

gelnet.ker(K, y, lambda, a, max.iter = 100, eps = 1e-05, v.init = rep(0,
  nrow(K)), b.init = 0, fix.bias = FALSE, silent = FALSE,
  balanced = FALSE)

Arguments

K
n-by-n matrix of pairwise kernel values over a set of n samples
y
n-by-1 vector of response values. Must be numeric vector for regression, factor with 2 levels for binary classification, or NULL for a one-class task.
lambda
scalar, regularization parameter
a
n-by-1 vector of sample weights (regression only)
max.iter
maximum number of iterations (binary classification and one-class problems only)
eps
convergence precision (binary classification and one-class problems only)
v.init
initial parameter estimate for the kernel weights (binary classification and one-class problems only)
b.init
initial parameter estimate for the bias term (binary classification only)
fix.bias
set to TRUE to prevent the bias term from being updated (regression only) (default: FALSE)
silent
set to TRUE to suppress run-time output to stdout (default: FALSE)
balanced
boolean specifying whether the balanced model is being trained (binary classification only) (default: FALSE)

Value

  • A list with two elements:
    • v: n-by-1 vector of kernel weights
    • b: scalar bias term for the model

Details

The entries in the kernel matrix K can be interpreted as dot products in some feature space, i.e., $K(x_i, x_j) = \phi^T(x_i) \phi(x_j)$ for a feature map $\phi$. The corresponding weight vector can be retrieved via $w = \sum_i v_i \phi(x_i)$. However, new samples can be classified without explicit access to the underlying feature space: $$w^T \phi(x) + b = \sum_i v_i \phi^T (x_i) \phi(x) + b = \sum_i v_i K( x_i, x ) + b$$
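The identity above can be verified in a few lines of base R for the linear kernel, where $\phi(x) = x$ and the feature space is explicit. This is an illustrative sketch independent of gelnet; the weights v and bias b here are arbitrary, not fitted values.

```r
## Check that w^T phi(x) + b equals sum_i v_i K(x_i, x) + b
## for the linear kernel K(x_i, x) = x_i^T x (so phi(x) = x).
set.seed(1)
n <- 10; p <- 4
X <- matrix(rnorm(n * p), n, p)   # training samples, one per row
v <- rnorm(n)                     # hypothetical kernel weights
b <- 0.5                          # hypothetical bias term
x.new <- rnorm(p)                 # a new sample to classify

## Explicit primal prediction: w = sum_i v_i x_i = X^T v
w <- drop(t(X) %*% v)
pred.primal <- sum(w * x.new) + b

## Kernel-only prediction: no access to phi needed
k.new <- drop(X %*% x.new)        # K(x_i, x.new) for all i
pred.kernel <- sum(v * k.new) + b

all.equal(pred.primal, pred.kernel)   # TRUE
```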

The method determines the problem type from the labels argument y. If y is a numeric vector, then a ridge regression model is trained by optimizing the following objective function: $$\frac{1}{2n} \sum_i a_i (y_i - (w^T x_i + b))^2 + \frac{\lambda}{2} w^Tw$$
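For intuition, this weighted ridge objective can be checked directly in base R. The sketch below fixes the bias at zero (cf. fix.bias) and works in the primal with explicit features; it is not gelnet's solver, just a verification that the gradient of the objective vanishes at the closed-form minimizer.

```r
## Weighted ridge: minimize 1/(2n) sum_i a_i (y_i - w^T x_i)^2 + lambda/2 w^T w
set.seed(2)
n <- 50; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
a <- runif(n, 0.5, 2)    # per-sample weights, as in the 'a' argument
lambda <- 0.3

## Closed-form minimizer from the normal equations
w <- solve(t(X) %*% (a * X) / n + lambda * diag(p),
           t(X) %*% (a * y) / n)

## Gradient of the objective at w should be numerically zero
grad <- -t(X) %*% (a * (y - X %*% w)) / n + lambda * w
max(abs(grad))   # effectively zero
```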

If y is a factor with two levels, then the function returns a binary classification model, obtained by optimizing the following objective function: $$-\frac{1}{n} \sum_i \left[ y_i s_i - \log( 1 + \exp(s_i) ) \right] + \frac{\lambda}{2} w^Tw$$ where $$s_i = w^T x_i + b$$
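This is the regularized logistic-regression loss. A primal, explicit-feature sketch of it can be minimized with base R's optim as a sanity check; this is illustrative and not gelnet's iterative solver, and the 0/1 label coding is an assumption of the sketch.

```r
## Regularized logistic objective:
## -1/n sum_i [ y_i s_i - log(1 + exp(s_i)) ] + lambda/2 w^T w
set.seed(3)
n <- 40; p <- 3
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)   # 0/1 labels (assumed coding for this sketch)
lambda <- 0.1

obj <- function(theta) {
  w <- theta[1:p]; b <- theta[p + 1]
  s <- drop(X %*% w) + b
  -mean(y * s - log1p(exp(s))) + lambda / 2 * sum(w^2)
}

fit <- optim(rep(0, p + 1), obj, method = "BFGS")
fit$value < obj(rep(0, p + 1))   # TRUE: objective decreased from the zero start
```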

Finally, if no labels are provided (y == NULL), then a one-class model is constructed using the following objective function: $$-\frac{1}{n} \sum_i \left[ s_i - \log( 1 + \exp(s_i) ) \right] + \frac{\lambda}{2} w^Tw$$ where $$s_i = w^T x_i$$
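The one-class objective is the logistic loss with every label set to 1 and no bias term. The base-R sketch below minimizes the primal, explicit-feature form with optim; it is illustrative only, and the data are generated with a positive mean so that a nontrivial solution exists.

```r
## One-class objective: -1/n sum_i [ s_i - log(1 + exp(s_i)) ] + lambda/2 w^T w,
## with s_i = w^T x_i (no bias); equivalent to logistic loss with all labels = 1.
set.seed(4)
n <- 30; p <- 3
X <- matrix(rnorm(n * p, mean = 1), n, p)   # positive-mean data for the sketch
lambda <- 0.1

obj <- function(w) {
  s <- drop(X %*% w)
  -mean(s - log1p(exp(s))) + lambda / 2 * sum(w^2)
}

fit <- optim(rep(0, p), obj, method = "BFGS")
fit$value < obj(rep(0, p))   # TRUE: objective decreased from the zero start
```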

In all cases, $w = \sum_i v_i \phi(x_i)$ and the method solves for $v_i$.

See Also

gelnet