kernlab (version 0.8-2)

kha: Kernel Principal Components Analysis

Description

The Kernel Hebbian Algorithm is a nonlinear iterative algorithm for kernel principal component analysis.

Usage

## S3 method for class 'formula':
kha(x, data = NULL, na.action, ...)

## S3 method for class 'matrix':
kha(x, kernel = "rbfdot", kpar = list(sigma = 0.1), features = 5,
    eta = 0.005, th = 1e-4, maxiter = 10000, verbose = FALSE,
    na.action = na.omit, ...)

Arguments

x
The data matrix indexed by row, or a formula describing the model. Note that an intercept is always included, whether given in the formula or not.
data
an optional data frame containing the variables in the model (when using a formula).
kernel
the kernel function used in training and predicting. This parameter can be set to any function, of class kernel, which computes the inner product in feature space between two vector arguments (see kernels).
kpar
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels include:
  • sigma: inverse kernel width for the Radial Basis kernel function "rbfdot"
features
Number of features (principal components) to return. (default: 5)
eta
The Hebbian learning rate (default: 0.005)
th
the smallest value of the convergence step (default: 0.0001)
maxiter
the maximum number of iterations (default: 10000).
verbose
print convergence progress every 100 iterations (default: FALSE)
na.action
A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA values are found.
...
additional parameters

Value

An S4 object containing the principal component vectors along with the corresponding normalization values. The slots are:
  • pcv: a matrix containing the principal component vectors (column wise)
  • eig: the normalization values
  • xmatrix: the original data matrix
All the slots of the object can be accessed by accessor functions.
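For instance, the slots of a fitted object can be read with the matching accessor functions (pcv, eig, xmatrix, as named above). The call below is a small sketch using the iris data; maxiter is lowered here purely to keep the illustration quick:

```r
library(kernlab)

data(iris)
# fit via the matrix interface, dropping the Species column
kpc <- kha(as.matrix(iris[, -5]), kernel = "rbfdot",
           kpar = list(sigma = 0.2), features = 2, maxiter = 1000)

dim(pcv(kpc))      # principal component vectors, column wise
length(eig(kpc))   # one normalization value per feature
dim(xmatrix(kpc))  # the original data matrix
```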

Details

The original form of KPCA can only be used on small data sets since it requires the eigendecomposition of the full kernel matrix. The Kernel Hebbian Algorithm iteratively estimates the kernel principal components with only linear-order memory complexity (see the reference for more details).
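The update at the heart of the algorithm is a kernelized version of Sanger's Generalized Hebbian Algorithm (GHA): each component estimate is nudged by one sample at a time, with reconstructions from earlier components subtracted so the components stay distinct. A minimal sketch of the plain GHA update, shown in input space for clarity (the function and variable names here are illustrative, not kernlab internals; kha performs the analogous update on kernel expansion coefficients):

```r
# One GHA step (Sanger's rule) on a single centered observation x.
# W   : features x dims matrix of current component estimates
# x   : one centered observation (length dims)
# eta : learning rate
gha_step <- function(W, x, eta) {
  y <- as.vector(W %*% x)        # projections onto current components
  LT <- outer(y, y)              # y y^T ...
  LT[upper.tri(LT)] <- 0         # ... keep the lower-triangular part
  # Hebbian term minus reconstructions from earlier components
  W + eta * (outer(y, x) - LT %*% W)
}
```

Iterating this step over the data stream needs only the W matrix in memory, which is what gives the algorithm its linear-order memory complexity.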

References

Kwang In Kim, M.O. Franz and B. Schölkopf, Kernel Hebbian Algorithm for Iterative Kernel Principal Component Analysis, Max-Planck-Institut für biologische Kybernetik, Tübingen (109). http://www.kyb.tuebingen.mpg.de/publications/pdfs/pdf2302.pdf

See Also

kpca, kfa, kcca, pca

Examples

# example using the iris data set
data(iris)
test <- sample(1:150,20)

kpc <- kha(~., data = iris[-test, -5], kernel = "rbfdot",
           kpar = list(sigma = 0.2), features = 2)

#print the principal component vectors
pcv(kpc)

#plot the data projection on the components
plot(predict(kpc, iris[, -5]), col = as.integer(iris[, 5]),
     xlab = "1st Principal Component", ylab = "2nd Principal Component")
