kernlab (version 0.9-13)

kfa: Kernel Feature Analysis

Description

Kernel Feature Analysis is an algorithm for extracting structure from possibly high-dimensional data sets. As with kpca, a new basis for the data is found; the data can then be projected onto this new basis.

Usage

## S3 method for class 'formula'
kfa(x, data = NULL, na.action = na.omit, ...)

## S3 method for class 'matrix'
kfa(x, kernel = "rbfdot", kpar = list(sigma = 0.1),
    features = 0, subset = 59, normalize = TRUE, na.action = na.omit)

Arguments

x
The data matrix indexed by row, or a formula describing the model. Note that an intercept is always included, whether given in the formula or not.
data
an optional data frame containing the variables in the model (when using a formula).
kernel
the kernel function used in training and predicting. This parameter can be set to any function of class kernel which computes an inner product in feature space between two vector arguments. kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to one of the following strings: rbfdot (Radial Basis kernel), polydot (Polynomial kernel), vanilladot (Linear kernel), tanhdot (Hyperbolic tangent kernel), laplacedot (Laplacian kernel), besseldot (Bessel kernel), anovadot (ANOVA RBF kernel) or splinedot (Spline kernel). The kernel parameter can also be set to a user defined function of class kernel by passing the function name as an argument; see the sketch after this argument list.
kpar
the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for existing kernels are:
  • sigma: inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot"
  • degree, scale, offset: for the Polynomial kernel "polydot"
  • scale, offset: for the Hyperbolic tangent kernel function "tanhdot"
  • sigma, order, degree: for the Bessel kernel "besseldot"
  • sigma, degree: for the ANOVA kernel "anovadot"
Hyper-parameters for user defined kernel functions can be passed through the kpar parameter as well.
features
Number of features (principal components) to return. (default: 0, meaning all features are returned)
subset
the number of features sampled (used) from the data set
normalize
normalize the features selected (default: TRUE)
na.action
A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)
...
additional parameters
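
As an illustration of the kernel and kpar arguments, a minimal sketch follows. It assumes an arbitrary numeric data matrix; the second call, which passes a kernel object built with a generating function such as rbfdot, follows kernlab's usual convention and is an assumption here.

library(kernlab)
x <- as.matrix(iris[, -5])   # any numeric data matrix will do

# kernel given as a string, hyper-parameters through kpar
f1 <- kfa(x, kernel = "rbfdot", kpar = list(sigma = 0.1), features = 2)

# equivalently (assumed): a kernel object from a generating function
rbf <- rbfdot(sigma = 0.1)
f2 <- kfa(x, kernel = rbf, features = 2)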

Value

kfa returns an object of class kfa containing the features selected by the algorithm, with the following slots:
  • xmatrix: contains the features selected
  • alpha: contains the sparse alpha vector
The predict function can be used to embed new data points into the selected feature base; a short sketch follows.
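
A brief sketch of inspecting a fitted kfa object; the xmatrix and alpha accessor calls follow kernlab's usual slot accessors and are assumptions here.

library(kernlab)
data(promotergene)
f <- kfa(~., data = promotergene, features = 2, kernel = "rbfdot", kpar = list(sigma = 0.01))

xmatrix(f)                        # assumed accessor: the selected patterns (feature basis)
alpha(f)                          # assumed accessor: the sparse alpha vector
emb <- predict(f, promotergene)   # embed the data into the selected feature base
dim(emb)                          # one column per selected feature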

Details

Kernel Feature Analysis is similar to Kernel PCA, but instead of extracting the eigenvectors of the training data set in feature space, it approximates the eigenvectors by selecting training patterns which are good basis vectors for the training set. It works by choosing a fixed-size subset of the data set and scaling the patterns in it to unit length (under the kernel). It then chooses the features that maximize the value of the inner product (kernel function) with the rest of the patterns.
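
The following schematic sketch illustrates one greedy selection step of the procedure described above. It is a simplified illustration rather than kernlab's actual implementation; the helper name greedy_kfa_step and the plain sum-of-inner-products criterion are assumptions.

library(kernlab)

greedy_kfa_step <- function(x, kern = rbfdot(sigma = 0.1), subset = 20) {
  # choose a fixed-size candidate subset of the data
  cand <- x[sample(nrow(x), subset), , drop = FALSE]
  # kernel values (inner products in feature space) between candidates and all patterns
  K <- kernelMatrix(kern, cand, x)
  # scale each candidate to unit length under the kernel
  norms <- sqrt(diag(kernelMatrix(kern, cand)))
  K <- K / norms
  # pick the candidate whose inner products with the rest are largest
  cand[which.max(rowSums(K)), ]
}

The real algorithm repeats such steps, recording the result in the sparse alpha vector, until the requested number of features has been selected.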

References

Alex J. Smola, Olvi L. Mangasarian and Bernhard Schoelkopf, Sparse Kernel Feature Analysis, Data Mining Institute Technical Report 99-04, October 1999. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/99-04.ps

See Also

kpca, kfa-class

Examples

library(kernlab)
data(promotergene)

# select 2 features with an RBF kernel
f <- kfa(~., data = promotergene, features = 2, kernel = "rbfdot", kpar = list(sigma = 0.01))

# embed the data in the selected feature base, coloured by class
plot(predict(f, promotergene), col = as.numeric(promotergene[,1]))
