## S4 method for signature 'formula'
kkmeans(x, data = NULL, na.action = na.omit, ...)

## S4 method for signature 'matrix'
kkmeans(x, centers, kernel = "rbfdot", kpar = "automatic",
alg="kkmeans", p=1, na.action = na.omit, ...)

## S4 method for signature 'kernelMatrix'
kkmeans(x, centers, ...)

## S4 method for signature 'list'
kkmeans(x, centers, kernel = "stringdot",
kpar = list(length=4, lambda=0.5),
alg ="kkmeans", p = 1, na.action = na.omit, ...)
Arguments

x: the matrix of data to be clustered, or a symbolic description of the model to be fitted, or a kernel matrix of class kernelMatrix, or a list of character vectors.

kpar: a character string or the list of hyper-parameters (kernel parameters) to be used with the kernel function (see kernels). The default character string "automatic" uses a heuristic to determine a suitable value for the width parameter of the RBF kernel. A list can also be used containing the parameters to be used with the kernel function.

alg: the algorithm to use. Options currently include kkmeans and kerninghan.

Value

An S4 object of class specc which extends the class vector, containing integers indicating the cluster to which each point is allocated. The following slots contain useful information:

centers: a matrix of cluster centers
size: the number of points in each cluster
withinss: the within-cluster sum of squares for each cluster
Details

Kernel k-means uses the 'kernel trick' (i.e. implicitly projecting all data into a non-linear feature space with the use of a kernel) in order to deal with one of the major drawbacks of k-means, namely that it cannot capture clusters that are not linearly separable in input space (see the spirals sketch after the examples below).
The algorithm is implemented using the triangle inequality to avoid unnecessary and computationally expensive distance calculations. This leads to a significant speedup, particularly on large data sets with a high number of clusters.
With a particular choice of weights this algorithm becomes equivalent to the Kernighan-Lin and norm-cut graph partitioning algorithms.
The function also supports input in the form of a kernel matrix or a list of character vectors for text clustering.
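To make these input forms concrete, here are two hedged sketches (they assume kernlab is loaded as above; the sigma value, toy strings, and string-kernel parameters are illustrative assumptions, not values taken from this page).

## Sketch: clustering a precomputed kernel matrix (kernelMatrix method)
K <- kernelMatrix(rbfdot(sigma = 0.1), as.matrix(iris[, -5]))
kkmeans(K, centers = 3)

## Sketch: clustering a list of character vectors with a string kernel
## (toy corpus; a real application would use a larger set of documents)
txt <- list("the cat sat on the mat", "a cat sat on a mat",
            "cats like sitting on mats", "the dog ran in the park",
            "a dog ran in a park", "dogs like running in parks")
kkmeans(txt, centers = 2, kernel = "stringdot",
        kpar = list(length = 4, lambda = 0.5))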
The data can be passed to the kkmeans function in a matrix or a data.frame; in addition kkmeans also supports input in the form of a kernel matrix of class kernelMatrix or as a list of character vectors where a string kernel has to be used.

See Also

specc, kpca, kcca

Examples

## Cluster the iris data set.
data(iris)
sc <- kkmeans(as.matrix(iris[,-5]), centers=3)
sc
centers(sc)
size(sc)
withinss(sc)
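As a further hedged sketch of the point made under Details about clusters that are not linearly separable, kernel k-means can be run on the two-spirals data shipped with kernlab; whether the two arms are recovered depends on the kernel width, and the sigma value below is an illustrative guess rather than a value from this page.

## Sketch: kernel k-means on the two-spirals data (not linearly separable)
## sigma is an illustrative guess; results depend on the kernel width
data(spirals)
sp <- kkmeans(spirals, centers = 2, kernel = "rbfdot", kpar = list(sigma = 100))
plot(spirals, col = sp)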