Description

This function uses a kd-tree to quickly find the k nearest neighbors (including distances) for all points in a data matrix.
Usage

kNN(
  x,
  k,
  query = NULL,
  sort = TRUE,
  search = "kdtree",
  bucketSize = 10,
  splitRule = "suggest",
  approx = 0
)

# S3 method for kNN
sort(x, decreasing = FALSE, ...)

# S3 method for kNN
adjacencylist(x, ...)

# S3 method for kNN
print(x, ...)
Value

An object of class kNN (subclass of NN) containing a list with the following components:

dist: a matrix with distances.

id: a matrix with ids.

k: the number k used.

metric: the distance metric used.
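For instance, a quick sketch of inspecting these components (assuming the dbscan package is loaded and using the iris data from the examples below):

library(dbscan)
nn <- kNN(iris[, -5], k = 3)
names(nn)     # includes "dist", "id", "k" and "metric"
dim(nn$dist)  # one row per point, one column per neighbor
nn$k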
Arguments

x: a data matrix, a dist object or a kNN object.

k: number of neighbors to find.

query: a data matrix with the points to query. If query is not specified, the NN for all the points in x is returned. If query is specified, then x needs to be a data matrix.

sort: sort the neighbors by distance? Note that some search methods already sort the results. Sorting is expensive and sort = FALSE may be much faster for some search methods. kNN objects can be sorted later using sort() (see the sketch after this list).

search: nearest neighbor search strategy (one of "kdtree", "linear" or "dist").

bucketSize: maximum size of the kd-tree leafs.

splitRule: rule to split the kd-tree. One of "STD", "MIDPT", "FAIR", "SL_MIDPT", "SL_FAIR" or "SUGGEST" (SL stands for sliding). "SUGGEST" uses ANN's best guess.

approx: use approximate nearest neighbors. All NN up to a distance of a factor of (1 + approx) eps may be used. Some actual NN may be omitted, leading to spurious clusters and noise points. However, the algorithm will enjoy a significant speedup.

decreasing: sort in decreasing order?

...: further arguments
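A minimal sketch of deferred sorting (assuming the dbscan package and the iris data used in the examples below):

library(dbscan)
x <- iris[, -5]

# skip sorting during the search; sort later only if needed
nn <- kNN(x, k = 5, sort = FALSE)
nn_sorted <- sort(nn)                # neighbors now ordered by distance
all(diff(nn_sorted$dist[1, ]) >= 0)  # TRUE: distances are non-decreasing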
Author

Michael Hahsler
Details

Ties: If the kth and the (k+1)th nearest neighbor are tied, then the neighbor found first is returned and the other one is ignored.

Self-matches: If no query is specified, then self-matches are removed.
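A small sketch of the self-match behavior (assuming the dbscan package; the comments state the expected outcome):

library(dbscan)
x <- iris[, -5]

# no query: a point is never reported as its own neighbor
nn <- kNN(x, k = 3)
1 %in% nn$id[1, ]    # expected FALSE: self-match removed

# explicit query: self-matches are kept (distance 0)
nn_q <- kNN(x, k = 3, query = x)
1 %in% nn_q$id[1, ]  # expected TRUE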
Details on the search parameters:

search controls whether a kd-tree or linear search is used (both are implemented in the ANN library; see Mount and Arya, 2010). Note that these implementations cannot handle NAs. search = "dist" precomputes Euclidean distances first using R. NAs are handled, but the resulting distance matrix cannot contain NAs. To use other distance measures, a precomputed distance matrix can be provided as x (search is then ignored).
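As an illustration, a sketch of supplying a precomputed, non-Euclidean distance matrix (Example 3 below does the same with the Manhattan distance):

library(dbscan)
x <- iris[, -5]
d <- dist(x, method = "maximum")  # any metric supported by dist()
nn <- kNN(d, k = 5)               # search is ignored for dist input
head(nn$id)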
bucketSize and splitRule influence how the kd-tree is built. approx uses the approximate nearest neighbor search implemented in ANN. All nearest neighbors up to a distance of eps / (1 + approx) will be considered and all with a distance greater than eps will not be considered. The other points might be considered. Note that this results in some actual nearest neighbors being omitted, leading to spurious clusters and noise points. However, the algorithm will enjoy a significant speedup. For more details see Mount and Arya (2010).
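A rough sketch of the speed/accuracy trade-off (timings depend heavily on the data; the larger random matrix here is made up for illustration):

library(dbscan)
set.seed(42)
x_big <- matrix(runif(20000 * 10), ncol = 10)  # hypothetical data set
system.time(kNN(x_big, k = 10))                # exact search
system.time(kNN(x_big, k = 10, approx = 5))    # approximate search, typically faster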
References

David M. Mount and Sunil Arya (2010). ANN: A Library for Approximate Nearest Neighbor Searching, http://www.cs.umd.edu/~mount/ANN/.
See Also

Other NN functions: NN, comps(), frNN(), kNNdist(), sNN()
Examples

library(dbscan)

data(iris)
x <- iris[, -5]
# Example 1: finding kNN for all points in a data matrix (using a kd-tree)
nn <- kNN(x, k = 5)
nn
# explore neighborhood of point 10
i <- 10
nn$id[i,]
plot(x, col = ifelse(seq_len(nrow(iris)) %in% nn$id[i,], "red", "black"))
# visualize the 5 nearest neighbors
plot(nn, x)
# visualize a reduced 2-NN graph
plot(kNN(nn, k = 2), x)
# Example 2: find kNN for query points
q <- x[c(1,100),]
nn <- kNN(x, k = 10, query = q)
plot(nn, x, col = "grey")
points(q, pch = 3, lwd = 2)
# Example 3: find kNN using distances
d <- dist(x, method = "manhattan")
nn <- kNN(d, k = 1)
plot(nn, x)
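# A sketch (not from the package's original examples): the adjacencylist()
# and print() methods listed under Usage
nn <- kNN(x, k = 3)
adjacencylist(nn)[1:3]  # neighbors of the first three points as a list
print(nn)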