Carry out dimensionality reduction of a dataset using the Uniform Manifold Approximation and Projection (UMAP) method (McInnes & Healy, 2018). Some of the following help text is lifted verbatim from the Python reference implementation at https://github.com/lmcinnes/umap.
Usage

umap(X, n_neighbors = 15, n_components = 2, metric = "euclidean",
  n_epochs = NULL, alpha = 1, scale = FALSE, init = "spectral",
  spread = 1, min_dist = 0.01, set_op_mix_ratio = 1,
  local_connectivity = 1, bandwidth = 1, gamma = 1,
  negative_sample_rate = 5, a = NULL, b = NULL, nn_method = NULL,
  n_trees = 50, search_k = 2 * n_neighbors * n_trees,
  approx_pow = FALSE, y = NULL, target_n_neighbors = n_neighbors,
  target_weight = 0.5, ret_model = FALSE,
  n_threads = max(1, RcppParallel::defaultNumThreads() / 2),
  grain_size = 1, verbose = getOption("verbose", TRUE))
Arguments

X: Input data. Can be a data.frame, matrix, dist object or sparseMatrix. A sparse matrix is interpreted as a distance matrix, and both implicit and explicit zero entries are ignored. Set any zero distances you want to keep to an arbitrarily small non-zero value (e.g. 1e-10). Matrices and data frames should contain one observation per row. Data frames will have any non-numeric columns removed.
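A brief sketch of the accepted input types (this assumes the page documents umap() from the uwot package, consistent with the umap_transform and RcppAnnoy references below):

# Data frame input: the non-numeric Species column is removed automatically
emb_df <- umap(iris)

# Precomputed distances passed as a dist object
emb_dist <- umap(dist(iris[, 1:4]))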
n_neighbors: The size of the local neighborhood (in terms of the number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values preserve more of the local structure. In general, values should be in the range 2 to 100.
n_components: The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
metric: Type of distance metric to use to find nearest neighbors. One of:
  "euclidean" (the default)
  "cosine"
  "manhattan"
Only applies if nn_method = "annoy"; for nn_method = "fnn", the distance metric is always "euclidean".
n_epochs: Number of epochs to use during the optimization of the embedded coordinates. By default, this value is set to 500 for datasets containing 10,000 vertices or fewer, and 200 otherwise.
alpha: Initial learning rate used in the optimization of the coordinates.
scale: Scaling to apply to X if it is a data frame or matrix:
  "none", FALSE or NULL: no scaling.
  "scale" or TRUE: scale each column to zero mean and variance 1.
  "maxabs": center each column to mean 0, then divide each element by the maximum absolute value over the entire matrix.
  "range": range scale the entire matrix, so the smallest element is 0 and the largest is 1.
For UMAP, the default is "none". A sketch of what each option computes follows this entry.
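The following is illustrative only (the helper code is hand-rolled, not part of this package), showing the transformation each option corresponds to:

m <- as.matrix(iris[, 1:4])
m_scale  <- scale(m)                          # "scale": zero-mean, unit-variance columns
m_center <- scale(m, scale = FALSE)           # "maxabs": center columns to mean 0 ...
m_maxabs <- m_center / max(abs(m_center))     # ... then divide by the global max absolute value
m_range  <- (m - min(m)) / (max(m) - min(m))  # "range": rescale the whole matrix to [0, 1]

# Equivalent to letting umap() do the scaling internally:
emb <- umap(iris, scale = "range")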
init: Type of initialization for the coordinates. Options are:
  "spectral": spectral embedding using the normalized Laplacian of the fuzzy 1-skeleton, with Gaussian noise added.
  "normlaplacian": spectral embedding using the normalized Laplacian of the fuzzy 1-skeleton, without noise.
  "random": coordinates assigned using a uniform random distribution between -10 and 10.
  "lvrandom": coordinates assigned using a Gaussian distribution with standard deviation 1e-4, as used in LargeVis (Tang et al., 2016) and t-SNE.
  "laplacian": spectral embedding using the Laplacian Eigenmap (Belkin & Niyogi, 2002).
  "pca": the first two principal components from PCA of X if X is a data frame, and from a 2-dimensional classical MDS if X is of class "dist".
  "spca": like "pca", but each dimension is then scaled so the standard deviation is 1e-4, to give a distribution similar to that used in t-SNE.
Alternatively, a matrix of initial coordinates can be supplied; see the sketch below.
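A sketch of passing user-supplied initial coordinates, here the first two principal component scores computed with prcomp() (similar in spirit to init = "pca"):

init_coords <- prcomp(iris[, 1:4], rank. = 2)$x  # 150 x 2 matrix of PC scores
emb <- umap(iris, init = init_coords)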
spread: The effective scale of embedded points. In combination with min_dist, this determines how clustered/clumped the embedded points are.
min_dist: The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result in a more even dispersal of points. The value should be set relative to the spread value, which determines the scale at which embedded points will be spread out.
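A sketch of the min_dist/spread interplay: both runs embed the same data, but the first packs neighbors tightly while the second disperses them.

emb_tight <- umap(iris, min_dist = 0.001, spread = 1)
emb_loose <- umap(iris, min_dist = 0.5, spread = 5)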
set_op_mix_ratio: Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial set. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.
local_connectivity: The local connectivity required, i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value, the more connected the manifold becomes locally. In practice, this should not be more than the local intrinsic dimension of the manifold.
bandwidth: The effective bandwidth of the kernel if we view the algorithm as similar to Laplacian eigenmaps. Larger values induce more connectivity and a more global view of the data; smaller values concentrate more locally.
gamma: Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
negative_sample_rate: The number of negative edge/1-simplex samples to use per positive edge/1-simplex sample in optimizing the low dimensional embedding.
a: More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by min_dist and spread.
b: More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by min_dist and spread.
nn_method: Method for finding nearest neighbors. Options are:
  "fnn": use exact nearest neighbors via the FNN package.
  "annoy": use approximate nearest neighbors via the RcppAnnoy package.
By default, if X has fewer than 4,096 vertices, the exact nearest neighbors are found. Otherwise, approximate nearest neighbors are used.
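A sketch of forcing each method explicitly rather than relying on the 4,096-vertex cutoff:

emb_exact  <- umap(iris, nn_method = "fnn")    # exact neighbors; metric is always euclidean
emb_approx <- umap(iris, nn_method = "annoy")  # approximate neighbors via Annoy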
n_trees: Number of trees to build when constructing the nearest neighbor index. The more trees specified, the larger the index, but the better the results. Together with search_k, this determines the accuracy of the Annoy nearest neighbor search. Only used if nn_method is "annoy". Sensible values are between 10 and 100.
search_k: Number of nodes to search during the neighbor retrieval. The larger search_k, the more accurate the results, but the longer the search takes. Together with n_trees, this determines the accuracy of the Annoy nearest neighbor search. Only used if nn_method is "annoy".
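A sketch of trading speed for neighbor accuracy in the Annoy search (with n_trees = 100 and the default n_neighbors = 15, the default search_k would be 2 * 15 * 100 = 3000; raising it searches more nodes):

emb <- umap(iris, nn_method = "annoy", n_trees = 100, search_k = 10000)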
approx_pow: If TRUE, use an approximation to the power function in the UMAP gradient, from https://martin.ankerl.com/2012/01/25/optimized-approximative-pow-in-c-and-cpp/.
y: Optional target array for supervised dimension reduction. Must be a factor or numeric vector with the same length as X.
target_n_neighbors: Number of nearest neighbors to use to construct the target simplicial set. The default value is n_neighbors. Applies only if y is non-NULL and numeric.
target_weight: Weighting factor between data topology and target topology. A value of 0.0 weights entirely on data, a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target. Only applies if y is non-NULL.
ret_model: If TRUE, return extra data that can be used to add new data to an existing embedding via umap_transform (see the sketch below). Otherwise, just return the coordinates.
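A sketch of the model-reuse workflow, assuming umap_transform() takes the new observations followed by the returned model:

train_idx <- sample(nrow(iris), 100)
model <- umap(iris[train_idx, ], ret_model = TRUE)
head(model$embedding)                                     # coordinates for the training data
new_coords <- umap_transform(iris[-train_idx, ], model)   # embed the held-out rows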
n_threads: Number of threads to use. The default is half the number recommended by RcppParallel. For nearest neighbor search, this only applies if nn_method = "annoy".
grain_size: Minimum batch size for multithreading. If the number of items to process in a thread falls below this number, then no threads will be used. Used in conjunction with n_threads.
verbose: If TRUE, log details to the console.
Value

A matrix of optimized coordinates or, if ret_model = TRUE, a list containing extra information that can be used to add new data to an existing embedding via umap_transform. In this case, the coordinates are available in the list item embedding.
References

Belkin, M., & Niyogi, P. (2002). Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems (pp. 585-591). http://papers.nips.cc/paper/1961-laplacian-eigenmaps-and-spectral-techniques-for-embedding-and-clustering.pdf

McInnes, L., & Healy, J. (2018). UMAP: Uniform Manifold Approximation and Projection for dimension reduction. arXiv preprint arXiv:1802.03426. https://arxiv.org/abs/1802.03426

Tang, J., Liu, J., Zhang, M., & Mei, Q. (2016, April). Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web (pp. 287-297). International World Wide Web Conferences Steering Committee. https://arxiv.org/abs/1602.00370

Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9, 2579-2605. http://www.jmlr.org/papers/v9/vandermaaten08a.html
Examples

# NOT RUN {
iris_umap <- umap(iris, n_neighbors = 50, alpha = 0.5, init = "random")
# Faster approximation to the gradient
iris_umap <- umap(iris, n_neighbors = 15, approx_pow = TRUE)
# Load mnist from somewhere, e.g.
# devtools::install_github("jlmelville/snedata")
# mnist <- snedata::download_mnist()
mnist_umap <- umap(mnist, n_neighbors = 15, min_dist = 0.001, verbose = TRUE)
# Supervised dimension reduction
mnist_sumap <- umap(mnist, n_neighbors = 15, min_dist = 0.001, verbose = TRUE,
y = mnist$Label, target_weight = 0.5)
# }