umapr (version 0.0.0.9001)

umap: Uniform Manifold Approximation and Projection

Description

Provides an interface to the UMAP algorithm implemented in Python.

Usage

umap(data, include_input = TRUE, n_neighbors = 15L,
  n_components = 2L, metric = "euclidean", n_epochs = NULL,
  learning_rate = 1, alpha = 1, init = "spectral", spread = 1,
  min_dist = 0.1, set_op_mix_ratio = 1, local_connectivity = 1L,
  repulsion_strength = 1, bandwidth = 1, gamma = 1,
  negative_sample_rate = 5L, transform_queue_size = 4, a = NULL,
  b = NULL, random_state = NULL, metric_kwds = dict(),
  angular_rp_forest = FALSE, target_n_neighbors = -1L,
  target_metric = "categorical", target_metric_kwds = dict(),
  target_weight = 0.5, transform_seed = 42L, verbose = FALSE)

Arguments

data

data frame or matrix. Input data.

include_input

logical. Whether to attach the input data to the returned UMAP embedding.

n_neighbors

integer. The size of local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. In general values should be in the range 2 to 100.
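
For example, a purely illustrative sketch (using the built-in iris data) contrasting a more local and a more global view:

embedding_local  <- umap(as.matrix(iris[, 1:4]), n_neighbors = 5L)   # emphasizes local structure
embedding_global <- umap(as.matrix(iris[, 1:4]), n_neighbors = 50L)  # emphasizes global structure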

n_components

integer. The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
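
For instance (a sketch):

# embed into three dimensions instead of the default two
embedding_3d <- umap(as.matrix(iris[, 1:4]), n_components = 3L)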

metric

character. The metric to use to compute distances in high dimensional space. If a string is passed it must match a valid predefined metric. If a general metric is required, a function that takes two 1d arrays and returns a float can be provided. For performance purposes it is required that this be a numba jit'd function. Valid string metrics include: euclidean, manhattan, chebyshev, minkowski, canberra, braycurtis, mahalanobis, wminkowski, seuclidean, cosine, correlation, haversine, hamming, jaccard, dice, russellrao, kulsinski, rogerstanimoto, sokalmichener, sokalsneath, yule. Metrics that take arguments (such as minkowski, mahalanobis etc.) can have arguments passed via the metric_kwds dictionary. At this time care must be taken and dictionary elements must be ordered appropriately; this will hopefully be fixed in the future.
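
As a sketch of passing such an argument (assuming reticulate is available, since metric_kwds is a reticulate dictionary):

# Minkowski distance with p = 3, supplied through a reticulate dictionary
embedding <- umap(as.matrix(iris[, 1:4]),
                  metric = "minkowski",
                  metric_kwds = reticulate::dict(p = 3))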

n_epochs

integer. The number of training epochs to use in optimization.

learning_rate

numeric. The initial learning rate for the embedding optimization.

alpha

numeric. The initial learning rate for the embedding optimization.

init

character. How to initialize the low dimensional embedding. Options are: 'spectral' (use a spectral embedding of the fuzzy 1-skeleton), 'random' (assign initial embedding positions at random), or an array of initial embedding positions.

spread

numeric. The effective scale of embedded points. In combination with ``min_dist`` this determines how clustered/clumped the embedded points are.

min_dist

numeric. The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result on a more even dispersal of points. The value should be set relative to the ``spread`` value, which determines the scale at which embedded points will be spread out.
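
An illustrative sketch of the trade-off:

embedding_tight <- umap(as.matrix(iris[, 1:4]), min_dist = 0.01, spread = 1)  # tightly clumped
embedding_even  <- umap(as.matrix(iris[, 1:4]), min_dist = 0.5,  spread = 1)  # more evenly dispersed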

set_op_mix_ratio

numeric. Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial sets. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.

local_connectivity

integer. The local connectivity required, i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value, the more connected the manifold becomes locally. In practice, this should not be more than the local intrinsic dimension of the manifold.

repulsion_strength

numeric. Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.

bandwidth

numeric. The effective bandwidth of the kernel if we view the algorithm as similar to Laplacian eigenmaps. Larger values induce more connectivity and a more global view of the data, smaller values concentrate more locally.

gamma

numeric. Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.

negative_sample_rate

numeric. The number of negative edge/1-simplex samples to use per positive edge/1-simplex sample in optimizing the low dimensional embedding.

transform_queue_size

numeric. For transform operations (embedding new points using a trained model), this will control how aggressively to search for nearest neighbors. Larger values will result in slower performance but more accurate nearest neighbor evaluation.

a

numeric. More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by ``min_dist`` and ``spread``.

b

numeric. More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by ``min_dist`` and ``spread``.

random_state

integer. If integer, random_state is the seed used by the random number generator; If NULL, the random number generator is the RandomState instance used by `np.random`.
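
For example, fixing the seed so that repeated calls give the same embedding (a sketch):

embedding_a <- umap(as.matrix(iris[, 1:4]), random_state = 42L)
embedding_b <- umap(as.matrix(iris[, 1:4]), random_state = 42L)
# embedding_a and embedding_b should now be identical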

metric_kwds

reticulate dictionary. Arguments to pass on to the metric, such as the ``p`` value for Minkowski distance.

angular_rp_forest

logical. Whether to use an angular random projection forest to initialise the approximate nearest neighbor search. This can be faster, but is mostly only useful for metrics that use an angular style distance such as cosine, correlation etc. In the case of those metrics, angular forests will be chosen automatically.

target_n_neighbors

integer. The number of nearest neighbors to use to construct the target simplicial set. If set to -1, the n_neighbors value is used.

target_metric

character or function. The metric used to measure distance for the target array when using supervised dimension reduction. By default this is 'categorical', which measures distance in terms of whether categories match or differ. Furthermore, if semi-supervised learning is required, target values of -1 will be treated as unlabelled under the 'categorical' metric. If the target array takes continuous values (e.g. for a regression problem), then a metric of 'l1' or 'l2' is probably more appropriate.

target_metric_kwds

reticulate dictionary. Keyword arguments to pass to the target metric when performing supervised dimension reduction. If empty, no arguments are passed on.

target_weight

numeric. Weighting factor between data topology and target topology. A value of 0.0 weights entirely on data, a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.

transform_seed

integer. Random seed used for the stochastic aspects of the transform operation. This ensures consistency in transform operations.

verbose

logical. Controls verbosity of logging.

Value

matrix. The low dimensional embedding (with the input data attached when include_input = TRUE).

References

Leland McInnes and John Healy (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e-prints 1802.03426.

Examples

# NOT RUN {
# load umapr (imports the Python umap module via reticulate)
library("umapr")
umap(as.matrix(iris[, 1:4]))
umap(iris[, 1:4])
# }
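
A slightly fuller sketch (illustrative only; it assumes include_input = FALSE so that the first two columns of the result are the embedding coordinates):

# embed without attaching the input columns, then plot the first two dimensions
embedding <- umap(as.matrix(iris[, 1:4]), include_input = FALSE)
plot(embedding[, 1], embedding[, 2], col = iris$Species)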
