Carry out dimensionality reduction of a dataset using a method similar to LargeVis (Tang et al., 2016).
lvish(X, perplexity = 50, n_neighbors = perplexity * 3,
n_components = 2, metric = "euclidean", n_epochs = -1,
learning_rate = 1, scale = "maxabs", init = "lvrandom",
init_sdev = NULL, repulsion_strength = 7, negative_sample_rate = 5,
nn_method = NULL, n_trees = 50, search_k = 2 * n_neighbors *
n_trees, n_threads = max(1, RcppParallel::defaultNumThreads()/2),
n_sgd_threads = 0, grain_size = 1, kernel = "gauss", pca = NULL,
pca_center = TRUE, pcg_rand = TRUE, fast_sgd = FALSE,
ret_nn = FALSE, tmpdir = tempdir(), verbose = getOption("verbose",
TRUE))
X: Input data. Can be a data.frame, matrix, dist object or sparseMatrix. A sparse matrix is interpreted as a distance matrix, and both implicit and explicit zero entries are ignored. Set zero distances you want to keep to an arbitrarily small non-zero value (e.g. 1e-10). Matrices and data frames should contain one observation per row. Data frames will have any non-numeric columns removed, although factor columns will be used if explicitly included via metric (see the help for metric for details). Can be NULL if precomputed nearest neighbor data is passed to nn_method and init is not "spca" or "pca".
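Since both implicit and explicit zeros in a sparse distance matrix are ignored, genuine zero distances must be bumped to a tiny positive value before conversion. A minimal sketch using the Matrix package and a made-up 3-point distance matrix:

```r
library(Matrix)

# Hypothetical distance matrix: points 1 and 2 are identical (distance 0)
d <- matrix(c(0, 0, 2,
              0, 0, 3,
              2, 3, 0), nrow = 3)

# Keep the genuine zero distances by setting them to a tiny value;
# the diagonal self-distances can remain implicit zeros
keep <- d == 0 & row(d) != col(d)
d[keep] <- 1e-10

d_sparse <- Matrix(d, sparse = TRUE)  # suitable to pass as X
```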
perplexity: Controls the size of the local neighborhood used for manifold approximation. This is analogous to n_neighbors in umap. Change this parameter, rather than n_neighbors.
n_neighbors: The number of neighbors to use when calculating the perplexity. Usually set to three times the value of the perplexity. Must be at least as large as the perplexity.
n_components: The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
metric: Type of distance metric to use to find nearest neighbors. One of: "euclidean" (the default), "cosine", "manhattan", "hamming", or "categorical" (see below). Only applies if nn_method = "annoy" (for nn_method = "fnn", the distance metric is always "euclidean").

If X is a data frame or matrix, then multiple metrics can be specified by passing a list to this argument, where the name of each item in the list is one of the metric names above. The value of each list item should be a vector giving the names or integer ids of the columns to be included in a calculation, e.g. metric = list(euclidean = 1:4, manhattan = 5:10). Each metric calculation results in a separate fuzzy simplicial set, and these sets are intersected together to produce the final set. Metric names can be repeated. Because non-numeric columns are removed from the data frame, it is safer to use column names than integer ids.

Factor columns can also be used by specifying the metric name "categorical". Factor columns are treated differently from numeric columns, and although multiple factor columns can be specified in a vector, each factor column specified is processed individually. If you specify a non-factor column, it will be coerced to a factor.

For a given data block, you may override the pca and pca_center arguments for that block by providing a list with one unnamed item containing the column names or ids, and then any of the pca or pca_center overrides as named items, e.g. metric = list(euclidean = 1:4, manhattan = list(5:10, pca_center = FALSE)). This exists to allow mixed binary and real-valued data to be included and to have PCA applied to both, but with centering applied only to the real-valued data (it is typical not to apply centering to binary data before PCA is applied).
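As a concrete (hypothetical) example, a dataset whose first four columns are real-valued and whose next six columns are binary could combine a per-block metric with a per-block override:

```r
# Euclidean distance on the real-valued block, Manhattan on the binary
# block; centering before PCA is disabled for the binary block only
metric <- list(
  euclidean = 1:4,
  manhattan = list(5:10, pca_center = FALSE)
)
```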
n_epochs: Number of epochs to use during the optimization of the embedded coordinates. The default is to calculate the number of epochs dynamically, based on dataset size, to give the same number of edge samples as the LargeVis defaults. This is usually substantially larger than the UMAP defaults.
learning_rate: Initial learning rate used in optimization of the coordinates.
scale: Scaling to apply to X if it is a data frame or matrix:

"none", FALSE or NULL: no scaling.
"Z", "scale" or TRUE: scale each column to zero mean and variance 1.
"maxabs": center each column to mean 0, then divide each element by the maximum absolute value over the entire matrix.
"range": range scale the entire matrix, so the smallest element is 0 and the largest is 1.
"colrange": scale each column in the range (0, 1).

For lvish, the default is "maxabs", for consistency with LargeVis.
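For illustration, the lvish default "maxabs" (and, for comparison, "range") can be sketched in a few lines of base R; these are stand-ins for the package's internal code, not the code itself:

```r
# "maxabs": center each column to mean 0, then divide every element by
# the maximum absolute value over the entire matrix
scale_maxabs <- function(X) {
  X <- sweep(X, 2, colMeans(X))
  X / max(abs(X))
}

# "range": scale the whole matrix so the smallest element is 0 and the
# largest is 1
scale_range <- function(X) {
  (X - min(X)) / (max(X) - min(X))
}

X <- matrix(c(1, 2, 3, 10, 20, 30), ncol = 2)
Xs <- scale_maxabs(X)
max(abs(Xs))  # 1 by construction
```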
init: Type of initialization for the coordinates. Options are:

"spectral": spectral embedding using the normalized Laplacian of the fuzzy 1-skeleton, with Gaussian noise added.
"normlaplacian": spectral embedding using the normalized Laplacian of the fuzzy 1-skeleton, without noise.
"random": coordinates assigned using a uniform random distribution between -10 and 10.
"lvrandom": coordinates assigned using a Gaussian distribution with standard deviation 1e-4, as used in LargeVis (Tang et al., 2016) and t-SNE.
"laplacian": spectral embedding using the Laplacian Eigenmap (Belkin and Niyogi, 2002).
"pca": the first two principal components from PCA of X if X is a data frame, and from a 2-dimensional classical MDS if X is of class "dist".
"spca": like "pca", but each dimension is then scaled so the standard deviation is 1e-4, to give a distribution similar to that used in t-SNE and LargeVis. This is an alias for init = "pca", init_sdev = 1e-4.
"agspectral": an "approximate global" modification of "spectral" which sets all edges in the graph to a value of 1, and then sets a random number of edges (negative_sample_rate edges per vertex) to 0.1, to approximate the effect of non-local affinities.

A matrix of initial coordinates can also be supplied.

For spectral initializations ("spectral", "normlaplacian", "laplacian"), if more than one connected component is identified, each connected component is initialized separately and the results are merged. If verbose = TRUE, the number of connected components is logged to the console. The existence of multiple connected components implies that a global view of the data cannot be attained with this initialization. Either a PCA-based initialization or increasing the value of n_neighbors may be more appropriate.
init_sdev: If non-NULL, scales each dimension of the initialized coordinates (including any user-supplied matrix) to this standard deviation. By default no scaling is carried out, except when init = "spca", in which case the value is 0.0001. Scaling the input may help if the unscaled version results in initial coordinates with large inter-point distances or outliers, which usually produce small gradients during optimization and very little progress in the layout. Shrinking the initial embedding by rescaling can help under these circumstances. Scaling the result of init = "pca" is usually recommended, and init = "spca" exists as an alias for init = "pca", init_sdev = 1e-4; for the spectral initializations the scaled versions usually aren't necessary unless you are using a large value of n_neighbors (e.g. n_neighbors = 150 or higher).
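What init_sdev does can be sketched in base R (an illustrative stand-in, not the package's code): each column of the initial coordinates is rescaled to the target standard deviation.

```r
# Rescale each column of an initial layout to a target standard
# deviation (1e-4 is the value used by init = "spca")
shrink_init <- function(coords, sdev = 1e-4) {
  sweep(coords, 2, apply(coords, 2, sd), "/") * sdev
}

set.seed(42)
init <- matrix(rnorm(20, sd = 5), ncol = 2)  # hypothetical wide layout
init_small <- shrink_init(init)
apply(init_small, 2, sd)  # both columns now have sd 1e-4
```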
repulsion_strength: Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
negative_sample_rate: The number of negative edge/1-simplex samples to use per positive edge/1-simplex sample in optimizing the low dimensional embedding.
nn_method: Method for finding nearest neighbors. Options are:

"fnn": use exact nearest neighbors via the FNN package.
"annoy": use approximate nearest neighbors via the RcppAnnoy package.

By default, if X has fewer than 4,096 vertices, the exact nearest neighbors are found. Otherwise, approximate nearest neighbors are used.

You may also pass precalculated nearest neighbor data to this argument. It must be a list consisting of two elements:

"idx": an n_vertices x n_neighbors matrix containing the integer indexes of the nearest neighbors in X. Each vertex is considered to be its own nearest neighbor, i.e. idx[, 1] == 1:n_vertices.
"dist": an n_vertices x n_neighbors matrix containing the distances of the nearest neighbors.

Multiple nearest neighbor data (e.g. from two different precomputed metrics) can be passed as a list containing the nearest neighbor data lists as items. The n_neighbors parameter is ignored when using precomputed nearest neighbor data.
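The expected list format can be illustrated with a small brute-force search in base R (fine for toy data; use FNN or RcppAnnoy for anything larger):

```r
# Build the precomputed nearest neighbor list: an "idx" matrix of
# integer neighbor indices and a "dist" matrix of their distances,
# with each vertex as its own first neighbor
brute_knn <- function(X, k) {
  d <- as.matrix(dist(X))
  idx <- t(apply(d, 1, order))[, 1:k, drop = FALSE]
  dst <- t(apply(d, 1, sort))[, 1:k, drop = FALSE]
  list(idx = idx, dist = dst)
}

set.seed(1)
X <- matrix(rnorm(100 * 4), ncol = 4)
nn <- brute_knn(X, k = 15)
stopifnot(all(nn$idx[, 1] == seq_len(nrow(X))))  # self is first neighbor
```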
n_trees: Number of trees to build when constructing the nearest neighbor index. The more trees specified, the larger the index, but the better the results. Together with search_k, this determines the accuracy of the Annoy nearest neighbor search. Only used if nn_method is "annoy". Sensible values are between 10 and 100.
search_k: Number of nodes to search during the neighbor retrieval. The larger k, the more accurate the results, but the longer the search takes. Together with n_trees, this determines the accuracy of the Annoy nearest neighbor search. Only used if nn_method is "annoy".
n_threads: Number of threads to use (except during stochastic gradient descent). Default is half that recommended by RcppParallel. For nearest neighbor search, only applies if nn_method = "annoy". If n_threads > 1, then the Annoy index will be temporarily written to disk in the location determined by tempfile.
n_sgd_threads: Number of threads to use during stochastic gradient descent. If set to > 1, then results will not be reproducible, even if set.seed is called with a fixed seed before running. Set to "auto" to use the same value as n_threads.
grain_size: Minimum batch size for multithreading. If the number of items to process in a thread falls below this number, then no threads will be used. Used in conjunction with n_threads and n_sgd_threads.
kernel: Type of kernel function used to create the input probabilities. Can be one of "gauss" (the default) or "knn". "gauss" uses the usual Gaussian weighted similarities. "knn" assigns equal probabilities to every edge in the nearest neighbor graph, and zero otherwise, using perplexity nearest neighbors. The n_neighbors parameter is ignored in this case.
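To make the "gauss" kernel concrete, here is a base-R sketch (illustrative, not the package's implementation) of the usual perplexity calibration: for one point, binary-search the Gaussian precision beta until the Shannon entropy of the neighbor probabilities matches log(perplexity), as in t-SNE and LargeVis.

```r
# Calibrate a Gaussian bandwidth for one point so that the entropy of
# its neighbor probabilities matches log(perplexity)
calibrate_p <- function(d2, perplexity, tol = 1e-5, n_iter = 100) {
  target <- log(perplexity)
  lo <- 0; hi <- Inf; beta <- 1
  for (i in seq_len(n_iter)) {
    w <- exp(-d2 * beta)
    p <- w / sum(w)
    h <- -sum(p * log(p + 1e-12))  # Shannon entropy at this bandwidth
    if (abs(h - target) < tol) break
    if (h > target) {              # too flat: sharpen by raising beta
      lo <- beta
      beta <- if (is.finite(hi)) (lo + hi) / 2 else beta * 2
    } else {                       # too sharp: flatten by lowering beta
      hi <- beta
      beta <- (lo + hi) / 2
    }
  }
  p
}

set.seed(2)
d2 <- runif(90, 0.1, 4)^2  # squared distances to 90 hypothetical neighbors
p <- calibrate_p(d2, perplexity = 30)
exp(-sum(p * log(p + 1e-12)))  # effective perplexity, approximately 30
```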
pca: If set to a positive integer value, reduce data to this number of columns using PCA. Not applied if the distance metric is "hamming", or if the dimensions of the data do not exceed the number specified (i.e. the number of rows and columns must both be larger than the value of this parameter for PCA to be applied). If you have more than 100 columns in a data frame or matrix, reducing the number of columns in this way may substantially increase the performance of the nearest neighbor search, at the cost of a potential decrease in accuracy. In many t-SNE applications, a value of 50 is recommended, although there's no guarantee that this is appropriate for all settings.
pca_center: If TRUE, center the columns of X before carrying out PCA. For binary data, it's recommended to set this to FALSE.
pcg_rand: If TRUE, use the PCG random number generator (O'Neill, 2014) during optimization. Otherwise, use the faster (but probably less statistically robust) Tausworthe "taus88" generator. The default is TRUE.
fast_sgd: If TRUE, then the following combination of parameters is set: pcg_rand = FALSE and n_sgd_threads = "auto". The default is FALSE. Setting this to TRUE will speed up the stochastic optimization phase, but give a potentially less accurate embedding, and one which will not be exactly reproducible even with a fixed seed. For visualization, fast_sgd = TRUE will give perfectly good results. For more generic dimensionality reduction, it's safer to leave fast_sgd = FALSE. If fast_sgd = TRUE, then user-supplied values of pcg_rand and n_sgd_threads are ignored.
ret_nn: If TRUE, then in addition to the embedding, also return nearest neighbor data that can be used as input to nn_method to avoid the overhead of repeatedly calculating the nearest neighbors when manipulating unrelated parameters (e.g. min_dist, n_epochs, init). See the "Value" section for the names of the list items. If FALSE, just return the coordinates. Note that the nearest neighbors could be sensitive to data scaling, so be wary of reusing nearest neighbor data if modifying the scale parameter.
tmpdir: Temporary directory to store nearest neighbor indexes during nearest neighbor search. Default is tempdir(). The index is only written to disk if n_threads > 1 and nn_method = "annoy"; otherwise, this parameter is ignored.
verbose: If TRUE, log details to the console.
Value: A matrix of optimized coordinates, or, if ret_nn = TRUE, the nearest neighbor data returned as a list containing a matrix idx with the integer ids of the neighbors, and a matrix dist with the distances. This list can be used as input to the nn_method parameter.
lvish differs from the official LargeVis implementation in the following ways:

Only the nearest-neighbor index search phase is multi-threaded.
Matrix input data is not normalized.
The n_trees parameter cannot be dynamically chosen based on data set size.
Nearest neighbor results are not refined via the neighbor-of-my-neighbor method. The search_k parameter is twice as large as the default to compensate.
Gradient values are clipped to 4.0 rather than 5.0.
Negative edges are generated by uniform sampling of vertices rather than in proportion to their degree ^ 0.75.
The default number of edge samples is much reduced. The default number of epochs, n_epochs, is set to 5000, much larger than for umap, but it may need to be increased further depending on your dataset. Using init = "spectral" can help.
Tang, J., Liu, J., Zhang, M., & Mei, Q. (2016, April). Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web (pp. 287-297). International World Wide Web Conferences Steering Committee. https://arxiv.org/abs/1602.00370
# NOT RUN {
# Default number of epochs is much larger than for UMAP, assumes random
# initialization
# If using a more global initialization, can use fewer epochs
iris_lvish_short <- lvish(iris, perplexity = 50, n_epochs = 200,
init = "pca")
# Use perplexity rather than n_neighbors to control the size of the local
# neighborhood
# 200 epochs may be too small for a random initialization
iris_lvish <- lvish(iris, perplexity = 50, learning_rate = 0.5,
init = "random", n_epochs = 200)
# }