| Metric | Description |
|-------------|-------------|
| braycurtis | Bray-Curtis difference; should use proportions |
| canberra | Canberra difference; should use proportions |
| chebyshev | Largest difference in any one dimension, like in chess |
| covariance | You may want to transpose the data before using this |
| euclidean | Multivariate 2-norm |
| equality | The count of exactly equal elements in each row |
| hellinger | Hellinger distance |
| jaccard | Jaccard distance |
| mahalanobis | Euclidean distance after scaling and removing covariance, which you can supply with init.info (see the Mahalanobis sketch below) |
| manhattan | The sum of the differences in each dimension; no diagonal movement allowed |
| minkowski | Arbitrary n-norm, so that init.info = 2 yields "euclidean" and init.info = Inf yields "chebyshev" (but don't do the latter! See the Minkowski sketch below) |
| pearson | Pearson product-moment correlation; you may want to transpose the data |
| procrustes | Doesn't scale or rotate; just treats the vectors as matrices with init.info columns and calculates the total distance between homologous points |
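
As a rough illustration of how the manhattan, euclidean, and chebyshev entries relate to minkowski, here is a minimal NumPy sketch. It is not this library's API, and the vectors are made up; it just computes the arbitrary n-norm directly and shows that p = 1 matches Manhattan, p = 2 matches Euclidean, large p approaches Chebyshev, and a literal infinite exponent produces a nonsense answer, presumably the reason for the warning above.

```python
# Minimal sketch with NumPy only -- not this library's API; data are made up.
import numpy as np

def minkowski(u, v, p):
    """Arbitrary p-norm of the difference vector u - v."""
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)

u = np.array([1.0, 4.0, 2.0])
v = np.array([3.0, 1.0, 2.0])

print(minkowski(u, v, 1))        # 5.0   -> manhattan: |2| + |3| + |0|
print(minkowski(u, v, 2))        # ~3.61 -> euclidean: sqrt(4 + 9)
print(np.max(np.abs(u - v)))     # 3.0   -> chebyshev: largest single-dimension gap
print(minkowski(u, v, 60))       # ~3.0  -> large p approaches chebyshev...
print(minkowski(u, v, np.inf))   # 1.0   -> ...but a literal Inf exponent collapses
                                 #          to inf ** 0, hence "don't do the latter!"
```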
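
Similarly, a hedged sketch of the Mahalanobis entry: the covariance matrix below is invented purely for illustration, standing in for whatever you would supply via init.info. Whitening the vectors with the inverse covariance and then taking the plain Euclidean distance reproduces the textbook Mahalanobis distance sqrt((u - v)' Sigma^{-1} (u - v)).

```python
# Minimal sketch -- the covariance matrix Sigma is hypothetical, standing in
# for whatever you would supply via init.info.
import numpy as np

u = np.array([1.0, 4.0, 2.0])
v = np.array([3.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.1],
                  [0.0, 0.1, 0.5]])

d = u - v
direct = np.sqrt(d @ np.linalg.inv(Sigma) @ d)    # textbook formula

# "Euclidean distance after scaling and removing covariance": whiten with the
# Cholesky factor of the inverse covariance, then take the ordinary 2-norm.
L = np.linalg.cholesky(np.linalg.inv(Sigma))      # inv(Sigma) == L @ L.T
whitened = np.linalg.norm(L.T @ u - L.T @ v)

print(direct, whitened)                           # the two values agree
```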