Usage

hclust(d, method = "complete", members = NULL)

plot.hclust(x, labels = NULL, hang = 0.1,
            axes = TRUE, frame.plot = FALSE, ann = TRUE,
            main = "Cluster Dendrogram",
            sub = NULL, xlab = NULL, ylab = "Height", ...)

plclust(tree, hang = 0.1, unit = FALSE, level = FALSE, hmin = 0,
        square = TRUE, labels = NULL, plot. = TRUE,
        axes = TRUE, frame.plot = FALSE, ann = TRUE,
        main = "", sub = NULL, xlab = NULL, ylab = "Height")
Value

An object of class hclust describing the tree produced by the clustering process. Its components include:

merge: an $n-1$ by 2 matrix. Row $i$ of merge describes the merging of clusters at step $i$ of the clustering. If an element $j$ in the row is negative, then observation $-j$ was merged at this stage. If $j$ is positive then the merge was with the cluster formed at the (earlier) stage $j$ of the algorithm. Thus negative entries in merge indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons.

height: a set of $n-1$ real values, the value of the criterion associated with the clustering method for the particular agglomeration.

order: a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix merge will not have crossings of the branches.

dist.method: the distance that has been used to create d (only returned if the distance object has a "method" attribute).
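For instance, the following minimal sketch (using the USArrests data that also appears in the examples below) inspects these components directly:

hc <- hclust(dist(USArrests))   # complete linkage by default
head(hc$merge)    # row i: the two clusters joined at step i;
                  # negative entries are singleton observations
head(hc$height)   # criterion value at each of the n-1 merges
hc$order          # leaf permutation used when plotting the dendrogram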
There are print and plot methods for hclust objects. The plclust() function is basically the same as the plot method, plot.hclust, and is provided primarily for backward compatibility with S-PLUS. Its extra arguments are not yet implemented.

Details

A number of different clustering methods are provided. Ward's minimum variance method aims at finding compact, spherical clusters. The complete linkage method finds similar clusters. The single linkage method (which is closely related to the minimal spanning tree) adopts a "friends of friends" clustering strategy. The other methods can be regarded as aiming for clusters with characteristics somewhere between the single and complete link methods.
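To see how the choice of linkage changes the result, this short sketch (illustrative only) compares single and complete linkage on the same dissimilarities:

d <- dist(USArrests)
hc_single   <- hclust(d, method = "single")    # "friends of friends"
hc_complete <- hclust(d, method = "complete")  # compact, similar clusters

## Cutting each tree into 5 groups illustrates single linkage's
## tendency to chain: one large cluster plus a few stragglers.
table(cutree(hc_single, k = 5))
table(cutree(hc_complete, k = 5))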
If members != NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons, and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be "started in the middle of the dendrogram", e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclust itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means.
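A short sketch of that last point, mirroring the examples below: the dissimilarity object handed to the restarted fit is just the squared Euclidean distance between the cluster means.

hc   <- hclust(dist(USArrests)^2, method = "cen")
memb <- cutree(hc, k = 10)
cent <- t(sapply(1:10, function(k)
  colMeans(USArrests[memb == k, , drop = FALSE])))
d10  <- dist(cent)^2   # between-cluster dissimilarities; together with
                       # members = table(memb) this restarts hclust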
In hierarchical cluster displays, a decision is needed at each merge to
specify which subtree should go on the left and which on the right.
Since, for $n$ observations, there are $n-1$ merges,
there are $2^{(n-1)}$ possible orderings for the leaves
in a cluster tree, or dendrogram.
The algorithm used in hclust
is to order the subtree so that
the tighter cluster is on the left (the last, i.e. most recent,
merge of the left subtree is at a lower value than the last
merge of the right subtree).
Single observations are the tightest clusters possible,
and merges involving two observations place them in order by their
observation sequence number.
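A minimal sketch of this ordering rule, on toy 1-D data chosen for illustration:

## Two pairs: {0, 1} merges at height 1, {10, 10.5} at height 0.5.
x  <- c(a = 0, b = 1, c = 10, d = 10.5)
hc <- hclust(dist(x))
hc$merge    # (-3,-4) first, then (-1,-2), then the two pairs
hc$height   # 0.5 1.0 10.5
hc$order    # 3 4 1 2: the tighter pair (c, d) is drawn on the left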
More details on the agglomeration methods, for two groups $A$ and $B$:

ward: the dissimilarity between two clusters is the increase in the total within-cluster sum of squares caused by merging them; it tends to produce compact, spherical clusters.

single: $\min \{ d(a,b) : a \in A, b \in B \}$, the nearest-neighbour distance between the groups.

complete: $\max \{ d(a,b) : a \in A, b \in B \}$, the furthest-neighbour distance between the groups.

average: the mean of all between-group dissimilarities, $\frac{1}{|A| |B|} \sum_{a \in A} \sum_{b \in B} d(a,b)$ (UPGMA).

mcquitty: after merging $A$ and $B$, the dissimilarity to any other group $C$ is the simple average $(d(A,C) + d(B,C))/2$ (WPGMA).

centroid: the distance between the group centroids (means); intended for squared Euclidean dissimilarities (UPGMC).

median: like centroid, but the centroid of a merged group is taken as the midpoint of the two old centroids, regardless of group sizes; also intended for squared Euclidean dissimilarities (WPGMC).
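As a sanity check of the single linkage definition, the height of the final merge can be recomputed by hand (a sketch; cutree recovers the two groups joined last):

d  <- dist(USArrests)
hc <- hclust(d, method = "single")
g  <- cutree(hc, k = 2)      # the two groups joined by the last merge
dm <- as.matrix(d)
min(dm[g == 1, g == 2])      # equals max(hc$height)
max(hc$height)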
References

Hartigan, J. A. (1975). Clustering Algorithms. New York: Wiley.

Sneath, P. H. A. and Sokal, R. R. (1973). Numerical Taxonomy. San Francisco: Freeman.

Anderberg, M. R. (1973). Cluster Analysis for Applications. New York: Academic Press.

Gordon, A. D. (1999). Classification. Second edition. London: Chapman and Hall / CRC.

Murtagh, F. (1985). Multidimensional Clustering Algorithms. COMPSTAT Lectures 4. Wuerzburg: Physica-Verlag (for algorithmic details of the algorithms used).
See Also

hclust, kmeans.

Examples

library(amap)
data(USArrests)
hc <- hclust(dist(USArrests), "ave")
plot(hc)
plot(hc, hang = -1)
## Do the same with centroid clustering and squared Euclidean distance,
## cut the tree into ten clusters and reconstruct the upper part of the
## tree from the cluster centers.
hc <- hclust(dist(USArrests)^2, "cen")
memb <- cutree(hc, k = 10)
cent <- NULL
for(k in 1:10) {
  cent <- rbind(cent, colMeans(USArrests[memb == k, , drop = FALSE]))
}
hc1 <- hclust(dist(cent)^2, method = "cen", members = table(memb))
opar <- par(mfrow = c(1, 2))
plot(hc, labels = FALSE, hang = -1, main = "Original Tree")
plot(hc1, labels = FALSE, hang = -1, main = "Re-start from 10 clusters")
par(opar)