amap (version 0.5-1)

hcluster: Hierarchical Clustering

Description

Hierarchical cluster analysis.

Usage

hcluster(x, method = "euclidean", diag = FALSE, upper = FALSE,
         link = "complete", members = NULL)

Arguments

x: a numeric matrix of data, or an object that can be coerced to such a matrix (such as a numeric vector or a data frame with all numeric columns).

method: the distance measure to be used; one of those accepted by Dist, e.g. "euclidean", "maximum", "manhattan", "canberra", "binary", "pearson", "correlation" or "spearman".

diag: logical value indicating whether the diagonal of the distance matrix should be printed by print.dist.

upper: logical value indicating whether the upper triangle of the distance matrix should be printed by print.dist.

link: the agglomeration method to be used; one of those accepted by hclust, e.g. "ward", "single", "complete", "average", "mcquitty", "median" or "centroid".

members: NULL or a vector with length the number of observations; see the documentation of hclust.

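For instance, the distance measure and the linkage can be varied independently; a minimal sketch (assuming the "pearson" measure is available, as listed for Dist):

library(amap)
data(USArrests)
## Correlation-based distance between observations, average linkage.
## "pearson" and "average" are illustrative choices, not the defaults.
hc <- hcluster(USArrests, method = "pearson", link = "average")
plot(hc, hang = -1)
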
Value

An object of class hclust which describes the tree produced by the clustering process. The object is a list with components:

merge: an $n-1$ by 2 matrix. Row $i$ of merge describes the merging of clusters at step $i$ of the clustering. If an element $j$ in the row is negative, then observation $-j$ was merged at this stage. If $j$ is positive then the merge was with the cluster formed at the (earlier) stage $j$ of the algorithm. Thus negative entries in merge indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons.

height: a set of $n-1$ non-decreasing real values. The clustering height: that is, the value of the criterion associated with the clustering method for the particular agglomeration.

order: a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix merge will not have crossings of the branches.

labels: labels for each of the objects being clustered.

call: the call which produced the result.

method: the cluster method that has been used.

dist.method: the distance that has been used to create d (only returned if the distance object has a "method" attribute).

There is a print and a plot method for hclust objects. The plclust() function is basically the same as the plot method, plot.hclust, primarily for back compatibility with S-PLUS. Its extra arguments are not yet implemented.
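These components can be inspected directly on a fitted object; a minimal sketch using the USArrests data from the Examples below:

library(amap)
data(USArrests)
hc <- hcluster(USArrests, link = "complete")
hc$merge[1:3, ]  # first three agglomeration steps (negative entries are singletons)
hc$height[1:3]   # criterion value at each of those merges
head(hc$order)   # permutation of observations used for plotting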

Details

This function is a mix of function hclust and function dist: hcluster(x, method = "euclidean", link = "complete") is equivalent to hclust(dist(x, method = "euclidean"), method = "complete"). It uses about half the memory, as it does not store the distance matrix.
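This equivalence can be checked numerically; a minimal sketch (tie-breaking between equal distances could in principle differ, but the two trees should agree on this data):

library(amap)
data(USArrests)
hc1 <- hcluster(USArrests, method = "euclidean", link = "complete")
hc2 <- hclust(dist(USArrests, method = "euclidean"), method = "complete")
all.equal(hc1$merge, hc2$merge)    # same agglomeration sequence
all.equal(hc1$height, hc2$height)  # same merge heights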

For more details, see documentation of hclust and dist.

See Also

Dist, hclusterpar, hclust, kmeans.

Examples

data(USArrests)
hc <- hcluster(USArrests, link = "ave")
plot(hc)
plot(hc, hang = -1)

## Do the same with centroid clustering and squared Euclidean distance,
## cut the tree into ten clusters and reconstruct the upper part of the
## tree from the cluster centers.
hc <- hclust(dist(USArrests)^2, "cen")
memb <- cutree(hc, k = 10)
cent <- NULL
for(k in 1:10){
  cent <- rbind(cent, colMeans(USArrests[memb == k, , drop = FALSE]))
}
hc1 <- hclust(dist(cent)^2, method = "cen", members = table(memb))
opar <- par(mfrow = c(1, 2))
plot(hc,  labels = FALSE, hang = -1, main = "Original Tree")
plot(hc1, labels = FALSE, hang = -1, main = "Re-start from 10 clusters")
par(opar)
