Computes a divisive hierarchical clustering of the dataset, returning an object of class "diana".
diana(x, diss = inherits(x, "dist"), metric = "euclidean", stand = FALSE,
      stop.at.k = FALSE,
      keep.diss = n < 100, keep.data = !diss, trace.lev = 0)
x: data matrix or data frame, or dissimilarity matrix or object, depending on the value of the diss argument.
In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed.
In case of a dissimilarity matrix, x is typically the output of daisy or dist. Also a vector of length n*(n-1)/2 is allowed (where n is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (NAs) are not allowed.
diss: logical flag: if TRUE (default for dist or dissimilarity objects), then x will be considered as a dissimilarity matrix. If FALSE, then x will be considered as a matrix of observations by variables.
metric: character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are "euclidean" and "manhattan". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. If x is already a dissimilarity matrix, then this argument will be ignored.
stand: logical; if true, the measurements in x are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If x is already a dissimilarity matrix, then this argument will be ignored.
stop.at.k: logical or integer, FALSE by default. Otherwise, it must be an integer \(k\) in \(\{1, 2, \ldots, n\}\), specifying that the diana algorithm should stop early. The non-default behaviour is NOT YET IMPLEMENTED.
keep.diss, keep.data: logicals indicating if the dissimilarities and/or the input data x should be kept in the result. Setting these to FALSE can give much smaller results and hence also save memory allocation time (illustrated in the short example after these argument descriptions).
trace.lev: integer specifying a trace level for printing diagnostics during the algorithm. Default 0 does not print anything; higher values print increasingly more.
The return value is an object of class "diana" representing the clustering; this class has methods for the following generic functions: print, summary, plot.
Further, the class "diana" inherits from "twins". Therefore, the generic function pltree can be used on a diana object, and as.hclust and as.dendrogram methods are available.
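For example (a brief sketch using the agriculture data from the package):

library(cluster)
data(agriculture)
d.agr <- diana(agriculture)
print(d.agr)                # "diana" print method
pltree(d.agr)               # clustering tree, via the "twins" inheritance
hc  <- as.hclust(d.agr)     # hclust object, e.g. for use with cutree()
dnd <- as.dendrogram(d.agr) # via the as.dendrogram.twins() method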
A legitimate diana object is a list with the following components:
order: a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross.
order.lab: a vector similar to order, but containing observation labels instead of observation numbers. This component is only available if the original observations were labelled.
height: a vector with the diameters of the clusters prior to splitting.
dc: the divisive coefficient, measuring the clustering structure of the dataset. For each observation i, denote by \(d(i)\) the diameter of the last cluster to which it belongs (before being split off as a single observation), divided by the diameter of the whole dataset. The dc is the average of all \(1 - d(i)\). It can also be seen as the average width (or the percentage filled) of the banner plot. Because dc grows with the number of observations, this measure should not be used to compare datasets of very different sizes.
merge: an (n-1) by 2 matrix, where n is the number of observations. Row i of merge describes the split at step n-i of the clustering. If a number \(j\) in row r is negative, then the single observation \(|j|\) is split off at stage n-r. If j is positive, then the cluster that will be split at stage n-j (described by row j) is split off at stage n-r. (A short example inspecting these components follows the component descriptions.)
diss: an object of class "dissimilarity", representing the total dissimilarity matrix of the dataset.
data: a matrix containing the original or standardized measurements, depending on the stand option of the function diana. If a dissimilarity matrix was given as input structure, then this component is not available.
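A quick way to inspect these components (a brief sketch; the exact values depend on the data):

library(cluster)
data(agriculture)
d.agr <- diana(agriculture)
d.agr$dc          # divisive coefficient
d.agr$height      # diameters of the clusters prior to splitting
d.agr$merge       # split history; negative entries are single observations
d.agr$order.lab   # observation labels in plotting order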
diana is fully described in chapter 6 of Kaufman and Rousseeuw (1990). It is probably unique in computing a divisive hierarchy, whereas most other software for hierarchical clustering is agglomerative. Moreover, diana provides (a) the divisive coefficient (see diana.object), which measures the amount of clustering structure found; and (b) the banner, a novel graphical display (see plot.diana).
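For instance, the banner can be drawn directly from a fitted object (a brief sketch; see plot.diana for the available plotting options):

library(cluster)
data(votes.repub)
dv <- diana(votes.repub, metric = "manhattan", stand = TRUE)
plot(dv, which.plots = 1)   # 1 = banner, 2 = clustering tree (see plot.diana)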
The diana algorithm constructs a hierarchy of clusterings, starting with one large cluster containing all n observations. Clusters are divided until each cluster contains only a single observation.
At each stage, the cluster with the largest diameter is selected. (The diameter of a cluster is the largest dissimilarity between any two of its observations.)
To divide the selected cluster, the algorithm first looks for its most disparate observation (i.e., the one with the largest average dissimilarity to the other observations of the selected cluster). This observation initiates the "splinter group". In subsequent steps, the algorithm reassigns observations that are closer to the "splinter group" than to the "old party". The result is a division of the selected cluster into two new clusters.
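This splitting step can be sketched in a few lines of R. The split_cluster() helper below is purely illustrative (a hypothetical function written for this description, not the compiled code used inside diana()); D is an ordinary n-by-n dissimilarity matrix and cl holds the indices of the cluster to be split:

split_cluster <- function(D, cl) {
  ## the most disparate observation seeds the "splinter group"
  splinter <- cl[which.max(rowMeans(D[cl, cl, drop = FALSE]))]
  old      <- setdiff(cl, splinter)
  while (length(old) > 1) {
    ## average dissimilarity to the rest of the "old party" minus
    ## average dissimilarity to the splinter group, for each old member
    gain <- vapply(old, function(i)
      mean(D[i, setdiff(old, i)]) - mean(D[i, splinter]),
      numeric(1))
    if (max(gain) <= 0) break     # nobody is closer to the splinter group
    mover    <- old[which.max(gain)]
    splinter <- c(splinter, mover)
    old      <- setdiff(old, mover)
  }
  list(splinter = splinter, old = old)
}

## e.g., the first split of the agriculture data:
## D <- as.matrix(daisy(agriculture)); split_cluster(D, seq_len(nrow(D)))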
See agnes also for background and references; cutree (and as.hclust) for grouping extraction; and daisy, dist, plot.diana, twins.object.
library(cluster)
data(votes.repub)
dv <- diana(votes.repub, metric = "manhattan", stand = TRUE)
print(dv)
plot(dv)
## Cut into 2 groups:
dv2 <- cutree(as.hclust(dv), k = 2)
table(dv2) # 8 and 42 group members
rownames(votes.repub)[dv2 == 1]
## For two groups, does the metric matter ?
dv0 <- diana(votes.repub, stand = TRUE) # default: Euclidean
dv.2 <- cutree(as.hclust(dv0), k = 2)
table(dv2 == dv.2)  ## identical group assignments
str(as.dendrogram(dv0)) # {via as.dendrogram.twins() method}
data(agriculture)
## Plot similar to Figure 8 in ref
## Not run (interactive):
plot(diana(agriculture), ask = TRUE)