`agnes` is fully described in chapter 5 of Kaufman and Rousseeuw (1990).
Compared to other agglomerative clustering methods such as `hclust`,
`agnes` has the following features: (a) it yields the
agglomerative coefficient (see `agnes.object`)
which measures the amount of clustering structure found; and (b)
apart from the usual tree it also provides the banner, a novel
graphical display (see `plot.agnes`).

The `agnes`-algorithm constructs a hierarchy of clusterings.
At first, each observation is a small cluster by itself. Clusters are
merged until only one large cluster remains which contains all the
observations. At each stage the two *nearest* clusters are combined
to form one larger cluster.
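The scheme above can be sketched in a few lines. This is a naive, language-agnostic illustration in Python (not the `agnes` C code, which is far more efficient and also records the merge heights used for the banner): start with singleton clusters and repeatedly merge the nearest pair.

```python
import itertools

def agglomerate(points, dist, linkage=min):
    """Naive sketch of agglomerative clustering: each observation starts as
    its own cluster; the two *nearest* clusters are merged until one remains.
    `linkage` combines the point-pair dissimilarities of two clusters
    (min = single linkage, max = complete, etc.)."""
    clusters = [(p,) for p in points]
    merges = []  # record which pair of clusters was merged at each stage
    while len(clusters) > 1:
        # find the pair of cluster indices with the smallest linkage distance
        i, j = min(
            itertools.combinations(range(len(clusters)), 2),
            key=lambda ij: linkage(
                dist(p, q) for p in clusters[ij[0]] for q in clusters[ij[1]]
            ),
        )
        merges.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

# Toy example: three points on a line; 0 and 1 merge first, then 5 joins.
merges = agglomerate([0, 1, 5], dist=lambda a, b: abs(a - b))
print(merges)  # [((0,), (1,)), ((5,), (0, 1))]
```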

For `method="average"`, the distance between two clusters is the
average of the dissimilarities between the points in one cluster and the
points in the other cluster.

In `method="single"`, we use the smallest dissimilarity between a
point in the first cluster and a point in the second cluster (nearest
neighbor method).

When `method="complete"`, we use the largest dissimilarity
between a point in the first cluster and a point in the second cluster
(furthest neighbor method).
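The three classical rules differ only in how the point-pair dissimilarities between two clusters are combined. A small illustrative sketch (in Python, not the `agnes` internals):

```python
def cluster_distance(dissims, method):
    """dissims: the dissimilarities d(p, q) for every point p in the first
    cluster and every point q in the second cluster."""
    if method == "average":
        return sum(dissims) / len(dissims)   # group average (UPGMA)
    if method == "single":
        return min(dissims)                  # nearest neighbor
    if method == "complete":
        return max(dissims)                  # furthest neighbor
    raise ValueError(method)

d = [2.0, 5.0, 3.0, 6.0]  # toy point-pair dissimilarities between two clusters
print(cluster_distance(d, "average"))   # 4.0
print(cluster_distance(d, "single"))    # 2.0
print(cluster_distance(d, "complete"))  # 6.0
```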

The `method = "flexible"` allows (and requires) more details:
The Lance-Williams formula specifies how dissimilarities are
computed when clusters are agglomerated (equation (32) in K&R(1990),
p.237). If clusters \(C_1\) and \(C_2\) are agglomerated into a
new cluster, the dissimilarity between their union and another
cluster \(Q\) is given by
$$
D(C_1 \cup C_2, Q) = \alpha_1 * D(C_1, Q) + \alpha_2 * D(C_2, Q) +
\beta * D(C_1,C_2) + \gamma * |D(C_1, Q) - D(C_2, Q)|,
$$
where the four coefficients \((\alpha_1, \alpha_2, \beta, \gamma)\)
are specified by the vector `par.method`, either directly as a vector of
length 4, or (more conveniently) if `par.method` is of length 1,
say \(= \alpha\), `par.method` is extended to
give the “Flexible Strategy” (K&R(1990), p.236 f) with
Lance-Williams coefficients \((\alpha_1 = \alpha_2 = \alpha, \beta =
1 - 2\alpha, \gamma=0)\).
Also, if `length(par.method) == 3`, \(\gamma = 0\) is set.
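The update formula and the extension of `par.method` can be sketched directly. The following Python snippet is an illustrative re-implementation of the rules just stated, not the `cluster` C code:

```python
def lance_williams(d1q, d2q, d12, a1, a2, beta, gamma):
    """D(C1 ∪ C2, Q) per the Lance-Williams update formula above."""
    return a1 * d1q + a2 * d2q + beta * d12 + gamma * abs(d1q - d2q)

def flexible_coefs(par_method):
    """Extend par.method as described for method = "flexible":
    length 1 (alpha): "Flexible Strategy" (alpha, alpha, 1 - 2*alpha, 0);
    length 3: gamma = 0 is appended; length 4: used as-is."""
    p = list(par_method)
    if len(p) == 1:
        a = p[0]
        return (a, a, 1 - 2 * a, 0.0)
    if len(p) == 3:
        return (p[0], p[1], p[2], 0.0)
    if len(p) == 4:
        return tuple(p)
    raise ValueError("par.method must have length 1, 3 or 4")

print(flexible_coefs([0.5]))             # (0.5, 0.5, 0.0, 0.0)
print(lance_williams(2.0, 7.0, 3.0, *flexible_coefs([0.5])))  # 4.5
```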

**Care** and expertise are probably needed when using `method = "flexible"`,
particularly when `par.method` is specified with length greater
than one. Since cluster version 2.0, choices
leading to invalid `merge` structures now signal an error (from
the C code already).
The *weighted average* (`method="weighted"`) is the same as
`method="flexible", par.method = 0.5`. Further,
`method= "single"` is equivalent to `method="flexible", par.method = c(.5,.5,0,-.5)`, and
`method="complete"` is equivalent to `method="flexible", par.method = c(.5,.5,0,+.5)`.
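These last two equivalences can be verified numerically: with Lance-Williams coefficients \((.5, .5, 0, -.5)\) the update reduces to \(\min\) of the two distances (single linkage), and with \((.5, .5, 0, +.5)\) to \(\max\) (complete linkage). A self-contained illustrative check in Python:

```python
def lw(d1q, d2q, d12, a1, a2, beta, gamma):
    """Lance-Williams update for D(C1 ∪ C2, Q)."""
    return a1 * d1q + a2 * d2q + beta * d12 + gamma * abs(d1q - d2q)

# (d1 + d2)/2 - |d1 - d2|/2 == min(d1, d2); with +|.|/2 it is max(d1, d2).
for d1q, d2q, d12 in [(2.0, 7.0, 3.0), (5.0, 1.0, 4.0)]:
    assert lw(d1q, d2q, d12, .5, .5, 0, -.5) == min(d1q, d2q)  # single
    assert lw(d1q, d2q, d12, .5, .5, 0, +.5) == max(d1q, d2q)  # complete
print("single/complete equivalences hold")
```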

The `method = "gaverage"` is a generalization of `"average"`, aka
“flexible UPGMA” method, and is (a generalization of the approach)
detailed in Belbin et al. (1992). As `"flexible"`, it uses the
Lance-Williams formula above for dissimilarity updating, but with
\(\alpha_1\) and \(\alpha_2\) not constant, but *proportional* to
the *sizes* \(n_1\) and \(n_2\) of the clusters \(C_1\) and
\(C_2\) respectively, i.e.,
$$\alpha_j = \alpha'_j \frac{n_j}{n_1+n_2},$$
where \(\alpha'_1\), \(\alpha'_2\) are determined from `par.method`,
either directly as \((\alpha'_1, \alpha'_2, \beta, \gamma)\) or
\((\alpha'_1, \alpha'_2, \beta)\) with \(\gamma = 0\), or (less flexibly,
but more conveniently) as follows:

Belbin et al. proposed “flexible beta”, i.e., the user would only
specify \(\beta\) (as `par.method`), sensibly in
$$-1 \leq \beta < 1,$$
and \(\beta\) determines \(\alpha'_1\) and \(\alpha'_2\) as
$$\alpha'_j = 1 - \beta,$$ and \(\gamma = 0\).

This \(\beta\) may be specified by `par.method` (as a length 1 vector),
and if `par.method` is not specified, a default value of -0.1 is used,
as Belbin et al. recommend taking a \(\beta\) value around -0.1 as a general
agglomerative hierarchical clustering strategy.
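Putting the rules for `"gaverage"` together, the effective Lance-Williams coefficients depend on the cluster sizes. An illustrative Python re-implementation (not the `cluster` C code) of how `par.method` is turned into coefficients:

```python
def gaverage_coefs(n1, n2, par_method=(-0.1,)):
    """Coefficients for method = "gaverage" with cluster sizes n1, n2.
    Length-1 par.method is the "flexible beta" form (default beta = -0.1):
    alpha'_1 = alpha'_2 = 1 - beta, gamma = 0."""
    p = list(par_method)
    if len(p) == 1:
        beta = p[0]
        a1p = a2p = 1.0 - beta
        gamma = 0.0
    elif len(p) == 3:
        a1p, a2p, beta = p
        gamma = 0.0
    else:
        a1p, a2p, beta, gamma = p
    n = n1 + n2
    # alpha_j = alpha'_j * n_j / (n1 + n2): proportional to cluster size
    return (a1p * n1 / n, a2p * n2 / n, beta, gamma)

print(gaverage_coefs(3, 5, (0.0,)))  # (0.375, 0.625, 0.0, 0.0)
```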

Note that `method = "gaverage", par.method = 0` (or
`par.method = c(1,1,0,0)`) is equivalent to the `agnes()`
default method `"average"`.
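This equivalence is easy to check by hand: with \(\beta = 0\) and \(\alpha'_j = 1\), the update is the size-weighted mean of \(D(C_1, Q)\) and \(D(C_2, Q)\), which is exactly the group-average (UPGMA) rule. A self-contained numeric illustration (not `agnes` itself):

```python
# Cluster sizes and the average dissimilarities from C1 and C2 to cluster Q.
n1, n2 = 3, 5
d1q, d2q = 2.0, 6.0

# gaverage with par.method = 0: alpha_j = n_j/(n1+n2), beta = gamma = 0.
a1, a2 = n1 / (n1 + n2), n2 / (n1 + n2)
updated = a1 * d1q + a2 * d2q

# Group-average (UPGMA) update: size-weighted mean of the two distances.
upgma = (n1 * d1q + n2 * d2q) / (n1 + n2)

assert updated == upgma
print(updated)  # 4.5
```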