The confusion matrix yields marginal counts and Recall for each row, and
marginal counts, Precision and class F-measure for each column. The 3x2
subset of cells at the bottom right shows (in this order): the overall
Accuracy, the average Recall, the average Precision, NaN, NaN, and the
overall Macro-F-Measure. The number of classes (expert/reference labeling)
should match, or at least not exceed, the number of clusters. The overall
value of the Macro-F-Measure is an average of the class F-measure values,
hence it is underestimated whenever the number of classes is lower than the
number of clusters.
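As an illustration of how these values fit together, the following base R
sketch (independent of the package, using a made-up square confusion matrix)
reproduces the computation of the per-row Recall, per-column Precision, the
class F-measure, and the overall Accuracy and Macro-F-Measure:

  # made-up confusion matrix: Recall along rows, Precision along columns
  M <- matrix(c(50,  3,  2,
                 4, 40,  6,
                 1,  7, 45), nrow = 3, byrow = TRUE)
  recall    <- diag(M) / rowSums(M)                           # one value per row
  precision <- diag(M) / colSums(M)                           # one value per column
  fmeasure  <- 2 * precision * recall / (precision + recall)  # class F-measure
  accuracy  <- sum(diag(M)) / sum(M)                          # overall Accuracy
  macroF    <- mean(fmeasure)                                 # average of class F-measures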
If obj is a binClstPath_instance and there is a column "lbl" in the obj@pth
slot with an expert labeling, this labeling will be used by default.
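For instance (a minimal sketch assuming the EMbC function names stbc and
cnfm, and the bundled expth sample path, whose data frame includes an "lbl"
column):

  library(EMbC)
  mybcp <- stbc(expth)  # a binClstPath_instance clustering the sample path
  cnfm(mybcp)           # uses the "lbl" column of mybcp@pth by default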
If obj is a binClstStck instance and there is a column "lbl" with an expert
labeling in the pth slot of each path in the stack, this labeling will be
used to compute the confusion matrix for the whole stack.
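A sketch of the stack case, assuming that stbc also accepts a list of paths
and that path1 and path2 are hypothetical data frames in the same format as
expth, each with its own "lbl" column:

  mybcs <- stbc(list(path1, path2))  # a binClstStck instance (hypothetical paths)
  cnfm(mybcs)                        # confusion matrix for the whole stack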
If obj and ref are both binClst_instance objects (e.g. smoothed versus
non-smoothed clusterings of the same data), the confusion matrix compares
the two labelings.
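For example, to compare a non-smoothed against a smoothed clustering of the
same path (a sketch assuming the package's smth smoothing function):

  cnfm(mybcp, smth(mybcp))  # non-smoothed labeling versus its smoothed version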