In this implementation, we provide several attribute subset quality measures which can be
passed to the algorithm via the parameter qualityF. These measures guide the computations
in the search for a decision/approximated reduct: they are used to assess the amount of
information gained after the addition of an attribute. For example, X.entropy corresponds
to the information gain measure.
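As a minimal sketch (assuming this text documents the FS.greedy.heuristic.reduct.RST
function of the RoughSets package, and that the example decision table bundled as
RoughSetData\$hiring.dt is available), a greedy reduct computation guided by the
information gain measure could look as follows:

\begin{verbatim}
library(RoughSets)

## a small example decision table shipped with the package
## (assumed here to be available as RoughSetData$hiring.dt)
data(RoughSetData)
decision.table <- RoughSetData$hiring.dt

## greedy search for a decision reduct, guided by the
## information gain measure (qualityF = X.entropy)
reduct <- FS.greedy.heuristic.reduct.RST(decision.table,
                                         qualityF = X.entropy)
print(reduct)
\end{verbatim}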
Additionally, this function can use the value of the epsilon parameter in order to compute
$\epsilon$-approximate reducts. An $\epsilon$-approximate reduct can be defined as an
irreducible subset $B$ of the attribute set $A$, such that:
$Quality_{\mathcal{A}}(B) \ge (1 - \epsilon)Quality_{\mathcal{A}}(A)$,
where $Quality_{\mathcal{A}}(B)$ is the value of a quality measure (see the possible values
of the parameter qualityF) for an attribute subset $B$ in the decision table $\mathcal{A}$,
and $\epsilon$ is a numeric value between 0 and 1 expressing the approximation threshold.
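For instance, with $\epsilon = 0.05$ the search accepts any irreducible attribute subset
that retains at least 95\% of the quality of the full attribute set. A sketch of such a
call (under the same assumptions about the function interface as above) might be:

\begin{verbatim}
library(RoughSets)
data(RoughSetData)
decision.table <- RoughSetData$hiring.dt

## approximate reduct: accept a subset preserving at least
## 95% of the quality of the full attribute set
approx.reduct <- FS.greedy.heuristic.reduct.RST(decision.table,
                                                qualityF = X.entropy,
                                                epsilon = 0.05)
\end{verbatim}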
Comprehensive explanations of these topics can be found in the literature, for example in
Janusz and Stawicki (2011), Slezak (2002), and Wroblewski (2001), which serve as the
references for this function.
Finally, this implementation makes it possible to bound the computational complexity of the
greedy search for decision reducts by setting the value of the parameter nAttrs. If this
parameter is set to a positive integer, a Monte Carlo method of selecting candidate
attributes is used in each iteration of the algorithm: instead of evaluating all remaining
attributes, only a randomly chosen sample of them is considered, which is particularly
useful for data with a large number of attributes.
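The following sketch illustrates this option under the same assumptions as above; the
value 2 is arbitrary for this small example table, while in practice the option targets
much wider data:

\begin{verbatim}
library(RoughSets)
data(RoughSetData)
decision.table <- RoughSetData$hiring.dt

## Monte Carlo candidate selection: each iteration of the greedy
## search evaluates only a random sample of nAttrs attributes
mc.reduct <- FS.greedy.heuristic.reduct.RST(decision.table,
                                            qualityF = X.entropy,
                                            nAttrs = 2)
\end{verbatim}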