"saturationFilter"(formula, data, ...)
"saturationFilter"(x, noiseThreshold = NULL, classColumn = ncol(x), ...)
"consensusSF"(formula, data, ...)
"consensusSF"(x, nfolds = 10, consensusLevel = nfolds - 1, noiseThreshold = NULL, classColumn = ncol(x), ...)
"classifSF"(formula, data, ...)
"classifSF"(x, nfolds = 10, noiseThreshold = NULL, classColumn = ncol(x), ...)NULL, the
threshold is appropriately chosen according to the number of training instances.consensusSF and classifSF, number of folds
in which the dataset is split.consensusSF, it sets the (minimum) number of 'noisy
votes' an instance must get in order to be removed. By default, the nfolds-1 filters built
over each instance must label it as noise.filter, which is a list with seven components:
cleanData is a data frame containing the filtered dataset.
remIdx is a vector of integers indicating the indexes for
removed instances (i.e. their row number with respect to the original data frame).
repIdx is a vector of integers indicating the indexes for
repaired/relabelled instances (i.e. their row number with respect to the original data frame).
repLab is a factor containing the new labels for repaired instances.
parameters is a list containing the argument values.
call contains the original call to the filter.
extraInf is a character that includes additional interesting
information not covered by previous items.
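As a hedged illustration of the return value and of the default (non-formula) interface, the sketch below applies saturationFilter to iris and inspects the components listed above. It assumes these functions come from the NoiseFiltersR package; like the example at the end of this page, it is slow to run because of the saturation procedure.

library(NoiseFiltersR)   # assumed package providing these filters
data(iris)

out <- saturationFilter(iris, classColumn = 5)   # default interface; the class factor is column 5

str(out$cleanData)   # filtered dataset
out$remIdx           # row numbers of removed instances
out$repIdx           # row numbers of repaired/relabelled instances
out$repLab           # new labels for repaired instances
out$parameters       # argument values used
out$call             # the original call to the filter
out$extraInf         # additional information, if any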
Details

saturationFilter removes those instances whose elimination most reduces the CLCH (Complexity of the Least Complex Hypotheses) of the training dataset. The full method can be looked up in (Gamberger et al., 1999), and the previous step of literals extraction is detailed in (Gamberger et al., 1996).

consensusSF splits the dataset into nfolds folds, and applies
saturationFilter to every combination of nfolds-1 folds. Those instances
with (at least) consensusLevel 'noisy votes' are removed.
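The voting scheme can be made concrete with a short sketch. The code below is not the package's internal implementation: consensus_sketch is a hypothetical name, and baseFilter is a stand-in for the saturation filter, assumed to return the row indices (within the subset it receives) that it flags as noisy.

# Schematic consensus scheme: each instance is judged by the nfolds-1
# filters built on fold combinations that contain it; it is removed if it
# collects at least 'consensusLevel' noisy votes.
consensus_sketch <- function(data, baseFilter, nfolds = 10,
                             consensusLevel = nfolds - 1) {
  n <- nrow(data)
  folds <- sample(rep(seq_len(nfolds), length.out = n))  # random fold assignment
  votes <- integer(n)
  for (k in seq_len(nfolds)) {
    inTrain <- which(folds != k)               # union of the other nfolds-1 folds
    noisyLocal <- baseFilter(data[inTrain, ])  # indices within data[inTrain, ]
    noisyGlobal <- inTrain[noisyLocal]         # map back to original row numbers
    votes[noisyGlobal] <- votes[noisyGlobal] + 1
  }
  removed <- which(votes >= consensusLevel)
  list(cleanData = data[setdiff(seq_len(n), removed), ], remIdx = removed)
}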
classifSF combines saturationFilter with an nfolds-fold cross-validation
scheme (the latter in the spirit of filters such as EF, CVCF).
Namely, the dataset is split into nfolds folds and, for every combination
of nfolds-1 folds, saturationFilter is applied and a classifier
(we implement a standard C4.5 tree) is built. Instances
from the excluded fold are removed according to this classifier.
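Again as a hedged sketch rather than the package's code, this scheme can be outlined as follows. classif_sketch is a hypothetical name, baseFilter stands in for the saturation filter as above, an rpart tree replaces the C4.5 tree mentioned above, and the excluded-fold instances that the tree misclassifies are the ones treated as noise (one reading of "removed according to this classifier").

library(rpart)  # stand-in decision tree; the filter itself builds a C4.5 tree

classif_sketch <- function(data, classColumn, baseFilter, nfolds = 10) {
  n <- nrow(data)
  folds <- sample(rep(seq_len(nfolds), length.out = n))
  classVar <- names(data)[classColumn]
  form <- as.formula(paste(classVar, "~ ."))
  removed <- integer(0)
  for (k in seq_len(nfolds)) {
    trainIdx <- which(folds != k)
    testIdx  <- which(folds == k)
    # Saturation-filter the nfolds-1 training folds, then fit a tree on the result
    noisyLocal <- baseFilter(data[trainIdx, ])
    keepIdx <- trainIdx[setdiff(seq_along(trainIdx), noisyLocal)]
    tree <- rpart(form, data = data[keepIdx, ], method = "class")
    # Instances of the excluded fold misclassified by the tree are flagged as noise
    pred <- predict(tree, newdata = data[testIdx, ], type = "class")
    removed <- c(removed, testIdx[pred != data[[classVar]][testIdx]])
  }
  list(cleanData = data[setdiff(seq_len(n), removed), ], remIdx = sort(removed))
}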
References

Gamberger D., Lavrac N., Dzeroski S. (1996, January): Noise elimination in inductive concept learning: A case study in medical diagnosis. In Algorithmic Learning Theory (pp. 199-212). Springer Berlin Heidelberg.
Gamberger D., Lavrac N. (1997): Conditions for Occam's razor applicability and noise elimination (pp. 108-123). Springer Berlin Heidelberg.
Examples

# Next example is not run because the saturation procedure is time-consuming.
## Not run:
# data(iris)
# out1 <- saturationFilter(Species~., data = iris)
# out2 <- consensusSF(Species~., data = iris)
# out3 <- classifSF(Species~., data = iris)
# print(out1)
# print(out2)
# print(out3)
## End(Not run)