Provides the generic function interestMeasure
and the needed S4 method
to calculate various additional interest measures for existing sets of
itemsets or rules. Definitions and equations can be found in
Hahsler (2015).
interestMeasure(x, measure, transactions = NULL, reuse = TRUE, ...)
x: a set of itemsets or rules.
measure: name or vector of names of the desired interest measures (see Details for available measures). If measure is missing, all available measures are calculated.
transactions: the transaction data set used to mine the associations, or a set of different transactions to calculate interest measures from (note: you need to set reuse = FALSE in the latter case).
reuse: logical indicating if information in the quality slot should be reused for calculating the measures. This speeds up the process significantly, since only very little (or no) transaction counting is necessary if support, confidence and lift are already available. Use reuse = FALSE to force counting (this can be very slow, but is necessary if you use a different set of transactions than was used for mining).
...: further arguments for the measure calculation.
If only one measure is used, the function returns a numeric vector containing the values of the interest measure for each association in the set of associations x.
If more than one measure is specified, the result is a data.frame containing the different measures for each association.
NA is returned for rules/itemsets for which a certain measure is not defined.
For itemsets
allConfidence: defined on itemsets as the minimum confidence of all possible rules generated from the itemset.
Range:
crossSupportRatio: defined on itemsets as the ratio of the support of the least frequent item
to the support of the most frequent item, i.e., min(supp(x_i)) / max(supp(x_i)) over the items x_i in the itemset.
Range:
lift: probability (support) of the itemset over the product of the probabilities
of all items in the itemset, i.e., supp(X) / (supp(x_1) * supp(x_2) * ... * supp(x_n)).
Range:
support: an estimate of the probability P(X) of observing the itemset X in a transaction.
Range:
count: absolute support count of the itemset.
Range:
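As a sanity check of these definitions, itemset support and lift can be computed directly in base R from a small incidence matrix (the matrix and itemset below are made up for illustration; this is not arules code):

```r
# Toy transaction incidence matrix: rows are transactions, columns are items
trans <- matrix(c(1, 1, 0,
                  1, 1, 1,
                  0, 1, 1,
                  1, 0, 0),
                ncol = 3, byrow = TRUE,
                dimnames = list(NULL, c("a", "b", "c")))

# Support of an itemset: fraction of transactions containing all of its items
supp <- function(items)
  mean(apply(trans[, items, drop = FALSE], 1, function(r) all(r == 1)))

itemset <- c("a", "b")
s  <- supp(itemset)                    # supp({a, b}) = 2/4 = 0.5
li <- s / prod(sapply(itemset, supp))  # lift = 0.5 / (0.75 * 0.75)
```

A lift above 1 indicates the items co-occur more often than expected under independence, which is the case for {a, b} here.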
For rules
Defined as
Range:
The chi-squared statistic
to test for independence between the lhs and rhs of the rule.
The critical value of the chi-squared distribution with one degree of freedom (2 x 2 contingency table) at a significance level of 0.05 is 3.84; larger values indicate that the lhs and rhs of the rule are not independent.
Called with significance = TRUE, the p-value of the test for
independence is returned instead of the chi-squared statistic.
For p-values, substitute effects can be tested using
the parameter complements = FALSE.
Range:
The certainty factor is a measure of the variation of the probability that Y is in a transaction when only transactions containing X are considered. An increasing CF means a decreasing probability that Y is not in a transaction that X is in. Negative CFs have a similar interpretation.
Range:
Collective strength (S).
Range:
Rule confidence is an estimate of the conditional probability P(Y | X) of finding the rhs of the rule in transactions that contain the lhs.
Range:
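The relationship between confidence and support can be sketched directly from counts (the counts below are invented for illustration):

```r
# confidence(X -> Y) = supp(X and Y) / supp(X); the total n cancels out
n    <- 1000  # total number of transactions (made up)
n_x  <- 400   # transactions containing the lhs X
n_xy <- 300   # transactions containing both X and Y
conf <- (n_xy / n) / (n_x / n)  # equals n_xy / n_x = 0.75
```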
Defined as
Range:
Defined as
Range:
Absolute support count of the rule.
Range:
coverage: support of the left-hand side of the rule,
i.e., supp(X).
Range:
Confidence confirmed by its negative as
Range:
Confidence reinforced by negatives given by
Range:
Support improved by negatives given by
Range:
Range:
Defined by
Range:
Defined as
Range:
p-value of Fisher's exact test used in the analysis of contingency tables
where sample sizes are small.
By default complementary effects are mined; substitutes can be found
by using the parameter complements = FALSE.
Note that it is equal to hyper-confidence with significance=TRUE
.
Range:
Measures quadratic entropy.
Range:
Adaptation of the lift measure which is more robust for low counts. It is
based on the idea that, under independence, the count of the rule follows a hypergeometric distribution.
Hyper-lift is defined as the ratio of the observed count to the quantile Q_d of this hypergeometric distribution,
where d is the quantile level (default: d = 0.99).
Range:
Confidence level for the observation of too high (or, when significance = TRUE
is used, too low) counts for rules under the hypergeometric model of independence.
By default complementary effects are mined; substitutes can be found
by using the parameter complements = FALSE.
Range:
IR is defined as
Range:
Defined as
Range:
Log likelihood of the right-hand side of the rule, given the left-hand side of the rule.
Range:
The improvement of a rule is
the minimum difference between its confidence and the confidence of any
more general rule (i.e., a rule with the same consequent but one or
more items removed in the LHS). Defined as
Range:
Null-invariant measure defined as
Range:
Measures cross entropy.
Range:
Defined as
Range:
Defined as
Range:
Calculate the null-invariant Kulczynski measure with a preference for skewed patterns.
Range:
Range:
laplace: estimates confidence by increasing each count by 1. This prevents counts of 0, and the estimate decreases with lower support.
Range:
Range:
Defined as
Range:
leverage (PS) is defined as supp(X and Y) - supp(X) * supp(Y), the difference between the observed joint support and the joint support expected under independence.
Range:
Lift quantifies dependence between X and Y by supp(X and Y) / (supp(X) * supp(Y)).
Range:
Null-invariant measure defined as
Range:
Measures the information gain for Y provided by X.
Range:
oddsRatio: the odds of finding X in transactions which contain Y divided by the odds of finding X in transactions which do not contain Y.
Range:
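From the 2 x 2 contingency table of a rule, the odds ratio reduces to a cross-product ratio (the counts here are invented for illustration):

```r
# 2 x 2 contingency table of X versus Y (made-up counts)
n11 <- 300  # transactions with both X and Y
n10 <- 100  # X but not Y
n01 <- 200  # Y but not X
n00 <- 400  # neither X nor Y
odds_ratio <- (n11 * n00) / (n10 * n01)  # (300 * 400) / (100 * 200) = 6
```

An odds ratio of 1 indicates independence between X and Y; values above 1 indicate positive association.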
phi correlation coefficient: equivalent to Pearson's product moment correlation coefficient.
Range:
Range:
RLD evaluates the deviation of the support of the whole rule from the support expected under independence given the supports of the LHS and the RHS. The code was contributed by Silvia Salini.
Range:
Product of support and confidence. Can be seen as rule confidence weighted by support.
Range:
Defined as
Range:
Support is an estimate of the probability P(X and Y) that a transaction contains all items of the rule.
Range:
Defined as
Range:
Defined as
Range:
Defined as
Range:
Hahsler, Michael (2015). A Probabilistic Comparison of Commonly Used Interest Measures for Association Rules. URL: http://michael.hahsler.net/research/association_rules/measures.html.
Agrawal, R., H Mannila, R Srikant, H Toivonen, AI Verkamo (1996). Fast Discovery of Association Rules. Advances in Knowledge Discovery and Data Mining 12 (1), 307--328.
Aze, J. and Y. Kodratoff (2004). Extraction de pepites de connaissances dans les donnees: Une nouvelle approche et une etude de sensibilite au bruit. In Mesures de Qualite pour la fouille de donnees. Revue des Nouvelles Technologies de l'Information, RNTI.
Bernard, Jean-Marc and Charron, Camilo (1996). L'analyse implicative bayesienne, une methode pour l'etude des dependances orientees. II : modele logique sur un tableau de contingence Mathematiques et Sciences Humaines, Volume 135 (1996), p. 5--18.
Bayardo, R. , R. Agrawal, and D. Gunopulos (2000). Constraint-based rule mining in large, dense databases. Data Mining and Knowledge Discovery, 4(2/3):217--240.
Berzal, Fernando, Ignacio Blanco, Daniel Sanchez and Maria-Amparo Vila (2002). Measuring the accuracy and interest of association rules: A new framework. Intelligent Data Analysis 6, 221--235.
Brin, Sergey, Rajeev Motwani, Jeffrey D. Ullman, and Shalom Tsur (1997). Dynamic itemset counting and implication rules for market basket data. In SIGMOD 1997, Proceedings ACM SIGMOD International Conference on Management of Data, pages 255--264, Tucson, Arizona, USA.
Diatta, J., H. Ralambondrainy, and A. Totohasina (2007). Towards a unifying probabilistic implicative normalized quality measure for association rules. In Quality Measures in Data Mining, 237--250, 2007.
Hahsler, Michael and Kurt Hornik (2007). New probabilistic interest measures for association rules. Intelligent Data Analysis, 11(5):437--455.
Hofmann, Heike and Adalbert Wilhelm (2001). Visual comparison of association rules. Computational Statistics, 16(3):399--415.
Kenett, Ron and Silvia Salini (2008). Relative Linkage Disequilibrium: A New measure for association rules. In 8th Industrial Conference on Data Mining ICDM 2008, July 16--18, 2008, Leipzig/Germany.
Kodratoff, Y. (1999). Comparing Machine Learning and Knowledge Discovery in Databases: An Application to Knowledge Discovery in Texts. Lecture Notes on AI (LNAI) - Tutorial series.
Kulczynski, S. (1927). Die Pflanzenassoziationen der Pieninen. Bulletin International de l'Academie Polonaise des Sciences et des Lettres, Classe des Sciences Mathematiques et Naturelles B, 57--203.
Lerman, I.C. (1981). Classification et analyse ordinale des donnees. Paris.
Liu, Bing, Wynne Hsu, and Yiming Ma (1999). Pruning and summarizing the discovered associations. In KDD '99: Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 125--134. ACM Press, 1999.
Ochin, Suresh Kumar, and Nisheeth Joshi (2016). Rule Power Factor: A New Interest Measure in Associative Classification. 6th International Conference On Advances In Computing and Communications, ICACC 2016, 6-8 September 2016, Cochin, India.
Omiecinski, Edward R. (2003). Alternative interest measures for mining associations in databases. IEEE Transactions on Knowledge and Data Engineering, 15(1):57--69, Jan/Feb 2003.
Piatetsky-Shapiro, G. (1991). Discovery, analysis, and presentation of strong rules. In: Knowledge Discovery in Databases, pages 229--248.
Sebag, M. and M. Schoenauer (1988). Generation of rules with certainty and confidence factors from incomplete and incoherent learning bases. In Proceedings of the European Knowledge Acquisition Workshop (EKAW'88), Gesellschaft fuer Mathematik und Datenverarbeitung mbH, 28.1--28.20.
Smyth, Padhraic and Rodney M. Goodman (1991). Rule Induction Using Information Theory. Knowledge Discovery in Databases, 159--176.
Tan, Pang-Ning and Vipin Kumar (2000). Interestingness Measures for Association Patterns: A Perspective. TR 00-036, Department of Computer Science and Engineering University of Minnesota.
Tan, Pang-Ning, Vipin Kumar, and Jaideep Srivastava (2002). Selecting the right interestingness measure for association patterns. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '02), ACM, 32--41.
Tan, Pang-Ning, Vipin Kumar, and Jaideep Srivastava (2004). Selecting the right objective measure for association analysis. Information Systems, 29(4):293--313.
Wu, T., Y. Chen, and J. Han (2010). Re-examination of interestingness measures in pattern mining: A unified framework. Data Mining and Knowledge Discovery, 21(3):371-397, 2010.
Xiong, Hui, Pang-Ning Tan, and Vipin Kumar (2003). Mining strong affinity association patterns in data sets with skewed support distribution. In Bart Goethals and Mohammed J. Zaki, editors, Proceedings of the IEEE International Conference on Data Mining, November 19--22, 2003, Melbourne, Florida, pages 387--394.
library(arules)

data("Income")
rules <- apriori(Income)
## calculate a single measure and add it to the quality slot
quality(rules) <- cbind(quality(rules),
  hyperConfidence = interestMeasure(rules, measure = "hyperConfidence",
    transactions = Income))
inspect(head(rules, by = "hyperConfidence"))
## calculate several measures
m <- interestMeasure(rules, c("confidence", "oddsRatio", "leverage"),
  transactions = Income)
inspect(head(rules))
head(m)
## calculate all available measures for the first 5 rules and show them as a
## table with the measures as rows
t(interestMeasure(head(rules, 5), transactions = Income))
## calculate measures on a different set of transactions (I use a sample here)
## Note: reuse = TRUE (default) would just return the stored support on the
## data set used for mining
newTrans <- sample(Income, 100)
m2 <- interestMeasure(rules, "support", transactions = newTrans, reuse = FALSE)
head(m2)