Calculate the lexical diversity or complexity of text(s).
textstat_lexdiv(x, measure = c("all", "TTR", "C", "R", "CTTR", "U", "S",
  "K", "D", "Vm", "Maas"), log.base = 10, ...)
x: an input dfm
measure: a character vector defining the measure(s) to calculate
log.base: a numeric value defining the base of the logarithm (for measures using logarithms)
...: not used
textstat_lexdiv returns a data.frame of documents and
their lexical diversity scores.
textstat_lexdiv calculates a variety of proposed indices for
lexical diversity. In the following formulas, \(N\) refers to the total
number of tokens, \(V\) to the number of types, and \(f_v(i, N)\) to the
number of types occurring \(i\) times in a sample of length \(N\).
"TTR":The ordinary Type-Token Ratio: $$TTR = \frac{V}{N}$$
"C":Herdan's C (Herdan, 1960, as cited in Tweedie & Baayen, 1998; sometimes referred to as LogTTR): $$C = \frac{\log{V}}{\log{N}}$$
"R":Guiraud's Root TTR (Guiraud, 1954, as cited in Tweedie & Baayen, 1998): $$R = \frac{V}{\sqrt{N}}$$
"CTTR":Carroll's Corrected TTR: $$CTTR = \frac{V}{\sqrt{2N}}$$
"U":Dugast's Uber Index (Dugast, 1978, as cited in Tweedie & Baayen, 1998): $$U = \frac{(\log{N})^2}{\log{N} - \log{V}}$$
"S":Summer's index: $$S = \frac{\log{\log{V}}}{\log{\log{N}}}$$
"K":Yule's K (Yule, 1944, as presented in Tweedie & Baayen, 1998, Eq. 16) is calculated by: $$K = 10^4 \times \left[ -\frac{1}{N} + \sum_{i=1}^{V} f_v(i, N) \left( \frac{i}{N} \right)^2 \right] $$
"D":Simpson's D (Simpson 1949, as presented in Tweedie & Baayen, 1998, Eq. 17) is calculated by: $$D = \sum_{i=1}^{V} f_v(i, N) \frac{i}{N} \frac{i-1}{N-1}$$
"Vm":Herdan's \(V_m\) (Herdan 1955, as presented in Tweedie & Baayen, 1998, Eq. 18) is calculated by: $$V_m = \sqrt{ \sum_{i=1}^{V} f_v(i, N) (i/N)^2 - \frac{1}{V} }$$
"Maas":Maas' indices (\(a\), \(\log{V_0}\) & \(\log{}_{e}{V_0}\)): $$a^2 = \frac{\log{N} - \log{V}}{(\log{N})^2}$$ $$\log{V_0} = \frac{\log{V}}{\sqrt{1 - \left(\frac{\log{V}}{\log{N}}\right)^2}}$$ The measure was derived from a formula by Mueller (1969, as cited in Maas, 1972). \(\log{}_{e}{V_0}\) is equivalent to \(\log{V_0}\), only with \(e\) as the base for the logarithms. Also calculated are \(a\), \(\log{V_0}\) (both not the same as before) and \(V'\) as measures of relative vocabulary growth while the text progresses. To calculate these measures, the first half of the text and the full text are examined (see Maas, 1972, p. 67 ff. for details). Note: for the current method (for a dfm) there is no computation on separate halves of the text.
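The measures that depend only on \(N\) and \(V\) can be checked by hand. The following base-R sketch is illustrative only: it uses a simplified whitespace tokenisation, not quanteda's tokenizer, and the variable names are hypothetical.

```r
# Hand computation of the N/V-based measures (illustrative toy text).
txt <- "the quick brown fox jumps over the lazy dog the fox"
tokens <- strsplit(tolower(txt), "\\s+")[[1]]
N <- length(tokens)          # total number of tokens (11)
V <- length(unique(tokens))  # number of types (8)

ttr       <- V / N                        # Type-Token Ratio
herdan_c  <- log(V) / log(N)              # Herdan's C
guiraud_r <- V / sqrt(N)                  # Guiraud's Root TTR
cttr      <- V / sqrt(2 * N)              # Carroll's Corrected TTR
uber      <- log(N)^2 / (log(N) - log(V)) # Dugast's Uber Index
```

With natural logarithms shown here; in `textstat_lexdiv` the base is controlled by `log.base`.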
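The spectrum-based measures (\(K\), \(D\), \(V_m\)) all operate on the frequency spectrum \(f_v(i, N)\), the number of types occurring \(i\) times. A minimal sketch on a toy sample (names are illustrative, not from the package):

```r
# Frequency spectrum of a toy sample: "a" occurs 3 times, "b" twice, "c" once.
toks <- c("a", "a", "a", "b", "b", "c")
N <- length(toks)             # 6 tokens
V <- length(unique(toks))     # 3 types
fv <- table(table(toks))      # f_v(i, N): number of types occurring i times
i  <- as.numeric(names(fv))   # the occurrence counts i

yule_k    <- 1e4 * (-1/N + sum(fv * (i/N)^2))     # Yule's K
simpson_d <- sum(fv * (i/N) * ((i - 1)/(N - 1)))  # Simpson's D
herdan_vm <- sqrt(sum(fv * (i/N)^2) - 1/V)        # Herdan's Vm
```

Here each of the counts 1, 2 and 3 occurs for exactly one type, so \(f_v(i, N) = 1\) for \(i \in \{1, 2, 3\}\).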
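Maas's \(a^2\) and \(\log{V_0}\) likewise follow directly from \(N\) and \(V\); a sketch using base-10 logarithms (matching the `log.base = 10` default), with hypothetical counts:

```r
# Maas's indices from hypothetical token/type counts.
N <- 11; V <- 8
a2     <- (log10(N) - log10(V)) / log10(N)^2               # Maas's a^2
log_v0 <- log10(V) / sqrt(1 - (log10(V) / log10(N))^2)     # Maas's log V0
```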
Covington, M.A. & McFall, J.D. (2010). "Cutting the Gordian Knot: The Moving-Average Type-Token Ratio (MATTR)". Journal of Quantitative Linguistics 17(2), 94--100.
Herdan, G. (1955). "A New Derivation and Interpretation of Yule's 'Characteristic' K". Zeitschrift für angewandte Mathematik und Physik 6(4), 332--334.
Maas, H.-D. (1972). "Über den Zusammenhang zwischen Wortschatzumfang und Länge eines Textes". Zeitschrift für Literaturwissenschaft und Linguistik 2(8), 73--96.
McCarthy, P.M. & Jarvis, S. (2007). "vocd: A theoretical and empirical evaluation". Language Testing 24(4), 459--488.
McCarthy, P.M. & Jarvis, S. (2010). "MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment". Behavior Research Methods 42(2), 381--392.
Michalke, M. (2014). koRpus: An R Package for Text Analysis. Version 0.05-5. http://reaktanz.de/?c=hacking&s=koRpus
Simpson, E.H. (1949). "Measurement of Diversity". Nature 163, 688.
Tweedie, F.J. & Baayen, R.H. (1998). "How Variable May a Constant Be? Measures of Lexical Richness in Perspective". Computers and the Humanities 32(5), 323--352.
mydfm <- dfm(corpus_subset(data_corpus_inaugural, Year > 1980), verbose = FALSE)
(result <- textstat_lexdiv(mydfm, c("CTTR", "TTR", "U")))
cor(textstat_lexdiv(mydfm, "all")[,-1])