Functions for information measures of and between distributions of values.
The functions check whether .data is a distribution of a random variable (sum == 1) or not. To force normalisation, or to prevent it, set .do.norm to TRUE (do normalisation) or FALSE (don't do normalisation). For kl.div, the two input vectors must have the same length.
- The Shannon entropy quantifies the uncertainty (degree of surprise) associated with predicting the value of a random variable drawn from the distribution.
- Kullback-Leibler divergence (information gain, information divergence, relative entropy, KLIC) is a non-symmetric measure of the difference between two probability distributions P and Q (measure of information lost when Q is used to approximate P).
- Jensen-Shannon divergence is a symmetric version of KLIC. Its square root is a metric often referred to as the Jensen-Shannon distance; see the sketch below for the standard definitions of all three measures.
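The sketch below restates the three measures in plain R for reference. The helper names (shannon_entropy, kl_divergence, js_divergence) and the base-2 logarithm are assumptions for illustration only; this is not the package's implementation.

# Illustrative sketch of the standard definitions (hypothetical helpers).
# Inputs are assumed to be proper distributions of equal length with no zeros.
shannon_entropy <- function(p) {
  -sum(p * log2(p))                      # H(P) = -sum p_i log p_i
}
kl_divergence <- function(p, q) {
  sum(p * log2(p / q))                   # D(P || Q) = sum p_i log(p_i / q_i)
}
js_divergence <- function(p, q) {
  m <- (p + q) / 2                       # mixture distribution M = (P + Q) / 2
  0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
}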
entropy(.data, .norm = F, .do.norm = NA, .laplace = 1e-12)
kl.div(.alpha, .beta, .do.norm = NA, .laplace = 1e-12)
js.div(.alpha, .beta, .do.norm = NA, .laplace = 1e-12, .norm.entropy = F)
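A minimal usage sketch based on the signatures above, assuming the package providing these functions is attached. The input vectors are made-up counts; with the default .do.norm = NA they do not sum to 1, so they are normalised (with the default Laplace correction) before the measures are computed.

p <- c(5, 3, 2)                  # normalised internally to ~c(0.5, 0.3, 0.2)
q <- c(1, 1, 8)

entropy(p)                       # Shannon entropy of the normalised p
entropy(p, .norm = T)            # normalised entropy H / Hmax
kl.div(p, q)                     # information lost when q approximates p
js.div(p, q)                     # symmetric divergence between p and q
js.div(p, q, .norm.entropy = T)  # JS divergence normalised by entropy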
.data, .alpha, .beta: Vector of values.
.norm: if TRUE, compute the normalised entropy (H / Hmax).
.do.norm: one of NA, TRUE or FALSE. If NA, check whether the input is a distribution (sum(.data) == 1) and normalise it if needed using the given Laplace correction value. If TRUE, do the normalisation and Laplace correction. If FALSE, do neither (see the sketch after this list).
.laplace: value of the Laplace correction, which is added to every value in .data.
.norm.entropy: if TRUE, normalise the JS divergence by entropy.
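A plausible sketch of the normalisation and Laplace-correction step described for .do.norm and .laplace. The helper name is hypothetical and the package's internal code may differ.

# Hypothetical helper illustrating the documented .do.norm / .laplace behaviour.
normalise_if_needed <- function(.data, .do.norm = NA, .laplace = 1e-12) {
  if (is.na(.do.norm)) {
    .do.norm <- sum(.data) != 1    # NA: normalise only if not already a distribution
  }
  if (.do.norm) {
    .data <- .data + .laplace      # Laplace correction added to every value
    .data <- .data / sum(.data)    # rescale so the values sum to 1
  }
  .data
}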
Shannon entropy, Jensen-Shannon divergence or Kullback-Leibler divergence values.