ForeCA (version 0.2.7)

continuous_entropy: Shannon entropy for a continuous pdf

Description

Computes the Shannon entropy \(\mathcal{H}(p)\) for a continuous probability density function (pdf) \(p(x)\) using numerical integration.

Usage

continuous_entropy(pdf, lower, upper, base = 2)

Arguments

pdf

R function for the pdf \(p(x)\) of an RV \(X \sim p(x)\). This function must be non-negative and integrate to \(1\) over the interval [lower, upper]; see the sanity check sketched after this list.

lower, upper

lower and upper integration limits; pdf must integrate to \(1\) on this interval.

base

logarithm base; entropy is measured in "nats" if base = exp(1), and in "bits" if base = 2 (the default).
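
As a sanity check before calling continuous_entropy, the requirements on pdf can be verified numerically. A minimal sketch for a finite interval (check_pdf is a hypothetical helper, not part of ForeCA):

check_pdf <- function(pdf, lower, upper, tol = 1e-6) {
  # the pdf must be non-negative on [lower, upper] ...
  x <- seq(lower, upper, length.out = 1000)
  stopifnot(all(pdf(x) >= 0))
  # ... and must integrate to 1 on that interval
  total <- integrate(pdf, lower, upper)$value
  stopifnot(abs(total - 1) < tol)
  invisible(total)
}

check_pdf(function(x) dunif(x, 0, 0.5), 0, 0.5)  # passes silently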

Value

scalar; entropy value (real).

Since continuous_entropy uses numerical integration (integrate()), convergence is not guaranteed, even if the integral in the definition of \(\mathcal{H}(p)\) exists. A warning is issued if integrate() does not converge.
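
Because the warning is signalled through R's standard condition system, non-convergence can also be detected programmatically. A minimal sketch using only base R (the density here is arbitrary):

h <- withCallingHandlers(
  continuous_entropy(function(x) dunif(x, 0, 1), 0, 1),
  warning = function(w) {
    # react to (or log) a non-convergence warning, then muffle it
    message("continuous_entropy warned: ", conditionMessage(w))
    invokeRestart("muffleWarning")
  }
)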

Details

The Shannon entropy of a continuous random variable (RV) \(X \sim p(x)\) is defined as $$ \mathcal{H}(p) = -\int_{-\infty}^{\infty} p(x) \log p(x) \, dx. $$
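
A minimal sketch of this computation, assuming integrate() converges and treating \(0 \log 0\) as \(0\) (it mirrors the definition above; it is not ForeCA's actual implementation):

entropy_sketch <- function(pdf, lower, upper, base = 2) {
  integrand <- function(x) {
    px <- pdf(x)
    out <- -px * log(px)  # integrand of the entropy definition, in nats
    out[px == 0] <- 0     # convention: 0 * log(0) = 0
    out
  }
  # integrate in nats, then convert to the requested logarithm base
  integrate(integrand, lower, upper)$value / log(base)
}

entropy_sketch(function(x) dunif(x, -1, 1), -1, 1)  # 1 bit, matches log2(2)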

Unlike discrete RVs, continuous RVs can have negative entropy (see Examples).
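
This follows directly from the definition. For \(X \sim U(a, b)\) the density is constant, \(p(x) = 1/(b - a)\) on \([a, b]\), so $$ \mathcal{H}(p) = -\int_a^b \frac{1}{b - a} \log \frac{1}{b - a} \, dx = \log(b - a), $$ which is negative whenever \(b - a < 1\); e.g., \(U(0, 0.5)\) has entropy \(\log_2(0.5) = -1\) bit.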

See Also

discrete_entropy

Examples

# The entropy of U(a, b) equals log(b - a), so it is not necessarily positive, e.g.
continuous_entropy(function(x) dunif(x, 0, 0.5), 0, 0.5) # log2(0.5) = -1

# Same, but for U(-1, 1)
my_density <- function(x){
  dunif(x, -1, 1)
}
continuous_entropy(my_density, -1, 1) # = log2(upper - lower) = 1

# a 'triangle' density: p(x) = x on [0, sqrt(2)], which integrates to 1
continuous_entropy(function(x) x, 0, sqrt(2))

