
gmmsslm (version 1.1.5)

get_entropy: Shannon entropy

Description

Computes the Shannon entropy of the posterior class-membership probabilities for each observation under a multivariate normal mixture model.

Usage

get_entropy(dat, n, p, g, pi = NULL, mu = NULL, sigma = NULL, paralist = NULL)

Value

clusprobs

The posterior probability that the i-th entity belongs to the j-th group.

Arguments

dat

An \(n\times p\) matrix where each row represents an individual observation.

n

Number of observations.

p

Dimension of the observation vector.

g

Number of multivariate normal classes.

pi

A g-dimensional vector for the initial values of the mixing proportions.

mu

A \(p \times g\) matrix for the initial values of the location parameters.

sigma

A \(p\times p\) covariance matrix, or an array of g covariance matrices with dimension \(p\times p\times g\) (see the dimension sketch after this list).

paralist

A list containing the required parameters \((\pi, \mu, \Sigma)\). If sigma is a single \(p\times p\) covariance matrix, the model is fitted with a common covariance matrix; otherwise, the model is fitted with unequal covariance matrices.
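
A rough illustration of the two covariance shapes described above; the values are arbitrary and only meant to show the expected dimensions (this snippet is not part of the package):

p <- 3
g <- 4

# Common covariance matrix: a single p x p matrix.
sigma_common <- diag(1, p)

# Unequal covariance matrices: a p x p x g array, one slice per class.
sigma_unequal <- array(0, dim = c(p, p, g))
for (k in seq_len(g)) sigma_unequal[, , k] <- diag(k, p)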

Details

The concept of information entropy was introduced by Shannon (1948). The entropy of \(y_j\) is formally defined as $$e_j(y_j;\theta) = -\sum_{i=1}^g \tau_i(y_j;\theta)\log\tau_i(y_j;\theta).$$
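
A minimal sketch of this formula, assuming tau is an \(n\times g\) matrix whose j-th row holds the posterior probabilities \(\tau_i(y_j;\theta)\); the helper name is illustrative and not part of the package:

# Entropy of each observation's posterior class probabilities.
# tau: an n x g matrix of posterior probabilities; each row sums to one.
entropy_from_posteriors <- function(tau) {
  # Terms with a zero probability contribute zero to the sum.
  -rowSums(ifelse(tau > 0, tau * log(tau), 0))
}

tau <- matrix(c(0.7, 0.2, 0.1,
                1/3, 1/3, 1/3), nrow = 2, byrow = TRUE)
entropy_from_posteriors(tau)
# The second row attains the maximum possible entropy, log(3).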

Examples

library(gmmsslm)

# Simulate n = 150 observations from a four-component trivariate normal mixture.
n <- 150
pi <- c(0.25, 0.25, 0.25, 0.25)
sigma <- array(0, dim = c(3, 3, 4))
sigma[, , 1] <- diag(1, 3)
sigma[, , 2] <- diag(2, 3)
sigma[, , 3] <- diag(3, 3)
sigma[, , 4] <- diag(4, 3)
mu <- matrix(c(0.2, 0.3, 0.4, 0.2, 0.7, 0.6, 0.1, 0.7, 1.6, 0.2, 1.7, 0.6), 3, 4)
dat <- rmix(n = n, pi = pi, mu = mu, sigma = sigma)

# Shannon entropy of the posterior class probabilities for each observation.
en <- get_entropy(dat = dat$Y, n = 150, p = 3, g = 4, mu = mu, sigma = sigma, pi = pi)
