copBasic (version 2.0.1)

kullCOP: Kullback-Leibler Divergence, Jeffrey's Divergence, and Kullback-Leibler Sample Size

Description

Compute the Kullback-Leibler divergence, Jeffrey's divergence, and Kullback-Leibler sample size following Joe (2014, pp. 234--237). Consider two densities $f = c_1(u,v; \Theta_f)$ and $g = c_2(u,v; \Theta_g)$ for two different bivariate copulas $\mathbf{C}_1(\Theta_f)$ and $\mathbf{C}_2(\Theta_g)$ having respective parameters $\Theta_f$ and $\Theta_g$. The Kullback-Leibler divergence of $f$ relative to $g$ is $$\mathrm{KL}(f|g) = \int\!\!\int_{\mathcal{I}^2} g\, \log(g/f)\,\mathrm{d}u\mathrm{d}v\mbox{,}$$ and the Kullback-Leibler divergence of $g$ relative to $f$ is $$\mathrm{KL}(g|f) = \int\!\!\int_{\mathcal{I}^2} f\, \log(f/g)\,\mathrm{d}u\mathrm{d}v\mbox{,}$$ where the limits of integration $\mathcal{I}^2$ theoretically are closed on $[0,1]^2$, although the open interval $(0,1)^2$ might be needed for numerical integration. Note that in general $\mathrm{KL}(f|g) \ne \mathrm{KL}(g|f)$. The $\mathrm{KL}(f|g)$ is the expected log-likelihood ratio of $g$ to $f$ when $g$ is the true density (Joe, 2014, p. 234), and $\mathrm{KL}(g|f)$ is the converse.

This asymmetry motivates Jeffrey's divergence, a symmetrized combination of the two Kullback-Leibler divergences: $$J(f,g) = \mathrm{KL}(f|g) + \mathrm{KL}(g|f) = \int\!\!\int_{\mathcal{I}^2} (g-f)\, \log(g/f)\,\mathrm{d}u\mathrm{d}v\mbox{.}$$

The variances of the Kullback-Leibler divergences are defined as $$\sigma^2_{\mathrm{KL}(f|g)} = \int\!\!\int_{\mathcal{I}^2} g\,[\log(g/f)]^2\,\mathrm{d}u\mathrm{d}v - [\mathrm{KL}(f|g)]^2\mbox{,}$$ and $$\sigma^2_{\mathrm{KL}(g|f)} = \int\!\!\int_{\mathcal{I}^2} f\,[\log(f/g)]^2\,\mathrm{d}u\mathrm{d}v - [\mathrm{KL}(g|f)]^2\mbox{.}$$
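
Because the integrals above are expectations of functions of $(u,v)$ against the uniform measure on $\mathcal{I}^2$, each can be estimated by simple Monte Carlo averaging of uniform draws. A minimal sketch of that idea follows using densityCOP with the same Gumbel-Hougaard and Plackett parameters as in the Examples; sapply() is used so the sketch does not assume densityCOP is vectorized, and the actual internals of kullCOP might differ:

library(copBasic)
set.seed(1)
nsim <- 1E4   # small for a quick sketch; kullCOP() defaults to n=1E6
u <- runif(nsim); v <- runif(nsim)   # uniform draws over the unit square
f <- sapply(seq_len(nsim), function(i)
            densityCOP(u[i], v[i], cop=GHcop,       para=2       ))
g <- sapply(seq_len(nsim), function(i)
            densityCOP(u[i], v[i], cop=PLACKETTcop, para=11.40484))
KLfg <- mean(g*log(g/f))                 # KL(f|g)
KLgf <- mean(f*log(f/g))                 # KL(g|f)
Jfg  <- KLfg + KLgf                      # Jeffrey's divergence J(f,g)
varKLfg <- mean(g*log(g/f)^2) - KLfg^2   # variance of KL(f|g)
varKLgf <- mean(f*log(f/g)^2) - KLgf^2   # variance of KL(g|f)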

For comparison of copula families $f$ and $g$ and taking an $\alpha = 0.05$, the Kullback-Leibler sample size is defined as $$n_{f\!g} = [\Phi^{(-1)}(1-\alpha) \times \eta_\mathrm{KL}]^2\mbox{,}$$ where $\Phi^{(-1)}(t)$ is the quantile function of the standard normal distribution N(0,1) for nonexceedance probability $t$, and $\eta_\mathrm{KL}$ is the larger of the two ratios of standard deviation to divergence: $$\eta_\mathrm{KL} = \mathrm{max}[\sigma_{\mathrm{KL}(f|g)}/\mathrm{KL}(f|g),\, \sigma_{\mathrm{KL}(g|f)}/\mathrm{KL}(g|f)]\mbox{.}$$ The $n_{f\!g}$ gives an indication of the sample size needed to distinguish $f$ and $g$ with a probability of at least $1 - \alpha = 1 - 0.05 = 0.95$ or 95 percent.
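
The sample-size equation translates directly into R. In the sketch below the function name kl_sample_size is hypothetical, and the four inputs are the two divergences and their standard deviations, however they were estimated:

# Hypothetical helper implementing the sample-size equation above.
kl_sample_size <- function(KLfg, sigKLfg, KLgf, sigKLgf, alpha=0.05) {
   etaKL <- max(sigKLfg/KLfg, sigKLgf/KLgf) # the larger of the two ratios
   (qnorm(1-alpha) * etaKL)^2               # n_fg; qnorm() is Phi^(-1)
}

Because $n_{f\!g}$ is a sample size, rounding the result up to the next integer with ceiling() is natural.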

The copBasic package features a naïve Monte Carlo integration scheme in the primary interface kullCOP, although the function kullCOPint provides nested numerical integration. The latter function is generally fast, but for general application it suffers too often from integral divergence errors issued by the integrate() function in R. This must be judged in light of the fact that the package implements only the copula function itself and numerical estimation of the copula density (densityCOP), not analytical copula densities or hybrid representations. Sufficient bread crumbs are left among the code and documentation for users to re-implement the integration if speed is paramount. Numerical comparison to the results of Joe (2014) (see Examples) suggests that the default Monte Carlo sample size should be more than sufficient for general inference, at the expense of considerable CPU time; even so, a few repeated calls of kullCOP are advised so that, say, the mean of the resulting sample sizes can be computed.
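
For orientation, the nested scheme can be mimicked with two stacked calls to integrate(), as sketched below for $\mathrm{KL}(f|g)$; this sketch is subject to the same divergence failures just described and need not match the internal implementation of kullCOPint:

library(copBasic)
del <- .Machine$double.eps^0.25   # mirror the del argument of kullCOPint()
lo <- del; hi <- 1 - del          # open the interval at the boundaries
inner <- function(V) {            # integrate() passes vectors, hence sapply()
   sapply(V, function(vv) {
      integrate(function(U) {
         sapply(U, function(uu) {
            f <- densityCOP(uu, vv, cop=GHcop,       para=2       )
            g <- densityCOP(uu, vv, cop=PLACKETTcop, para=11.40484)
            g * log(g/f)          # integrand of KL(f|g)
         })
      }, lo, hi)$value            # inner integral over u for fixed v
   })
}
KLfg <- integrate(inner, lo, hi)$value # outer integral over v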

Usage

kullCOP(cop1=NULL, cop2=NULL, para1=NULL, para2=NULL, alpha=0.05,
           del=0, n=1E6, verbose=TRUE, ...)

kullCOPint(cop1=NULL, cop2=NULL, para1=NULL, para2=NULL, alpha=0.05,
           del=.Machine$double.eps^0.25, verbose=TRUE, ...)

Arguments

cop1
A copula function corresponding to copula $f$ in Joe (2014);
para1
Vector of parameters or other data structure, if needed, to pass to the copula $f$;
cop2
A copula function corresponding to copula $g$ in Joe (2014);
para2
Vector of parameters or other data structure, if needed, to pass to the copula $g$;
alpha
The $\alpha$ in the Kullback-Leibler sample size equation;
del
A small value used to denote the lo and hi limits of the numerical integration: lo = del and hi = 1 - del. If del == 0, then lo = 0 and hi = 1, which corresponds to the theoretical limits $\mathcal{I}^2 = [0,1]^2$;
n
kullCOP (Monte Carlo integration) only: The Monte Carlo integration simulation size;
verbose
A logical triggering a couple of status lines of output through the message() function in R; and
...
Additional arguments to pass to the densityCOP function.

Value

An R list is returned having the following components:

  • MonteCarlo.sim.size: kullCOP (Monte Carlo integration) only. The simulation size for numerical integration;
  • divergences: A vector of the Kullback-Leibler divergences and their standard deviations: $\mathrm{KL}(f|g)$, $\sigma_{\mathrm{KL}(f|g)}$, $\mathrm{KL}(g|f)$, and $\sigma_{\mathrm{KL}(g|f)}$, respectively;
  • stdev.divergences: kullCOP (Monte Carlo integration) only. The standard deviations of the divergences and the variances;
  • Jeffrey.divergence: Jeffrey's divergence $J(f,g)$;
  • KL.sample.size: Kullback-Leibler sample size $n_{f\!g}$; and
  • integrations: kullCOPint (numerical integration) only. An R list of the outer call of the integrate() function for the respective numerical integrals shown in this documentation.
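
As a short usage sketch of pulling these components from a call, in which the reduced Monte Carlo size n and verbose=FALSE are merely illustrative choices to keep the call quick and quiet:

KL <- kullCOP(cop1=GHcop, para1=2, cop2=PLACKETTcop, para2=11.40484,
              n=5E4, verbose=FALSE)
KL$divergences        # KL(f|g), sigma_KL(f|g), KL(g|f), sigma_KL(g|f)
KL$Jeffrey.divergence # J(f,g)
KL$KL.sample.size     # n_fg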

concept

  • Kullback-Leibler divergence
  • Kullback-Leibler sample size
  • Jeffrey divergence
  • Jeffrey's divergence
  • copula divergence

References

Joe, H., 2014, Dependence modeling with copulas: Boca Raton, CRC Press, 462 p.

See Also

densityCOP, vuongCOP

Examples

library(copBasic) # provides kullCOP(), GHcop, and PLACKETTcop
# See another demonstration under the Note section of statTn().
# Joe (2014, p. 237, table 5.2)
# The Gumbel-Hougaard copula and Plackett copula below each have a Kendall Tau
# of about 0.5. Joe (2014) lists in the table that Jeffrey divergence is about
# 0.110 and the Kullback-Leibler sample size is 133. Joe (2014) does not list
# the parameters for either copula, just that Kendall Tau = 0.5.
KL <- kullCOP(cop1=GHcop, para1=2, cop2=PLACKETTcop, para2=11.40484)
# Reports Jeffrey divergence        =   0.1086986
#      Kullback-Leibler sample size = 130 (another run produces 132)

# Joe (2014, p. 237, table 5.3)
# The Gumbel-Hougaard copula and Plackett copula below each have a Spearman Rho
# of about 0.5. Joe (2014) lists in the table that Jeffrey divergence is about
# 0.063 and the Kullback-Leibler sample size is 210. Joe (2014) does not list
# the parameters for either copula, just that Spearman Rho = 0.5.
KL <- kullCOP(cop1=GHcop, para1=1.541071, cop2=PLACKETTcop, para2=5.115658)
# Reports Jeffrey divergence        =   0.06381151
#      Kullback-Leibler sample size = 220 (another run produces 203)

# Joe (2014) likely did the numerical integrations using analytical solutions to the
# probability densities and not rectangular approximations as in densityCOP().
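
# Per the advice in the Description, average the sample size over repeated
# calls to smooth the Monte Carlo noise; n is reduced here only for speed.
sizes <- replicate(3, kullCOP(cop1=GHcop, para1=2, cop2=PLACKETTcop,
                   para2=11.40484, n=1E5, verbose=FALSE)$KL.sample.size)
mean(sizes) # a more stable estimate of the Kullback-Leibler sample size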
