gkwreg (version 1.0.7)

llmc: Negative Log-Likelihood for the McDonald (Mc)/Beta Power Distribution

Description

Computes the negative log-likelihood function for the McDonald (Mc) distribution (also known as Beta Power) with parameters gamma (\(\gamma\)), delta (\(\delta\)), and lambda (\(\lambda\)), given a vector of observations. This distribution is the special case of the Generalized Kumaraswamy (GKw) distribution where \(\alpha = 1\) and \(\beta = 1\). This function is suitable for maximum likelihood estimation.
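The special-case relationship can be checked numerically. A minimal sketch, assuming llgkw takes its five parameters in the order alpha, beta, gamma, delta, lambda:

set.seed(1)
x <- rbeta(50, 2, 3)               # any sample strictly inside (0, 1)
par_mc <- c(gamma = 2, delta = 3, lambda = 0.5)
llmc(par_mc, x)                    # Mc negative log-likelihood
llgkw(c(1, 1, par_mc), x)          # should return the same value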

Usage

llmc(par, data)

Value

Returns a single double value representing the negative log-likelihood (\(-\ell(\theta|\mathbf{x})\)). Returns Inf if any parameter value in par is invalid according to its constraints, or if any value in data is not in the open interval (0, 1).
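A minimal illustration of the invalid-input behaviour (hypothetical values, assuming the package is loaded):

x <- c(0.2, 0.5, 0.8)
llmc(c(2, 3, 0.5), x)          # finite value: valid parameters and data
llmc(c(-1, 3, 0.5), x)         # gamma <= 0 violates its constraint: Inf
llmc(c(2, 3, 0.5), c(0, 0.5))  # a data value on the boundary of (0, 1): Inf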

Arguments

par

A numeric vector of length 3 containing the distribution parameters in the order: gamma (\(\gamma > 0\)), delta (\(\delta \ge 0\)), lambda (\(\lambda > 0\)).

data

A numeric vector of observations. All values must be strictly between 0 and 1 (exclusive).

Author

Lopes, J. E.

Details

The McDonald (Mc) distribution is the GKw distribution (dmc) with \(\alpha=1\) and \(\beta=1\). Its probability density function (PDF) is: $$ f(x | \theta) = \frac{\lambda}{B(\gamma,\delta+1)} x^{\gamma \lambda - 1} (1 - x^\lambda)^\delta $$ for \(0 < x < 1\) and \(\theta = (\gamma, \delta, \lambda)\), where \(B(a,b)\) is the Beta function (beta).

The log-likelihood function \(\ell(\theta | \mathbf{x})\) for a sample \(\mathbf{x} = (x_1, \dots, x_n)\) is \(\sum_{i=1}^n \ln f(x_i | \theta)\): $$ \ell(\theta | \mathbf{x}) = n[\ln(\lambda) - \ln B(\gamma, \delta+1)] + \sum_{i=1}^{n} [(\gamma\lambda - 1)\ln(x_i) + \delta\ln(1 - x_i^\lambda)] $$

This function computes and returns the negative log-likelihood, \(-\ell(\theta|\mathbf{x})\), suitable for minimization with optimization routines such as optim. Numerical stability is maintained, in particular by using the log-gamma function (lgamma) to evaluate the Beta function term.
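The log-likelihood formula above can be verified directly against llmc. A sketch using only base R (log1p and lbeta) alongside llmc itself:

nll_mc_manual <- function(par, data) {
  gamma <- par[1]; delta <- par[2]; lambda <- par[3]
  # Same validity rules as documented: Inf for invalid parameters or data
  if (gamma <= 0 || delta < 0 || lambda <= 0 ||
      any(data <= 0) || any(data >= 1)) return(Inf)
  n <- length(data)
  # n[ln(lambda) - ln B(gamma, delta + 1)]
  #   + sum[(gamma*lambda - 1) ln(x_i) + delta ln(1 - x_i^lambda)]
  ll <- n * (log(lambda) - lbeta(gamma, delta + 1)) +
    sum((gamma * lambda - 1) * log(data) +
        delta * log1p(-data^lambda))
  -ll
}

set.seed(42)
x <- rbeta(200, 2, 3)
nll_mc_manual(c(2, 3, 0.5), x)  # should agree with llmc(c(2, 3, 0.5), x)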

References

McDonald, J. B. (1984). Some generalized functions for the size distribution of income. Econometrica, 52(3), 647-663.

Cordeiro, G. M., & de Castro, M. (2011). A new family of generalized distributions. Journal of Statistical Computation and Simulation, 81(7), 883-898.

Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2), 79-88.

See Also

llgkw (parent distribution negative log-likelihood), dmc, pmc, qmc, rmc, grmc (gradient, if available), hsmc (Hessian, if available), optim, lbeta

Examples

# \donttest{
# Assuming existence of rmc, grmc, hsmc functions for Mc distribution

# Generate sample data from a known Mc distribution
set.seed(123)
true_par_mc <- c(gamma = 2, delta = 3, lambda = 0.5)
# Use rmc for data generation
sample_data_mc <- rmc(100, gamma = true_par_mc[1], delta = true_par_mc[2],
                      lambda = true_par_mc[3])
hist(sample_data_mc, breaks = 20, main = "Mc(2, 3, 0.5) Sample")

# --- Maximum Likelihood Estimation using optim ---
# Initial parameter guess
start_par_mc <- c(1.5, 2.5, 0.8)

# Perform optimization (minimizing negative log-likelihood)
mle_result_mc <- stats::optim(par = start_par_mc,
                              fn = llmc, # Use the Mc neg-log-likelihood
                              method = "BFGS", # Or "L-BFGS-B" with lower=1e-6
                              hessian = TRUE,
                              data = sample_data_mc)

# Check convergence and results
if (mle_result_mc$convergence == 0) {
  print("Optimization converged successfully.")
  mle_par_mc <- mle_result_mc$par
  print("Estimated Mc parameters:")
  print(mle_par_mc)
  print("True Mc parameters:")
  print(true_par_mc)
} else {
  warning("Optimization did not converge!")
  print(mle_result_mc$message)
}
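
# --- Approximate standard errors from the Hessian ---
# A common follow-up; this sketch assumes the optimization above converged.
if (mle_result_mc$convergence == 0) {
  # optim() returned the Hessian of the *negative* log-likelihood, so its
  # inverse approximates the variance-covariance matrix of the MLE
  vcov_mc <- solve(mle_result_mc$hessian)
  se_mc <- sqrt(diag(vcov_mc))
  print("Approximate standard errors:")
  print(se_mc)
}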

# --- Compare numerical and analytical derivatives (if available) ---
# Requires 'numDeriv' package and analytical functions 'grmc', 'hsmc'
if (mle_result_mc$convergence == 0 &&
    requireNamespace("numDeriv", quietly = TRUE) &&
    exists("grmc") && exists("hsmc")) {

  cat("\nComparing Derivatives at Mc MLE estimates:\n")

  # Numerical derivatives of llmc
  num_grad_mc <- numDeriv::grad(func = llmc, x = mle_par_mc, data = sample_data_mc)
  num_hess_mc <- numDeriv::hessian(func = llmc, x = mle_par_mc, data = sample_data_mc)

  # Analytical derivatives (assuming they return derivatives of negative LL)
  ana_grad_mc <- grmc(par = mle_par_mc, data = sample_data_mc)
  ana_hess_mc <- hsmc(par = mle_par_mc, data = sample_data_mc)

  # Check differences
  cat("Max absolute difference between gradients:\n")
  print(max(abs(num_grad_mc - ana_grad_mc)))
  cat("Max absolute difference between Hessians:\n")
  print(max(abs(num_hess_mc - ana_hess_mc)))

} else {
   cat("\nSkipping derivative comparison for Mc.\n")
   cat("Requires convergence, 'numDeriv' package and functions 'grmc', 'hsmc'.\n")
}

# }
