
gkwreg (version 1.0.7)

llbkw: Negative Log-Likelihood for Beta-Kumaraswamy (BKw) Distribution

Description

Computes the negative log-likelihood function for the Beta-Kumaraswamy (BKw) distribution with parameters alpha (\(\alpha\)), beta (\(\beta\)), gamma (\(\gamma\)), and delta (\(\delta\)), given a vector of observations. This distribution is the special case of the Generalized Kumaraswamy (GKw) distribution where \(\lambda = 1\). This function is typically used for maximum likelihood estimation via numerical optimization.

Usage

llbkw(par, data)

Value

Returns a single double value representing the negative log-likelihood (\(-\ell(\theta|\mathbf{x})\)). Returns Inf if any parameter values in par are invalid according to their constraints, or if any value in data is not in the interval (0, 1).

Arguments

par

A numeric vector of length 4 containing the distribution parameters in the order: alpha (\(\alpha > 0\)), beta (\(\beta > 0\)), gamma (\(\gamma > 0\)), delta (\(\delta \ge 0\)).

data

A numeric vector of observations. All values must lie strictly in the open interval (0, 1).
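
As a quick illustration of these constraints, here is a minimal sketch (assuming the gkwreg package is loaded; the parameter and data values are arbitrary):

# Valid parameters and data give a finite negative log-likelihood
x <- c(0.2, 0.5, 0.8)
llbkw(par = c(2, 1.5, 1.5, 0.5), data = x)

# Violating a constraint is documented to return Inf
llbkw(par = c(-1, 1.5, 1.5, 0.5), data = x)            # alpha <= 0
llbkw(par = c(2, 1.5, 1.5, 0.5), data = c(0.2, 1.2))   # observation outside (0, 1)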

Author

Lopes, J. E.

Details

The Beta-Kumaraswamy (BKw) distribution is the GKw distribution (dgkw) with \(\lambda=1\). Its probability density function (PDF) is:

$$ f(x | \theta) = \frac{\alpha \beta}{B(\gamma, \delta+1)} x^{\alpha - 1} \bigl(1 - x^\alpha\bigr)^{\beta(\delta+1) - 1} \bigl[1 - \bigl(1 - x^\alpha\bigr)^\beta\bigr]^{\gamma - 1} $$

for \(0 < x < 1\), where \(\theta = (\alpha, \beta, \gamma, \delta)\) and \(B(a,b)\) is the Beta function (beta).

The log-likelihood function \(\ell(\theta | \mathbf{x})\) for a sample \(\mathbf{x} = (x_1, \dots, x_n)\) is \(\sum_{i=1}^n \ln f(x_i | \theta)\):

$$ \ell(\theta | \mathbf{x}) = n[\ln(\alpha) + \ln(\beta) - \ln B(\gamma, \delta+1)] + \sum_{i=1}^{n} [(\alpha-1)\ln(x_i) + (\beta(\delta+1)-1)\ln(v_i) + (\gamma-1)\ln(w_i)] $$

where:

  • \(v_i = 1 - x_i^{\alpha}\)

  • \(w_i = 1 - v_i^{\beta} = 1 - (1-x_i^{\alpha})^{\beta}\)

This function computes and returns the negative log-likelihood, \(-\ell(\theta|\mathbf{x})\), suitable for minimization using optimization routines like optim. Numerical stability is maintained similarly to llgkw.
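
As a sanity check on the formula above, the negative log-likelihood can be reconstructed term by term using lbeta; the following is a minimal sketch, assuming rbkw and llbkw from this package are available:

# Recompute -l(theta|x) directly from the closed-form expression
set.seed(123)
x <- rbkw(50, alpha = 2, beta = 1.5, gamma = 1.5, delta = 0.5)
par <- c(2, 1.5, 1.5, 0.5)                # (alpha, beta, gamma, delta)
a <- par[1]; b <- par[2]; g <- par[3]; d <- par[4]
n <- length(x)
v <- 1 - x^a                              # v_i = 1 - x_i^alpha
w <- 1 - v^b                              # w_i = 1 - (1 - x_i^alpha)^beta
ll <- n * (log(a) + log(b) - lbeta(g, d + 1)) +
  sum((a - 1) * log(x) + (b * (d + 1) - 1) * log(v) + (g - 1) * log(w))
-ll                                       # manual negative log-likelihood
llbkw(par, data = x)                      # should agree up to numerical rounding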

References

Cordeiro, G. M., & de Castro, M. (2011). A new family of generalized distributions. Journal of Statistical Computation and Simulation, 81(7), 883-898.

Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2), 79-88.

See Also

llgkw (parent distribution negative log-likelihood), dbkw, pbkw, qbkw, rbkw, grbkw (gradient, if available), hsbkw (Hessian, if available), optim, lbeta

Examples

# \donttest{

# Generate sample data from a known BKw distribution
set.seed(2203)
true_par_bkw <- c(alpha = 2.0, beta = 1.5, gamma = 1.5, delta = 0.5)
sample_data_bkw <- rbkw(1000, alpha = true_par_bkw[1], beta = true_par_bkw[2],
                         gamma = true_par_bkw[3], delta = true_par_bkw[4])
hist(sample_data_bkw, breaks = 20, main = "BKw(2, 1.5, 1.5, 0.5) Sample")

# --- Maximum Likelihood Estimation using optim ---
# Initial parameter guess
start_par_bkw <- c(1.8, 1.2, 1.1, 0.3)

# Perform optimization (minimizing negative log-likelihood)
mle_result_bkw <- stats::optim(par = start_par_bkw,
                               fn = llbkw, # Use the BKw neg-log-likelihood
                               method = "BFGS", # Needs parameters > 0, consider L-BFGS-B
                               hessian = TRUE,
                               data = sample_data_bkw)

# Check convergence and results
if (mle_result_bkw$convergence == 0) {
  print("Optimization converged successfully.")
  mle_par_bkw <- mle_result_bkw$par
  print("Estimated BKw parameters:")
  print(mle_par_bkw)
  print("True BKw parameters:")
  print(true_par_bkw)
} else {
  warning("Optimization did not converge!")
  print(mle_result_bkw$message)
}
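
# --- Approximate standard errors from the optim Hessian (illustrative sketch) ---
# The Hessian of the negative log-likelihood at the MLE is the observed
# information; its inverse approximates the variance-covariance matrix.
# Not part of the documented workflow; shown only as a common follow-up step.
if (mle_result_bkw$convergence == 0) {
  vcov_bkw <- tryCatch(solve(mle_result_bkw$hessian), error = function(e) NULL)
  if (!is.null(vcov_bkw) && all(diag(vcov_bkw) > 0)) {
    se_bkw <- sqrt(diag(vcov_bkw))
    print("Approximate standard errors for BKw MLE:")
    print(se_bkw)
  } else {
    cat("Hessian not invertible or not positive definite; skipping SEs.\n")
  }
}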

# --- Compare numerical and analytical derivatives (if available) ---
# Requires 'numDeriv' package and analytical functions 'grbkw', 'hsbkw'
if (mle_result_bkw$convergence == 0 &&
    requireNamespace("numDeriv", quietly = TRUE) &&
    exists("grbkw") && exists("hsbkw")) {

  cat("\nComparing Derivatives at BKw MLE estimates:\n")

  # Numerical derivatives of llbkw
  num_grad_bkw <- numDeriv::grad(func = llbkw, x = mle_par_bkw, data = sample_data_bkw)
  num_hess_bkw <- numDeriv::hessian(func = llbkw, x = mle_par_bkw, data = sample_data_bkw)

  # Analytical derivatives (assuming they return derivatives of negative LL)
  ana_grad_bkw <- grbkw(par = mle_par_bkw, data = sample_data_bkw)
  ana_hess_bkw <- hsbkw(par = mle_par_bkw, data = sample_data_bkw)

  # Check differences
  cat("Max absolute difference between gradients:\n")
  print(max(abs(num_grad_bkw - ana_grad_bkw)))
  cat("Max absolute difference between Hessians:\n")
  print(max(abs(num_hess_bkw - ana_hess_bkw)))

} else {
   cat("\nSkipping derivative comparison for BKw.\n")
   cat("Requires convergence, 'numDeriv' package and functions 'grbkw', 'hsbkw'.\n")
}

# }
