gkwreg (version 1.0.7)

llkkw: Negative Log-Likelihood for the kkw Distribution

Description

Computes the negative log-likelihood function for the Kumaraswamy-Kumaraswamy (kkw) distribution with parameters alpha (α), beta (β), delta (δ), and lambda (λ), given a vector of observations. This distribution is a special case of the Generalized Kumaraswamy (GKw) distribution where γ=1.

Usage

llkkw(par, data)

Value

Returns a single double value representing the negative log-likelihood, -ℓ(θ | x). Returns Inf if any parameter values in par are invalid according to their constraints, or if any value in data is not in the interval (0, 1).
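A minimal sketch of the return behaviour described above (the data values are arbitrary points in (0, 1), not draws from a fitted model):

# Valid parameters and data strictly in (0, 1): finite negative log-likelihood
x <- c(0.2, 0.5, 0.7, 0.9)
llkkw(par = c(2, 3, 1.5, 0.5), data = x)

# Invalid parameter (alpha <= 0): expected to return Inf
llkkw(par = c(-1, 3, 1.5, 0.5), data = x)

# Data outside (0, 1): expected to return Inf
llkkw(par = c(2, 3, 1.5, 0.5), data = c(x, 1.2))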

Arguments

par

A numeric vector of length 4 containing the distribution parameters in the order: alpha (α > 0), beta (β > 0), delta (δ ≥ 0), lambda (λ > 0).

data

A numeric vector of observations. All values must be strictly between 0 and 1 (exclusive).

Author

Lopes, J. E.

Details

The kkw distribution is the GKw distribution (dgkw) with γ = 1. Its probability density function (PDF) is:

f(x | θ) = (δ + 1) λ α β x^(α-1) (1 - x^α)^(β-1) [1 - (1 - x^α)^β]^(λ-1) {1 - [1 - (1 - x^α)^β]^λ}^δ

for 0 < x < 1 and θ = (α, β, δ, λ). The log-likelihood function ℓ(θ | x) for a sample x = (x_1, …, x_n) is Σ_{i=1}^n ln f(x_i | θ):

ℓ(θ | x) = n [ln(δ + 1) + ln(λ) + ln(α) + ln(β)] + Σ_{i=1}^n [(α - 1) ln(x_i) + (β - 1) ln(v_i) + (λ - 1) ln(w_i) + δ ln(z_i)]

where:

  • v_i = 1 - x_i^α

  • w_i = 1 - v_i^β = 1 - (1 - x_i^α)^β

  • z_i = 1 - w_i^λ = 1 - [1 - (1 - x_i^α)^β]^λ

This function computes and returns the negative log-likelihood, -ℓ(θ | x), suitable for minimization using optimization routines like optim. Numerical stability is maintained similarly to llgkw.
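As a check on the expression above, the log-likelihood can be computed directly from the v_i, w_i, z_i terms and compared with llkkw; a minimal sketch (any values strictly in (0, 1) suffice, since this only verifies the formula, not the fit):

set.seed(1)
x <- runif(50)  # arbitrary values strictly in (0, 1)
alpha <- 2; beta <- 3; delta <- 1.5; lambda <- 0.5

v <- 1 - x^alpha
w <- 1 - v^beta
z <- 1 - w^lambda
ll <- length(x) * (log(delta + 1) + log(lambda) + log(alpha) + log(beta)) +
  sum((alpha - 1) * log(x) + (beta - 1) * log(v) +
      (lambda - 1) * log(w) + delta * log(z))

# The packaged implementation should agree up to numerical error
all.equal(-ll, llkkw(c(alpha, beta, delta, lambda), x))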

References

Cordeiro, G. M., & de Castro, M. (2011). A new family of generalized distributions. Journal of Statistical Computation and Simulation, 81(7), 883-898.

Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2), 79-88.

See Also

llgkw (parent distribution negative log-likelihood), dkkw, pkkw, qkkw, rkkw, grkkw (gradient, if available), hskkw (Hessian, if available), optim

Examples

# \donttest{
# Assuming existence of rkkw, grkkw, hskkw functions for kkw distribution

# Generate sample data from a known kkw distribution
set.seed(123)
true_par_kkw <- c(alpha = 2, beta = 3, delta = 1.5, lambda = 0.5)
# Use rkkw if it exists, otherwise use rgkw with gamma=1
if (exists("rkkw")) {
  sample_data_kkw <- rkkw(100, alpha = true_par_kkw[1], beta = true_par_kkw[2],
                         delta = true_par_kkw[3], lambda = true_par_kkw[4])
} else {
  sample_data_kkw <- rgkw(100, alpha = true_par_kkw[1], beta = true_par_kkw[2],
                         gamma = 1, delta = true_par_kkw[3], lambda = true_par_kkw[4])
}
hist(sample_data_kkw, breaks = 20, main = "kkw(2, 3, 1.5, 0.5) Sample")

# --- Maximum Likelihood Estimation using optim ---
# Initial parameter guess
start_par_kkw <- c(1.5, 2.5, 1.0, 0.6)

# Perform optimization (minimizing negative log-likelihood)
mle_result_kkw <- stats::optim(par = start_par_kkw,
                               fn = llkkw, # Use the kkw neg-log-likelihood
                               method = "BFGS",
                               hessian = TRUE,
                               data = sample_data_kkw)

# Check convergence and results
if (mle_result_kkw$convergence == 0) {
  print("Optimization converged successfully.")
  mle_par_kkw <- mle_result_kkw$par
  print("Estimated kkw parameters:")
  print(mle_par_kkw)
  print("True kkw parameters:")
  print(true_par_kkw)
} else {
  warning("Optimization did not converge!")
  print(mle_result_kkw$message)
}
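
# --- Approximate standard errors from the Hessian (optional sketch) ---
# Since llkkw is the *negative* log-likelihood, the Hessian returned by
# optim approximates the observed information matrix; its inverse gives
# an approximate covariance matrix for the MLEs.
if (mle_result_kkw$convergence == 0) {
  vcov_kkw <- tryCatch(solve(mle_result_kkw$hessian), error = function(e) NULL)
  if (!is.null(vcov_kkw)) {
    se_kkw <- sqrt(diag(vcov_kkw))
    print("Approximate standard errors:")
    print(se_kkw)
  }
}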

# --- Compare numerical and analytical derivatives (if available) ---
# Requires 'numDeriv' package and analytical functions 'grkkw', 'hskkw'
if (mle_result_kkw$convergence == 0 &&
    requireNamespace("numDeriv", quietly = TRUE) &&
    exists("grkkw") && exists("hskkw")) {

  cat("\nComparing Derivatives at kkw MLE estimates:\n")

  # Numerical derivatives of llkkw
  num_grad_kkw <- numDeriv::grad(func = llkkw, x = mle_par_kkw, data = sample_data_kkw)
  num_hess_kkw <- numDeriv::hessian(func = llkkw, x = mle_par_kkw, data = sample_data_kkw)

  # Analytical derivatives (assuming they return derivatives of negative LL)
  ana_grad_kkw <- grkkw(par = mle_par_kkw, data = sample_data_kkw)
  ana_hess_kkw <- hskkw(par = mle_par_kkw, data = sample_data_kkw)

  # Check differences
  cat("Max absolute difference between gradients:\n")
  print(max(abs(num_grad_kkw - ana_grad_kkw)))
  cat("Max absolute difference between Hessians:\n")
  print(max(abs(num_hess_kkw - ana_hess_kkw)))

} else {
   cat("\nSkipping derivative comparison for kkw.\n")
   cat("Requires convergence, 'numDeriv' package and functions 'grkkw', 'hskkw'.\n")
}
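
# --- Optional check: kkw as a special case of GKw (gamma = 1) ---
# A hedged sketch: assumes llgkw takes its parameter vector in the order
# (alpha, beta, gamma, delta, lambda); under that assumption the two
# negative log-likelihoods should coincide when gamma = 1.
if (exists("llgkw") && mle_result_kkw$convergence == 0) {
  nll_kkw <- llkkw(par = mle_par_kkw, data = sample_data_kkw)
  nll_gkw <- llgkw(par = c(mle_par_kkw[1], mle_par_kkw[2], 1,
                           mle_par_kkw[3], mle_par_kkw[4]),
                   data = sample_data_kkw)
  cat("\nllkkw vs llgkw with gamma = 1:\n")
  print(c(kkw = nll_kkw, gkw = nll_gkw))
}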

# }
