loglikeSTGrad: Compute Gradient and Hessian for loglikeST and loglikeSTnaive

Computes finite difference gradients and Hessians for the log-likelihood functions loglikeST and loglikeSTnaive. Uses genGradient and genHessian to compute finite difference derivatives of the log-likelihood function in loglikeST and loglikeSTnaive.

Usage

loglikeSTGrad(x, STmodel, type = "p", x.fixed = NULL,
  h = 0.001, diff.type = 0)

loglikeSTHessian(x, STmodel, type = "p", x.fixed = NULL,
  h = 0.001)

loglikeSTnaiveGrad(x, STmodel, type = "p",
  x.fixed = NULL, h = 0.001, diff.type = 0)

loglikeSTnaiveHessian(x, STmodel, type = "p",
  x.fixed = NULL, h = 0.001)
Arguments

x: Point at which to compute the gradient or Hessian; see loglikeST.

STmodel: STmodel object with the model for which to compute derivatives of the log-likelihood.

type: A single character indicating the type of log-likelihood to compute; see loglikeST.

x.fixed: Parameters to keep fixed; see loglikeST.

h, diff.type: Step length and type of finite difference to use; see genGradient.

Value

Returns the gradient or Hessian of the loglikeST and loglikeSTnaive functions.

Warning

loglikeSTnaiveGrad and loglikeSTnaiveHessian may take a very long time to run; use with extreme caution.

See Also

loglikeST, loglikeSTnaive

Other numerical derivatives: genGradient, genHessian

Examples
##load the data
data(mesa.model)
##Compute dimensions for the data structure
dim <- loglikeSTdim(mesa.model)
##Let's create random vectors of values
x <- runif(dim$nparam.cov)
x.all <- runif(dim$nparam)
##Compute the gradients
Gf <- loglikeSTGrad(x.all, mesa.model, "f")
Gp <- loglikeSTGrad(x, mesa.model, "p")
Gr <- loglikeSTGrad(x, mesa.model, "r")
##And the Hessian, this may take some time...
Hf <- loglikeSTHessian(x.all, mesa.model, "f")
Hp <- loglikeSTHessian(x, mesa.model, "p")
Hr <- loglikeSTHessian(x, mesa.model, "r")
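The usage above also exposes a diff.type argument. A hedged sketch of how such a switch between finite-difference schemes typically works (the convention positive = forward, zero = central, negative = backward is an assumption here; consult genGradient for the package's actual behaviour):

```r
# Hypothetical illustration of a diff.type switch between the three
# classic finite-difference schemes for a scalar function f.
fdDeriv <- function(f, x, h = 0.001, diff.type = 0) {
  if (diff.type > 0) {
    (f(x + h) - f(x)) / h              # forward difference, error O(h)
  } else if (diff.type < 0) {
    (f(x) - f(x - h)) / h              # backward difference, error O(h)
  } else {
    (f(x + h) - f(x - h)) / (2 * h)    # central difference, error O(h^2)
  }
}

f <- function(x) exp(x)                # true derivative at 0 is 1
fdDeriv(f, 0, diff.type = 1)           # forward approximation
fdDeriv(f, 0, diff.type = 0)           # central approximation, more accurate
```

The central scheme costs two function evaluations per coordinate instead of one extra, but its error shrinks quadratically in h, which is why it is the usual default.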