Computes finite difference gradients and Hessians for the log-likelihood functions loglike and loglike.naive.

Usage

loglike.grad(x, mesa.data.model, type = "p", h = 0.001, diff.type = 0)
loglike.naive.grad(x, mesa.data.model, type = "p", h = 0.001, diff.type = 0)
loglike.hessian(x, mesa.data.model, type = "p", h = 0.001)
loglike.naive.hessian(x, mesa.data.model, type = "p", h = 0.001)
Arguments

x: Point at which to compute the gradient or Hessian, see loglike.

mesa.data.model: Data structure holding observations, covariates, trends, etc., see create.data.model.

type: A single character denoting the type of log-likelihood to compute; valid options are "f", "p", and "r", for full, profile, or restricted maximum likelihood (REML).

h: Step length for the finite differences.

diff.type: Type of finite difference; a positive value gives forward differences, 0 gives central differences, and a negative value gives backward differences. See gen.gradient.
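To make the diff.type convention concrete, the following self-contained sketch applies the three schemes to a scalar function; the helper fd is hypothetical and only illustrates the convention, the package itself relies on gen.gradient.

##Hypothetical helper illustrating the diff.type convention
fd <- function(f, x, h = 0.001, diff.type = 0) {
  if (diff.type > 0) {
    ##forward difference
    (f(x + h) - f(x)) / h
  } else if (diff.type < 0) {
    ##backward difference
    (f(x) - f(x - h)) / h
  } else {
    ##central difference
    (f(x + h) - f(x - h)) / (2 * h)
  }
}
fd(sin, 0, diff.type = 1)   ##forward, approximately cos(0) = 1
fd(sin, 0, diff.type = 0)   ##central, approximately 1
fd(sin, 0, diff.type = -1)  ##backward, approximately 1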
Value

The gradient or Hessian of the loglike and loglike.naive functions.

Warning

loglike.naive.grad and loglike.naive.hessian may take a very long time to run; use with extreme care.

Details

Uses gen.gradient and gen.hessian to compute finite difference derivatives of the log-likelihood functions in loglike and loglike.naive.
See Also

Used by the model fitting function fit.mesa.model and provided for users who want to implement their own model fitting; a minimal sketch is given below, before the examples. The parameter vector x follows the same format as in loglike; expected names for x are given by loglike.var.names. The log-likelihood itself is used by the estimation functions fit.mesa.model and run.MCMC.
For general computation of gradients and Hessians see gen.gradient and gen.hessian. For further log-likelihood computations see loglike, loglike.dim, and loglike.var.names.
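For users implementing their own model fitting, one plausible approach is to hand loglike and loglike.grad to optim; the sketch below assumes profile likelihood, BFGS, and a random starting point, and is only a rough outline of what fit.mesa.model does more carefully.

##A minimal sketch of custom model fitting (assumed settings:
##profile likelihood, BFGS, random start; not the package defaults)
data(mesa.data.model)
dim <- loglike.dim(mesa.data.model)
x0 <- runif(dim$nparam.cov)
opt <- optim(x0, loglike, gr = loglike.grad,
             mesa.data.model = mesa.data.model, type = "p",
             method = "BFGS", control = list(fnscale = -1))
opt$par  ##estimated log-covariance parameters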
Examples

##Load the data
data(mesa.data.model)
##Compute dimensions for the data structure
dim <- loglike.dim(mesa.data.model)
##Let's create random vectors of values
x <- runif(dim$nparam.cov)
x.all <- runif(dim$nparam)
##Compute the gradients
Gf <- loglike.grad(x.all, mesa.data.model, "f")
Gp <- loglike.grad(x, mesa.data.model, "p")
Gr <- loglike.grad(x, mesa.data.model, "r")
##And the Hessian, this may take some time...
Hf <- loglike.hessian(x.all, mesa.data.model, "f")
Hp <- loglike.hessian(x, mesa.data.model, "p")
Hr <- loglike.hessian(x, mesa.data.model, "r")
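At a genuine maximum-likelihood estimate (not the random points used above) the Hessian should be negative definite, and its negative inverse approximates the parameter covariance; the check below is a standard computation, not a package feature.

##Check curvature: at an ML estimate all eigenvalues of the Hessian
##should be negative (here x is random, so they need not be)
eigen(Hp, symmetric = TRUE, only.values = TRUE)$values
##At an ML estimate, approximate standard errors would follow as
##sqrt(diag(solve(-Hp)))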