The Objective object specifies the framework for an objective function for numerical optimization.
An Objective object.
objective_name
[character(1)]
The label for the objective function.
fixed_arguments
[character()]
The name(s) of the fixed argument(s) (if any).
seconds
[numeric(1)]
A time limit in seconds. Computations are interrupted prematurely if seconds is exceeded.
No time limit if seconds = Inf (the default).
Note the limitations documented in setTimeLimit.
hide_warnings
[logical(1)]
Hide warnings when evaluating the objective function?
verbose
[logical(1)]
Print status messages?
npar
[integer()]
The length of each target argument.
target
[character()]
The argument name(s) that get optimized.
gradient_specified
[logical(1)]
Whether a gradient function has been specified via $set_gradient().
hessian_specified
[logical(1)]
Whether a Hessian function has been specified via $set_hessian().
new()
Creates a new Objective object.
Objective$new(f, target = NULL, npar, ...)
f
[function]
A function to be optimized.
It is expected that f has at least one numeric argument.
Further, it is expected that the return value of f is of the structure numeric(1), i.e., a single numeric value.
target
[character()]
The argument name(s) that get optimized.
All target arguments must be numeric.
Can be NULL (default); then the first function argument is selected.
npar
[integer()]
The length of each target argument, i.e., the length(s) of the numeric vector argument(s) specified by target.
...
Optional additional function arguments that are fixed during the optimization.
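As a minimal sketch of the constructor, a simple quadratic objective could be set up as follows. The function name f_sq, the fixed argument a, and the package name in the library() call are illustrative assumptions, not part of this page:

```r
library(optimizeR)  # assumed package name providing the Objective class

### quadratic objective with target argument 'x' and fixed argument 'a'
f_sq <- function(x, a) sum((x - a)^2)

### 'x' has length 2; 'a' stays fixed at c(1, 2) during optimization
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))
```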
set_argument()
Set a function argument that remains fixed during optimization.
Objective$set_argument(..., .overwrite = TRUE, .verbose = self$verbose)
...
Optional additional function arguments that are fixed during the optimization.
.overwrite
[logical(1)]
Overwrite existing values?
.verbose
[logical(1)]
Print status messages?
get_argument()
Get a fixed function argument.
Objective$get_argument(argument_name, .verbose = self$verbose)
argument_name
[character(1)]
A function argument name.
.verbose
[logical(1)]
Print status messages?
remove_argument()
Remove a fixed function argument.
Objective$remove_argument(argument_name, .verbose = self$verbose)
argument_name
[character(1)]
A function argument name.
.verbose
[logical(1)]
Print status messages?
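The three argument-management methods above can be sketched together. This is a hedged, self-contained example; f_sq and the package name in library() are assumptions:

```r
library(optimizeR)  # assumed package name providing the Objective class

f_sq <- function(x, a) sum((x - a)^2)
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))

### replace the fixed argument 'a', read it back, then remove it
obj$set_argument(a = c(0, 0), .overwrite = TRUE)
obj$get_argument("a")
obj$remove_argument("a")
```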
set_gradient()
Set a gradient function.
Objective$set_gradient(
  gradient,
  target = self$target,
  npar = self$npar,
  ...,
  .verbose = self$verbose
)
gradient
[function]
A function that computes the gradient of the objective function f.
It is expected that gradient has the same call as f, and that gradient returns a numeric vector of length self$npar.
target
[character()]
The argument name(s) that get optimized.
All target arguments must be numeric.
Can be NULL (default); then the first function argument is selected.
npar
[integer()]
The length of each target argument, i.e., the length(s) of the numeric vector argument(s) specified by target.
...
Optional additional function arguments that are fixed during the optimization.
.verbose
[logical(1)]
Print status messages?
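A short sketch of set_gradient(), assuming the quadratic objective and package name used for illustration (gr_sq has the same call as f_sq, as the method requires):

```r
library(optimizeR)  # assumed package name providing the Objective class

f_sq <- function(x, a) sum((x - a)^2)
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))

### analytic gradient of f_sq, with the same call as f_sq
gr_sq <- function(x, a) 2 * (x - a)
obj$set_gradient(gr_sq)
```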
set_hessian()
Set a Hessian function.
Objective$set_hessian(
  hessian,
  target = self$target,
  npar = self$npar,
  ...,
  .verbose = self$verbose
)
hessian
[function]
A function that computes the Hessian of the objective function f.
It is expected that hessian has the same call as f, and that hessian returns a numeric matrix of dimension self$npar times self$npar.
target
[character()]
The argument name(s) that get optimized.
All target arguments must be numeric.
Can be NULL (default); then the first function argument is selected.
npar
[integer()]
The length of each target argument, i.e., the length(s) of the numeric vector argument(s) specified by target.
...
Optional additional function arguments that are fixed during the optimization.
.verbose
[logical(1)]
Print status messages?
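Analogously, a sketch of set_hessian() under the same illustrative assumptions (for the quadratic, the Hessian is the constant matrix 2 * I):

```r
library(optimizeR)  # assumed package name providing the Objective class

f_sq <- function(x, a) sum((x - a)^2)
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))

### constant Hessian 2 * I, with the same call as f_sq
he_sq <- function(x, a) diag(2, nrow = 2)
obj$set_hessian(he_sq)
```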
evaluate()
Evaluate the objective function.
Objective$evaluate(
  .at,
  .negate = FALSE,
  .gradient_as_attribute = FALSE,
  .gradient_attribute_name = "gradient",
  .hessian_as_attribute = FALSE,
  .hessian_attribute_name = "hessian",
  ...
)
.at
[numeric()]
The values for the target argument(s), written in a single vector (must be of length sum(self$npar)).
.negate
[logical(1)]
Negate the function return value?
.gradient_as_attribute
[logical(1)]
Add the value of the gradient function as an attribute to the output?
The attribute name is defined via the .gradient_attribute_name argument.
Ignored if $gradient_specified is FALSE.
.gradient_attribute_name
[character(1)]
Only relevant if .gradient_as_attribute = TRUE.
In that case, the attribute name for the gradient (if available).
.hessian_as_attribute
[logical(1)]
Add the value of the Hessian function as an attribute to the output?
The attribute name is defined via the .hessian_attribute_name argument.
Ignored if $hessian_specified is FALSE.
.hessian_attribute_name
[character(1)]
Only relevant if .hessian_as_attribute = TRUE.
In that case, the attribute name for the Hessian (if available).
...
Optional additional function arguments that are fixed during the optimization.
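A sketch of evaluate(), including the .negate flag and the gradient-as-attribute option; the quadratic objective and the package name in library() are illustrative assumptions:

```r
library(optimizeR)  # assumed package name providing the Objective class

f_sq <- function(x, a) sum((x - a)^2)
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))
obj$set_gradient(function(x, a) 2 * (x - a))

### evaluate at c(0, 0): f_sq(c(0, 0), a = c(1, 2)) gives 1 + 4
obj$evaluate(c(0, 0))

### negated value with the gradient attached as attribute "gradient"
obj$evaluate(c(0, 0), .negate = TRUE, .gradient_as_attribute = TRUE)
```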
evaluate_gradient()
Evaluate the gradient function.
Objective$evaluate_gradient(.at, .negate = FALSE, ...)
.at
[numeric()]
The values for the target argument(s), written in a single vector (must be of length sum(self$npar)).
.negate
[logical(1)]
Negate the function return value?
...
Optional additional function arguments that are fixed during the optimization.
evaluate_hessian()
Evaluate the Hessian function.
Objective$evaluate_hessian(.at, .negate = FALSE, ...)
.at
[numeric()]
The values for the target argument(s), written in a single vector (must be of length sum(self$npar)).
.negate
[logical(1)]
Negate the function return value?
...
Optional additional function arguments that are fixed during the optimization.
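The two evaluation methods above can be sketched together, again under the illustrative quadratic-objective assumptions (both methods require that a gradient/Hessian was set beforehand):

```r
library(optimizeR)  # assumed package name providing the Objective class

f_sq <- function(x, a) sum((x - a)^2)
obj <- Objective$new(f = f_sq, target = "x", npar = 2, a = c(1, 2))
obj$set_gradient(function(x, a) 2 * (x - a))
obj$set_hessian(function(x, a) diag(2, nrow = 2))

### gradient at c(0, 0): 2 * (c(0, 0) - c(1, 2))
obj$evaluate_gradient(c(0, 0))

### the Hessian is constant for this objective
obj$evaluate_hessian(c(0, 0))
```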
clone()
The objects of this class are cloneable with this method.
Objective$clone(deep = FALSE)
deep
Whether to make a deep clone.
### define log-likelihood function of Gaussian mixture model
llk <- function(mu, sd, lambda, data) {
sd <- exp(sd)
lambda <- plogis(lambda)
cluster_1 <- lambda * dnorm(data, mu[1], sd[1])
cluster_2 <- (1 - lambda) * dnorm(data, mu[2], sd[2])
sum(log(cluster_1 + cluster_2))
}
### optimize over the first three arguments, the 'data' argument is constant
objective <- Objective$new(
f = llk, target = c("mu", "sd", "lambda"), npar = c(2, 2, 1),
data = faithful$eruptions
)
### evaluate at 1:5 (1:2 is passed to mu, 3:4 to sd, and 5 to lambda)
objective$evaluate(1:5)