control_owl() sets the default control arguments
for backward outcome weighted learning, type = "owl".
The arguments are passed directly to DTRlearn2::owl() unless
specified otherwise.
control_owl(
  policy_vars = NULL,
  reuse_scales = TRUE,
  res.lasso = TRUE,
  loss = "hinge",
  kernel = "linear",
  augment = FALSE,
  c = 2^(-2:2),
  sigma = c(0.03, 0.05, 0.07),
  s = 2^(-2:2),
  m = 4
)

control_owl() returns a list of (default) control arguments.
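A minimal usage sketch, assuming the polle functions policy_data(), policy_learn(), and g_glm() with their documented interfaces; the data set d, the variable names, and the fitted objects are purely illustrative.

library(polle)

## Default control arguments; the list is forwarded to DTRlearn2::owl()
## when the policy is fitted.
ctrl <- control_owl()
str(ctrl)

## Supplying the control list to a policy learner of type "owl":
pl <- policy_learn(type = "owl", control = ctrl)

## Hypothetical fit on a policy data object pd created via, e.g.,
## policy_data(d, action = "A", covariates = c("B", "C"), utility = "U"):
## po <- pl(policy_data = pd, g_models = g_glm())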
policy_vars: Character vector/string or list of character
vectors/strings. Variable names used to restrict the policy.
The names must be a subset of the history names, see get_history_names().
Not passed to owl().

reuse_scales: The history matrix passed to owl() is scaled
using scale(), as advised. If TRUE, the scales of the history matrix
are saved and reused when the policy is applied to (new) test data.

res.lasso: If TRUE, a lasso penalty is applied.

loss: Loss function. The options are "hinge", "ramp",
"logit", "logit.lasso", "l2", and "l2.lasso".

kernel: Type of kernel used by the support vector machine. The
options are "linear" and "rbf".

augment: If TRUE, the outcomes are augmented.

c: Regularization parameter (vector of candidate values).

sigma: Tuning parameter for the "rbf" kernel (vector of candidate values).

s: Slope parameter (vector of candidate values).

m: Number of folds for the cross-validation of the parameters.
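A hedged sketch of a non-default control list, using only the arguments documented above; the variable names "B" and "C" are illustrative and must be a subset of get_history_names() for the policy data at hand.

## rbf kernel with a custom sigma grid, lasso penalty disabled, and the
## policy restricted to two history variables.
ctrl_rbf <- control_owl(
  policy_vars = c("B", "C"),
  res.lasso = FALSE,
  kernel = "rbf",
  sigma = c(0.01, 0.05, 0.1),
  c = 2^(-2:2),
  m = 4
)
pl_rbf <- policy_learn(type = "owl", control = ctrl_rbf)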