Step 4: Replaying the experiment with optimal parameters
Usage

rpl_e(
  result,
  free_params = NULL,
  data,
  colnames,
  behrule,
  ids = NULL,
  models,
  funcs = NULL,
  priors = NULL,
  settings = NULL,
  ...
)

Value

An S3 object of class multiRL.replay.
A list containing, for each subject and each fitted model, the
estimated optimal parameters, along with the multiRL.model and
multiRL.summary objects obtained by replaying the model with those
parameters.
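Because the return value is an ordinary nested list, it can be explored with base R tools. A minimal sketch, using the replay.recovery object created in the examples below; the exact element names depend on the models and settings, so inspect the structure rather than assuming keys:

# Peek at the nesting: one entry per subject, one per fitted model
str(replay.recovery, max.level = 2)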
Arguments

result: The result object returned by rcv_d or fit_p.

free_params: To avoid ambiguity about the free parameters, their names can be defined explicitly by the user (see the sketch after this list).

data: A data frame in which each row represents a single trial, see data.

colnames: Column names in the data frame, see colnames.

behrule: The agent's implicitly formed internal rule, see behrule.

ids: The subject ID of the participant whose data is to be fitted.

models: The reinforcement learning models to replay.

funcs: The functions forming the reinforcement learning model, see funcs.

priors: Prior probability density functions of the free parameters, see priors.

settings: Other model settings, see settings.

...: Additional arguments passed to internal functions.
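Where parameter names could be ambiguous, free_params can pin them down explicitly. A minimal sketch, reusing the data, colnames, and behrule objects from the examples below; "eta" and "tau" are placeholder names, not names required by multiRL:

# Hypothetical: name the free parameters explicitly to avoid ambiguity
replay.named <- multiRL::rpl_e(
  result = recovery.MLE,
  free_params = c("eta", "tau"),  # placeholders; match your model's parameters
  data = data,
  colnames = colnames,
  behrule = behrule,
  models = list(multiRL::TD)
)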
Examples

# Example data and column/rule configuration
data <- multiRL::TAB
colnames <- list(
  object = c("L_choice", "R_choice"),
  reward = c("L_reward", "R_reward"),
  action = "Sub_Choose"
)
behrule <- list(
  cue = c("A", "B", "C", "D"),
  rsp = c("A", "B", "C", "D")
)
# Replay with the parameter-recovery result (recovery.MLE)
replay.recovery <- multiRL::rpl_e(
result = recovery.MLE,
data = data,
colnames = colnames,
behrule = behrule,
models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
settings = list(name = c("TD", "RSTD", "Utility")),
omit = c("data", "funcs")
)

# Replay with the model-fitting result (fitting.MLE)
replay.fitting <- multiRL::rpl_e(
result = fitting.MLE,
data = data,
colnames = colnames,
behrule = behrule,
models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
settings = list(name = c("TD", "RSTD", "Utility")),
omit = c("funcs")
)
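Individual replays can then be pulled out with ordinary list indexing. A minimal sketch; the subject/model nesting follows the Value description above, but the exact keys are assumptions, so verify them against the str() output first:

# Hypothetical access: the replayed TD model for the first subject
# (the "TD" key mirrors settings$name above; confirm with str())
td_replay <- replay.fitting[[1]][["TD"]]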