This function is designed to construct and customize reinforcement learning models.
Items for model construction:
Data Input and Specification: You must provide the raw dataset for analysis (e.g., Mason_2024_Exp1, Mason_2024_Exp2) and tell the run_m function which column names in your dataset correspond to its arguments. Because the task is a game, it is critical that your dataset includes the rewards of both the human-chosen option and the unchosen option.
Customizable RL Models: This function allows you to define and adjust the number of free parameters to create various reinforcement learning models.
Value Function:
Learning Rate: By adjusting the number of eta parameters, you can construct basic reinforcement learning models such as Temporal Difference (TD) and Risk-Sensitive Temporal Difference (RSTD). You can also adjust func_eta directly to define your own custom learning rate function.
Utility Function: You can adjust the form of func_gamma directly to incorporate the principles of Kahneman's Prospect Theory. Currently, the built-in func_gamma only takes the form of a power function, consistent with Stevens' Power Law.
Exploration–Exploitation Trade-off:
Initial Values: This involves setting the initial expected value for each option before it has been chosen. A higher initial value encourages exploration.
Epsilon: Adjusting the threshold, epsilon, and lambda parameters produces exploration strategies such as epsilon-first, epsilon-greedy, or epsilon-decreasing.
Upper-Confidence-Bound: The pi parameter controls the degree of exploration by scaling the uncertainty bonus given to less-explored options.
Soft-Max: The inverse temperature parameter tau controls the agent's sensitivity to value differences. A higher value of tau means greater emphasis on value differences, leading to more exploitation; a smaller value of tau indicates a greater tendency towards exploration.
Objective Function Format for Optimization: Once your model is defined in run_m, it must be structured as an objective function that accepts params as input and returns a loss value (typically logL). This format ensures compatibility with the algorithm package, which uses it to estimate the optimal parameters. For an example of a standard objective function format, see TD, RSTD, and Utility.
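For instance, a minimal sketch of such an objective function is shown below. This is not the package's built-in TD implementation; the dataset name data_exp and the log-likelihood element res$ll are assumptions made purely for illustration.
# Hypothetical objective function wrapping run_m (a sketch, not the built-in TD)
obj_TD <- function(params) {
  res <- binaryRL::run_m(
    mode = "fit",
    data = data_exp,   # your raw dataset (assumed name)
    id = 18,
    eta = params[1],   # single learning rate, TD-style model
    tau = params[2],   # inverse temperature for the soft-max
    n_params = 2,
    n_trials = 360
  )
  # Assumed: the returned binaryRL object exposes a log-likelihood element;
  # replace res$ll with the actual component name in your version.
  -res$ll
}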
For more information, please refer to the homepage of this package: https://github.com/yuki-961004/binaryRL
run_m(
mode = c("simulate", "fit", "replay"),
data,
id,
n_params,
n_trials,
softmax = TRUE,
seed = 123,
initial_value = NA,
threshold = 1,
alpha = NA,
beta = NA,
gamma = 1,
eta,
epsilon = NA,
lambda = NA,
pi = 0.001,
tau = 1,
util_func = func_gamma,
rate_func = func_eta,
expl_func = func_epsilon,
bias_func = func_pi,
prob_func = func_tau,
sub = "Subject",
time_line = c("Block", "Trial"),
L_choice = "L_choice",
R_choice = "R_choice",
L_reward = "L_reward",
R_reward = "R_reward",
sub_choose = "Sub_Choose",
rob_choose = "Rob_Choose",
raw_cols = NULL,
var1 = NA,
var2 = NA,
digits_1 = 2,
digits_2 = 5
)
A list of class binaryRL containing the results of the model fitting.
[character] This parameter controls the function's operational mode. It has three possible values, each typically associated with a specific function:
"simulate": should be used when working with rcv_d.
"fit": should be used when working with fit_p.
"replay": should be used when working with rpl_e.
In most cases, you won't need to modify this parameter directly, as suitable default values are set for different contexts.
[data.frame] This data should include the following mandatory columns:
"sub"
"time_line" (e.g., "Block", "Trial")
"L_choice"
"R_choice"
"L_reward"
"R_reward"
"sub_choose"
[integer] Which subject is going to be analyzed. The value should correspond to an entry in the "sub" column, which must contain the subject IDs.
e.g., id = 18
[integer] The number of free parameters in your model.
[integer] The total number of trials in your experiment.
[logical] Whether to use the softmax function.
TRUE
: The value of each option directly influences
the probability of selecting that option. Higher values lead to a
higher probability of selection.
FALSE
: The subject will always choose the option
with the higher value. There is no possibility of selecting the
lower-value option.
default: softmax = TRUE
[integer] Random seed. This ensures that the results are reproducible and remain the same each time the function is run.
default: seed = 123
[numeric] The subject's initial expected value for each stimulus's reward. If this value is not set (initial_value = NA), the subject will use the reward received after the first trial as the initial value for that stimulus. In other words, the learning rate for the first trial is effectively 100%.
default: initial_value = NA
[integer] Controls the initial exploration phase in the epsilon-first strategy. This is the number of early trials during which the subject makes purely random choices, as they have not yet learned the options' values. For example, threshold = 20 means random choices for the first 20 trials. For the epsilon-greedy or epsilon-decreasing strategies, `threshold` should be kept at its default value.
$$P(x = 1 \text{ (random choosing)}) = \begin{cases} 1, & \text{trial} \le \text{threshold} \\ 0, & \text{trial} > \text{threshold} \end{cases}$$
default: threshold = 1
epsilon-first: threshold = 20, epsilon = NA, lambda = NA
[vector] Extra parameters that may be used in functions.
[vector] Extra parameters that may be used in functions.
[vector] This parameter represents the exponent in utility functions, specifically:
Stevens' Power Law: Utility is modeled as: $$U = {R}^{\gamma}$$
Kahneman's Prospect Theory: This exponent is applied differently based on the sign of the reward: $$U = \begin{cases} R^{\gamma_{1}}, & R > 0 \\ \beta \cdot R^{\gamma_{2}}, & R < 0 \end{cases}$$
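As a purely numerical illustration of these two forms (not the built-in func_gamma), with hypothetical parameter values and the common Prospect-Theory convention of applying the exponent to the reward magnitude for losses:
# Illustrative utility function (hypothetical values; not the built-in func_gamma).
# For losses the exponent is applied to |R| and the sign restored afterwards,
# which keeps the result real-valued for fractional gamma.
utility <- function(R, gamma = 0.8, beta = 2) {
  ifelse(R >= 0, R^gamma, -beta * (-R)^gamma)
}
utility(c(10, -10))  # approx. 6.31 for the gain, -12.62 for the loss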
[numeric] Parameters used in the Learning Rate Function, rate_func, representing the rate at which the subject updates the difference (prediction error) between the received reward and the expected value held in the subject's mind. The structure of eta depends on the model type:
For the Temporal Difference (TD) model, where a single learning rate is used throughout the experiment $$V_{new} = V_{old} + \eta \cdot (R - V_{old})$$
For the Risk-Sensitive Temporal Difference (RSTD) model, where two different learning rates are used depending on whether the reward is lower or higher than the expected value: $$V_{new} = V_{old} + \eta_{+} \cdot (R - V_{old}), R > V_{old}$$ $$V_{new} = V_{old} + \eta_{-} \cdot (R - V_{old}), R < V_{old}$$
TD: eta = 0.3
RSTD: eta = c(0.3, 0.7)
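For example, using the bundled Mason_2024_Exp1 data with arbitrary parameter values (a sketch only; n_params should count how many values you actually treat as free):
# TD: a single learning rate for all prediction errors
td_res <- binaryRL::run_m(
  mode = "fit", data = binaryRL::Mason_2024_Exp1, id = 18,
  eta = 0.3, n_params = 1, n_trials = 360
)
# RSTD: two learning rates, one for rewards below and one for rewards above
# the current expected value
rstd_res <- binaryRL::run_m(
  mode = "fit", data = binaryRL::Mason_2024_Exp1, id = 18,
  eta = c(0.3, 0.7), n_params = 2, n_trials = 360
)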
[numeric] A parameter used in the epsilon-greedy exploration strategy. It defines the probability of making a completely random choice, as opposed to choosing based on the relative values of the left and right options. For example, if `epsilon = 0.1`, the subject has a 10% chance of making a random choice and a 90% chance of making a value-based choice. This parameter is only relevant when `threshold` is at its default value (1) and `lambda` is not set.
$$P(x) = \begin{cases} \epsilon, & x=1 \text{ (random choosing)} \\ 1-\epsilon, & x=0 \text{ (value-based choosing)} \end{cases}$$
epsilon-greedy: threshold = 1, epsilon = 0.1, lambda = NA
[vector] A numeric value that controls the decay rate of exploration probability in the epsilon-decreasing strategy. A higher `lambda` value means the probability of random choice will decrease more rapidly as the number of trials increases.
$$P(x) = \begin{cases} \frac{1}{1+\lambda \cdot trial}, & x=1 \text{ (random choosing)} \\ \frac{\lambda \cdot trial}{1+\lambda \cdot trial}, & x=0 \text{ (value-based choosing)} \end{cases}$$
epsilon-decreasing: threshold = 1, epsilon = NA, lambda = 0.5
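The three exploration strategies map onto threshold, epsilon, and lambda as sketched here with arbitrary values; the remaining arguments follow the fitting example at the end of this page, and n_params should count the parameters you actually treat as free.
# epsilon-greedy: constant 10% chance of a random choice on every trial
expl_res <- binaryRL::run_m(
  mode = "fit", data = binaryRL::Mason_2024_Exp1, id = 18,
  eta = 0.3, n_params = 2, n_trials = 360,
  threshold = 1, epsilon = 0.1, lambda = NA
)
# epsilon-first:      threshold = 20, epsilon = NA, lambda = NA
# epsilon-decreasing: threshold = 1,  epsilon = NA, lambda = 0.5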
[vector] Parameter used in the Upper-Confidence-Bound (UCB) action selection formula. `bias_func` controls the degree of exploration by scaling the uncertainty bonus given to less-explored options. A larger value of pi (denoted as c in Sutton and Barto (1998)) increases the influence of this bonus, leading to more exploration of actions with uncertain estimated values. Conversely, a smaller pi results in less exploration.
$$ A_t = \arg \max_{a} \left[ V_t(a) + \pi \sqrt{\frac{\ln(t)}{N_t(a)}} \right] $$
default: pi = 0.001
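A standalone numeric illustration of this rule (independent of the package's internal bias_func), with made-up value estimates and choice counts:
# UCB-adjusted values: estimate plus a pi-scaled uncertainty bonus (made-up numbers)
ucb_values <- function(V, N, t, pi = 0.001) {
  # V: current value estimates; N: times each option was chosen; t: trial index
  V + pi * sqrt(log(t) / N)
}
ucb_values(V = c(5, 4.8), N = c(30, 3), t = 33)            # default pi: bonus is negligible
ucb_values(V = c(5, 4.8), N = c(30, 3), t = 33, pi = 0.5)  # larger pi: the rarely chosen option wins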
[vector] Parameters used in the Soft-Max Function. `prob_func` represents the subject's sensitivity to the value difference between options when making decisions, and determines the probability of selecting the left versus the right option based on their values. A larger value of tau indicates greater sensitivity to the value difference: even a small difference in value will make the subject more likely to choose the higher-value option.
$$P_L = \frac{1}{1+e^{-(V_L-V_R) \cdot \tau}}; P_R = \frac{1}{1+e^{-(V_R-V_L) \cdot \tau}}$$
e.g., tau = c(0.5)
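A quick numeric check of this formula with arbitrary values:
# Probability of choosing the left option under the soft-max rule above
p_left <- function(V_L, V_R, tau) 1 / (1 + exp(-(V_L - V_R) * tau))
p_left(V_L = 6, V_R = 5, tau = 0.5)  # approx. 0.62: mild preference for the higher value
p_left(V_L = 6, V_R = 5, tau = 5)    # approx. 0.99: near-deterministic exploitation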
[function] Utility Function; see func_gamma.
[function] Learning Rate Function; see func_eta.
[function] Exploration Strategy Function; see func_epsilon.
[function] Upper-Confidence-Bound Function; see func_pi.
[function] Soft-Max Function; see func_tau.
[character] Column name of the subject ID.
e.g., sub = "Subject"
[vector] A vector specifying the names of the columns that define the sequence of the experiment. This argument describes how the experiment is structured, such as whether it is organized by "Block", with breaks in between and multiple trials within each block.
default: time_line = c("Block", "Trial")
[character] Column name of left choice.
default: L_choice = "Left_Choice"
[character] Column name of right choice.
default: R_choice = "Right_Choice"
[character] Column name of the reward of the left choice.
default: L_reward = "Left_reward"
[character] Column name of the reward of the right choice.
default: R_reward = "Right_reward"
[character] Column name of choices made by the subject.
default: sub_choose = "Choose"
[character] Column name of choices made by the model, which you could ignore.
default: rob_choose = "Rob_Choose"
[vector] Defaults to `NULL`. If left as `NULL`, it will directly capture all column names from the raw data.
[character] Column name of extra variable 1. If your model uses more than just reward and expected value, and you need other information, such as whether the choice frame is Gain or Loss, then you can input the 'Frame' column as var1 into the model.
default: var1 = "Extra_Var1"
[character] Column name of extra variable 2. If one additional variable, var1, does not meet your needs, you can add another additional variable, var2, into your model.
default: var2 = "Extra_Var2"
[integer] The number of decimal places to retain for columns related to the value function.
default: digits_1 = 2
[integer] The number of decimal places to retain for columns related to the selection function.
default: digits_2 = 5
# Fit a two-learning-rate (RSTD) model to subject 18 of the bundled dataset
data <- binaryRL::Mason_2024_Exp1
binaryRL.res <- binaryRL::run_m(
mode = "fit",
data = data,
id = 18,
eta = c(0.321, 0.765),
n_params = 2,
n_trials = 360
)
summary(binaryRL.res)