Tools for building reinforcement learning (RL) models tailored to Two-Alternative Forced Choice (TAFC) tasks, which are commonly employed in psychological research. These models build on the foundational principles of model-free reinforcement learning described in Sutton and Barto (2018) <ISBN:9780262039246>. The package lets RL models be defined intuitively with simple if-else statements. Our approach to constructing and evaluating these computational models follows the guidelines proposed by Wilson & Collins (2019) <doi:10.7554/eLife.49547>. The example datasets included with the package come from Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
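The "simple if-else statements" idea can be pictured with a minimal base-R sketch. This is only an illustration of a generic delta-rule update for a two-option task, not code taken from the package:

  # Only the chosen option's value moves toward the obtained reward.
  update_value <- function(value, choice, reward, eta = 0.1) {
    if (choice == 1L) {
      value[1] <- value[1] + eta * (reward - value[1])
    } else {
      value[2] <- value[2] + eta * (reward - value[2])
    }
    value
  }

  value <- c(0, 0)                                       # initial option values
  value <- update_value(value, choice = 1L, reward = 1)  # option 1 was rewarded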
Mason_2024_Exp1: Experiment 1 of Mason et al. (2024)
Mason_2024_Exp2: Experiment 2 of Mason et al. (2024)
run_m: Step 1: Building the reinforcement learning model
rcv_d: Step 2: Generating fake data for parameter and model recovery
fit_p: Step 3: Optimizing parameters to fit real data
rpl_e: Step 4: Replaying the experiment with the optimal parameters (a generic base-R sketch of the simulate-then-fit logic behind Steps 2-3 follows this list)
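As a hedged illustration of the logic behind Steps 2 and 3 (simulate choices from known parameters, then recover those parameters by likelihood optimization), the base-R sketch below builds a softmax/delta-rule agent and refits it with optim(). It does not call the package's own functions, whose interfaces may differ:

  set.seed(1)

  # Simulate a two-option task: softmax choice rule plus a delta-rule update.
  simulate_agent <- function(n_trials, eta, tau, p_reward = c(0.7, 0.3)) {
    value <- c(0, 0)
    choice <- reward <- integer(n_trials)
    for (t in seq_len(n_trials)) {
      p1 <- 1 / (1 + exp(-tau * (value[1] - value[2])))
      choice[t] <- ifelse(runif(1) < p1, 1L, 2L)
      reward[t] <- rbinom(1, 1, p_reward[choice[t]])
      value[choice[t]] <- value[choice[t]] + eta * (reward[t] - value[choice[t]])
    }
    data.frame(choice = choice, reward = reward)
  }

  # Negative log-likelihood of the observed choices under candidate parameters.
  neg_loglik <- function(par, data) {
    eta <- par[1]; tau <- par[2]
    if (eta <= 0 || eta >= 1 || tau <= 0) return(1e10)   # crude bounds
    value <- c(0, 0); ll <- 0
    for (t in seq_len(nrow(data))) {
      p1 <- 1 / (1 + exp(-tau * (value[1] - value[2])))
      ll <- ll + log(ifelse(data$choice[t] == 1L, p1, 1 - p1) + 1e-12)
      value[data$choice[t]] <- value[data$choice[t]] +
        eta * (data$reward[t] - value[data$choice[t]])
    }
    -ll
  }

  fake <- simulate_agent(n_trials = 500, eta = 0.3, tau = 3)
  fit  <- optim(par = c(0.5, 1), fn = neg_loglik, data = fake)
  fit$par   # recovered (eta, tau); ideally close to the generating values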
TD: TD Model
RSTD: RSTD Model
Utility: Utility Model (a sketch of how these model families typically differ follows below)
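A hedged sketch of how these three model families are commonly distinguished in the TAFC literature: TD uses a single learning rate, RSTD (risk-sensitive TD) uses separate learning rates for positive and negative prediction errors, and the Utility model learns from a power-transformed subjective utility of the reward. The package's exact parameterization may differ from what is shown here:

  update_step <- function(value, choice, reward,
                          model = c("TD", "RSTD", "Utility"),
                          eta = 0.1, eta_neg = 0.05, gamma = 0.8) {
    model <- match.arg(model)
    # Utility: learn from a subjective (power-function) utility of the reward.
    target <- if (model == "Utility") sign(reward) * abs(reward)^gamma else reward
    pe <- target - value[choice]                  # prediction error
    # RSTD: a different learning rate when the outcome is worse than expected.
    rate <- if (model == "RSTD" && pe < 0) eta_neg else eta
    value[choice] <- value[choice] + rate * pe
    value
  }

  v <- c(0, 0)
  v <- update_step(v, choice = 1L, reward = 10, model = "Utility")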
func_gamma: Utility Function
func_eta: Learning Rate
func_epsilon: Exploration Strategy
func_pi: Upper-Confidence-Bound
func_tau: Soft-Max (generic base-R illustrations of these choice-rule components follow below)
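Generic base-R versions of the choice-rule ingredients these components correspond to: an exploration rate (epsilon), an Upper-Confidence-Bound bonus weight (pi), and a softmax inverse temperature (tau). The package's own func_* implementations may be parameterized differently:

  # Softmax choice probabilities with inverse temperature tau (numerically stable).
  softmax_prob <- function(value, tau = 1) {
    z <- exp(tau * (value - max(value)))
    z / sum(z)
  }

  # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
  choose_epsilon_greedy <- function(value, epsilon = 0.1) {
    if (runif(1) < epsilon) sample(seq_along(value), 1) else which.max(value)
  }

  # Upper-Confidence-Bound: add an uncertainty bonus for rarely chosen options
  # (here 'pi' is the bonus weight, shadowing the base constant inside the function).
  ucb_value <- function(value, n_chosen, trial, pi = 1) {
    value + pi * sqrt(log(trial) / pmax(n_chosen, 1))
  }

  p <- softmax_prob(c(0.6, 0.2), tau = 3)   # choice probabilities for two options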
optimize_para: Optimizing free parameters
simulate_list: Simulating fake datasets
recovery_data: Parameter and model recovery (a generic recovery sketch follows below)
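The recovery idea behind these helpers can be sketched generically: simulate datasets from known parameter values, refit each one, and compare the true and recovered values. The base-R sketch below does this for a single learning rate and does not use the package's helpers:

  set.seed(2)

  sim_fit_once <- function(eta_true, n_trials = 300, tau = 3) {
    value <- c(0, 0); choice <- reward <- integer(n_trials)
    for (t in seq_len(n_trials)) {                        # simulate one agent
      p1 <- 1 / (1 + exp(-tau * (value[1] - value[2])))
      choice[t] <- ifelse(runif(1) < p1, 1L, 2L)
      reward[t] <- rbinom(1, 1, c(0.7, 0.3)[choice[t]])
      value[choice[t]] <- value[choice[t]] + eta_true * (reward[t] - value[choice[t]])
    }
    nll <- function(eta) {                                # refit the learning rate
      value <- c(0, 0); ll <- 0
      for (t in seq_len(n_trials)) {
        p1 <- 1 / (1 + exp(-tau * (value[1] - value[2])))
        ll <- ll + log(ifelse(choice[t] == 1L, p1, 1 - p1) + 1e-12)
        value[choice[t]] <- value[choice[t]] + eta * (reward[t] - value[choice[t]])
      }
      -ll
    }
    optimize(nll, interval = c(0.001, 0.999))$minimum     # recovered eta
  }

  true_eta      <- runif(20, 0.05, 0.6)
  recovered_eta <- vapply(true_eta, sim_fit_once, numeric(1))
  cor(true_eta, recovered_eta)   # should be high if the parameter is recoverable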
summary.binaryRL: Summary method for objects of class "binaryRL", e.g. summary(binaryRL.res)
Maintainer: YuKi <hmz1969a@gmail.com>