Tools for building Rescorla-Wagner models for Two-Alternative Forced Choice (2AFC) tasks, commonly employed in psychological research. Most concepts and ideas in this R package draw on Sutton and Barto (2018) <ISBN:9780262039246>. The package allows reinforcement learning models to be defined intuitively with simple if-else statements; the three basic models built into the package follow Niv et al. (2012) <doi:10.1523/JNEUROSCI.5498-10.2012>. The approach to constructing and evaluating these computational models follows the guidelines proposed by Wilson and Collins (2019) <doi:10.7554/eLife.49547>. The example datasets included with the package come from Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
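The modeling style this enables can be sketched in a few lines of base R. The snippet below is an illustration only, not package code, and every name in it is hypothetical: a Rescorla-Wagner update written as a plain if-else rule, with the two branches marking where a model could diverge (e.g., asymmetric learning rates).

    # Illustration only -- not part of the binaryRL API.
    # Rescorla-Wagner update written as a simple if-else rule.
    rw_update <- function(V, choice, reward, eta = 0.1) {
      delta <- reward - V[choice]             # prediction error
      if (delta >= 0) {
        V[choice] <- V[choice] + eta * delta  # better than expected
      } else {
        V[choice] <- V[choice] + eta * delta  # worse than expected
      }
      V
    }

    V <- c(A = 0, B = 0)
    V <- rw_update(V, choice = "A", reward = 1)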
Mason_2024_G1:
Group 1 of Mason et al. (2024)
Mason_2024_G2:
Group 2 of Mason et al. (2024)
run_m:
Step 1: Building the reinforcement learning model
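Conceptually, Step 1 steps through each trial, updates option values, and scores the observed choice under the model. The sketch below illustrates that loop with a TD rule and softmax; it is a plain-R stand-in, not the run_m() interface (see ?run_m for the actual arguments).

    # Conceptual sketch of Step 1, not the run_m() signature.
    # trials: data frame with columns choice (1 or 2) and reward.
    run_td_model <- function(trials, eta = 0.1, tau = 1) {
      V <- c(0, 0)
      logl <- 0
      for (t in seq_len(nrow(trials))) {
        p <- exp(tau * V) / sum(exp(tau * V))   # softmax choice probabilities
        ch <- trials$choice[t]
        logl <- logl + log(p[ch])               # score the observed choice
        V[ch] <- V[ch] + eta * (trials$reward[t] - V[ch])  # TD update
      }
      logl
    }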
rcv_d:
Step 2: Generating fake data for parameter and model recovery
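The idea behind Step 2 is to let a model with known ("true") parameters generate synthetic choices, so that parameter and model recovery can be checked later. A bare-bones stand-in (hypothetical names, not the rcv_d() interface):

    # Conceptual sketch of Step 2, not the rcv_d() signature.
    simulate_td <- function(n_trials, eta, tau, p_reward = c(0.7, 0.3)) {
      V <- c(0, 0)
      out <- data.frame(choice = integer(n_trials), reward = numeric(n_trials))
      for (t in seq_len(n_trials)) {
        p <- exp(tau * V) / sum(exp(tau * V))
        ch <- sample(1:2, size = 1, prob = p)   # the model chooses
        r <- rbinom(1, 1, p_reward[ch])         # probabilistic reward
        V[ch] <- V[ch] + eta * (r - V[ch])
        out$choice[t] <- ch
        out$reward[t] <- r
      }
      out
    }

    fake <- simulate_td(n_trials = 100, eta = 0.3, tau = 2)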
fit_p:
Step 3: Optimizing parameters to fit real data
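At heart, Step 3 is numerical optimization of the free parameters against the data's log-likelihood. The sketch below uses stats::optim() and reuses run_td_model() and fake from the Step 1 and Step 2 sketches; it is not the fit_p() interface.

    # Conceptual sketch of Step 3, not the fit_p() signature.
    # Minimize the negative log-likelihood of the observed choices.
    neg_logl <- function(par, trials) {
      -run_td_model(trials, eta = par[1], tau = par[2])
    }
    fit <- optim(
      par = c(0.5, 1),               # starting values for eta and tau
      fn = neg_logl, trials = fake,
      method = "L-BFGS-B",
      lower = c(0.001, 0.001), upper = c(1, 10)
    )
    fit$par                          # fitted parameter estimates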
rpl_e:
Step 4: Replaying the experiment with optimal parameters
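Step 4 closes the loop: the fitted parameters are fed back into the generative model to "replay" the experiment, and the simulated behavior is compared with the participants'. A stand-in sketch (reusing simulate_td() and fit from above, not the rpl_e() interface):

    # Conceptual sketch of Step 4, not the rpl_e() signature.
    replay <- simulate_td(n_trials = 100, eta = fit$par[1], tau = fit$par[2])
    mean(replay$choice == 1)   # model's rate of choosing option 1
    mean(fake$choice == 1)     # observed rate, for comparison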
TD:
TD Model
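Following Niv et al. (2012), the TD model updates the chosen option's value with a single learning rate. A sketch of the update rule (the package's internal implementation may differ):

    # TD update (sketch): one learning rate eta for all outcomes.
    td_update <- function(V, choice, reward, eta) {
      V[choice] <- V[choice] + eta * (reward - V[choice])
      V
    }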
RSTD:
RSTD Model
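The risk-sensitive TD (RSTD) model of Niv et al. (2012) uses separate learning rates for positive and negative prediction errors, which is exactly where the if-else style pays off. A sketch:

    # RSTD update (sketch): eta_pos for positive prediction errors,
    # eta_neg for negative ones.
    rstd_update <- function(V, choice, reward, eta_pos, eta_neg) {
      delta <- reward - V[choice]
      if (delta >= 0) {
        V[choice] <- V[choice] + eta_pos * delta
      } else {
        V[choice] <- V[choice] + eta_neg * delta
      }
      V
    }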
Utility:
Utility Model
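The Utility model passes the obtained reward through a nonlinear utility function before the TD update, as in Niv et al. (2012). A sketch assuming a sign-preserving power utility with exponent gamma (the package's exact functional form may differ):

    # Utility model update (sketch): learn from subjective utility
    # rather than objective reward magnitude.
    utility_update <- function(V, choice, reward, eta, gamma) {
      u <- sign(reward) * abs(reward)^gamma   # power utility
      V[choice] <- V[choice] + eta * (u - V[choice])
      V
    }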
func_gamma:
Utility Function
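A sketch of the kind of power utility function this entry refers to (the exact parameterization used by the package is an assumption here):

    # Power utility (sketch): gamma < 1 compresses large rewards.
    u_gamma <- function(reward, gamma) sign(reward) * abs(reward)^gamma
    u_gamma(reward = 4, gamma = 0.5)   # 2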
func_eta:
Learning Rate
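The learning rate eta sets how far a value estimate moves toward each new outcome. A sketch of the standard delta rule:

    # Delta rule (sketch): eta = 0 ignores the outcome,
    # eta = 1 replaces the old value entirely.
    update_value <- function(V_old, reward, eta) V_old + eta * (reward - V_old)
    update_value(V_old = 0.5, reward = 1, eta = 0.1)   # 0.55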
func_epsilon:
Epsilon-Based Exploration
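Epsilon-based exploration chooses randomly with probability epsilon and otherwise exploits the current best option. A sketch of epsilon-greedy (the package may cover related variants as well):

    # Epsilon-greedy (sketch): explore with probability epsilon.
    choose_epsilon_greedy <- function(V, epsilon) {
      if (runif(1) < epsilon) {
        sample(seq_along(V), size = 1)   # explore: any option at random
      } else {
        which.max(V)                     # exploit: current best option
      }
    }
    choose_epsilon_greedy(V = c(0.2, 0.6), epsilon = 0.1)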
func_pi:
Upper-Confidence-Bound
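Upper-confidence-bound action selection (Sutton and Barto, 2018) adds an exploration bonus that is large for rarely chosen options. A sketch, with the bonus weight named pi_par to avoid clashing with R's built-in pi:

    # UCB (sketch): value plus an uncertainty bonus.
    # n: times each option has been chosen; t: current trial number.
    choose_ucb <- function(V, n, t, pi_par) {
      bonus <- pi_par * sqrt(log(t) / pmax(n, 1))   # larger for rare options
      which.max(V + bonus)
    }
    choose_ucb(V = c(0.5, 0.4), n = c(10, 2), t = 12, pi_par = 0.5)   # picks 2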
func_tau:
Soft-Max
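The softmax rule maps values onto choice probabilities. The sketch below treats tau as an inverse temperature (higher tau, more deterministic choices); the package's parameterization may differ.

    # Softmax (sketch): tau as inverse temperature.
    softmax <- function(V, tau) {
      z <- exp(tau * (V - max(V)))   # subtract max for numerical stability
      z / sum(z)
    }
    softmax(V = c(0.2, 0.6), tau = 3)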
func_logl:
Loss Function
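The loss function is the negative log-likelihood of the observed choices under the model's choice probabilities. A sketch:

    # Negative log-likelihood (sketch): p_chosen holds, per trial, the
    # model's probability of the option the participant actually chose.
    neg_log_likelihood <- function(p_chosen) -sum(log(p_chosen))
    neg_log_likelihood(p_chosen = c(0.8, 0.6, 0.9))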
optimize_para:
Optimizing free parameters
simulate_list:
Simulating fake datasets
recovery_data:
Parameter and model recovery
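Parameter recovery, in the spirit of Wilson and Collins (2019), means simulating data with known parameters, refitting the model, and checking that true and estimated values agree. A sketch reusing simulate_td() and neg_logl() from the Step 2 and Step 3 sketches:

    # Parameter recovery check (sketch).
    true_eta <- runif(20, 0.05, 0.95)
    est_eta <- sapply(true_eta, function(eta) {
      d <- simulate_td(n_trials = 200, eta = eta, tau = 2)
      optim(c(0.5, 1), neg_logl, trials = d, method = "L-BFGS-B",
            lower = c(0.001, 0.001), upper = c(1, 10))$par[1]
    })
    cor(true_eta, est_eta)   # should be high if eta is recoverable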
summary.binaryRL:
Summary method for fitted model objects, e.g. summary(binaryRL.res)
Maintainer: YuKi <hmz1969a@gmail.com> (ORCID)