binaryRL (version 0.9.0)

binaryRL-package: binaryRL: Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks

Description

Tools for building reinforcement learning (RL) models specifically tailored to Two-Alternative Forced Choice (TAFC) tasks, commonly employed in psychological research. These models build upon the foundational principles of model-free reinforcement learning detailed in Sutton and Barto (2018) <ISBN:9780262039246>. The package allows for the intuitive definition of RL models using simple if-else statements. Our approach to constructing and evaluating these computational models is informed by the guidelines proposed in Wilson & Collins (2019) <doi:10.7554/eLife.49547>. Example datasets included with the package are sourced from Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
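
For instance, a model in which the learning rate depends on the sign of the prediction error (the idea behind the RSTD model listed below) can be written as an ordinary if-else statement. The following is a minimal sketch of the idiom, not the package's exact interface:

  update_value <- function(value, reward, eta_pos, eta_neg) {
    pe <- reward - value            # prediction error
    if (pe >= 0) {
      value + eta_pos * pe          # better-than-expected outcomes
    } else {
      value + eta_neg * pe          # worse-than-expected outcomes
    }
  }

  update_value(value = 0.5, reward = 1, eta_pos = 0.3, eta_neg = 0.1)
  # [1] 0.65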

Example Data

  • Mason_2024_Exp1: Experiment 1 of Mason et al. (2024)

  • Mason_2024_Exp2: Experiment 2 of Mason et al. (2024)

Steps

  • run_m: Step 1: Building a reinforcement learning model

  • rcv_d: Step 2: Generating fake data for parameter and model recovery

  • fit_p: Step 3: Optimizing parameters to fit real data

  • rpl_e: Step 4: Replaying the experiment with optimal parameters
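
Taken together, the four steps form one pipeline: build and sanity-check a model, confirm that its parameters and identity are recoverable from simulated data, fit it to the real data, and replay the experiment with the winning parameters. The sketch below is hypothetical: the calls are commented out because their argument lists are assumptions (see ?run_m, ?rcv_d, ?fit_p, and ?rpl_e for the actual interfaces):

  library(binaryRL)

  data <- Mason_2024_Exp1    # bundled example dataset

  # Step 1: check that a candidate model runs and behaves sensibly
  # res <- run_m(data = data, ...)

  # Step 2: simulate fake data with known parameters, then recover them
  # rcv <- rcv_d(data = data, ...)

  # Step 3: fit each candidate model's free parameters to the real data
  # fit <- fit_p(data = data, ...)

  # Step 4: replay the experiment with the best-fitting parameters
  # rpl <- rpl_e(data = data, ...)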

Models

  • TD: TD Model

  • RSTD: RSTD Model

  • Utility: Utility Model
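
Roughly, the three models differ in how the delta-rule update is parameterised. A generic sketch under that reading (parameter names are illustrative, not the package's internal ones; the RSTD-style if-else update is also sketched in the Description above):

  # TD: a single learning rate for every outcome
  td_update <- function(v, r, eta) v + eta * (r - v)

  # RSTD: separate learning rates for positive and negative
  # prediction errors
  rstd_update <- function(v, r, eta_pos, eta_neg) {
    if (r >= v) v + eta_pos * (r - v) else v + eta_neg * (r - v)
  }

  # Utility: rewards pass through a power utility function before
  # the update (written here for non-negative rewards)
  utility_update <- function(v, r, eta, gamma) v + eta * (r^gamma - v)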

Functions

  • func_gamma: Utility Function

  • func_eta: Learning Rate

  • func_epsilon: Exploration Strategy

  • func_pi: Upper-Confidence-Bound

  • func_tau: Soft-Max
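
These are the standard building blocks of a model-free RL agent: a utility transform, a learning rate, and three action-selection rules. Generic forms of the selection rules, under one common parameterisation (the package's own versions may differ):

  # Soft-max (func_tau): choice probabilities from values and a
  # temperature parameter tau
  softmax <- function(v, tau) exp(v / tau) / sum(exp(v / tau))

  # Epsilon-greedy (func_epsilon): explore at random with
  # probability epsilon, otherwise exploit the best-valued option
  choose <- function(v, epsilon) {
    if (runif(1) < epsilon) sample(seq_along(v), 1) else which.max(v)
  }

  # Upper-Confidence-Bound (func_pi): exploration bonus that shrinks
  # as an option is chosen more often (t = trial, n = times chosen)
  ucb <- function(v, t, n, c) v + c * sqrt(log(t) / n)

  softmax(c(0.2, 0.8), tau = 0.5)
  # [1] 0.2314752 0.7685248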

Processes

  • optimize_para: Optimizing free parameters

  • simulate_list: Simulating fake datasets

  • recovery_data: Parameter and model recovery
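
The logic behind these helpers, in miniature: simulate choices with a known parameter, re-fit that parameter by maximum likelihood, and check that the estimate lands near the truth. A generic sketch, not the package's implementation:

  set.seed(123)
  n_trials <- 500
  p_reward <- c(0.3, 0.7)    # reward probability of each option
  true_eta <- 0.4            # learning rate to be recovered
  beta     <- 5              # inverse temperature (held fixed here)

  # Simulate a TD learner choosing via soft-max
  v <- c(0, 0); choice <- reward <- integer(n_trials)
  for (t in 1:n_trials) {
    p <- exp(beta * v) / sum(exp(beta * v))
    choice[t] <- sample(1:2, 1, prob = p)
    reward[t] <- rbinom(1, 1, p_reward[choice[t]])
    v[choice[t]] <- v[choice[t]] + true_eta * (reward[t] - v[choice[t]])
  }

  # Re-fit the learning rate by maximising the likelihood of the choices
  neg_loglik <- function(eta) {
    v <- c(0, 0); nll <- 0
    for (t in 1:n_trials) {
      p <- exp(beta * v) / sum(exp(beta * v))
      nll <- nll - log(p[choice[t]])
      v[choice[t]] <- v[choice[t]] + eta * (reward[t] - v[choice[t]])
    }
    nll
  }
  optimize(neg_loglik, c(0, 1))$minimum    # should land near true_eta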

Summary

  • summary.binaryRL: summary(binaryRL.res)

Author

Maintainer: YuKi <hmz1969a@gmail.com>
