binaryRL (version 0.8.9)

Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks

Description

Tools for building reinforcement learning (RL) models specifically tailored for Two-Alternative Forced Choice (TAFC) tasks, commonly employed in psychological research. These models build upon the foundational principles of model-free reinforcement learning detailed in Sutton and Barto (1998). The package allows for the intuitive definition of RL models using simple if-else statements. Our approach to constructing and evaluating these computational models is informed by the guidelines proposed in Wilson & Collins (2019). Example datasets included with the package are sourced from the work of Mason et al. (2024).
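As a minimal illustration of the if-else style of model definition mentioned above, here is a sketch in base R of a delta-rule value update for a two-option task. The function and argument names (`update_value`, `alpha`) are illustrative assumptions, not part of the binaryRL API.

```r
# Sketch only: a model-free delta-rule update for a two-alternative task,
# written with a plain if-else statement. Not binaryRL's own code.
update_value <- function(values, choice, reward, alpha = 0.3) {
  # Prediction error for the chosen option
  delta <- reward - values[choice]
  if (choice == 1) {
    values[1] <- values[1] + alpha * delta
  } else {
    values[2] <- values[2] + alpha * delta
  }
  values
}

v <- c(0, 0)
v <- update_value(v, choice = 1, reward = 1)
v  # value of option 1 has moved toward the reward: c(0.3, 0.0)
```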

Install

install.packages('binaryRL')

Version

0.8.9

License

GPL-3

Maintainer

YuKi

Last Published

June 15th, 2025

Functions in binaryRL (0.8.9)

run_m (Step 1: Building the reinforcement learning model)
rcv_d (Step 2: Generating fake data for parameter and model recovery)
fit_p (Step 3: Optimizing parameters to fit real data)
rpl_e (Step 4: Replaying the experiment with the optimal parameters)

simulate_list (Process: Simulating fake data)
optimize_para (Process: Optimizing parameters)
recovery_data (Process: Recovering fake data)

TD (Model: TD)
RSTD (Model: RSTD)
Utility (Model: Utility)

func_gamma (Function: Utility function)
func_eta (Function: Learning rate)
func_epsilon (Function: Epsilon greedy)
func_tau (Function: Soft-max function)

Mason_2024_Exp1 (Experiment 1 from Mason et al., 2024)
Mason_2024_Exp2 (Experiment 2 from Mason et al., 2024)

summary.binaryRL (S3 method for summary)
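To make the soft-max choice rule listed above concrete, here is a self-contained base R sketch of a soft-max rule for a two-alternative task. This is an illustrative assumption, not binaryRL's `func_tau` implementation; the name `softmax_2afc` and the `tau` parameterization are hypothetical.

```r
# Illustrative soft-max choice rule for a TAFC task.
# tau is an inverse-temperature parameter: larger tau makes the
# higher-valued option more likely to be chosen.
softmax_2afc <- function(values, tau = 1) {
  stopifnot(length(values) == 2)
  z <- exp(tau * (values - max(values)))  # subtract max for numerical stability
  z / sum(z)
}

softmax_2afc(c(1, 0), tau = 2)  # option 1 is favored
softmax_2afc(c(0, 0))           # equal values give p = c(0.5, 0.5)
```

Raising `tau` sharpens the probabilities toward deterministic choice of the higher-valued option; `tau = 0` yields random choice.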