
binaryRL (version 0.9.8)

Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks

Description

Tools for building Rescorla-Wagner models for two-alternative forced choice tasks, commonly employed in psychological research. Most concepts and ideas in this package are drawn from Sutton and Barto (2018). The package allows RL models to be defined intuitively with simple if-else statements, and the three basic models built into the package follow Niv et al. (2012). Our approach to constructing and evaluating these computational models is informed by the guidelines proposed in Wilson and Collins (2019). The example datasets included with the package are sourced from Mason et al. (2024).
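At the core of such models is the Rescorla-Wagner value update paired with a soft-max choice rule. The base-R sketch below is illustrative only, not the package's internal code; `eta` and `tau` are generic symbols for the learning rate and inverse temperature (matching the roles of `func_eta` and `func_tau` in the package):

```r
# Rescorla-Wagner update: move the value estimate toward the obtained reward
rw_update <- function(V, reward, eta) {
  V + eta * (reward - V)
}

# Soft-max choice rule: probability of choosing option 1 of two
softmax_p1 <- function(V1, V2, tau) {
  exp(tau * V1) / (exp(tau * V1) + exp(tau * V2))
}

V <- 0
V <- rw_update(V, reward = 1, eta = 0.3)  # V is now 0.3
softmax_p1(V, 0, tau = 2)                  # about 0.65: option 1 preferred
```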

Install

install.packages('binaryRL')

Monthly Downloads

430

Version

0.9.8

License

GPL-3

Maintainer

YuKi

Last Published

October 28th, 2025

Functions in binaryRL (0.9.8)

run_m

Step 1: Building the reinforcement learning model
rcv_d

Step 2: Generating fake data for parameter and model recovery
fit_p

Step 3: Optimizing parameters to fit real data
rpl_e

Step 4: Replaying the experiment with optimal parameters
TD

Model: TD
RSTD

Model: RSTD
Utility

Model: Utility
func_eta

Function: Learning Rate
func_gamma

Function: Utility Function
func_tau

Function: Soft-Max Function
func_epsilon

Function: Epsilon Related
func_pi

Function: Upper-Confidence-Bound
func_logl

Function: Loss Function
simulate_list

Process: Simulating Fake Data
recovery_data

Process: Recovering Fake Data
optimize_para

Process: Optimizing Parameters
summary.binaryRL

S3 method: summary
binaryRL-package

binaryRL: Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks
Mason_2024_G1

Group 1 from Mason et al. (2024)
Mason_2024_G2

Group 2 from Mason et al. (2024)
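The idea behind parameter recovery (Step 2, `rcv_d`) can be illustrated without the package: simulate choices from an agent with a known learning rate, then refit that parameter by maximum likelihood and check the estimate lands near the truth. A self-contained base-R sketch of this workflow (all names here are illustrative, not binaryRL's API):

```r
set.seed(1)

# Simulate a two-armed bandit agent with a known learning rate
simulate_agent <- function(eta, tau = 3, n = 500, p_reward = c(0.8, 0.2)) {
  V <- c(0, 0); choice <- integer(n); reward <- integer(n)
  for (t in 1:n) {
    p <- exp(tau * V) / sum(exp(tau * V))        # soft-max choice probabilities
    choice[t] <- if (runif(1) < p[1]) 1L else 2L
    reward[t] <- rbinom(1, 1, p_reward[choice[t]])
    V[choice[t]] <- V[choice[t]] + eta * (reward[t] - V[choice[t]])
  }
  data.frame(choice, reward)
}

# Negative log-likelihood of the data under a candidate learning rate
nll <- function(eta, d, tau = 3) {
  V <- c(0, 0); ll <- 0
  for (t in seq_len(nrow(d))) {
    p <- exp(tau * V) / sum(exp(tau * V))
    ll <- ll + log(p[d$choice[t]])
    V[d$choice[t]] <- V[d$choice[t]] + eta * (d$reward[t] - V[d$choice[t]])
  }
  -ll
}

d <- simulate_agent(eta = 0.4)
fit <- optimize(nll, interval = c(0.01, 0.99), d = d)
fit$minimum  # should recover a value near the true eta of 0.4
```

If the fitted value reliably tracks the generating value across many simulated agents, the model's parameters are recoverable; `rcv_d`, `simulate_list`, and `recovery_data` automate this check within the package.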