Reinforcement Learning Tools for Multi-Armed Bandit
Description
A flexible, general-purpose toolbox for implementing Rescorla-Wagner models
in multi-armed bandit tasks.
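For reference, the Rescorla-Wagner (delta) rule moves the value of the chosen option toward the received reward, V <- V + alpha * (R - V). The minimal R sketch below, written independently of the package interface, simulates such an agent on a two-armed bandit; the reward probabilities, learning rate, and softmax temperature are arbitrary illustrative values, not package defaults.

```r
# Minimal Rescorla-Wagner agent on a two-armed bandit (illustrative only,
# not the multiRL API). Reward probabilities, alpha, and tau are assumptions.
set.seed(1)

n_trials <- 200
p_reward <- c(0.3, 0.7)   # true reward probability of each arm
alpha    <- 0.1           # learning rate
tau      <- 3             # softmax inverse temperature
V        <- c(0, 0)       # initial action values

choices <- rewards <- numeric(n_trials)

for (t in seq_len(n_trials)) {
  # Softmax choice rule
  p_choose <- exp(tau * V) / sum(exp(tau * V))
  a <- sample(1:2, size = 1, prob = p_choose)

  # Binary reward from the chosen arm
  r <- rbinom(1, size = 1, prob = p_reward[a])

  # Rescorla-Wagner (delta-rule) update of the chosen arm only
  V[a] <- V[a] + alpha * (r - V[a])

  choices[t] <- a
  rewards[t] <- r
}

mean(choices == 2)  # proportion of choices of the better arm
```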
As the successor and functional extension of the 'binaryRL' package,
'multiRL' modularizes the Markov Decision Process (MDP) into six core
components. This framework enables users to construct custom models via
intuitive if-else syntax and define latent learning rules for agents.
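As a purely conceptual illustration of expressing a learning rule through if-else logic, the sketch below updates the chosen option with the delta rule while letting unchosen options decay toward zero; the function name, arguments, and the forgetting mechanism are hypothetical and do not reflect the package's actual interface.

```r
# Hypothetical example of writing a learning rule with plain if-else logic
# (function signature and "forgetting" mechanism are assumptions,
# not multiRL's interface).
update_values <- function(V, chosen, reward, alpha = 0.1, decay = 0.05) {
  for (a in seq_along(V)) {
    if (a == chosen) {
      V[a] <- V[a] + alpha * (reward - V[a])   # learn from feedback
    } else {
      V[a] <- (1 - decay) * V[a]               # forget unchosen options
    }
  }
  V
}

update_values(V = c(0.2, 0.5, 0.1), chosen = 2, reward = 1)
```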
For parameter estimation, it provides both likelihood-based inference
(maximum likelihood estimation, MLE; maximum a posteriori estimation, MAP)
and simulation-based inference (approximate Bayesian computation, ABC;
recurrent neural networks, RNN), with full support for parallel
processing across subjects.
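To illustrate the likelihood-based branch, the sketch below recovers the learning rate and softmax temperature of a basic Rescorla-Wagner model by minimizing the negative log-likelihood with stats::optim(); the simulated data, variable names, and parameter bounds are assumptions rather than the package's own workflow. MAP estimation differs only in adding a log-prior term to the same objective.

```r
# Maximum-likelihood fit of a softmax Rescorla-Wagner model with optim()
# (generic sketch; data, names, and bounds are assumptions, not multiRL's).
negloglik <- function(par, choice, reward, n_arms = 2) {
  alpha <- par[1]; tau <- par[2]
  V   <- rep(0, n_arms)
  nll <- 0
  for (t in seq_along(choice)) {
    p    <- exp(tau * V) / sum(exp(tau * V))     # softmax choice probabilities
    nll  <- nll - log(p[choice[t]])              # accumulate -log P(observed choice)
    a    <- choice[t]
    V[a] <- V[a] + alpha * (reward[t] - V[a])    # delta-rule update
  }
  nll
}

# Toy data (the simulated choices/rewards from the sketch above would work too)
set.seed(2)
choices <- sample(1:2, 200, replace = TRUE)
rewards <- rbinom(200, 1, ifelse(choices == 2, 0.7, 0.3))

fit <- optim(
  par    = c(alpha = 0.5, tau = 1),
  fn     = negloglik,
  choice = choices, reward = rewards,
  method = "L-BFGS-B",
  lower  = c(1e-3, 1e-3), upper = c(1, 20)
)
fit$par
```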
The workflow is highly standardized, featuring four main functions
that strictly follow the four-step protocol (and ten rules)
proposed by Wilson & Collins (2019).
Beyond the three built-in models (TD, RSTD, and Utility), users
can easily derive new variants by declaring which variables are
treated as free parameters.
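For concreteness, the RSTD and Utility families can be written as small variations on the delta rule, following their standard definitions in the reinforcement-learning literature (parameter names below are illustrative, not the package's): RSTD applies separate learning rates to positive and negative prediction errors, and the Utility model transforms the reward through a nonlinear utility function before learning. Fixing alpha_pos equal to alpha_neg, or gamma to 1, recovers the basic TD model, which is the sense in which new variants arise from choosing which variables are free.

```r
# Illustrative update rules for the RSTD and Utility model families
# (standard textbook definitions; parameter names are assumptions).

# RSTD: separate learning rates for positive vs. negative prediction errors
rstd_update <- function(V, chosen, reward, alpha_pos = 0.2, alpha_neg = 0.05) {
  pe <- reward - V[chosen]
  if (pe >= 0) {
    V[chosen] <- V[chosen] + alpha_pos * pe
  } else {
    V[chosen] <- V[chosen] + alpha_neg * pe
  }
  V
}

# Utility: the reward is passed through a power utility function before learning
utility_update <- function(V, chosen, reward, alpha = 0.1, gamma = 0.8) {
  u <- sign(reward) * abs(reward)^gamma          # subjective utility of the outcome
  V[chosen] <- V[chosen] + alpha * (u - V[chosen])
  V
}

rstd_update(c(0.4, 0.1), chosen = 1, reward = 1)
utility_update(c(0.4, 0.1), chosen = 2, reward = 1)
```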