multiRL (version 0.2.3)

settings: Model Settings

Description

The settings argument defines the model's name, the estimation method, and other configurations.

Arguments

Class

settings [List]

Slots

  • name [Character]

    The name of the model.

  • mode [Character]

    There are two modes: "fitting" and "simulating". In most cases, users do not need to explicitly specify the value of this slot, as the program will set it automatically.

    Typically, the "fitting" mode is used when executing fit_p, while the "simulating" mode is used when executing rcv_d.

  • estimate [Character]

    The package supports four estimation methods: Maximum Likelihood Estimation (MLE), Maximum A Posteriori Estimation (MAP), Approximate Bayesian Computation (ABC), and Recurrent Neural Network (RNN). Users generally no longer need to specify the estimation method in the settings object; this slot has been moved to an argument of the main functions rcv_d and fit_p (see the usage sketch after the Example below). For details, please refer to the documentation for estimate.

  • policy [Character]

    The naming of this slot as policy is still under consideration.

    Colloquially, policy = "on" means the agent selects an option based on its estimated probability and then updates the value of the chosen option.

    Conversely, policy = "off" means the agent directly mimics human behavior, solely using its estimated probability and the human's choice to calculate the likelihood.

    For details, please refer to the documentation for policy. A minimal sketch of this distinction appears after this list.

  • system [Character]

    In decision-making paradigms, multiple systems may operate jointly to influence human decisions. These systems can include a reinforcement learning system, as well as working memory, and even habitual choice tendencies.

    If system = "RL", the learning process follows the Rescorla-Wagner (RW) model using a learning rate less than 1, representing a slow, incremental value update system.

    If system = "WM", the process still follows the Rescorla-Wagner (RW) model but with a fixed learning rate of 1, functioning as a pure memory system that immediately updates an option's value.

    If system = c("RL", "WM"), the agent maintains two distinct Q-tables, one for reinforcement learning (RL) and one for working memory (WM), during the decision-making process, integrating their values based on the provided weight to determine the final choice.

    For details, please refer to the documentation for system. A minimal sketch of the single- and dual-system updates appears after this list.
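
The difference between the two policy values can be illustrated with a short sketch. The code below is illustrative only and is not part of the multiRL API; softmax, q_values, and the choice variables are hypothetical names.

 # Illustrative sketch only; these objects are hypothetical, not multiRL functions.
 softmax <- function(q, tau = 1) exp(q / tau) / sum(exp(q / tau))

 q_values <- c(A = 0.2, B = 0.6)   # current value estimates for two options
 prob <- softmax(q_values)         # estimated choice probabilities

 # policy = "on": the agent samples its own choice from its estimated
 # probabilities, and the value of that chosen option is then updated.
 agent_choice <- sample(names(q_values), size = 1, prob = prob)
 loglik_on <- log(prob[agent_choice])
 reward <- 1                       # hypothetical feedback for the chosen option
 q_values[agent_choice] <- q_values[agent_choice] +
   0.2 * (reward - q_values[agent_choice])

 # policy = "off": the agent keeps the human's observed choice and only uses
 # its estimated probability of that choice to compute the likelihood.
 human_choice <- "B"
 loglik_off <- log(prob[human_choice])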
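
Similarly, the role of the system slot can be sketched with a plain Rescorla-Wagner update. Again, rw_update and the other names below are hypothetical and only illustrate the idea of a slow RL system, a fast WM system, and their weighted integration.

 # Illustrative sketch only; rw_update and these variables are hypothetical.
 rw_update <- function(q, reward, alpha) q + alpha * (reward - q)

 reward <- 1
 q_rl <- rw_update(q = 0.3, reward = reward, alpha = 0.2)  # system = "RL": learning rate < 1, slow incremental update
 q_wm <- rw_update(q = 0.3, reward = reward, alpha = 1)    # system = "WM": learning rate fixed at 1, immediate update

 # system = c("RL", "WM"): two value estimates are kept and integrated
 # with a weight to determine the final choice.
 weight_wm <- 0.6
 q_final <- weight_wm * q_wm + (1 - weight_wm) * q_rl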

Example

 # model settings
 settings <- list(
   name     = "TD",       # model name
   mode     = "fitting",  # usually set automatically by fit_p / rcv_d
   estimate = "MLE",      # now typically passed to fit_p / rcv_d instead
   policy   = "off",
   system   = "RL"
 )
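
As noted under the estimate slot, the estimation method is now typically supplied directly to the main functions rather than inside settings. The call below is only a sketch: the argument names passed to fit_p are assumptions, not the documented signature, and my_data is a hypothetical data object.

 library(multiRL)

 # Hypothetical call sketch; the argument names below are assumptions, see ?fit_p.
 result <- fit_p(
   data     = my_data,    # hypothetical data object
   settings = settings,
   estimate = "MLE"       # estimation method supplied here, not in settings
 )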