multiRL (version 0.2.3)

run_m: Step 1: Building the reinforcement learning model

Description

Step 1: Building the reinforcement learning model

Usage

run_m(
  data,
  colnames = list(),
  behrule = list(),
  funcs = list(),
  params = list(),
  priors = list(),
  settings = list(),
  engine = "Cpp",
  ...
)

Value

An S4 object of class multiRL.model.

input

An S4 object of class multiRL.input, containing the raw data, column specifications, parameters, and additional arguments passed via ...

behrule

An S4 object of class multiRL.behrule, defining the latent learning rules.

result

An S4 object of class multiRL.result, storing trial-level outputs of the Markov Decision Process.

sumstat

An S4 object of class multiRL.sumstat, providing summary statistics across different estimation methods.

extra

A list containing additional user-defined information.
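Each component above is an S4 slot of the returned object, so it can be read with `@` or `slot()`. A minimal base-R sketch of the access pattern (the class and slot names below are illustrative stand-ins, not the actual multiRL classes):

```r
# Toy illustration of S4 slot access; "demo.model" and its slots are
# hypothetical and stand in for the documented multiRL.model slots.
library(methods)

setClass("demo.model", slots = c(result = "data.frame", extra = "list"))

obj <- new("demo.model",
           result = data.frame(trial = 1:3, value = c(0.1, 0.4, 0.7)),
           extra  = list(note = "user-defined info"))

obj@result          # slot access with `@`
slot(obj, "extra")  # equivalent base-R accessor
```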

Arguments

data

A data frame in which each row represents a single trial; see data.

colnames

Column names in the data frame; see colnames.

behrule

The agent’s latent internal behavioral rule; see behrule.

funcs

The functions forming the reinforcement learning model; see funcs.

params

Parameters used by the model’s internal functions; see params.

priors

Prior probability density functions of the free parameters; see priors.

settings

Other model settings; see settings.

engine

Specifies whether the core Markov Decision Process (MDP) update loop is executed in C++ or in R.

...

Additional arguments passed to internal functions.

Examples

multiRL.model <- multiRL::run_m(
  data = multiRL::TAB[multiRL::TAB[, "Subject"] == 1, ],
  behrule = list(
    cue = c("A", "B", "C", "D"),
    rsp = c("A", "B", "C", "D")
  ),
  colnames = list(
    subid = "Subject", block = "Block", trial = "Trial",
    object = c("L_choice", "R_choice"), 
    reward = c("L_reward", "R_reward"),
    action = "Sub_Choose",
    exinfo = c("Frame", "NetWorth", "RT")
  ),
  params = list(
    free = list(
      alpha = 0.5,
      beta = 0.5
    ),
    fixed = list(
      gamma = 1, 
      delta = 0.1, 
      epsilon = NA_real_,
      zeta = 0
    ),
    constant = list(
      seed = 123,
      Q0 = NA_real_, 
      reset = NA_real_,
      lapse = 0.01,
      threshold = 1,
      bonus = 0,
      weight = 1,
      capacity = 0,
      sticky = 0
    )
  ),
  priors = list(
    alpha = function(x) {stats::dbeta(x, shape1 = 2, shape2 = 2, log = TRUE)}, 
    beta = function(x) {stats::dexp(x, rate = 1, log = TRUE)}
  ),
  settings = list(
    name = "TD",
    mode = "fitting",
    estimate = "MLE",
    policy = "off",
    system = c("RL", "WM")
  ),
  engine = "R"
)
 
multiRL.summary <- multiRL::summary(multiRL.model)
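The priors entries in the example above are ordinary R functions that return log-densities (log = TRUE). They can be inspected on their own, independently of multiRL:

```r
# The same prior functions as in the example, evaluated stand-alone.
log_prior_alpha <- function(x) stats::dbeta(x, shape1 = 2, shape2 = 2, log = TRUE)
log_prior_beta  <- function(x) stats::dexp(x, rate = 1, log = TRUE)

log_prior_alpha(0.5)  # log of the Beta(2, 2) density at 0.5, i.e. log(1.5)
log_prior_beta(1)     # log of the Exp(1) density at 1, i.e. -1
```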
