
statConfR (version 0.2.1)

fitConfModels: Fit several static confidence models to multiple participants

Description

The fitConfModels function fits the parameters of several computational models of decision confidence in binary choice tasks, specified in the models argument, to different subsets of one data frame, indicated by different values in the participant column of the data argument. fitConfModels is a wrapper around the function fitConf and calls fitConf for every combination of model in the models argument and sub-data frame of data for each value in the participant column. See Details for more information about the parameters. Parameters are fitted by maximum likelihood estimation, with an initial grid search to find promising starting values for the optimization. In addition, several measures of model fit (negative log-likelihood, BIC, AIC, and AICc) are computed, which can be used for quantitative model evaluation.

Usage

fitConfModels(data, models = "all", nInits = 5, nRestart = 4,
  .parallel = FALSE, n.cores = NULL)

Value

Returns a data.frame with one row for each combination of model and participant. There are columns for the model, the participant ID, and one column for each estimated model parameter (parameters not present in a specific model are filled with NA). Additional information about the fit is provided in further columns:

  • negLogLik (negative log-likelihood of the best-fitting set of parameters),

  • k (number of parameters),

  • N (number of trials),

  • AIC (Akaike Information Criterion; Akaike, 1974),

  • BIC (Bayesian Information Criterion; Schwarz, 1978),

  • AICc (AIC corrected for small samples; Burnham & Anderson, 2002).

If length(models) > 1 or models == "all", there will be three additional columns.
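As a hedged illustration of how these fit measures can be used, the sketch below compares models by summed BIC across participants. The data.frame `fits` stands in for the output of fitConfModels; all numbers are made up for illustration.

```r
# Hypothetical sketch: comparing models by summed BIC across participants,
# assuming `fits` has the structure returned by fitConfModels.
fits <- data.frame(model       = rep(c("SDT", "WEV"), each = 2),
                   participant = rep(1:2, times = 2),
                   BIC         = c(2104, 2087, 2011, 1998))  # made-up values
bic_by_model <- aggregate(BIC ~ model, data = fits, FUN = sum)
bic_by_model  # lower summed BIC indicates the preferred model
```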

Arguments

data

a data.frame where each row is one trial, containing the following variables:

  • diffCond (optional; different levels of discriminability, should be a factor with levels ordered from hardest to easiest),

  • rating (discrete confidence judgments, should be a factor with levels ordered from lowest confidence to highest confidence; otherwise will be transformed to factor with a warning),

  • stimulus (stimulus category in a binary choice task, should be a factor with two levels, otherwise it will be transformed to a factor with a warning),

  • correct (encoding whether the response was correct; should be 0 for incorrect responses and 1 for correct responses),

  • participant (some group ID, most often a participant identifier; the models given in the second argument are fitted to each subset of data determined by the different values of this column)
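The expected input format can be illustrated with a small simulated data.frame. All values below are made up for illustration; only base R is used, and the factor levels follow the ordering requirements listed above.

```r
# A minimal simulated data.frame in the format fitConfModels expects.
set.seed(1)
n <- 8
sim_data <- data.frame(
  diffCond    = factor(sample(c("hard", "easy"), n, replace = TRUE),
                       levels = c("hard", "easy")),   # hardest level first
  stimulus    = factor(sample(c(-1, 1), n, replace = TRUE)),  # two levels
  correct     = sample(c(0, 1), n, replace = TRUE),   # 0/1 coding
  rating      = factor(sample(1:4, n, replace = TRUE),
                       levels = 1:4),                 # lowest confidence first
  participant = rep(1:2, each = n / 2)                # grouping column
)
str(sim_data)
```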

models

character. The different computational models that should be fitted. Models implemented so far: 'WEV', 'SDT', 'GN', 'PDA', 'IG', 'ITGc', 'RCE', 'CAS', 'ITGcm', 'logN', and 'logWEV'. Alternatively, if models = "all" (default), all implemented models will be fitted.

nInits

integer. Number of initial values used for maximum likelihood optimization. Defaults to 5.

nRestart

integer. Number of times the optimization is restarted. Defaults to 4.

.parallel

logical. Whether to parallelize the fitting over models and participants (default: FALSE).

n.cores

integer. Number of cores used for parallelization. If NULL (default), one less than the number of available cores will be used.

Author

Sebastian Hellmann, sebastian.hellmann@tum.de
Manuel Rausch, manuel.rausch@ku.de

Details

The provided data argument is split into subsets according to the values of the participant column. Then for each subset and each model in the models argument, the parameters of the respective model are fitted to the data subset.

The fitting routine first performs a coarse grid search to find promising starting values for the maximum likelihood optimization procedure. Then the best nInits parameter sets found by the grid search are used as the initial values for separate runs of the Nelder-Mead algorithm implemented in optim. Each run is restarted nRestart times.
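The grid-then-optimize strategy described above can be sketched in a few lines of base R. The objective function below is a toy stand-in, not the actual model likelihood; the grid and tolerances are assumptions for illustration.

```r
# Generic sketch of the fitting strategy: a coarse grid search picks promising
# starting values, then Nelder-Mead (via optim) refines them.
negLogLik <- function(p) (p[1] - 1)^2 + (p[2] + 2)^2   # toy stand-in objective

# 1. Coarse grid search over the parameter space
grid  <- expand.grid(p1 = seq(-3, 3, by = 1), p2 = seq(-3, 3, by = 1))
vals  <- apply(grid, 1, negLogLik)
start <- as.numeric(grid[which.min(vals), ])            # best grid point

# 2. Local optimization from the best starting values
fit <- optim(start, negLogLik, method = "Nelder-Mead")
fit$par   # close to the minimum at (1, -2)
```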

Mathematical description of models

The computational models are all based on signal detection theory (Green & Swets, 1966). It is assumed that participants select a binary discrimination response \(R\) about a stimulus \(S\). Both \(S\) and \(R\) can be either -1 or 1. \(R\) is considered correct if \(S=R\). In addition, we assume that there are \(K\) different levels of stimulus discriminability in the experiment, i.e. a physical variable that makes the discrimination task easier or harder. For each level of discriminability, the function fits a different discrimination sensitivity parameter \(d_k\). If there is more than one sensitivity parameter, we assume that the sensitivity parameters are ordered such that \(0 < d_1 < d_2 < ... < d_K\). The models assume that the stimulus generates normally distributed sensory evidence \(x\) with mean \(S\times d_k/2\) and variance of 1. The sensory evidence \(x\) is compared to a decision criterion \(c\) to generate a discrimination response \(R\), which is 1 if \(x\) exceeds \(c\) and -1 otherwise. To generate confidence, it is assumed that a confidence variable \(y\) is compared to another set of criteria \(\theta_{R,i}, i=1,2,...,L-1\), depending on the discrimination response \(R\), to produce an \(L\)-step discrete confidence response. The number of criteria is inferred from the number of levels in the rating column of data. Thus, the parameters shared between all models are:

  • sensitivity parameters \(d_1\),...,\(d_K\) (\(K\): number of difficulty levels)

  • decision criterion \(c\)

  • confidence criteria \(\theta_{-1,1}\), \(\theta_{-1,2}\), ..., \(\theta_{-1,L-1}\), \(\theta_{1,1}\), \(\theta_{1,2}\), ..., \(\theta_{1,L-1}\) (\(L\): number of confidence categories available for confidence ratings)

How the confidence variable \(y\) is computed varies across the different models. The following models have been implemented so far:

Signal detection rating model (SDT)

According to SDT, the same sample of sensory evidence is used to generate response and confidence, i.e., \(y=x\), and the confidence criteria are placed on either side of the decision criterion \(c\) (Green & Swets, 1966).
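These definitions allow the predicted response-conditional rating probabilities under SDT to be computed directly with pnorm. The parameter values below are assumed for illustration, not taken from any fit; a single difficulty level and \(L = 3\) rating categories are used.

```r
# Sketch: joint probabilities P(R = 1, rating = j | S = 1) under the SDT
# rating model, with assumed parameter values.
d     <- 2            # sensitivity (one difficulty level)
crit  <- 0            # decision criterion c
theta <- c(0.5, 1.2)  # confidence criteria above c for R = 1 (L = 3)

# Rating bins on the evidence axis: (crit, theta_1], (theta_1, theta_2], ...
bounds <- c(crit, theta, Inf)
p <- diff(pnorm(bounds, mean = d / 2, sd = 1))
p        # one probability per rating category, given R = 1 and S = 1
sum(p)   # equals P(R = 1 | S = 1) = 1 - pnorm(crit, d/2, 1)
```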

Gaussian noise model (GN)

According to GN, \(y\) is subject to additive noise and assumed to be normally distributed around the decision evidence value \(x\) with a standard deviation \(\sigma\) (Maniscalco & Lau, 2016). \(\sigma\) is an additional free parameter.

Weighted evidence and visibility model (WEV)

WEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features to generate confidence (Rausch et al., 2018). Thus, the WEV model assumes that \(y\) is normally distributed with a mean of \((1-w)\times x+w \times d_k\times R\) and standard deviation \(\sigma\). The standard deviation quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter \(w\) represents the weight that is put on the choice-irrelevant features in the confidence judgment. \(w\) and \(\sigma\) are fitted in addition to the set of shared parameters.
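The WEV generative process described above can be simulated in a few lines of base R. All parameter values are assumed for illustration only.

```r
# Simulation sketch of the WEV confidence variable (assumed parameter values).
set.seed(123)
d <- 2; crit <- 0; w <- 0.4; sigma <- 0.8; S <- 1
x <- rnorm(1e5, mean = S * d / 2, sd = 1)        # sensory evidence
R <- ifelse(x > crit, 1, -1)                     # discrimination response
y <- rnorm(length(x),
           mean = (1 - w) * x + w * d * R,       # evidence + visibility mix
           sd   = sigma)                         # confidence-specific noise
mean(R == S)   # accuracy, approximately pnorm(d / 2)
```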

Post-decisional accumulation model (PDA)

PDA represents the idea of on-going information accumulation after the discrimination choice (Rausch et al., 2018). The parameter \(a\) indicates the amount of additional accumulation. The confidence variable is normally distributed with mean \(x+S\times d_k\times a\) and variance \(a\). For this model the parameter \(a\) is fitted in addition to the shared parameters.

Independent Gaussian model (IG)

According to IG, \(y\) is sampled independently from \(x\) (Rausch & Zehetleitner, 2017). \(y\) is normally distributed with a mean of \(m\times d_k\) and a variance of 1 (the variance is fixed, as it would otherwise trade off with \(m\)). The additional parameter \(m\) represents the amount of information available for the confidence judgment relative to the amount of evidence available for the discrimination decision and can be smaller as well as greater than 1.

Independent truncated Gaussian model: HMetad-Version (ITGc)

According to the version of ITG consistent with the HMetad-method (Fleming, 2017; see Rausch et al., 2023), \(y\) is sampled independently from \(x\) from a truncated Gaussian distribution with a location parameter of \(S\times d_k \times m/2\) and a scale parameter of 1. The Gaussian distribution of \(y\) is truncated in a way that it is impossible to sample evidence that contradicts the original decision: If \(R = -1\), the distribution is truncated to the right of \(c\). If \(R = 1\), the distribution is truncated to the left of \(c\). The additional parameter \(m\) represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for discrimination decisions, and can be smaller as well as greater than 1.

Independent truncated Gaussian model: Meta-d'-Version (ITGcm)

According to the version of the ITG consistent with the original meta-d' method (Maniscalco & Lau, 2012, 2014; see Rausch et al., 2023), \(y\) is sampled independently from \(x\) from a truncated Gaussian distribution with a location parameter of \(S\times d_k \times m/2\) and a scale parameter of 1. If \(R = -1\), the distribution is truncated to the right of \(m\times c\). If \(R = 1\), the distribution is truncated to the left of \(m\times c\). The additional parameter \(m\) represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for the discrimination decision, and can be smaller as well as greater than 1.

Lognormal noise model (logN)

According to logN, the same sample of sensory evidence is used to generate response and confidence, i.e., \(y=x\), just as in SDT (Shekhar & Rahnev, 2021). However, according to logN, the confidence criteria are not assumed to be constant, but instead they are affected by noise drawn from a lognormal distribution. In each trial, \(\theta_{-1,i}\) is given by \(c - \epsilon_i\). Likewise, \(\theta_{1,i}\) is given by \(c + \epsilon_i\). \(\epsilon_i\) is drawn from a lognormal distribution with the location parameter \(\mu_{R,i}=\log(|\overline{\theta}_{R,i}- c|) - 0.5 \times \sigma^{2}\) and scale parameter \(\sigma\). \(\sigma\) is a free parameter designed to quantify metacognitive ability. It is assumed that the criterion noise is perfectly correlated across confidence criteria, ensuring that the confidence criteria are always perfectly ordered. Because \(\theta_{-1,1}\), ..., \(\theta_{-1,L-1}\), \(\theta_{1,1}\), ..., \(\theta_{1,L-1}\) change from trial to trial, they are not estimated as free parameters. Instead, we estimate the means of the confidence criteria, i.e., \(\overline{\theta}_{-1,1}, ..., \overline{\theta}_{-1,L-1}, \overline{\theta}_{1,1}, ..., \overline{\theta}_{1,L-1}\), as free parameters.
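The perfectly correlated criterion noise described above can be sketched by drawing one shared normal deviate per trial and scaling each criterion's lognormal location parameter accordingly. All parameter values are assumed for illustration.

```r
# Sketch of logN criterion noise (assumed values): one shared noise sample
# shifts all confidence criteria on one side while preserving their order.
set.seed(42)
crit  <- 0                       # decision criterion c
sigma <- 0.5                     # criterion-noise parameter
theta_bar <- c(0.4, 1.0, 1.8)    # mean confidence criteria for R = 1

mu  <- log(abs(theta_bar - crit)) - 0.5 * sigma^2  # location parameters
z   <- rnorm(1)                  # ONE deviate shared across criteria
eps <- exp(mu + sigma * z)       # lognormal noise, E[eps] = |theta_bar - c|
theta_trial <- crit + eps        # trial-specific criteria, still ordered
theta_trial
```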

Lognormal weighted evidence and visibility model (logWEV)

logWEV is a combination of logN and WEV proposed by Shekhar and Rahnev (2023). Conceptually, logWEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features (Rausch et al., 2018). The model also assumes that noise affecting the confidence decision variable is lognormal in accordance with Shekhar and Rahnev (2021). According to logWEV, the confidence decision variable \(y\) is equal to \(y^*\times R\). \(y^*\) is sampled from a lognormal distribution with a location parameter of \((1-w)\times x\times R + w \times d_k\) and a scale parameter of \(\sigma\). The parameter \(\sigma\) quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter \(w\) represents the weight that is put on the choice-irrelevant features in the confidence judgment. \(w\) and \(\sigma\) are fitted in addition to the set of shared parameters.

Response-congruent evidence model (RCE)

The response-congruent evidence model represents the idea that observers use all available sensory information to make the discrimination decision, but for confidence judgments, they only consider evidence consistent with the selected decision and ignore evidence against the decision (Peters et al., 2017). The model assumes two separate samples of sensory evidence collected in each trial, each belonging to one possible identity of the stimulus. Both samples of sensory evidence, \(x_{-1}\) and \(x_1\), are sampled from Gaussian distributions with a standard deviation of \(\sqrt{1/2}\). The mean of \(x_{-1}\) is given by \((1 - S) \times 0.25 \times d_k\); the mean of \(x_1\) is given by \((1 + S) \times 0.25 \times d_k\). The sensory evidence used for the discrimination choice is \(x = x_1 - x_{-1}\), which implies that the process underlying the discrimination decision is equivalent to standard SDT. The confidence decision variable is \(y = -x_{-1}\) if the response \(R\) is -1 and \(y = x_1\) otherwise.
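The two evidence samples described above can be simulated to verify that their difference reproduces standard SDT evidence. The variables x_neg and x_pos below stand for the samples belonging to the S = -1 and S = 1 alternatives; all parameter values are assumed for illustration.

```r
# Simulation sketch of the RCE model's evidence samples (assumed values).
set.seed(7)
d <- 2; S <- 1; n <- 1e5
x_neg <- rnorm(n, mean = (1 - S) * 0.25 * d, sd = sqrt(1/2))  # S = -1 channel
x_pos <- rnorm(n, mean = (1 + S) * 0.25 * d, sd = sqrt(1/2))  # S =  1 channel
x <- x_pos - x_neg                 # discrimination evidence ~ N(S*d/2, 1)
R <- ifelse(x > 0, 1, -1)          # discrimination response
y <- ifelse(R == -1, -x_neg, x_pos)  # response-congruent confidence evidence
c(mean(x), sd(x))                  # approximately S * d / 2 and 1
```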

CASANDRE (CAS)

Generation of the primary choice in the CASANDRE model follows standard SDT assumptions. For confidence, the CASANDRE model assumes an additional stage of processing based on the observer's estimate of the perceived reliability of their choices (Boundy-Singer et al., 2023). The confidence decision variable \(y\) is given by \(y = \frac{x}{\hat{\sigma}}\). \(\hat{\sigma}\) represents a noisy internal estimate of the sensory noise. It is assumed that \(\hat{\sigma}\) is sampled from a lognormal distribution with a mean fixed to 1 and a free noise parameter \(\sigma\). Conceptually, \(\sigma\) represents the uncertainty in an individual's estimate of their own sensory uncertainty.

References

Akaike, H. (1974). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, AC-19(6), 716–723. doi: 10.1007/978-1-4612-1694-0_16

Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. Springer.

Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 1, 1–14. doi: 10.1093/nc/nix007

Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. Wiley.

Maniscalco, B., & Lau, H. (2012). A signal detection theoretic method for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430.

Maniscalco, B., & Lau, H. C. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d’, Response- Specific Meta-d’, and the Unequal Variance SDT Model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer. doi: 10.1007/978-3-642-45190-4_3

Maniscalco, B., & Lau, H. (2016). The signal processing architecture underlying subjective reports of sensory awareness. Neuroscience of Consciousness, 1, 1–17. doi: 10.1093/nc/niw002

Rausch, M., Hellmann, S., & Zehetleitner, M. (2018). Confidence in masked orientation judgments is informed by both evidence and visibility. Attention, Perception, and Psychophysics, 80(1), 134–154. doi: 10.3758/s13414-017-1431-5

Rausch, M., Hellmann, S., & Zehetleitner, M. (2023). Measures of metacognitive efficiency across cognitive models of decision confidence. Psychological Methods. doi: 10.31234/osf.io/kdz34

Rausch, M., & Zehetleitner, M. (2017). Should metacognition be measured by logistic regression? Consciousness and Cognition, 49, 291–312. doi: 10.1016/j.concog.2017.02.007

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. doi: 10.1214/aos/1176344136

Shekhar, M., & Rahnev, D. (2021). The Nature of Metacognitive Inefficiency in Perceptual Decision Making. Psychological Review, 128(1), 45–70. doi: 10.1037/rev0000249

Shekhar, M., & Rahnev, D. (2023). How Do Humans Give Confidence? A Comprehensive Comparison of Process Models of Perceptual Metacognition. Journal of Experimental Psychology: General. doi: 10.1037/xge0001524

Peters, M. A. K., Thesen, T., Ko, Y. D., Maniscalco, B., Carlson, C., Davidson, M., Doyle, W., Kuzniecky, R., Devinsky, O., Halgren, E., & Lau, H. (2017). Perceptual confidence neglects decision-incongruent evidence in the brain. Nature Human Behaviour, 1(0139), 1–21. doi: 10.1038/s41562-017-0139

Boundy-Singer, Z. M., Ziemba, C. M., & Goris, R. L. T. (2022). Confidence reflects a noisy decision reliability estimate. Nature Human Behaviour, 7(1), 142–154. doi: 10.1038/s41562-022-01464-x

Examples

# 1. Select two subjects from the masked orientation discrimination experiment
data <- subset(MaskOri, participant %in% c(1:2))
head(data)

# 2. Fit some models to each subject of the masked orientation discrimination experiment
# \donttest{
  # Fitting several models to several subjects takes quite some time
  # (about 10 minutes per model fit per participant on a 2.8GHz processor
  # with the default values of nInits and nRestart).
  # If you want to fit more than just two subjects,
  # we strongly recommend setting .parallel=TRUE
  Fits <- fitConfModels(data, models = c("SDT", "ITGc"), .parallel = FALSE)
# }
