CDMs are statistical models that fully integrate cognitive structure variables: they define the probability of a subject's response to an item by assuming a mechanism of interaction among attributes. In a dichotomously scored test, this is the probability of a correct answer. Depending on how specific or general their assumptions are, CDMs can be divided into reduced CDMs and saturated CDMs.
Reduced CDMs make specific, strong assumptions about the mechanisms by which attributes interact, yielding clearly defined attribute interactions. Representative reduced models include the Deterministic Input, Noisy "And" Gate (DINA) model (Haertel, 1989; Junker & Sijtsma, 2001; de la Torre & Douglas, 2004), the Deterministic Input, Noisy "Or" Gate (DINO) model (Templin & Henson, 2006), the Additive Cognitive Diagnosis Model (A-CDM; de la Torre, 2011), and the reduced Reparameterized Unified Model (R-RUM; Hartz, 2002), among others.
In contrast to reduced models, saturated models impose no strict assumptions about the mechanisms of attribute interaction, and when appropriate constraints are applied they can be transformed into various reduced models (Henson et al., 2008; de la Torre, 2011). Representative saturated models include the Log-Linear Cognitive Diagnosis Model (LCDM; Henson et al., 2009) and the generalized Deterministic Input, Noisy "And" Gate model (G-DINA; de la Torre, 2011).
The LCDM is a saturated CDM formulated fully within the cognitive diagnosis framework. Unlike reduced models that retain only some attribute effects (e.g., main effects only), the LCDM also models the interactions between attributes and therefore rests on more general assumptions about attribute effects. Its probability of a correct response is defined as follows:
$$
P(X_{pi}=1|\mathbf{\alpha}_{l}) =
\frac{\exp(\lambda_{i0} + \mathbf{\lambda}_{i}^{T} \mathbf{h} (\mathbf{q_{i}}, \mathbf{\alpha_{l}}))}
{1 + \exp(\lambda_{i0} + \mathbf{\lambda}_{i}^{T} \mathbf{h}(\mathbf{q_{i}}, \mathbf{\alpha_{l}}))}
$$
$$
\mathbf{\lambda}_{i}^{T} \mathbf{h}(\mathbf{q_{i}}, \mathbf{\alpha_{l}}) =
\sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} q_{ik} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}
\lambda_{ikk'}\alpha_{lk}\alpha_{lk'} q_{ik} q_{ik'} +
\cdots + \lambda_{i12 \cdots K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}q_{ik}
$$
Here, \(P(X_{pi}=1|\mathbf{\alpha}_{l})\) is the probability that a subject with attribute mastery pattern \(\mathbf{\alpha}_{l}\) (\(l=1,2,\cdots,L\), with \(L=2^{K^\ast}\)) answers item \(i\) correctly; \(K^\ast\) denotes the number of attributes in the collapsed q-vector; \(\lambda_{i0}\) is the intercept parameter; and \(\mathbf{\lambda}_{i}=(\lambda_{i1}, \lambda_{i2}, \cdots, \lambda_{i12}, \cdots, \lambda_{i12{\cdots}K^\ast})\) is the vector of attribute effects. Specifically, \(\lambda_{ik}\) is the main effect of attribute \(k\), \(\lambda_{ikk'}\) is the interaction effect between attributes \(k\) and \(k'\), and \(\lambda_{i12{\cdots}K^\ast}\) is the interaction effect of all \(K^\ast\) attributes.
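As a concrete illustration, the LCDM response probability for an item measuring two attributes can be computed directly from the logit expression above. The following sketch uses hypothetical parameter values chosen for illustration only:

```python
import math

def lcdm_prob(alpha, lam0, lam1, lam2, lam12):
    """LCDM correct-response probability for a two-attribute item.

    alpha : (a1, a2) attribute-mastery indicators (0 or 1)
    lam0  : intercept; lam1, lam2 : main effects; lam12 : interaction
    (all numeric parameter values here are hypothetical)
    """
    a1, a2 = alpha
    z = lam0 + lam1 * a1 + lam2 * a2 + lam12 * a1 * a2
    return 1.0 / (1.0 + math.exp(-z))  # inverse-logit transform

# Probability rises as more of the measured attributes are mastered
for alpha in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(alpha, round(lcdm_prob(alpha, -2.0, 1.5, 1.0, 0.5), 3))
```

Note that the interaction term contributes only for subjects mastering both attributes, which is exactly the highest-order term \(\lambda_{i12}\alpha_{l1}\alpha_{l2}\) above.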
The generalized Deterministic Input, Noisy "And" Gate model (G-DINA), proposed by de la Torre (2011), is a saturated model that can be formulated with three link functions: the identity, log, and logit links, defined as follows:
$$P(X_{pi}=1|\mathbf{\alpha}_{l}) =
\delta_{i0} + \sum_{k=1}^{K^\ast}\delta_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}\delta_{ikk'}\alpha_{lk}\alpha_{lk'} +
\cdots + \delta_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}
$$
$$\log(P(X_{pi}=1|\mathbf{\alpha}_{l})) =
v_{i0} + \sum_{k=1}^{K^\ast}v_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}v_{ikk'}\alpha_{lk}\alpha_{lk'} +
\cdots + v_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}
$$
$$\mathrm{logit}(P(X_{pi}=1|\mathbf{\alpha}_{l})) =
\lambda_{i0} + \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}\lambda_{ikk'}\alpha_{lk}\alpha_{lk'} +
\cdots + \lambda_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}
$$
where \(\delta_{i0}\), \(v_{i0}\), and \(\lambda_{i0}\) are the intercept parameters for the three link functions, respectively; \(\delta_{ik}\), \(v_{ik}\), and \(\lambda_{ik}\) are the corresponding main-effect parameters of \(\alpha_{lk}\); \(\delta_{ikk'}\), \(v_{ikk'}\), and \(\lambda_{ikk'}\) are the corresponding interaction-effect parameters between \(\alpha_{lk}\) and \(\alpha_{lk'}\); and \(\delta_{i12{\cdots }K^\ast}\), \(v_{i12{\cdots}K^\ast}\), and \(\lambda_{i12{\cdots}K^\ast}\) are the corresponding interaction-effect parameters of \(\alpha_{l1}{\cdots}\alpha_{lK^\ast}\). It can be observed that when the logit link is adopted, the G-DINA model is equivalent to the LCDM.
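Under the identity link, the G-DINA parameters have a direct probability interpretation: \(\delta_{i0}\) is the correct-response probability for subjects mastering none of the measured attributes, and the remaining \(\delta\) terms are probability increments. The sketch below, with hypothetical parameter values for a two-attribute item, shows how the \(\delta\) parameters can be recovered from the latent-class probabilities:

```python
def gdina_identity(alpha, d0, d1, d2, d12):
    """Identity-link G-DINA probability for a two-attribute item.
    d0: baseline; d1, d2: main-effect increments; d12: interaction increment.
    (All parameter values used below are hypothetical.)"""
    a1, a2 = alpha
    return d0 + d1 * a1 + d2 * a2 + d12 * a1 * a2

# Probabilities for the four latent classes of a two-attribute item
probs = {a: gdina_identity(a, 0.2, 0.25, 0.15, 0.3)
         for a in [(0, 0), (1, 0), (0, 1), (1, 1)]}

# The deltas are contrasts of the class probabilities:
d0 = probs[(0, 0)]                                                   # baseline
d1 = probs[(1, 0)] - probs[(0, 0)]                                   # main effect of attribute 1
d12 = probs[(1, 1)] - probs[(1, 0)] - probs[(0, 1)] + probs[(0, 0)]  # interaction
```

This one-to-one mapping between class probabilities and \(\delta\) parameters is what makes the identity-link G-DINA a saturated model for each item.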
Under the identity link, with all interaction terms constrained to zero, the A-CDM can be formulated as:
$$P(X_{pi}=1|\mathbf{\alpha}_{l}) =
\delta_{i0} + \sum_{k=1}^{K^\ast}\delta_{ik}\alpha_{lk}
$$
Under the log link, the R-RUM can be written as:
$$\log(P(X_{pi}=1|\mathbf{\alpha}_{l})) =
v_{i0} + \sum_{k=1}^{K^\ast}v_{ik}\alpha_{lk}
$$
Under the logit link, the item response function of the Linear Logistic Model (LLM) is given by:
$$\mathrm{logit}(P(X_{pi}=1|\mathbf{\alpha}_{l})) =
\lambda_{i0} + \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk}
$$
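The log-link additive form above is equivalent to the familiar multiplicative parameterization of the R-RUM, in which a baseline probability is discounted by a penalty for each required attribute that is not mastered. The sketch below checks this equivalence numerically for a two-attribute item; the parameter values and the mapping \(\pi^\ast = \exp(v_{i0} + \sum_k v_{ik})\), \(r^\ast_k = \exp(-v_{ik})\) follow from simple algebra on the log-link expression (values are hypothetical):

```python
import math

def rrum_additive(alpha, v0, v):
    """Log-link form: log P = v0 + sum_k v_k * alpha_k."""
    return math.exp(v0 + sum(vk * ak for vk, ak in zip(v, alpha)))

def rrum_classical(alpha, pi_star, r_star):
    """Multiplicative form: P = pi* * prod_k r*_k^(1 - alpha_k)."""
    p = pi_star
    for rk, ak in zip(r_star, alpha):
        p *= rk ** (1 - ak)
    return p

# Hypothetical log-link parameters and their multiplicative counterparts
v0, v = math.log(0.2), [math.log(2.0), math.log(1.5)]
pi_star = math.exp(v0 + sum(v))       # probability when all attributes are mastered
r_star = [math.exp(-vk) for vk in v]  # penalty per missing attribute

for alpha in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert abs(rrum_additive(alpha, v0, v) -
               rrum_classical(alpha, pi_star, r_star)) < 1e-12
```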
In the DINA model, each item is characterized by two key parameters: guessing (\(g\)) and slipping (\(s\)). Under the traditional DINA parameterization, a latent variable \(\eta_{li}\), defined for a subject with attribute mastery pattern \(\mathbf{\alpha}_{l}\) responding to item \(i\), is given by:
$$
\eta_{li}=\prod_{k=1}^{K}\alpha_{lk}^{q_{ik}}
$$
If a subject with attribute mastery pattern \(\mathbf{\alpha}_{l}\) has mastered every attribute required by item \(i\), then \(\eta_{li}=1\); otherwise, \(\eta_{li}=0\). The DINA model's item response function can then be written concisely as:
$$P(X_{pi}=1|\mathbf{\alpha}_{l}) =
(1-s_i)^{\eta_{li}}g_i^{(1-\eta_{li})} =
\delta_{i0}+\delta_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}
$$
where, in the G-DINA reparameterization, \(\delta_{i0}=g_i\) and \(\delta_{i12{\cdots}K^\ast}=1-s_i-g_i\).
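The conjunctive ("and") rule behind \(\eta_{li}\) and the two-parameter response function can be sketched as follows; the q-vector and the guessing and slipping values are hypothetical:

```python
def eta(alpha, q):
    """DINA latent response: 1 iff every attribute required by the item
    (q-vector entries equal to 1) is mastered."""
    prod = 1
    for a, qk in zip(alpha, q):
        prod *= a ** qk  # 0**0 == 1 in Python, so unrequired attributes are ignored
    return prod

def dina_prob(alpha, q, g, s):
    """P(X=1 | alpha) = (1-s)^eta * g^(1-eta).  g, s are hypothetical values."""
    e = eta(alpha, q)
    return (1 - s) ** e * g ** (1 - e)

# Item requiring attributes 1 and 3 (q = (1, 0, 1)), with g = 0.2, s = 0.1:
print(dina_prob((1, 1, 1), (1, 0, 1), 0.2, 0.1))  # masters all required -> 1 - s
print(dina_prob((1, 1, 0), (1, 0, 1), 0.2, 0.1))  # misses attribute 3   -> g
```

The all-or-nothing behavior is visible here: missing any single required attribute drops the probability from \(1-s\) all the way to \(g\).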
In contrast to the DINA model, the DINO model suggests that an individual can correctly respond to
an item if they have mastered at least one of the item's measured attributes. Additionally, like the
DINA model, the DINO model also accounts for parameters related to guessing and slipping. Therefore,
the main difference between the DINO and DINA models lies in their respective \(\eta_{li}\) formulations. For the DINO model:
$$\eta_{li} = 1-\prod_{k=1}^{K}(1 - \alpha_{lk})^{q_{ik}}$$
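The disjunctive ("or") rule can be sketched analogously to the DINA case; the q-vector below is hypothetical:

```python
def eta_dino(alpha, q):
    """DINO latent response: 1 iff at least one attribute required by the
    item (q-vector entries equal to 1) is mastered."""
    prod = 1
    for a, qk in zip(alpha, q):
        prod *= (1 - a) ** qk  # unrequired attributes contribute a factor of 1
    return 1 - prod

# Item requiring attributes 1 and 3 (q = (1, 0, 1)):
# mastering any one required attribute suffices under DINO,
# whereas DINA's conjunctive eta would be 0 for the same pattern.
print(eta_dino((1, 0, 0), (1, 0, 1)))  # one required attribute mastered -> 1
print(eta_dino((0, 1, 0), (1, 0, 1)))  # only an unrequired attribute    -> 0
```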