The calculation of priorities is straightforward (Qin & Guo, 2025): the priority of an attribute for an item is the regression coefficient obtained from a LASSO logistic regression, with the examinees' marginal attribute mastery probabilities as the independent variables and their responses to the item as the dependent variable. The formula (Tu et al., 2022) is as follows:
$$
\log\left[\frac{P(X_{pi} = 1 \mid \boldsymbol{\Lambda}_{p})}{P(X_{pi} = 0 \mid \boldsymbol{\Lambda}_{p})}\right] =
\operatorname{logit}\left[P(X_{pi} = 1 \mid \boldsymbol{\Lambda}_{p})\right] =
\beta_{i0} + \beta_{i1} \Lambda_{p1} + \ldots + \beta_{ik} \Lambda_{pk} + \ldots + \beta_{iK} \Lambda_{pK}
$$
where \(X_{pi}\) is the response of examinee \(p\) to item \(i\),
\(\boldsymbol{\Lambda}_{p}\) is the vector of marginal attribute mastery probabilities of examinee \(p\)
(available as the return value alpha.P of the CDM function),
\(\beta_{i0}\) is the intercept term, and \(\beta_{ik}\) is the regression coefficient of attribute \(k\) for item \(i\).
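For concreteness, the regression for a single item can be sketched in R with the glmnet package (setting alpha = 1 gives the LASSO penalty). This is a minimal illustration rather than the package's internal implementation; Lambda and X_i are hypothetical placeholders standing in for the examinee-by-attribute matrix of marginal mastery probabilities (such as alpha.P) and the binary responses to item \(i\):

```r
# Minimal sketch of the per-item LASSO logistic regression (hypothetical data).
library(glmnet)

set.seed(123)
P <- 500; K <- 3
# `Lambda` stands in for the P x K matrix of marginal mastery probabilities
# (e.g., the alpha.P return value); here it is simply simulated.
Lambda <- matrix(runif(P * K), nrow = P, ncol = K,
                 dimnames = list(NULL, paste0("A", 1:K)))
# Simulate item responses so that attribute A1 matters most for this item.
eta <- -1 + 2.0 * Lambda[, 1] + 0.5 * Lambda[, 2] + 0 * Lambda[, 3]
X_i <- rbinom(P, size = 1, prob = plogis(eta))

# LASSO (alpha = 1) logistic regression of the responses on the mastery probabilities
fit <- glmnet(x = Lambda, y = X_i, family = "binomial", alpha = 1)
```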
The LASSO-penalized likelihood can be expressed as:
$$l_{lasso}(\boldsymbol{X}_i \mid \boldsymbol{\Lambda}) = l(\boldsymbol{X}_i \mid \boldsymbol{\Lambda}) - \lambda \lVert \boldsymbol{\beta}_i \rVert_1$$
where \(l_{lasso}(\boldsymbol{X}_i \mid \boldsymbol{\Lambda})\) is the penalized likelihood,
\(l(\boldsymbol{X}_i \mid \boldsymbol{\Lambda})\) is the original likelihood,
and \(\lambda\) is the tuning parameter that controls the strength of the penalty:
the larger \(\lambda\) is, the more strongly the coefficients
\(\boldsymbol{\beta}_i = [\beta_{i1}, \ldots, \beta_{ik}, \ldots, \beta_{iK}]\) are shrunk toward zero.
The priorities of the \(K\) attributes for item \(i\) are then defined as the corresponding regression coefficients: \(\boldsymbol{priority}_i = \boldsymbol{\beta}_i = [\beta_{i1}, \ldots, \beta_{ik}, \ldots, \beta_{iK}]\).
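Continuing the hypothetical sketch above, one way to obtain \(\boldsymbol{\beta}_i\) at a specific \(\lambda\) is cross-validation with cv.glmnet; how the actual procedure selects \(\lambda\) may differ, so this only illustrates reading the coefficients off as priorities:

```r
# Choose lambda by cross-validation (illustrative only) and read the
# attribute coefficients off as the priority vector for item i.
cv_fit <- cv.glmnet(x = Lambda, y = X_i, family = "binomial", alpha = 1)
beta_i <- as.matrix(coef(cv_fit, s = "lambda.min"))[-1, 1]  # drop the intercept beta_i0
priority_i <- beta_i
print(priority_i)  # larger coefficients suggest higher-priority attributes
```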