
mirt (version 0.1-19)

mirt: Full-Information Item Factor Analysis (Multidimensional Item Response Theory)

Description

mirt fits an unconditional maximum likelihood factor analysis model to dichotomous data under the item response theory paradigm. Pseudo-guessing parameters may be included but must be declared as constant, since the estimation of these parameters often leads to unacceptable solutions. Missing values are automatically assumed to be 0.

Usage

mirt(fulldata, nfact, guess = 0, prev.cor = NULL, par.prior = FALSE,
  startvalues = NULL, quadpts = NULL, ncycles = 300, tol = .001, nowarn = TRUE, 
  debug = FALSE, ...)

## S3 method for class 'mirt':
summary(object, rotate='varimax', suppress = 0, digits = 3, ...)

## S3 method for class 'mirt':
coef(object, digits = 3, ...)

## S3 method for class 'mirt':
anova(object, object2, ...)

## S3 method for class 'mirt':
fitted(object, digits = 3, ...)

## S3 method for class 'mirt':
plot(x, type = 'info', npts = 50, rot = list(x = -70, y = 30, z = 10), ...)

## S3 method for class 'mirt':
residuals(object, restype = 'LD', digits = 3, ...)

Arguments

fulldata
a matrix or data.frame consisting of only 0, 1, and NA values to be factor analyzed. If scores have been recorded by the response pattern then they can be recoded to dichotomous format using the key2binary function
nfact
number of factors to be extracted
guess
fixed pseudo-guessing parameters. Can be entered as a single value to assign a global guessing parameter, or as a numeric vector corresponding to each item (see the sketch at the end of this list)
prev.cor
a previously computed correlation matrix to be used to estimate starting values for the EM estimation. Default is NULL
par.prior
a list declaring which items should have assumed prior distributions, and what these prior weights are. The elements slope and int specify a beta prior for the slopes and a normal prior for the intercepts, respectively
rotate
type of rotation to perform after the initial orthogonal parameters have been extracted; see the Details section
startvalues
user declared start values for parameters
quadpts
number of quadrature points per dimension
ncycles
the number of EM iterations to be performed
tol
if the largest change in the EM cycle is less than this value then the EM iterations are stopped early
x
an object of class mirt to be plotted or printed
object
a model estimated from mirt of class mirt
object2
a second model estimated from mirt of class mirt with more estimated parameters than object
suppress
a numeric value indicating which (possibly rotated) factor loadings should be suppressed. Typical values are around .3 in most statistical software. Default is 0 for no suppression
digits
number of significant digits to round to
type
type of plot to view; can be 'curve' for the total test score as a function of two dimensions, or 'info' to show the test information function for two dimensions
npts
number of quadrature points to be used for plotting features. Larger values make plots look smoother
rot
allows rotation of the 3D graphics
restype
type of residuals to be displayed. Can be either 'LD' for a local dependence matrix (Chen & Thissen, 1997) or 'exp' for the expected values for the frequencies of every response pattern
nowarn
logical; suppress warnings from dependent packages?
debug
logical; turn on debugging features?
...
additional arguments to be passed
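
For example, a minimal sketch of the two ways to supply guess, using the LSAT7 data from the Examples section; the guessing values and object names here are purely illustrative:

data(LSAT7)
fulldata <- expand.table(LSAT7)
modg1 <- mirt(fulldata, 1, guess = .2)                      #one global guessing value
modg2 <- mirt(fulldata, 1, guess = rep(.2, ncol(fulldata))) #one value per item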

Convergence

Unrestricted full-information factor analysis is known to have problems with convergence, and some items may need to be constrained or removed entirely to allow for an acceptable solution. Be mindful of the item facility values that are printed with coef since these will be helpful in determining whether a guessing parameter should be removed (item facility value is too close to the guessing parameter) or if an item should be constrained or removed entirely (values too close to 0 or 1). As a general rule, items with facilities greater than .95, or items that are only .05 greater than the guessing parameter, should be considered for removal from the analysis or treated with prior distributions. Also, increasing the number of quadrature points per dimension may help to stabilize the estimation process.
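
For example, a rough screening along the lines suggested above (a sketch only; it assumes the item facility is simply the proportion of 1 responses, and uses a hypothetical fixed guessing value):

facility <- colMeans(fulldata, na.rm = TRUE)   #proportion of 1 responses per item
guess <- .1                                    #hypothetical fixed guessing value
flag <- facility > .95 | facility < .05 | (facility - guess) < .05
fulldata_reduced <- fulldata[, !flag]          #drop flagged items before refitting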

Additional Estimation Functions

For data that do not contain strictly dichotomous responses, users should use the stochastic equivalent polymirt function, which can analyze mixed dichotomous and polytomous data formats as well as estimate the lower asymptote parameters. For a specialized bifactor model see bfactor, and for more general confirmatory item response modeling see the confmirt function.

Details

mirt follows the item factor analysis strategy by marginal maximum likelihood estimation (MML) outlined in Bock and Aitkin (1981) and Bock, Gibbons and Muraki (1988). Nested models may be compared via the approximate chi-squared difference test or by a reduction in AIC/BIC values (comparison via anova).

The general equation used for dichotomous multidimensional item response theory items is a logistic form with a scaling correction of 1.702. This correction is applied to allow comparison to mainstream programs such as TESTFACT (2003). The general IRT equation is $$P(X | \theta; \bold{a}_j, d_j, g_j) = g_j + \frac{1 - g_j}{1 + \exp(-1.702(\bold{a}_j' \theta + d_j))}$$ where $j$ is the item index, $\bold{a}_j$ is the vector of discrimination parameters (i.e., slopes), $\theta$ is the vector of factor scores, $d_j$ is the intercept, and $g_j$ is the pseudo-guessing parameter. To avoid estimation difficulties the $g_j$'s must be specified by the user.

Estimation begins by computing a matrix of quasi-tetrachoric correlations, potentially with Carroll's (1945) adjustment for chance responses. A MINRES factor analysis with nfact factors is then extracted and item parameters are estimated by $a_{ij} = f_{ij}/u_j$, where $f_{ij}$ is the factor loading for the jth item on the ith factor, and $u_j$ is the square root of the factor uniqueness, $\sqrt{1 - h_j^2}$. The initial intercept parameters are determined by calculating the inverse normal of the item facility (i.e., item easiness), $q_j$, to obtain $d_j = q_j / u_j$. Following these initial estimates the model is iterated using the EM estimation strategy with fixed quadrature points. Implicit equation accelerations described by Ramsay (1975) are also added to speed up parameter convergence, and these are adjusted every third cycle.

Factor scores are estimated assuming a normal prior distribution and can be appended to the input data matrix (full.data = TRUE) or displayed in a summary table for all the unique response patterns. summary allows for various rotations available from the GPArotation package (e.g., 'varimax' and 'promax'). Using plot will plot either the test surface function or the test information function for 1 and 2 dimensional solutions. To examine individual item plots use itemplot (although the plink package may be more suitable for IRT graphics), which will also plot information and surface functions. Residuals are computed using the LD statistic (Chen & Thissen, 1997) in the lower diagonal of the matrix returned by residuals, and Cramer's V above the diagonal.
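
As a concrete illustration of the response function and start-value calculations above, here is a minimal R sketch (not the package's internal code); the function names, parameter values, and loadings are purely illustrative:

#probability of a correct response: P = g + (1 - g)/(1 + exp(-1.702(a'theta + d)))
irt_prob <- function(theta, a, d, g = 0) {
  g + (1 - g) / (1 + exp(-1.702 * (sum(a * theta) + d)))
}
irt_prob(theta = c(0, 0), a = c(1.2, .8), d = -.5, g = .1)   #illustrative parameter values

#start values from an initial factor solution: a_j = f_j/u_j, d_j = qnorm(facility_j)/u_j
start_values <- function(loadings, facility) {
  u <- sqrt(1 - rowSums(loadings^2))   #square roots of the uniquenesses
  list(a = loadings / u,               #slopes
       d = qnorm(facility) / u)        #intercepts
}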

References

Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46(4), 443-459.
Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-information item factor analysis. Applied Psychological Measurement, 12(3), 261-280.
Carroll, J. B. (1945). The effect of difficulty and chance success on correlations between items and between tests. Psychometrika, 26, 347-372.
Ramsay, J. O. (1975). Solving implicit equations in psychometric data analysis. Psychometrika, 40(3), 337-360.
Wood, R., Wilson, D. T., Gibbons, R. D., Schilling, S. G., Muraki, E., & Bock, R. D. (2003). TESTFACT 4 for Windows: Test scoring, item statistics, and full-information item factor analysis [Computer software]. Lincolnwood, IL: Scientific Software International.

See Also

expand.table, key2binary, polymirt, confmirt

Examples

#load LSAT section 7 data and compute 1 and 2 factor models
data(LSAT7)
fulldata <- expand.table(LSAT7)

(mod1 <- mirt(fulldata, 1))
summary(mod1)
residuals(mod1)
plot(mod1) #test information function

(mod2 <- mirt(fulldata, 2))
summary(mod2)
coef(mod2)
residuals(mod2)

anova(mod1, mod2) #compare the two models
scores <- fscores(mod2) #save factor score table

###########
data(SAT12)
fulldata <- key2binary(SAT12,
  key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))

#without guessing
#scree(tmat) #looks like a 2 factor solution
mod1 <- mirt(fulldata, 1)
mod2 <- mirt(fulldata, 2)
mod3 <- mirt(fulldata, 3)
anova(mod1,mod2)
anova(mod2, mod3) #negative AIC, 2 factors probably best

#with guessing
mod1g <- mirt(fulldata, 1, guess = .1)
coef(mod1g)
mod2g <- mirt(fulldata, 2, guess = .1)
coef(mod2g)
anova(mod1g, mod2g)
summary(mod2g, rotate='promax')
